Mechanisms of motivation–cognition interaction: challenges and opportunities


Cogn Affect Behav Neurosci. Author manuscript; available in PMC 2016 Aug 16.



PMCID: PMC4986920

NIHMSID: NIHMS755802

Todd S. Braver, Marie K. Krug, Kimberly S. Chiew, Wouter Kool, J. Andrew Westbrook, Nathan J. Clement, R. Alison Adcock, Deanna M. Barch, Matthew M. Botvinick, Charles S. Carver, Roshan Cools, Ruud Custers, Anthony Dickinson, Carol S. Dweck, Ayelet Fishbach, Peter M. Gollwitzer, Thomas M. Hess, Derek M. Isaacowitz, Mara Mather, Kou Murayama, Luiz Pessoa, Gregory R. Samanez-Larkin, and Leah H. Somerville, for the MOMCAI group

Abstract

Recent years have seen a rejuvenation of interest in studies of motivation–cognition interactions arising from many different areas of psychology and neuroscience. The present issue of Cognitive, Affective, & Behavioral Neuroscience provides a sampling of some of the latest research from a number of these different areas. In this introductory article, we provide an overview of the current state of the field, in terms of key research developments and candidate neural mechanisms receiving focused investigation as potential sources of motivation–cognition interaction. However, our primary goal is conceptual: to highlight the distinct perspectives taken by different research areas, in terms of how motivation is defined, the relevant dimensions and dissociations that are emphasized, and the theoretical questions being targeted. Together, these distinctions present both challenges and opportunities for efforts aiming toward a more unified and cross-disciplinary approach. We identify a set of pressing research questions calling for this sort of cross-disciplinary approach, with the explicit goal of encouraging integrative and collaborative investigations directed toward them.

Keywords: Cognitive control, Aging, Development, Dopamine, Reward

The construct of motivation has been a central part of psychology since the earliest days of James and Wundt. It is a construct that spans many levels of analysis, complexity, and scope, from cellular and systems neuroscience, to individual differences and social psychology (plus applied domains such as educational and industrial/organizational psychology, and clinical psychology and psychiatry). Recently, interest in scientific studies of motivation has been rejuvenated, arising from three distinct scientific perspectives and research traditions: (a) cognitive, systems, and computational neuroscience; (b) social, affective, and personality psychology; and (c) aging, developmental, and lifespan research. This special issue of Cognitive, Affective, & Behavioral Neuroscience is the direct result of a recent effort to integrate and cross-fertilize these three research streams through a small-group conference sponsored by the National Institute on Aging (with additional support from the Scientific Research Network on Decision Neuroscience and Aging): Mechanisms of Motivation, Cognition, and Aging Interactions (MOMCAI). The issue provides a sampling of the latest research originating from these different traditions, with a number of the contributions coming from the conference participants.

In this introductory article, our goal is primarily conceptual: to define the space of the domain being covered in the Special Issue, as we currently see it. Specifically, we highlight some key unresolved theoretical questions and challenges that need to be addressed by the field, while also highlighting what we believe are some of the most profitable research strategies. Our hope is that this introductory article will serve as something like a roadmap for investigators interested in getting involved with this research area. More importantly, we hope to stimulate cross-talk and the cross-fertilization of ideas among investigators working in disparate research traditions.

The article is organized into five different sections. The first section briefly covers some of the recent developments that have rejuvenated the study of motivation–cognition interactions from different research perspectives. In the second section, we discuss how motivation is defined and studied, with different emphases and foci, in each of these different traditions. Third, we describe some of the relevant dimensions and distinctions within the domain of motivation, which help to further define and taxonomize this domain. The fourth section focuses on the candidate neural mechanisms arising from cognitive neuroscience research that are thought to contribute to motivation–cognition interactions. In the final section, we highlight what in our view are some of the most pressing research questions and “low-hanging fruit” that we hope will be targeted in future investigations within this domain.

Recent developments

Recent research in cognitive, computational, and systems neuroscience has begun to uncover some of the underlying core mechanisms by which reward signals and motivational state changes modulate ongoing neurocognitive processing. In particular, this work suggests that performing tasks in a context with available reward incentives leads to enhancements in specific cognitive processes, such as active maintenance in working memory, preparatory attention, episodic encoding, and decision making (Locke & Braver, 2010; Maddox & Markman, 2010; Pessoa, 2009; Pessoa & Engelmann, 2010; Shohamy & Adcock, 2010). These cognitive effects appear to occur via modulation of specific neural circuits involving the prefrontal cortex (PFC), midbrain dopamine system, and related subcortical structures such as the basal ganglia and hippocampus. The experimental work has been paralleled by theoretical developments involving the reinforcement learning computational framework. This framework postulates that inputs coding the current and predicted motivational values of events are utilized by the brain as learning signals to adjust decision-making biases (K. C. Berridge, 2007; Daw & Shohamy, 2008; McClure, Daw, & Montague, 2003; Niv, Daw, Joel, & Dayan, 2007).

A second stream of research development has come from the social, affective, and personality perspective. In this domain, investigations have focused on the types of goals that individuals select to pursue, and the internal and external influences on goal pursuit. In recent years, two surprising findings have emerged: (1) the explicit motivational value of behavioral goals is often not a strong determinant of whether those goals will be implemented and realized (Gollwitzer, 1999), because nonconscious influences can alter goal pursuit, primarily by modulating the perceived motivational value associated with goal outcomes (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Custers & Aarts, 2010); and (2) goal pursuit follows specific stages (e.g., planning vs. implementing) and time courses, such that goal-directed behavior can increase, decrease, or fluctuate over time, depending on the nature of the goal and the feedback received (Gollwitzer, 2012). This work has spawned a host of experimental paradigms and research strategies for specifying and elucidating the nature of nonconscious effects on goal pursuit (Bargh & Morsella, 2010), effective strategies for emotional regulation and self-control (Kross & Ayduk, 2011), the causes of self-regulatory persistence (Job, Walton, Bernecker, & Dweck, 2013) or depletion and failure (Baumeister & Vohs, 2007), and major sources of personality differences (Sorrentino, 2013).

The role of motivation–cognition interactions has also been emphasized in recent aging and developmental research. On the aging side, a primary focus has been on motivational reprioritization among older adults (Charles, 2010; Heckhausen, Wrosch, & Schulz, 2010). In the socioemotional domain, accumulating studies have suggested that older adults can exhibit better emotion regulation than can younger adults in some contexts, as well as a stable or increased focus toward positive affect (Carstensen et al., 2011; Mather, 2012; Urry & Gross, 2010). Such findings are somewhat puzzling, given that emotion regulation is generally hypothesized to depend on executive control processes and supporting brain systems (e.g., prefrontal cortex) that are well-established as showing age-related decline (Ochsner & Gross, 2005). Specifically, one theoretical account postulates that these effects reflect increased motivation toward emotionally meaningful goals and those associated with positive affect among older adults, as they get closer to the end of their life (Carstensen, 2006; Carstensen, Isaacowitz, & Charles, 1999). Contrasting accounts have also focused on motivational reprioritization, but instead as a specific response to age-related cognitive decline. According to such accounts, older adults will restrict cognitive engagement to (a) activities associated with maintenance or loss prevention, as opposed to growth (Baltes, 1997), or (b) tasks with the greatest implications for self (Hess, in press).

A different emphasis has arisen from the developmental perspective. Here, the focus has been on potentially diverging trajectories in the maturation of cognitive versus affective neural circuits. Specifically, adolescence has been highlighted as a period in which cognitive control processes are especially sensitive to incentive-related motivational influences (Geier, Terwilliger, Teslovich, Velanova, & Luna, 2010; Prencipe et al., 2011; Somerville & Casey, 2010; Steinberg, 2010a; van den Bos, Cohen, Kahnt, & Crone, 2012; Van Leijenhorst et al., 2010). These trajectories diverge once again in older age, with cognitive prefrontal circuits being more affected than emotional prefrontal circuits (Mather, 2012).

Although the body of work examining motivational influences on basic cognition and higher-level goal pursuit is rapidly growing, often there is little cross-talk between neurocognitively focused researchers and those taking social/personality and lifespan perspectives. This is problematic, because all of these perspectives are likely to be required in order to achieve a comprehensive understanding of how motivation impacts psychological and behavioral function. A number of challenges must be overcome to enable such integration. In the next two sections, we outline the key challenges of (a) defining motivation and (b) specifying its relevant dimensions.

Motivational definitions and operationalization

A key challenge for cross-disciplinary integration is to establish a unified definition for motivation and how motivational consequences are operationalized in experimental investigations. Indeed, different research traditions have emphasized distinct aspects of motivation. Here we briefly discuss how motivation has been defined and operationalized from within these different traditions.

Animal learning/systems neuroscience

Historically, studies of motivation in the animal-learning tradition have strongly focused on homeostatic drive accounts, in which physiological deviations from an internal set-point lead to shifts in motivational state (e.g., thirst, hunger) that trigger corrective behaviors (Bindra, 1974; Hull, 1943; Toates, 1986). However, contemporary research has been strongly influenced by the discovery that variations in the magnitude and quality of a reinforcer or the outcome of an instrumental action have behavioral effects that parallel those induced by physiological shifts in motivational state. This finding suggests that such states, rather than inducing drives, motivate behavior by modulating expectancies regarding the outcome (i.e., its incentive value). Because the incentive value of an action outcome must be learned, much of the current research focuses on the learning processes that mediate motivational control over behavior (K. C. Berridge, 2004). Incentive learning is investigated using standard Pavlovian and instrumental conditioning paradigms and assessed in terms of the behavioral, physiological, and neural responses that develop to conditioned stimuli (CSs) previously associated with rewarding or aversive outcomes.

In the domain of systems neuroscience, motivation is construed as having both activational and directional functions (Salamone & Correa, 2012), with the former being related to the nonspecific energization or invigoration of responding (typically assessed in terms of response rate or intensity), and the latter referring to specific response biases (typically assessed in terms of choice or place preferences). Behavior is further considered to be under goal-directed motivational control if it meets two additional criteria: (1) It is sensitive to the current incentive value of the outcome, and (2) it is sensitive to action–outcome contingencies (Dickinson & Balleine, 1995). A canonical paradigm for investigating goal-directed motivational effects is the outcome revaluation procedure (Dickinson, 1985), which is used to demonstrate how a change in the motivational state of the animal (selective satiation, physiological deprivation, aversive conditioning, etc.) can immediately impact Pavlovian responses (e.g., licking) and can also bias instrumental behaviors (e.g., rate of lever pressing), even in the absence of further contact with the reinforcer. Studies are typically conducted with primary reinforcers, such as food, liquid, or sexual stimuli, used as incentives.

Social, affective, and personality psychology

Social and personality psychologists use motivational constructs to describe why a person in a given situation selects one response over another, or makes a given response with stronger intensity or frequency (Bargh, Gollwitzer, & Oettingen, 2010). This conceptualization follows that of animal learning and systems neuroscience studies, in focusing on both the activational and directional functions of motivation. However, in the social, affective, and personality tradition, the primary interest is in how the direction and intensity of motivation arise from the expectations and needs of the individual (Weiner, 1992). A key theoretical framework is the conceptualization of motivation in terms of goals. Here, goals are considered to be mental representations of desired states, which serve as an intermediate construct that actually generates the activational and directional components of motivation (Austin & Vancouver, 1996; R. Custers & Aarts, 2005; Elliot & Fryer, 2008). Additionally, social and personality psychologists use the term motive to refer to higher-order classes of incentives, such as achievement, power, affiliation, and intimacy, that may be intrinsically attractive to an individual (McClelland, 1985b). Motives can exhibit state-like properties, such that they reflect different situational construals, but they can also be dispositions, which are relatively stable and trait-like (Gollwitzer, Barry, & Oettingen, 2011; Schultheiss & Brunstein, 2010).

Gollwitzer (1990) coined the summary terms feasibility and desirability to describe the directional and activational determinants of motivation, respectively. Feasibility reflects expectations of the probability of attaining the desired future outcome, on the basis of experiences in the past (Bandura, 1977; Mischel & Moore, 1973). These expectations can specify whether or not (a) one is capable of performing a certain behavior that is necessary to achieve a desired outcome (i.e., self-efficacy expectations), (b) the performed behavior will lead to the desired outcome (i.e., outcome expectations), or (c) one will reach the desired outcome (general expectations; Oettingen & Mayer, 2002). In contrast, desirability is defined as the estimated value of a specific future outcome (i.e., the perceived attractiveness of the expected short- and long-term consequences, within and outside the person, of having reached the desired future).

The dimension of desirability is often further subdivided in terms of motive strength and incentive value. Motive strength is defined primarily in terms of the individual, and relates to the class of incentives that the individual usually finds attractive. Thus, motive strength typically refers to the long-term likelihood that an individual will engage in actions of any type that would tend to satisfy the motive. In contrast, incentive value is defined in terms of the properties of the stimulus, and specifies the behavioral choices made within a particular domain of action. As an example, high achievement motive strength will cause an individual to see challenging tasks as attractive and seek out opportunities to engage in them. Tasks that provide the opportunity for achievement pride will have high incentive value and will be associated with specific behavioral choices that indicate high effort expenditure and task persistence.

A typical experimental paradigm within social, affective, and personality psychology examines the intensity or frequency of motivated behavioral responses in terms of these three factors: feasibility, motive, and incentive value (McClelland, 1985b). Response measures can be collected via laboratory performance tasks, but they are also commonly acquired through self-report or experience-sampling approaches. Likewise, measures of motive, incentive value, and feasibility (expectancy) can be taken from personality questionnaires, implicit rating tasks (e.g., projective methods, such as the Thematic Apperception Test; Murray, 1943), or experimental manipulations of success likelihood. A canonical finding is the presence of a three-way multiplicative interaction among these factors that predicts response strength (i.e., the frequency or intensity of a given behavior; McClelland, 1985a).
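To make the multiplicative structure concrete, the following sketch computes a toy predicted response strength as the product of motive strength, expectancy (feasibility), and incentive value; the 0–1 scaling of the factors and the bare product are illustrative assumptions, not a specification drawn from McClelland's work.

```python
# Illustrative sketch of the three-way multiplicative account of response strength
# (motive strength x expectancy/feasibility x incentive value; McClelland, 1985a).
# The 0-1 scaling of each factor and the simple product are assumptions made here
# purely for illustration.

def predicted_response_strength(motive: float, expectancy: float, incentive: float) -> float:
    """Return a toy predicted response strength from three motivational factors,
    each assumed to be scaled to the range [0, 1]."""
    for name, value in (("motive", motive), ("expectancy", expectancy), ("incentive", incentive)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1]")
    return motive * expectancy * incentive

# A strong achievement motive predicts little responding when the task seems infeasible,
# the signature of a multiplicative (rather than additive) combination of factors:
print(predicted_response_strength(motive=0.9, expectancy=0.1, incentive=0.8))  # ~0.07
print(predicted_response_strength(motive=0.9, expectancy=0.8, incentive=0.8))  # ~0.58
```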

Cognitive neuroscience

In cognitive neuroscience, motivation is often formulated in terms of neural representations of expected outcomes that predict decisions regarding effort investment. Experimental investigations commonly operationalize motivation in terms of the transient neural responses evoked by extrinsic incentive cues. These cues are used to signal parametrically manipulated rewards (typically monetary) available for instrumental actions, on the assumption that motive strength will covary quantitatively with reward amount. The monetary incentive delay (MID) task is a canonical paradigm for investigating such effects (Knutson, Fong, Adams, Varner, & Hommer, 2001): Pretrial cues indicate the amount of monetary reward to be earned (or penalty avoided) by making a sufficiently fast button-press response to a brief visual target, with the allowable response window typically manipulated to ensure a specific reward rate. This paradigm is used to identify cue-related activation in candidate motivation-linked brain regions (e.g., midbrain dopamine system, nucleus accumbens) that tracks the expected incentive value (i.e., Amount × Success Probability) of the target action. A limitation of these types of paradigms is that they do not directly indicate a motivational effect, because typical behavioral indices of effort investment—accuracy and reinforcement rate—are experimentally controlled (and even reaction time, which is not typically controlled, is almost never considered a dependent measure). Instead, the expected value of an action is often treated as an assumed proxy for motivation in many cognitive neuroscience studies.
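To illustrate how this operationalization works in practice, the sketch below simulates a simplified MID-style trial loop in which expected incentive value is computed as Amount × Success Probability and the response deadline is adapted by a staircase; the cue amounts, response-time distribution, step sizes, and two-thirds target hit rate are assumptions made for illustration rather than parameters from the cited studies.

```python
import random

# Hedged sketch of an MID-style trial loop: the expected incentive value of a cued
# trial is Amount x Success Probability, and the response deadline is adjusted by a
# simple staircase so that the hit rate converges toward a target (here 2/3).
# The cue amounts, RT distribution, step sizes, and target rate are illustrative
# assumptions, not parameters taken from the cited studies.

CUE_AMOUNTS = [0.0, 1.0, 5.0]      # dollars signaled by the pretrial cue
TARGET_HIT_RATE = 2.0 / 3.0        # implied by the -10/+20 ms staircase below

def expected_value(amount: float, p_success: float) -> float:
    return amount * p_success

def run_block(n_trials: int = 20, deadline_ms: float = 350.0) -> None:
    hits = 0
    for t in range(1, n_trials + 1):
        amount = random.choice(CUE_AMOUNTS)
        rt_ms = random.gauss(330.0, 40.0)       # stand-in for the participant's response time
        hit = rt_ms <= deadline_ms
        hits += hit
        # Staircase: tighten the deadline after a hit, relax it after a miss, so the
        # long-run hit rate settles near 20 / (10 + 20) = 2/3.
        deadline_ms += -10.0 if hit else 20.0
        p_hat = hits / t
        print(f"trial {t:02d}  cue ${amount:.2f}  EV = {expected_value(amount, p_hat):.2f}  "
              f"deadline = {deadline_ms:.0f} ms  hit rate = {p_hat:.2f}")

if __name__ == "__main__":
    run_block()
```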

Another approach that has been utilized to decouple effort investment from simple motor behaviors (e.g., response speed/vigor) is to examine how fluctuations in incentive value modulate engagement in effortful cognitive processing. In this case, the motivation triggered by an incentive cue is related not only to the expected value of the action outcome, but also to the efficacy in obtaining it via a targeted neurocognitive process. A canonical example of this approach is the incentivized-encoding paradigm, in which pretrial incentive cues indicate the incentive value associated with successful memorization of an upcoming visual stimulus, with payoffs delivered at a later memory test session (Adcock, Thangavel, Whitfield-Gabrieli, Knutson, & Gabrieli, 2006; Wittmann et al., 2005). This paradigm has been used to demonstrate incentive-related mediation of successful memorization, in terms of the enhanced activation of motivation-linked neural circuits (e.g., dopaminergic pathways) and/or functional connectivity with memory systems (i.e., medial temporal lobe). Similar approaches have been used to target different cognitive processes, such as working memory, task switching, attentional selection, response inhibition, and decision making (Braem, Verguts, Roggeman, & Notebaert, 2012; Krebs, Boehler, Roberts, Song, & Woldorff, 2012; Krebs, Boehler, & Woldorff, 2010; Padmala & Pessoa, 2011; van Steenbergen, Band, & Hommel, 2012; Taylor et al., 2004).

Cognitive aging and development

In cognitive-aging research, motivational constructs have been invoked to explain changes in the selection of cognitive activities, level of engagement, and biases in attention and perceptual processing. A common approach in this tradition is to assess cognitive task selection and engagement as a function of the motivational value associated with that task (see, e.g., Freund, 2006; Germain & Hess, 2007).

A finding of particular interest within this domain is the positivity effect—in which memory and attention in older adults appear to be asymmetrically biased toward affectively positive items or events (i.e., Age × Valence interactions). An influential hypothesis is that positivity biases are the result of chronically active emotion regulation goals—that is, a heightened motivation to focus on the positive and avoid the negative (Mather & Carstensen, 2005; Reed & Carstensen, 2012). A standard experimental approach for testing this hypothesis is to put emotion regulation goals in competition with other goals and compare their expression to unconstrained conditions. The assumption is that age differences in active emotion regulation goals will be less strongly expressed when those goals are competing with experimentally imposed task goals (e.g., remembering items for a subsequent memory test). This approach has been used to demonstrate that (a) larger positivity effects (Age × Valence interactions) are observed during unconstrained conditions, relative to those that provide task-related goals (Reed, Chan, & Mikels, 2014); and, conversely, (b) positivity effects can emerge in younger adults instructed to focus on their emotions (Kennedy, Mather, & Carstensen, 2004; Mather & Johnson, 2000). Another experimental approach to the positivity effect is to focus on the role of cognitive control, under the assumption that control is required to maintain emotion regulation goals in an active and accessible state. The key finding is that positivity effects are reduced in older adults with low cognitive control abilities, or under task conditions with high cognitive control demands (Knight et al., 2007; Mather & Knight, 2005; Petrican, Moscovitch, & Schimmack, 2008). However, it is important to note that, to date, the influence of motivational variables (e.g., motive strength, incentive value) on positivity effects has not been assessed directly.

Motivational constructs have also been invoked in the developmental literature as a primary means of explaining the apparent surge in risky decision making that occurs during adolescence (Somerville & Casey, 2010; Spear, 2000). Both rodent (Douglas, Varlinskaya, & Spear, 2003) and human models (Cauffman et al., 2010; Steinberg et al., 2008; Luciana & Collins, 2012) suggest that reward seeking, novelty seeking, and exploratory behavior peak in adolescence. These behaviors are interpreted in terms of the unique trajectories of brain development that occur during this age period, in which the key mechanisms that modulate dopamine circuitry function are maximally activated, leading to biased dynamic interactions within subcortical–cortical neural circuits. Specifically, these neurodevelopmental changes are thought to up-regulate the signaling strength of motivationally salient information, such that this information exerts a disproportionately strong influence over adolescents’ choices, actions, and regulatory capacity (Somerville & Casey, 2010; Spear, 2000; Steinberg, 2004; see the Pressing Research Questions section for further discussion).

The standard experimental approach to this issue is to elicit motivational context-specificity effects, using the same types of incentive manipulations employed in the cognitive neuroscience literature, to demonstrate that adolescents show adult-like decision making under some circumstances, but selective disruptions under conditions in which salient affective–motivational cues or contexts are present (e.g., Figner, Mackinlay, Wilkening, & Weber, 2009). Current work aims to define the necessary and sufficient features of environmental cues and contexts that lead to heightened approach motivational behavior in adolescents.

Summary

The preceding sections highlighted the differences in how motivation is defined and investigated in various subfields. In animal behavioral neuroscience, the emphasis is on learning and conditioning processes, using primary incentives (food, liquid, and sexual stimuli) and measuring simple behaviors (physiological reflexes, response rates, and stimulus preferences). In social and personality psychology, the emphasis is on the pursuit of temporally extended goals involving high-level incentives (power, achievement, and affiliation) and assessing self-reported beliefs and goal striving behaviors. In cognitive neuroscience and adolescent developmental research, the emphasis is on neural representations of incentive value, typically using monetary rewards, and assessing how these modulate effortful cognitive processing. Finally, in cognitive-aging research, there is an emphasis on emotion–cognition interactions, using affectively valenced stimuli and measuring attentional and memory biases.

This comparison across research domains reveals shortcomings within each subfield. For example, systems and computational neuroscience studies typically focus on very simple goal-directed behaviors, and thus have only rarely addressed why or how motivational factors can influence high-level cognitive processing. In contrast, human cognitive neuroscience studies have tended to use rather narrow experimental manipulations of motivational state (i.e., monetary reward incentives), and thus often fail to exploit the higher degree of experimental control that comes from using biologically relevant incentives, such as food and liquids, that are more easily linked to motivational factors (e.g., physiological shifts, satiation, subjective preferences, etc.; Galvan & McGlennen, 2013; Krug & Braver, in press). Conversely, although social and personality psychologists more commonly explore the types of complex factors that are known to moderate human motivation (e.g., personality traits, affective context, or situational construals), this work does not typically take advantage of the experimental precision and additional leverage afforded by the paradigms and methods employed in cognitive and neuroscience research (e.g., neuroimaging, pharmacological interventions, etc.). Finally, in cognitive-aging and developmental studies, motivational mechanisms of age-related differences are often postulated without being explicitly tested with the types of experimental manipulations employed in either the neuroscience or social/personality literatures. Greater cross-fertilization would be highly fruitful in helping each subfield address its own limitations, by bridging between constructs and paradigms, such that motivation–cognition interactions could be understood at various levels of analysis.

Motivational dimensions and distinctions

A second key challenge to cross-disciplinary integration is to identify the relevant dimensions by which to taxonomize motivational influences on behavior. As will become clear below, the motivational dimensions and distinctions that have been investigated and emphasized vary significantly across disciplinary subfields. As a consequence, researchers working in one subfield may not be aware of the distinctions prominent in another, and as such, may not be sufficiently informed and constrained by them in their own research investigations. The goals of this section are to highlight these distinctions and to show how they challenge theory development and experimentation on the mechanisms of motivation–cognition interactions.

Goal-directed control versus other forms of incentive-based learning

Motivation is most often conceptualized as being goal-directed, in that effort is invested toward instrumental actions that bring about desirable outcomes, in relationship to the incentive value of those outcomes. However, through incentive-based learning mechanisms, stimulus–response associations may also form that are independent of the current incentive value of a goal, as in the case of habits. Habits are important for behavioral control in that they enable efficient and automatized responding that does not require representation of action–outcome associations (Balleine & Killcross, 2006; Dickinson & Balleine, 2000).

Within the animal and systems neuroscience literature, considerable work has been devoted to distinguishing motivational effects on goal-directed versus habitual behavioral control. As we described above, one classic approach is to identify goal-directed behaviors via outcome revaluation procedures, since habitual behaviors have been found to be insensitive to such manipulations (Dickinson & Balleine, 1994). A second test is Pavlovian–instrumental transfer (PIT; Dickinson & Balleine, 1994; Estes, 1943), in which presentation of a Pavlovian cue (i.e., a cue predictive of reward not contingent on instrumental behavior) can enhance instrumental responding, even though the cue had not previously been paired with such instrumental responses. One form of PIT, termed general PIT, enhances instrumental responses even when they are not linked to the Pavlovian outcome (e.g., for a thirsty animal, a water-predicting cue can increase instrumental responding for a food reward; Dickinson & Dawson, 1987). General PIT is thus activational rather than directional, and appears to have a greater influence when behavior is under habitual control (Holland, 2004).

The phenomenon of PIT highlights the motivational effects of Pavlovian stimuli. Pavlovian motivational control has been referred to as incentive salience, which may be reflected in the subjective experience of “wanting” (K. C. Berridge & Robinson, 1998). Incentive salience indexes the motivational power of learned Pavlovian CSs (i.e., those previously associated with appetitive or aversive outcomes) to invigorate behaviors. Incentive salience is wholly motivational, in that it is a function not only of the learned outcome value transferred to the CS, but also of the current physiological state (e.g., hunger, satiety, etc.). Nevertheless, incentive salience is not thought to be goal-directed, in the sense described above. Indeed, Pavlovian responses appear to be hard-wired and reflexive, such that activated behaviors are somewhat inflexible, and may actually be maladaptive (Dayan, Niv, Seymour, & Daw, 2006; Hershberger, 1986). A core feature of incentive salience is that the Pavlovian CSs can sometimes become “motivational magnets,” triggering approach (or avoidance) behaviors directed toward the cue itself (rather than the outcome they signify; K. C. Berridge & Robinson, 1998; K. C. Berridge, Robinson, & Aldridge, 2009).

In more recent years, there has been increasing mutual influence between systems neuroscience studies of animal learning and the computational framework of reinforcement learning. This framework formalizes learning algorithms by which agents maximize expected long-term reward (Sutton & Barto, 1998). Thus, reinforcement learning refers to learning the value of events, actions, and stimuli. An important distinction in this literature has been between model-free versus model-based reinforcement learning, a computational distinction that parallels the habitual versus goal-directed control distinction (Daw, Niv, & Dayan, 2005). In model-free learning, action control is based on the learned (stored or “cached”) incentive values and behavioral responses that are associated with specific stimulus cues (eventually leading to habit formation). In contrast, model-based learning involves a forward simulation in which the incentive value of an action is directly computed using a sequential transition model of its associated outcomes.

Until recently, most reinforcement learning investigations have targeted the computational and neurobiological mechanisms that contribute to model-free processes (Doll, Simon, & Daw, 2012). One of the best-studied reinforcement learning mechanisms is the reward prediction error (RPE), the primary signal that drives CS–UCS learning from reward outcomes. The RPE is now well established to be encoded in the phasic activity of midbrain dopamine neurons and their mesocorticolimbic targets (i.e., the ventral striatum; Schultz & Dickinson, 2000). However, the RPE may also reflect other forms of surprise signal triggered by salient, but not reward-predicting, sensory cues (Bromberg-Martin, Matsumoto, & Hikosaka, 2010; D’Ardenne, Lohrenz, Bartley, & Montague, 2013; Dommett et al., 2005; Lammel, Lim, & Malenka, 2014; Redgrave, Gurney, & Reynolds, 2008). The relationship between the motivational and reinforcement learning functions of dopamine is still a matter of controversy, however (K. C. Berridge, 2012). Most reinforcement learning accounts have neglected motivational variables (Dayan & Balleine, 2002); thus, the proposed RPE-type mechanisms that govern learning of CS+ reward values do not typically incorporate instantaneous effects of change in motivational state, or whether instrumental responding is goal-directed.
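The model-free versus model-based distinction, and the prediction-error update just described, can be stated compactly in code. The sketch below is a minimal illustration under an assumed toy environment (the actions, outcomes, learning rate, and discount factor are not taken from any cited study): the model-free learner adjusts a cached value by a reward prediction error, whereas the model-based learner recomputes an action's value by forward simulation over a transition model, which is why only the latter is immediately sensitive to outcome revaluation.

```python
# Minimal sketch contrasting model-free control (cached values updated by a reward
# prediction error) with model-based control (values recomputed by forward simulation
# over a transition model). The two-action environment, learning rate, and discount
# factor are illustrative assumptions.

ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor

# --- Model-free: cached values adjusted by the reward prediction error (RPE) ---
cached_value = {"cue_A": 0.0, "cue_B": 0.0}

def model_free_update(state: str, reward: float, next_value: float = 0.0) -> float:
    """One temporal-difference update of the cached value; returns the RPE (delta)."""
    delta = reward + GAMMA * next_value - cached_value[state]
    cached_value[state] += ALPHA * delta
    return delta

# --- Model-based: value computed on the fly from a transition/outcome model ---
transition_model = {                     # action -> list of (probability, outcome)
    "press_lever": [(0.8, "food"), (0.2, "nothing")],
    "pull_chain":  [(0.8, "water"), (0.2, "nothing")],
}
outcome_value = {"food": 1.0, "water": 1.0, "nothing": 0.0}

def model_based_value(action: str) -> float:
    """Forward-simulate the action's outcomes under the current outcome values."""
    return sum(p * outcome_value[outcome] for p, outcome in transition_model[action])

# Devaluing food (e.g., by satiation) immediately changes the model-based value,
# whereas the cached, model-free value changes only after further prediction errors:
outcome_value["food"] = 0.0
print(model_based_value("press_lever"))   # 0.0: goal-directed sensitivity to revaluation
print(cached_value["cue_A"])              # unchanged until new learning occurs
```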

Approach versus avoidance motivation

A fundamental distinction within the domain of motivation is whether the motive is to seek out and approach some object or activity, or instead to avoid it—that is, to escape from the object or activity. The affective responses associated with these orientations differ, and the actions to which they relate also differ (Guitart-Masip et al., 2012). The distinction between approach and avoidance motivation must be treated cautiously, however. It tends to be assumed that positive affect is associated with approach and negative affect with avoidance, but that is not always the case. A good deal of evidence indicates that anger and irritability are related to thwarted approach rather than to threat motivation (Carver & Harmon-Jones, 2009; Harmon-Jones, 2003).

The distinction between approach and avoidance motivation has been operationalized in diverse ways across the various subfields engaged in motivational research (Elliot, 2008). In animal and human neuroscience, the distinction is often made in terms of the brain systems involved. For example, a classic distinction is between a mesocorticolimbic dopaminergic behavioral activation system (BAS) associated with approach motivation, and a behavioral inhibition system (BIS), originally localized to the septo-hippocampal system, associated with avoidance motivation (Gray, 1987). In contrast, for personality psychologists, approach and avoidance motivations are typically discussed in terms of stable individual differences in habitual orientations to the world, and assessed in terms of self-report scales (Carver, in press). These individual differences are typically discussed in the framework of reward sensitivity and threat sensitivity (e.g., BIS/BAS scale; Carver & White, 1994), or in related self-regulatory dimensions, such as promotion (focus on advancement and accomplishment) versus prevention (focus on safety and security; Higgins, 1997).

Activation of these systems is commonly elicited with different types of incentives, such as rewards versus punishments, or in humans, monetary gains versus losses. However, this work also indicates more complexity than the intuitive valence-based dimensions. For example, in the cognitive literature, support has been found for a regulatory fit account, in which a promotion focus (either trait-related or an experimentally induced state) will produce better performance when task incentives are framed in terms of monetary gain, rather than avoidance of monetary loss, whereas a prevention focus will show the opposite pattern (Maddox & Markman, 2010).

In the human cognitive neuroscience literature, ongoing debate has focused on whether specific brain regions within motivational networks are valence- or affect-specific. For example, some human neuroimaging studies have found that nucleus accumbens activation is greater on trials incentivized by contingent gains relative to losses in the monetary incentive delay task (Cooper & Knutson, 2008). In other studies, however, both the nucleus accumbens and the ventral tegmental area (VTA) respond during anticipation of both monetary losses and gains (Carter, Macinnes, Huettel, & Adcock, 2009; Choi, Padmala, Spechler, & Pessoa, 2013; Cooper & Knutson, 2008), and some studies report even greater responses under aversive than under approach motivation (Niznikiewicz & Delgado, 2011). This result is paralleled by animal studies in which the nucleus accumbens and VTA have been found to reflect both appetitive (desire) and aversive (dread) motivation, although potentially in anatomically segregated subregions (Bromberg-Martin et al., 2010; Lammel et al., 2012; S. M. Reynolds & Berridge, 2008; Roitman, Wheeler, & Carelli, 2005). Similar complexities arise in regions often associated with aversive reinforcement learning, such as the amygdala and anterior cingulate cortex (Hommer et al., 2003; Shackman et al., 2011), which also show responses to positive valence and involvement in appetitive learning.

The lack of valence specificity in human studies using monetary incentives could reflect the fact that in such studies gains and losses do not present a true valence asymmetry. More specifically, unless participants are endowed on a prior visit and asked to pay back the experimenter, even if they lose on a given trial, they still leave the experimental session with a net gain. Likewise, the loss of a positive incentive is not necessarily equivalent to punishment. The use of primary incentives alleviates this problem, but introduces others. One potentially promising approach has been to utilize selective patterns in the physiological activation of motivational systems as a reliable index of the meaning evoked by the objective incentives. Such distributed patterns have been differentially elicited by task incentive structures; for example, in the incentivized-encoding paradigm, shock threats (aversive motivation) were associated with distinct patterns of activation and connectivity (amygdala/parahippocampal cortex) as compared to those found for monetary rewards (approach motivation; VTA/hippocampus). These findings imply that engagement of distinct neural circuits impacts the types of memory traces formed under distinct motivational conditions (Murty, Labar, & Adcock, 2012), whether or not these differences are best accounted for by valence.

Transient versus sustained motivation

Animal and human neuroscience studies have typically investigated transient motivational effects associated with specific external cues. Motivational influences are not just transitory, however, but can also persist in a tonic fashion across behavioral contexts. Recent findings have suggested the presence of sustained motivational effects, using incentive context paradigms (Jimura, Locke, & Braver, 2010). Here, the incentive value of cognitive task performance is manipulated in a block-wise manner, but also more transiently via orthogonally manipulated trial-specific reward cues. Incentive context has been found to be associated with enhanced task performance and sustained neural activity, but these effects were independent of trial-specific incentive value (Chiew & Braver, 2013; Jimura et al., 2010).

Similarly, physiological investigations, chiefly of the dopamine system, have overwhelmingly focused on transient responses to discrete motivational cues, despite a wealth of pharmacological research in animals, healthy humans, and patient populations that demonstrates a role for dopamine not just in processing and learning about discrete rewards, but also in motivation and sustained motivated behavior (K. C. Berridge, 2007; Salamone & Correa, 2012). Moreover, whereas the anatomy of dopaminergic synapses in the striatum suggests high temporal precision, dopaminergic effects on learning can potentially bridge multiple synapses and phasic events (Lisman, Grace, & Duzel, 2011). Dopaminergic synaptic anatomy outside the striatum, in cortex and in the hippocampus, includes significant distances between terminals and receptors, consistent with modulation over slower, sustained time scales (Shohamy & Adcock, 2010).

One theoretical account explains these sustained motivational effects in terms of incentive context-related changes in tonic dopamine (Niv et al., 2007). According to this account, tonic dopamine signals the long-term average reward rate of the current environment. This signal is thought to lead to a generalized increase in the vigor or intensity of action, by indicating an increased “opportunity cost” of response latency. In other words, when the current environmental context has high incentive value, increasing the speed of all actions (even those not directly rewarded) will typically enable more rewards to be harvested per unit time. As such, sustained motivation may have connections with general PIT effects, which are also thought to produce a more nonspecific invigoration of behavioral responding (Niv et al., 2007). Interestingly, recent evidence from microdialysis has demonstrated tonic dopamine efflux correlated with long-term average reward rates selectively in PFC terminal regions, but not in the nucleus accumbens (St Onge, Ahn, Phillips, & Floresco, 2012). Other work has shown tonic dopamine release, as well as sustained firing of dopamine neurons, under conditions related to anticipatory, sustained motivated behaviors (Fiorillo, Tobler, & Schultz, 2003; Howe, Tierney, Sandberg, Phillips, & Graybiel, 2013; Totah, Kim, & Moghaddam, 2013). These dopamine-mediated effects of sustained motivation on response vigor are just beginning to be examined in humans (Beierholm et al., 2013).
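The opportunity-cost logic of this account can be written down in a few lines. The sketch below follows the general form of the average-reward model (a vigor cost that grows as responding speeds up, plus a time cost proportional to the average reward rate), but the particular constants are illustrative assumptions rather than values from Niv et al. (2007).

```python
import math

# Toy sketch of the opportunity-cost account of response vigor, following the general
# form described by Niv, Daw, Joel, and Dayan (2007): acting with latency tau incurs a
# vigor cost that grows as responding speeds up (c_v / tau) plus an opportunity cost of
# the time spent (average_reward_rate * tau). The specific constants are illustrative
# assumptions, not values from the cited work.

def total_cost(tau: float, c_v: float, avg_reward_rate: float) -> float:
    """Combined vigor cost and opportunity cost of responding with latency tau."""
    return c_v / tau + avg_reward_rate * tau

def optimal_latency(c_v: float, avg_reward_rate: float) -> float:
    """Latency minimizing total_cost: setting the derivative to zero gives sqrt(c_v / R)."""
    return math.sqrt(c_v / avg_reward_rate)

# A richer context (higher tonic/average reward rate) shortens the optimal latency of
# every action, i.e., a generalized invigoration of responding:
for rate in (0.5, 1.0, 2.0):
    print(f"average reward rate = {rate:.1f}  ->  optimal latency = {optimal_latency(1.0, rate):.2f}")
```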

Conscious versus nonconscious motivation

Motivated behavior is often assumed to start with conscious awareness and the formation of explicit intentions. However, as noted above, provocative findings over the last two decades, primarily from within the social and personality literature, have highlighted a distinction between conscious and nonconscious motivation, and the presence of implicit (or nonconscious) goal pursuit, in which motivated behavior is instigated by environmental cues that may not reach conscious awareness (Custers, Eitam, & Bargh, 2012). This idea has led to a research focus that contrasts goal pursuit under conditions in which goals are implicitly versus explicitly activated. The typical methodological approach to implicit goal priming is the presentation of words, pictures, or other stimuli, either in seemingly unrelated tasks preceding the experimental task or by subliminal priming, both of which render conscious awareness of this influence less likely. These priming manipulations increase both the tendency to engage in goal-relevant action patterns and the vigor with which goal pursuit is executed (Custers & Aarts, 2010).

Recent studies have extended this approach to focus on implicit priming of reward cues to motivate cognitive performance. In these studies, the reward that can be earned on a particular trial is cued at its beginning, either clearly visible or presented subliminally. Subliminally presented high-reward cues have been found to induce more cognitive effort expenditure than low-reward cues (Bijleveld, Custers, & Aarts, 2009; Capa, Bustin, Cleeremans, & Hansenne, 2011). A few cognitive neuroscience studies using subliminally presented reward cues have demonstrated that such cues engage subcortical motivation-linked brain regions, such as the ventral pallidum, in proportion to incentive value (Pessiglione et al., 2007; Schmidt et al., 2008). The cognitive performance effects of subliminal reward cues have been found to diverge in some instances from those of clearly visible reward cues, specifically under conditions in which visible rewards lead to a strategic change in behavior. For example, in some cases, whereas subliminal reward cues only boost the expenditure of effort, visible rewards lead to a speed–accuracy trade-off (Bijleveld, Custers, & Aarts, 2010). Likewise, subliminal reward cues modulate cognitive performance even on trials in which rewards are known to be unattainable, whereas such effects are not present for clearly visible reward cues (Zedelius, Veling, & Aarts, 2012). If the effects of subliminal reward cues had been mediated by conscious processes (e.g., perceiving that the trial has high or low incentive value), such a divergence should be absent. Hence, it appears that reward cues can motivate behavior, in the sense that the expenditure of effort is increased, even without people being aware of it (for further discussion, see Bargh & Morsella, 2008).

Extrinsic versus intrinsic motivation

Animal and human neuroscience studies have almost uniformly focused on extrinsic motivation, the neural and behavioral responses to extrinsically provided incentives (e.g., food, money, etc.). However, in social and personality psychology, extrinsic motivation is strongly distinguished from various forms of intrinsic motivation. Intrinsic motivation is defined as engagement in a task for the inherent pleasure and satisfaction derived from the task itself (Deci & Ryan, 1985). Intrinsic motivation appears to drive behavior in a way that is different from, and potentially even in competition with, extrinsic motivation. The most provocative example of this competition is the undermining effect (Deci, 1971; Deci, Koestner, & Ryan, 1999; Ryan, Mims, & Koestner, 1983; also called the “motivation crowding-out effect”: Camerer & Hogarth, 1999; Frey & Jegen, 2001; or “overjustification effect”: Lepper, Greene, & Nisbett, 1973), a phenomenon in which people’s intrinsic motivation is decreased by receiving performance-contingent extrinsic rewards.

The standard approach for demonstrating undermining effects on intrinsic motivation is through free-choice paradigms. Here, willingness to voluntarily engage in a target task is assessed after a preceding phase in which the target task is performed either with or without performance-contingent extrinsic rewards (manipulated across groups). A large number of studies have shown that the extrinsic reward group spends significantly less time than the control group engaging in the target task during the free-choice period, providing evidence that the extrinsic rewards undermine intrinsic motivation for the task (Deci et al., 1999; Tang & Hall, 1995; Wiersma, 1992). Although intrinsic motivation has been mostly neglected in cognitive and neuroscience studies, one study has shown neural evidence of the undermining effect, in that removing performance-contingent extrinsic rewards led to reduced activity in reward motivation regions (anterior striatum, dopaminergic midbrain) during a subsequent unrewarded performance phase, when compared to a never-rewarded control group (Murayama, Matsumoto, Izuma, & Matsumoto, 2010). Other studies using different paradigms, such as those involving interesting trivia questions (Kang et al., 2009), inherently pleasurable music (Salimpoor et al., 2013), and self-determined choice (Leotti & Delgado, 2011; Murayama et al., 2013), have also indicated that intrinsic motivation may be related to the modulation of reward circuitry (e.g., striatum). In the reinforcement learning literature, some researchers have attempted to expand the basic framework to incorporate computational mechanisms of intrinsic motivation (Oudeyer & Kaplan, 2007; Singh, Lewis, Barto, & Sorg, 2010).

Goal setting versus goal striving

In social psychological treatments, the motivated pursuit of goals is often separated into goal-setting and goal-striving phases (Gollwitzer & Moskowitz, 1996; Oettingen & Gollwitzer, 2001). Goal setting refers to the processes and determinants of how a particular goal gets selected for pursuit, whereas goal striving indicates the processes by which a particular goal, once implemented, is used to modulate ongoing behavior. Goal-setting research is aimed at demonstrating that goal selection can be influenced by various factors, such as how the goal is assigned (by self or other), framed (the goal content), and internally represented (the goal structure). Here the approach/avoidance (or, relatedly, promotion versus prevention) motivational distinction becomes especially relevant, in terms of both trait-related individual differences (what goals the individual finds desirable) and situational context manipulations (to minimize failure or maximize success).

Gollwitzer (1990) suggested that whereas goal setting can be characterized in terms of motivational principles, goal striving is best characterized in terms of volitional factors. These include action initiation, persistence, goal-shielding, feedback integration, and disengagement. Accordingly, goal-striving research has primarily focused on the kinds and effectiveness of self-regulatory strategies that are implemented to attain the goal. Surprisingly, increasing the strength of goal activation (intention) may sometimes produce only limited impacts on successful goal attainment (Webb & Sheeran, 2006). Instead, volitional self-regulatory strategies are needed to prepare for potential obstacles standing in the way of attaining the desired future, and to stay on track and pursue the desired future even in the face of difficulties and temptations.

Two key self-regulatory strategies that are a focus of current investigation are mental contrasting and implementation intentions. Mental contrasting allows people to explicitly consider possible resistances and conflicts when trying to reach a desired future (Oettingen, 2012). This means that people mentally juxtapose the desired future (e.g., completing a writing project) with obstacles in present reality (e.g., following an invitation to socialize). Such contrasts are used to project success expectations, so that these can determine the intensity of goal pursuit. Implementation intentions (Gollwitzer, 1999; Gollwitzer & Oettingen, 2011) are a strategy that involves generating “if . . . , then . . .” plans to link a critical situation with an action that is instrumental to reaching a desired future (e.g., “if it is Saturday afternoon and my friends invite me to watch a movie, then I will tell them that I will first finish my writing project”). These plans offer a shortcut to automated responding (i.e., creating ad-hoc habits). In other words, if–then plans allow people to perform automatized responses in the specified critical situation in a fast and effortless way, and without any further conscious intent. It is worth pointing out that the automated nature of implementation intentions suggests a potential similarity to habitual control, as studied in the animal learning literature. However, in implementation intentions, the resilience to shifting motivational states is created not by overlearned associations, but rather by the prospective decision to avoid outcome revaluation.

The goal-setting and goal-striving phases can also be distinguished in terms of their differential “mindsets,” in that goal setting is associated with a deliberative mindset, whereas goal striving is associated with an implemental mindset (Gollwitzer, 2012). The deliberative mindset is characterized by general attentional broadening and a cognitive focus on desirability and feasibility information, whereas the implemental mindset is characterized by strengthened goal representations, upwardly biased assessments of feasibility, and more general attentional narrowing. One methodological approach used to investigate these mindsets and phases is to interrupt participants and have them engage in cognitive tasks while they are in the midst of deciding upon a goal to pursue (deliberative mindset), or immediately after they have chosen one (implemental mindset) (Heckhausen & Gollwitzer, 1987).

Positive versus negative feedback

Feedback is thought to play a fundamental role in goal pursuit, by providing individuals with information on how to evaluate their commitment to goal striving, in terms of whether, what, and how much to invest in their goals (Fishbach, Koo, & Finkelstein, in press). An important distinction has been postulated between the motivational consequences of positive (completed actions, strengths, correct responses) and negative (remaining actions, weaknesses, and incorrect responses) feedback (Fishbach & Dhar, 2005; Fishbach, Dhar, & Zhang, 2006; Kluger & DeNisi, 1996). A key finding is that positive feedback increases motivation (and, thus, goal pursuit) when it is used to evaluate commitment, by signaling that the goal is of high value and attainable. In contrast, negative feedback increases motivation when it is used to evaluate progress: that more effort is needed to accomplish the goal (e.g., cybernetic models; Carver & Scheier, 1998; Higgins, 1987). Indeed, whereas positive feedback for successes can signal sufficient accomplishment, and “licenses” the individual to disengage with the goal (Monin & Miller, 2001), when people think of their goals in cybernetic terms (e.g., “closing a gap”), negative feedback is motivating.

In general, positive feedback should be more effective than negative feedback when goal commitment is lower, because positive feedback increases commitment. Negative feedback, in contrast, will be more effective than positive feedback when goal commitment is already high, because it signals greater discrepancy (i.e., a larger gap to be closed). A promising approach to investigate feedback effects has been to explore how they interact with goal commitment level to influence motivation. For example, Koo and Fishbach (2008) manipulated feedback by emphasizing either completed or missing goal actions (e.g., positive feedback [“you have completed 50% of the work to date”] vs. negative feedback [“you have 50% of the work left to do”]). When the goal commitment level was low, positive feedback on completed actions increased motivation more than negative feedback did. Conversely, when goal commitment was high, the reverse pattern was obtained (greater increase in motivation with negative feedback). It is interesting to note that this perspective on negative feedback as sometimes increasing motivation contrasts with the one typically adopted in the cognitive and neuroscience literatures, in which it is assumed that negative feedback will have an immediate impact in reducing reward value estimates.
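A toy rendering of this crossover makes the predicted interaction explicit; the linear weighting below is an assumption adopted purely for illustration and is not a model from the cited studies.

```python
# Purely illustrative sketch of the commitment x feedback-valence crossover described
# by Koo and Fishbach (2008). The linear weighting is an assumption made for clarity,
# not a model specified in the cited studies.

def motivational_impact(commitment: float, feedback_positive: bool) -> float:
    """commitment in [0, 1]; returns a toy 'increase in motivation' score."""
    if feedback_positive:
        # Positive feedback mainly builds commitment, so it helps most when commitment is low.
        return 1.0 - commitment
    # Negative feedback signals a remaining gap, so it helps most when commitment is high.
    return commitment

for c in (0.2, 0.8):
    print(f"commitment = {c:.1f}  positive feedback = {motivational_impact(c, True):.1f}  "
          f"negative feedback = {motivational_impact(c, False):.1f}")
```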

Summary

As the above sections have detailed, the distinctions and dimensions investigated in studies of motivation vary greatly in terms of disciplinary focus. Some, including distinctions between phases of high-level goal pursuit (e.g., goal setting vs. goal striving), are studied almost exclusively from within one domain. Others, such as the approach/avoidance distinction, have been studied from multiple perspectives. Yet, even in such cases, important differences in emphases are present. For example, approach versus avoidance motivation is typically studied as a stable trait variable in the personality literature, but as a state manipulation in systems and cognitive neuroscience.

Many important challenges remain for cross-disciplinary integration in the study of motivation–cognition interactions. Challenges arise even at the level of defining our terms: Some concepts and phenomena do not currently extend across fields, and those that do sometimes have different usage or implications. Table 1 presents the differential representations and usage of key concepts across fields, including some examples of potential conflicts in usage. Our hope is that, as researchers become more aware of the motivational dimensions and distinctions that are emphasized in other subfields, they will be inspired to initiate further cognitive neuroscience and cross-disciplinary investigations and to bring these concepts into even closer alignment. The explorations into conscious versus unconscious (Pessiglione et al., 2007; Schmidt et al., 2008) and intrinsic versus extrinsic (Murayama et al., 2010) motivation that are beginning to occur from a cognitive neuroscience perspective offer promising examples of these efforts.

Table 1

Do we speak the same language? Disciplines of research on motivation have had substantially different foci and operationalizations, but frank conflicts in terminology and usage are relatively few

For each term, the entries below give, in order, the emphasis in the animal literature, in social and personality psychology, in reinforcement learning, and in human cognitive neuroscience, followed by points of consistency, conflict, or gaps across accounts.

Motivation
  Animal literature: Components: a directional component orients toward the goal state, and an activation component invigorates and energizes action.
  Social & personality psychology: Motives: expectations, needs, and efficacy of the individual, more than quantifiable incentive values; energization and directional components are recognized.
  Reinforcement learning: "Motivated" behavior = action selection/decision making driven by considerations of reward or utility; these are modulated by motivational state.
  Human cognitive neuroscience: Quantification: neural representations of the expected value of future events predict decisions to invest effort.
  Consistency: (1) behavior is driven by reward or utility, not limited to drive reduction; (2) moderated by perceived efficacy; (3) regulates effort investment. Gap: individual motives are usually neglected in the cognitive and animal literatures, whereas quantification is limited in social accounts.

Goal
  Animal literature: Internal subjective states that generate the activational and directional components of motivation.
  Social & personality psychology: Mental representations of desired states, characterized by feasibility and desirability, commitment, and beliefs.
  Reinforcement learning: The implicit, constant goal of the organism is to maximize reward.
  Human cognitive neuroscience: Operationalized as active maintenance of internal representations of desired states.
  Consistency: all postulate desired and predicted states that may differ from the current state. Gap: cognitive neuroscience and the animal literature emphasize transient goals; reinforcement learning and social accounts include the study of common stable goals; personality research focuses on individual differences in goals.

Goal pursuit
  Animal literature: Learning which actions bring about the valued outcome; unlike habit, goal pursuit behavior is sensitive to outcome revaluation.
  Social & personality psychology: Distinctions between goal setting (or mere activation of a goal representation) and goal striving.
  Reinforcement learning: Nested goal hierarchies are a necessary framework for model-based reinforcement learning.
  Human cognitive neuroscience: In some accounts, conceptualized in terms of affective valence states.
  Consistency: all accounts emphasize effort. Gap: not all disciplines recognize subprocesses.

Habit
  Animal literature: Stimulus–response associations unmoored from modification by outcomes or incentive salience.
  Social & personality psychology: Goal–action links causing automatic activation of behavior.
  Reinforcement learning: May depend on prior model-free reinforcement learning.
  Human cognitive neuroscience: Mostly considered in research on addiction; mainly recapitulates the animal literature.
  Consistency: represents a highly automatized link to action. Conflict: the social psychology concept of habit as goal-oriented is at odds with the animal literature account of habit as goal-independent.

Incentive value
  Animal literature: Eventually decoupled by learning from the hedonic impact of incentives.
  Social & personality psychology: Not typically discussed, except in terms of individual motives.
  Reinforcement learning: Defined by Magnitude × Success Probability.
  Human cognitive neuroscience: Defined by Magnitude, Valence × Success Probability.
  Consistency: value includes Magnitude × Success Probability. Gap: computation of incentive value is relatively unexamined in terms of individual motives.

Incentive salience
  Animal literature: Dictated by learned incentive value and current state.
  Social & personality psychology: Not typically discussed, but implied by the emphasis on individual motives.
  Reinforcement learning: The salience concept is ambiguous, sometimes meaning "associability" rather than value.
  Human cognitive neuroscience: Inferred from activation of motivational architecture during anticipation and subsequent instrumental behavior.
  Consistency: quantifies the influence of a stimulus on behavior. Gap: current definitions do not consider individual motives or long-term goals.

Intrinsic vs. extrinsic motivation
  Animal literature: Animal models have found this distinction challenging to explore.
  Social & personality psychology: Intrinsic motivation is engagement in a task for inherent satisfaction; extrinsic reinforcement may undermine this effect.
  Reinforcement learning: Classically, no distinction (recent work has included information structure as a reinforcer).
  Human cognitive neuroscience: Most cognitive neuroscience work has focused on extrinsic motivation, given the focus on quantifiable reinforcement.
  Consistency: most approaches acknowledge but do not manipulate this distinction. Conflict: tension with ideas about a common currency of "reward".

Mechanisms of motivation–cognition interactions

One of the challenges for cognitive, affective, and behavioral neuroscience research is to provide an account of motivation–cognition interaction in terms of the neural mechanisms that enable such interactions to occur. The key challenge is that although “motivation” and “cognition” are usefully specified as distinct psychological entities, it is not clear that they have separable implementations in the brain. Indeed, the neural systems implicated in the internal representation of cognitive goals, and the active maintenance and manipulation of information in working memory (e.g., frontoparietal and frontostriatal circuits), bear a striking similarity to those implicated in the generation of motivated behaviors. Thus, mechanistic accounts of motivation–cognition interactions run the risk of drawing a false dichotomy, if they are couched in terms of a discrete point of interface between two distinct neural systems (Pessoa, 2013).

Despite this caveat, several candidate neural mechanisms have been described that could translate shifts in motivational state into a form that modulates cognitive processing (see Fig. 1). These candidates fall into several broad classes: (1) broadcast neuromodulation, influencing cellular-level physiologic response properties; (2) communication between large-scale brain networks, via either direct pathways or shifts in network topology; and (3) the engagement of specific brain computational hubs that serve as integrative convergence zones. All of these mechanisms implicate some form of neuromodulatory transmission. Of the brain neuromodulatory systems, the one most closely linked to motivation is dopamine. We will therefore first consider the regulation of dopamine release and its effects on its targets as a useful model mechanism for the transmission of motivational signals. We will then move on to discuss network and circuit interactions. Finally, we will highlight specific computational hubs in the striatum, anterior cingulate cortex, and lateral PFC that are thought to play increasingly well-understood roles in motivated cognition.

Fig. 1 Diagram showing candidate neural mechanisms of motivation–cognition interaction. Left figures show broadcast neuromodulation of the dopamine system and anterior cingulate cortex, in medial view (upper), and of lateral prefrontal cortex (PFC) and striatum, in lateral view (lower). At right is the network mode of communication between a frontoparietal network and cortical and subcortical valuation networks. The right panel is from "Embedding Reward Signals Into Perception and Cognition," by L. Pessoa and J. B. Engelmann, 2010, Frontiers in Neuroscience, 4, article 17, Fig. 3. Copyright 2010 by Pessoa and Engelmann. Adapted with permission

Broadcast neuromodulation: dopamine (and other systems)

Widespread projections enable neuromodulatory systems to reach large portions of the cortical surface and subcortical areas, where they can rapidly influence neuronal activity. The broadcast release of global neuromodulators, such as dopamine and norepinephrine, is thus likely to have complex rather than monotonic effects, which may nevertheless act synergistically across multiple levels of functioning. Dopamine, in particular, is known to have a range of effects on cellular-level physiology, including modulating synaptic learning signals (Calabresi, Picconi, Tozzi, & Di Filippo, 2007; Lisman et al., 2011; J. N. Reynolds & Wickens, 2002), altering neuronal excitability (Henze, Gonzalez-Burgos, Urban, Lewis, & Barrionuevo, 2000; Nicola, Surmeier, & Malenka, 2000), enhancing the signal-to-noise ratio (Durstewitz & Seamans, 2008; Thurley, Senn, & Luscher, 2008), and impacting the temporal patterning of neural activity (Walters, Ruskin, Allers, & Bergstrom, 2000). Such effects in subcortical and cortical targets (e.g., frontal cortex) could alter processing efficiency in a number of ways, such as by sharpening cortical tuning (Gamo & Arnsten, 2011), heightening perceptual sensitivity and discrimination (Pleger et al., 2009), enhancing attentional or cognitive control and working memory function (Pessoa & Engelmann, 2010), and enhancing targeted long-term memory encoding (Shohamy & Adcock, 2010).
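One common way in which such signal-to-noise effects have been formalized in computational work is as a change in the gain of a neuron's activation function, with neuromodulatory input steepening the input–output curve. The snippet below is a generic illustration of that idea, using parameter values we chose for clarity; it is not an implementation of any of the models cited above.

```python
import math

def activation(net_input, gain=1.0, bias=-1.0):
    """Logistic activation function; higher gain steepens the response curve."""
    return 1.0 / (1.0 + math.exp(-gain * (net_input + bias)))

# With higher gain (a stand-in for stronger neuromodulatory input), the same
# difference between a weak and a strong input produces a larger difference
# in output, i.e., a higher effective signal-to-noise ratio.
for gain in (0.5, 1.0, 2.0):
    weak, strong = activation(0.5, gain), activation(2.0, gain)
    print(f"gain={gain:3.1f}  weak -> {weak:.2f}  strong -> {strong:.2f}  "
          f"separation={strong - weak:.2f}")
```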

The dynamic changes in neurophysiology that result from release of the neuromodulators implicated in motivation are evident not only cellularly, but also at the circuit level. As one example, functional MRI evidence has shown that reward- versus punishment-motivated learning reconfigures neural circuits, with marked consequences for the sensitivity of memory encoding systems (Adcock et al., 2006; Murty & Adcock, 2013; Murty et al., 2012). These reconfigurations are evident both in systems thought to primarily implement motivation, and in the broader networks devoted to the memory encoding task. For example, during intentional encoding, learning under reward incentives increases connectivity and activation in the VTA and hippocampus, whereas learning under threat engages the amygdala and parahippocampal cortex. These differences in the neural implementation of memory encoding translate into qualitatively different memory traces, because hippocampal encoding embeds items in context to support more flexible representations, whereas parahippocampal encoding selectively emphasizes features of the scene. These findings imply that motivated states can influence the content and form of long-term memory formation, potentially tailoring the memory trace to support future behaviors consistent with that same motivational state.

Network interactions: direct communication and topological reconfiguration

Interactions between motivation and cognition appear to rely on the communication between “task networks” (e.g., the dorsal frontoparietal network engaged during attention tasks) and “valuation networks,” which involve both subcortical regions, such as those in the striatum, and cortical ones, such as orbitofrontal cortex. These interactions are suggested to take place via multiple modes of communication. The first mode involves direct pathways between task and valuation networks. One example is the pathway between orbitofrontal and lateral PFC (Barbas & Pandya, 1989). Another example involves the pathways between the extensively interconnected lateral surface of frontal cortex (including dorsolateral PFC) and cingulate regions (Morecraft & Tanji, 2009). Finally, the caudate is connected with several regions of frontal cortex (including lateral sectors) and parietal cortex, in part via the thalamus (Alexander, DeLong, & Strick, 1986). Thus, direct pathways provide a substrate for cognitive–motivational interactions.

A second mode of communication that might enable motivational modulation of cognitive processing is through a reconfiguration of network topology and structure. Network analysis provides useful tools with which to quantitatively characterize topological relationships within and between brain networks. For example, in one recent study, Kinnison, Padmala, Choi, and Pessoa (2012) compared network properties and relationships in attentional and valuation networks during trials with low versus high reward value. On control trials the two networks were relatively segregated (modular) and locally efficient (high within-network functional connectivity), but on high-reward trials between-network connectivity increased, decreasing the decomposability of the two networks. This finding suggests that a primary consequence of changes in reward motivational value is to increase the coupling and integration between motivational and cognitive brain networks. Such reconfigurations of network topology could potentially arise from neuromodulatory influences, since similar changes have been identified as a consequence of the noradrenergic response to stressors (Hermans et al., 2011) and of dopamine precursor depletion (Carbonell et al., 2014).
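The reduced "decomposability" reported in that study can be illustrated with a simple summary statistic: the mean functional connectivity within each network compared with the mean connectivity between networks. The snippet below applies this statistic to synthetic correlation matrices; the node counts, network labels, and connectivity values are fabricated for illustration and bear no relation to the Kinnison et al. (2012) data.

```python
import numpy as np

rng = np.random.default_rng(0)

def within_between(conn, labels):
    """Mean connectivity within vs. between two labeled networks."""
    same = np.equal.outer(labels, labels)
    off_diag = ~np.eye(len(labels), dtype=bool)
    return conn[same & off_diag].mean(), conn[~same].mean()

# Toy 8-node system: nodes 0-3 form a "task" network, nodes 4-7 a "valuation" network.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
noise = rng.uniform(0.0, 0.2, size=(8, 8))
conn_low = noise + 0.6 * np.equal.outer(labels, labels)          # segregated networks
conn_high = noise + 0.4 * np.equal.outer(labels, labels) + 0.3   # reward: more coupling

for name, conn in (("low reward", conn_low), ("high reward", conn_high)):
    conn = (conn + conn.T) / 2                                    # symmetrize
    within, between = within_between(conn, labels)
    print(f"{name}: within={within:.2f}  between={between:.2f}")
```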

Striatum: linking motivation to cognition and action

Work with behaving experimental animals has long highlighted the importance of the striatum as a nexus mediating between motivation, cognition, and action (Baldo & Kelley, 2007; Belin, Jonkman, Dickinson, Robbins, & Everitt, 2009; Mogenson, Jones, & Yim, 1980). The nucleus accumbens in particular has been suggested as a key node, which may translate dopaminergic incentive value signals into a source of behavioral energization, drive, and the psychological experience of wanting (K. C. Berridge, 2003). This is consistent with animal data suggesting that the nucleus accumbens processes both the hedonic and motivational components of reward, within distinct subregions (S. M. Reynolds & Berridge, 2008). Likewise, neuroanatomical data from non-human primates have revealed an arrangement of spiraling connections between the midbrain and the striatum that seems well suited to subserve a dopamine-mediated mechanism directing information flow from ventromedial to dorsomedial to dorsolateral regions of the striatum (Haber, Fudge, & McFarland, 2000). In turn, a growing consensus holds that the reciprocal circuits between the striatum and frontal cortex function as a gating mechanism that prevents actions (and thoughts) from being released until the contextually and sequentially appropriate points in time (Mink, 1996; O'Reilly & Frank, 2006). Taken together, these accounts suggest that dopaminergic input to the striatum serves to mediate the interaction between motivation, cognition, and action.

Accumulating evidence from genetic and neuroimaging (fMRI and dopamine PET) work with human volunteers and patients supports this hypothesis. For example, a recent dopamine PET study revealed that individual differences in baseline dopamine synthesis capacity in the dorsomedial striatum of healthy young volunteers predicted the effects of reward motivation on Stroop-like task performance (E. Aarts et al., 2014). Moreover, genetic differences in a dopamine transporter polymorphism were found to modulate the effects of reward on fMRI activation of the dorsomedial striatum (caudate nucleus) during conditions of high cognitive control demand (task-switching; E. Aarts, van Holstein, & Cools, 2011). Likewise, it has been found that the ventral striatum exhibits common activation in tracking the effects of incentive value on both physical and mental effort exertion (Schmidt, Lebreton, Cléry-Melin, Daunizeau, & Pessiglione, 2012), and that its response to rewards is discounted as a function of the degree of effort exerted to obtain them (Botvinick, Huffstetler, & McGuire, 2009). These results further suggest that striatal dopamine might be a key mechanism in energizing both cognitive and motor behaviors on the basis of their current motivational value.

Anterior cingulate cortex (ACC): computing the expected value of control

Another key hub is a region of dorsomedial PFC that spans the presupplementary motor area and dorsal ACC. The single-cell electrophysiology literature has suggested that neurons in this region encode multiple aspects of reward, such as proximity to the reward within a behavioral sequence (Shidara & Richmond, 2002), the value of the ongoing task (Amiez, Joseph, & Procyk, 2006; Sallet et al., 2007), the temporal integration of reward history (Kennerley, Walton, Behrens, Buckley, & Rushworth, 2006), and the need to change response strategy (Shima & Tanji, 1998). A general consensus view is that the ACC and adjacent dorsomedial PFC serve an evaluative role in monitoring and adjusting levels of control (Botvinick, 2007; Holroyd & Coles, 2002; Ridderinkhof, Ullsperger, Crone, & Nieuwenhuis, 2004; Rushworth & Behrens, 2008; Shackman et al., 2011), potentially in response to motivational variables (Kouneiher, Charron, & Koechlin, 2009).

These ideas were recently formalized in an integrative account suggesting that the ACC might serve as a critical interface between motivation and executive function, by computing the "expected value of control" (Shenhav, Botvinick, & Cohen, 2013). Here, the imposition of top-down control in cognitive information processing is understood both as yielding potential rewards (e.g., through the enablement of context-appropriate responses) and as carrying intrinsic subjective costs (Inzlicht, Schmeichel, & Macrae, 2014; Kool, McGuire, Rosen, & Botvinick, 2010; Kool, McGuire, Wang, & Botvinick, 2013; Kurzban, Duckworth, Kable, & Myers, 2013; Westbrook, Kester, & Braver, 2013). The decision as to whether executive resources should be invoked, favoring controlled over automatic processing, is based on a cost–benefit analysis, weighing potential payoffs against their attendant costs (e.g., Kool & Botvinick, 2014). On the basis of a wide range of evidence, Shenhav, Botvinick, and Cohen (2013) proposed that the ACC might serve as a critical hub in the relevant cost–benefit calculations, serving to link cognitive control with incentives and other motivational variables.
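The core computation can be written compactly. The expression below is a simplified paraphrase of the expected-value-of-control idea in our own notation; it omits the additional machinery of the full Shenhav et al. (2013) model.

```latex
% Simplified schematic of the expected value of control (our notation):
\mathrm{EVC}(\mathrm{signal},\ \mathrm{state}) \;=\;
  \Big[\sum_{i} \Pr(\mathrm{outcome}_i \mid \mathrm{signal},\ \mathrm{state})
       \cdot \mathrm{Value}(\mathrm{outcome}_i)\Big]
  \;-\; \mathrm{Cost}(\mathrm{signal})
% The control signal actually specified is taken to be the one that maximizes
% this quantity over the candidate signals under consideration.
```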

Lateral PFC: integrating motivation with cognitive goal representations

A wealth of findings from both animal and human neuroscience studies suggests that the lateral PFC might serve as a convergence zone in which motivational and cognitive variables are integrated. The integration of these signals reflects more than just additive contributions of cognitive demands and reward value: it actually enhances functional coding within PFC, such as by maximizing the signal-to-noise ratio, enhancing the discriminability of visuospatial signals, and increasing the amount of information transmitted by PFC neurons (Kobayashi, Lauwereyns, Koizumi, Sakagami, & Hikosaka, 2002; Leon & Shadlen, 1999; Pessoa, 2013; Watanabe, 1996; Watanabe, Hikosaka, Sakagami, & Shirakawa, 2002). The dual mechanisms of control (DMC) framework suggests a specific mechanism by which these motivational influences on lateral PFC activity might modulate cognitive processing (Braver, 2012; Braver & Burgess, 2007; Braver, Paxton, Locke, & Barch, 2009).

According to the DMC framework, cognitive control can be accomplished via either a transient, stimulus-triggered, reactive mode or a tonic and anticipatory (i.e., contextually triggered) proactive mode. Proactive control is the more effective mode, because it enables preconfiguration of the cognitive system for expected task demands. However, it is thought to be metabolically or computationally costly, because it depends upon the active representation and sustained maintenance of task goals in lateral PFC. Thus, it should be preferred under conditions involving reward maximization and/or contexts with high motivational value. Computationally, proactive control is thought to be achieved via dopaminergic inputs to lateral PFC, which enable both appropriate goal updating (via phasic dopamine signals) and stable maintenance (via tonic dopamine release) in accordance with current reward estimates (Braver & Cohen, 2000; O'Reilly, 2006). In contrast, the reactive mode, because it is transient and stimulus-triggered, may be less dopamine-dependent and may also involve a wider network of brain regions.
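A minimal gating sketch can make the proactive mechanism concrete: a context cue is loaded into working memory only when an update gate opens (a stand-in for a phasic dopamine signal) and is then maintained (a stand-in for tonic support) so that it can bias the response to a later probe. The code below is an illustrative caricature under those assumptions, not the implementation used in the cited computational models.

```python
# Caricature of proactive vs. reactive control as gated context maintenance.
# "reward_expected" stands in for an incentive manipulation that favors
# proactive updating of the task goal; all names and values are illustrative.

def run_trial(events, reward_expected):
    context = None
    for kind, value in events:
        if kind == "cue" and reward_expected:
            context = value                     # gate opens: load and maintain the goal
        elif kind == "probe":
            if context is not None:
                return f"proactive response, using maintained context '{context}'"
            return f"reactive response, reprocessing the probe '{value}' at onset"
    return "no probe presented"

trial = [("cue", "name the ink color"), ("probe", "RED written in blue ink")]
print(run_trial(trial, reward_expected=True))
print(run_trial(trial, reward_expected=False))
```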

Several studies have shown that pairing task contexts or trials with high reward value shifts performance toward proactive control, as indicated by both behavioral performance indicators and PFC activity dynamics (Braver, 2012; Chiew & Braver, 2013; Jimura et al., 2010; H. S. Locke & Braver, 2008). Conversely, in nonrewarded contexts, lateral PFC activity has been found to reflect the subjective cost associated with exerting cognitive control (as estimated via both self-report and the tendency to avoid high control conditions; McGuire & Botvinick, 2010). Indeed, the robust findings of motivational influences on PFC activity and performance in tasks with high control demands suggest the possibility that proactive control shifts might be a primary mechanism by which the cognitive effects of motivation are mediated.

Summary

As the above sections indicate, a number of candidate neural mechanisms have been proposed to mediate motivation–cognition interactions (Fig. 1). These range from more global and system-wide mechanisms, such as broadcast neuromodulation and network-level interactions, to the more focal computational hubs. The neuromodulatory effects of dopamine may serve as a unifying mechanism underlying motivational influences on neurocognitive processing across a range of levels. Specifically, as we previously described, dopamine has effects at the cellular level that are consistent with a range of motivation–cognition interactions (changing cortical excitability, signal-to-noise ratio, synaptic plasticity, etc.). Likewise, dopamine serves as a major input and neuromodulator of activation in each of the regions that have been identified as likely convergence hubs for the integration of motivational and cognitive signals: striatum, anterior cingulate cortex, and lateral PFC. Finally, more recent work has suggested that changes in dopamine tone can produce substantial effects on network-level dynamics and topology (e.g., Carbonell et al., 2014). Thus, one important direction for future research will be to determine more rigorously whether these different levels of motivational neural mechanisms can indeed be unified in terms of dopamine neuromodulation.

Nevertheless, it is critical to acknowledge that though most of the neuromodulatory-focused motivational research has targeted dopamine effects, the dopamine system has well-known and strong interactions with other neuromodulatory systems, such as acetylcholine, norepinephrine, serotonin, and adenosine. Thus, these other neuromodulators will need to be properly considered in order to form a complete picture of motivated cognition (Daw, Kakade, & Dayan, 2002; McClure, Gilzenrat, & Cohen, 2006; Salamone et al., 2009; Sarter, Gehring, & Kozak, 2006). Likewise, although we have focused on the set of candidate hubs that have received the most attention in recent research, this set is clearly not exhaustive. Indeed, other potential motivation–cognition hubs have also been noted in the literature, such as the posterior cingulate cortex (Mohanty, Gitelman, Small, & Mesulam, 2008; Small et al., 2005) and anterior insula (Mizuhiki, Richmond, & Shidara, 2012).

Finally, it is clear that our understanding of the neural mechanisms of motivation–cognition interaction will require not only better integration between levels of analysis (neuromodulation, regionally localized effects, network-level interactions), but also the development of neurocomputational frameworks that can accommodate these effects and better link them to cognitive and behavioral functioning. Work in this area is just beginning, but one of the most promising directions may be to expand the reinforcement learning framework to incorporate motivational variables. For example, initial attempts have been put forward to demonstrate the computational mechanisms by which motivation might modulate reward prediction error signals (Zhang, Berridge, Tindell, Smith, & Aldridge, 2009), simple model-free Pavlovian learning (Dayan & Balleine, 2002), and generalized response vigor (Niv et al., 2007). At a higher level, some accounts have utilized hierarchical extensions of reinforcement learning to begin to explain how reward and motivational signals might also be used to prioritize, select, and maintain temporally extended goals and more abstract action plans (Botvinick, 2012; Holroyd & Yeung, 2012). Excitingly, these accounts have put forward initial sketches of the respective roles for the dopamine system, along with striatum, ACC, and lateral PFC in these processes. Thus, more work in this area is clearly needed. Indeed, one of the primary challenges will be to demonstrate whether such mechanisms and computational frameworks can be used to account for the various dimensions and distinctive components of motivational influence that were detailed in earlier sections.
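As one concrete point of contact between these literatures, a standard temporal-difference prediction error can be modulated by a motivational state variable that scales the experienced reward. The scaling rule and parameter values below are our illustrative assumptions, not the specific formulations of the models cited above.

```python
def td_update(value, reward, next_value, motivation=1.0, alpha=0.1, gamma=0.95):
    """One temporal-difference update, with reward scaled by motivational state."""
    delta = motivation * reward + gamma * next_value - value   # prediction error
    return value + alpha * delta, delta

# The same physical reward generates a larger prediction error (and thus faster
# learning and, in vigor-style extensions, more energetic responding) when the
# motivational state assigns it greater value.
for state, motivation in (("sated", 0.5), ("hungry", 2.0)):
    _, delta = td_update(value=0.0, reward=1.0, next_value=0.0, motivation=motivation)
    print(f"{state}: prediction error = {delta:.2f}")
```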

Pressing research questions

The previous sections highlighted some of the conceptual obstacles that challenge an integrative and cross-disciplinary investigation of motivation–cognition interactions, as well as some of the promising candidate neural mechanisms that are the focus of current research. In this section, we discuss what we see as some of the current experimental and methodological challenges. Specifically, we lay out a number of unresolved and puzzling issues that seem central to this domain, but which may represent "low-hanging fruit" ripe for investigation. Indeed, one of the goals of this section is to direct investigators toward these open questions, in the hope of inspiring new research efforts targeted at them.

A concern that is commonly raised in studies of motivation–cognition interactions is whether effects attributed to motivational factors may actually reflect another related, but potentially distinct construct. The most frequent candidates in this regard are affect, attention, arousal, and high-level decision-making strategies. This important and longstanding issue has seen increased experimental focus in recent years, but targeted efforts are still needed. Below, we describe work focused on each of these constructs in turn.

The potential distinction between affect and motivation has been most directly addressed in the animal neuroscience literature, in terms of the distinction between the hedonic impact versus the incentive value of rewards and punishments. The work of Berridge represents a major theoretical influence in this regard, employing pharmacological and lesion manipulations to demonstrate that "wanting" can be dissociated from "liking" (K. C. Berridge et al., 2009). The key methodological approach here is to assess the hedonic impact of food rewards during consumption via orofacial response patterns, while using Pavlovian and instrumental appetitive behaviors to assess incentive effects. This work has suggested that liking and wanting can be dissociated neurally in terms of anatomical substrates (e.g., distinct "hotspots" within the nucleus accumbens and ventral pallidum) and neurotransmitter modulation (GABA and dopamine) (Kringelbach & Berridge, 2009; S. M. Reynolds & Berridge, 2008).

Similarly, a more recent stream of research within human cognitive neuroscience has addressed the dissociability of positive affect and reward motivation (Chiew & Braver, 2011). Positive affect has been shown to have numerous influences on cognitive processing including enhanced creativity, broadened attentional focus, and greater cognitive flexibility (Carver, 2003; Easterbrook, 1959; Fredrickson & Branigan, 2005; Isen, Daubman, & Nowicki, 1987). Here, the critical question is whether these influences can be dissociated neurally and behaviorally from the potentially overlapping effects of reward motivation. Such overlap could occur because the receipt of motivating rewards has positive affective consequences, or because positive affect induces approach motivated behaviors. These types of overlap present considerable methodological challenges. One approach has been to operationalize reward motivation in terms of performance-contingent rewards, whereas positive affect is operationalized in terms of either randomly delivered rewards or incidental, positively valenced stimuli (Braem et al., 2013; Chiew & Braver, 2011; Dreisbach & Fischer, 2012). Another approach has been to induce affect that varies in motivational intensity, on the basis of the theoretical assumption that high motivational intensity, whether for positive or negative affect, produces attentional narrowing, whereas low motivational intensity induces attentional broadening (Harmon-Jones, Gable, & Price, 2013). Supportive evidence has been found with different kinds of stimuli used to induce high- versus low-intensity positive affect (e.g., desire: delicious desserts; amusement: humorous cats; Gable & Harmon-Jones, 2008).

The conceptual similarity between attention and motivation has also been frequently noted (Maunsell, 2004; Pessoa & Engelmann, 2010). The term attention is often used similarly to motivation, in describing how processing resources are allocated, how they can be captured by salient stimulus cues, and how they are influenced by behavioral goals and expectations. However, there are points of conceptual dissociation: Motivation is primarily related to the representation of incentive value and the energization of instrumental behaviors, whereas attention is primarily concerned with mechanisms of perceptual and response selection. A common methodological approach has been to orthogonally manipulate attentional and motivational factors within the same experimental design (Geier et al., 2010; Krebs et al., 2012). In terms of the neural mechanisms of attention and motivation, Pessoa and Engelmann detailed a number of possible different scenarios: (a) full independence, via distinct neural pathways; (b) mediation, in which at least part of motivational influence is mediated by changes in attentional processes and neural systems; and (c) integration, in which there is tight coupling between motivational and attentional brain systems, either in terms of convergence zones (hubs) or via network-wide interactions.

The relationship of motivation to arousal has been less well studied, particularly since arousal is a construct that is often underspecified experimentally. Nevertheless, arousal may imply the energization or invigoration of cognitive processing and behavior; this is also a central component of motivation. Traditionally, arousal has been identified with the locus coeruleus–norepinephrine (LC-NE) system (C. W. Berridge & Waterhouse, 2003), whereas motivational signaling has been conceptualized in terms of dopamine activity (Wise & Rompre, 1989). The relationships between these two neuromodulatory systems, and arousal and motivation more generally, have not been systematically investigated in cognitive neuroscience research. Methodologically, it seems possible to manipulate arousal independently of motivation (e.g., via pharmacological challenge, physical exertion, sleep–wake cycle, stress, etc.), which should enable a targeted examination of the relationship between the two constructs.

A final issue concerns the role of motivation versus high-level decision-making strategies in modulating task performance. This concern relates to the fact that manipulations of performance incentives have been a staple of cognitive research for decades, and have been traditionally used to modulate high-level cognitive strategies (e.g., response bias in signal detection experiments; Green & Swets, 1966). Yet such work is usually not construed in terms of motivation, but rather in decision theoretic terms related to strategic performance optimization. Thus, it has been questioned whether it is necessary or even relevant to appeal to volitional and motivational factors when describing such effects. A variety of methodological approaches can be used to address this issue, such as using symbolic versus real incentives (Hübner & Schlösser, 2010; Krug & Braver, in press), identifying idiosyncratic effects of subjective reward preference (O’Doherty, Buchanan, Seymour, & Dolan, 2006), exploiting stable individual differences related to reward and punishment sensitivity (Engelmann, Damaraju, Padmala, & Pessoa, 2009; Jimura et al., 2010), leveraging differential developmental trajectories of deliberative versus affective–motivational processes (Somerville, Hare, & Casey, 2011), and examining implicit or subliminal rather than explicit incentive cues (Bijleveld et al., 2009; Pessiglione et al., 2007). All of these approaches tend to support the attribution of incentive effects on behavior and brain activity to motivational, rather than strategic factors.

Why does motivation sometimes impair cognitive performance?

In folk psychological terms, being motivated implies being goal-driven. Accordingly, motivation is commonly assumed to have only beneficial and monotonic influences on goal pursuit. In line with this intuition, reward motivation often produces a general enhancing effect on cognition. However, motivation does not always improve task performance, and may in fact impair it under a variety of conditions (Bonner, Hastie, Sprinkle, & Young, 2000; Bonner & Sprinkle, 2002; Camerer & Hogarth, 1999). For example, the term "choking under pressure" has been coined to describe instances in which cognitive performance falters when motivational salience is high (Baumeister & Showers, 1986; Beilock, 2010; Callan & Schweighofer, 2008; Mobbs et al., 2009). The affective, motivational, and cognitive factors that elicit this phenomenon are still not well understood.

One account of choking phenomena is that they stem from increased and distracting anxiety (Callan & Schweighofer, 2008), occurring especially in high-stakes situations (e.g., evaluative tests). Both state and trait anxiety effects have been implicated in processes of overarousal (i.e., inverted-U-shaped curve effects; Yerkes & Dodson, 1908) or in the diversion of attention and working memory toward the source of anxiety (e.g., threat monitoring; Eysenck, Derakshan, Santos, & Calvo, 2007). It is still not clear how to predict which motivational or cognitive factors will elicit these anxiety-type effects. However, recent work using skin conductance as a marker of physiological arousal has found evidence of processes consistent with a noradrenergic contribution to such paradoxical incentive effects (Murty, LaBar, Hamilton, & Adcock, 2011).

A second account suggests that motivation can produce impairing effects directly, even without the elicitation of anxiety or overarousal, simply through heightened activation in motivational brain circuits (Mobbs et al., 2009; Padmala & Pessoa, 2010), and possibly supraoptimal levels of dopamine (E. Aarts et al., 2014). One version holds that high motivation shifts the balance of influence toward an impulsive limbic reward system myopically focused on immediate rewards, and away from a more prospective prefrontal cortical system oriented toward maximizing long-run gains (Loewenstein, Rick, & Cohen, 2008; S. M. McClure, Laibson, Loewenstein, & Cohen, 2004). Another version focuses more directly on interactions between striatal and cortical dopaminergic systems, and argues, in particular, that dopamine has contrasting effects on cognitive control depending on the current task demands, the associated neural systems, and baseline levels of dopamine in these systems (Cools & D'Esposito, 2011; Cools & Robbins, 2004). Accordingly, incentive motivation should enhance processes associated with cognitive flexibility (e.g., task switching) via striatal dopamine effects, but can also, as a consequence, produce impairments associated with increased distractibility and reduced cognitive focus (E. Aarts et al., 2011). However, the fit of this account to experimental findings is somewhat mixed, indicating that further theoretical and experimental work will be needed to provide a more comprehensive understanding of motivational impairment effects.
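The baseline-dependence argument in this latter account is often depicted as an inverted-U: the same incentive-related dopaminergic boost helps individuals who start below the optimum but hurts those already near or beyond it. The quadratic form and numerical values in the sketch below are our own assumptions for illustration, not parameters drawn from the cited studies.

```python
# Generic inverted-U illustration: "performance" is a quadratic function of a
# notional dopamine level, and the same incentive-related boost is applied to
# two different baselines. All functional forms and values are illustrative.

def performance(dopamine_level, optimum=1.0, width=0.6):
    return max(0.0, 1.0 - ((dopamine_level - optimum) / width) ** 2)

BOOST = 0.4   # hypothetical increase in dopamine signaling under incentives
for label, baseline in (("low baseline", 0.5), ("high baseline", 1.0)):
    before, after = performance(baseline), performance(baseline + BOOST)
    change = "improves" if after > before else "impairs"
    print(f"{label}: {before:.2f} -> {after:.2f}  (incentive {change} performance)")
```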

For example, a related, but distinct account is that of regulatory fit (Maddox & Markman, 2010), which suggests that motivational effects on performance depend upon the interaction of three factors: (a) whether approach or avoidance motivation is activated (promotion or prevention focus); (b) the incentive structure of the task (gains or loss related); and (c) the cognitive processes that are required to optimize task performance. Specifically, under conditions in which the current regulatory focus matches the task incentive structure (i.e., promotion focus with gain incentives, or prevention focus with loss incentives), processes associated with cognitive flexibility should be enhanced. In contrast, if there is a regulatory mismatch, task performance can be impaired, particularly when successful performance demands high cognitive flexibility. In one supportive study testing this account, choking effects were observed when participants were put under high performance pressure (prevention focus) with a gain incentive structure (i.e., a regulatory mismatch), but only when the classification learning task relied upon the flexible application of categorization rules (Worthy, Markman, & Maddox, 2009).

How does motivation modulate cognitive effort?

As we described previously, a primary account of motivation–cognition interactions is that motivation influences not only performance in cognitively effortful activities, but also the willingness to engage in them in the first place. Indeed, some accounts suggest that enhanced cognitive performance may actually result from the selection of more effortful strategies, on the assumption that more effortful cognitive strategies are also more effective (e.g., proactive control; Braver, 2012). A role for motivation in the selection of effortful strategies is often neglected, since strategy selection is typically considered in strict decision-theoretic terms of performance optimization. And yet, recent work has confirmed that participants will avoid cognitively effortful tasks, all else being equal (Botvinick, 2007; Kool et al., 2010). Thus, the selection of effortful cognitive strategies should depend on cost–benefit considerations, weighing the incentive benefits of increased performance against the apparent cost of effort.

Several state and trait factors may influence the subjective cost of cognitive effort. In the personality literature, it is well established that individuals show stable, trait-like differences in their "need for cognition," which refers to preferences for effortful cognitive activities (Cacioppo, Petty, Feinstein, & Jarvis, 1996). More recently, experimental paradigms have been developed that enable direct assessment of avoidance rates for cognitive tasks (Botvinick, 2007; Kool et al., 2010). Related paradigms directly estimate the subjective value of cognitive effort in terms of an economic decision (Westbrook et al., 2013): what additional amount of monetary reward will an individual trade away to avoid a high working memory load task in favor of a matched task with lower load? Individuals high in need for cognition were found to trade away less reward than those low in need for cognition. State factors also moderated these effects, which became stronger as working memory load increased, but proportionally smaller as incentive magnitude increased. Such results are consistent with the idea that motivational incentives can influence willingness to expend cognitive effort, yet there is no direct evidence yet that these effects mediate strategy selection within a particular cognitive task.
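The logic of such an economic titration can be sketched in a few lines: the offer attached to the easier task is adjusted until the decision-maker is indifferent between the two options, and the remaining gap between the offers indexes the subjective cost of the added load. The linear cost rule, the choice rule, and the dollar amounts below are illustrative assumptions, not the procedure or estimates of Westbrook et al. (2013).

```python
# Toy titration of the subjective cost of cognitive effort: a simulated
# decision-maker values each option as reward minus an effort cost, and the
# offer for the easier task is adjusted until the two options are equally
# attractive. The linear cost rule and all values are illustrative assumptions.

HARD_OFFER = 2.00      # fixed reward for the high-load task
TRUE_COST = 0.75       # the decision-maker's (hidden) subjective effort cost

def chooses_hard(easy_offer):
    return HARD_OFFER - TRUE_COST > easy_offer   # compare subjective values

def titrate(step=0.05, rounds=40):
    easy_offer = HARD_OFFER / 2
    for _ in range(rounds):
        easy_offer += step if chooses_hard(easy_offer) else -step
    return easy_offer

indifference = titrate()
print(f"indifference point ~ ${indifference:.2f}; "
      f"implied effort cost ~ ${HARD_OFFER - indifference:.2f}")
```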

Motivational value could interact with cognitive effort by means of a number of possible mechanisms. First, motivation might modulate the computation and estimation of effort costs. For example, motivational incentives have been shown to affect the rate of accumulation of a physical effort cost signal, arising in the anterior insula, which predicts decisions about when to rest (Meyniel, Sergent, Rigoux, Daunizeau, & Pessiglione, 2013). Another proposal postulates that effort cost computations and effort-reward functions are directly mediated by dopaminergic mechanisms (Phillips, Walton, & Jhou, 2007). There is support for this idea from the animal literature, but only for physical effort (Breton, Mullett, Conover, & Shizgal, 2013; Salamone & Correa, 2012).
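The accumulation account can be pictured as a cost signal that rises during effort and dissipates during rest, with decisions to stop and to resume made when upper and lower bounds are crossed. The toy parameters below are arbitrary choices for illustration, not estimates from Meyniel et al. (2013).

```python
# Toy accumulate-to-bound picture of effort cost: the cost signal rises while
# effort is exerted and decays while resting; effort stops when the upper bound
# is reached and resumes once the signal falls back to the lower bound.
# Parameter values are arbitrary illustrations, not fitted estimates.

def simulate(rise=1.0, decay=0.6, upper=10.0, lower=2.0, steps=40):
    cost, exerting, timeline = 0.0, True, []
    for _ in range(steps):
        cost += rise if exerting else -decay
        if exerting and cost >= upper:
            exerting = False                  # bound reached: decide to rest
        elif not exerting and cost <= lower:
            exerting = True                   # recovered: decide to resume effort
        timeline.append("E" if exerting else "r")
    return "".join(timeline)

print(simulate())   # alternating runs of effort ("E") and rest ("r")
```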

A related possibility is that reward motivation might decrease effort costs. This decreased effort cost could occur directly, via dopaminergic broadcast effects. As we described above, these could increase the fluency of cognitive processing via a variety of mechanisms (e.g., enhanced signal-to-noise ratio, sharpened cortical tuning, altered neuronal excitability, heightened perceptual sensitivity). Motivation could also decrease effort costs indirectly, by increasing cognitive control, and thus the ability to successfully meet increased effort demands. Such an account would be consistent with proposed mechanisms of motivation–cognition interactions that postulate effects on how and when cognitive control is allocated (e.g., proactive control, expected value of control). This type of account also aligns with the influential ego depletion literature in social psychology (Baumeister, Vohs, & Tice, 2007), which assumes that exertion of control depletes a limited resource (but see Inzlicht et al., 2014; Kurzban et al., 2013), and that motivation compensates for depletion by decreasing people’s tendency to conserve will-power (Muraven & Slessareva, 2003). Likewise, as we discuss further below, it is also consistent with the finding that people’s beliefs about the reward value of cognitive effort have a strong influence on their willingness to engage in it (Blackwell, Trzesniewski, & Dweck, 2007; Dweck, 2012).

A final possibility is that the motivational effects on effortful cognitive engagement occur through a primarily affective route. For example, it is intuitive to think that increasing incentive motivation changes the affective valence of cognitive effort from primarily aversive to primarily rewarding. Indeed, accounts of this flavor have been put forward in the animal learning literature to explain the effects of reinforcing high-effort behaviors in the development of “work ethic” (Clement, Feltus, Kaiser, & Zentall, 2000) or “learned industriousness” (Eisenberger, 1992). A similar type of interpretation is present in accounts from the social-personality literature that assume bidirectional affective–motivational interactions, such that making a cognitive goal a desired outcome increases the positive affect associated with it, and vice versa (H. Aarts, Custers, & Veltkamp, 2008). Such effects would be particularly relevant for studies investigating how to enhance cognitive engagement in relatively hypodopaminergic populations (e.g., healthy aging) and clinical syndromes (e.g., anergia, anhedonia).

How do motivation–cognition interactions change across the lifespan?

Development

A primary goal of neurodevelopmental research is to specify the biological mechanisms that dynamically influence behavior from childhood to adulthood. The adolescent period is especially interesting, in that some aspects of the brain have reached adult-level structure and connectivity, whereas others, including the prefrontal cortex, show developmentally lagged trajectories, not reaching adult volume and connectivity until the late twenties. Lagged development of the prefrontal cortex has been implicated in the still-maturing capacity for adolescents to instantiate impulse control and other forms of self-regulation (Casey, Galvan, & Hare, 2005; Rubia et al., 2006). In contrast, critical components of dopaminergic neurocircuitry, including the ventral striatum and orbitofrontal cortex, are functionally sensitized during adolescence (Andersen, Dumont, & Teicher, 1997; Brenhouse, Sonntag, & Andersen, 2008). Likewise, fMRI studies have demonstrated that the adolescent striatum shows a greater magnitude of response to reward cues relative to both children and adults (Galvan et al., 2006; Somerville et al., 2011) and shows exaggerated prediction error learning signals (Cohen et al., 2010).

As such, the adolescent brain is thought to be in a unique state of heightened incentive salience signaling, paired with an underdeveloped capacity for impulse control (Somerville & Casey, 2010; Steinberg, 2010b). This combination is thought to represent a developmentally normative “imbalance” that could lead to a heightened influence of motivational cues on adolescents’ behavior and decisions. Indeed, studies probing dynamic interactions within striatocortical circuitry have demonstrated adolescent-specific patterns of neural reactivity and heightened functional connectivity that parallel a reduced capacity to withhold behavioral responses to appetitive cues (Somerville et al., 2011). Evolutionarily inspired accounts argue that the adolescent brain might exist in such a state of bias in order to facilitate exploratory behavior to leave safety in search of mates and resources (Spear, 2000).

Despite initial support for this framework, numerous fundamental questions remain. Although a growing number of studies have measured incentive salience responding or cognitive control across development, only a few have manipulated both processes within the same experimental design. Thus, our understanding of how dynamic striatocortical interactions and connectivity might shape selective shifts in adolescent cognitive behavior is still poor. In addition, it is unclear how particular contextual factors that influence adolescent motivated and risky behavior in the real world (such as the presence of peers or affectively arousing contexts) dynamically modulate striatocortical interactions and ultimately, motivated behavior during this complex phase of the lifespan.

Aging

Much cognitive-aging research has focused on identifying the nature of age-related change in specific cognitive processes, as well as understanding the underlying neural mechanisms. Although cognitive and neurobiological factors such as processing speed, working memory, or gray matter volume may be predictive, they clearly do not explain all of the age-related variance in performance (for a review, see Allaire, 2012). Motivation-based accounts are also being increasingly emphasized as relevant for determining age differences in cognitive performance.

Age-related motivational influences may be evident in response to changes in the costs of engaging in cognitive activity. Hess and colleagues (Hess, in press; Hess & Emery, 2012) have argued that such costs increase in later life, and may negatively impact the motivation to engage cognitive resources in support of performance. The resultant shifts in the costs relative to the benefits of engaging in particular activities are hypothesized to result in both reduced overall levels of participation in cognitively demanding activities, and in increased salience of the self-relevance of the task in determining engagement. Some support for this selective-engagement account has been observed experimentally in terms of self-report, physiological, and behavioral indicators regarding the costs of cognitive activity (e.g., Ennis, Hess, & Smith, 2013; Westbrook et al., 2013), as well as with self-reported shifts from a more extrinsic to a more intrinsic motivational focus in later life (Hess, Emery, & Neupert, 2012). These findings have led to the interesting suggestion that some of the age-related variance observed on such tasks may reflect motivational influences, and that the observed age effects may overestimate age differences in underlying ability. However, it will be important for further research to be able to disentangle and quantitatively estimate the distinct contributions of motivational and cognitive performance effects on age differences in behavioral performance.

As we described above, motivational accounts have also been put forward to describe the positivity effect in cognitive aging. One tool for exploring whether such effects reflect a shift in motivational goals is eyetracking, which provides real-time measures of visual attention. Such studies have yielded fairly clear evidence that older adults look less at negative and more at positive stimuli than do their younger counterparts (see, e.g., Isaacowitz, Wadlinger, Goren, & Wilson, 2006). These age differences are magnified when participants come to the task in a bad mood (Isaacowitz, Toner, Goren, & Wilson, 2008). But do these effects reflect motivation? Positive looking behaviors could conceivably arise for motivational reasons (i.e., due to age-related prioritization of emotional goals). However, direct evidence for a motivational explanation of these findings is at this point lacking. It may be that these effects result from age-related changes in goals, but that remains to be tested empirically. Thus, it remains an open question whether age differences in looking and in looking–feeling links really arise from age differences in motivation, and if they do, what specific configurations of goals lead to these patterns. To determine this, studies will be needed that directly assess goals and track individual differences in goal states through looking patterns and mood changes, as well as studies that manipulate goals and put them in competition to determine their effects on looking and mood across different age groups.

In general, theories of cognitive aging are strongly based in descriptions of neurobiological change, whereas none of the current motivational theories of aging integrate neurobiology. One account interprets the age-related positivity effect described above in terms of a potential retuning of amygdala sensitivity from a negative emotional bias in young adulthood toward a relatively more positive emotional bias in older age (Mather et al., 2004) as argued by the “aging-brain” hypothesis (Cacioppo, Berntson, Bechara, Tranel, & Hawkley, 2011; for an opposing view, see Nashiro, Sakaki, & Mather, 2012). Similarly, there is evidence for intact reward motivation and enhancement of positive anticipation relative to negative anticipation in older adults’ self-reported emotional ratings and neural activation in the striatum and anterior insula (Samanez-Larkin et al., 2007). In contrast, a large literature suggests that many of the brain systems implicated in motivational enhancement of cognition decline structurally and functionally with age. For example, studies have shown relatively linear decline in D1-like and D2-like dopamine receptors and dopamine transporters across adulthood (and mixed evidence for age differences in synthesis capacity; Backman, Nyberg, Lindenberger, Li, & Farde, 2006). Some have argued that differential age-related decline of specific neural systems may account for the divergent trajectories of motivational and cognitive functions (e.g., MacPherson, Phillips, & Della Sala, 2002), but there is much debate about these theories, and they are not well supported by larger, cross-sectional and longitudinal studies of brain aging (Driscoll et al., 2009; Raz, Ghisletta, Rodrigue, Kennedy, & Lindenberger, 2010; Walhovd et al., 2011). All of this is further complicated by a wave of seemingly contradictory findings on age differences in sensitivity to positive and negative information in reward-based tasks (e.g., Eppinger, Schuck, Nystrom, & Cohen, 2013; Frank & Kong, 2008; Samanez-Larkin et al., 2007). Thus, testable neurobiologically based models of age differences in motivation and cognition urgently need to be developed.

How do motivational incentives get translated into goals?

Traditional theories (e.g., Ajzen, 1991) assume that the high perceived feasibility and desirability of an imagined future outcome will always result in a strong intention (i.e., a goal) to reach this outcome. Under such conditions, the desired outcome (or incentive) is likely to transform into a goal. Extensive research has revealed, however, that even when the perceived feasibility of an attractive future outcome (i.e., a positive incentive) is high, people do not always commit to striving for it (e.g., imagine the highly attractive and feasible future outcome of becoming a skilled piano player). Thus, a key question remains regarding what factors are critical to ensuring that a highly motivating outcome translates into a change in cognitive goals.

Social psychological research has suggested important roles for both mental contrasting and mindset theory in the translation of an incentivized outcome into a goal commitment, even given high feasibility. Mental contrasting is a process of simulating both the desired future outcome and the potential obstacles to it. This process is thought to activate expectations of overcoming the obstacles: If expectations are high, people will actively pursue (commit to and strive for) reaching the desired future, but if they are low, people will refrain from goal pursuit, either reducing their efforts or curbing them altogether (Oettingen, Pak, & Schnetter, 2001).

According to mindset theory (Gollwitzer, 1990, 2012; also known as the Rubicon model), goal setting is the process of transition from a predecisional deliberative phase into the postdecisional implementation phase. In the predecisional phase, the desirability and feasibility of a wish need to be fully and completely deliberated before the person can move from indecisiveness to decisiveness. Accordingly, when people feel that they have deliberated enough, they feel justified to move (i.e., “cross the Rubicon”) into implementation. Indeed, Gollwitzer, Heckhausen, and Ratajczak (1990) observed that as-yet-undecided people were more likely to make a decision after they had been asked to list likely positive and negative, short-term and long-term consequences of goal attainment.

Although these accounts of goal setting may apply well to the types of abstract, higher-order, and temporally extended outcomes that are typically studied in social and personality psychology, it is not at all clear that they fit well with the types of goals, motivational incentives, and behaviors that are the focus of standard cognitive neuroscience studies. Thus, more work will be needed to understand whether the concepts of mental contrasting and mindsets can be "translated" into more basic experimental domains. It likewise remains unknown what cognitive and neural mechanisms underlie the component processes of mental contrasting and goal setting.

How do beliefs impact motivations?

An important, but often overlooked, area of motivation involves the study of beliefs and their impact. Recent research has shown that people’s beliefs (e.g., about the fixedness or malleability of personal attributes) predict their school achievement, the success of their relationships, the hardiness of their willpower, and their willingness to compromise for peace in the face of conflict (see Dweck, 2012). These beliefs do so by changing the goals that people are motivated to pursue and the ways that they pursue them. Moreover, the same lines of research show that changing people’s beliefs can change these goals and outcomes. Beliefs can change the meaning of the seemingly same experience, determining whether an individual will view challenges as threats or opportunities (e.g., Tomaka, Blascovich, Kibler, & Ernst, 1997), or setbacks as indicating a lack of ability or signaling that a change in effort or strategy is called for (e.g., Blackwell et al., 2007; Walton & Cohen, 2007). Beliefs can change the meaning of effort, from something unpleasant that makes people feel less competent, to something positive that signals learning (Blackwell et al., 2007). These different meanings have profoundly different motivational consequences.

Yet research is only just beginning to uncover the potential cognitive and neural mechanisms by which beliefs impact motivation. For example, in one study, individuals who differed in their beliefs about intelligence showed distinct patterns of behavioral and neural responses to errors in a demanding cognitive task. Specifically, individuals possessing a “growth mindset” (i.e., that intelligence is malleable and can be developed) showed higher accuracy after making an error, and this effect was mediated by a posterror event-related potential component (termed the Pe) thought to reflect error awareness and attentional allocation (Steinhauser & Yeung, 2010). Thus, it is possible to interpret these results as suggesting that beliefs about intelligence alter (a) how task errors are interpreted by the brain and (b) their motivational impact on subsequent performance. Given that such research is only in its infancy, much additional work will be needed to understand the neural mechanisms of belief development and change, and how such processes alter the landscape of motivation–cognition interactions.

Should other motivational constructs receive neuroscience investigation?

In addition to the topics discussed in this article, many motivational constructs and phenomena have been proposed and/or examined in psychology yet have still received little attention in neuroscience (Reeve & Lee, 2012). In fact, psychologists have proposed a number of motivational constructs to explain human behavior (some of which are already discussed in this article), such as intrinsic motivation (Deci & Ryan, 1985), need for achievement (McClelland, Atkinson, Clark, & Lowell, 1976), need to belong (Baumeister & Leary, 1995), self-efficacy (Bandura, 1977), achievement goals (Dweck, 1986), self-enhancement motives (Sedikides & Strube, 1997), and self-consistency motive (Aronson, 1968), just to name a few. These topics may be an important avenue for future research in neuroscience. Yet, they also present an important challenge: Can an integrative account be developed that incorporates the myriad of motivational constructs proposed in psychology into the theoretical frameworks used in neuroscience and/or computational models?

This is a critical question for understanding the complicated nature of motivation. For example, there is a long-standing tradition in psychology of distinguishing intrinsic from extrinsic motivation (Deci & Ryan, 1985), and most psychological research has rested on the assumption that these motivations are distinct, qualitatively different entities. Viewed from the framework of reinforcement learning theory, however, extrinsic and intrinsic motivation may arise from a common reward-processing mechanism that produces motivated behavior, with extrinsic motivation focused on immediate, tangible reward, and intrinsic motivation focused on invisible, future reward (Singh et al., 2010; see also Daw, O’Doherty, Dayan, Seymour, & Dolan, 2006). Likewise, as we described above, intrinsic and extrinsic motivations seem to activate common striatal reward areas (Murayama et al., 2010), suggesting a common neural basis. As another example, a large literature in social psychology has posited that a host of human social behaviors can be interpreted in terms of a fundamental “cognitive consistency motive” (or a “dissonance reduction motive”): a drive to reduce psychologically dissonant cognitions by modifying them to be consistent (Abelson, 1968; Aronson, 1968; Festinger, 1957). However, many cognitive dissonance phenomena have been successfully simulated in a computational model in which dissonance reduction emerges from much simpler cognitive processes (i.e., low-level constraint satisfaction mechanisms; Shultz & Lepper, 1998). Together, these examples suggest that neuroscience- and computationally based theories may be able to provide accounts of complex motivational phenomena in terms of simpler and potentially more unifying mechanisms.
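To make the reinforcement learning framing concrete, the sketch below (not drawn from any study cited here; all names and parameter values are illustrative assumptions) shows a tabular Q-learning agent whose learning signal is simply the sum of an extrinsic reward delivered by a toy environment and a count-based intrinsic “novelty bonus.” Under this framing, both motivational sources feed the same value-update mechanism, in the spirit of the common reward-processing account described above.

    import random
    from collections import defaultdict

    # Minimal illustrative sketch: a tabular Q-learner on a 5-state chain.
    # The learning signal is one scalar that sums an extrinsic reward from
    # the environment and an intrinsic novelty bonus based on visit counts,
    # so both kinds of "motivation" drive the same update rule.

    N_STATES = 5            # states 0..4; reaching state 4 yields extrinsic reward
    ACTIONS = (-1, +1)      # move left or right along the chain
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    INTRINSIC_SCALE = 0.5   # weight on the novelty bonus (illustrative value)

    q_values = defaultdict(float)    # (state, action) -> estimated value
    visit_counts = defaultdict(int)  # state -> number of visits so far

    def intrinsic_bonus(state):
        """Count-based novelty bonus: rarely visited states feel 'rewarding'."""
        return INTRINSIC_SCALE / (1 + visit_counts[state])

    def step(state, action):
        """Toy environment: extrinsic reward only at the right end of the chain."""
        next_state = min(max(state + action, 0), N_STATES - 1)
        extrinsic_reward = 1.0 if next_state == N_STATES - 1 else 0.0
        return next_state, extrinsic_reward

    def choose_action(state):
        """Epsilon-greedy policy over the learned values."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_values[(state, a)])

    for episode in range(200):
        state = 0
        for _ in range(20):
            action = choose_action(state)
            next_state, extrinsic_reward = step(state, action)
            visit_counts[next_state] += 1
            # Extrinsic and intrinsic components merge into one signal
            # before entering the standard temporal-difference update.
            total_reward = extrinsic_reward + intrinsic_bonus(next_state)
            best_next = max(q_values[(next_state, a)] for a in ACTIONS)
            td_target = total_reward + GAMMA * best_next
            q_values[(state, action)] += ALPHA * (td_target - q_values[(state, action)])
            state = next_state

    print({s: round(max(q_values[(s, a)] for a in ACTIONS), 2) for s in range(N_STATES)})

Raising or lowering the weight on the intrinsic bonus changes how strongly the agent is driven by novelty relative to tangible payoff, offering one simple way to cast the intrinsic–extrinsic distinction as a quantitative rather than qualitative difference.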

Motivation is invisible. Yet people are extremely adept at using motivational concepts to interpret patterns of behavior. When we see a person acting in an unusual way, we cannot help wondering why he or she is doing it. Even infants show a basic inclination to infer others’ intentions or motives (Woodward, 1998). Many studies have shown that people readily give post hoc motivational explanations for behavior that was actually induced unconsciously by extraneous factors (Nisbett & Wilson, 1977). This inborn tendency to attribute motivation to action may have contributed to the current myriad of definitions, hypotheses, and constructs used to describe motivation (as we have discussed in this article). With advances in neuroscientific and computational approaches to motivation, the time may now be ripe to integrate these divergent views in a coherent, parsimonious way, instead of using motivation as a convenient “catch-all” to explain (or explain away) complex aspects of human behavior.

General conclusions

As we suggested at the outset of this article, it is indeed an exciting time for the study of motivation–cognition interactions. Although studies of motivation have been an active focus within psychology and neuroscience for decades, there has clearly been a recent rejuvenation of interest. This rejuvenation is due, at least in part, to a growing body of exciting new findings across a range of areas, including dissociations between goal-directed and habitual motivational control; subliminal priming of goal pursuit; ego depletion and related influences on the engagement of cognitive effort; age-related positivity biases; and adolescent oversensitivity to incentive motivation. Likewise, emerging insights into the mechanisms of motivation have been prompted by new evidence that motivation influences cognition in areas where it had previously been thought irrelevant—for example, in long-term memory formation. As we have reviewed above, and as is detailed in this special issue, some of these findings are having a strong impact on, and are being impacted by, current cognitive neuroscience research.

Yet for all the rejuvenation, excitement, and new findings, many challenges remain. We argue that the most critical and formidable challenge is that, with few exceptions, research on motivation–cognition interactions has been somewhat balkanized. Each of the different subfields tends to work largely in isolation, with the questions being pursued and methods being utilized showing little influence from, and awareness of, the parallel work going on in other areas. This balkanization has an impact even at the conceptual level, in terms of the definitions and dimensions that are used to taxonomize the domain and specify the relevant theoretical issues to be investigated.

Nevertheless, we believe that the time is ripe to move toward greater cross-disciplinary interaction and integration. A large number of pressing research questions are only just beginning to be addressed by current studies. We believe that the field is now poised to make rapid progress on these and related questions, but that such progress will critically depend on the adoption of an integrative, collaborative approach. Indeed, an explicit goal of this article, and of the special issue, is to encourage researchers toward such an approach, by highlighting not only the challenges, but also the opportunities, that come about from greater awareness of the breadth of motivation–cognition work occurring throughout psychology and neuroscience. Our hope is that the forging of new cross-disciplinary approaches and collaborations, perhaps inspired by this special issue, will lead us toward a more unified and comprehensive account of the mechanisms of motivation–cognition interaction.

Contributor Information

Todd S. Braver, Department of Psychology, Washington University in St. Louis, CB1125, One Brookings Drive, St. Louis, MO 63130, USA.

Marie K. Krug, Department of Psychology, Washington University in St. Louis, CB1125, One Brookings Drive, St. Louis, MO 63130, USA.

Kimberly S. Chiew, Center for Cognitive Neuroscience, Duke University, Durham, NC, USA.

Wouter Kool, Department of Psychology, Princeton University, Princeton, NJ, USA.

J. Andrew Westbrook, Department of Psychology, Washington University in St. Louis, CB1125, One Brookings Drive, St. Louis, MO 63130, USA.

Nathan J. Clement, Center for Cognitive Neuroscience, Duke University, Durham, NC, USA.

R. Alison Adcock, Center for Cognitive Neuroscience, Duke University, Durham, NC, USA.

Deanna M. Barch, Department of Psychology, Washington University in St. Louis, CB1125, One Brookings Drive, St. Louis, MO 63130, USA.

Matthew M. Botvinick, Department of Psychology, Princeton University, Princeton, NJ, USA.

Charles S. Carver, Department of Psychology, University of Miami, Miami, FL, USA.

Roshan Cools, Radboud University Nijmegen Medical Center, Nijmegen, The Netherlands.

Ruud Custers, Cognitive, Perceptual, and Brain Sciences, University College London, London, UK.

Anthony Dickinson, Experimental Psychology, University of Cambridge, Cambridge, UK.

Carol S. Dweck, Department of Psychology, Stanford University, Stanford, CA, USA.

Ayelet Fishbach, Booth School of Business, University of Chicago, Chicago, IL, USA.

Peter M. Gollwitzer, Department of Psychology, New York University, New York, NY, USA.

Thomas M. Hess, Department of Psychology, North Carolina State University, Raleigh, NC, USA.

Derek M. Isaacowitz, Department of Psychology, Northeastern University, Boston, MA, USA.

Mara Mather, University of Southern California, Los Angeles, CA, USA.

Kou Murayama, Department of Psychology, University of Reading, Reading, UK.

Luiz Pessoa, Department of Psychology, University of Maryland, College Park, MD, USA.

Gregory R. Samanez-Larkin, Department of Psychology, Yale University, New Haven, CT, USA.

Leah H. Somerville, Department of Psychology, Harvard University, Cambridge, MA, USA.

References

  • Aarts H, Custers R, Veltkamp M. Goal priming and the affective–motivational route to nonconscious goal pursuit. Social Cognition. 2008;26:555–577. [Google Scholar]
  • Aarts E, van Holstein M, Cools R. Striatal dopamine and the interface between motivation and cognition. Frontiers in Psychology. 2011;2:163. [PMC free article] [PubMed] [Google Scholar]
  • Aarts E, Wallace DL, Dang LC, Jagust WJ, Cools R, D’Esposito M. Dopamine and the cognitive downside of a promised bonus. Psychological Science. 2014;25:1003–1009. doi: 10.1177/0956797613517240. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Abelson RP. Theories of cognitive consistency: A sourcebook. Chicago, IL: Rand McNally; 1968. [Google Scholar]
  • Adcock RA, Thangavel A, Whitfield-Gabrieli S, Knutson B, Gabrieli JDE. Reward-motivated learning: Mesolimbic activation precedes memory formation. Neuron. 2006;50:507–517. [PubMed] [Google Scholar]
  • Ajzen I. The theory of planned behavior. Organizational Behavior and Human Decision Processes. 1991;50:179–211. [Google Scholar]
  • Alexander GE, DeLong MR, Strick PL. Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annual Review of Neuroscience. 1986;9:357–381. [PubMed] [Google Scholar]
  • Allaire JC. Everyday cognition. In: Whitbourne SK, Sliwinski MJ, editors. Blackwell handbook of adulthood and aging. Hoboken, NJ: Wiley-Blackwell; 2012. pp. 190–207. [Google Scholar]
  • Amiez C, Joseph JP, Procyk E. Reward encoding in the monkey anterior cingulate cortex. Cerebral Cortex. 2006;16:1040–1055. [PMC free article] [PubMed] [Google Scholar]
  • Andersen SL, Dumont NL, Teicher MH. Developmental differences in dopamine synthesis inhibition by (±)-7-OH-DPAT. Naunyn-Schmiedeberg's Archives of Pharmacology. 1997;356:173–181. doi: 10.1007/PL00005038. [PubMed] [CrossRef] [Google Scholar]
  • Aronson E. Dissonance theory: Progress and problems. In: Abelson RP, Aronson E, McGuire WJ, Newcomb TM, Rosenberg MJ, Tannenbaum PH, editors. Theories of cognitive consistency: A sourcebook. Chicago, IL: Rand McNally; 1968. pp. 5–27. [Google Scholar]
  • Austin JT, Vancouver JB. Goal constructs in psychology: Structure, process, and content. Psychological Bulletin. 1996;120:338–375. [Google Scholar]
  • Backman L, Nyberg L, Lindenberger U, Li SC, Farde L. The correlative triad among aging, dopamine, and cognition: Current status and future prospects. Neuroscience & Biobehavioral Reviews. 2006;30:791–807. [PubMed] [Google Scholar]
  • Baldo BA, Kelley AE. Discrete neurochemical coding of distinguishable motivational processes: Insights from nucleus accumbens control of feeding. Psychopharmacology. 2007;191:439–459. [PubMed] [Google Scholar]
  • Balleine BW, Killcross S. Parallel incentive processing: An integrated view of amygdala function. Trends in Neurosciences. 2006;29:272–279. [PubMed] [Google Scholar]
  • Baltes PB. On the incomplete architecture of human ontogeny. Selection, optimization, and compensation as foundation of developmental theory. American Psychologist. 1997;52:366–380. [PubMed] [Google Scholar]
  • Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review. 1977;84:191–215. [PubMed] [Google Scholar]
  • Barbas H, Pandya DN. Architecture and intrinsic connections of the prefrontal cortex in the rhesus monkey. Journal of Comparative Neurology. 1989;286:353–375. [PubMed] [Google Scholar]
  • Bargh JA, Gollwitzer PM, Lee-Chai A, Barndollar K, Trotschel R. The automated will: Nonconscious activation and pursuit of behavioral goals. Journal of Personality and Social Psychology. 2001;81:1014–1027. [PMC free article] [PubMed] [Google Scholar]
  • Bargh JA, Gollwitzer PM, Oettingen G. Motivation. In: Fiske S, Gilbert D, Lindzey G, editors. Handbook of social psychology. 5. New York, NY: Wiley; 2010. pp. 268–316. [Google Scholar]
  • Bargh JA, Morsella E. The unconscious mind. Perspectives in Psychological Science. 2008;3:73–79. [PMC free article] [PubMed] [Google Scholar]
  • Bargh JA, Morsella E. Unconscious behavioral guidance systems. In: Agnew C, Carlston D, Graziano W, Kelly J, editors. Then a miracle occurs: Focusing on behavior in social psychological theory and research. New York, NY: Oxford University Press; 2010. pp. 89–118. [Google Scholar]
  • Baumeister RF, Leary MR. The need to belong: Desire for interpersonal attachments as a fundamental human motivation. Psychological Bulletin. 1995;117:497–529. doi: 10.1037/0033-2909.117.3.497. [PubMed] [CrossRef] [Google Scholar]
  • Baumeister RF, Showers CJ. A review of paradoxical performance effects: Choking under pressure in sports and mental tests. European Journal of Social Psychology. 1986;16:361–383. [Google Scholar]
  • Baumeister RF, Vohs KD. Self-regulation, ego depletion, and motivation. Social and Personality Psychology Compass. 2007;1:115–128. [Google Scholar]
  • Baumeister RF, Vohs KD, Tice DM. The strength model of self-control. Current Directions in Psychological Science. 2007;16:351–355. [Google Scholar]
  • Beilock S. Choke: What the secrets of the brain reveal about getting it right when you have to. New York, NY: Free Press; 2010. [Google Scholar]
  • Beierholm U, Guitart-Masip M, Economides M, Chowdhury R, Düzel E, Dolan R, Dayan P. Dopamine modulates reward-related vigor. Neuropsychopharmacology. 2013;38(8):1495–1503. [PMC free article] [PubMed] [Google Scholar]
  • Belin D, Jonkman S, Dickinson A, Robbins TW, Everitt BJ. Parallel and interactive learning processes within the basal ganglia: Relevance for the understanding of addiction. Behavioural Brain Research. 2009;199:89–102. [PubMed] [Google Scholar]
  • Berridge KC. Pleasures of the brain. Brain and Cognition. 2003;52:106–128. [PubMed] [Google Scholar]
  • Berridge KC. Motivation concepts in behavioral neuroscience. Physiology & Behavior. 2004;81:179–209. doi: 10.1016/j.physbeh.2004.02.004. [PubMed] [CrossRef] [Google Scholar]
  • Berridge KC. The debate over dopamine’s role in reward: The case for incentive salience. Psychopharmacology. 2007;191:391–431. [PubMed] [Google Scholar]
  • Berridge KC. From prediction error to incentive salience: Mesolimbic computation of reward motivation. European Journal of Neuroscience. 2012;35:1124–1143. [PMC free article] [PubMed] [Google Scholar]
  • Berridge KC, Robinson TE. What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews. 1998;28:309–369. [PubMed] [Google Scholar]
  • Berridge KC, Robinson TE, Aldridge JW. Dissecting components of reward: “Liking”, “wanting”, and learning. Current Opinion in Pharmacology. 2009;9:65–73. [PMC free article] [PubMed] [Google Scholar]
  • Berridge CW, Waterhouse BD. The locus coeruleus-noradrenergic system: Modulation of behavioral state and state-dependent cognitive processes. Brain Research Reviews. 2003;42:33–84. [PubMed] [Google Scholar]
  • Bijleveld E, Custers R, Aarts H. The unconscious eye opener: Pupil dilation reveals strategic recruitment of resources upon presentation of subliminal reward cues. Psychological Science. 2009;20:1313–1315. doi: 10.1111/j.1467-9280.2009.02443.x. [PubMed] [CrossRef] [Google Scholar]
  • Bijleveld E, Custers R, Aarts H. Unconscious reward cues increase invested effort, but do not change speed–accuracy tradeoffs. Cognition. 2010;115:330–335. [PubMed] [Google Scholar]
  • Bindra D. A motivational view of learning, performance, and behavior modification. Psychological Review. 1974;81:199–213. [PubMed] [Google Scholar]
  • Blackwell LS, Trzesniewski KH, Dweck CS. Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development. 2007;78:246–263. [PubMed] [Google Scholar]
  • Bonner SE, Hastie R, Sprinkle GS, Young SM. A review of the effects of financial incentives on performance in laboratory tasks: Implications for management accounting. Journal of Management Accounting Research. 2000;12:19–64. [Google Scholar]
  • Bonner SE, Sprinkle GS. The effects of monetary incentives on effort and task performance: Theories, evidence, and a framework for research. Accounting, Organizations and Society. 2002;27:303–345. [Google Scholar]
  • Botvinick MM. Conflict monitoring and decision making: Reconciling two perspectives on anterior cingulate function. Cognitive, Affective, & Behavioral Neuroscience. 2007;7:356–366. doi: 10.3758/CABN.7.4.356. [PubMed] [CrossRef] [Google Scholar]
  • Botvinick MM. Hierarchical reinforcement learning and decision making. Current Opinion in Neurobiology. 2012;22:956–962. doi: 10.1016/j.conb.2012.05.008. [PubMed] [CrossRef] [Google Scholar]
  • Botvinick MM, Huffstetler S, McGuire JT. Effort discounting in human nucleus accumbens. Cognitive, Affective, & Behavioral Neuroscience. 2009;9:16–27. [PMC free article] [PubMed] [Google Scholar]
  • Braem S, King JA, Korb FM, Krebs RM, Notebaert W, Egner T. Affective modulation of cognitive control is determined by performance-contingency and mediated by ventrome-dial prefrontal and cingulate cortex. Journal of Neuroscience. 2013;33:16961–16970. [PMC free article] [PubMed] [Google Scholar]
  • Braem S, Verguts T, Roggeman C, Notebaert W. Reward modulates adaptations to conflict. Cognition. 2012;125:324–332. [PubMed] [Google Scholar]
  • Braver TS. The variable nature of cognitive control: A dual mechanisms framework. Trends in Cognitive Sciences. 2012;16:106–113. doi: 10.1016/j.tics.2011.12.010. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Braver TS, Burgess GC. Explaining the many varieties of working memory variation: Dual mechanisms of cognitive control. In: Conway A, Jarrold C, Kane M, Miyake A, Towse J, editors. Variation in working memory. Oxford, UK: Oxford University Press; 2007. pp. 76–106. [Google Scholar]
  • Braver TS, Cohen JD. On the control of control: The role of dopamine in regulating prefrontal function and working memory. In: Monsell S, Driver J, editors. Attention and performance. XVIII. Cambridge, MA: MIT Press; 2000. pp. 713–737. [Google Scholar]
  • Braver TS, Paxton JL, Locke HS, Barch DM. Flexible neural mechanisms of cognitive control within human prefrontal cortex. Proceedings of the National Academy of Sciences. 2009;106:7351–7356. [PMC free article] [PubMed] [Google Scholar]
  • Brenhouse HC, Sonntag KC, Andersen SL. Transient D1 dopamine receptor expression on prefrontal cortex projection neurons: Relationship to enhanced motivational salience of drug cues in adolescence. Journal of Neuroscience. 2008;28:2375–2382. [PMC free article] [PubMed] [Google Scholar]
  • Breton YA, Mullett A, Conover K, Shizgal P. Validation and extension of the reward-mountain model. Frontiers in Behavioral Neuroscience. 2013;7:125. [PMC free article] [PubMed] [Google Scholar]
  • Bromberg-Martin ES, Matsumoto M, Hikosaka O. Dopamine in motivational control: Rewarding, aversive, and alerting. Neuron. 2010;68:815–834. [PMC free article] [PubMed] [Google Scholar]
  • Cacioppo JT, Berntson GG, Bechara A, Tranel D, Hawkley LC. Could an aging brain contribute to subjective well-being? The value added by a social neuroscience perspective. In: Todorov A, Fiske ST, Prentice D, editors. Social neuroscience: Toward understanding the underpinnings of the social mind. New York, NY: Oxford University Press; 2011. pp. 249–262. [Google Scholar]
  • Cacioppo JT, Petty RE, Feinstein JA, Jarvis WBG. Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin. 1996;119:197–253. doi: 10.1037/0033-2909.119.2.197. [CrossRef] [Google Scholar]
  • Calabresi P, Picconi B, Tozzi A, Di Filippo M. Dopamine-mediated regulation of corticostriatal synaptic plasticity. Trends in Neurosciences. 2007;30:211–219. [PubMed] [Google Scholar]
  • Callan DE, Schweighofer N. Positive and negative modulation of word learning by reward anticipation. Human Brain Mapping. 2008;29:237–249. [PMC free article] [PubMed] [Google Scholar]
  • Camerer CF, Hogarth RM. The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty. 1999;19:7–42. [Google Scholar]
  • Capa RL, Bustin GM, Cleeremans A, Hansenne M. Conscious and unconscious reward cues can affect a critical component of executive control. Experimental Psychology. 2011;58:370–375. [PubMed] [Google Scholar]
  • Carbonell F, Nagano-Saito A, Leyton M, Cisek P, Benkelfat C, He Y, Dagher A. Dopamine precursor depletion impairs structure and efficiency of resting state brain functional networks. Neuropharmacology. 2014 doi: 10.1016/j.neuropharm.2013.12.021. Advance online publication. [PubMed] [CrossRef] [Google Scholar]
  • Carstensen LL. The influence of a sense of time on human development. Science. 2006;312:1913–1915. [PMC free article] [PubMed] [Google Scholar]
  • Carstensen LL, Isaacowitz DM, Charles ST. Taking time seriously: A theory of socioemotional selectivity. American Psychologist. 1999;54:165–181. [PubMed] [Google Scholar]
  • Carstensen LL, Turan B, Scheibe S, Ram N, Ersner-Hershfield H, Samanez-Larkin GR, Nesselroade JR. Emotional experience improves with age: Evidence based on over 10 years of experience sampling. Psychology and Aging. 2011;26:21–33. doi: 10.1037/a0021285. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Carter RM, Macinnes JJ, Huettel SA, Adcock RA. Activation in the VTA and nucleus accumbens increases in anticipation of both gains and losses. Frontiers in Behavioral Neuroscience. 2009;3:21. doi: 10.3389/neuro.08.021.2009. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Carver CS. Pleasure as a sign you can attend to something else: Placing positive feelings within a general model of affect. Cognition and Emotion. 2003;17:241–261. [PubMed] [Google Scholar]
  • Carver CS. Behavioral approach, behavioral avoidance, and behavioral inhibition. In: Mikulincer M, Shaver PR, editors. APA handbook of personality and social psychology: Vol. 4. Personality processes and individual differences. Washington, DC: American Psychological Association; in press. [Google Scholar]
  • Carver CS, Harmon-Jones E. Anger is an approach-related affect: Evidence and implications. Psychological Bulletin. 2009;135:183–204. doi: 10.1037/a0013965. [PubMed] [CrossRef] [Google Scholar]
  • Carver CS, Scheier MF. On the self-regulation of behavior. New York, NY: Cambridge University Press; 1998. [Google Scholar]
  • Carver CS, White T. Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS scales. Journal of Personality and Social Psychology. 1994;67:319–333. doi: 10.1037/0022-3514.67.2.319. [CrossRef] [Google Scholar]
  • Casey BJ, Galvan A, Hare TA. Changes in cerebral functional organization during cognitive development. Current Opinion in Neurobiology. 2005;15:239–244. [PubMed] [Google Scholar]
  • Cauffman E, Shulman EP, Steinberg L, Claus E, Banich MT, Graham S, Woolard J. Age differences in affective decision making as indexed by performance on the Iowa Gambling Task. Developmental Psychology. 2010;46:193–207. doi: 10.1037/a0016128. [PubMed] [CrossRef] [Google Scholar]
  • Charles ST. Strength and vulnerability integration: A model of emotional well-being across adulthood. Psychological Bulletin. 2010;136:1068–1091. [PMC free article] [PubMed] [Google Scholar]
  • Chiew KS, Braver TS. Positive affect versus reward: Emotional and motivational influences on cognitive control. Frontiers in Psychology. 2011;2:279. [PMC free article] [PubMed] [Google Scholar]
  • Chiew KS, Braver TS. Temporal dynamics of motivation–cognitive control interactions revealed by high-resolution pupillometry. Frontiers in Psychology. 2013;4:15. [PMC free article] [PubMed] [Google Scholar]
  • Choi JM, Padmala S, Spechler P, Pessoa L. Pervasive competition between threat and reward in the brain. Social Cognitive and Affective Neuroscience. 2013 doi: 10.1093/scan/nst053. Advance online publication. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Clement TS, Feltus JR, Kaiser DH, Zentall TR. “Work ethic” in pigeons: Reward value is directly related to the effort or time required to obtain the reward. Psychonomic Bulletin & Review. 2000;7:100–106. [PubMed] [Google Scholar]
  • Cohen JR, Asarnow RF, Sabb FW, Bilder RM, Bookheimer SY, Knowlton BJ, Poldrack RA. A unique adolescent response to reward prediction errors. Nature Neuroscience. 2010;13:669–671. doi: 10.1038/nn.2558. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Cools R, D’Esposito M. Inverted-U-shaped dopamine actions on human working memory and cognitive control. Biological Psychiatry. 2011;69:e113–e125. [PMC free article] [PubMed] [Google Scholar]
  • Cools R, Robbins TW. Chemistry of the adaptive mind. Philosophical Transactions of the Royal Society A. 2004;362:2871–2888. doi: 10.1098/rsta.2004.1468. [PubMed] [CrossRef] [Google Scholar]
  • Cooper JC, Knutson B. Valence and salience contribute to nucleus accumbens activation. NeuroImage. 2008;39:538–547. [PMC free article] [PubMed] [Google Scholar]
  • Custers R, Aarts H. Positive affect as implicit motivator: On the nonconscious operation of behavioral goals. Journal of Personality and Social Psychology. 2005;89:129–142. [PubMed] [Google Scholar]
  • Custers R, Aarts H. The unconscious will: How the pursuit of goals operates outside of conscious awareness. Science. 2010;329:47–50. [PubMed] [Google Scholar]
  • Custers R, Eitam B, Bargh JA. Conscious and unconscious processes in goal pursuit. In: Aarts H, Elliot AJ, editors. Goal-directed behavior. New York, NY: Psychology Press; 2012. pp. 231–266. [Google Scholar]
  • D’Ardenne K, Lohrenz T, Bartley KA, Montague PR. Computational heterogeneity in the human mesencephalic dopamine system. Cognitive, Affective, & Behavioral Neuroscience. 2013;13:747–756. doi: 10.3758/s13415-013-0191-5. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Daw ND, Kakade S, Dayan P. Opponent interactions between serotonin and dopamine. Neural Networks. 2002;15:603–616. [PubMed] [Google Scholar]
  • Daw ND, Niv Y, Dayan P. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience. 2005;8:1704–1711. [PubMed] [Google Scholar]
  • Daw ND, O’Doherty JP, Dayan P, Seymour B, Dolan RJ. Cortical substrates for exploratory decisions in humans. Nature. 2006;441:876–879. [PMC free article] [PubMed] [Google Scholar]
  • Daw ND, Shohamy D. The cognitive neuroscience of motivation and learning. Social Cognition. 2008;26:593–620. [Google Scholar]
  • Dayan P, Balleine BW. Reward, motivation, and reinforcement learning. Neuron. 2002;36:285–298. [PubMed] [Google Scholar]
  • Dayan P, Niv Y, Seymour B, Daw ND. The misbehavior of value and the discipline of the will. Neural Networks. 2006;19:1153–1160. [PubMed] [Google Scholar]
  • Deci EL. Effects of externally mediated rewards on intrinsic motivation. Journal of Personality and Social Psychology. 1971;18:105–115. [Google Scholar]
  • Deci EL, Koestner R, Ryan RM. A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin. 1999;125:627–668. disc. 692–700. [PubMed] [Google Scholar]
  • Deci EL, Ryan RM. Intrinsic motivation and self-determination in human behavior. New York, NY: Plenum Press; 1985. [Google Scholar]
  • Dickinson A. Actions and habits: The development of behavioural autonomy. Philosophical Transactions of the Royal Society B. 1985;308:67–78. [Google Scholar]
  • Dickinson A, Balleine B. Motivational control of goal-directed action. Animal Learning & Behavior. 1994;22:1–18. doi: 10.3758/BF03199951. [CrossRef] [Google Scholar]
  • Dickinson A, Balleine B. Motivational control of instrumental action. Current Directions in Psychological Science. 1995;4:162–167. [Google Scholar]
  • Dickinson A, Balleine BW. Causal cognition and goal-directed action. In: Heyes C, Huber L, editors. The evolution of cognition. Cambridge, MA: MIT Press; 2000. pp. 185–204. [Google Scholar]
  • Dickinson A, Dawson GR. Pavlovian processes in the motivational control of instrumental performance. Quarterly Journal of Experimental Psychology. 1987;39B:201–213. [Google Scholar]
  • Doll BB, Simon DA, Daw ND. The ubiquity of model-based reinforcement learning. Current Opinion in Neurobiology. 2012;22:1075–1081. [PMC free article] [PubMed] [Google Scholar]
  • Dommett E, Coizet V, Blaha CD, Martindale J, Lefebvre V, Walton N, Redgrave P. How visual stimuli activate dopaminergic neurons at short latency. Science. 2005;307:1476–1479. doi: 10.1126/science.1107026. [PubMed] [CrossRef] [Google Scholar]
  • Douglas LA, Varlinskaya EI, Spear LP. Novel-object place conditioning in adolescent and adult male and female rats: Effects of social isolation. Physiology and Behavior. 2003;80:317–325. [PubMed] [Google Scholar]
  • Dreisbach G, Fischer R. The role of affect and reward in the conflict-triggered adjustment of cognitive control. Frontiers in Human Neuroscience. 2012;6:342. [PMC free article] [PubMed] [Google Scholar]
  • Driscoll I, Davatzikos C, An Y, Wu X, Shen D, Kraut M, Resnick SM. Longitudinal pattern of regional brain volume change differentiates normal aging from MCI. Neurology. 2009;72:1906–1913. doi: 10.1212/WNL.0b013e3181a82634. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Durstewitz D, Seamans JK. The dual-state theory of prefrontal cortex dopamine function with relevance to catechol-o-methyltransferase genotypes and schizophrenia. Biological Psychiatry. 2008;64:739–749. [PubMed] [Google Scholar]
  • Dweck CS. Motivational processes affecting learning. American Psychologist. 1986;41:1040–1048. [Google Scholar]
  • Dweck CS. Mindset: How you can fulfil your potential. London, UK: Constable & Robinson Limited; 2012. [Google Scholar]
  • Easterbrook JA. The effect of emotion on cue utilization and the organization of behavior. Psychological Review. 1959;66:183–201. [PubMed] [Google Scholar]
  • Eisenberger R. Learned industriousness. Psychological Review. 1992;99:248–267. [PubMed] [Google Scholar]
  • Elliot AJ. Handbook of approach and avoidance motivation. New York, NY: Psychology Press; 2008. [Google Scholar]
  • Elliot AJ, Fryer JW. The goal construct in psychology. In: Shah J, Gardner W, editors. Handbook of motivation science. New York, NY: Guilford Press; 2008. pp. 235–250. [Google Scholar]
  • Engelmann JB, Damaraju E, Padmala S, Pessoa L. Combined effects of attention and motivation on visual task performance: Transient and sustained motivational effects. Frontiers in Human Neuroscience. 2009;3:4. [PMC free article] [PubMed] [Google Scholar]
  • Ennis GE, Hess TM, Smith BT. The impact of age and motivation on cognitive effort: Implications for cognitive engagement in older adulthood. Psychology and Aging. 2013;28:495–504. [PMC free article] [PubMed] [Google Scholar]
  • Eppinger B, Schuck NW, Nystrom LE, Cohen JD. Reduced striatal responses to reward prediction errors in older compared with younger adults. Journal of Neuroscience. 2013;33:9905–9912. [PMC free article] [PubMed] [Google Scholar]
  • Estes WK. Discriminative conditioning: I. A discriminative property of conditioned anticipation. Journal of Experimental Psychology. 1943;32:150–155. [Google Scholar]
  • Eysenck MW, Derakshan N, Santos R, Calvo MG. Anxiety and cognitive performance: Attentional control theory. Emotion. 2007;7:336–353. doi: 10.1037/1528-3542.7.2.336. [PubMed] [CrossRef] [Google Scholar]
  • Festinger L. A theory of cognitive dissonance. Stanford, CA: Stanford University Press; 1957. [Google Scholar]
  • Figner B, Mackinlay RJ, Wilkening F, Weber EU. Affective and deliberative processes in risky choice: Age differences in risk taking in the Columbia Card Task. Journal of Experimental Psychology: Learning, Memory, and Cognition. 2009;35:709–730. [PubMed] [Google Scholar]
  • Fiorillo CD, Tobler PN, Schultz W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science. 2003;299:1898–1902. [PubMed] [Google Scholar]
  • Fishbach A, Dhar R. Goals as excuses or guides: The liberating effect of perceived goal progress on choice. Journal of Consumer Research. 2005;32:370–377. [Google Scholar]
  • Fishbach A, Dhar R, Zhang Y. Subgoals as substitutes or complements: The role of goal accessibility. Journal of Personality and Social Psychology. 2006;91:232–242. [PubMed] [Google Scholar]
  • Fishbach A, Koo M, Finkelstein SR. Motivation resulting from completed and missing actions. In: Olson J, Zanna MP, editors. Advances in experimental social psychology. Vol. 50. New York, NY: Elsevier; in press. [Google Scholar]
  • Frank MJ, Kong L. Learning to avoid in older age. Psychology and Aging. 2008;23:392–398. [PubMed] [Google Scholar]
  • Fredrickson BL, Branigan C. Positive emotions broaden the scope of attention and thought-action repertoires. Cognition and Emotion. 2005;19:313–332. [PMC free article] [PubMed] [Google Scholar]
  • Freund AM. Age-differential motivational consequences of optimization versus compensation focus in younger and older adults. Psychology and Aging. 2006;21:240–252. [PubMed] [Google Scholar]
  • Frey BS, Jegen R. Motivation crowding theory. Journal of Economic Surveys. 2001;15:589–611. [Google Scholar]
  • Gable PA, Harmon-Jones E. Approach-motivated positive affect reduces breadth of attention. Psychological Science. 2008;19:476–482. [PubMed] [Google Scholar]
  • Galvan A, Hare TA, Parra CE, Penn J, Voss H, Glover G, Casey BJ. Earlier development of the accumbens relative to orbitofrontal cortex might underlie risk-taking behavior in adolescents. Journal of Neuroscience. 2006;26:6885–6892. [PMC free article] [PubMed] [Google Scholar]
  • Galvan A, McGlennen KM. Enhanced striatal sensitivity to aversive reinforcement in adolescents versus adults. Journal of Cognitive Neuroscience. 2013;25:284–296. [PubMed] [Google Scholar]
  • Gamo NJ, Arnsten AF. Molecular modulation of prefrontal cortex: Rational development of treatments for psychiatric disorders. Behavioral Neuroscience. 2011;125:282–296. [PMC free article] [PubMed] [Google Scholar]
  • Geier CF, Terwilliger R, Teslovich T, Velanova K, Luna B. Immaturities in reward processing and its influence on inhibitory control in adolescence. Cerebral Cortex. 2010;20:1613–1629. [PMC free article] [PubMed] [Google Scholar]
  • Germain CM, Hess TM. Motivational influences on controlled processing: Moderating distractibility in older adults. Neuropsychology, Development, and Cognition B. 2007;14:462–486. doi: 10.1080/13825580600611302. [PubMed] [CrossRef] [Google Scholar]
  • Gollwitzer PM. Action phases and mind-sets. In: Higgins ET, Sorrentino RM, editors. The handbook of motivation and cognition: Foundations of social behavior. Vol. 2. New York, NY: Guilford Press; 1990. pp. 53–92. [Google Scholar]
  • Gollwitzer PM. Implementation intentions: Strong effects of simple plans. American Psychologist. 1999;54:493–503. [Google Scholar]
  • Gollwitzer PM. Mindset theory of action phases. In: Van Lange P, Kruglanski AW, Higgins ET, editors. Handbook of theories of social psychology. Vol. 1. London, UK: Sage; 2012. pp. 526–545. [Google Scholar]
  • Gollwitzer PM, Barry H, Oettingen G. Needs and incentives as sources of goals. In: Aarts H, Elliot A, editors. Goal-directed behavior. New York, NY: Psychology Press; 2011. pp. 115–149. [Google Scholar]
  • Gollwitzer PM, Heckhausen H, Ratajczak H. From weighing to willing: Approaching a change decision through pre-or postdecisional mentation. Organizational Behavior and Human Decision Processes. 1990;45:41–65. [Google Scholar]
  • Gollwitzer PM, Moskowitz GB. Goal effects on action and cognition. In: Higgins ET, Kruglanski AW, editors. Social psychology: Handbook of basic principles. New York, NY: Guilford Press; 1996. pp. 361–399. [Google Scholar]
  • Gollwitzer PM, Oettingen G. Planning promotes goal striving. In: Vohs KD, Baumeister RF, editors. Handbook of self-regulation: Research, theory, and applications. 2. New York, NY: Guilford Press; 2011. pp. 162–185. [Google Scholar]
  • Gray JA. The psychology of fear and stress. Cambridge, UK: Cambridge University Press; 1987. [Google Scholar]
  • Green DM, Swets JA. Signal detection theory and psychophysics. New York, NY: Wiley; 1966. [Google Scholar]
  • Guitart-Masip M, Huys QJ, Fuentemilla L, Dayan P, Duzel E, Dolan RJ. Go and no-go learning in reward and punishment: Interactions between affect and effect. NeuroImage. 2012;62:154–166. [PMC free article] [PubMed] [Google Scholar]
  • Haber SN, Fudge JL, McFarland NR. Striatonigrostriatal pathways in primates form an ascending spiral from the shell to the dorsolateral striatum. Journal of Neuroscience. 2000;20:2369–2382. [PMC free article] [PubMed] [Google Scholar]
  • Harmon-Jones E. Anger and the behavioral approach system. Personality and Individual Differences. 2003;35:995–1005. [Google Scholar]
  • Harmon-Jones E, Gable PA, Price TF. Does negative affect always narrow and positive affect always broaden the mind? Considering the influence of motivational intensity on cognitive scope. Current Directions in Psychological Science. 2013;22:301–307. [Google Scholar]
  • Heckhausen H, Gollwitzer PM. Thought contents and cognitive functioning in motivational versus volitional states of mind. Motivation and Emotion. 1987;11:101–120. [Google Scholar]
  • Heckhausen J, Wrosch C, Schulz R. A motivational theory of life-span development. Psychological Review. 2010;117:32–60. doi: 10.1037/a0017668. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Henze DA, Gonzalez-Burgos GR, Urban NN, Lewis DA, Barrionuevo G. Dopamine increases excitability of pyramidal neurons in primate prefrontal cortex. Journal of Neurophysiology. 2000;84:2799–2809. [PubMed] [Google Scholar]
  • Hermans EJ, van Marle HJ, Ossewaarde L, Henckens MJ, Qin S, van Kesteren MT, Fernández G. Stress-related noradrenergic activity prompts large-scale neural network reconfiguration. Science. 2011;334:1151–1153. doi: 10.1126/science.1209603. [PubMed] [CrossRef] [Google Scholar]
  • Hershberger WA. An approach through the looking glass. Animal Learning & Behavior. 1986;14:443–451. [Google Scholar]
  • Hess TM. Selective engagement of cognitive resources: Motivational influences on older adults’ cognitive functioning. Perspectives on Psychological Science, in press. [PMC free article] [PubMed] [Google Scholar]
  • Hess TM, Emery L. Memory in context: The impact of age-related goals on performance. In: Naveh-Benjamin M, Ohta N, editors. Perspectives on memory and aging. New York, NY: Psychology Press; 2012. [Google Scholar]
  • Hess TM, Emery L, Neupert SD. Longitudinal relationships between resources, motivation, and functioning. Journals of Gerontology. 2012;67B:299–308. doi: 10.1093/geronb/gbr100. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Higgins ET. Self-discrepancy: A theory relating self and affect. Psychological Review. 1987;94:319–340. [PubMed] [Google Scholar]
  • Higgins ET. Beyond pleasure and pain. American Psychologist. 1997;52:1280–1300. [PubMed] [Google Scholar]
  • Holland PC. Relations between Pavlovian–instrumental transfer and reinforcer devaluation. Journal of Experimental Psychology: Animal Behavior Processes. 2004;30:104–117. doi: 10.1037/0097-7403.30.2.104. [PubMed] [CrossRef] [Google Scholar]
  • Holroyd CB, Coles MG. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review. 2002;109:679–709. doi: 10.1037/0033-295X.109.4.679. [PubMed] [CrossRef] [Google Scholar]
  • Holroyd CB, Yeung N. Motivation of extended behaviors by anterior cingulate cortex. Trends in Cognitive Sciences. 2012;16:122–128. doi: 10.1016/j.tics.2011.12.008. [PubMed] [CrossRef] [Google Scholar]
  • Hommer DW, Knutson B, Fong GW, Bennett S, Adams CM, Varnera JL. Amygdalar recruitment during anticipation of monetary rewards: An event-related fMRI study. Annals of the New York Academy of Sciences. 2003;985:476–478. [PubMed] [Google Scholar]
  • Howe MW, Tierney PL, Sandberg SG, Phillips PE, Graybiel AM. Prolonged dopamine signalling in striatum signals proximity and value of distant rewards. Nature. 2013;500:575–579. [PMC free article] [PubMed] [Google Scholar]
  • Hübner R, Schlösser J. Monetary reward increases attentional effort in the flanker task. Psychonomic Bulletin & Review. 2010;17:821–826. doi: 10.3758/PBR.17.6.821. [PubMed] [CrossRef] [Google Scholar]
  • Hull CL. Principles of behavior, an introduction to behavior theory. New York, NY: Appleton-Century-Crofts; 1943. [Google Scholar]
  • Inzlicht M, Schmeichel BJ, Macrae CN. Why self-control seems (but may not be) limited. Trends in Cognitive Sciences. 2014;18:127–133. [PubMed] [Google Scholar]
  • Isaacowitz DM, Toner K, Goren D, Wilson HR. Looking while unhappy: Mood-congruent gaze in young adults, positive gaze in older adults. Psychological Science. 2008;19:848–853. [PMC free article] [PubMed] [Google Scholar]
  • Isaacowitz DM, Wadlinger HA, Goren D, Wilson HR. Selective preference in visual fixation away from negative images in old age? An eye-tracking study. Psychology and Aging. 2006;21:40–48. [PubMed] [Google Scholar]
  • Isen AM, Daubman KA, Nowicki GP. Positive affect facilitates creative problem solving. Journal of Personality and Social Psychology. 1987;52:1122–1131. [PubMed] [Google Scholar]
  • Jimura K, Locke HS, Braver TS. Prefrontal cortex mediation of cognitive enhancement in rewarding motivational contexts. Proceedings of the National Academy of Sciences. 2010;107:8871–8876. [PMC free article] [PubMed] [Google Scholar]
  • Job V, Walton GM, Bernecker K, Dweck CS. Beliefs about willpower determine the impact of glucose on self-control. Proceedings of the National Academy of Sciences. 2013;110:14837–14842. [PMC free article] [PubMed] [Google Scholar]
  • Kang MJ, Hsu M, Krajbich IM, Loewenstein G, McClure SM, Wang JT, Camerer CF. The wick in the candle of learning: Epistemic curiosity activates reward circuitry and enhances memory. Psychological Science. 2009;20:963–973. doi: 10.1111/j.1467-9280.2009.02402.x. [PubMed] [CrossRef] [Google Scholar]
  • Kennedy Q, Mather M, Carstensen LL. The role of motivation in the age-related positivity effect in autobiographical memory. Psychological Science. 2004;15:208–214. [PubMed] [Google Scholar]
  • Kennerley SW, Walton ME, Behrens TE, Buckley MJ, Rushworth MF. Optimal decision making and the anterior cingulate cortex. Nature Neuroscience. 2006;9:940–947. [PubMed] [Google Scholar]
  • Kinnison J, Padmala S, Choi JM, Pessoa L. Network analysis reveals increased integration during emotional and motivational processing. Journal of Neuroscience. 2012;32:8361–8372. [PMC free article] [PubMed] [Google Scholar]
  • Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119:254–284. [Google Scholar]
  • Knight M, Seymour TL, Gaunt JT, Baker C, Nesmith K, Mather M. Aging and goal-directed emotional attention: Distraction reverses emotional biases. Emotion. 2007;7:705–714. [PubMed] [Google Scholar]
  • Knutson B, Fong GW, Adams CM, Varner JL, Hommer D. Dissociation of reward anticipation and outcome with event-related fMRI. NeuroReport. 2001;12:3683–3687. [PubMed] [Google Scholar]
  • Kobayashi S, Lauwereyns J, Koizumi M, Sakagami M, Hikosaka O. Influence of reward expectation on visuospatial processing in macaque lateral prefrontal cortex. Journal of Neurophysiology. 2002;87:1488–1498. [PubMed] [Google Scholar]
  • Koo M, Fishbach A. Dynamics of self-regulation: How (un)accomplished goal actions affect motivation. Journal of Personality and Social Psychology. 2008;94:183–195. [PubMed] [Google Scholar]
  • Kool W, Botvinick M. A labor/leisure tradeoff in cognitive control. Journal of Experimental Psychology: General. 2014;143:131–141. doi: 10.1037/a0031048. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Kool W, McGuire JT, Rosen ZB, Botvinick MM. Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General. 2010;139:665–682. [PMC free article] [PubMed] [Google Scholar]
  • Kool W, McGuire JT, Wang GJ, Botvinick MM. Neural and behavioral evidence for an intrinsic cost of self-control. PLoS ONE. 2013;8:e72626. doi: 10.1371/journal.pone.0072626. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Kouneiher F, Charron S, Koechlin E. Motivation and cognitive control in the human prefrontal cortex. Nature Neuroscience. 2009;12:939–945. [PubMed] [Google Scholar]
  • Krebs RM, Boehler CN, Roberts KC, Song AW, Woldorff MG. The involvement of the dopaminergic midbrain and cortico–striatal–thalamic circuits in the integration of reward prospect and attentional task demands. Cerebral Cortex. 2012;22:607–615. [PMC free article] [PubMed] [Google Scholar]
  • Krebs RM, Boehler CN, Woldorff MG. The influence of reward associations on conflict processing in the Stroop task. Cognition. 2010;117:341–347. [PMC free article] [PubMed] [Google Scholar]
  • Kringelbach ML, Berridge KC. Towards a functional neuroanatomy of pleasure and happiness. Trends in Cognitive Sciences. 2009;13:479–487. [PMC free article] [PubMed] [Google Scholar]
  • Kross E, Ayduk O. Making meaning of negative experiences by self-distancing. Current Directions in Psychological Science. 2011;20:187–199. [Google Scholar]
  • Krug MK, Braver TS. Motivation and cognitive control: Going beyond monetary incentives. In: Bijleveld E, Aarts H, editors. The psychological science of money. New York, NY: Springer; in press. [Google Scholar]
  • Kurzban R, Duckworth A, Kable JW, Myers J. An opportunity cost model of subjective effort and task performance. Behavioral and Brain Sciences. 2013;36:661–679. [PMC free article] [PubMed] [Google Scholar]
  • Lammel S, Lim BK, Malenka RC. Reward and aversion in a heterogeneous midbrain dopamine system. Neuropharmacology. 2014;76(Pt B):351–359. doi: 10.1016/j.neuropharm.2013.03.019. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Lammel S, Lim BK, Ran C, Huang KW, Betley MJ, Tye KM, Malenka RC. Input-specific control of reward and aversion in the ventral tegmental area. Nature. 2012;491:212–217. doi: 10.1038/nature11527. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Leon MI, Shadlen MN. Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron. 1999;24:415–425. [PubMed] [Google Scholar]
  • Leotti LA, Delgado MR. The inherent reward of choice. Psychological Science. 2011;22:1310–1318. [PMC free article] [PubMed] [Google Scholar]
  • Lepper MR, Greene D, Nisbett RE. Undermining children’s intrinsic interest with extrinsic reward: Test of the “overjustification” hypothesis. Journal of Personality and Social Psychology. 1973;28:129–137. [Google Scholar]
  • Lisman J, Grace AA, Duzel E. A neoHebbian framework for episodic memory: Role of dopamine-dependent late LTP. Trends in Neurosciences. 2011;34:536–547. doi: 10.1016/j.tins.2011.07.006. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Locke HS, Braver TS. Motivational influences on cognitive control: Behavior, brain activation, and individual differences. Cognitive, Affective, & Behavioral Neuroscience. 2008;8:99–112. doi: 10.3758/CABN.8.1.99. [PubMed] [CrossRef] [Google Scholar]
  • Locke HS, Braver TS. Motivational influences on cognitive control: A cognitive neuroscience perspective. In: Hassin RR, Ochsner KN, Trope Y, editors. Self control in society, mind, and brain. New York, NY: Oxford University Press; 2010. pp. 114–140. [Google Scholar]
  • Loewenstein G, Rick S, Cohen JD. Neuroeconomics. Annual Review of Psychology. 2008;59:647–672. [PubMed] [Google Scholar]
  • Luciana M, Collins PF. Incentive motivation, cognitive control, and the adolescent brain: Is it time for a paradigm shift? Child Development Perspectives. 2012;6(4):392–399. [PMC free article] [PubMed] [Google Scholar]
  • MacPherson SE, Phillips LH, Della Sala S. Age, executive function, and social decision making: A dorsolateral prefrontal theory of cognitive aging. Psychology and Aging. 2002;17:598–609. [PubMed] [Google Scholar]
  • Maddox WT, Markman AB. The motivation–cognition interface in learning and decision-making. Current Directions in Psychological Science. 2010;19:106–110. doi: 10.1177/0963721410364008. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Mather M. The emotion paradox in the aging brain. Annals of the New York Academy of Sciences. 2012;1251:33–49. [PMC free article] [PubMed] [Google Scholar]
  • Mather M, Canli T, English T, Whitfield S, Wais P, Ochsner K, Carstensen LL. Amygdala responses to emotionally valenced stimuli in older and younger adults. Psychological Science. 2004;15:259–263. doi: 10.1111/j.0956-7976.2004.00662.x. [PubMed] [CrossRef] [Google Scholar]
  • Mather M, Carstensen LL. Aging and motivated cognition: The positivity effect in attention and memory. Trends in Cognitive Sciences. 2005;9:496–502. [PubMed] [Google Scholar]
  • Mather M, Johnson MK. Choice-supportive source monitoring: Do our decisions seem better to us as we age? Psychology and Aging. 2000;15:596–606. [PubMed] [Google Scholar]
  • Mather M, Knight M. Goal-directed memory: The role of cognitive control in older adults’ emotional memory. Psychology and Aging. 2005;20:554–570. [PubMed] [Google Scholar]
  • Maunsell JH. Neuronal representations of cognitive state: Reward or attention? Trends in Cognitive Sciences. 2004;8:261–265. [PubMed] [Google Scholar]
  • McClelland DC. How motives, skills, and values determine what people do. American Psychologist. 1985a;41:812–825. [Google Scholar]
  • McClelland DC. Human motivation. Glenville, IL: Scott Foresman; 1985b. [Google Scholar]
  • McClelland DC, Atkinson JW, Clark RA, Lowell EL. The achievement motive. Oxford, UK: Irvington; 1976. [Google Scholar]
  • McClure SM, Daw ND, Montague PR. A computational substrate for incentive salience. Trends in Neurosciences. 2003;26:423–428. [PubMed] [Google Scholar]
  • McClure SM, Gilzenrat MS, Cohen JD. An exploration–exploitation model based on norepinepherine and dopamine activity. In: Weiss Y, Sholkopf B, Platt J, editors. Advances in neural information processing systems. Vol. 18. Cambridge, MA: MIT Press; 2006. pp. 867–874. [Google Scholar]
  • McClure SM, Laibson DI, Loewenstein G, Cohen JD. Separate neural systems value immediate and delayed monetary rewards. Science. 2004;306:503–507. [PubMed] [Google Scholar]
  • McGuire JT, Botvinick MM. Prefrontal cortex, cognitive control, and the registration of decision costs. Proceedings of the National Academy of Sciences. 2010;107:7922–7926. [PMC free article] [PubMed] [Google Scholar]
  • Meyniel F, Sergent C, Rigoux L, Daunizeau J, Pessiglione M. Neurocomputational account of how the human brain decides when to have a break. Proceedings of the National Academy of Sciences. 2013;110:2641–2646. [PMC free article] [PubMed] [Google Scholar]
  • Mink JW. The basal ganglia: Focused selection and inhibition of competing motor programs. Progress in Neurobiology. 1996;50:381–425. [PubMed] [Google Scholar]
  • Mischel W, Moore B. Effects of attention to symbolically presented rewards on self-control. Journal of Personality and Social Psychology. 1973;28:172–179. [PubMed] [Google Scholar]
  • Mizuhiki T, Richmond BJ, Shidara M. Encoding of reward expectation by monkey anterior insular neurons. Journal of Neurophysiology. 2012;107:2996–3007. [PMC free article] [PubMed] [Google Scholar]
  • Mobbs D, Hassabis D, Seymour B, Marchant JL, Weiskopf N, Dolan RJ, Frith CD. Choking on the money: Reward-based performance decrements are associated with midbrain activity. Psychological Science. 2009;20:955–962. doi: 10.1111/j.1467-9280.2009.02399.x. [PMC free article] [PubMed] [CrossRef] [Google Scholar]
  • Mogenson GJ, Jones DL, Yim CY. From motivation to action: Functional interface between the limbic system and the motor system. Progress in Neurobiology. 1980;14:69–97. [PubMed] [Google Scholar]
  • Mohanty A, Gitelman DR, Small DM, Mesulam MM. The spatial attention network interacts with limbic and monoaminergic systems to modulate motivation-induced attention shifts. Cerebral Cortex. 2008;18:2604–2613. [PMC free article] [PubMed] [Google Scholar]
  • Monin B, Miller DT. Moral credentials and the expression of prejudice. Journal of Personality and Social Psychology. 2001;81:33–43. [PubMed] [Google Scholar]
  • Morecraft R, Tanji J. Cingulofrontal interactions and cingulate skeletomotor areas. In: Vogt BA, editor. Cingulate neurobiology and disease. Oxford, UK: Oxford University Press; 2009. pp. 113–144. [Google Scholar]
  • Muraven M, Slessareva E. Mechanisms of self-control failure: Motivation and limited resources. Personality and Social Psychology Bulletin. 2003;29:894–906. [PubMed] [Google Scholar]
  • Murayama K, Matsumoto M, Izuma K, Matsumoto K. Neural basis of the undermining effect of monetary reward on intrinsic motivation. Proceedings of the National Academy of Sciences. 2010;107:20911–20916. [PMC free article] [PubMed] [Google Scholar]