Insula activity – inverted U, with max activity when probability is 50%, suggesting encoding of risk.
Risk and Reward: Valuation in Decision-Making Neuroeconomics Seminar 10/13/09 Trevor Kvaran
Outline <ul><li>Chapter 23 </li></ul><ul><ul><li>Why care about how valuation is computed? </li></ul></ul><ul><ul><li>Models of valuation </li></ul></ul><ul><ul><li>Neurobiological evidence </li></ul></ul><ul><li>Chapter 25 </li></ul><ul><ul><li>Neuroanatomy of the striatum </li></ul></ul><ul><ul><li>The role of the striatum in valuation </li></ul></ul>
Chapter 23: Take Home Message <ul><li>Valuation and choice are separable processes. </li></ul><ul><li>Valuation for decisions under risk is accomplished by computing expected reward and risk, not by computing probabilities and utilities. </li></ul><ul><li>This model can be extended to decisions under ambiguity (uncertainty). </li></ul>
Clarifying Terms <ul><li>Bossaerts, Preuschoff, & Hsu (BPH) seem to use uncertainty and ambiguity in uncommon ways. </li></ul><ul><li>For BPH, “decisions under uncertainty” are any decisions that involve probabilistic choices. </li></ul><ul><li>“Decisions under ambiguity” are decisions where probabilities are unknown. </li></ul>
Why care about how value is computed? <ul><li>Will hopefully allow for better prediction of choices. </li></ul>
Are Valuation and Choice Separable Processes? <ul><li>Evidence from Berns et al. (2007) </li></ul><ul><ul><li>neural activity during passive trials was predictive of choices during active trials. </li></ul></ul><ul><ul><li>Implication: if valuation and choice are separable processes, revealed preference may rest on questionable assumptions (is this right?). </li></ul></ul><ul><ul><li>Potential problem: participants knew they would eventually be making active decisions, creating an incentive for passive valuation. </li></ul></ul>
Default Actions <ul><li>Default Actions: </li></ul><ul><ul><li>Stimulus-insensitive, computationally simple, goal-oriented behaviors. </li></ul></ul><ul><ul><li>Default actions reflect choices that on average maximize utility. </li></ul></ul><ul><ul><li>As in other dual-process explanations, prefrontal cortex activation may be involved in overriding default actions. </li></ul></ul>De Martino et al. (2006)
Phil’s Questions <ul><li>OFC activation decreases with subjects’ tendency to become risk-seeking with losses. Might this suggest that this type of behaviour is "automatic", or non-conscious? Could we reduce this behaviour through cognitive means? </li></ul><ul><li>It’s noted that in the Random Utility Model, choice is always optimal; by contrast, in the default action valuation model, choice is often sub-optimal. Are economic agents constrained by some kind of "prepotent" response that impairs their adjustment to environmental changes? </li></ul>
David’s Question <ul><li>Bossaerts, Preuschoff, and Hsu (2009) highlight that the computation of value is distinct from the computation of choice and, therefore, that choice can (at least in principle) be sub-optimal. That is, a decision-maker could make a physical act of choice that goes against his or her true preference. When and why might this disjunction between true and revealed preference happen? </li></ul>
More Default Action Evidence <ul><li>Caudate neurons that “prefer” a rewarded direction increase their firing rate prior to stimulus onset and decrease firing rate if rewarded direction is inappropriate. </li></ul><ul><li>Increased errors for non-rewarded direction. </li></ul><ul><li>Could support default action hypothesis. </li></ul>Lauwereyns et al. (2002)
David’s Question <ul><li>Bossaerts et al. (2009) speculate that a bias toward “default actions” might cause a person to act against his or her true preference. By “default action,” the authors mean a behavior that an organism automatically performs unless overridden by other processes. Default actions are “stimulus-insensitive and goal-oriented” and, therefore, “robust to lapses of attention” (356), for they prevent the organism from having to constantly expend effort interpreting stimuli. </li></ul><ul><li>See, for example, their interpretation of the De Martino et al. (2006) data on page 357; briefly, caudate neurons are thought to encode a preference to saccade in the direction that is more regularly rewarded – this is the bias toward a default action, essentially – and mistakes are often made when the stimulus demands a saccade in the opposite, infrequently rewarded direction. They infer that “the mistake was caused by the monkey’s inability to overcome its default action” (358). How is the alternative explanation of a perceptual error ruled out? In particular, might the monkey just ignore the stimulus, thinking it already knows what the stimulus will be (i.e., the regularly rewarded direction)? The stated normative rationale for a default action, after all, is to prevent the organism from having to constantly expend effort interpreting stimuli. If the monkey assumes (albeit wrongly) that the stimulus demands a saccade in the more regularly rewarded direction, then there is not really a conflict between true and revealed preference – it does what it wants, just under wrong assumptions. What seems needed is a way to determine that the monkey successfully processed the stimuli but nonetheless chose the default action. </li></ul><ul><li>Maybe the authors simply mean to describe perceptual errors, but, if so, then the separation of true and revealed preference is not so interesting, at least not to me. 
More intriguing is a general problem of, as the philosophers would call it, akrasia: acting against one’s own – and known – better judgment. </li></ul>
Policy Implications <ul><li>BPH raise interesting questions about the policy implications of the default action valuation model. </li></ul><ul><li>If correct, it implies that “revealed preferences” may not reliably track true preferences. </li></ul><ul><li>Any thoughts? </li></ul>
Risk Assessment and Learning <ul><li>Risk has often been ignored in reinforcement learning models. </li></ul><ul><li>To learn optimally, the risk of a prediction error should be assessed. </li></ul><ul><ul><li>If risk is high, make only a small change to predictions. </li></ul></ul><ul><li>Tobler et al. (2005) suggest that scaled prediction errors are computed. </li></ul>
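The idea that high risk should damp learning can be sketched with a simple delta-rule learner whose prediction error is divisively scaled by a running risk (variance) estimate. This is an illustrative sketch only; the function and parameter names are mine, not taken from the chapter or from Tobler et al. (2005).

```python
# Sketch: delta-rule value learning with risk-scaled prediction errors.
# All names and parameter values are illustrative assumptions.

def update(value, risk, reward, eta=0.1, rho=0.1):
    """One trial of learning; high risk -> small change to predictions."""
    delta = reward - value               # reward prediction error
    scaled_delta = delta / (1.0 + risk)  # divisive scaling by estimated risk
    value = value + eta * scaled_delta   # small update when risk is high
    risk = risk + rho * (delta**2 - risk)  # running estimate of error variance
    return value, risk
```

With `risk = 0` this reduces to ordinary delta-rule (Rescorla-Wagner-style) updating; as the risk estimate grows, the same prediction error produces a smaller change in value, matching the slide's "if risk is high, small change to predictions."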
Filippo’s Question <ul><li>The mean-variance models discussed by Bossaerts et al. are a useful extension of standard “first moment” utility models. Moreover, their “Taylor-style” approach - i.e., accumulating terms in the utility function to better represent choice settings - is methodologically interesting. To a certain extent, one can imagine comparing the fit of models with, e.g., n-1 terms from the Taylor expansion and seeing which term has the most dramatic marginal effect. In this way, it would be possible to keep track of the information used by the decision-maker. It is also remarkable that simple Bayesian algorithms, which were not designed to identify minimum-variance strategies, do in fact find them (this is the result of a set of simulations I ran). </li></ul><ul><li>Therefore, variance minimization may be considered an explanatory factor in subjects’ (and algorithms’) behavior. The extent to which variance minimization accounts for actual computation in the brain is nonetheless unclear. In principle, the same outcome can be achieved without any explicit effort at variance minimization; it may emerge as a side effect of other computations (as in the case of my algorithm). </li></ul><ul><li>The brain data presented by Bossaerts et al. substantiate the idea that risk minimization plays a role in human choices; it is nonetheless unclear whether there is in fact a ‘risk-encoding’ evaluation signal, or whether risk evaluation is the emergent effect of other computations. </li></ul><ul><li>As Bossaerts et al. note, the integration of ‘evaluation signals’ is one of the most challenging questions in the neuroscience of choice. In light of this question, some issues may be raised with respect to their approach. First, it is unlikely that these hypothesized signals are integrated in a linear way. 
For this reason, it seems equally hard to identify individual components of the evaluation process by fitting a linear composition of independent signals. Moreover, the identification problem mentioned above constitutes an even more serious challenge here. Indeed, the way in which two processes interact is different from the way in which the emergent by-products of two processes interact. </li></ul><ul><li>For these reasons, I am not sure whether the ease of interpreting these terms when considered independently (or linearly merged) can compensate for the complexity that will likely emerge when we try to put these ‘signals’ together. Alternatively, Bayesian models developed to account for the integration of different signals (e.g., empirical Bayes) might be more generative and more manageable. </li></ul>
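The mean-variance form and the "Taylor-style" accumulation of terms referred to above can be made explicit by expanding expected utility around the mean payoff (a standard derivation, with notation of my own, not taken from the chapter):

```latex
% second-order Taylor expansion of u(X) around \mu = \mathbb{E}[X];
% the first-order term u'(\mu)\,\mathbb{E}[X-\mu] vanishes
\mathbb{E}[u(X)] \;\approx\; u(\mu) \;+\; \tfrac{1}{2}\, u''(\mu)\, \mathrm{Var}(X),
\qquad \mu = \mathbb{E}[X]
```

Since u''(μ) < 0 for a risk-averse agent, valuation reduces to a risk-return trade-off of the form V(X) = E[X] - b·Var(X) with b > 0; accumulating further Taylor terms brings in higher moments such as skewness, which is the sense in which model fits with n-1 terms could be compared.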
Phil’s Question <ul><li>I found it interesting that risk encoding may play a role in learning. Perceived risk could affect the learning rate, with more risk-averse agents learning more slowly. It could be interesting to study this in more detail, perhaps examining whether some temporary manipulation of risk aversion could influence learning. </li></ul>
Evaluating Reward and Risk “Linear” relationships in striatum (reward), inverted U-shape in insula (risk).
Integrating Reward and Risk Is this evidence of integrating risk and reward, or simply that risk and reward are both encoded in PFC?
Decisions Under Ambiguity <ul><li>Models for decisions under ambiguity </li></ul><ul><ul><li>Maxmin </li></ul></ul><ul><ul><ul><li>Utilities are computed assuming worst-case scenarios about probabilities. </li></ul></ul></ul><ul><ul><li>α-maxmin </li></ul></ul><ul><ul><ul><li>Best- and worst-case scenarios can both be considered. </li></ul></ul></ul><ul><ul><ul><ul><li>α > .5 = ambiguity-averse </li></ul></ul></ul></ul><ul><ul><ul><ul><li>α < .5 = ambiguity-seeking </li></ul></ul></ul></ul><ul><li>Assuming α-maxmin, utilities for decisions under ambiguity can be conceived as a trade-off between mean reward and risk. </li></ul>
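The α-maxmin rule above can be stated compactly; here C denotes the set of probability distributions the decision-maker treats as possible (notation assumed, not from the slides):

```latex
% alpha-maxmin expected utility over a set C of candidate distributions
U(a) \;=\; \alpha \, \min_{p \in C} \mathbb{E}_p[u(a)]
      \;+\; (1-\alpha) \, \max_{p \in C} \mathbb{E}_p[u(a)]
```

Setting α = 1 recovers pure maxmin; α > .5 overweights the worst case (ambiguity aversion), while α < .5 overweights the best case (ambiguity seeking).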
<ul><li>Kaisa’s Question </li></ul><ul><ul><li>Expected utility & prospect theories and risk-return models provide two different approaches to modeling decision making under risk. What are the advantages/disadvantages of these approaches, and how well do the brain imaging findings from the two approaches support each other? </li></ul></ul><ul><li>Mirre’s Question </li></ul><ul><ul><li>In decision theory, ambiguity aversion is distinguished from risk aversion. However, brain data suggest a similar neural mechanism for both risk and ambiguity. Knowing this, to what extent is ambiguity-averse behavior different from risk-averse behavior? </li></ul></ul>
Chapter 25: Take Home Message <ul><li>Subjective valuation is represented prior to choice (anticipated valuation), informs choice, and can be updated following choice (outcome valuation). </li></ul><ul><li>Ventral striatal regions represent information about anticipated value. </li></ul><ul><li>Dorsal striatal regions represent information about outcome values. </li></ul>
Striatal Neuroanatomy <ul><li>Set of subcortical structures near the center of the brain. </li></ul><ul><li>Includes three structures: </li></ul><ul><ul><li>Caudate, Putamen, Nucleus Accumbens </li></ul></ul>
Striatal Neuroanatomy <ul><li>The striatum can also be divided into ventral and dorsal parts. </li></ul><ul><li>The ventral striatum includes the NAcc and the lower caudate and putamen. </li></ul><ul><li>The dorsal striatum includes the higher parts of the caudate and putamen. </li></ul>
Striatal Connectivity <ul><li>The Striatum has a distinct “ascending spiral” connectivity with the prefrontal cortex. </li></ul><ul><li>Ventral striatal regions connect to ventromedial cortical regions (associated with emotion and motivation), while more dorsal striatal regions connect to dorsolateral cortical regions (associated with movement and memory). </li></ul>
Valuation: evidence from rats <ul><li>Converging evidence that dopamine release in the NAcc occurs in response to anticipated reward. </li></ul><ul><ul><li>Dopamine increases when: </li></ul></ul><ul><ul><ul><li>The perceived chance of escape from a predator is high. </li></ul></ul></ul><ul><ul><ul><li>The smell of food is introduced. </li></ul></ul></ul><ul><ul><ul><li>A receptive female is introduced. </li></ul></ul></ul><ul><ul><ul><li>New rats are introduced. </li></ul></ul></ul>
Anticipated value: evidence from human neuroimaging <ul><li>Knutson et al. (2001) found scaled activation to anticipated gains in the NAcc. </li></ul>
Outcome value: evidence from neuroimaging <ul><li>Delgado et al. (2000) found caudate sensitive to outcome information. </li></ul><ul><li>Further studies (O’Doherty et al., 2004; Delgado et al., 2005) suggest caudate is particularly sensitive when outcome information can inform future decisions. </li></ul>
Kaisa’s Question <ul><li>There is some evidence in the literature that the ventral striatum evaluates outcomes with respect to a reference point (Tom et al., 2007; De Martino et al., 2009), whereas dorsal parts of the striatum are related to reference-independent evaluation of outcomes (Pine et al., 2009; Tobler et al., 2007; De Martino et al., 2009). I wonder how these findings square with the view that ventral parts of the striatum are related to the evaluation of expected gains whereas the dorsal parts are more involved in evaluating the value of an outcome and in selecting actions based on those outcomes. </li></ul>
Alex’s Question <ul><li>King-Casas et al. (2005) found that striatum activation predicted participants’ tendency to invest in a partner who had cooperated with them before, and Delgado et al. (2005) found that the reputation of a partner, which influences future social gains, correlated with striatum activation as well. Can we extend these findings to expectation studies (e.g., knowing that players usually make generous offers induces responders to reject unfair offers more often; see Sanfey, 2009, in Mind and Society)? </li></ul><ul><li>My point is: do expectations influence behavior because of increased reward expectations (as the reputation study showed, leading to a modulation in the striatum) or because of more negative emotional reactivity (leading to a modulation in insula activation)? </li></ul><ul><li>Note: the Kliemann et al. (2009) prior-record paper may be relevant. RTPJ (ToM) activation increased for unfair players. </li></ul>
Mirre’s Question <ul><li>Several imaging studies have suggested that activity in the striatum reflects a common neural currency of reward, that is, for rewards in the economic (e.g., money) as well as the social (e.g., status, being liked) domain. My question is whether this is true, given that the striatum consists of quite a few different components and the low resolution of fMRI does not allow distinguishing between the smaller striatal components. </li></ul>
Cinzia’s Question <ul><li>The striatum encodes subjective value under risk. Accordingly, ventral striatal activation has been investigated in several studies using gambling tasks. They showed that ventral striatal activation predicts switching to high-risk choices (Kuhnen and Knutson, 2005) and that this can also be exogenously controlled. Indeed, after exposure to positive pictures, subjects showed riskier behavior, and this was associated with an increase in ventral striatal activation (Knutson et al., 2008). Ventral striatal activation can thus predict subsequent choices and economic behavior. </li></ul><ul><li>Can we conclude that “ventral striatal activation affects risk-seeking behavior”? How does activation of the ventral striatum affect behavior? Does its activation affect only behavior in the subsequent trial, or choices throughout the full task (e.g., a gambling task)? </li></ul>