The experimental study of economic exchange behavior has revealed many discrepancies between the normative theory of strategic rationality (game theory) and actual behavior. In many games where game theory predicts defection and competition, subjects robustly display cooperative behavior. In the ultimatum game, for instance, a ‘proposer’ makes an offer to a ‘responder’, who can either accept or refuse it; if the responder refuses, both players get nothing. The rational outcome is a minimal offer by the first player and an unconditional acceptance by the second. In fact, proposers make ‘fair’ offers of about 50% of the amount, and responders tend to accept these offers while rejecting most ‘unfair’ offers (less than 20%; Oosterbeek et al., 2004). Cooperative and prosocial behavior is also observed in similar games, e.g. the trust game and the prisoner’s dilemma (Camerer, 2003). Neuroeconomics, the study of the neural mechanisms of decision-making (Glimcher, 2003), also suggests that subjects entertain prosocial preferences. Brain scans of people playing the ultimatum game indicate that unfair offers trigger a ‘moral disgust’ in the responder’s brain: the anterior insula, an area involved in disgust and other negative emotional responses, is more active when unfair offers are proposed (Sanfey et al., 2003). Similar activations have been found in the prisoner’s dilemma and the trust game: cooperation and punishment of unfair players elicit positive affective responses, while unfairness elicits negative ones (de Quervain et al., 2004; Rilling et al., 2002).
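The game-theoretic prediction for the ultimatum game can be made concrete with a minimal backward-induction sketch. The pie size, the offer grid, and the function names below are illustrative assumptions, not taken from any of the cited studies:

```python
# Toy model of the one-shot ultimatum game: a payoff-maximizing responder
# and a proposer who anticipates the responder's choice. All quantities
# are hypothetical units chosen for illustration.

PIE = 10  # amount to be divided between proposer and responder

def responder_accepts(offer):
    """A purely payoff-maximizing responder accepts any positive offer:
    rejecting yields 0, accepting yields `offer`."""
    return offer > 0

def best_offer(pie=PIE):
    """Backward induction: the proposer picks the smallest offer the
    responder will accept, keeping the rest for themselves."""
    for offer in range(pie + 1):
        if responder_accepts(offer):
            return offer
    return 0  # degenerate case: no acceptable offer

print(best_offer())  # -> 1, the minimal positive offer
```

The contrast with the behavioral data is then stark: the model predicts an offer of 1 out of 10, whereas real proposers offer around 5 and real responders reject offers near the predicted minimum.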
The received view of these behavioral and neural data is that human beings are endowed with genuinely altruistic cognitive mechanisms, a view now labelled “Strong Reciprocity” (SR). According to SR, an innate propensity for altruistic punishment and altruistic rewarding makes us averse to inequity (Fehr & Rockenbach, 2004). In this talk, I argue that this moral optimism is far-fetched. The ‘cold logic’ model of rationality is indeed not an accurate description of our decision-making mechanisms, but the SR model, I shall argue, relies on unwarranted assumptions. I present another model, the ‘hot logic’ approach, according to which human agents are selfish agents adapted to trade, exchange and partner selection in biological markets (Noë et al., 2001). Cognitive mechanisms of decision-making aim primarily at maximizing positive outcomes and minimizing negative ones. This initial hedonism is gradually modulated by social norms, through which agents learn how to maximize their utility given the norms. The ‘hot logic’ approach provides a simpler explanation of cooperation and fairness: subjects make ‘fair’ offers in the ultimatum game because they know their offer would otherwise be rejected. Responders’ affective reaction to ‘unfair’ offers is in fact a reaction to the loss of an expected monetary gain: they anticipated that the proposer would comply with social norms. This claim is supported by other imaging studies showing that loss of money can be aversive, and that actual and counterfactual utility recruit the same neural resources (Delgado et al., 2006; Montague et al., 2006).
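The ‘hot logic’ responder can be sketched as an agent whose utility is reference-dependent: an unfair offer is experienced as a loss relative to the norm-based expectation. The expectation level, the linear loss weight, and the numbers below are illustrative assumptions, not parameters estimated from data:

```python
# Hedged sketch of a 'hot logic' responder in a 10-unit ultimatum game.
# The responder expects a norm-compliant (fair) offer; an offer below
# that expectation registers as a counterfactual loss. Weights are
# hypothetical, chosen only to reproduce the qualitative pattern.

def responder_utility(offer, expected=5.0, loss_weight=1.5):
    """Utility = money received minus a weighted penalty for the
    shortfall relative to the expected (fair) offer."""
    shortfall = max(0.0, expected - offer)
    return offer - loss_weight * shortfall

def accepts(offer):
    """Accept only if accepting beats rejecting (rejection yields 0)."""
    return responder_utility(offer) > 0

print(accepts(5))  # fair offer: utility 5.0, accepted
print(accepts(1))  # unfair offer: 1 - 1.5 * 4 = -5.0, rejected
```

No inequity aversion is built in: the rejection of low offers falls out of loss-sensitivity relative to an anticipated gain, which is the point of the ‘hot logic’ reinterpretation.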
This approach explains why subjects make lower offers in the dictator game (a variant of the ultimatum game in which the proposer makes an offer and the responder's role is entirely passive) than in the ultimatum game; why, when the computer screen displays eyespots, almost twice as many participants transfer money in the dictator game (Haley & Fessler, 2005); and why attractive people are offered more in the ultimatum game (Solnick & Schweitzer, 1999). In every case, agents seek to maximize a complex hedonic utility function, where the rewards and losses can be monetary, emotional or social (reputation, acceptance, etc.). SR behaviors are thus better seen as cooperative habits that are not repaid (Burnham & Johnson, 2005).
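The idea of a hedonic utility mixing monetary and social components can be illustrated with a toy dictator, whose concern for reputation grows when cues of observation (such as eyespots) are present. Every component, weight, and the concave reputation term below are hypothetical assumptions chosen only to reproduce the qualitative pattern, not a fitted model:

```python
# Illustrative dictator with a composite hedonic utility: money kept plus
# a reputational reward for giving, weighted more heavily when the agent
# feels observed. All weights and functional forms are assumptions.
import math

def dictator_utility(give, pie=10, observed=False):
    money = pie - give  # monetary payoff: whatever is kept
    # Concave reputational reward for giving; observation cues (eyespots)
    # raise its weight. Weights 4.0 / 0.5 are purely illustrative.
    weight = 4.0 if observed else 0.5
    reputation = weight * math.sqrt(give)
    return money + reputation

def best_gift(observed):
    """The transfer that maximizes the composite utility."""
    return max(range(11), key=lambda g: dictator_utility(g, observed=observed))

print(best_gift(observed=False))  # -> 0: keep everything when unobserved
print(best_gift(observed=True))   # -> 4: give more under eyespot-like cues
```

Giving rises with observability not because the agent becomes altruistic, but because the social component of the same selfish utility function gains weight, which is the pattern Haley and Fessler report.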