DESIGNING DECISION SUPPORT SYSTEMS TO HELP AVOID BIASES & MAKE ROBUST DECISIONS, WITH EXAMPLES FROM ARMY PLANNING

B. Chandrasekaran
Department of Computer & Information Science
The Ohio State University
Columbus, OH 43210

ABSTRACT

This paper is concerned with two sets of issues related to optimality in planning. The first is a proposal that design of decision support systems (DSS's) for planning should aim to support the planner in generating a plan that is robust, i.e., has satisfactory performance even when reality differs from assumptions. Such a plan would sacrifice optimality when reality is as assumed for reasonable performance over a larger range of situations. We discuss how this proposed refocus follows from the in-principle incompleteness, and common errorfulness, of domain models required to assess the performance of plans. The second issue related to optimality is the degree to which human judgment in planning is subject to a number of biases, all detracting from optimality. The Framing Bias arises from the Bounded Rationality of human cognition. The Transitivity Bias is a result of treating small and large differences in the criteria values as of equal importance. Our analysis leads to design recommendations for DSSs that provide a measure of protection against these biases. The ideas are motivated and illustrated with Army planning examples.

1. INTRODUCTION

Decision support systems (DSSs) for planning generally support activities related to selecting an optimal plan, i.e., a plan that maximizes or minimizes, as appropriate, the performance criteria of interest. We have reported (Josephson, et al, 1998; Hillis, et al, 2003) on the development and use of an architecture for generating plans, simulating their performance on multiple, potentially contending criteria, selecting Pareto-optimal subsets, and, finally, applying trade-offs visually to narrow the final decision. Such simulations of course require domain models. In Army planning, such models might be of Blue and Red forces and the environment. With the increasing availability of computer-based simulations, military Courses of Action (COA) planning can go beyond the doctrinally required three distinct COAs to accomplish the mission. In principle, planners can generate, simulate, compare and select from a large number of COAs, thus vastly increasing the chances of finding good COAs.

Our focus in this paper is two sets of issues related to optimality in relation to DSS's for planning. The first is a proposal that DSS's for planning should aim to support the planner in generating a plan that is robust, i.e., has satisfactory performance even when reality differs from assumptions. Such a plan would sacrifice optimality when reality is as assumed in favor of reasonable performance over a larger range of situations. The second issue is how DSS's might be designed to overcome some of the biases in human decision-making that result from the structure of human cognition and lead to humans making sub-optimal choices.

2. FROM OPTIMALITY TO ROBUSTNESS

Any evaluation of choice alternatives can require use of models. Performance equations, simulations, etc., all make use of models of the relevant part of the world. This reliance on models is a property of all prediction, whether by computer programs or by thinking.

There are two fundamental sources of uncertainty in prediction. The first is the fact that the world is stochastic in nature, so at best we can only get a probability distribution of the values for the various criteria of interest. The second problem is more serious: models are in principle incomplete, and even in the dimensions that are included, they may be inaccurate to various degrees. These problems imply that when a COA is deemed to be superior to another COA based on criteria values predicted by model-based simulations, this decision could be wrong, and sometimes grievously so; the rejected COA might in fact be better.

The inevitable gap between model and reality is not an issue for computer simulations alone; as Secretary Rumsfeld's now-famous remark about "unknown unknowns" suggests, simulations that human minds perform are also subject to the same problem. However, humans often, though not always, have a sense of which of their models may be questionable, and have evolved a set of heuristics by which they sometimes critique their models and simulations to test their robustness. The awareness of this gap between models and reality, and its implications for the quality of decisions, is not widespread in research on simulation-based decision
[Report Documentation Page, Standard Form 298 (Rev. 8-98), OMB No. 0704-0188. Report date: 01 DEC 2008. Title: Designing Decision Support Systems To Help Avoid Biases & Make Robust Decisions, With Examples From Army Planning. Performing organization: Department of Computer & Information Science, The Ohio State University, Columbus, OH 43210. Distribution: approved for public release, distribution unlimited. Supplementary notes: see also ADM002187; Proceedings of the Army Science Conference (26th), held in Orlando, Florida on 1-4 December 2008; the original document contains color images. Security classification: unclassified. Number of pages: 8.]
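The point made in Section 2 — that a COA judged superior on the basis of model-predicted criteria values can in fact be the worse choice when the model is in error — can be illustrated with a toy Monte Carlo sketch. The scoring model, parameter names, and all numbers below are invented for illustration; they are not from the paper.

```python
import random

def simulate_coa(effectiveness: float, attrition_rate: float,
                 n_runs: int = 10_000, seed: int = 0) -> float:
    """Toy Monte Carlo: mean mission score for a COA under a stochastic world model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        # Stochastic outcome: an effectiveness shock minus an attrition shock.
        total += rng.gauss(effectiveness, 1.0) - rng.gauss(attrition_rate, 0.5)
    return total / n_runs

# Under the assumed model, COA A looks superior to COA B...
a_assumed = simulate_coa(effectiveness=5.0, attrition_rate=1.0)
b_assumed = simulate_coa(effectiveness=4.5, attrition_rate=0.8)
print(a_assumed > b_assumed)  # True: A is chosen under the assumed model

# ...but if the model understated A's attrition (a model error), the ranking flips.
a_actual = simulate_coa(effectiveness=5.0, attrition_rate=2.5)
b_actual = simulate_coa(effectiveness=4.5, attrition_rate=0.8)
print(a_actual < b_actual)    # True: the rejected COA B was in fact better
```

The expected-value comparison is silent about how fragile the ranking is; nothing in the two printed numbers warns the planner that the decision hinges on one questionable assumption.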
support. While the builder of a model might have a sense of the strengths and weaknesses of his models that may temper his reliance on the results of planning, today's computer simulations use models composed out of pieces built by different people, without the planner having access to the unarticulated intuition of the model builders.

2.1. Uncertainty and Gaps in the Simulation Models

Aren't gaps and errors in the models all simply kinds of uncertainty, to be handled by some kind of overarching probabilistic framework? There are many methodological and practical issues in treating all model gaps and errors in a probabilistic framework. The difference between uncertainty and gaps is that in the former, we know what we don't know and have some idea of the structure of the missing knowledge. We model the uncertainty by a probability distribution over the structure. In the case of gaps, we either aren't aware of important aspects of reality, or have mistakenly decided that they are not relevant. How does one assign distributions over structures whose existence the modeler is unaware of and has no mathematical model for?

In the case of probabilistically modeled uncertainty, the standard approach is to generate estimates of expected values for the various outcomes of interest by running Monte Carlo simulations. However, as Bankes points out (Bankes, 2002; Lempert, et al, 2002), difficult decisions about the future cannot be made on the basis of expected values for at least two reasons. The first is that if the outcome measures are correlated, then individual expected values will give a false picture of the future, but this can be taken care of by a more sophisticated stance towards computing the joint expected values. More seriously, however, expected value-based assessments fail to indicate both dangers and opportunities that may lie nearby, and possibilities for driving the future states to avoid dangers and exploit opportunities. In addition to developing such a sense of the decision landscape, what the decision maker (DM) needs is an understanding of how sensitive the outcomes are to the assumptions about the uncertainty, and correspondingly, how to reduce the sensitivity of the desired outcomes to the uncertainties.

Both of these issues — protecting against uncertainties in the models used and protecting against the gaps and errors in models — require a shift in point of view from optimality based on expected values to robustness. In order to realize the full potential of the vastly increased search spaces made possible by computer simulation, it is essential that the decision support system empower the planner to explore the robustness of the selected plans.

2.2. Robustness of COA's

There are two different but related robustness concepts:
i. Alternatives A and B have approximately similar expected outcomes, but the one whose ratio of upside to downside is higher is the more robust one (a formulation due to Paul K. Davis, personal communication).
ii. Alternatives A and B have approximately similar expected outcomes, but the one whose outcomes are less sensitive to simulation assumptions and uncertainties is the more robust one.

The robustness we have in mind to cope with model errors is type (ii) above. While everyone would agree, when pressed, that indeed simulations and reality differ, I don't think people in the field are sensitized to the extent to which simulation-based decision comparisons can be problematic, and hence the decision finally chosen may be deeply flawed as a result of this mismatch between reality and simulation. While there is no absolute guarantee against this problem — as mentioned earlier, human thinking itself is based on world models, so the limits of simulation apply to human thinking in general — nevertheless, decision support systems ought to empower the DM to evaluate how sensitive a decision alternative is with respect to the specific assumptions in the simulation. We regard development of such systems as the next major challenge for decision support.

This issue is drawing attention in decision theory in general (Bankes, 2002; Lempert, et al, 2002). For example, in (Lempert, et al, 2002) the authors remark, "Reliance by decision makers on formal analytic methodologies can increase susceptibility to surprise as such methods commonly use available information to develop single-point forecasts or probability distributions of future events. In doing so, traditional analyses divert attention from information potentially important to understanding and planning for effects of surprise." They use the term "deep uncertainty" to refer to the situation where the system model is uncertain. They propose an approach that they call robust adaptive planning (RAP), which involves creation of large ensembles of plausible future scenarios. They call a strategy robust if it "performs reasonably well, compared to the alternatives, over a wide range of plausible scenarios." The idea is that, using a user-friendly interface, the DM looks for alternatives that are robust in this sense, rather than optimal in the traditional sense.

There is no universal solution, or even attitude, to the problem of protecting oneself against surprises. According to historian Eric Bergerud, "Bismarck in particular never thought that events could be predicted with precision. When a policy was pursued a range of outcomes could be expected. The trick was to develop policy where the minimum outcome (today we might call it a worst case scenario) was acceptable. If a triumph ensued, great. If it was something in between, don't die of surprise." This assumes that the goal is to minimize the worst case scenario. Not all missions can be approached this way. There are times when one might wish to maximize the best outcome, taking chances of bad outcomes. The decision support system has to support exploring the space of alternatives under a variety of attitudes, conservative or risk-taking, as the commander deems appropriate.

2.3. Exploring Robustness of Plans

A model consists of a set of assumptions about the relevant part of reality. As mentioned, robustness of a plan (or decision, more generally) is a measure of the sensitivity of the plan to the various assumptions and to any erroneous or missing relevant knowledge. In principle, a plan cannot be uniformly robust with respect to all its assumptions, let alone with respect to knowledge that it doesn't contain. If nothing about the world is as assumed, well, the planner is out of luck. So in practice, robustness exploration should be oriented towards making use of islands of relative certainty to critique and protect against less certain parts of models. Further, techniques for testing for robustness with respect to assumptions explicitly incorporated in the model are different from those to test for robustness against missing knowledge.

First, the planner might explore the sensitivity of relevant outcomes to specific assumptions whose reliability he has concerns about. For example, it would be good to know whether the likelihood of taking the objective changes much or little for a given COA whether or not it rains, even though the actual uncertainty about the rain might be quite large. Sensitivity analyses of this type are common. Such a sensitivity analysis may be numerical and parametric, as in: "How much will the performance change if parameter p in the model is off by 10%?" It can be qualitative and structural, as in: "In my model, I am assuming that the mullahs would rather give up their support of the Sadr faction than risk extending US presence closer to their border. How does the likelihood of a deal change if I give up this assumption?" Such analyses call for varying the specific model assumptions, simulating over contingencies of each model, and getting a statistical estimate of the range of variability of the performance. In (Chandrasekaran & Goldman, 2006) we report on conducting this kind of exploration in planning, using a visual interface that makes relations between variables quite salient.

Given that an outcome is sensitive to an assumption, the planner might look to see if there are additional elements that impact the sensitivity. In the rain example, the DM might be able to determine that while the probability of achieving the objective varies considerably depending on whether and how much it rains, if subgoal S is achieved, then rain doesn't affect the final outcome very much, and that increasing a certain COA parameter to a specific value increases the chances of achieving S independent of rain. What the planner has done in this case is an example of identifying the "dangers and opportunities" mentioned earlier and how they can be used to make the plan more robust. As another example, let us consider a large uncertainty in enemy position. The DM might explore how to make a COA robust with respect to this uncertainty. He might see if he can design a COA such that the initial stages of the COA are largely oriented towards getting more information about the location, and, based on the information, later stages of the COA can be assigned appropriate parameters. Again, the goal is less to simply obtain estimated values for the outcomes than to explore the COA space to take advantage of opportunities.

Second, the planner might try to identify assumptions whose validity is key to achieving the desired performance. This is a bit harder than the goal we just discussed, since we are not examining the sensitivity of a specific assumption, but trying to discover the assumptions, among all assumptions in the model, that may be key to performance. Example: "In the COA for disrupting the enemy communication network, what are the variables highly correlated with the outcome variable, and which model assumptions do they especially depend on?" In this case, once we identify the assumptions, we can devote additional empirical resources to verifying or testing the assumptions for completeness and accuracy. A visual interface such as in (Josephson, et al, 1998) may be useful for this, as shown in (Chandrasekaran & Goldman, 2006).

A variation on identifying key assumptions is to perform reverse simulations from desired results so as to identify ranges for the values of key variables that have a bearing on the outcomes of interest. The planner might then be able to identify model assumptions that relate to these variables. For example, an analysis might indicate that capture of an asset cannot be guaranteed if a certain class of events happens, e.g., ambush at a certain location. This in turn could enable the planner to identify the model assumption, "No enemy presence in area A," as crucial to whether or not an ambush is possible. Notice that this is not a case where the planner was especially concerned about the reliability of this assumption in the model. It is that this assumption has been identified as worth double-checking (in comparison with other assumptions) because it has been identified as key to the performance.

Highest in the level of difficulty is exploring the existence of relevant model gaps — missing knowledge, including unknown unknowns. Ultimately, there is no real guarantee that we would be able to identify all the relevant gaps in the models and fix them.
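The numerical parametric form of this exploration ("How much will the performance change if parameter p is off by 10%?") can be sketched in a few lines. The simulation below, its parameter names, and all effect sizes are hypothetical stand-ins, not the paper's models; the point is only the pattern of perturbing one assumption at a time and recording the swing in the outcome.

```python
import random

def mission_outcome(p_rain: float, enemy_strength: float,
                    n_runs: int = 5_000, seed: int = 0) -> float:
    """Toy simulation: estimated probability of taking the objective."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_runs):
        raining = rng.random() < p_rain
        # Hypothetical effects: rain slows the advance; enemy strength resists it.
        advance = rng.gauss(1.0, 0.2) - (0.15 if raining else 0.0)
        successes += advance > enemy_strength
    return successes / n_runs

baseline = mission_outcome(p_rain=0.3, enemy_strength=0.9)

# Perturb each assumption by 10% and record the swing in the outcome.
for name, kwargs in [("p_rain", dict(p_rain=0.33, enemy_strength=0.9)),
                     ("enemy_strength", dict(p_rain=0.3, enemy_strength=0.99))]:
    swing = mission_outcome(**kwargs) - baseline
    print(f"{name}: outcome shifts by {swing:+.3f}")
```

In this toy setup the outcome barely moves when the rain probability is off by 10%, but drops sharply when enemy strength is off by 10% — exactly the kind of contrast that tells the planner which assumptions deserve verification effort.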
But the situation is not hopeless. If we start with the assumption that the model we have is generally good, but potentially missing important components, we can adopt the strategy of identifying regions of certainty and completeness, which might then be used to suggest areas of the model of whose completeness we are less confident. Several strategies might be considered.

First, certain features or behaviors might suggest the need for additional domain knowledge of specific kinds. If taking a specific enemy asset turns out to be crucial in achieving plan goals, additional mental and modeling resources might be devoted to buttressing the model with additional knowledge about enemy capacities to defend the asset, or weaknesses in Blue capabilities to take the asset. Second, a reverse simulation — from the goal backwards — and additional reasoning might help. In the situation where the final approach to the objective is identified, the planner might realize that his model has only crude terrain information, quite insufficient to confidently rule out the prospect of an ambush. This kind of exploration is quite focused; our search for potentially relevant missing knowledge is driven by evidence that it is especially related to the outcome. Third, odd or counterintuitive behaviors in the simulation might trigger a search not only for incorrect, but also for missing, knowledge.

An important consideration is that we are not abstractly interested in critiquing or improving the simulation model, but in doing so with respect to evaluating one or a small set of COAs, and in response to specific observed behaviors of the simulation. This is likely to focus the need for additional investment in model-building on specific questions, reducing the amount of work involved.

2.4. Increasing the Robustness of Plans

Some of the ways in which a plan's robustness can be increased have been mentioned earlier in the context of robustness exploration, but it is still useful to bring them together.

If robustness analysis indicates sensitivity to certain assumptions or certain simulation states, the planner has several options. In the case of the assumptions, time and resources permitting, verification efforts can be selectively focused on the specific assumptions. An example of a key simulation state is the discovery that one way to reach the objective is along a specific avenue of approach. Because of its importance, verification and modeling efforts can be focused on knowledge related to this avenue of approach. The analysis has moved the epistemic state of the planner from one in which anything and everything could be faulty, which makes it impossible to practically verify any component of his models, to one of selectivity in critiquing.

The planner might also modify the plan in various ways. First, he can try to reduce the sensitivity, e.g., by adding components that restore performance in case reality is different from the assumption. Second, he can organize the plan so that some of the initial steps have the twin goals of achieving some of the plan goals as well as providing information about reality, and later steps in the plan take advantage of the information gained. For example, a network disruption plan might have components that attempt to destroy some enemy assets, but the results also give information about enemy defenses. Branches in the plan might be taken depending upon this information: "We are assuming enemy strength is weak here, but if it turns out wrong, we can reinforce from here." Finally, the plan might be buttressed with elements that will help achieve goals even in the absence of relevant knowledge about some aspects of reality. One way to protect a plan against missing weather models is to have components in the plan that ensure success over a variety of weathers.

Based on insights about events on the edge of phase transitions, plans may be modified to cause or avoid key events identified to be critically related to the outcomes of interest. Example: from an analysis of a simulation, we find that when variable v takes values above, say, v0, an outcome variable of interest radically changes in value in an unsatisfactory direction. This then can be the basis for explicitly examining the domain to see which of the phenomena in the domain might have the effect of moving v beyond the critical value, and for identifying ways in which v can be kept within the desirable range.

Notice that many of these strategies are in fact heuristics used in human planning on an everyday basis. We usually have a sense of trust or lack of trust in the models of the world that we use in our mental simulation, and adopt strategies of early least commitment, information gathering, consistency and reasonable-value checking, and so on. However, the challenge for DSS's is to provide computational support to perform such activities. We have outlined above some available approaches, but much remains to be done before a framework is available that integrates multiple types of explorations with plan modification.

3. PROTECTING DECISION MAKER FROM COGNITIVE BIASES

We now shift our focus to how DSS's may be designed to protect a DM against certain biases that seem inherent to human cognition and that often can result in the DM making suboptimal choices. We start by placing the decision-making problem in a larger naturalistic context.
3.1. Decision-Making in the Wild

It is not surprising that investigations of biases in human cognition use formally stated reasoning problems for which there are clear rational answers. However, everyday decision making — what one might call decision-making in the wild (DMITW) — involves situations that are pre-formal: the DM first has to convert the situation into a problem statement. A man wins $20,000 in a lottery. What should he do? Should he pay off some of his loans? Buy a new car? Repair his house? Perhaps something that hasn't occurred to him yet, but is even more essential? Once the alternatives are all laid out and, for each alternative, either a utility (u) is specified or criteria of interest are identified and evaluated, the formal problem can be solved as utility maximization or as a Pareto-optimal solution for a multi-criterial decision problem.

While mistakes made in solving the formal version are interesting, rarely is the "rationality" of how the DM formulates the problem discussed. Some readers might remember the old TV ad in which a man makes a choice of something to drink, starts drinking it, sees someone drinking V8, and hits his forehead in mock self-blame: "I could have had a V8." The implication was that he had known about V8. His brain failed him; it missed generating the choice alternative V8. If he had not known about V8, it would not be a cognitive failure. A similar cognitive failure would occur in the case of the lottery winner if he failed to generate an important choice alternative, or mis-assessed the value of an alternative by forgetting to consider a way it could be useful. A cognition that fails to bring to the fore relevant information that is available is not firing on all cylinders; it is being less than optimal. As a result, the decision made is less than optimal.

Conversely, a decision that might appear not quite rational when viewed narrowly within the formal statement of a problem might actually make sense if viewed in the larger context of the work done by cognition. Let us keep in mind the following anecdote about the philosopher Sidney Morgenbesser for later discussion. Once, after finishing dinner, Morgenbesser decided to order dessert. The waitress told him he had two choices: apple pie and blueberry pie. He ordered the apple pie. After a few minutes the waitress returned and said that they also had cherry pie, at which point Morgenbesser said, "In that case I'll have the blueberry pie." This was supposedly a joke, but might the above behavior actually be rational?

These issues turn the focus on the cognitive architecture. Once we understand the limitations of cognition and their origin, we might seek ways in which DSS's might help the DM avoid, or mitigate, these limitations. So-called "biases" are a window into the workings of the architecture. Let's consider two: the Framing and Transitivity biases.

3.2. Framing Bias

Consider the following experiment (De Martino, et al, 2006). Subjects on a college campus received a message indicating that they would initially receive £50. Next, they had to choose between a "sure" option and a "gamble" option. The "sure" option was presented in two different ways, or "frames", to different groups of subjects: either "keep £20" (Gain frame) or "lose £30" (Loss frame). The "gamble" option was identical in both frames and presented as a pie chart depicting the probability of winning. In the experiments, the subjects who were presented the "sure" option in the Gain frame chose it over "gambling" at a statistically significantly higher rate than those who were presented the "sure" option in the Loss frame, though of course both frames had identical payoffs, and the calculations involved were trivial. How a choice was framed — described — made more difference than we might expect.

3.3. Architecture of Cognition and Problem Solving

Cognition can be analyzed architecturally in many layers. We focus on the problem solving (PS) level of analysis, such as embodied in Soar (Laird, et al, 1987). In this account, the agent has goals in his working memory (WM). The presence of goals causes relevant knowledge to be retrieved from long-term memory (LTM) and brought to WM. This knowledge helps set up a problem space, which may result in subgoals. Some subgoals may fail, in which case other subgoals are pursued. This recursive process goes on until all the subgoals are achieved, and the main goal is as well. The net effect of all this is that problem solving involves search in problem spaces. The search is open-ended, i.e., the size of the search space cannot be pre-specified. It is also opportunistic, being the result of the descriptions of the specific goals, specific world states, and specific pieces of knowledge. Successful searches are cached in LTM for later reuse. Heuristics and pre-compiled solutions can reduce search, which is why most everyday problems are solved rapidly. One specific kind of heuristic is rapid selection between alternatives based on their types. Additionally, the architecture may have built-in preferences for certain kinds of alternatives over others.

In the example of a person winning a sum of money in the lottery, both the generation of alternatives and their assessments will typically involve such searches. A subgoal of "keep spouse happy" might bring to WM from LTM recent episodes of disharmony, which over a sequence of additional retrievals might eventually suggest remodeling the kitchen as a possible choice to spend the money on.
Evaluating the relative merit of one alternative over another, e.g., remodeling the kitchen vs. buying a new car, would typically involve a mental simulation of the consequences of each choice, which again requires repeated access to LTM. Limitations of time, knowledge and memory access all in various ways contribute to the output of these processes not being as complete or accurate as they could be, leading to the concept of the Bounded Rationality of the human as a cognitive agent (Simon, 1972). Which choice is rational may be obvious once the alternatives are generated and multi-criterially evaluated, but, as mentioned, that doesn't speak to the rationality of the generation and evaluation of the alternatives themselves.

Let us look a bit more closely at two steps in the process: retrieval from LTM, and choosing between alternatives in the state space. LTM is accessed to solve subgoals — relevant content is brought to WM to set up problem spaces. The associative memory retrieval process works by using the contents of WM as indexes to identify relevant LTM elements. The indexes are provided by the terms in the description in WM of the states, goals and subgoals.

In selecting between alternatives, the architecture has to assign preference values to the alternatives. LTM may directly provide them, either as previously compiled values or as heuristics, or the architecture may set up computing preferences as a subgoal. In either case, it is simply another instance of solving for a subgoal, in the manner described in the previous paragraph. However, under certain conditions, the architecture may use hard-wired heuristic preferences; this does not require accessing LTM. It seems a plausible hypothesis that our evolutionary ancestors derived some advantage from a hard-wired choice selection mechanism that, under requirements of rapid decision making, preferred alternatives that evoked gain over those that evoked loss, and alternatives that evoked safety over those that evoked danger.

A feature that is common to the operation of both of these mechanisms, viz., retrieval from LTM and selection among alternatives by hard-wired preferences, is that they depend on the descriptive language used, i.e., the terms in which the relevant items in WM are couched. The same state or subgoal described using different terms, but in a logically equivalent way, may retrieve different elements of LTM. The hard-wired preference mechanism, similarly, responds to the terms, such as loss or gain, in the description, not to the totality of the meanings of the alternatives.

3.4. Architectural Explanation: The Framing Bias

We can now look at the Framing Bias example in the light of the architecture and its properties. "Keep £20" and "Lose £30" may be equivalent in the experimental setup, but the architectural hard-wired preference mechanism has only the terms "gain" and "loss" to go by. The subjects had limited time, and the hard-wired mechanism left its mark in the experimental results.

3.5. Architectural Explanation: Rationality of the Morgenbesser Choice

Consider this hypothetical account of Morgenbesser's behavior. When he was told of apple and blueberry pies as choices, the choices and his dining goals evoked two criteria: taste and calorie content. Using additional knowledge and access to his own preferences, suppose he found apple pie to be the Pareto-optimal choice, which of course he then ordered. When the waitress came back with cherry pie as a choice, suppose that this new term evoked an LTM element about pesticide use in cherry farming. Morgenbesser now decides to use pesticide use as a third criterion, and in this new formulation of the decision problem — three criteria — blueberry pie becomes Pareto-optimal over apple and cherry pies. In this hypothetical account, Morgenbesser was not joking, but making a rational decision.

As this hypothetical shows, the same feature of the architecture that played a role in the generation of the Framing Bias can also potentially increase the overall rationality of DMITW. This is also a caution that issues of rationality in DM are more complex than laboratory studies of formal reasoning problems might indicate.

3.6. Transitivity and Other Biases

Another bias that is relevant is what has been called Arrow's Paradox in theories of voting, and it shows up as a kind of transitivity failure in individual decision-making. A group of voters may prefer candidate A to B, B to C, but C to A, when the choices are presented in pairs. The corresponding example in individual decision-making is when the DM is given pairs of decision alternatives, (A,B), (B,C) and (C,A), and prefers A to B, B to C and C to A.

Why does this kind of ostensibly irrational behavior arise? Looking at the voting example, think of each voter as a criterion in a 3-choice, n-criterion (where n is the number of voters) MCDM problem. Suppose each voter is asked to distribute 1 unit of preference over the three candidates, each allocation representing a degree of liking of each candidate. For example, if V1, V2 and V3 are voters, suppose V1 assigns 0.5, 0.3 and 0.2, V2 assigns 0.3, 0.36 and 0.34, and V3 assigns 0.3, 0.2 and 0.5, to A, B and C respectively.
0.36 and 0.34, and V3 assigns 0.3, 0.2 and 0.5, to A, B and C respectively. Straight voting will exhibit Arrow's Paradox, but choosing the candidate with the highest total allocation by the voters does not. Viewing each voter as a criterion in an MCDM problem, the Pareto-optimal decision rule will not exhibit a transitivity error. However, more than one candidate may end up in the Pareto-optimal set. In the specific example above, all three candidates will be Pareto-optimal. The DM will need to call upon additional trade-off preferences to select between them.

3.7. Decision Support Systems and Framing Effects

Retracing what we said, real-world problems don't come fully formalized, and DM's often have to engage in open-ended problem solving to figure out the goals, the alternatives, the criteria to evaluate the alternatives, and the criteria values themselves. This activity requires repeated access to information in LTM, and may also make use of architecturally hard-wired preference mechanisms. Rationality in this process consists of making sure that the DM makes full use of all the relevant knowledge, unlike the person who wished he had chosen V8. If our analysis is correct, framing – how goals, states and criteria are described – has an impact on the decision process because retrieval of knowledge from LTM and the architectural preference mechanisms depend on the terms in the description in WM. Thus, framing may reduce rationality by failing to bring to WM all the relevant knowledge the agent has.

Consider a COA planning problem, which the planner wishes to view as a multi-criterial decision-making (MCDM) problem. To illustrate the issue of framing, let us assume that the problem has been formalized, i.e., the COA alternatives, criteria and their values have been specified. The DSS can help by generating the Pareto-optimal subset. Narrowing the choice calls for injection of additional information by the DM. A common form of additional information is trade-off preferences. For example, he may notice that while alternative A1 scores higher on criterion C1, alternative A2 scores higher on criterion C2, but A2's C1 value is only 5% smaller, while its C2 value is 20% higher. He may decide that on balance, A2 is really the better choice. This decision is not simply one of balancing 5% against 20%, since the criteria may be quite incommensurable. Rather, he applies additional domain knowledge that convinces him that the utility of A2 will be higher than that of A1. Decision theory assumes the existence of a differential utility to explain the ability of a DM to make a trade-off, but the processes a human DM engages in to access or estimate this differential utility have not been much discussed.

I suggested earlier, in the discussion of the example problem of how to spend a modest lottery win, that both generating relevant criteria and coming up with relative scores on these criteria for various alternatives often require a kind of mental simulation of the consequences of a choice. The effectiveness of this simulation, like that of all problem-solving, depends on relevant knowledge being retrieved to set up the problem space and evaluate the states.

Suppose one of the criteria in a COA planning task is "fuel consumption during the operation," and suppose that a DM is trying to apply trade-off preferences between two Pareto-optimal COA's. If COA1 is better than COA2 by 10% on the fuel consumption criterion, and COA2 is better than COA1 on the criterion "time to complete mission," the DM, by thinking through various scenarios, might conclude that COA2 is better for his purposes. Suppose now that the same fuel criterion is framed differently, as "fuel remaining for future mission." The latter can be obtained quite readily from the former, so it is logically equivalent. However, the process of mental simulation might be rather different in this case. The DM might now be reminded of a possible mission immediately following the current one, something that had not explicitly been part of the specifications for the current mission planning task. Now, his simulation of the consequences of the two alternative COA's yields different results. The relative unimportance of a 10% difference in fuel consumption is replaced by the significant potential advantage of 10% additional fuel for the next mission. Even though future missions were not explicitly part of the formal statement of the current COA problem, the re-framing of the criterion has elicited new knowledge, which in turn has made available to the commander an additional comparison criterion: readiness for the next day's mission.

The lesson is not that one or the other framing must be chosen – perhaps the designer should consider including both framings, if there is reason to believe that each will help the DM invoke different relevant knowledge. In general, the DSS should encourage the DM to bring to bear as much of his potentially relevant knowledge as possible.

4. CONCLUDING REMARKS

This paper has focused on two issues related to optimality in planning, and their implications for the design of DSS's. The first was that the real goal of planning should be to come up with plans that are robust over a variety of situations rather than optimal with respect to the assumptions embedded in the models used. The second focused on how the structure of human cognition can explain the Framing Effect that has been observed in human decision-making and that often leads
to sub-optimal decisions. Additionally, we noted that much of the discussion of optimality has been in the context of formalized decision situations. However, everyday human decision making, what we called "decision-making in the wild," is devoted to formulating – in some ways formalizing – the decision task, i.e., to getting to a place where the traditional notion of rationality or optimality can even be applied. While it is useful for DSS's to help a DM arrive at an optimal solution for a well-formulated problem, the deeper challenge is to help the DM formulate the problem by bringing to bear his knowledge, understand the decision space, and explore the robustness of his solutions by changing the assumptions.

DSS's should exploit the strengths of the human cognitive architecture, and mitigate its weaknesses. Its strengths are a large, potentially open-ended store of knowledge, including knowledge about accessing external knowledge, and an ability to bring relevant knowledge, if very slowly, to bear on the task at hand. Its weaknesses are that it is slow; that its evolutionarily hard-wired architectural constraints can cause biases, as can the fact that access to knowledge depends on cues in descriptions and may miss relevant knowledge; and that it has other memory limitations, including a low-capacity working memory.

ACKNOWLEDGMENTS

This research was supported by the author's participation in the Advanced Decision Architectures Collaborative Technology Alliance sponsored by the U.S. Army Research Laboratory under Cooperative Agreement DAAD19-01-2-0009. Years of discussions with John Josephson have helped in the evolution of these ideas. Stimulating collaboration with Michael Barnes and Patricia McDermott on decision support systems is acknowledged with appreciation.

REFERENCES

Bankes, S.C., "Tools and techniques for developing policies for complex and uncertain systems," PNAS, 2002, 99 (suppl. 3), pp. 7263-7266.

Chandrasekaran, B. and Mark Goldman, "Exploring Robustness of Plans for Simulation-Based Course of Action Planning: A Framework and an Example," Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Multicriteria Decision Making (MCDM 2007), April 1-5, Honolulu, HI, IEEE, ISBN: 1-4244-0698-6, pp. 185-192.

De Martino, Benedetto; Dharshan Kumaran, Ben Seymour, and Raymond J. Dolan, "Frames, Biases, and Rational Decision-Making in the Human Brain," Science, Vol. 313, no. 5787, pp. 684-687, 2006.

Hillis, David; Michael Barnes, Liana Suantak, Jerry Schlabach, B. Chandrasekaran, John R. Josephson, and Mark Carroll, "Collaborative Visualization Tools for Courses of Action (COA) in Anti-Terrorist Operations: Combining Coevolution and Pareto Optimization," Proceedings of the Army Research Laboratories Collaborative Technology Alliance Symposium, April 29 - May 1, 2003.

Josephson, John R.; B. Chandrasekaran, Mark Carroll, Naresh Iyer, Bryon Wasacz, Giorgio Rizzoni, Qingyuan Li, and David A. Erb, "An Architecture for Exploring Large Design Spaces," Proc. of the National Conf. on AI (AAAI-98), AAAI Press/The MIT Press, pp. 143-150.

Kaste, Richard; Janet O'May, Eric Heilman, B. Chandrasekaran, and John Josephson, "From Simulation to Insights: Experiments in the Use of a Multi-Criterial Viewer to Develop Understanding of the COA Space," Proceedings of the Army Research Laboratories Collaborative Technology Alliance Symposium, April 29 - May 1, 2003.

Laird, J.E., Newell, A., and P.S. Rosenbloom (1987). "Soar: An architecture for general intelligence," Artificial Intelligence, 33, 1-64.

Lempert, R., S. Popper, and S. Bankes, "Confronting Surprise," Social Science Computer Review, 2002, 20(4), pp. 420-440.

Simon, Herbert A. (1982). Models of Bounded Rationality, Vols. 1 and 2. MIT Press.
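APPENDIX: A SKETCH OF THE PARETO-OPTIMAL DECISION RULE

The contrast drawn in Section 3.6 between pairwise voting and the Pareto-optimal decision rule can be made concrete with a small computation. The following Python sketch is illustrative only (the function names are mine, not the paper's); it applies both rules, plus the "highest total allocation" rule, to the paper's example allocations by voters V1, V2 and V3.

```python
# Illustrative sketch (not from the paper): the voter allocations of
# Section 3.6, with each voter treated as a criterion in an MCDM problem.
scores = {
    "A": (0.5, 0.3, 0.3),    # preference allocated to A by V1, V2, V3
    "B": (0.3, 0.36, 0.2),
    "C": (0.2, 0.34, 0.5),
}

def majority_winner(x, y):
    """Pairwise vote: the candidate preferred by more voters wins."""
    wins_x = sum(1 for a, b in zip(scores[x], scores[y]) if a > b)
    return x if wins_x * 2 > len(scores[x]) else y

def dominates(v, w):
    """Pareto dominance: v is at least as good as w on every criterion,
    and strictly better on at least one."""
    return all(a >= b for a, b in zip(v, w)) and any(a > b for a, b in zip(v, w))

def pareto_optimal(scores):
    """Candidates not Pareto-dominated by any other candidate."""
    return [c for c, v in scores.items()
            if not any(dominates(w, v) for d, w in scores.items() if d != c)]

# Straight pairwise voting cycles, exhibiting Arrow's Paradox:
print(majority_winner("A", "B"), majority_winner("B", "C"),
      majority_winner("C", "A"))          # A B C -- A beats B, B beats C, yet C beats A

# The Pareto-optimal rule makes no transitivity error; it simply leaves
# all three candidates in the Pareto-optimal set:
print(pareto_optimal(scores))             # ['A', 'B', 'C']

# The "highest total allocation" rule also avoids the cycle:
totals = {c: sum(v) for c, v in scores.items()}
print(max(totals, key=totals.get))        # A (total 1.1)
```

As the paper notes, when the Pareto-optimal rule leaves several candidates standing, a DSS would then need the DM's trade-off preferences to select among them.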