Decision making
  • The most common term is Trade Study. I’ve also seen Trade-off Study. But I prefer Tradeoff Study. <br />
  • Asterisks (*) in the title or in individual bullets indicate that there are comments for the instructor in the notes section. <br /> This slide is intended for the instructor, not the students. <br />
  • This slide is intended for the instructor, not the students. <br />
  • Telephones are in your pockets and purses. <br />
  • Slides with a red title are overview slides, each bullet will be discussed with several subsequent slides. <br />
  • The purpose of these slides is to show the big picture and where tradeoff studies fit into the big picture. <br /> The top level activity is CMMI, DAR is one process area of CMMI, and tradeoff studies are one technique of DAR. <br />
  • When I give a quote without a source, I am usually the author. I just put the quote marks around it to make it seem more important. <br />
  • The left column has the CMMI specific practices associated with DAR. <br />
  • Perform Decision Analysis and Resolution PS0317 <br /> Perform Formal Evaluation PD0240 <br /> When designing a process, put as many things in parallel as possible. <br />
  • I seldom make such bold statements. <br />
  • The task of allocating resources is not a tradeoff study, but it certainly would use the results of a tradeoff study. <br /> The quote is probably from CMMI. <br />
  • Give the students a copy of the letter, which is available at www.sie.arizona.edu/sysengr/slides/tradeoffMath.doc, page 24. <br />
  • Ref: Decide Formal Evaluation
  • Ref: Guide Formal Evaluations
  • Ref: Guide Formal Evaluations
  • Ref: Establish Evaluation Criteria
  • Some people will do a tradeoff study when buying a house or a car, but seldom for lesser purchases. <br /> All companies should have a repository of good evaluation criteria that have been used. <br /> Each would contain the following slots <br /> Name of criterion <br /> Description <br /> Weight of importance (priority) <br /> Basic measure <br /> Units <br /> Measurement method <br /> Input (with expected values or the domain) <br /> Output <br /> Scoring function (type and parameters) <br />
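    A repository like the one described above needs some record structure for each criterion. As a rough sketch (the field names simply mirror the slots listed in the note above; they are not from any particular tool, and the example values are made up):

        # One entry in a criterion repository; values are illustrative only.
        from dataclasses import dataclass
        from typing import Callable, Tuple

        @dataclass
        class Criterion:
            name: str                                  # name of criterion
            description: str
            weight: float                              # weight of importance (priority)
            basic_measure: str
            units: str
            measurement_method: str
            input_domain: Tuple[float, float]          # expected values or the domain
            output_range: Tuple[float, float] = (0.0, 1.0)
            scoring_function: Callable[[float], float] = lambda x: x   # type and parameters

        tastiness = Criterion(
            name="Tastiness",
            description="How much the diners enjoy the meal",
            weight=8,
            basic_measure="Mean rating by the lunch group",
            units="stars (1-5)",
            measurement_method="Post-lunch survey",
            input_domain=(1.0, 5.0),
            scoring_function=lambda x: (x - 1.0) / 4.0,   # map 1-5 stars onto 0-1
        )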
  • Evaluation criteria: Cost, Preparation Time, Tastiness, Novelty, Low Fat, Contains the Five Food Groups, Complements Merlot Wine, Distance to Venue, length of line, messiness, who you are eating with (if it’s your Mormon boss you should forgo the beer) <br /> If you get them wrong, you’ll get the rhinoceros instead of the chocolate torte. <br />
  • *If these very important requirements are performance related, then they are called key performance parameters. <br /> **Killer criteria for today’s lunch: must be vegetarian, non alcoholic, Kosher, diabetic, <br />
  • *The Creativity Tools Memory Jogger, by D. Ritter & M. Brassard, GOAL/QPC 1998, explains several tools for creative brainstorming. <br /> **If a requirement cannot be traded off then it should not be in the tradeoff study. <br /> ***The make-reuse-buy process is a part of the Decision Analysis and Resolution (DAR) process. <br />
  • Candidate meals: pizza, hamburger, fish & chips, chicken sandwich, beer, tacos, bread and water. <br /> Be sure that you consider left-overs in the refrigerator. <br />
  • Ref: Select Evaluation Methods
  • Additional sources include customer statements, expert opinion, historical data, surveys, and the real system.
  • Ref: Evaluate Alternatives
  • Ref: Select Preferred Solutions
  • Ref: Expert Review of Trade off Studies
  • Note that this slide says that the formal evaluations should be reviewed. <br /> It does not say that the results of the formal evaluations should be reviewed. <br />
  • IPT stands for integrated product team or integrated product development team. <br />
  • These results might be the preferred alternatives, <br /> or they could be recommendations to expand the search, re-evaluate the original problem statement, or negotiate goals and capabilities with the stakeholders. <br /> A most important part of these results is the sensitivity analysis. <br />
  • Slide 46 lists some possible methods. <br /> The title of this slide is the example that we will present in the next 18 slides. <br /> In these next 18 slides, the phrases in pink will be the DAR specific practices (rectangular boxes of the process diagram) we are referring to. <br /> Some people get confused by the recursion in this example. <br /> The May-June 2007 issue of the American Scientist says recursive thinking is the only thing that distinguishes humans from animals. <br /> I do a tradeoff study to select a tradeoff study tool. <br />
  • *MAUT was originally called Multicriterion Decision Analysis. The first complete exposition of MCDA was given in 1976 by Keeney, R. L., & Raiffa, H. Decisions With Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, reprinted, Cambridge University Press, 1993. <br /> **AHP is often implemented with the software tool Expert Choice. <br />
  • Sorry if this is confusing, but this example is recursive. <br /> MAUT and AHP are both the alternatives being evaluated and the methods being used to select the preferred alternatives. <br />
  • In this example we are not using scoring functions, therefore the evaluation data are the Scores. <br /> The evaluation data are derived from approximations, models, simulations or experiments on prototypes. <br /> Typically the evaluation data are normalized on a scale of 0 to 1 before the calculations are done: for simplicity, we have not done that here. <br /> The numbers in this example indicate that MAUT is twice as easy to use as AHP. <br />
  • Weights are usually based on expert opinion or quantitative decision techniques. Typically the weights are normalized on a scale of 0 to 1 before the calculations are done: I did not do that here. How did we get the weights of importance? I pulled them out of the blue sky. Is there a systematic way to get weights? Yes, there are many. One is the AHP. A small numeric sketch of normalizing weights and forming the weighted ratings follows below.
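    As a rough illustration (the weights and scores here are made up, not the ones on the slide), normalizing the weights and applying the sum combining function looks like this:

        # Sketch of the sum combining function with normalized weights.
        # All numbers are illustrative only.
        weights = {"ease_of_use": 2.0, "accuracy": 1.0}        # raw weights of importance
        scores = {
            "MAUT": {"ease_of_use": 0.8, "accuracy": 0.4},
            "AHP":  {"ease_of_use": 0.4, "accuracy": 0.6},
        }

        total_w = sum(weights.values())
        norm_w = {c: w / total_w for c, w in weights.items()}  # weights now sum to 1

        ratings = {alt: sum(norm_w[c] * s[c] for c in norm_w) for alt, s in scores.items()}
        print(ratings, "preferred:", max(ratings, key=ratings.get))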
  • If you had ten criteria, then this matrix would be ten by ten. <br />
  • Remember the numbers in the right column. They will go into the matrix seven slides from here.
    Expert Choice has two methods for normalization, and they often give slightly different numbers.
    It might be difficult to square large matrices, so Saaty (1980) gave 4 approximation methods.
    AHP, exact solution: Raise the preference matrix (with forced reciprocals) to arbitrarily large powers, and divide the sum of each row by the sum of the elements of the matrix to get a weights column. (Dr. Bahill's example uses a power of 2.)
    To compute the Consistency Index:
    Multiply the preference matrix by the weights column.
    Divide the elements of this new column by the elements in the weights column.
    Sum the components and divide by the number of components. This gives λmax (called the maximum or principal eigenvalue).
    The closer λmax is to n, the order of the preference matrix, the more consistent the result.
    Deviation from consistency is represented by the Consistency Index, C.I. = (λmax − n)/(n − 1).
    Calculating the average C.I. from many randomly generated preference matrices gives the Random Index (R.I.), which depends on the number of preference matrix columns (or rows): 1, 0.00; 2, 0.00; 3, 0.58; 4, 0.90; 5, 1.12; 6, 1.24; 7, 1.32; 8, 1.41; 9, 1.45; 10, 1.49; 11, 1.51; 12, 1.48; 13, 1.56; 14, 1.57; 15, 1.59.
    The ratio of the C.I. to the average R.I. for the same order matrix is called the Consistency Ratio (C.R.). A Consistency Ratio of 0.10 or less is considered acceptable.
    Saaty, T. L., The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation, McGraw-Hill, New York, 1980.
    Saaty's 4 approximation methods:
    The crudest: Sum the elements in each row and normalize by dividing each sum by the total of all the sums, so that the results add up to unity. The first entry of the resulting vector is the priority of the first activity (or criterion), the second of the second activity, and so on.
    Better: Take the sum of the elements in each column and form the reciprocals of these sums. To normalize so that these numbers add to unity, divide each reciprocal by the sum of the reciprocals.
    Good: Divide the elements of each column by the sum of that column (i.e., normalize the column), then add the elements in each resulting row and divide this sum by the number of elements in the row. This is a process of averaging over the normalized columns. (Dr. Goldberg's example.)
    Good: Multiply the n elements in each row and take the nth root. Normalize the resulting numbers.
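    The power-and-row-sum recipe above is easy to reproduce. The following sketch (with a made-up 3×3 reciprocal preference matrix, not one from the slides) estimates the weights, λmax, the Consistency Index and the Consistency Ratio:

        # Sketch of the AHP calculation described in the note above.
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])               # pairwise preferences with forced reciprocals
        n = A.shape[0]

        M = np.linalg.matrix_power(A, 4)               # raise to an "arbitrarily large" power
        weights = M.sum(axis=1) / M.sum()              # row sums divided by the grand total

        lam_max = float(np.mean((A @ weights) / weights))   # estimate of the principal eigenvalue
        CI = (lam_max - n) / (n - 1)                        # Consistency Index
        RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]                 # Random Index from Saaty's table
        CR = CI / RI                                        # 0.10 or less is considered acceptable

        print(np.round(weights, 3), round(CR, 3))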
  • Obviously you really want the inverse of price. All criteria must be phrased as more is better. <br />
  • Filling in this table is an in-class exercise <br />
  • All of the students should get this far. <br /> If you think that tastiness is moderately less important than price, then you could put in 1/3 or -3 depending on the software you are using. <br />
  • Some of the students might do this. <br />
  • Remember the numbers in the right column. They will go into the matrix two slides from here. <br />
  • Remember the numbers in the right column. They will go into the matrix on the next slide. <br />
  • *The AHP software (Expert Choice) can also use the product combining function. <br /> Of course there is AHP software (e. g. Expert Choice) that will do all of the math for you. <br /> **The original data had only one significant figure, so these numbers should be rounded to one digit after the decimal point. <br />
  • The AHP software computes an inconsistency index. If A is preferred to B, and B is preferred to C, then A should be preferred to C. AHP detects intransitivities and presents it as an inconsistency index. <br />
  • The result is robust. <br />
  • For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1. <br /> This function gives more weight to the weights of importance. <br />
  • We only care about absolute values. <br /> If the sensitivity is positive it means when the parameter gets bigger, the function gets bigger. <br /> If the sensitivity is negative it means when the parameter gets bigger, the function gets smaller. <br />
  • Improve the DAR process. <br /> Add some other techniques, such as AHP, to the DAR web course, not done yet <br /> Fix the utility curves document, done by Harley Henning Spring 2005 <br /> Add image theory to the DAR process, proposed for summer 2007 <br /> Change linkages in the documentation system, done Fall 2004 <br /> Create a course, Decision Making and Tradeoff Studies, done Fall 2004 <br />
  • This example should be familiar to the students. <br /> It shows that tradeoff studies really are done. <br /> The web site used to have a really good tradeoff study right up front. <br />
  • You cannot read this slide. <br /> It shows the tree structure of the criteria. <br /> It is expanded in the next 4 slides. <br />
  • This section is the heart of this course. <br /> It is intended to teach the students how to do a good tradeoff study. <br />
  • so that the decision maker can trust the results of a tradeoff study <br />
  • The god Anubis weighing the heart of the dead against Maat's feather of Truth. If your heart doesn't balance with the feather of truth, then the crocodile monster eats you up.
  • Back in the Image Theory section we said there were two types of decisions. <br /> Adoption decisions determine whether to add new goals to the trajectory image or new plans to the strategic image. This could include Allocating resources. <br /> Progress decisions determine whether a plan is making progress toward achieving a goal. This could include Making plans. <br />
  • The complete design of a Pinewood Derby is given in chapter 5 of Chapman, W. L., Bahill, A. T., and Wymore, A.W., Engineering Modeling and Design, CRC Press Inc., Boca Raton, FL, 1992, which is located at <br /> http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf <br />
  • This is only a fragment of the Pinewood Derby tradeoff study. <br />
  • In football and baseball the managers do tradeoff studies to select each play, <br /> except at the beginning of some football games where they have a preplanned sequence of plays. <br /> In basketball they select plays with tradeoff studies only a few times per game. <br /> One of my friends (from India) argued with me about the selecting a husband or wife comment. <br />
  • You should do tradeoff studies at the very beginning of the design process, but you also do tradeoff studies throughout the whole system life cycle. <br /> The 80-20 principle was invented by Juran and attributed to Pareto in the 1st ed of Juran’s Quality Control Handbook. <br /> Much later in his article, Mea Culpa, he comments on the development of his idea, <br /> and notes that many quality colleagues urged him to correct the attribution. <br /> The original data for this slide come from a Toyota auto manufacturing report, from around 1985. <br />
  • The last bullet provides a segue to the next topic, “Well how do people think?” <br />
  • Assume you are going to lunch in Little Italy or on Coronado Island and you don’t know any of the restaurants in the area. <br /> You drive along until you get “close enough” and then decide to take the next parking space you see. <br /> You don’t do a tradeoff study of parking lots and different on-street areas. <br /> You park your car. <br /> Then you walk along and look at restaurant-1. Let’s say that you decide that it is not satisfactory. <br /> You look at restaurant-2. Let’s say that you decide that it is not satisfactory. <br /> You look at restaurant-3. Let’s say that you find it to be satisfactory. But you keep on looking. <br /> You look at restaurant-4 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-4. <br /> You look at restaurant-5 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-5. <br /> You look at restaurant-6 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-6. <br /> Now let’s assume that your friends say that they are hungry and tired and they don’t want to look any more. <br /> You probably go back to restaurant-3. <br /> You never considered doing a tradeoff study of all six restaurants. <br /> At the most you did pair-wise comparisons. <br />
  • Driving down a freeway looking for a gas station, I might see a gas station with a price of $2.60 per gallon. <br /> I would say that is too expensive. The next gas station might ask $2.65, I would also pass that one by. <br /> However, I might start to run out of gas, and then see a station offering $2.70 per gallon. <br /> I would take it, because the expense of going back to the first station would be too high. <br /> T. D. Seeley, P. K. Visscher and K. M. Passino, Group Decision Making in Honey Bee Swarms, American Scientist, 94(3): 220-229, May-June 2006. <br />
  • Customers of eBay might use either strategy. <br /> At first I asked my wife and niece to look for Tinkertoy kits on eBay and let me know what was available. <br /> Then I switched strategies and said, Buy any kit you see that contains a yellow figure or a red lid. <br />
  • Often we need a burning platform to get people to move. <br />
  • There is one goal and everyone agrees upon it. <br /> DMs have unlimited information and the cognitive ability to use it efficiently. They know all of the opportunities open to them and all of the consequences. <br /> The optimal course of action can be described and it will, in the long run, be more profitable than any other. <br /> A synonym often used for prescriptive model is normative model. In contrast a descriptive model explains what people actually do. <br /> Von Neumann and Morgenstern (1947) <br />
  • Systems engineers do not seek optimal designs, we seek satisficing designs. <br /> Systems engineers are not philosophers. <br /> Philosophers spend endless hours trying to phrase a proposition so that it can have only one interpretation. <br /> SEs try to be unambiguous, but not at the cost of never getting anything written. <br /> H. A. Simon, A behavioral model of rational choice, Quarterly Journal of Economics, 59, 99-118, 1955. <br />
  • Our first example of irrationality is that often we have wrong information in our heads. <br /> What American city is directly north of Santiago Chile? <br /> Most Americans would say that New Orleans or Detroit is north of Santiago, instead of Boston <br /> Or, if you travel from Los Angeles to Reno Nevada, in what direction would you travel? <br /> Most Americans would suggest that Reno is northeast of LA, instead of northwest. <br /> Which end of the Panama canal is farther West the Atlantic side or the Pacific side? <br /> Most Americans would say the Pacific. <br /> These examples were derived from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994. <br />
  • The previous slide gave examples of one type of cognitive illusion. <br /> In the next slides we will give examples of a few more types. <br /> A couple dozen more types are given in <br /> Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994. <br />
  • Probably the most famous and most studied optical illusion was created by German psychiatrist Franz Müller-Lyer in 1889. Which of the two horizontal line segments is longer? Although your visual system tells you that the one on the left is longer, a ruler will confirm that they are equal in length. Do you think that the slide's title is centered? It is.
  • Stare at the black cross. Where do the green dots come from? This illusion is from http://www.patmedia.net/marklevinson/cool/cool_illusion.html and it only works in PowerPoint presentation mode. Concentrate on the black "+" in the centre of the picture; after a short period, all the pink dots will slowly disappear, the moving dot turns green, and you will see only a single green dot rotating. Another good web site for visual illusions is http://www.socsci.uci.edu/~ddhoff/
  • The upper-left quadrant is defined as rational behavior. EV means expected value. SEV is subjective expected value. In the next slides we will show how human behavior differs from rational behavior. Edwards, W., "An Attempt to Predict Gambling Decisions," Mathematical Models of Human Behavior, Dunlap, J.W. (Editor), Dunlap and Associates, Stamford, CT, 1955, pp. 12-32.
  • People overestimate events with low probabilities, like being killed by a terrorist or in an airplane crash, <br /> and underestimate high probability events, such as adults dying of cardiovascular disease. <br /> The existence of state lotteries depends upon such overestimation of small probabilities. <br /> At the right side of this figure, <br /> the probability of a brand new car starting every time is very close to 1.0. But a lot of people put jumper cables in the trunk and buy memberships in AAA. <br /> M. G. Preston and P. Baratta, An experimental study of the auction-value of an uncertain outcome, American Journal of Psychology, 61, pp. 183-193, 1948. <br /> Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185. <br /> Tversky and Kahneman, (1992) <br /> Drazen Prelec, in D. Kahneman & A. Tversky (Eds.) “Choices, Values and Frames” (2000) <br /> Animals exhibit similar behavior. <br /> People overestimate low probabilities and do not distinguish much between intermediate probabilities. Rats show this pattern too (Kagel 1995). <br /> People are more risk-averse when the set of gamble choices is better. <br /> But humans also violate this pattern, and so do rats (Kagel 1995). <br /> People also exhibit “context-dependence”: Whether A is chosen more often than B can depend on the <br /> presence of an irrelevant third choice C (which is dominated and never chosen). <br /> Context dependence means people compare choices within a set rather than assigning separate numerical utilities. <br /> Honeybees exhibit the same pattern (Shafir, et al. 2002). <br /> Animals are also risk averse, as defined about a dozen slides from here. <br /> John Kagel, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995. <br /> S. Shafir, T. M. Waite and B. H. Smith. “Context-dependent violations of rational choice in honeybees (Apis mellifera) and gray jays (Perisoreus <br /> canadensis).” Behavioral Ecology and Sociobiology, 2002, 51, 180-187. <br /> Every year 50 Americans die of cardiovascular disease for every one that dies of AIDS. <br />
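    One way to picture the over- and under-weighting of probabilities described above is the one-parameter Prelec weighting function, w(p) = exp(−(−ln p)^α). The sketch below uses an illustrative α, not a value taken from the slides or the cited papers.

        # Subjective decision weight for an objective probability p (Prelec form).
        import math

        def prelec_weight(p: float, alpha: float = 0.65) -> float:
            return math.exp(-((-math.log(p)) ** alpha))

        for p in (0.01, 0.1, 0.5, 0.9, 0.99):
            print(f"p = {p:5.2f}  ->  w(p) = {prelec_weight(p):.3f}")
        # Small probabilities are overweighted (w > p); large ones are underweighted (w < p).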
  • Humans are not good at computing probabilities, as is illustrated by the Monty Hall Paradox. This paradox was invented by Martin Gardner and published in his Scientific American column in 1959. It is called the Monty Hall paradox because of its resemblance to the TV show Let’s Make a Deal. I have taken this version from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994. <br /> I am running a game that I can repeat hundreds of times. <br /> On a table in front of me are a stack of ten-dollar bills and three identical boxes, each with a lid. <br /> You are my subject. <br /> Here are the rules for each game. <br /> You leave the room and while you are out, I put a ten-dollar bill in one of the three boxes. <br /> Then I close the lids on the boxes. <br /> I know which box contains the ten-dollar bill, but you don’t. <br /> Now I invite you back into the room and you try to guess which box contains the money. <br /> If you guess correctly, you get to keep the ten-dollar bill. <br />
  • Each game is divided into two phases. <br /> In the first phase, you point to your choice. <br /> (You cannot not open, lift, weigh, shake or manipulate the boxes.) <br /> The boxes remain closed. <br />
  • After you make your choice, I open one of the two remaining boxes. <br /> I will always open an empty box (remember that I know where the ten-dollar bill is). <br />
  • Having seen one empty box (the one that I just opened) you now see two closed boxes, one of which contains the ten-dollar bill. <br />
  • Leave this slide up for a while and let people discuss what they think. <br />
  • This explanation is from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994. <br />
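    Readers who distrust the explanation can check it numerically. This small simulation of the three-box game (a sketch, not part of the original notes) shows that switching after the empty box is revealed wins about two-thirds of the time, while staying wins about one-third.

        # Monte Carlo check of the three-box (Monty Hall) game.
        import random

        def play(switch: bool) -> bool:
            prize = random.randrange(3)                  # box holding the ten-dollar bill
            choice = random.randrange(3)                 # subject's first pick
            # the host opens an empty box that is not the subject's choice
            opened = next(b for b in range(3) if b != choice and b != prize)
            if switch:
                choice = next(b for b in range(3) if b != choice and b != opened)
            return choice == prize

        trials = 100_000
        print("stay  :", sum(play(False) for _ in range(trials)) / trials)   # about 0.33
        print("switch:", sum(play(True) for _ in range(trials)) / trials)    # about 0.67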
  • This table explains three bets: A, B and C. The p's are the probabilities, the x's are the outcomes, μ is the mean and σ is the standard deviation. This table shows, for example, that half the time bet C would pay $1 and the other half of the time it would pay $19. Thus, this bet has an expected value of $10 and a standard deviation of $9 (a variance of 81). This is a comparatively big spread, so the risk (or uncertainty) is said to be high. Most people prefer the A bet, the certain bet. To model risk averseness across different situations, the coefficient of variability is often better than variance: Coefficient of variability = (Standard Deviation) / (Expected Value). In choosing between alternatives that are identical with respect to quantity (expected value) and quality of reinforcement, but that differ with respect to probability of reinforcement, humans, rats (Battalio, Kagel and MacDonald, 1985), bumblebees (Real, 1991), honeybees (Shafir, Watts and Smith, 2002) and gray jays (Shafir, Watts and Smith, 2002) prefer the alternative with the lower variance. To avoid the confusion caused by system engineers and decision theorists using the word risk in two different ways, we can refuse to use the word risk and instead use ambiguity, uncertainty and hazards. J. H. Kagel, R. C. Battalio and L. Greene, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995.
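    The arithmetic for a bet like C can be checked directly; this sketch computes the expected value, variance, standard deviation and coefficient of variability for a discrete bet.

        # Statistics of a discrete bet: half the time $1, half the time $19 (bet C above).
        def bet_stats(outcomes, probs):
            ev = sum(p * x for p, x in zip(probs, outcomes))
            var = sum(p * (x - ev) ** 2 for p, x in zip(probs, outcomes))
            sd = var ** 0.5
            return ev, var, sd, sd / ev                 # coefficient of variability = SD / EV

        ev, var, sd, cv = bet_stats([1, 19], [0.5, 0.5])
        print(f"EV = ${ev:.2f}, variance = {var:.0f}, SD = ${sd:.2f}, CV = {cv:.2f}")
        # -> EV = $10.00, variance = 81, SD = $9.00, CV = 0.90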
  • A little while ago, a wild fire was heading toward our house. We packed our car with our valuables, but we did not have room to save everything, so I put my wines in the swimming pool. We put the dog in the car and drove off. When we came back, the house was burned to the ground, but the swimming pool survived. However, all of the labels had soaked off of the wine bottles. Tonight I am giving a dinner party to celebrate our survival. I am serving mushrooms that I picked in the forest while we were waiting for the fire to pass. There may be some hazard here, because I am not a mushroom expert. We will drink some of my wine: therefore, there is some uncertainty here. You know that none of my wines are bad, but some are much better than others. Finally I tell you that my sauce for the mushrooms contains saffron and oyster sauce. This produces ambiguity, because you probably do not know what these ingredients taste like. How would you respond to each of these choices? <br /> Hazard: Would you prefer my forest picked mushrooms or portabella mushrooms from the grocery store? <br /> Uncertainty: Would you prefer one of my wines or a Kendall-Jackson merlot? <br /> Ambiguity: Would you prefer my saffron and oyster sauce or marinara sauce? <br /> Decisions involving these three concepts are probably made in different parts of the brain. Hsu, Bhatt, Adolphs, Tranel and Camerer [2005] used the Ellsberg paradox to explain the difference between ambiguity and uncertainty. They gave their subjects a deck of cards and told them it contained 10 red cards and 10 blue cards (the uncertain deck). Another deck had 20 red or blue cards but the percentage of each was unknown (the ambiguous deck). The subjects could take their chances drawing a card from the uncertain deck: if the card were the color they predicted they won $10, else they got nothing. Or they could just take $3 and quit. Most people picked a card. Then their subjects were offered the same bets with the ambiguous deck. Most people took the $3 avoiding the ambiguous decision. Hsu et al. recorded functional magnetic resonance images (fMRI) of the brain while their subjects made these decisions. While contemplating decision about the uncertain deck, the dorsal striatum showed the most activity and when contemplating decisions about the ambiguous deck the amygdala and the orbitofrontal cortex showed the most activity. <br /> Ambiguity, uncertainty and hazards are three different things. But people prefer to avoid all three. <br />
  • This slide also shows saturation. <br /> This slide also shows the importance of the reference point: $10 to a poor man means a lot more than $10 to a rich man. <br /> Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185. <br /> Massimo would prefer that we label the ordinate and abscissa as subjective worth and numerical value. <br />
  • The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not. The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill. The $3 bet has consequences that you might have to give me two million dollars. The $1 bet has the highest utility for most engineers. The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
  • The $1 bet has the highest utility for most engineers. <br />
  • Savage (1954) <br />
  • Kahneman got the Nobel Prize in 2002 for his part in developing Prospect Theory. <br /> Prospect theory is often called a descriptive model for human decision making. <br />
  • In the last two dozen slides, we showed how human behavior differed from rational behavior. Next we are going to show that tradeoff studies can help move you toward more rational decisions. <br />
  • Evaluation data for evaluation criteria come from approximations, product literature, analysis, models, simulations, experiments and prototypes. <br />
  • This is a template that can be used for criteria. <br />
  • This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf <br /> A lot of confusion has been caused by failure to differentiate between the name of the criterion and the basic measure for that criterion. <br /> As in this case, the words are often very similar. <br /> At this point it might also be useful to differentiate between metric and measure. <br /> Measure. A measure indicates the degree to which an entity possesses and exhibits a quality or an attribute. A measure has a specified method, which when executed produces values (or metrics) for the measure. <br /> Metric. A measured, calculated or derived value (or number) used by a measure to quantify the degree to which an entity possesses and exhibits a quality or an attribute. <br /> Measurement. A value obtained by measuring, which makes it a type of metric. <br />
  • Spend some time on this criterion, because we will bring it back later. Monotonic increasing, lower=0, baseline=90, slope=0.1, upper=100, plot limits 70 to 100.
  • This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf <br /> This second example was chosen to highlight the difference between the name of the criterion and the basic measure for that criterion. <br /> This Pinewood Derby chapter is from W. L. Chapman, A. T. Bahill and A. W. Wymore, Engineering modeling and design, CRC Press, Boca Raton, 1992. <br /> The reason we are using such an old reference is to show that we didn’t just jimmy up the example. <br /> It has been around for a long time. <br />
  • Of course, it depends on the circumstances. If availability were a probabilistic value, then it could be used. Perhaps like going to the library to get a copy of the latest best-selling book.
  • These are sometimes hierarchical, with attributes, criteria and then objectives. But an SEI paper says criteria contain attributes and objectives.
  • Other MoPs could be overall GPA, GPA in the major, extracurricular activities, summer internships, number of undergraduate credits, number of graduate credits, honorary societies, special awards, semesters in the program, <br />
  • From left to right, Moe Howard, Jerry (Curley) Howard and Larry Fine. <br />
  • If you are not using a scoring function, then instead of Total Life Cycle Cost, use the negative or the reciprocal <br />
  • http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf <br />
  • When we showed people the top curve and asked, “How would you feel about an alternative that gave 90% happy scouts?” they typically said, “It’s pretty good.” <br /> In contrast, when we showed people the bottom curve and asked, “How would you feel about an alternative that gave 10% happy scouts?” they typically said, “It’s not very good.” <br /> When we allowed them to change the parameters, they typically pushed the baseline for the Percent Unhappy Scouts scoring function to the left. <br />
  • The solution to this problem is to group all of the husband’s criteria into one higher level criterion called power. <br />
  • The deprecated words maximize and minimize should not be used in requirements, but they are OK in goals. <br /> On the other hand we could rewrite this as <br /> Selection criteria: The preferred alternative will be the one that produces the largest amount of food. <br />
  • I would like to have a rich, intransitive uncle. Assume that I have an Alfa Romeo and a BMW, and my uncle has a Corvette. I would love to hear him say, "I prefer your BMW to my Corvette, therefore I will give you $2000 and my Corvette for your BMW." Next he might say, "I prefer your Alfa Romeo to my BMW, therefore I will give you $2000 and my BMW for your Alfa Romeo." And finally I would wait with bated breath for him to say, "I prefer your Corvette to my Alfa Romeo, therefore I will give you $2000 and my Alfa Romeo for your Corvette." We would now have our original cars, but I would be $6000 richer. I would call him Uncle Money Pump. This example can start with any car and go in either direction. The only trick is that you must go in a circle.
  • The NAND operator is not associative. <br />
  • The “A Prioritization Process” paper explains why each of these aspects is important. <br /> Read that paper before discussing this slide. <br /> Botta, Rick, and A. Terry Bahill, “A Prioritization Process,” Engineering Management Journal, 19:4 (2007), pp. 20-27. <br />
  • Mnemonic: ordinal is ordering, as in rank ordering. <br />
  • *Those bullets are ORed. <br /> *The systems engineer should derive straw men priorities for all of the criteria. These priorities shall be numbers (usually integers) in the range of 0 to 10, where 10 is the most important. Then he or she should meet with the customer (how ever many people that might be). For each criterion, the systems engineer should lead a discussion of the criteria in the above table and then try to get a consensus for the priority. In the first pass, he or she might ask each stakeholder to evaluate each criterion and take the average value. However, if the customer only looks at one or two criteria and says the criterion is a 10, then it’s a 10. <br /> *Yes rank ordering gives ordinal numbers not cardinal numbers, but often the technique works well. <br /> *The systems engineer can help the customer make pair-wise comparisons of all the criteria and then use the analytic hierarchy process to derive the priorities. This would not be feasible without a commercial tool such as Expert Choice. This tool is discussed in Ref: COTS-Based Engineering Design of a Tradeoff Study Tool. <br /> *One algorithmic technique is on Karl Wiegers’ web site. <br /> *If all of the alternatives are very close on a criterion, then you might want to discount (give a low weight to) that criterion. <br /> Many other methods for deriving weights exist, including: the ratio method [Edwards, 1977], tradeoff method [Keeney and Raiffa, 1976], swing weights [Kirkwood, 1992], rank-order centroid techniques [Buede, 2000], and paired comparison techniques discussed in Buede [2000] such as the Analytic Hierarchy Process [Saaty, 1980], trade offs [Watson and Buede, 1987], balance beam [Watson and Buede, 1987], judgments and lottery questions [Keeney and Raiffa, 1976]. These methods are more formal and some have an axiomatic basis. For a comparison of weighting techniques, see Borcherding, Eppel, and Winterfeldt [1991]. <br /> K. Borcherding, T. Eppel, D. von Winterfeldt, Comparison of weighting judgments in multiattribute utility measurement, Management Science 37: 1603-1619, 1991. <br /> D. Buede, The Engineering Design of Systems, John Wiley, New York, 2000. <br /> W. Edwards, How to Use Multiattribute Utility Analysis for Social Decision Making, IEEE Trans Syst Man Cybernetics, SMC-7: 326-340, 1977. <br /> R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, 1976. <br /> C. W. Kirkwood, Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets, Duxbury Press, Belmont, 1997. <br /> T. L. Saaty, The Analytical Hierarchy Process, McGraw-Hill, New York, 1980. <br /> S. R. Watson, and D. M. Buede, Decision Synthesis: The Principles and Practice of Decision Analysis, Cambridge University Press, Cambridge, UK, 1987. <br /> The method of swing weighting is based on comparisons of how does the swing from 0 to 10 on one preference scale compare to the 0 to 10 swing on another scale? Assessors should take into account both the difference between the least and most preferred options, and how much they care about that difference. For example, in purchasing a car, you might consider its cost to be important. However, in a particular tradeoff study for a new car, you might have narrowed your choice to a few cars. If they only differ in price by $400, you might not care very much about price. That criterion would receive a low weight because the difference between the highest and lowest price cars is so small. <br />
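    As a tiny sketch of the swing-weighting idea described above (the criteria and point values are illustrative, not from the cited references): give the most important worst-to-best swing 100 points, rate the other swings relative to it, and normalize.

        # Swing weighting sketch: points answer "how much do I care about going
        # from the worst to the best value on this criterion?"
        swing_points = {
            "performance": 100,   # the most important swing gets 100
            "comfort": 60,
            "price": 10,          # the cars differ by only $400, so this swing matters little
        }
        total = sum(swing_points.values())
        weights = {c: pts / total for c, pts in swing_points.items()}
        print(weights)            # normalized weights sum to 1.0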
  • D. Redelmeier, and E. Shafir, Medical decision making in situations that offer multiple alternatives, Journal of the American Medical Association, 273(4) (1995), 302-305. <br />
  • A sacred cow is an idea that is unreasonably held to be immune to criticism. <br /> Saving the spotted owl, gnatcatchers, the Ferruginous Pigmy Owl and putting out all forest fires have been sacred cows to environmentalists. <br /> Most things that are termed politically correct are sacred cows. <br /> In Tucson, all transportation proposals contain the light rail alternative, because the lobby for this technology is very strong. <br />
  • *G. A. Miller, The magical number seven, plus or minus two: some limits on our capacity for processing information, <br /> The Psychological Review, 1956, vol. 63, pp. 81-97, www.well.com/user/smalin/miller.html. <br /> ** D.A. Redelmeier and E. Shafir, Medical decision making in situations that offer multiple alternatives, JAMA, Jan. 25, 1995, 273 (4) 302-305. <br />
  • CAIV is only used in the requirements phase. After the requirements are set it is too late. <br />
  • Near the end of this process the data will be quantitative and objective. <br /> But in the beginning they will be based on personal opinion of domain experts. <br /> There are techniques to help get such data from the experts. <br /> The literature on this topic is called preference elicitation (see Chen and Pu, 20xx). <br />
  • Cardinal measures indicate size or quantity. They were introduced about 15 slides ago. <br /> Fuzzy numbers will be discussed about 40 slides from here. <br />
  • Input of 88% produces output of 0.31. Input of 91% produces output of 0.6. <br />
  • The Bad example is just a linear transformation. You can do better than that. <br /> The output is intended to be cardinal (not ordinal) numbers. That is an output of 0.8 is intended to be twice as important as an output of 0.4. <br />
  • The purpose of this slide is to show that different combining methods can produce different preferred alternatives, as the sketch below illustrates.
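    A quick, made-up illustration of that point (the numbers are not from the slide): with the same normalized scores and weights, the sum combining function can prefer one alternative while the product combining function prefers another.

        # Two alternatives scored on two criteria (normalized 0-1); illustrative only.
        alternatives = {
            "A": {"cost": 0.95, "performance": 0.20},   # superb on one criterion, poor on the other
            "B": {"cost": 0.55, "performance": 0.55},   # balanced
        }
        weights = {"cost": 0.5, "performance": 0.5}

        def sum_combine(scores):
            return sum(weights[c] * scores[c] for c in weights)

        def product_combine(scores):                    # weighted (Nash) product
            rating = 1.0
            for c in weights:
                rating *= scores[c] ** weights[c]
            return rating

        for name, s in alternatives.items():
            print(name, round(sum_combine(s), 3), round(product_combine(s), 3))
        # The sum prefers A (0.575 vs 0.550); the product prefers B (0.550 vs about 0.436).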
  • When I was living in Pittsburgh, I went to the Carnegie Institute. I saw the fossil skeleton of a Brontosaur (that is what it was called at that time). I asked the guard, "How old are those dinosaur bones?" He replied, "They are 70 million, four years and six months." "That is an awfully precise number," I said. "How do you know their age so precisely? Is there a new form of radiocarbon dating?" The guard answered, "Well, they told me that those dinosaur bones were 70 million years old when I started working here, and that was four and a half years ago." This story is an example of false precision. Often students list their results with six digits after the decimal point, because that is the default on their calculators. You should not accept the default value. Deliberately choose the number of digits after the decimal point. In my last slide I chose two, because that was necessary and sufficient to show the differences between the alternatives. The number of digits to print can also be determined by the technique of significant figures.
  • Monotonic decreasing, lower=0, baseline=3, slope=-0.34, upper=10, plot limits 0 to 6. <br />
  • Monotonic increasing, lower=0, baseline=3, slope=0.34, upper=10, plot limits 0 to 10. <br />
  • Please do not try to explain this equation. <br /> It is only here in case someone asks about it. <br /> SSF1 is the first of twelve Standard Scoring Functions. <br />
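    The SSF1 equation itself is not reproduced in these notes. As a stand-in, the sketch below is a monotonic increasing scoring function with the same parameters (lower limit, baseline, slope at the baseline, upper limit) built from a logistic curve; with the Percent Happy Scouts parameters given earlier (baseline = 90, slope = 0.1) it reproduces the example values quoted a few notes back, about 0.31 at an input of 88% and about 0.6 at 91%. It is an approximation, not the actual Wymore standard scoring function.

        # Monotonic increasing scoring function mapping a raw input onto [0, 1],
        # with a score of 0.5 at the baseline and the stated slope at the baseline.
        import math

        def scoring_function(x, lower, baseline, slope, upper):
            x = min(max(x, lower), upper)               # clip the input to its domain
            return 1.0 / (1.0 + math.exp(-4.0 * slope * (x - baseline)))

        for pct in (70, 80, 88, 90, 91, 95, 100):       # Percent Happy Scouts example
            print(pct, round(scoring_function(pct, 0, 90, 0.1, 100), 2))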
  • If you could reduce the probability of loss of life for operators of your system from one in a million to one in ten million, <br /> I’m sure your customer would be happy. <br /> Using logarithms is a way to show this. <br />
  • That slide is spoken, “You can add dollars and pounds, but you can’t add dollars and pounds.” <br /> Therefore you need scoring functions in order to combine apples and oranges. <br />
  • An atomic bomb (actually a thermonuclear weapon) costs a billion dollars and lasts a nanosecond.
  • Wymore (1993) calls the criteria space the buildability cotyledon. <br />
  • These criteria are for selecting a printer for a computer. <br /> Cost is the inverse of selling price, because I didn’t want to use scoring functions yet. <br /> There will be lots of printers in the lower left area, but they are all inferior. <br /> There will be no printers in the upper right corner, because this is the infeasible region. <br /> The best alternatives will be on the quarter-circle. <br />
  • We cover these slides real fast. The detail is not important. <br />
  • By coincidence, (d Sum)/dx = - (d Product)/dx <br />
  • Alternatives on a circle could be cost and pages per minute for a laser printer. <br /> Alternatives on a straight line could be sharing a pie; pie for you and pie for me. <br /> Alternatives on an hyperbola could be various soda pop packages or human muscle. <br />
  • This sign was unknowingly based on a cartoon by Dana Fradon published in the New Yorker in 1976. <br /> Clearview is the font now used by the U. S. Highway administration. This is an approximation of it. <br />
  • The Sum is simpler if you are going to compute sensitivity functions, because it has fewer interaction terms. <br /> The product combining function is often called the Nash Product after Nobel Laureate John Nash who used this function in 1950. <br /> It is also called the Nash Bargaining Solution. <br /> The following three items are analogous. <br /> Risk is the probability of occurrence times the severity or consequences of the event. <br /> In the sum combining function we use the input value times the weight. <br /> Subjective expected utility is the probability times the utility. <br /> Transmission of light in an optical system is the product of the individual optical element transmissions. <br /> Probability chains are often multiplicative. For example, the probability of a missile kill is the product of probability of target detection, probability of successful launch, probability of successful guidance, probability of warhead detonation, probability of killing a critical area of the target. <br />
  • Minimax is not XOR, because it doesn’t alternate between criteria. It chooses just one criterion. <br />
  • They change the algorithm every year. See www.bcsfootball.org. In contrast, NASCAR uses the first 26 races to narrow down the field. After the first 26 races, the top ten drivers plus any other drivers within 400 points of the leader are selected to compete in the last ten races, which determine the champion.
  • The next dozen slides will discuss this combining function. <br />
  • Which athlete has the most championship rings? Yogi Berra, with 10? No, Bill Russell with 11 in the NBA and 2 in the NCAA, all as a player. John Wooden has 12 as a college basketball coach. Joe DiMaggio had 9 as a player. Phil Jackson and Red Auerbach each have 9 NBA rings. Bob Hayes is the only person with an Olympic gold medal and a Super Bowl ring. The Pittsburgh Steelers won 4 in the 1970s.
  • Use minimin to design a bat for Alex Rodriguez, because he always hits the ball right on the sweet spot. Use minimax for Terry Bahill. The ball won't go as far for a perfect hit, but it will not be a disaster for a mishit.
  • This decision to build on the mountain top is not based on expected values. <br /> Assume one violent thunderstorm is expected per decade. <br /> The expected loss for the mountain top is $10K/year, <br /> whereas the expected loss for the river bank is only $9K/year. <br />
  • This slide uses the numbers from the previous slide. <br />
  • Probability density functions are often used to help obtain evaluation data. <br /> For instance, for a particular alternative, the average response time may be given by a certain type of a probability density function with a specified mean and variance. <br /> In designing system experiments, we could say the system input shall be determined by a certain type of a probability density function with a specified mean and variance. <br />
  • I don’t recommend using the product combining function for the whole data base. I think it would be appropriate for a criterion of benefit to cost ratio. <br />
  • In this tradeoff study the Cost and Performance criteria were summed together with weights that totaled 1.0: Weight_cost × Cost Score + Weight_performance × Performance Score = alternative rating, with Weight_cost + Weight_performance = 1.0. These functions were derived from simulations. They show that for resource-poor packs the single elimination race is the best, whereas for resource-rich packs the round robins are best.
  • These functions were derived from prototype races. They show that for resource-poor packs the double elimination race is the best, whereas for resource-rich packs the round robins are best.
  • For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1. <br /> This function gives more weight to the weights of importance. <br />
  • The most important parameter is S11. Therefore, we should gather more experimental data and interview more domain experts for this parameter: we should spend extra resources on this parameter. The minus signs for S12 and S22 merely mean that an increase in either of these parameters will cause a decrease in the performance index. <br /> Note that, for example, because the sensitivity of F with respect to Wt1 contains S11 and S12, there will be interaction terms. <br />
  • SF_S11 = 20 × ((0.735 − 0.29) − (0.71 − 0.29)) = 20 × 0.025 = 0.5
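    A rough way to produce numbers like the one above is a finite-difference semirelative sensitivity: perturb one parameter of the performance index F, estimate the partial derivative, and multiply by the parameter's nominal value. The function and numbers below are illustrative, not the original study's.

        # Finite-difference semirelative sensitivity of a sum-combining performance index.
        def F(weights, scores):
            return sum(w * s for w, s in zip(weights, scores))

        def semirelative_sensitivity(i, weights, scores, delta=1e-6):
            w = list(weights)
            base = F(w, scores)
            w[i] += delta
            dF_dw = (F(w, scores) - base) / delta
            return dF_dw * weights[i]                   # multiply by the nominal parameter value

        weights, scores = [0.6, 0.4], [0.7, 0.3]        # illustrative values
        print([round(semirelative_sensitivity(i, weights, scores), 3) for i in range(2)])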
  • We have k=2 criteria: cost and quantity and i=8 alternatives. <br /> The 3-liter bottle may not look like it is closest to the Ideal Point because the horizontal and vertical scales are not the same. <br />
  • This table used the modified Minkowski metrics. <br />
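    The exact "modified" Minkowski metric used in that table is not reproduced here; as a sketch, ranking alternatives by an ordinary Minkowski distance to the Ideal Point looks like this (the criteria values and the ideal point are illustrative).

        # Rank alternatives by Minkowski distance to an ideal point.
        # p = 1 gives the city-block metric, p = 2 the Euclidean metric;
        # larger p emphasizes the worst criterion.
        def minkowski(point, ideal, p=2.0):
            return sum(abs(a - b) ** p for a, b in zip(point, ideal)) ** (1.0 / p)

        ideal = (1.0, 1.0)                              # best imaginable (cost score, quantity score)
        alternatives = {
            "12-oz can":      (0.90, 0.30),
            "2-liter bottle": (0.70, 0.80),
            "3-liter bottle": (0.60, 0.95),
        }
        for name, s in sorted(alternatives.items(), key=lambda kv: minkowski(kv[1], ideal)):
            print(f"{name:15s} distance = {minkowski(s, ideal):.3f}")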
  • You do not have to present all three of these decision tree examples. <br />
  • The baseball manager must make a decision about his pitcher. <br /> He could use a tradeoff study, as illustrated above, or a decision tree as shown in the next slide. <br />
  • In Abbott and Costello's famous routine "Who's on First?", Who was the first baseman and the pitcher was Tomorrow, but I'm getting too silly now.
  • These data are for Barry Bonds. <br /> J. P. Reiter, Should teams walk or pitch to Barry Bonds? Baseball Research Journal, 32, (2004), 63-69. <br /> J. F. Jarvis, An analysis of the intentional base-on-ball, Presented at SABR-29, Phoenix, AZ, 1999 ( http://knology.net/~johnfjarvis/IBBanalysis.html ) <br /> Maybe we should first ask if we are playing in San Francisco’s AT&T park, where the average wind speed is 10 mph from home plate toward right field. <br />
  • Getting into the decision maker’s head is a segue to the next slide. <br />
  • Reference for the Myers-Briggs model: D. Keirsey, Please Understand Me II, Prometheus Nemesis Book Company, 1998. <br />
  • Faced with a decision between two packages of ground beef, one labeled "95% lean," the other "5% fat," which would you choose? The meat is exactly the same, but most people would pick "95% lean." The language used to describe options often influences what people choose, a phenomenon behavioral economists call the framing effect. Some researchers have suggested that this effect results from unconscious emotional reactions.
  • This is like the Wheel of Fortune. You spin the wheel and see where the arrow points. <br /> The black areas on the pie charts are the probabilities of winning: 0.09 and 0.94. <br /> The expected values of the two bets are $5.103 and $5.076: this is close enough to be called equal. <br /> Lichtenstein and Slovic (1971) reported that, when given a choice, most people preferred the P bet, <br /> but wanted more money to sell the $ bet (median=$7.04) than P bet ($4.00). <br /> Attractiveness ratings (e.g., 0=very very unattractive to 80=very very attractive) showed an even stronger preference for the P bet. <br /> This is stronger than the previous slide on phrasing, because the same subjects are changing their minds depending on the phrasing. <br /> Lichtenstein and Slovic (1971). Reversals of preferences between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55. <br />
  • You wrote down a lot of criteria, but obviously there were a lot of important ones that you neglected. The stomach test brought them to the surface. <br /> You cannot use this test very often. <br /> And it only works for really important things. <br /> This test comes from Eb Rechtin. <br />
  • Anywhere I put a use case name I set it in the Verdana font.
  • Re: the title meta summary <br /> Aristotle wrote his treatise on Physics. Then after that he wrote his treatise on Philosophy, which he called Meta Physics or after Physics. <br /> Philosophy is at a higher level of abstraction than Physics. <br />

Decision making: Presentation Transcript

  • Decision Analysis and Tradeoff Studies. Terry Bahill, Systems and Industrial Engineering, University of Arizona, terry@sie.arizona.edu. © 2000-10, Bahill. This file is located in http://www.sie.arizona.edu/sysengr/slides/
  • Acknowledgement: This research was supported by AFOSR/MURI F49620-03-1-0377.
  • Timing estimate for this course* • Introduction (10 minutes) • Decision analysis and resolution (49 slides, 40 minutes) • San Diego Airport example (7 slides, 5 minutes) • The tradeoff study process and potential problems (238 slides, 145 minutes) • Summary (6 slides, 10 minutes) • Dog system exercise (140 minutes) • Mathematical summary of tradeoff methods (38 slides, 70 minutes) • Course summary (10 minutes) • Breaks (50 minutes) • Total (480 minutes)
  • Outline** • This course starts with a brief model of human decision making (slides 14-27). Then it presents a crisp description of the tradeoff study processes (slides 14-67), which includes a simple example of choosing between two combining methods. • Then it shows a complex, but well-known tradeoff study example that most people will be familiar with: the San Diego airport site selection (slides 68-75). • Then we go back and examine many difficulties that could arise when designing a tradeoff study; we show many methods that have been used to overcome these potential problems (slides 76-338). • The course is summarized with slides 339-346. • In the Dog System Exercise, students create their own solutions for a tradeoff study. These exercises will be computer based. The students complete one of the exercise's eight parts. Then we give them our solutions. They complete another portion and we give them another solution. The computers will be preloaded with all of the problems and solutions. The students will use Excel spreadsheets and a simple program for graphing scoring (utility) functions. • After the exercise there will be a mathematical summary of tradeoff methods. Students who are algebraically challenged may excuse themselves.
  • Course administration • AWO: • Course Name: Decision Making and Tradeoff Studies • Course Number: • Facilities: Telephones*, Bathrooms, Vending Machines, Exits
  • Course objectives** • The students should be able to – Understand human decision making – Use many techniques, including tradeoff studies, to help select among alternatives – Decide whether a problem is a good candidate for a tradeoff study – Establish evaluation criteria with weights of importance – Understand scoring (utility) functions – Perform a valid tradeoff study – Fix the do nothing problem – Use several different combining functions – Perform a sensitivity analysis – Be aware of many tradeoff methods – Develop a decision tree
  • Student introductions • Name • Current program assignment • Related experience
  • Decision Analysis and Resolution
  • CMMI • The Capability Maturity Model Integration (CMMI) is a collection of best practices from diverse engineering companies • Improvements to our organization will come from process improvements, not from people improvements or technology improvements • CMMI provides guidance for improving an organization's processes • One of the CMMI process areas is Decision Analysis and Resolution (DAR)
  • DAR • Programs and Departments select the decision problems that require DAR and incorporate them in their plans (e.g. SEMPs) • DAR is a common process • Common processes are tools that the user gets, tailors and uses • DAR is invoked throughout the whole program lifecycle whenever a critical decision is to be made • DAR is invoked by IPT leads on programs, financial analysts, program core teams, etc. • Invoke the DAR Process in work instructions, in gate reviews, in phase reviews or with other triggers, which can be used anytime in the system life cycle
  • Typical decisions • Decision problems that may require a formal decision process – Tradeoff studies – Bid/no-bid – Make-reuse-buy – Formal inspection versus checklist inspection – Tool and vendor selection – Cost estimating – Incipient architectural design – Hiring and promotions – Helping your customer to choose a solution
  • It's not done just once • A tradeoff study is not something that you do once at the beginning of a project. • Throughout a project you are continually making tradeoffs – creating team communication methods – selecting components – choosing implementation techniques – designing test programs – maintaining schedule • Many of these tradeoffs should be formally documented.
  • Purpose** "In all decisions you gain something and lose something. Know what they are and do it deliberately."
  • Tradeoff Studies
  • A simple tradeoff study
  • 03/24/14 © 2009 Bahill16 CMMI’s DAR process
DAR Specific Practice
• Decide if formal evaluation is needed — when to do a tradeoff study
• Establish Evaluation Criteria, Identify Alternative Solutions, Select Evaluation Methods, Evaluate Alternatives, Select Preferred Solutions — what is in a tradeoff study
  • 03/24/14 © 2009 Bahill17 Tradeoff Study ProcessTradeoff Study Process** These tasks are drawn serially, but they are not performed in a serial manner. Rather, it is an iterative process with many feedback loops, which are not shown. Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL ∑
  • 03/24/14 © 2009 Bahill18 When creating a processWhen creating a process the most important facets are • illustrating tasks that can be done in parallel • suggesting feedback loops • configuration management • including a process to improve the process
  • 03/24/14 © 2009 Bahill19 Humans make four types of decisions:Humans make four types of decisions: • Allocating resources among competing projects* • Generating plans, schedules and novel ideas • Negotiating agreements • Choosing amongst alternatives  Alternatives can be examined in series or parallel.  When examined in series it is called sequential search  When examined in parallel it is called a tradeoff or a trade study  “Tradeoff studies address a range of problems from selecting high-level system architecture to selecting a specific piece of commercial off the shelf hardware or software. Tradeoff studies are typical outputs of formal evaluation processes.”*
  • 03/24/14 © 2009 Bahill20 HistoryHistory Ben Franklin’s letter* to Joseph Priestly outlined one of the first descriptions of a tradeoff study.
  • 03/24/14 © 2009 Bahill21 Decide if Formal Evaluation is NeededDecide if Formal Evaluation is Needed Decide ifDecide if FormalFormal Evaluation isEvaluation is NeededNeeded Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill22 Is formal evaluation needed? Companies should have policies for when to do formal decision analysis. Criteria include • When the decision is related to a moderate or high-risk issue • When the decision affects work products under configuration management • When the result of the decision could cause significant schedule delays • When the result of the decision could cause significant cost overruns • On material procurement of the 20 percent of the parts that constitute 80 percent of the total material costs
  • 03/24/14 © 2009 Bahill23 Guidelines for formal evaluationGuidelines for formal evaluation • When the decision is selecting one or a few alternatives from a list • When a decision is related to major changes in work products that have been baselined • When a decision affects the ability to achieve project objectives • When the cost of the formal evaluation is reasonable when compared to the decision’s impact • On design-implementation decisions when technical performance failure may cause a catastrophic failure • On decisions with the potential to significantly reduce design risk, engineering changes, cycle time or production costs
  • 03/24/14 © 2009 Bahill24 Establish Evaluation CriteriaEstablish Evaluation Criteria Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods EstablishEstablish EvaluationEvaluation CriteriaCriteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill25 Establish evaluation criteriaEstablish evaluation criteria** • Establish and maintain criteria for evaluating alternatives • Each criterion must have a weight of importance • Each criterion should link to a tradeoff requirement, i.e. a requirement whose acceptable value can be more or less depending on quantitative values of other requirements. • Criteria must be arranged hierarchically. The top-level may be performance, cost, schedule and risk.  Program Management should prioritize these four criteria at the beginning of the project and make sure everyone knows the priorities. • All companies should have a repository of generic evaluation criteria.
  • 03/24/14 © 2009 Bahill26 What will you eat for lunch today? •In class exercise. •Write some evaluation criteria that will help you decide.*
  • 03/24/14 © 2009 Bahill27 Killer trades •Evaluating alternatives is expensive. •Therefore, early in the tradeoff study, identify very important requirements* that can eliminate many alternatives. •These requirements produce killer criteria.** •Subsequent killer trades can often eliminate 90% of the possible alternatives.
  • 03/24/14 © 2009 Bahill28 Identify Alternative SolutionsIdentify Alternative Solutions Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria IdentifyIdentify AlternativeAlternative SolutionsSolutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill29 Identify alternative solutionsIdentify alternative solutions • Identify alternative solutions for the problem statement • Consider unusual alternatives in order to test the system requirements* • Do not list alternatives that do not satisfy all mandatory requirements** • Consider use of commercial off the shelf and in- house entities*** • Use killer trades to eliminate thousands of infeasible alternatives
  • 03/24/14 © 2009 Bahill30 What will you eat for lunch today?What will you eat for lunch today? •In class exercise. •List some alternatives for today’s lunch.*
  • 03/24/14 © 2009 Bahill31 Select Evaluation MethodsSelect Evaluation Methods Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement SelectSelect EvaluationEvaluation MethodsMethods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill32 Select evaluation methodsSelect evaluation methods • Select the source of the evaluation data and the method for evaluating the data • Typical sources for evaluation data include approximations, product literature, analysis, models, simulations, experiments and prototypes* • Methods for combining data and evaluating alternatives include Multi-Attribute Utility Technique (MAUT), Ideal Point, Search Beam, Fuzzy Databases, Decision Trees, Expected Utility, Pair- wise Comparisons, Analytic Hierarchy Process (AHP), Financial Analysis, Simulation, Monte Carlo, Linear Programming, Design of Experiments, Group Techniques, Quality Function Deployment (QFD), radar charts, forming a consensus and Tradeoff Studies
  • 03/24/14 © 2009 Bahill33 Collect evaluation dataCollect evaluation data •Using the appropriate source (approximations, product literature, analysis, models, simulations, experiments or prototypes) collect data for evaluating each alternative.
  • 03/24/14 © 2009 Bahill34 Evaluate AlternativesEvaluate Alternatives Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria EvaluateEvaluate AlternativesAlternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill35 Evaluate alternativesEvaluate alternatives • Evaluate alternative solutions using the evaluation criteria, weights of importance, evaluation data, scoring functions and combining functions. • Evaluating alternative solutions involves analysis, discussion and review. Iterative cycles of analysis are sometimes necessary. Supporting analyses, experimentation, prototyping, or simulations may be needed to substantiate scoring and conclusions.
  • 03/24/14 © 2009 Bahill36 Select Preferred SolutionsSelect Preferred Solutions Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives SelectSelect PreferredPreferred SolutionsSolutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review PreferredPreferred SolutionsSolutions Present Results Present Results Put In PPAL Put In PPAL
  • 03/24/14 © 2009 Bahill37 Select preferred solutionsSelect preferred solutions • Select preferred solutions from the alternatives based on evaluation criteria. • Selecting preferred alternatives involves weighing and combining the results from the evaluation of alternatives. Many combining methods are available. • The true value of a formal decision process might not be listing the preferred alternatives. More important outputs are stimulating thought processes and documenting their outcomes. • A sensitivity analysis will help validate your recommendations. • The least sensitive criteria should be given weights of 0.
  • 03/24/14 © 2009 Bahill38 Perform Expert ReviewPerform Expert Review Decide if Formal Evaluation is Needed Decide if Formal Evaluation is Needed Problem Statement Problem Statement Select Evaluation Methods Select Evaluation Methods Establish Evaluation Criteria Establish Evaluation Criteria Identify Alternative Solutions Identify Alternative Solutions Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Evaluate Alternatives Evaluate Alternatives Select Preferred Solutions Select Preferred Solutions Formal Evaluations Formal Evaluations Perform Expert Review Perform Expert Review Preferred Solutions Preferred Solutions Present Results Present Results Put In PPAL Put In PPAL ∑
  • 03/24/14 © 2009 Bahill39 Perform expert reviewPerform expert review11 • Formal evaluations should be reviewed* at regular gate reviews such as SRR, PDR and CDR or by special expert reviews • Technical reviews started about the same time as Systems Engineering, in 1960. The concept was formalized with MIL-STD-1521 in 1972. • Technical reviews are still around, because there is evidence that they help produce better systems at less cost.
  • 03/24/14 © 2009 Bahill40 Perform expert reviewPerform expert review22 • Technical reviews evaluate the product of an IPT* • They are conducted by a knowledgeable board of specialists including supplier and customer representatives • The number of board members should be less than the number of IPT members • But board expertise should be greater than the IPT’s experience base
  • 03/24/14 © 2009 Bahill41 Who should come to the review?Who should come to the review? • Program Manager • Chief Systems Engineer • Review Inspector • Lead Systems Engineer • Domain Experts • IPT Lead • Facilitator • Stakeholders for this decision  Builder  Customer  Designer  Tester  PC Server • Depending on the decision, the Lead Hardware Engineer and the Lead Software Engineer
  • 03/24/14 © 2009 Bahill42 Present resultsPresent results Present the results* of the formal evaluation to the original decision maker and other relevant stakeholders.
  • 03/24/14 © 2009 Bahill43 Put in the PALPut in the PAL • Formal evaluations reviewed by experts should be put in the organizational Process Asset Library (PAL) or the Project Process Asset Library (PPAL) • Evaluation data for tradeoff studies come from approximations, analysis, models, simulations, experiments and prototypes. Each time better data is obtained the PAL should be updated. • Formal evaluations should be designed with reuse in mind.
  • 03/24/14 © 2009 Bahill44 Closed Book Quiz, 5 minutesClosed Book Quiz, 5 minutes Fill in the empty boxesFill in the empty boxes Problem Statement Problem Statement Proposed Alternatives Proposed Alternatives Evaluation Criteria Evaluation Criteria Formal Evaluations Formal Evaluations Preferred Solutions Preferred Solutions∑
  • 03/24/14 © 2009 Bahill45 Tradeoff Study ExampleTradeoff Study Example
  • 03/24/14 © 2009 Bahill46 Example: What method shouldExample: What method should we use for evaluating alternatives?we use for evaluating alternatives?** • Is formal evaluation needed? • Check the Guidance for Formal Evaluations • We find that many of its criteria are satisfied including “On decisions with the potential to significantly reduce design risk … cycle time ...” • Establish evaluation criteria • Ease of Use • Familiarity • Killer criterion • Engineers must think that use of the technique is intuitive.
  • 03/24/14 © 2009 Bahill47 Example (continued)Example (continued)11 • Identify alternative solutions  Linear addition of weight times scores, Multiattribute Utility Theory (MAUT).* This method is often called a “trade study.” It is often implemented with an Excel spreadsheet.  Analytic Hierarchy Process (AHP)**
  • 03/24/14 © 2009 Bahill48 Example (continued)Example (continued)22 • Select evaluation methods  The evaluation data will come from expert opinion  Common methods for combining data and evaluating alternatives include: Multi-Attribute Utility Technique (MAUT), Decision Trees, Analytic Hierarchy Process (AHP), Pair-wise Comparisons, Ideal Point, Search Beam, etc.  In the following slides we will use two methods: linear addition of weight times scores (MAUT) and the Analytic Hierarchy Process (AHP)*
  • 03/24/14 © 2009 Bahill49 Example (continued)Example (continued)33 • Evaluate alternatives  Let the weights and evaluation data be integers between 1 and 10, with 10 being the best. The computer can normalize the weights if necessary.
  • 03/24/14 © 2009 Bahill50 Multi-Attribute Utility Technique (MAUT)1
Assess evaluation data* row by row
Criteria                    Weight of Importance   MAUT   AHP
Ease of Use                                        8      4
Familiarity
Sum of weight times score
  • 03/24/14 © 2009 Bahill51 Multi-Attribute Utility Technique (MAUT)2
Criteria                    Weight* of Importance   MAUT   AHP
Ease of Use                 9                       8      4
Familiarity                 3                       9      2
Sum of weight times score                           99     42
The winner is MAUT.
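A minimal Python sketch of this linear weight-times-score combination, using the weights and scores from the table above (the variable names are illustrative, not from the course):

weights = {"Ease of Use": 9, "Familiarity": 3}
scores = {
    "MAUT": {"Ease of Use": 8, "Familiarity": 9},
    "AHP":  {"Ease of Use": 4, "Familiarity": 2},
}
# Sum of weight times score for each alternative
totals = {alt: sum(weights[c] * s[c] for c in weights) for alt, s in scores.items()}
print(totals)                       # {'MAUT': 99, 'AHP': 42}
print(max(totals, key=totals.get))  # MAUT is the winner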
  • 03/24/14 © 2009 Bahill52 Analytic Hierarchy Process (AHP)
Verbal scale                                          Numerical value
Equally important, likely or preferred                1
Moderately more important, likely or preferred        3
Strongly more important, likely or preferred          5
Very strongly more important, likely or preferred     7
Extremely more important, likely or preferred         9
  • 03/24/14 © 2009 Bahill53 AHP, make comparisons
Create a matrix with the criteria on the diagonal and make pair-wise comparisons*
              Ease of Use              Familiarity
Ease of Use   1                        3 (Ease of Use is moderately more important than Familiarity)
Familiarity   1/3 (reciprocal of 3)    1
  • 03/24/14 © 2009 Bahill54 AHP, compute weights • Create a matrix • Square the matrix • Add the rows • Normalize*
$$\begin{bmatrix} 1 & 3 \\ \tfrac{1}{3} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 3 \\ \tfrac{1}{3} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ \tfrac{2}{3} & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 8 \\ 2.67 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.75 \\ 0.25 \end{bmatrix}$$
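A minimal Python sketch of this square-the-matrix, add-the-rows, normalize procedure (assuming NumPy is available; the function name ahp_weights is illustrative). Repeated squaring converges toward the principal-eigenvector weights; one squaring, as on the slide, is usually close enough. The second call uses the 3×3 lunch-selection matrix that appears a few slides later.

import numpy as np

def ahp_weights(M, squarings=1):
    M = np.array(M, dtype=float)
    for _ in range(squarings):
        M = M @ M                        # square the pair-wise comparison matrix
    row_sums = M.sum(axis=1)             # add the rows
    return row_sums / row_sums.sum()     # normalize

print(ahp_weights([[1, 3], [1/3, 1]]))   # [0.75 0.25]
print(ahp_weights([[1, 5, 7],
                   [1/5, 1, 3],
                   [1/7, 1/3, 1]]))      # approximately [0.73 0.19 0.08]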
  • 03/24/14 © 2009 Bahill55 In-class exerciseIn-class exercise • Use these criteria to help select your lunch today. Closeness, distance to the venue. Is it in the same building, the next building or do you have to get in a car and drive? Tastiness, including gustatory delightfulness, healthiness, novelty and savoriness. Price,* total purchase price including tax and tip.
  • 03/24/14 © 2009 Bahill56 To help select lunch today1
• closeness is ??? more important than tastiness,
• closeness is ??? more important than price,
• tastiness is ??? more important than price.
              Closeness   Tastiness   Price
Closeness
Tastiness
Price
  • 03/24/14 © 2009 Bahill57 To help select lunch today2
• closeness is strongly more important (5) than tastiness,
• closeness is very strongly more important (7) than price,
• tastiness is moderately more important (3) than price.
              Closeness   Tastiness   Price
Closeness     1           5           7
Tastiness                 1           3
Price                                 1
  • 03/24/14 © 2009 Bahill58 To help select lunch today3
$$\begin{bmatrix} 1 & 5 & 7 \\ \tfrac{1}{5} & 1 & 3 \\ \tfrac{1}{7} & \tfrac{1}{3} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 5 & 7 \\ \tfrac{1}{5} & 1 & 3 \\ \tfrac{1}{7} & \tfrac{1}{3} & 1 \end{bmatrix} = \begin{bmatrix} 3 & 12.3 & 29 \\ 0.8 & 3 & 7.4 \\ 0.4 & 1.4 & 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 44.3 \\ 11.2 \\ 4.8 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.73 \\ 0.19 \\ 0.08 \end{bmatrix}$$
              Closeness   Tastiness   Price   Weight of Importance
Closeness     1           5           7       0.73
Tastiness     1/5         1           3       0.19
Price         1/7         1/3         1       0.08
  • 03/24/14 © 2009 Bahill59 AHP, get scores
Compare each alternative on the first criterion, Ease of Use. In terms of Ease of Use, MAUT is slightly preferred (2); the reciprocal entry is 1/2.
       MAUT   AHP
MAUT   1      2
AHP    1/2    1
$$\begin{bmatrix} 1 & 2 \\ \tfrac{1}{2} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 2 \\ \tfrac{1}{2} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 6 \\ 3 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.67 \\ 0.33 \end{bmatrix}$$
  • 03/24/14 © 2009 Bahill60 AHP, get scores2
Compare each alternative on the second criterion, Familiarity. In terms of Familiarity, MAUT is strongly preferred (5); the reciprocal entry is 1/5.
       MAUT   AHP
MAUT   1      5
AHP    1/5    1
$$\begin{bmatrix} 1 & 5 \\ \tfrac{1}{5} & 1 \end{bmatrix} \times \begin{bmatrix} 1 & 5 \\ \tfrac{1}{5} & 1 \end{bmatrix} = \begin{bmatrix} 2 & 10 \\ 0.4 & 2 \end{bmatrix} \Rightarrow \begin{bmatrix} 12 \\ 2.4 \end{bmatrix} \Rightarrow \begin{bmatrix} 0.83 \\ 0.17 \end{bmatrix}$$
  • 03/24/14 © 2009 Bahill61 AHP, form comparison matrix** Combine with linear addition*
Criteria                    Weight of Importance   MAUT   AHP
Ease of Use                 0.75                   0.67   0.33
Familiarity                 0.25                   0.83   0.17
Sum of weight times score                          0.71   0.29
The winner is MAUT.
  • 03/24/14 © 2009 Bahill62 Example (continued)Example (continued)44 • Select Preferred Solutions  Linear addition of weight times scores (MAUT) was the preferred alternative  Now consider new criteria, such as Repeatability of Result, Consistency*, Time to Compute  Do a sensitivity analysis
  • 03/24/14 © 2009 Bahill63 Sensitivity analysis, simple
In terms of Familiarity, MAUT was strongly preferred (5) over the AHP. Now change this 5 to a 3 and to a 7.
Familiarity   Final Score, MAUT   Final Score, AHP
3             0.69                0.31
5             0.71                0.29
7             0.72                0.28
• Changing the scores for Familiarity does not change the recommended alternative.
• This is good.
• It means the Tradeoff study is robust with respect to these scores.
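A Python sketch (assuming NumPy) that reruns the AHP computation with the Familiarity preference set to 3, 5 and 7, to reproduce the sensitivity table above; the names are illustrative.

import numpy as np

def ahp_weights(M):
    M = np.array(M, dtype=float)
    M = M @ M                                   # square the comparison matrix
    row_sums = M.sum(axis=1)
    return row_sums / row_sums.sum()

criterion_weights = np.array([0.75, 0.25])      # Ease of Use, Familiarity
ease = ahp_weights([[1, 2], [1/2, 1]])          # MAUT vs AHP on Ease of Use
for pref in (3, 5, 7):
    fam = ahp_weights([[1, pref], [1/pref, 1]]) # MAUT vs AHP on Familiarity
    maut = criterion_weights @ np.array([ease[0], fam[0]])
    ahp  = criterion_weights @ np.array([ease[1], fam[1]])
    print(pref, maut, ahp)
# Compare with the slide: (3, 0.69, 0.31), (5, 0.71, 0.29), (7, 0.72, 0.28)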
  • 03/24/14 © 2009 Bahill64 Sensitivity analysis, analytic
Compute the six semirelative-sensitivity functions, which are defined as
$$\tilde{S}^{F}_{\beta} = \left. \frac{\partial F}{\partial \beta}\,\beta \right|_{NOP}$$
which reads: the semirelative-sensitivity function of the performance index F with respect to the parameter β is the partial derivative of F with respect to β times β, with everything evaluated at the normal operating point (NOP).
  • 03/24/14 © 2009 Bahill65 Sensitivity analysis2
For the performance index use the alternative rating for MAUT minus the alternative rating for AHP*
F = F1 − F2 = Wt1×S11 + Wt2×S21 − Wt1×S12 − Wt2×S22
Criteria                    Weight of Importance   MAUT   AHP
Ease of Use                 Wt1                    S11    S12
Familiarity                 Wt2                    S21    S22
Sum of weight times score                          F1     F2
  • 03/24/14 © 2009 Bahill66 Sensitivity analysis3
The semirelative-sensitivity functions*
$$\tilde{S}^{F}_{Wt_1} = (S_{11} - S_{12})\,Wt_1 = 0.26$$
$$\tilde{S}^{F}_{Wt_2} = (S_{21} - S_{22})\,Wt_2 = 0.16$$
$$\tilde{S}^{F}_{S_{11}} = Wt_1 S_{11} = 0.50$$
$$\tilde{S}^{F}_{S_{21}} = Wt_2 S_{21} = 0.21$$
$$\tilde{S}^{F}_{S_{12}} = -Wt_1 S_{12} = -0.25$$
$$\tilde{S}^{F}_{S_{22}} = -Wt_2 S_{22} = -0.04$$
S11 is the most important parameter. So go back and reevaluate it.
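A Python sketch that checks these six values numerically with central finite differences at the normal operating point; the dictionary values come from the earlier comparison matrix, the function names are illustrative, and the printed numbers should match the slide within rounding.

nop = {"Wt1": 0.75, "Wt2": 0.25, "S11": 0.67, "S21": 0.83, "S12": 0.33, "S22": 0.17}

def F(p):
    return p["Wt1"]*p["S11"] + p["Wt2"]*p["S21"] - p["Wt1"]*p["S12"] - p["Wt2"]*p["S22"]

def semirelative_sensitivity(beta, h=1e-6):
    hi, lo = dict(nop), dict(nop)
    hi[beta] += h
    lo[beta] -= h
    dF_dbeta = (F(hi) - F(lo)) / (2*h)   # partial derivative at the NOP
    return dF_dbeta * nop[beta]          # times the parameter value

for beta in nop:
    print(beta, semirelative_sensitivity(beta))
# S11 has the largest magnitude, so it is the most important parameter.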
  • 03/24/14 © 2009 Bahill67 Sensitivity analysisSensitivity analysis44 • The most important parameter is the score for MAUT on the criterion Ease of Use • We should go back and re-evaluate the derivation of that score Ease of Use MAUT In terms of Ease of Use, MAUT is slightly preferred (2) 1/2 AHP
  • 03/24/14 © 2009 Bahill68
  • 03/24/14 © 2009 Bahill69 Example (continued)Example (continued)55 • Perform expert review of the tradeoff study. • Present results to original decision maker. • Put tradeoff study in PAL. • Improve the DAR process.  Add some other techniques, such as AHP, to the DAR web course  Fix the utility curves document  Add image theory to the DAR process  Change linkages in the documentation system  Create a course, Decision Making and Tradeoff Studies
  • 03/24/14 © 2009 Bahill70 Quintessential example A Tradeoff Study of Tradeoff Study Tools is available at http://www.sie.arizona.edu/sysengr/sie554/tradeoffStudyOfTradeoffStudyTools.doc
  • San Diego County Regional Airport Tradeoff Study This tradeoff study has cost $17 million. http://www.san.org/authority/assp/index.asp http://www.san.org/airport_authority/archives/index.asp#master_plan
  • 03/24/14 © 2009 Bahill72 The evaluation criteria tree*
Operational Requirements
  Optimal Airport Layout
  Runway Alignment
  Terrain
  Weather
  Existing land uses
  Wildlife Hazards
  Joint Use and National Defense Compatibility
  Expandability
Ground Access
  Travel Time, percentage of population in three travel time segments
  Roadway Network Capacity, existing and projected daily roadway volumes
  Highway and Transit Accessibility, distance to existing and planned freeways
Environmental Impacts
  Quantity of residential land to be displaced by the airport development
  Noise Impact, population within each of three specific decibel ranges
  Biological Resources
    Wetlands
    Protected species
  Water quality
  Significant cultural resources
Site Development Evaluations
  • 03/24/14 © 2009 Bahill73 Top-level criteriaTop-level criteria 1. Operational Requirements 2. Ground Access 3. Environmental Impacts 4. Site Development Evaluations These four evaluation criteria are then decomposed into a hierarchy
  • 03/24/14 © 2009 Bahill74 Operational RequirementsOperational Requirements Optimal Airport Layout Runway Alignment Terrain, weather and existing land uses Wildlife Hazards Joint Use and National Defense Compatibility Expandability
  • 03/24/14 © 2009 Bahill75 Ground AccessGround Access • Travel Time, percentage of population in three travel time segments • Roadway Network Capacity, existing and projected daily roadway volumes • Highway and Transit Accessibility, distance to existing and planned freeways
  • 03/24/14 © 2009 Bahill76 Environmental ImpactsEnvironmental Impacts • Quantity of residential land to be displaced by the airport development • Noise Impact, population within each of three specific decibel ranges • Biological Resources  Wetlands  Protected species • Water quality • Significant cultural resources
  • 03/24/14 © 2009 Bahill77 Alternative Locations • Miramar Marine Corps Air Station • East Miramar • North Island Naval Air Station • March Air Force Base • Marine Corps Base Camp Pendleton • Imperial County desert site • Campo and Borrego Springs • Lindbergh Field • Off-Shore floating airport • Corte Madera Valley
  • 03/24/14 © 2009 Bahill78
  • Tradeoff Studies:Tradeoff Studies: the Process and Potentialthe Process and Potential ProblemsProblems**
  • 03/24/14 © 2009 Bahill80 Outline of this sectionOutline of this section • Problem statement • Models of human decision making • Components of a tradeoff study  Problem statement  Evaluation criteria  Weights of importance  Alternative solutions  The do nothing alternative  Different distributions of alternatives  Evaluation data  Scoring functions  Scores  Combining functions  Preferred alternatives  Sensitivity analysis • Other tradeoff techniques  The ideal point  The search beam  Fuzzy sets  Decision trees • The wrong answer • Tradeoff study on tradeoff study tools • Summary
  • 03/24/14 © 2009 Bahill81 ReferenceReference J. Daniels, P. W. Werner and A. T. Bahill, Quantitative Methods for Tradeoff Analyses, Systems Engineering, 4(3), 199-212, 2001.
  • 03/24/14 © 2009 Bahill82 PurposePurpose The systems engineer’s job is to elucidate domain knowledge and capture the values and preferences of the decision maker, so that the decision maker (and other stakeholders) will have confidence in the decision. The decision maker balances effort with confidence*
  • 03/24/14 © 2009 Bahill83
  • 03/24/14 © 2009 Bahill84 Tradeoff studiesTradeoff studies • Humans exhibit four types of decision making activities 1. Allocating resources among competing projects 2. Making plans, which includes scheduling 3. Negotiating agreements 4. Choosing alternatives from a list  Series  Parallel, a tradeoff study 
  • 03/24/14 © 2009 Bahill85 A typical tradeoff study matrix
                     Qualitative   Normalized   Scoring function      Alternative-A                                       Alternative-B
Criteria             weight        weight                             Input value     Output score   Score times weight   Input value     Output score   Score times weight
Criterion-1          1 to 10       0 to 1       Type and parameters   Natural units   0 to 1         0 to 1               Natural units   0 to 1         0 to 1
Criterion-2          1 to 10       0 to 1       Type and parameters   Natural units   0 to 1         0 to 1               Natural units   0 to 1         0 to 1
Sum                                                                                                  0 to 1                                              0 to 1
  • 03/24/14 © 2009 Bahill86 Pinewood Derby*
  • 03/24/14 © 2009 Bahill87 Part of a Pinewood Derby tradeoff study
Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring
Evaluation criteria           Input value   Score   Weight   Score times weight
1. Average Races per Car      6             0.94    0.20     0.19
2. Number of Ties             0             1       0.20     0.20
3. Happiness                                0.87    0.60     0.52
                              Qualitative weight   Normalized weight   Input value   Scoring function   Output score   Score times weight
3.1 Percent Happy Scouts      10                   0.50                96                               0.98           0.49
3.2 Number of Irate Parents   5                    0.25                1                                0.50           0.13
3.3 Number of Lane Repeats    5                    0.25                0                                1.00           0.25
Sum                           0.87 (Happiness)   0.91 (overall)
http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
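A Python sketch of the hierarchical roll-up in this table: the three Happiness sub-criteria are combined first, and that result is then combined with the top-level criteria (the list names and structure are illustrative).

happiness_sub = [  # (normalized weight, output score)
    (0.50, 0.98),  # 3.1 Percent Happy Scouts
    (0.25, 0.50),  # 3.2 Number of Irate Parents
    (0.25, 1.00),  # 3.3 Number of Lane Repeats
]
happiness = sum(w * s for w, s in happiness_sub)   # 0.865, shown as 0.87 in the table

top_level = [      # (weight, score)
    (0.20, 0.94),  # 1. Average Races per Car
    (0.20, 1.00),  # 2. Number of Ties
    (0.60, happiness),
]
overall = sum(w * s for w, s in top_level)         # approximately 0.91
print(happiness, overall)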
  • 03/24/14 © 2009 Bahill88 When do people do tradeoff studies?When do people do tradeoff studies? • Buying a car • Buying a house • Selecting a job • These decisions are important, you have lots of time to make the decision and alternatives are apparent.* • We would not use a tradeoff study to select a drink for lunch or to select a husband or wife. • You would also do a tradeoff study when your boss asks you to do one.
  • 03/24/14 © 2009 Bahill89 Do the tradeoff studies upfrontDo the tradeoff studies upfront before all of the costs are locked inbefore all of the costs are locked in**
  • 03/24/14 © 2009 Bahill90 Why discuss this topic?Why discuss this topic? • Many multicriterion decision-making techniques exist, but few decision-makers use them. • Perhaps, because  They seem complicated  Different techniques have given different preferred alternatives  Different life experiences give different preferred alternatives  People don’t think that way*
  • 03/24/14 © 2009 Bahill91 Models of Human Decision MakingModels of Human Decision Making
  • 03/24/14 © 2009 Bahill92 Series versus parallelSeries versus parallel11 • Looking at alternatives in parallel is not an innate human action. • Usually people select one hypothesis and work on it until it is disproved, then they switch to a new alternative: that’s the scientific method. • Such serial processing of alternatives has been demonstrated for  Fire fighters  Airline pilots  Physicians  Detectives  Baseball managers  People looking for restaurants*
  • 03/24/14 © 2009 Bahill93 Series versus parallelSeries versus parallel22 • V. V. Krishnan has a model of animals searching for habitat (home, breeding area, hunting area, etc.) • It uses the value of each habitat and the cost of moving between sites. • When travel between sites is inexpensive, e. g. birds or honeybees* searching for a nest site, the search is often a tradeoff study comparing alternatives in parallel. • When travel is expensive, e.g. beavers searching for a dam site, the search is usually sequential.
  • 03/24/14 © 2009 Bahill94 Series versus parallelSeries versus parallel33 ** • If a person is looking for a new car, he or she might perform a tradeoff study. • Whereas a person looking for a used car might use a sequential search, because the availability of cars would change day by day.
  • 03/24/14 © 2009 Bahill95 The need for changeThe need for change** •People do not make good decisions. •A careful tradeoff study will help you overcome human ineptitude and thereby make better decisions.
  • 03/24/14 © 2009 Bahill96 Rational decisionsRational decisions** • One goal • Perfect information • The optimal course of action can be described • This course maximizes expected value • This is a prescriptive model. We tell people that, in an ideal world, this is how they should make decisions.
  • 03/24/14 © 2009 Bahill97 Satisficing* • When making decisions there is always uncertainty, too little time and insufficient resources to explore the whole problem space. • Therefore, people cannot make rational decisions. • The term satisficing was coined by Nobel Laureate Herb Simon in 1955. • Simon proposed that people do not attempt to find an optimal solution. Instead, they search for alternatives that are good enough, alternatives that satisfice.
  • 03/24/14 © 2009 Bahill98
  • 03/24/14 © 2009 Bahill99 Humans are not rational*1 • Mark Twain said,  “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” • Humans are often very certain of knowledge that is false.  What American city is directly north of Santiago, Chile?  If you travel from Los Angeles to Reno, Nevada, in what direction would you travel? • Most humans think that there are more words that start with the letter r than there are with r as the third letter.
  • 03/24/14 © 2009 Bahill100 IllusionsIllusions** • We call these cognitive illusions. • We believe them with as much certainty as we believe optical illusions.
  • 03/24/14 © 2009 Bahill101 The Müller-Lyer Illusion*
  • 03/24/14 © 2009 Bahill102
  • 03/24/14 © 2009 Bahill103
  • 03/24/14 © 2009 Bahill104 Humans judge probabilities poorlyHumans judge probabilities poorly**
  • 03/24/14 © 2009 Bahill105 Monty Hall ParadoxMonty Hall Paradox11 **
  • 03/24/14 © 2009 Bahill106 Monty Hall ParadoxMonty Hall Paradox22 **
  • 03/24/14 © 2009 Bahill107 Monty Hall ParadoxMonty Hall Paradox33 **
  • 03/24/14 © 2009 Bahill108 Monty Hall ParadoxMonty Hall Paradox44 **
  • 03/24/14 © 2009 Bahill109 Monty Hall Paradox5 * • Now here is your problem. • Are you better off sticking to your original choice or switching? • A lot of people say it makes no difference. • There are two boxes and one contains a ten-dollar bill. • Therefore, your chances of winning are 50/50. • However, the laws of probability say that you should switch.
  • Monty Hall knew which door had the donkeyMonty Hall knew which door had the donkey 03/24/14 © 2009 Bahill110
  • 03/24/14 © 2009 Bahill111 Monty Hall ParadoxMonty Hall Paradox66 ** • The box you originally chose has, and always will have, a one-third probability of containing the ten-dollar bill. • The other two, combined, have a two-thirds probability of containing the ten-dollar bill. • But at the moment when I open the empty box, then the other one alone will have a two-thirds probability of containing the ten-dollar bill. • Therefore, your best strategy is to always switch!
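A Monte Carlo sketch in Python of the argument above: the host knows where the prize is and always opens an empty box (the names and trial count are illustrative).

import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        boxes = [0, 1, 2]
        prize = random.choice(boxes)
        choice = random.choice(boxes)
        # The host opens a box that is neither the player's choice nor the prize.
        opened = random.choice([b for b in boxes if b != choice and b != prize])
        if switch:
            choice = next(b for b in boxes if b != choice and b != opened)
        wins += (choice == prize)
    return wins / trials

print("stay:  ", play(switch=False))   # about 1/3
print("switch:", play(switch=True))    # about 2/3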
  • 03/24/14 © 2009 Bahill112 UtilityUtility • We have just discussed the right column, subjective probability. • Now we will discuss the bottom row, utility
  • 03/24/14 © 2009 Bahill113 UtilityUtility • Utility is a measure of the happiness, satisfaction or reward a person gains (or loses) from receiving a good or service. • Utilities are numbers that express relative preferences using a particular set of assumptions and methods. • Utilities include both subjectively judged value and the assessor's attitude toward risk.
  • 03/24/14 © 2009 Bahill114 Risk • Systems engineers use risk to evaluate and manage bad things that could happen, hazards. Risk is measured with the frequency (or probability) of occurrence times the severity of the consequences. • However, in economics and in the psychology of decision making, risk is defined as the variance of the expected value, uncertainty.*
Gamble   p1    x1    p2    x2    μ     σ²    Risk, uncertainty
A        1.0   $10               $10   $0    none
B        0.5   $5    0.5   $15   $10   $25   medium
C        0.5   $1    0.5   $19   $10   $81   high
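A Python sketch computing the expected value μ and the variance σ² (the economist’s measure of risk) for the three gambles in the table above:

gambles = {
    "A": [(1.0, 10)],
    "B": [(0.5, 5), (0.5, 15)],
    "C": [(0.5, 1), (0.5, 19)],
}
for name, outcomes in gambles.items():          # (probability, payoff) pairs
    mu = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mu) ** 2 for p, x in outcomes)
    print(name, mu, var)   # A: 10, 0   B: 10, 25   C: 10, 81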
  • 03/24/14 © 2009 Bahill115 Ambiguity, uncertainty and hazards*Ambiguity, uncertainty and hazards* • Hazard: Would you prefer my forest picked mushrooms or portabella mushrooms from the grocery store? • Uncertainty: Would you prefer one of my wines or a Kendall-Jackson Napa Valley merlot? • Ambiguity: Would you prefer my saffron and oyster sauce or marinara sauce?
  • 03/24/14 © 2009 Bahill116 Gains and losses are not valued equallyGains and losses are not valued equally**
  • 03/24/14 © 2009 Bahill117 Humans are not rationalHumans are not rational22 • Even if they had the knowledge and resources, people would not make rational decisions, because they do not evaluate utility rationally. • Most people would be more concerned with a large potential loss than with a large potential gain. Losses are felt more strongly than equal gains. • Which of these wagers would you prefer to take?* $2 with probability of 0.5 and $0 with probability 0.5 $1 with probability of 0.99 and $1,000,000 with probability 0.00000001 $3 with probability of 0.999999 and -$1,999,997 with probability 0.000001
  • 03/24/14 © 2009 Bahill118 Humans are not rationalHumans are not rational33 $2 with probability of 0.5 or $0 with probability 0.5 $0
  • 03/24/14 © 2009 Bahill119 Humans are not rationalHumans are not rational44 $1 with probability of 0.99 $1,000,000 with probability 0.00000001
  • 03/24/14 © 2009 Bahill120 Humans are not rationalHumans are not rational55 You owe me two million dollars! $3 with probability of 0.999999 -$1,999,997 with probability 0.000001
  • 03/24/14 © 2009 Bahill121 Humans are not rationalHumans are not rational66 • Which of these wagers would you prefer to take? $2 with probability of 0.5 or $0 with probability 0.5 $1 with probability of 0.99 or $1,000,000 with probability 0.00000001 $3 with probability of 0.999999 or -$1,999,997 with probability 0.000001 • Most engineers prefer the $2 bet • Very few people choose the $3 bet • All three have an expected value of $1
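A quick Python check that the three wagers have the same expected value; any probability not listed on the slide is assumed here to pay $0.

wagers = [
    [(0.5, 2), (0.5, 0)],
    [(0.99, 1), (0.00000001, 1_000_000)],
    [(0.999999, 3), (0.000001, -1_999_997)],
]
for w in wagers:
    print(sum(p * x for p, x in w))   # each is approximately $1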
  • 03/24/14 © 2009 Bahill122 Subjective expected utilitySubjective expected utility combines two subjective concepts: utility and probability. • Utility is a measure of the happiness or satisfaction a person gains from receiving a good or service. • Subjective probability is the person’s assessment of the frequency or likelihood of the event occurring. • The subjective expected utility is the product of the utility times the probability.
  • 03/24/14 © 2009 Bahill123 Subjective expected utility theorySubjective expected utility theory models human decision making as maximizing subjective expected utility  maximizing, because people choose the set of alternatives with the highest total utility,  subjective, because the choice depends on the decision maker’s values and preferences, not on reality (e.g. advertising improves subjective perceptions of a product without improving the product), and  expected, because the expected value is used. • This is a first-order model for human decision making. • Sometimes it is called Prospect Theory*.
  • 03/24/14 © 2009 Bahill124
  • 03/24/14 © 2009 Bahill125 Why teach tradeoff studies?Why teach tradeoff studies? • Because emotions, cognitive illusions, biases, fallacies, fear of regret and use of heuristics make humans far from ideal decision makers. • Using tradeoff studies judiciously can help you make rational decisions. • We would like to help you move your decisions from the normal human decision-making lower- right quadrant to the ideal decision-making upper-left quadrant.
  • 03/24/14 © 2009 Bahill126 Components of a tradeoff studyComponents of a tradeoff study  Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Normalized scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill127 Problem statementProblem statement • Stating the problem properly is one of the systems engineer’s most important tasks, because an elegant solution to the wrong problem is less than worthless. • Problem stating is more important than problem solving. • The problem statement  describes the customer’s needs,  states the goals of the project,  delineates the scope of the problem,  reports the concept of operations,  describes the stakeholders,  lists the deliverables and  presents the key decisions that must be made.
  • 03/24/14 © 2009 Bahill128 Components of a tradeoff studyComponents of a tradeoff study • Problem statement Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill129 Evaluation criteriaEvaluation criteria • are derived from high priority tradeoff requirements. • should be independent, but show compensation. • Each alternative will be given a value that indicates the degree to which it satisfies each criterion. This should help distinguish between alternatives. • Evaluation criteria might be things like performance, cost, schedule, risk, security, reliability and maintainability.
  • 03/24/14 © 2009 Bahill130 Evaluation criterion templateEvaluation criterion template • Name of criterion • Description • Weight of importance (priority) • Basic measure • Units • Measurement method • Input (with expected values or the domain) • Output • Scoring function (type and parameters) • Traces to (requirement of document)
  • 03/24/14 © 2009 Bahill131 Example criterion packageExample criterion package11 • Name of criterion: Percent Happy Scouts • Description: The percentage of scouts that leave the race with a generally happy feeling. This criterion was suggested by Sales and Marketing and the Customer. • Weight of importance: 10 • Basic measure:* Percentage of scouts who leave the event looking happy, contented or pleased • Units: percentage • Measurement method: Estimate by the Pinewood Derby Marshall • Input: The domain is 0 to 100%. The expected values are 70 to 100%.
  • 03/24/14 © 2009 Bahill132 Example criterion pacExample criterion packkageage22 • Output: 0 to 1 • Scoring function:* Monotonic increasing with lower threshold of 0, baseline of 90, baseline slope of 0.1 and upper threshold of 100.
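A rough Python stand-in for the scoring function described above. The course uses smooth Wymore-style scoring functions whose exact formula is not given on this slide, so this logistic curve is only an assumption that reproduces the general shape: 0 below the lower threshold, 0.5 at the baseline with the stated baseline slope, and saturating at 1 near the upper threshold.

import math

def monotonic_increasing_score(x, baseline=90, baseline_slope=0.1, lower=0, upper=100):
    if x <= lower:
        return 0.0
    if x >= upper:
        return 1.0
    k = 4 * baseline_slope                       # logistic slope at the baseline
    return 1.0 / (1.0 + math.exp(-k * (x - baseline)))

print(monotonic_increasing_score(90))   # 0.5 at the baseline
print(monotonic_increasing_score(96))   # high score; the slide's function gives 0.98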
  • 03/24/14 © 2009 Bahill133 Second example criterion packageSecond example criterion package11 ** • Name of criterion: Total Event Time • Description: The total event time will be calculated by subtracting the start time from the end time. • Weight of importance: 8 • Basic measure: Duration of the derby from start to finish. • Units: Hours • Measurement method: Observation, recording and calculation by the Pinewood Derby Marshall. • Input: The domain is 0 to 8 hours. The expected values are 1 to 6 hours.
  • 03/24/14 © 2009 Bahill134 Second example criterion pacSecond example criterion packkageage22 • Output: 0 to 1 • Scoring function: Biphasic hill shape with lower threshold of 0, lower baseline of 2, lower baseline slope of 0.67, optimum of 3.5, upper baseline of 4.5, upper baseline slope of -1 and upper threshold of 8.
  • 03/24/14 © 2009 Bahill135 Verboten criteriaVerboten criteria • Availability should not be a criterion, because it cannot be traded off.* • Assume oranges are available 6 months out of the year. • Would it make sense to do a tradeoff study selecting between apples and oranges and give oranges an availability expected value of 0.5? • Suppose your tradeoff study selects oranges, but it is October and oranges are not available: the tradeoff study makes no sense.
  • 03/24/14 © 2009 Bahill136 Mini-summaryMini-summary Evaluation criteria are quantitative measures for evaluating how well a system satisfies its performance, cost, schedule or risk requirements.
  • 03/24/14 © 2009 Bahill137 Evaluation criteria are also calledEvaluation criteria are also called • Attributes* • Objectives • Metrics • Measures • Quality characteristics • Figures of merit • Acceptance criteria “Regardless of what has gone before, the acceptance criteria determine what is actually built.”
  • 03/24/14 © 2009 Bahill138 Other similar termsOther similar terms • Index • Indicators • Factors • Scales • Measures of Effectiveness • Measures of Performance
  • 03/24/14 © 2009 Bahill139 MoE versus MoP • Generally, it is not worth the effort to debate nuances of these terms. But here is an example. • Measures of Effectiveness (MoEs) show how well (utility or value) a part of the system mission is satisfied. For an undergraduate student trying to earn a Bachelor’s degree, his or her class (Freshman, Sophomore, Junior or Senior) would be an MoE. • Measures of Performance (MoPs) show how well the system functions. For our undergraduate student, the grade point average would be an MoP.* • MoEs are often computed using several MoPs.
  • MoEs versus MoPsMoEs versus MoPs22 •The city of Tucson wants to widen Grant Road between I-10 and Alvernon Road. They want six lanes with a median, a 45 mph speed limit, and no traffic jams. •MoEs  cars per day averaged over two weeks  cars per hour between 5 and 6 PM, Monday to Friday, averaged over two weeks •MoPs  number of pot holes after one year  traffic noise (in dB) at local store fronts  smoothness of the surface  esthetics of landscaping  straightness of the road  travel time from I-10 to Alvernon  number of traffic lights 03/24/14 © 2009 Bahill140
  • MoEs versus MoPsMoEs versus MoPs33 • MoEs are typically owned by the customer • MoPs are typically owned by the contractor 03/24/14 © 2009 Bahill141
  • 03/24/14 © 2009 Bahill142 Moe* thinks at a higher level than the mop does
  • MoEs, MoPs, KPIs, FoMsMoEs, MoPs, KPIs, FoMs and evaluation criteriaand evaluation criteria • MoEs quantify how well the mission is satisfied • MoPs quantify how well the system functions • Key performance indices (KPIs) are the most important MoPs • Evaluation criteria are MoPs that are used in tradeoff studies • Figures of Merit (FoMs) are the same as evaluation criteria. • All of these must trace to requirements 03/24/14 © 2009 Bahill143
  • 03/24/14 © 2009 Bahill144 Properties of Good Evaluation CriteriaProperties of Good Evaluation Criteria
  • 03/24/14 © 2009 Bahill145 Properties of good evaluation criteriaProperties of good evaluation criteria • Criteria should be objective • Criteria should be quantitative • Wording of criteria is very important • Criteria should be independent • Criteria should show compensation • Criteria should be linked to requirements • The criteria set should be hierarchical • The criteria set should cover the domain evenly • The criteria set should be transitive • Temporal order should not be important • Criteria should be time invariant Overview slide
  • 03/24/14 © 2009 Bahill146 Evaluation criteria propertiesEvaluation criteria properties • These properties deal with  verification  the combining function  individual criteria  sets of criteria • But problems created by violating these properties can be ameliorated by reengineering the criteria
  • 03/24/14 © 2009 Bahill147 Evaluation criteria should be objectiveEvaluation criteria should be objective (observer independent)(observer independent) • Being Pretty or Nice should not be a criterion for selecting crewmembers • In sports, Most Valuable Player selections are often controversial • Deriving a consensus for the Best Football Player of the Century would be impossible
  • 03/24/14 © 2009 Bahill148 Evaluation criteria should be quantitativeEvaluation criteria should be quantitative Each criterion should have a scoring function
  • 03/24/14 © 2009 Bahill149 Evaluation criteria should be worded in aEvaluation criteria should be worded in a positive manner, so that more is betterpositive manner, so that more is better** • Use Uptime rather than Downtime. • Use Mean Time Between Failures rather than Failure Rate. • Use Probability of Success, rather than Probability of Failure. • When using scoring functions make sure more output is better • “Nobody does it like Sara LeeSM ”
  • 03/24/14 © 2009 Bahill150 Exercise: rewrite this statementExercise: rewrite this statement We have a surgical procedure that should cure your problem. Statistically one percent of the people who undergo this surgery die. Would you like to have this surgery?
  • 03/24/14 © 2009 Bahill151 Percent happy scoutsPercent happy scouts • The Pinewood Derby tradeoff study had these criteria  Percent Happy Scouts  Number of Irate Parents • Because people evaluate losses and gains differently, the Preferred alternatives might have been different if they had used  Percent Unhappy Scouts  Number of Ecstatic Parents
  • 03/24/14 © 2009 Bahill152
  • 03/24/14 © 2009 Bahill153 Criteria should be independentCriteria should be independent • Human Sex and IQ are independent • Human Height and Weight are dependent
  • 03/24/14 © 2009 Bahill154 The importance of independenceThe importance of independence Buying a new car, couple-1 criteria • Wife  Safety • Husband  Peak Horse Power
  • 03/24/14 © 2009 Bahill155 Buying a new car, couple-2 criteriaBuying a new car, couple-2 criteria • Wife  Safety • Husband  Maximum Horse Power  Peak Torque  Top Speed  Time for the Standing Quarter Mile  Engine Size (in liters)  Number of Cylinders.  Time to Accelerate 0 to 60 mph What kind of a car do you think they will buy?*
  • 03/24/14 © 2009 Bahill156 Criteria should show compensationCriteria should show compensation From the Systems Engineering literature, tradeoff requirements show compensation Dictionary definition compensate v. 1. To offset: counterbalance. Compensate means to tradeoff. You are happy to accept less of one thing in order to get more of another and vice versa.
  • 03/24/14 © 2009 Bahill157 Perfect compensationPerfect compensation • Astronauts growing food on a trip to Mars • Two criteria: Amount of Rice Grown and Amount of Beans Grown • Goal: maximize* total amount of food • A lot of rice and a few beans is just as good as lots of beans and little rice • We can tradeoff beans for rice
  • 03/24/14 © 2009 Bahill158 No compensation • A system that produces oxygen and water for our astronauts • A system that produces a huge amount of water but no oxygen might get the highest score, but, clearly, it would not support life for long. • From Systems Engineering, mandatory requirements show no compensation
  • 03/24/14 © 2009 Bahill159 Choosing today’s lunchChoosing today’s lunch • Candidate meals: pizza, hamburger, fish & chips, chicken sandwich, beer, tacos, bread and water • Criteria: Cost, Preparation Time, Tastiness, Novelty, Low Fat, Contains the Five Food Groups, Complements Merlot Wine, Closeness of Venue • These criteria are independent and also show compensation • Criteria are usually nouns, noun phrases or verb phrases
  • 03/24/14 © 2009 Bahill160
  • 03/24/14 © 2009 Bahill161
  • 03/24/14 © 2009 Bahill162
  • 03/24/14 © 2009 Bahill163 Sometimes it is hard to get bothSometimes it is hard to get both independence and compensationindependence and compensation • If two criteria are independent, they might not show compensation • If they show compensation, they might not be independent • Independence is more important for mandatory requirements • Compensation is more important for tradeoff requirements
  • 03/24/14 © 2009 Bahill164 RelationshipsRelationships • Each evaluation criterion must be linked to a tradeoff requirement.  Or in early design phases to a Mission statement, ConOps, OCD or company policy. • But only a few tradeoff requirements are used in the tradeoff study.
  • 03/24/14 © 2009 Bahill165 Evaluation criteria hierarchyEvaluation criteria hierarchy • The criteria tree should be hierarchical • The top level often contains  Performance  Cost  Schedule  Risk • Dependent entries are grouped into subcategories • The criteria set should cover the domain evenly
  • 03/24/14 © 2009 Bahill166 Evaluation criteria set should be transitiveEvaluation criteria set should be transitive** If A is preferred to B, and B is preferred to C, then A should be preferred to C. This property is needed for assigning weights.
  • 03/24/14 © 2009 Bahill167 Temporal orderTemporal order should not be importantshould not be important Criteria should be created so that the temporal order is not important for verifying or combining.
  • 03/24/14 © 2009 Bahill168 The temporal order of verifyingThe temporal order of verifying criteria should not be importantcriteria should not be important • Criteria requiring that clothing be Flame Proof and Water Resistant would make the verification results depend on which we tested first  If the criteria depend on temporal order, then an expert system or a decision tree might be more suitable
  • 03/24/14 © 2009 Bahill169 Temporal orderTemporal order should not be importantshould not be important • Fragment of a job application • Q: “Have you ever been arrested?”  A: “No.” • Q: “Why?”  A: “Never got caught.”
  • 03/24/14 © 2009 Bahill170 The temporal order of combiningThe temporal order of combining criteria should not be importantcriteria should not be important • Consider a combining function (CF) that adds two numbers truncating the fraction (0.2 CF 0.6) CF 0.9 = 0, however, (0.9 CF 0.6) CF 0.2 = 1, the result depends on the order. • With the Boolean NAND* function (↑) (0 ↑1) ↑ 1 = 0 however, (1 ↑1) ↑ 0 = 1, the result depends on the order.
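A tiny Python sketch of the order dependence described above, for both the truncating combining function and the Boolean NAND:

def cf(a, b):
    return int(a + b)            # add, then drop the fraction

print(cf(cf(0.2, 0.6), 0.9))     # 0
print(cf(cf(0.9, 0.6), 0.2))     # 1

def nand(a, b):
    return int(not (a and b))

print(nand(nand(0, 1), 1))       # 0
print(nand(nand(1, 1), 0))       # 1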
  • Order of presentation is important • The starred question is the only question that department and college promotion committees look at. It is the only question reported in the TCE History. • Larry Alimony’s CIEQ • I would take another course that was taught this way • The course was quite boring • The instructor seemed interested in students as individuals • The instructor exhibited a thorough knowledge of the subject matter What is your overall rating of this instructor’s teaching effectiveness? • TCE  What is your overall rating of this instructor’s teaching effectiveness? • What is your overall rating of the course? • Rate the usefulness of HW, projects, etc. • What is your rating of this instructor compared to other instructors? • The difficulty level of the course is … 03/24/14 © 2009 Bahill171
  • 03/24/14 © 2009 Bahill172 Criteria should be time invariantCriteria should be time invariant • Criteria should not change with time • It would be nice if the evaluation data also did not change with time, but this is unrealistic
  • 03/24/14 © 2009 Bahill173 Evaluation criteria library • Criteria should be created so that they can be reused. • Your company should have a library of generic criteria. • Each criterion package would have the following slots  Name  Description  Weight of importance (priority)  Basic measure  Units  Measurement method  Input (with allowed and expected range)  Output  Scoring function (type and parameters)  Trace to (document)
  • 03/24/14 © 2009 Bahill174 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria  Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill175 Weights of importanceWeights of importance The decision maker should assign weights so that the more important criteria will have more effect on the outcome.
  • 03/24/14 © 2009 Bahill176 Using weights
For the Sum Combining Function
$$\text{Output} = \sum_{j=1}^{n} weight_j \times score_j$$
For the Product Combining Function, the weights should be put in the exponent
$$\text{Output} = \prod_{j=1}^{n} score_j^{\,weight_j}$$
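A Python sketch of these two combining functions, applied to the normalized weights and the MAUT scores from the earlier example (the function names are illustrative):

def sum_combine(weights, scores):
    return sum(w * s for w, s in zip(weights, scores))

def product_combine(weights, scores):
    result = 1.0
    for w, s in zip(weights, scores):
        result *= s ** w                 # the weight goes in the exponent
    return result

weights = [0.75, 0.25]
scores = [0.67, 0.83]
print(sum_combine(weights, scores))      # approximately 0.71
print(product_combine(weights, scores))  # approximately 0.71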
  • 03/24/14 © 2009 Bahill177 Part of a Pinewood Derby tradeoff study
Performance figures of merit evaluated on a prototype for a Round Robin with Best Time Scoring
Figure of Merit               Input value   Score   Weight   Score times weight
1. Average Races per Car      6             0.94    0.20     0.19
2. Number of Ties             0             1       0.20     0.20
3. Happiness                                0.87    0.60     0.52
                              Qualitative weight   Normalized weight   Input value   Scoring function   Score   Score times weight
3.1 Percent Happy Scouts      10                   0.50                96                               0.98    0.49
3.2 Number of Irate Parents   5                    0.25                1                                0.50    0.13
3.3 Number of Lane Repeats    5                    0.25                0                                1.00    0.25
Sum                           0.87 (Happiness)   0.91 (overall)
  • 03/24/14 © 2009 Bahill178 Aspects that help establish weights (Reference: A Prioritization Process): Organizational Commitment, Time Required, Criticality to Mission Success, Risk, Architecture, Safety, Business Value, Complexity, Priority of Scenarios (use cases), Implementation Difficulty, Frequency of Use, Stability, Benefit, Dependencies, Cost, Reuse Potential, Benefit to Cost Ratio, When it is needed
  • 03/24/14 © 2009 Bahill179
  • 03/24/14 © 2009 Bahill180 Cardinal versus ordinal • Weights should be cardinal measures not ordinal measures. • Cardinal measures indicate size or quantity. • Ordinal measures merely indicate rank ordering.* • Cardinal numbers do not just tell us that one criterion is more important than another – they tell us how much more important. • If one criterion has a weight of 6 and another a weight of 3, then the first is twice as important as the second.
  • 03/24/14 © 2009 Bahill181 Methods for deriving weights*Methods for deriving weights* • Decision maker assigns numbers between 1 and 10 to criteria* • Decision maker rank orders the criteria* • Decision maker makes pair-wise comparisons of criteria (AHP)* • Algorithms are available that combine performance, cost, schedule and risk • Quality Function Deployment (QFD) • The method of swing weights • Some people advocate assigning weights only after deriving evaluation data*
  • 03/24/14 © 2009 Bahill182 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance  Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill183 AlternativesAlternatives
  • 03/24/14 © 2009 Bahill184 The Do Nothing AlternativeThe Do Nothing Alternative
  • 03/24/14 © 2009 Bahill185 The status quoThe status quo "Selecting an option from a group of similar options can be difficult to justify and thus may increase the apparent attractiveness of retaining the status quo. To avoid this tendency, the decision maker should identify each potentially attractive option and compare it directly to the status quo, in the absence of competing alternatives. If such direct comparison yields discrepant judgments, the decision maker should reflect on the inconsistency before making a final choice." Redelmeier and Shafir, 1995
  • 03/24/14 © 2009 Bahill186 Selecting a new carSelecting a new car Bahill has a Datsun 240Z with 160,000 miles His replacement options are DoDo NothingNothing
  • 03/24/14 © 2009 Bahill187 The Do Nothing alternatives forThe Do Nothing alternatives for replacing a Datsun 240Z  Status quo, keep the 240Z  Nihilism, do without a car, i.e., walk or take the bus
  • 03/24/14 © 2009 Bahill188 If the Do Nothing alternative wins,If the Do Nothing alternative wins, your Cost, Schedule and Risk criteria may have overwhelmed your Performance criteria.
  • 03/24/14 © 2009 Bahill189 If a Do Nothing alternative winsIf a Do Nothing alternative wins22 • Just as you should not add apples and oranges, you should not combine Performance, Cost, Schedule and Risk criteria with each other  Combine the Performance criteria (with their weights normalized so that they add up to one)  Combine the Cost criteria  Combine the Schedule criteria  Combine the Risk criteria • Then the Performance, Cost, Schedule and Risk combinations can be combined with clearly stated weights, 1/4, 1/4, 1/4 and 1/4 could be the default. • If a Do Nothing alternative still wins, you may have the weight for Performance too low.
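A minimal sketch, assuming hypothetical scores and weights, of the grouping just described: the Performance, Cost, Schedule and Risk criteria are each combined separately (with the weights normalized within each group) and the four group scores are then combined with the suggested default weights of 1/4 each.

```python
# Sketch: combine criteria within each group first, then combine the groups.
def weighted_sum(scores, weights):
    total = sum(weights)
    return sum((w / total) * s for w, s in zip(weights, scores))  # normalize weights

# Hypothetical normalized scores and raw weights for one alternative
groups = {
    "Performance": ([0.8, 0.6], [3, 1]),
    "Cost":        ([0.4],      [1]),
    "Schedule":    ([0.7, 0.9], [1, 1]),
    "Risk":        ([0.5],      [1]),
}
group_weights = {name: 0.25 for name in groups}   # the suggested default, 1/4 each

group_scores = {name: weighted_sum(s, w) for name, (s, w) in groups.items()}
overall = sum(group_weights[n] * score for n, score in group_scores.items())
print(group_scores)
print(overall)   # 0.6125 for these made-up numbers
```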
  • 03/24/14 © 2009 Bahill190 Balanced scorecardBalanced scorecard The Business community says that you should balance these perspectives:  Innovation (Learning and Growth)  Internal Processes  Customer  Financial
  • 03/24/14 © 2009 Bahill191 Sacred cows** • One important purpose for including a do nothing alternative (and other bizarre alternatives) is to help get the requirements right. If a bizarre alternative wins the tradeoff analysis, then you do not have the requirements right. • Similarly, including sacred cows in the alternatives will also test the adequacy of the requirements. • “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” -- Richard Feynman
  • 03/24/14 © 2009 Bahill192 Alternative conceptsAlternative concepts • When formulating alternative concepts, remember Miller’s* “magical number seven, plus or minus two.” • Also remember that introducing more alternatives only confuses the DM and makes him or her less likely to choose one of the new alternatives.**
  • 03/24/14 © 2009 Bahill193 SynonymsSynonyms • Alternative concepts • Alternative solutions • Alternative designs • Alternative architectures • Options
  • 03/24/14 © 2009 Bahill194 RiskRisk • The risks included in a tradeoff study should only be those that can be traded-off. Do not include the highest-level risks. • Risks might be computed in a separate section, because they usually use the product combining function.
  • 03/24/14 © 2009 Bahill195 CAIVCAIV • Cost as an independent variable (CAIV) • Treating CAIV means that you should do the tradeoff study with a specific cost and then go talk to your customer and see what performance, schedule and risk requirements he or she is willing to give up in order to get that cost. • So if you want to treat CAIV, then keep your tradeoff study independent of cost: that is, do not use cost criteria in your tradeoff study.
  • 03/24/14 © 2009 Bahill196 Two types of requirementsTwo types of requirements •There are two types of requirements mandatory requirements tradeoff requirements
  • 03/24/14 © 2009 Bahill197 Mandatory requirementsMandatory requirements • Mandatory requirements specify necessary and sufficient capabilities that the system must have to satisfy customer needs and expectations. • They use the words shall or must. • They are either passed or failed, with no in between. • They should not be included in a tradeoff study. • Here is an example of a mandatory requirement:  The system shall not violate federal, state or local laws.
  • 03/24/14 © 2009 Bahill198 Tradeoff requirementsTradeoff requirements • Tradeoff requirements state capabilities that would make the customer happier. • They use the words should or want. • They use measures of effectiveness and scoring functions. • They are evaluated with multicriterion decision techniques. • There will be tradeoffs among these requirements. • Here is an example of a tradeoff requirement: Dinner should have items from each of the five food groups: Grains, Vegetables, Fruits, Wine, Milk , and Meat and Beans. • Mandatory requirements are often the upper or lower limits of tradeoff requirements.
  • 03/24/14 © 2009 Bahill199 Mandatory requirementsMandatory requirements should not be in a tradeoff study, because they cannot be traded off. • Including them screws things up incredibly.
  • 03/24/14 © 2009 Bahill200 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions  Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill201 Evaluation dataEvaluation data11 • Evaluation data come from approximations, product literature, analysis, models, simulations, experiments and prototypes. • It would be nice if these values were objective, but sometimes we must resort to elicitation of personal preferences.* • They will be measured in natural units.
  • 03/24/14 © 2009 Bahill202 Evaluation dataEvaluation data22 • Evaluation data should be entered into the matrix one row (one criterion) at a time. • They indicate the degree to which each alternative satisfies each criterion. • They are not probabilities: they are more like fuzzy numbers, degree of membership or degree of fulfillment.
  • 03/24/14 © 2009 Bahill203 UncertaintyUncertainty • Evaluation data (and weights of importance) should, when convenient, have measures of uncertainty associated with the data. • This could be done with probability density functions, fuzzy numbers, variance, expected range, certainty factors, confidence intervals, or simple color coding.
  • 03/24/14 © 2009 Bahill204 Normalization** • Evaluation data are transformed into normalized scores by scoring functions (utility curves) or qualitative scales (fuzzy sets). • The outputs of such transformations should be cardinal numbers representing the DM’s utility.
  • 03/24/14 © 2009 Bahill205 Scoring function example This scoring function reflects the DM’s utility: he would be twice as satisfied with 91% happy scouts as with 88% happy scouts.*
  • 03/24/14 © 2009 Bahill206 Qualitative scales examples
Good example: Evaluation data | Qualitative evaluation | Output
0 to 86% happy scouts | Not satisfied | 0.2
86 to 89% happy scouts | Marginally satisfied | 0.4
89 to 91% happy scouts | Satisfied | 0.6
91 to 93% happy scouts | Very satisfied | 0.8
93 to 100% happy scouts | Elated | 1.0
Bad example: Evaluation data | Qualitative evaluation | Output
0 to 20% happy scouts | Not satisfied | 0.2
20 to 40% happy scouts | Marginally satisfied | 0.4
40 to 60% happy scouts | Satisfied | 0.6
60 to 80% happy scouts | Very satisfied | 0.8
80 to 100% happy scouts | Elated | 1.0
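As an illustration of the "good example" scale above, here is a small Python sketch (an assumption for illustration, not the course's tool) that maps percent happy scouts to the normalized output:

```python
# Map percent happy scouts to a normalized output using the unevenly spaced
# bands of the "good example" qualitative scale above.
def happy_scouts_output(percent):
    bands = [(86, 0.2), (89, 0.4), (91, 0.6), (93, 0.8), (100, 1.0)]
    for upper_limit, output in bands:
        if percent <= upper_limit:
            return output
    raise ValueError("percent must be between 0 and 100")

print(happy_scouts_output(90))   # 0.6, i.e. "Satisfied"
print(happy_scouts_output(96))   # 1.0, i.e. "Elated"
```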
  • 03/24/14 © 2009 Bahill207 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data  Scoring functions • Scores • Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill208 What is the best package of soda pop to buy?* Regular price of Coca-Cola in Tucson, January 1995. The Cost criterion is the reciprocal of price. The Performance criterion is the quantity in liters.
Choosing Amongst Alternative Soda Pop Packages (Data, Criteria, and Trade-off Values)
Item | Price (dollars) | Cost (dollars^-1) | Quantity (liters) | Sum | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
  • 03/24/14 © 2009 Bahill209 Numerical precisionNumerical precision**
  • 03/24/14 © 2009 Bahill210 The preferred alternative depends on the units, for the Sum but not for the Product Tradeoff Function.
Choosing Amongst Alternative Soda Pop Packages, Effect of Units
Item | Price (dollars) | Cost (dollars^-1) | Quantity (liters) | Sum | Product | Quantity (barrels) | Sum | Product
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 0.0003 | 2.0003 | 0.0060
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 0.0050 | 1.6717 | 0.0084
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 0.0085 | 1.2785 | 0.0108
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 0.0170 | 0.7837 | 0.0132
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 0.0181 | 0.4548 | 0.0079
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 0.0256 | 0.6173 | 0.0151
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 0.0363 | 0.3148 | 0.0101
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 0.0726 | 0.2653 | 0.0140
  • 03/24/14 © 2009 Bahill211 Scoring functionsScoring functions • Criteria should always have scoring functions so that the preferred alternatives do not depend on the units used. • Scoring functions are also called  utility functions  utility curves  value functions  normalization functions  mappings
  • 03/24/14 © 2009 Bahill212 Scoring function for CostScoring function for Cost**
  • 03/24/14 © 2009 Bahill213 Scoring function for QuantityScoring function for Quantity** A simple program that creates graphs such as these is available for free at http://www.sie.arizona.edu/sysengr/slides. It is called the Wymorian Scoring Function tool.
  • 03/24/14 © 2009 Bahill214 The scoring function equation** SSF1 = 1 / (1 + ((Baseline - Lower) / (CriteriaValue - Lower)) ^ (2 × Slope × (Baseline + CriteriaValue - 2 × Lower)))
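The equation above was reconstructed from the slide, so treat it as an assumption; as a rough sketch of how such a scoring function behaves (score 0.5 at the baseline, rising toward 1 above it), here is a Python version under that assumed reconstruction. The parameter values are made up; the free Wymorian Scoring Function tool mentioned on the next slide is the authoritative reference.

```python
# Sketch of the scoring function as reconstructed above (an assumption, not a
# verified transcription of Wymore's SSF1). Valid for criteria_value > lower.
def ssf1(criteria_value, lower, baseline, slope):
    exponent = 2.0 * slope * (baseline + criteria_value - 2.0 * lower)
    ratio = (baseline - lower) / (criteria_value - lower)
    return 1.0 / (1.0 + ratio ** exponent)

# Hypothetical parameters: percent happy scouts, lower = 80, baseline = 90
for value in (85, 90, 95):
    print(value, round(ssf1(value, lower=80, baseline=90, slope=0.1), 3))
# prints roughly 0.111, 0.5, 0.884: the score is 0.5 at the baseline
```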
  • 03/24/14 © 2009 Bahill215 Evaluation data may be logarithmicEvaluation data may be logarithmic**
  • 03/24/14 © 2009 Bahill216 The need for scoring functionsThe need for scoring functions11 ** • You can add $s and £s, but • you can’t add $s and lbs.
  • 03/24/14 © 2009 Bahill217 The need for scoring functions2 • Would you add values for something that cost a billion dollars and lasted a nanosecond?* • Alt-1 costs a hundred dollars and lasts one millisecond, Sum = 100.001. • Alt-2 costs only ninety-nine dollars but it lasts two milliseconds, Sum = 99.002. • Does the duration have any effect on the decision?
  • 03/24/14 © 2009 Bahill218 Different Distributions of Alternatives in Criteria Space** May Produce Different Preferred Alternatives
  • Tradeoff of requirements* (Plot of Cost (1/k$) versus Pages per Minute for the 4P, 4Plus and 4Si alternatives.) 03/24/14 © 2009 Bahill219
  • 03/24/14 © 2009 Bahill220 Pareto OptimalPareto Optimal Moving from one alternative to another will improve at least one criterion and worsen at least one criterion, i.e., there will be tradeoffs. “The true value of a service or product is determined by what one is willing to give up to obtain it.”
  • 03/24/14 © 2009 Bahill221 Nomenclature Real-world data will not fall neatly onto lines such as the circle in the previous slide. But often they may be bounded by such functions. In the operations research literature such data sets are called convex, although the function bounding them is called concave (Kuhn and Tucker, 1951).
  • 03/24/14 © 2009 Bahill222 Different distributionsDifferent distributions The feasible alternatives may have different distributions in the criteria space. These include:  Circle  Straight Line  Hyperbola
  • 03/24/14 © 2009 Bahill223 Alternatives on a circle** Assume the alternatives are on the circle x^2 + y^2 = 1, so y = sqrt(1 - x^2). Sum Combining Function: x + y = x + sqrt(1 - x^2), with the derivative d(Sum Combining Function)/dx = 1 - x/sqrt(1 - x^2). Product Combining Function: x * y = x * sqrt(1 - x^2), with the derivative d(Product Combining Function)/dx = (1 - 2x^2)/sqrt(1 - x^2). Both Combining Functions have maxima at x = y = 0.707. (This result does depend on the weights.)
  • 03/24/14 © 2009 Bahill224 Alternatives on a straight-LineAlternatives on a straight-Line Assume the alternatives are on the straight-line y = -x + 1 Sum Combining Function: x + y = x - x + 1 = 1 All alternatives are optimal (i.e. selection is not possible) Product Combining Function: x * y = -x2 + x with d(Product Combining Function)/dx = -2x + 1 Product Combining Function: maximum at x=0.5 Sum Combining Function: all alternatives are equally good Product Combining Function seems better for decision aiding
  • 03/24/14 © 2009 Bahill225 Alternatives on a hyperbola** Assume the alternatives are on the hyperbola (x + 1)(y + 1) = 2, so y = 2/(x + 1) - 1. Sum Combining Function: x + y = x + 2/(x + 1) - 1, with d(Sum Combining Function)/dx = 1 - 2/(x + 1)^2. Product Combining Function: x * y = (x - x^2)/(x + 1), with d(Product Combining Function)/dx = (1 - 2x - x^2)/(x + 1)^2. Both Combining Functions have maxima at x = y = sqrt(2) - 1 ≈ 0.414.
  • A lively baseball debateA lively baseball debate • For over 30 years baseball statisticians have argued over the best measure of offensive effectiveness. • Two of the most popular measures are  On-base plus slugging OPS = OBP + SLG  Batter’s run average BRA = OBP x SLG • I think their arguments ignored the most relevant data, the shape of the distribution of OBP and SLG for major league players. • If it is circular either will work. • If it is hyperbolic, do not use the sum. 03/24/14 © 2009 Bahill228
  • 03/24/14 © 2009 Bahill229 Muscle force-velocity relationshipMuscle force-velocity relationship • (Force + F0 )(velocity + vmax) = constant, where F0 (the isometric force) and vmax (the maximum muscle velocity) are constants. • Humans sometimes use one combining function and sometimes they use another. • If a bicyclist wants maximum acceleration, he or she uses the point (0, F0). If there is no resistance and maximum speed is desired, use the point (vmax, 0). These solutions result from maximizing the sum of force and velocity. • However, if there is energy dissipation (e.g., Friction, air resistance) and maximum speed is desired, choose the maximum power point, the maximum product of force and velocity. • This shows that the appropriate tradeoff function may depend on the task at hand.
  • 03/24/14 © 2009 Bahill230 Nonconvex data setsNonconvex data sets The muscle force-velocity relationship fit neatly onto lines such as this hyperbola. This will not always be the case. But when it is not, the data may be bounded by such functions. In the operations research literature such data sets are called concave, although the function bounding them is called convex (Kuhn and Tucker, 1951).
  • 03/24/14 © 2009 Bahill231 Mini-summaryMini-summary • The Product Combining Function always favors alternatives with moderate scores for all criteria. It rejects alternatives with a low score for any criterion. • Therefore the Product Combining Function may seem better than the Sum Combining Function. But the Sum Combining Function is used much more in systems engineering.
  • 03/24/14 © 2009 Bahill232 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores  Combining functions • Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill233 Summation is not always the best way to combine data**
  • 03/24/14 © 2009 Bahill234 Popular combining functions • Sum Combining Function = x + y  Used most often by engineers • Product Combining Function = x ∗ y  Cost to benefit ratio  Risk analyses  Game theory* • Sum Minus Product = x + y - xy  Probability theory  Fuzzy logic systems  Expert system certainty factors • Compromise = (x^p + y^p)^(1/p)
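The list above, written as short Python functions and checked against the "1 can" row of the soda pop table (cost = 1/0.50 = 2.00 dollars^-1, quantity = 0.35 liters):

```python
# The four popular combining functions, applied to the "1 can" data as a check.
def sum_cf(x, y):               return x + y
def product_cf(x, y):           return x * y
def sum_minus_product_cf(x, y): return x + y - x * y
def compromise_cf(x, y, p):     return (x**p + y**p) ** (1.0 / p)

x, y = 2.00, 0.35   # cost and quantity for the single can
print(round(sum_cf(x, y), 2))                 # 2.35
print(round(product_cf(x, y), 2))             # 0.7
print(round(sum_minus_product_cf(x, y), 2))   # 1.65
print(round(compromise_cf(x, y, 2), 2))       # 2.03
print(round(compromise_cf(x, y, 10), 2))      # 2.0, large p approaches max(x, y)
```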
  • 03/24/14 © 2009 Bahill235 XORXOR** • The previous combining functions implemented an AND function of the criteria. • There is no combining function that implements the exclusive or (XOR) function, e.g. • Criterion-1: Fuel consumption in highway driving, miles per gallon of gasoline. Baseline = 23 mpg. • Criterion-2: Fuel consumption in highway driving, miles per gallon of diesel fuel. Baseline = 26 mpg. • You want to use criterion-1 for alternatives with gasoline engines and criterion-2 for alternatives with diesel engines.
  • 03/24/14 © 2009 Bahill236 The American public acceptsThe American public accepts the Sum Combining Functionthe Sum Combining Function • It is used to rate NFL quarterbacks • It is used to select the best college football teams
  • 03/24/14 © 2009 Bahill237 NFL quarterback passer ratingsNFL quarterback passer ratings BM stands for basic measure BM1 = (Completed Passes) / (Pass Attempts) BM2 = (Passing Yards) / (Pass Attempts) BM3 = (Touchdown Passes) / (Pass Attempts) BM4 = Interceptions / (Pass Attempts) Rating = [5(BM1-0.3) + 0.25(BM2-3) + 20(BM3) + 25(- BM4+0.095)]*100/6
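The rating formula above is easy to compute directly; here is a short Python version (the season statistics fed to it below are made up for illustration):

```python
# NFL passer rating, computed from the four basic measures defined above.
def passer_rating(completions, attempts, yards, touchdowns, interceptions):
    bm1 = completions / attempts      # completion rate
    bm2 = yards / attempts            # yards per attempt
    bm3 = touchdowns / attempts       # touchdowns per attempt
    bm4 = interceptions / attempts    # interceptions per attempt
    return (5*(bm1 - 0.3) + 0.25*(bm2 - 3) + 20*bm3 + 25*(-bm4 + 0.095)) * 100 / 6

# Hypothetical season: 325 completions on 500 attempts, 4000 yards, 30 TD, 10 INT
print(passer_rating(325, 500, 4000, 30, 10))   # about 101
```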
  • 03/24/14 © 2009 Bahill238 College football BCSCollege football BCS** BM1 = Polls: AP media & ESPN coaches BM2 = Computer Rankings: Seattle Times, NY Times, Jeff Sagarin, etc. BM3 = Strength of Schedule BM4 = Number of Losses Rating = [BM1 + BM2 + BM3 - BM4] http://sports.espn.go.com/ncf/abcsports/BCSStandings www.bcsFootball.org
  • 03/24/14 © 2009 Bahill239 What is the best package of soda pop to buy?** Regular price of Coca-Cola in Tucson, January 1995. The Cost criterion is the reciprocal of price. The Performance criterion is the quantity in liters.
Choosing Amongst Alternative Soda Pop Packages (Data, Criteria, and Trade-off Values)
Item | Price (dollars) | Cost (dollars^-1) | Quantity (liters) | Sum | Product | Sum Minus Product | Compromise with p=2 | Compromise with p=10
1 can | 0.50 | 2.00 | 0.35 | 2.35 | 0.70 | 1.65 | 2.03 | 2.00
20 oz | 0.60 | 1.67 | 0.59 | 2.26 | 0.98 | 1.27 | 1.77 | 1.67
1 liter | 0.79 | 1.27 | 1.00 | 2.27 | 1.27 | 1.00 | 1.62 | 1.27
2 liter | 1.29 | 0.78 | 2.00 | 2.78 | 1.56 | 1.22 | 2.15 | 2.00
6 pack | 2.29 | 0.44 | 2.13 | 2.57 | 0.94 | 1.63 | 2.17 | 2.13
3 liter | 1.69 | 0.59 | 3.00 | 3.59 | 1.78 | 1.81 | 3.06 | 3.00
12 pack | 3.59 | 0.28 | 4.26 | 4.54 | 1.19 | 3.35 | 4.27 | 4.26
24 pack | 5.19 | 0.19 | 8.52 | 8.71 | 1.62 | 7.09 | 8.52 | 8.52
  • 03/24/14 © 2009 Bahill240 ResultsResults • The Product Combining Function suggests that the preferred package is the three liter bottle • However, the other combining functions all recommend the 24 pack • Plotting these data on Cartesian coordinates produces a nonconvex distribution • The best hyperbolic fit to these data is (quantity + 0.63)(cost + 0.08) = 2
  • 03/24/14 © 2009 Bahill241 Soda pop data (Plot of Cost (1/dollars) versus Quantity (liters) for the eight soda pop packages.)
  • 03/24/14 © 2009 Bahill243 Which matchesWhich matches human decision making?human decision making? • For a nonconvex distribution, the Sum Combining Function will favor the points at either end of the distribution. Sometimes this matches human decision making.  I usually buy a case of soda for my family.  A person working in an office building on a Sunday afternoon might buy a single can from the vending machine. • A frugal person might want to maximize the product of cost and performance, i.e. the maximum liters/dollar (the biggest bang for the buck), which is the three liter bottle. This matches the recommendation of the Product Combining Function.
  • 03/24/14 © 2009 Bahill244 Which matches humanWhich matches human decision making?decision making? (cont.)(cont.) This example shows that for a nonconvex distribution of alternatives, the choice of the combining function determines the preferred alternative.
  • 03/24/14 © 2009 Bahill245 Who was the best NFL quarterback?Who was the best NFL quarterback? • NFL quarterback passer ratings • BM1 = (Completed Passes) / (Pass Attempts) • BM2 = (Passing Yards) / (Pass Attempts) • BM3 = (Touchdown Passes) / (Pass Attempts) • BM4 = Interceptions / (Pass Attempts) • Rating = [5(BM1-0.3) + 0.25(BM2-3) + 20(BM3) + 25(-BM4+0.095)]*100/6
  • 03/24/14 © 2009 Bahill246 The best NFL quarterback for 1999The best NFL quarterback for 1999 http://www.football.espn.go.com/nfl/statistics/ Sum (p=1) Product Sum Minus Product Compromise with p=2 Compromise with p=∞ Kurt Warner Kurt Warner Kurt Warner Kurt Warner Kurt Warner Steve Beuerlein Jeff George Steve Beuerlein Steve Beuerlein Jeff George Jeff George Steve Beuerlein Jeff George Peyton Manning Steve Beuerlein Peyton Manning Peyton Manning Peyton Manning Jeff George Peyton Manning
  • The best NFL quarterback 1994The best NFL quarterback 1994 03/24/14 © 2009 Bahill247 Sum Product Sum Minus Product Compromise with p=∞ Steve Young Steve Young Steve Bono Steve Bono John Elway John Elway Bubby Brister Steve Young Dan Marino Dan Marino Steve Beuerlein Bobby Herbert Bobby Herbert Bobby Herbert Jeff George Dan Marino Eric Kramer Warren Moon Neil O’Donnell Eric Kramer
  • 03/24/14 © 2009 Bahill248 A manned mission to MarsA manned mission to Mars11 • The astronauts will grow beans and rice • Lots of beans and a little rice is just as good as lots of rice and a few beans • Both the Sum and the Product Combining Functions work fine
  • 03/24/14 © 2009 Bahill249 A manned mission to MarsA manned mission to Mars22 • The astronauts need a system that produces oxygen and water • The Product Combining Function works fine • But the Sum Combining Function could recommend zero water or zero oxygen
  • 03/24/14 © 2009 Bahill250 Implementing the combining functionsImplementing the combining functions • The Analytic Hierarchy Process (implemented by the commercial tool Expert Choice) allows the user to choose between the sum and the product combining functions. • You would have to implement the other combining functions by yourself.
  • 03/24/14 © 2009 Bahill251 The compromise combining function* Compromise = (x^p + y^p)^(1/p)
  • 03/24/14 © 2009 Bahill252 When should p be 1, 2 or ∞? • Use p = 1 if the criteria show perfect compensation • Use p = 2 if you want Euclidean distance • Use p = ∞ if you are selecting a hero and there is no compensation • Compromise = (x^p + y^p)^(1/p)
  • 03/24/14 © 2009 Bahill253 If p = ∞ • The preferred alternative is the one with the largest criterion • There is no compensation, because only one criterion (the largest) is considered • Compromise Output = (x^p + y^p)^(1/p) • If p is large and x > y, then x^p >> y^p and Compromise Output ≈ (x^p)^(1/p) = x
  • 03/24/14 © 2009 Bahill254 Use p = ∞ when selecting • the greatest athlete of the century using Number of National Championship Rings* and Peak Salary • the baseball player of the week using Home Runs and Pitching Strikeouts • a movie using Romance, Action and Comedy
  • 03/24/14 © 2009 Bahill255 NBA teams seem to use p = ∞ • When drafting basketball players • Criteria are Height and Assists • They want seven-foot players with ten assists per game (the ideal point) • In years when there are many point guards but no centers, they draft the best point guards • Choose the criterion with the maximum score (Assists) and then select the alternative whose number of Assists has the minimum distance to the ideal point
  • 03/24/14 © 2009 Bahill256 Use p = ∞ when choosing minimax • A water treatment plant to reduce the amount of mercury, lead and arsenic in the water. • Trace amounts are not of concern. • First, find the poison with the maximum concentration, then choose the alternative with the minimum amount of that poison. • Hence the term minimax.
  • 03/24/14 © 2009 Bahill257 Design of a baseball batDesign of a baseball bat • The ball goes the farthest, if it hits the sweet spot of the bat • Error = |sweet spot - hit point| • Loss = number of feet short of 500 • For an amateur use minimax: minimize the Loss, if the Error is maximum • For Alex Rodriguez use minimin
  • 03/24/14 © 2009 Bahill258 The distance the ball travels depends on where the ball hits the bat**
  • 03/24/14 © 2009 Bahill259 Use p = ∞ if you are very risk averse • A million dollar house on a river bank: a 100-year flood would cause $900K damage • A million dollar house on a mountain top: a violent thunderstorm would cause $100K damage • Minimax: choose the worst risk, the 100-year flood, and choose the alternative that minimizes it: build your house on the mountain top*
  • 03/24/14 © 2009 Bahill260 Use p = 1 if you are probabilistic** • Risk equals (probability times severity of a 100 year flood) plus (probability times severity of a violent thunderstorm) • Risk(River Bank) = 0.01×0.9 + 0.1×0 = 0.009 • Risk(Mountain Top) = 0.01×0 + 0.1×0.1 = 0.010 • Therefore, build your house on the river bank
  • 03/24/14 © 2009 Bahill261 SynonymsSynonyms • Combining functions are also called  objective functions  optimization functions  performance indices • Combining functions may include probability density functions*
  • 03/24/14 © 2009 Bahill262 Summary about combining functionsSummary about combining functions • Summation of weighted scores is the most common. • Product combining function eliminates alternatives with a zero for any criterion.* • Compromise function with p=∞ uses only one criterion.
  • 03/24/14 © 2009 Bahill263 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions  Preferred alternatives • Sensitivity analysis
  • 03/24/14 © 2009 Bahill264 Select preferred alternativesSelect preferred alternatives • Select the preferred alternatives. • Present the results of the tradeoff study to the original decision maker and other relevant stakeholders. • A sensitivity analysis will help validate your study.
  • 03/24/14 © 2009 Bahill265 SynonymsSynonyms • Preferred alternatives • Recommended alternatives • Preferred solutions
  • 03/24/14 © 2009 Bahill266 Components of a tradeoff studyComponents of a tradeoff study • Problem statement • Evaluation criteria • Weights of importance • Alternative solutions • Evaluation data • Scoring functions • Scores • Combining functions • Preferred alternatives  Sensitivity analysis
  • 03/24/14 © 2009 Bahill267 PurposePurpose A sensitivity analysis identifies the most important parameters in a tradeoff study.
  • 03/24/14 © 2009 Bahill268 Sensitivity analysesSensitivity analyses • A sensitivity analysis of the tradeoff study is imperative. • Vary the inputs and parameters and discover which ones are the most important. • The Pinewood Derby had 89 criteria. Only three of them could change the preferred alternative.
  • 03/24/14 © 2009 Bahill269 Sensitivity analysis of Pinewood Derby (simulation data)
  • 03/24/14 © 2009 Bahill270 The Do Nothing alternativesThe Do Nothing alternatives • The double elimination tournament was the status quo. • The single elimination tournament was the nihilistic do nothing alternative.
  • 03/24/14 © 2009 Bahill271 Sensitivity analysis of Pinewood Derby (prototype data) (Plot of Overall Score versus Performance Weight for three alternatives: Double elimination, Round robin with best-time scoring, and Round robin with points scoring.)
  • 03/24/14 © 2009 Bahill272 Semirelative-sensitivity functions The semirelative-sensitivity of the function F to variations in the parameter α is S~(F, α) = (∂F/∂α) evaluated at the normal operating point (NOP), multiplied by the nominal value α0.
  • 03/24/14 © 2009 Bahill273 Tradeoff study
A Generic Tradeoff Study
Criteria | Weight of Importance | Alternative 1 | Alternative 2
Criterion 1 | Wt1 | S11 | S12
Criterion 2 | Wt2 | S21 | S22
Final Score | | F1 | F2
where F1 = Wt1×S11 + Wt2×S21 and F2 = Wt1×S12 + Wt2×S22
A Numeric Example of a Tradeoff Study
Criteria | Weight of Importance | Umpire’s Assistant | Seeing Eye Dog
Accuracy | 0.75 | 0.67 | 0.33
Silence of Signaling | 0.25 | 0.83 | 0.17
Sum of weight times score | | 0.71 (the winner) | 0.29
  • 03/24/14 © 2009 Bahill274 Which parameters could change the recommendations? Use this performance index*: F = F1 - F2 = Wt1×S11 + Wt2×S21 - Wt1×S12 - Wt2×S22 = 0.420. Compute the semirelative-sensitivity functions.
  • 03/24/14 © 2009 Bahill275 Semirelative-sensitivity functions*
S~(F, Wt1) = (S11 - S12) Wt1 = 0.26
S~(F, Wt2) = (S21 - S22) Wt2 = 0.16
S~(F, S11) = Wt1 S11 = 0.50
S~(F, S21) = Wt2 S21 = 0.21
S~(F, S12) = -Wt1 S12 = -0.25
S~(F, S22) = -Wt2 S22 = -0.04
  • 03/24/14 © 2009 Bahill276 What about interactions? The semirelative-sensitivity function for the interaction of Wt1 and S11 is S~(F, Wt1-S11) = (∂²F/∂Wt1∂S11)|NOP × Wt1 × S11 = 1 × 0.75 × 0.67 = 0.5025, which is bigger than the first-order terms.
  • 03/24/14 © 2009 Bahill277 Interactions So interactions are important.
Semirelative Sensitivity Values Showing Interaction Effects
Parameter(s) changed | Nominal values | Values increased by 10% | New F values | Total change in F
Wt1 | 0.75 | 0.82 | 0.446 | 0.026
S11 | 0.67 | 0.74 | 0.470 | 0.050
Wt1 and S11 | 0.75 and 0.67 | 0.82 and 0.74 | 0.501 | 0.081
  • 03/24/14 © 2009 Bahill278 Estimating derivatives f(x) - f(x0) = f'(x0)(x - x0) + (f''(ζ)/2!)(x - x0)². If (x - x0) and f'' are small, then the second term on the right can be neglected.
  • 03/24/14 © 2009 Bahill279 Tradeoff study example For a +5% parameter change the semirelative-sensitivity function is S~(F, β) = (ΔF/Δβ) β0 = (ΔF/(0.05 β0)) β0 = 20 ΔF. This is very easy to compute. For example, S~(F, S11) = 20 × (0.025) = 0.5.
Tradeoff Study with S11 Increased by 5%
Criteria | Weight of Importance | Umpire’s Assistant | Seeing Eye Dog
Accuracy | 0.75 | 0.70 | 0.33
Silence of Signaling | 0.25 | 0.83 | 0.17
Sum of weight times score | | 0.74 | 0.29
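A small Python check of this perturbation estimate, using the umpire's assistant example from the earlier slide (F = Wt1×S11 + Wt2×S21 - Wt1×S12 - Wt2×S22):

```python
# Estimate each semirelative sensitivity with a one-sided +5% perturbation.
def F(Wt1, Wt2, S11, S21, S12, S22):
    return Wt1*S11 + Wt2*S21 - Wt1*S12 - Wt2*S22

nominal = dict(Wt1=0.75, Wt2=0.25, S11=0.67, S21=0.83, S12=0.33, S22=0.17)

def semirelative_sensitivity(param, step=0.05):
    perturbed = dict(nominal)
    perturbed[param] *= 1.0 + step
    delta_F = F(**perturbed) - F(**nominal)
    return delta_F / step      # equals (delta_F / (step * param0)) * param0

for name in nominal:
    print(name, round(semirelative_sensitivity(name), 3))
# agrees with the analytic values +0.26, +0.16, +0.50, +0.21, -0.25 and -0.04
```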
  • 03/24/14 © 2009 Bahill280 Estimated semirelative sensitivities This is the same result that we previously obtained analytically.
The Semirelative Sensitivity of the Difference Between the Two Output Scores computed with a Plus 5% Parameter Perturbation
Function | Value
S~(F, Wt1) | +0.26
S~(F, Wt2) | +0.16
S~(F, S11) | +0.50
S~(F, S21) | +0.21
S~(F, S12) | -0.25
S~(F, S22) | -0.04
  • 03/24/14 © 2009 Bahill281 But what about the second-order terms? Namely (f''(ζ)/2!)(x - x0)². When using the sum of weighted scores combining function, F1 = Wt1×S11 + Wt2×S21 and F2 = Wt1×S12 + Wt2×S22, the second derivatives are all zero. So our estimations are all right. This is not true for the product combining function, F1 = S11^Wt1 × S21^Wt2 and F2 = S12^Wt1 × S22^Wt2, or most other common combining functions. See Daniels, Werner and Bahill [2001] for explanations of other combining functions.
  • 03/24/14 © 2009 Bahill282 The moral of this storyThe moral of this story The perturbation step size (x – x0) should be small. Five and ten percent step sizes are probably too big, but we have been getting away with it, because we usually use the sum combining function.
  • 03/24/14 © 2009 Bahill283 Derivative of a function of two variables • Let us examine the second-order terms, those inside the { }, to see if they are large and must be included in computing the first derivative, and to estimate the effects of interactions on the sensitivity analysis. f(x,y) - f(x0,y0) = f'x(x0,y0)(x - x0) + f'y(x0,y0)(y - y0) + (1/2!){f''xx(ζ,η)(x - x0)² + 2 f''xy(ζ,η)(x - x0)(y - y0) + f''yy(ζ,η)(y - y0)²}
  • 03/24/14 © 2009 Bahill284 Interactions Previously we derived the analytic semirelative-sensitivity function for the interaction of Wt1 and S11 as S~(F, Wt1-S11) = (∂²F/∂Wt1∂S11)|NOP × Wt1 × S11 = 1 × 0.75 × 0.67 = 0.5025, which is bigger than the first-order semirelative-sensitivity functions.
  • 03/24/14 © 2009 Bahill285 Interactions For a 5% change in parameter values, a simple-minded approximation is S~(F, α-β) ≈ (ΔΔF/(Δα Δβ)) α0 β0 = (ΔΔF/(0.05α0 × 0.05β0)) α0 β0 = 20² ΔΔF. Using our tradeoff study values we get S~(F, Wt1-S11) ≈ 20² ΔΔF = 0.6125. This does not match the analytic value. What went wrong?
  • 03/24/14 © 2009 Bahill286 How big are the second-order terms? In estimating S~(F, Wt1-S12), the sum of the first-order terms is 0.00038 and the sum of the second-order terms is 0.00123. The second-order terms cannot be ignored.
  • 03/24/14 © 2009 Bahill287 Step size Can we fix this problem by using a smaller step size? If we reduce the step size to 0.1%, S~(F, α-β) ≈ (ΔΔF/(Δα Δβ)) α0 β0 = (ΔΔF/(0.001α0 × 0.001β0)) α0 β0 = 1000² ΔΔF, and S~(F, Wt1-S11) ≈ 1000² ΔΔF = 0.5746. This still does not match the analytic result.
  • 03/24/14 © 2009 Bahill288 It’s not the step size But this time the fault is not that of too large a step size, because in estimating S~(F, Wt1-S11) the sum of the first-order terms is 0.000757 and the sum of the second-order terms is 0.000001. The second-order terms can be ignored.
  • 03/24/14 © 2009 Bahill289 What went wrong?What went wrong? In the previous computations, we changed both parameters at the same time and then compared the value of the function to the value of the function at its normal operating point. However, this is not the correct estimation for the second-partial derivative.
  • 03/24/14 © 2009 Bahill290 Estimating the second partials1 To estimate the second-partial derivatives we should start with
∂²f/∂α∂β ≈ [f'α(α0, β0+Δβ) - f'α(α0, β0)] / Δβ
f'α(α0, β0) ≈ [f(α0+Δα, β0) - f(α0, β0)] / Δα
∂²f/∂α∂β ≈ [f(α0+Δα, β0+Δβ) - f(α0+Δα, β0) - f(α0, β0+Δβ) + f(α0, β0)] / (Δα Δβ)
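A small Python check of the four-point estimate, using the 0.1% step and the umpire's assistant numbers that appear on the following slide; it reproduces the second partial of about 1 and the semirelative sensitivity of about 0.5025.

```python
# Four-point estimate of the second partial derivative for the Wt1-S11 interaction.
def F(Wt1, S11, Wt2=0.25, S21=0.83, S12=0.33, S22=0.17):
    return Wt1*S11 + Wt2*S21 - Wt1*S12 - Wt2*S22

Wt1_0, S11_0 = 0.75, 0.67
dWt1, dS11 = 0.001 * Wt1_0, 0.001 * S11_0        # 0.1% steps

second_partial = (F(Wt1_0 + dWt1, S11_0 + dS11) - F(Wt1_0 + dWt1, S11_0)
                  - F(Wt1_0, S11_0 + dS11) + F(Wt1_0, S11_0)) / (dWt1 * dS11)

sensitivity = second_partial * Wt1_0 * S11_0
print(round(second_partial, 3))   # about 1.0
print(round(sensitivity, 4))      # about 0.5025
```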
  • 03/24/14 © 2009 Bahill291 Estimating the second partials2
∂²f/∂Wt1∂S11 ≈ (0.4207580 - 0.4205025 - 0.4202550 + 0.4200000) / (0.00075 × 0.00067) ≈ 1
Values to be Used in Estimating the Second Derivative Terms (parameter values with a 0.1% step size, that is ΔWt1 = 0.00075 and ΔS11 = 0.00067)
Parameter values | Function value
Wt1 = 0.75075, S11 = 0.67067 | 0.4207580
Wt1 = 0.75000, S11 = 0.67067 | 0.4205025
Wt1 = 0.75075, S11 = 0.67000 | 0.4202550
Wt1 = 0.75000, S11 = 0.67000 | 0.4200000
  • 03/24/14 © 2009 Bahill292 Estimating the sensitivity functions To get the semirelative-sensitivity function we multiply the second-partial derivative by the normal values of Wt1 and S11: S~(F, Wt1-S11) = (∂²f/∂Wt1∂S11)|NOP × Wt1 × S11 = 1 × 0.5025 = 0.5025. Now, this is the same result that we derived in the analytic semirelative-sensitivity section.
  • 03/24/14 © 2009 Bahill293 Lessons learnedLessons learned • The perturbation step size should be small. Five and 10% perturbations are not acceptable. • It is incorrect to estimate the second partial derivative by changing two parameters at the same time and then comparing that value of the function to the value of the function at its normal operating point. Estimating second derivatives requires evaluation of four not two numerator terms.
  • 03/24/14 © 2009 Bahill294 Other Techniques for Combining Data inOther Techniques for Combining Data in Order to Find the Preferred alternativesOrder to Find the Preferred alternatives
  • 03/24/14 © 2009 Bahill295 The Ideal PointThe Ideal Point11 • The ideal point is the point where all the criteria have their optimal scores. • In the soda pop example we will define the ideal point as the intercepts of the hyperbola fit to the data.
  • 03/24/14 © 2009 Bahill296 The Ideal Point2 The preferred alternative is found by minimizing the distance to the ideal point using LP metrics: Lp = [ Σ (k = 1 to n) wk^p dk^p ]^(1/p), where dk = (z*k - zk) / (z*k - z_*k), zk is the score of the kth criterion, wk is the weight of the kth criterion, z*k (asterisk as a superscript) is the kth component of the ideal point, z_*k (asterisk as a subscript) is the kth component of the anti-ideal point and n is the number of criteria. The criteria index is k and the alternatives index is i.
  • 03/24/14 © 2009 Bahill297 The Ideal Point3 Our modified Minkowski metrics: Lp = [ Σ (k = 1 to n) wk dk^p ]^(1/p)
  • 03/24/14 © 2009 Bahill298 Ideal Point4** (Plot of Cost (1/dollars) versus Quantity (liters) showing the ideal point and the distance di from each alternative to it.)
  • 03/24/14 © 2009 Bahill299 The Ideal Point5* Using wi = 1 and p equal to 1, 2, and ∞ we get
Using the Ideal Point to Select Soda Pop Packages (Data, Criteria, and Trade-off Values)
Item | Price (dollars) | Cost (dollars^-1) | Quantity (liters) | L1 norm | L2 norm | L∞ norm
1 can | 0.50 | 2.00 | 0.35 | 1.34 | 1.04 | 0.986
20 oz | 0.60 | 1.67 | 0.59 | 1.44 | 1.07 | 0.976
1 liter | 0.79 | 1.27 | 1.00 | 1.55 | 1.13 | 0.959
2 liter | 1.29 | 0.78 | 2.00 | 1.66 | 1.18 | 0.918
6 pack | 2.29 | 0.44 | 2.13 | 1.77 | 1.25 | 0.913
3 liter | 1.69 | 0.59 | 3.00 | 1.68 | 1.19 | 0.877
12 pack | 3.59 | 0.28 | 4.26 | 1.73 | 1.23 | 0.909
24 pack | 5.19 | 0.19 | 8.52 | 1.58 | 1.14 | 0.938
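The LP metrics are easy to reproduce. The sketch below checks the "1 can" row of the table, assuming that the ideal point lies at the intercepts of the hyperbola fit mentioned earlier, roughly (24.4 liters, 3.09 dollars^-1), and that the anti-ideal point is the origin; those two coordinates are my reading of the earlier slides, not values stated on this one.

```python
# LP-metric distances from the single can to an assumed ideal point.
ideal = {"quantity": 2/0.08 - 0.63, "cost": 2/0.63 - 0.08}   # hyperbola intercepts (assumed)
nadir = {"quantity": 0.0, "cost": 0.0}                        # anti-ideal point (assumed)
one_can = {"quantity": 0.35, "cost": 2.00}                    # 0.35 liters at $0.50

d = {k: (ideal[k] - one_can[k]) / (ideal[k] - nadir[k]) for k in ideal}

L1 = sum(d.values())                        # p = 1
L2 = sum(v**2 for v in d.values()) ** 0.5   # p = 2
Linf = max(d.values())                      # p = infinity
print(round(L1, 2), round(L2, 2), round(Linf, 3))
# about 1.34, 1.05, 0.986, close to the 1.34, 1.04, 0.986 in the table above
```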
  • 03/24/14 © 2009 Bahill300 The Search Beam techniqueThe Search Beam technique • Construct a vector between the anti-ideal point, the nadir (the origin in this example), and the ideal point, then re-examine solutions close to this vector. • The nadir might be the point where each criterion takes on its minimum value, or it might be the status quo. • The 6 pack and 3 liter bottle are closest to this vector. Of these, the 3 liter bottle is closest to the ideal point, so it is chosen.
  • 03/24/14 © 2009 Bahill301 Search Beam2 (Plot of Cost (1/dollars) versus Quantity (liters) showing the nadir, the ideal point, and the search beam drawn between them.)
  • 03/24/14 © 2009 Bahill302 Fuzzy Logic, rationaleFuzzy Logic, rationale • Some things are described well by probability theory. Such as the probability that John Wayne was a tall person is around 1.0. • But what is the probability that George W. Bush is a tall person? • This question does not have a good answer. • The theory of Fuzzy Logic was invented to model such questions. • With fuzzy logic the question becomes, “What is the possibility that George W. Bush belongs to the set of people called tall?”
  • 03/24/14 © 2009 Bahill303 Fuzzy Logic, exampleFuzzy Logic, example • Here is a fuzzy set for tall people. • Of course, it could be refined for males or females, old or young people, and for country of origin.
  • 03/24/14 © 2009 Bahill304 Fuzzy Sets for Performance (Five fuzzy sets, Very Low through Very High, for the Performance figure of merit: degree of membership versus quantity in liters, 0 to 4.)
  • 03/24/14 © 2009 Bahill305 Fuzzy Sets for Cost (Five fuzzy sets, Very Low through Very High, for the Cost figure of merit: degree of membership versus cost in 1/dollars, 0 to 2.5.)
  • 03/24/14 © 2009 Bahill306 Fuzzy rules for a single can Rule number Fuzzy premises Consequences Cost Volume 1 Very Low Very Low 1 Can 2 Very Low Low 1 Can 3 Very Low Medium 1 Can 4 Very Low High 1 Can 5 Very Low Very High 1 Can 6 Low Very Low 1 Can 7 Low Low 1 Can 8 Low Medium 1 Can 9 Low High 1 Can 10 Low Very High 1 Can 11 Medium Very Low 1 Can 12 Medium Low 1 Can 13 Medium Medium 1 Can 14 Medium High 1 Can 15 Medium Very High 1 Can 16 High Very Low 1 Can 17 High Low 1 Can 18 High Medium 1 Can 19 High High 1 Can 20 High Very High 1 Can 21 Very High Very Low 1 Can 22 Very High Low 1 Can 23 Very High Medium 1 Can 24 Very High High 1 Can 25 Very High Very High 1 Can
  • 03/24/14 © 2009 Bahill307 Degree of fulfillmentDegree of fulfillment • Assume premises are connected by ANDs • Use product rule for AND
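A sketch of the product rule for AND, with made-up triangular membership functions (the actual fuzzy sets are the ones plotted on the earlier slides):

```python
# Degree of fulfillment of one rule: the product of the premise memberships.
def triangular(x, left, peak, right):
    """Simple triangular membership function (illustrative, not from the slides)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

cost = 0.59        # dollars^-1, the 3 liter bottle
quantity = 3.0     # liters

mu_cost_low = triangular(cost, 0.2, 0.5, 0.9)
mu_quantity_high = triangular(quantity, 2.0, 3.0, 4.5)

dof = mu_cost_low * mu_quantity_high   # premises connected by AND, so multiply
print(round(dof, 2))                   # about 0.78 for these made-up sets
```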
  • 03/24/14 © 2009 Bahill308 Single can, degree of fulfillment (DoF) Rule number Cost µ Volume µ Package DoF 1 Very Low 0.00 Very Low 0.65 1 Can 0.00 2 Very Low 0.00 Low 0.35 1 Can 0.00 3 Very Low 0.00 Medium 0.00 1 Can 0.00 4 Very Low 0.00 High 0.00 1 Can 0.00 5 Very Low 0.00 Very High 0.00 1 Can 0.00 6 Low 0.00 Very Low 0.65 1 Can 0.00 7 Low 0.00 Low 0.35 1 Can 0.00 8 Low 0.00 Medium 0.00 1 Can 0.00 9 Low 0.00 High 0.00 1 Can 0.00 10 Low 0.00 Very High 0.00 1 Can 0.00 11 Medium 0.00 Very Low 0.65 1 Can 0.00 12 Medium 0.00 Low 0.35 1 Can 0.00 13 Medium 0.00 Medium 0.00 1 Can 0.00 14 Medium 0.00 High 0.00 1 Can 0.00 15 Medium 0.00 Very High 0.00 1 Can 0.00 16 High 0.00 Very Low 0.65 1 Can 0.00 17 High 0.00 Low 0.35 1 Can 0.00 18 High 0.00 Medium 0.00 1 Can 0.00 19 High 0.00 High 0.00 1 Can 0.00 20 High 0.00 Very High 0.00 1 Can 0.00 21 Very High 1.00 Very Low 0.65 1 Can 0.65 22 Very High 1.00 Low 0.35 1 Can 0.35 23 Very High 1.00 Medium 0.00 1 Can 0.00 24 Very High 1.00 High 0.00 1 Can 0.00 25 Very High 1.00 Very High 0.00 1 Can 0.00
  • 03/24/14 © 2009 Bahill309 Rules with non-zero degree of fulfillment (DoF) Rule number Cost µ Volume µ Package DoF 21 Very High 1.00 Very Low 0.65 1 Can 0.65 22 Very High 1.00 Low 0.35 1 Can 0.35 37 Medium 0.46 Low 1.00 1 liter 0.46 42 High 0.54 Low 1.00 1 liter 0.54 58 Low 0.44 Medium 1.00 2 liter 0.44 63 Medium 0.56 Medium 1.00 2 liter 0.56 78 Very Low 0.12 Medium 0.87 6 pack 0.10 79 Very Low 0.12 High 0.13 6 pack 0.02 83 Low 0.88 Medium 0.87 6 pack 0.77 84 Low 0.88 High 0.13 6 pack 0.11 109 Low 0.82 High 1.00 3 liter 0.82 114 Medium 0.18 High 1.00 3 liter 0.18 125 Very Low 0.44 Very High 1.00 12 pack 0.44 130 Low 0.56 Very High 1.00 12 pack 0.56 150 Very Low 0.62 Very High 1.00 24 pack 0.62 155 Low 0.38 Very High 1.00 24 pack 0.38
  • 03/24/14 © 2009 Bahill310 Can we use this fuzzyCan we use this fuzzy rule base to give advice?rule base to give advice?11 • Suppose our customer says, “I want a little bit of soda pop.” • We would convert that to, “Cost= don’t care AND Quantity = Very Low.” • The rule base recommends, “Buy a single can DoF = 0.65.”
  • 03/24/14 © 2009 Bahill311 Can we use this fuzzyCan we use this fuzzy rule base to give advice?rule base to give advice?22 • Suppose our customer says, “A few of my friends and I cashed in all our empty bottles. We want to buy some soda pop and put it in this little cooler.” • We would convert that to, “Cost = Low AND Quantity = Medium.” • Two rules succeed: one for the 2 liter bottle and one for the 6 pack. The highest DoF is for the 6 pack. Therefore, we would recommend, “Buy a 6 pack, DoF = 0.77.”
  • 03/24/14 © 2009 Bahill312 Can we use this fuzzyCan we use this fuzzy rule base to give advice?rule base to give advice?33 • Suppose our customer has a picnic cooler full of ice and says, “I want a lot of soda pop.” • We would convert that to, “Cost = don’t care AND Quantity = Very High.” • Two rules succeed for the 12 pack. Using a sum minus product combining rule, we would recommend, “Buy a 12 pack, DoF = 0.75.” • However, two rules also succeed for the 24 pack. Using the same combining rule, we would also recommend, “Buy a 24 pack, DoF = 0.76.”
  • 03/24/14 © 2009 Bahill313 The technique used determines the resultThe technique used determines the result Technique Preferred alternative --------------------------------------------------------------------- Sum 24 pack Product 3 liter bottle Sum Minus Product 24 pack Compromise 24 pack Ideal point L1 norm single can L2 norm single can L infinity 3 liter bottle Modified Minkowski 12 pack Search beam 3 liter bottle Fuzzy rule base 6, 12 or 24 pack
  • 03/24/14 © 2009 Bahill314 Technique used determines the resultTechnique used determines the result22 But by clever selection of weights and scoring functions we could also get the 20 ounce, the one liter and two liter bottles.
  • 03/24/14 © 2009 Bahill315 Decision treesDecision trees** • Another, not necessarily tradeoff study, tool for decision analysis and resolution. • Example key decisions and their alternatives  Is formal evaluation needed? [yes, no]  Evaluation data source? [approximations, analysis, models and simulations, experiments, prototypes]  Combining function? [sum, product, sum minus product, compromise]  Alternatives? [alt-1, alt-2, alt-3]  Question order may be important, e. g. ask about dog system function before fertility.  OK, the next slide is the decision tree for these questions.
  • 03/24/14 © 2009 Bahill316 (Decision tree for the key decisions listed on the previous slide.)
  • 03/24/14 © 2009 Bahill317 Killer tradesKiller trades • We do not have time to analyze all 60 possibilities. So we limit the number of things to be studied by doing killer trades. That is, we answer certain questions and kill off large parts of the decision tree. • In this example we will say that a formal evaluation is necessary, we will use approximation data and the sum combining function. • This means that our tradeoff study matrix only needs three columns, one for each alternative.
  • 03/24/14 © 2009 Bahill318 Tradeoff study by a baseball manager Alternatives → Criteria ↓ Present pitcher Right- hand short reliever Left- hand short reliever Right- hand long reliever Left- hand long reliever Pitcher effectiveness Inning Men on base Score Bullpen readiness
  • 03/24/14 © 2009 Bahill320 Should we walk this famous slugger?Should we walk this famous slugger?
  • 03/24/14 © 2009 Bahill321 Some Cautions from Decision TheorySome Cautions from Decision Theory
  • 03/24/14 © 2009 Bahill322 Values • Your job is to help a decision maker make valid decisions. • This is a difficult and iterative task. • It entails discovering the decision maker’s weights of importance, scoring functions, and preferred combining functions. • You must get into the head of the decision maker and discover his or her preferences and values.*
  • 03/24/14 © 2009 Bahill323 Personality typesPersonality types • Different people have different personality types. • The Myers-Briggs model is one way of describing these personality types. • Sensory - Thinking – Judging people are likely to appreciate the tradeoff study techniques we have presented. • Intuitive – Feeling people most likely will not.
  • 03/24/14 © 2009 Bahill324 PhrasingPhrasing • The way you phrase the question may determine the answer you will get. • When asked whether they would approve surgery in a hypothetical medical emergency, many more people accepted surgery when the chance of survival was given as 99 percent than when the chance of death was given as 1 percent.
  • 03/24/14 © 2009 Bahill325 Preference Reversals** The $ bet has the higher dollar value (win $56.70 or nothing); the P bet has the higher probability of winning (win $5.40 or nothing). Although the expected values are the same, most people preferred to play the P bet, however most people wanted a higher selling price for the $ bet. Lichtenstein & Slovic (1971)
  • 03/24/14 © 2009 Bahill326 Factors affecting human decisionsFactors affecting human decisions  the decision maker corporate culture the decision maker’s values personality types risk averseness biases, illusions and use of heuristics  information displayed wording of the question context  the decision effort required to make the decision difficulty of making the decision time allowed to make the decision needed accuracy of the decision cost of the decision likelihood of regret
  • 03/24/14 © 2009 Bahill327 Temporal order • You will get more consistent results if you  first work on the criteria  then fill in the matrix of evaluation data row by row  assign weights last; that way criteria that have no effect on the outcome can be given minimal weights
  • 03/24/14 © 2009 Bahill328 When you getWhen you get “The Wrong Answer”“The Wrong Answer” you could changeyou could change • Weights of importance • Scores for the alternatives • Parameters of the scoring functions • Parameters of the combining function • The combining function itself • The tradeoff method
  • 03/24/14 © 2009 Bahill329 But we think,But we think, If you got the wrong answer, then you got the requirements wrong.
  • 03/24/14 © 2009 Bahill331 Possible missing requirementsPossible missing requirements • Need for Storage Space • Time Before Soda Loses Carbonization • Need for a Glass • Availability of Cold Soda in the Desired Size • Ziggy’s Trips to the Restroom
  • 03/24/14 © 2009 Bahill332 The feeling in your stomach testThe feeling in your stomach test** • Assume you are trying to make an important decision, like “Should I quit my job and become a consultant?” • You have done a tradeoff study, but the results are equivocal. • How should you decide? • Get a coin. Assign heads and tails, e.g. heads I quit my job, tails I keep my job. Flip the coin and look at the result. What is the immediate feeling in your stomach? • If it was heads, but your stomach is in turmoil, then keep your job.
  • 03/24/14 © 2009 Bahill333 LimitationsLimitations
  • 03/24/14 © 2009 Bahill334 LimitationsLimitations • Limited time and resources guarantee that a tradeoff study will never contain all possible criteria. • Tradeoff studies produce satisficing (not optimal) solutions. • A tradeoff study reflects only one view of the problem. Different tradeoff analysts might choose different criteria and weights and therefore would paint a different picture of the problem. • We ignored human decision-making mistakes for which we have no corrective action, such as closed mindedness, lies, conflict of interest, political correctness and favoritism.
  • 03/24/14 © 2009 Bahill335 UncertaintyUncertainty • We studied two independent tradeoff studies that had a variability or uncertainty statistic associated with each evaluation datum. • These statistics were carried throughout the whole computational process, so that at the end the recommended alternatives had associated uncertainty statistics. • Both of these studies were incomprehensible. • Therefore, we did not try to accommodate uncertainty, changes and dependencies in the evaluation data.
  • 03/24/14 © 2009 Bahill336 Speed BumpSpeed Bump
  • 03/24/14 © 2009 Bahill337 A Tradeoff Study ofA Tradeoff Study of Tradeoff Study ToolsTradeoff Study Tools
  • 03/24/14 © 2009 Bahill338 COTS-Based Engineering ProcessCOTS-Based Engineering Process • When choosing commercial off the shelf (COTS) products the following generic criteria may be convenient:  Percent of requirements satisfied  Vendor viability  Total life cycle cost  Apparent interface ease  Architectural compatibility  Foreign components  User interface ease of use  Observable states
  • 03/24/14 © 2009 Bahill339 Specific criteriaSpecific criteria • For tradeoff study tools these specific criteria may be convenient:  Rationale is easy to understand  Can verify calculations with paper and pencil  Works with nonconvex distributions of alternatives  Implements scoring functions (utility curves)  Has multiple combining functions  Performs sensitivity analyses
  • 03/24/14 © 2009 Bahill340 A tradeoff study onA tradeoff study on tradeoff study toolstradeoff study tools • A tradeoff study was performed starting with 60 COTS decision analysis tools. • These were the final Preferred alternatives  Pinewood by Bahill Intelligent Computer Systems  Hiview by Catalyze Ltd.  Logical Decisions for Windows by Logical Decisions Inc.  Expert Choice by Expert Choice Inc. See A Tradeoff Study of Tradeoff Study Tools http://www.sie.arizona.edu/sysengr/sie554/tradeoffStudyOfTradeoffStudyTools.doc
  • 03/24/14 © 2009 Bahill341 Use Cases fromUse Cases from A Tradeoff Study of Tradeoff Study ToolsA Tradeoff Study of Tradeoff Study Tools
  • 03/24/14 © 2009 Bahill342 Architecture of a tradeoff study toolArchitecture of a tradeoff study tool
  • 03/24/14 © 2009 Bahill343 Use case diagramUse case diagram ud TradeoffStudyTool Tradeoff Study Tool Tradeoff Analyst Create a Tradeoff Study Complete Criteria Module Fill In Input Module Company Resources PAL «include» «include»
  • 03/24/14 © 2009 Bahill344 Create a Tradeoff StudyCreate a Tradeoff Study** 11 Iteration: 2.1 Brief Description: Tradeoff Analyst completes the four modules of the tradeoff study tool and gives the results to the decision maker. Every aspect of a tradeoff study requires extensive discussion with the decision maker and other stakeholders. Added Value: This helps a decision maker to make better decisions and it documents the process that was used to make these decisions. Level: User goal Scope: Applies to a decision problem that is appropriate for a tradeoff study. Primary Actor: Tradeoff Analyst (this could be a person or a team).
  • 03/24/14 © 2009 Bahill345 Create a Tradeoff StudyCreate a Tradeoff Study22 Supporting Actors: Tradeoff Analyst will get the tradeoff study tool and documents from Company Resources. Tradeoff Analyst will put the results of the tradeoff study in the project assets library (PAL). Frequency: Company wide, once a week Precondition: A decision maker has asked Tradeoff Analyst to perform a tradeoff study. Preliminary criteria, weights, alternatives and criteria values must already be defined and be in the hands of Tradeoff Analyst. Trigger: Tradeoff Analyst starts the process.
  • 03/24/14 © 2009 Bahill346 Create a Tradeoff StudyCreate a Tradeoff Study33 Main Success Scenario: 1. Tradeoff Analyst copies the company tradeoff study spreadsheet into his or her computer. 2. Tradeoff Analyst selects the Criteria Module for development. 3. Include Complete Criteria Module. 4. Tradeoff Analyst selects the Input Module for development. 5. Include Fill Input Module. 6. The system transfers data from the Criteria Module into the Output Matrices. 7. The system computes preferred alternatives using the combining function chosen by Tradeoff Analyst.
  • 03/24/14 © 2009 Bahill347 Create a Tradeoff StudyCreate a Tradeoff Study44 Main Success Scenario (continued): 8. The system transfers data from the Output Matrices into the Summary Module. 9. The system displays the Summary Module for Tradeoff Analyst’s inspection. 10. Tradeoff Analyst looks at the preferred alternatives in the Summary Module. 11. Tradeoff Analyst repeats steps 2 to 10 until he or she is satisfied. 12. Tradeoff Analyst submits the tradeoff study for expert review. 13. Tradeoff Analyst submits the tradeoff study to the decision maker and places it in the Process Asset Library (PAL) [exit use case]
  • 03/24/14 © 2009 Bahill348 Create a Tradeoff Study5 Unanchored Alternate Flow: Tradeoff Analyst can stop the system at any time; all entered data and intermediate results will be saved [exit use case]. Postcondition: Tradeoff Analyst has planned a tradeoff study. Specific Requirements Functional Requirements: Note: Transferring data from the Criteria Module into other modules and interchanging information with Company Resources and the PAL are supplementary requirements.
  • 03/24/14 © 2009 Bahill349 Create a Tradeoff StudyCreate a Tradeoff Study66 Functional Requirements (continued): FR1-1 The system shall compute preferred alternatives using the combining function chosen by Tradeoff Analyst. FR1-2 The system shall transfer information from the Output Matrices into the Summary Module. FR1-3 The system shall display the Summary Module. Nonfunctional Requirements: NFR1 At least six different combining functions shall be available for use by Tradeoff Analyst. Author/owner: Terry Bahill Last changed: February 23, 2006
  • 03/24/14 © 2009 Bahill350 Concrete inclusion use casesConcrete inclusion use cases The next two use cases are concrete inclusion use cases to the Create a Tradeoff Study use case.
  • 03/24/14 © 2009 Bahill351 Complete Criteria ModuleComplete Criteria Module11 Iteration: 2.1 Brief Description: Tradeoff Analyst enters data into the Criteria Module and designs scoring functions. If this inclusion use case is called by the base use case, then it is context sensitive; the spreadsheet that is open is the spreadsheet that is used. If the actor initiates the use case, then the name of the spreadsheet to be used must be queried. Added Value: Tradeoff Analyst understands the criteria and develops scoring functions. Level: Low level Scope: Criteria Module Primary Actor: Tradeoff Analyst
Complete Criteria Module (2)
Frequency: Company-wide, once a week
Precondition: Criteria must already be defined and be in the hands of Tradeoff Analyst.
Trigger: This use case is initiated by the Create a Tradeoff Study use case or by the Tradeoff Analyst.
Main Success Scenario:
1a. When triggered by the Create a Tradeoff Study use case, Tradeoff Analyst replaces criteria of the template with problem-domain criteria and describes these criteria in the notes section.
2. Tradeoff Analyst works on the criteria one at a time and may rewrite, decompose or derive criteria.
Complete Criteria Module (3)
Main Success Scenario (continued):
3. Tradeoff Analyst selects limits, slopes and baselines for the scoring function of each criterion.
4. The system draws a scoring function for each criterion.
5. Tradeoff Analyst readjusts limits, slopes and baselines for each criterion. This requires discussion with the decision maker.
6. The system redraws the scoring function for each criterion.
7. Tradeoff Analyst assigns a weight of importance to each criterion.
8. The system computes normalized weights.
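Steps 3 to 6 map each raw criterion value onto a score between 0 and 1. The sketch below is a simple logistic-style scoring function parameterized by a baseline and a slope; it only illustrates the idea and is not the standard scoring functions in the SSF.exe tool used later in the exercise, and the parameter values are invented.

```python
import math

def scoring_function(value, baseline, slope, lower=0.0, upper=1.0):
    """Map a raw criterion value to a score in [lower, upper].
    baseline: input value that scores halfway between the limits
    slope:    steepness at the baseline (use a negative slope for
              'smaller is better' criteria such as cost)
    """
    span = upper - lower
    return lower + span / (1.0 + math.exp(-slope * (value - baseline)))

# Illustrative use: score a cost criterion in dollars, where cheaper is better.
for cost in (5, 10, 15, 20):
    print(cost, round(scoring_function(cost, baseline=12, slope=-0.5), 3))
```

Adjusting the baseline shifts the curve left or right and adjusting the slope makes it steeper or flatter, which is what Tradeoff Analyst does in step 5 in discussion with the decision maker.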
Complete Criteria Module (4)
Main Success Scenario (continued):
9. The system displays alternative combining functions and accepts the function chosen by Tradeoff Analyst.
10. Tradeoff Analyst repeats this process until satisfied with the results.
11. Tradeoff Analyst expresses desire to finish this use case.
12. The system transfers criteria to the Input Module [exit use case].
Anchored Alternate Flow:
1b. When triggered by the Tradeoff Analyst, Tradeoff Analyst specifies the file to be worked on.
Complete Criteria Module (5)
Unanchored Alternate Flow: Tradeoff Analyst can stop the system at any time; all entered data and intermediate results will be saved [exit use case].
Postcondition: Tradeoff Analyst knows what the criteria are and where they are stored.
Specific Requirements
Functional Requirements:
FR2-1 The Criteria Module shall accept scoring function parameters from Tradeoff Analyst.
FR2-2 The Criteria Module shall create and graph scoring functions.
FR2-3 The Criteria Module shall accept changes in scoring function parameters and criteria from Tradeoff Analyst.
Complete Criteria Module (6)
Functional Requirements (continued):
FR2-4 The Criteria Module shall accept un-normalized weights from Tradeoff Analyst.
FR2-5 The Criteria Module shall normalize the weights.
FR2-6 The Criteria Module shall accept changes in weights from Tradeoff Analyst.
FR2-7 The Criteria Module shall display alternative combining functions and accept the function chosen by Tradeoff Analyst.
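FR2-5, together with business rule BR-1 on the next slide, implies a simple normalization: divide each 0-to-10 importance weight by the sum of the weights so the normalized weights add to 1. A minimal sketch, with made-up weights:

```python
def normalize(raw_weights):
    """Scale 0-10 importance weights so they sum to 1 (FR2-5)."""
    total = sum(raw_weights.values())
    if total == 0:
        raise ValueError("at least one weight must be nonzero")
    return {criterion: w / total for criterion, w in raw_weights.items()}

print(normalize({"cost": 8, "taste": 7, "prep_time": 5}))
# -> {'cost': 0.4, 'taste': 0.35, 'prep_time': 0.25}
```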
Complete Criteria Module (7)
Nonfunctional Requirements:
NFR2-1 Scoring function graphs must be updated within 100 milliseconds of a change in a parameter.
NFR2-2 Computing normalized weights shall take less than 100 milliseconds.
Business Rules:
BR-1 The weights entered by Tradeoff Analyst shall be numbers (usually integers) in the range of 0 to 10, where 10 is the most important.
Fill Input Module (1)
Iteration: 2.1
Brief Description: Tradeoff Analyst enters criteria values for the alternatives into the Input Module. If this inclusion use case is called by the base use case, then it is context sensitive; the spreadsheet that is open is the spreadsheet that is used. If the actor initiates the use case, then the name of the spreadsheet to be used must be queried.
Added Value: These criteria values can be used to compute preferred alternatives.
Level: Low level
Scope: Input Module
Primary Actor: Tradeoff Analyst
Frequency: Company-wide, once a week
Fill Input Module (2)
Precondition: Alternatives must already be defined and their preliminary criteria values must be in the hands of Tradeoff Analyst.
Trigger: This use case is triggered by the Create a Tradeoff Study use case or by the Tradeoff Analyst.
Main Success Scenario:
1a. When triggered by the Create a Tradeoff Study use case, Tradeoff Analyst describes his or her alternatives.
2. The system updates the Input Module.
3. Tradeoff Analyst concentrates on one row at a time and fills in criteria values for the alternatives.
Fill Input Module (3)
Main Success Scenario (continued):
4. Tradeoff Analyst reassesses the criteria values until satisfied with the results.
5. The Input Module sends criteria values to the Criteria Module [exit use case].
Anchored Alternate Flow:
1b. When triggered by the Tradeoff Analyst, Tradeoff Analyst specifies the file to be worked on.
Unanchored Alternate Flow: Tradeoff Analyst can stop the system at any time; all entered data and intermediate results will be saved [exit use case].
Fill Input Module (4)
Postcondition: Tradeoff Analyst knows where the alternatives are described and where their criteria values are stored.
Specific Requirements
Functional Requirements:
FR3-1 The Input Module shall accept criteria values from Tradeoff Analyst.
FR3-2 The Input Module shall accept changes in criteria values from Tradeoff Analyst.
Author/owner: Terry Bahill
Last changed: February 25, 2006
Supplementary requirements
• SR1 The system shall interchange information with Company Resources and the PAL.
• SR2 The Criteria Module shall transfer information to and from the Input Module.
• SR3 The Criteria Module shall transfer information to and from the Output Matrices.
Summary
Summary (1)
• Decompose criteria into subcriteria
• Put subcriteria in separate columns
• Normalize weights
• Derive evaluation data
  – approximations
  – product literature
  – analysis
  – models and simulations
  – experiments
  – prototypes
• Create scoring functions
• Combine data in separate areas
• Add columns for alternatives, including Do Nothing
Summary (2)
• There are many multicriterion decision-making techniques
• Often they give different recommendations
• If the alternatives form a nonconvex set, then many techniques will have difficulty
• If you got the "wrong answer," then you probably got the requirements wrong
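A tiny numerical illustration of the second bullet, using invented data: the sum of weighted scores and the product of weighted scores, two common combining functions, can prefer different alternatives, because the product heavily penalizes an alternative that is weak on any one criterion.

```python
# Illustrative only: two combining functions disagree on invented data.
weights = {"cost": 0.5, "performance": 0.5}
scores = {
    "balanced":   {"cost": 0.55, "performance": 0.55},
    "specialist": {"cost": 0.20, "performance": 1.00},
}

def weighted_sum(s):
    return sum(weights[c] * s[c] for c in weights)

def weighted_product(s):
    result = 1.0
    for c in weights:
        result *= s[c] ** weights[c]
    return result

print("sum of weighted scores prefers:    ",
      max(scores, key=lambda a: weighted_sum(scores[a])))      # specialist
print("product of weighted scores prefers:",
      max(scores, key=lambda a: weighted_product(scores[a])))  # balanced
```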
Summary (3)
• You should use a formal, mathematical technique to evaluate alternative designs
  – Standards (e.g. CMMI) require it
  – Government organizations require it
  – Company policy requires it
  – Common sense requires it
• But when you do, be careful or mere artifacts will determine your recommendation
Summary (4)
• Good industry practices for ensuring success of tradeoff studies include
  – having teams evaluate the data
  – evaluating the data with many iterations
  – peer review of the results and recommendations
Speculation
Observation: As you do a better job of getting the requirements right, the preferred alternatives of different teams converge.
Speculation: As you do a better job of getting the necessary and sufficient requirements right, the preferred alternatives of the various tradeoff combining techniques will converge.
Summary (5)
• Getting an answer is not the most important facet of a tradeoff study.
• Documenting the tradeoff process and the data is often the most important contribution.
  – Think about the San Diego County airport site selection
• Corporate culture and the decision maker's personality determine how well the recommendations of a tradeoff study will be received.
• Doing a tradeoff study will help you get the requirements right.
Summary (6)
• Emotions, illusions, biases and use of heuristics make humans far from ideal decision makers.
• Using tradeoff studies thoughtfully can help move your decisions from the normal human decision-making lower-right quadrant to the ideal decision-making upper-left quadrant.
Dog System Exercise
Tradeoff study exercise, general
1. Find the folder named SandiaDogSelector on the desktop of your computer.
2a. Open it, read dogProb0.doc and do the exercise.
2b. Wait for the instructor.
2c. Read dogSol0.doc.
3. Read dogProb1.doc and do the exercise.
4. Wait for the instructor.
5. Read dogSol1.doc.
6. Wait for the instructor.
7. Read dogProb2.doc and do the exercise.
8. Wait for the instructor.
Etc.
Tradeoff study exercise, details
0. Read the problem statement (dogProb0.doc) and write some preliminary requirements. 5 minutes; wait for solutions (dogSol0.doc); 2-minute discussion.
1. Identify key system decisions and their alternatives (dogProb1.doc). 8 minutes; wait for solutions (dogSol1.doc); 7-minute discussion.
2. Fill in the Decision Tree Worksheet; use text boxes or do it on paper (dogProb2.doc). 8 minutes; wait for solutions (dogSol2.doc); 7-minute discussion.
3. Use the Decision Resolution Worksheet (dogProb3.doc) to perform the Killer Trades. 8 minutes; wait for solutions (dogSol3.doc).
4. Define the tradeoff studies that still need to be done and list them on the Decision Resolution Worksheet (dogProb4.doc). 5 minutes; wait for solutions (dogSol4.doc).
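Step 3 applies killer trades: any alternative that violates a mandatory, non-tradeable requirement is eliminated before the numerical tradeoff study begins. A minimal sketch of that filtering idea follows; the dog alternatives and killer criteria are hypothetical and are not the actual exercise data.

```python
# Hypothetical killer-trade filter: drop alternatives that fail any
# mandatory requirement; only the survivors go into the tradeoff matrix.
alternatives = {
    "Chihuahua":       {"weight_kg": 2,  "hypoallergenic": False},
    "Great Dane":      {"weight_kg": 60, "hypoallergenic": False},
    "Standard Poodle": {"weight_kg": 25, "hypoallergenic": True},
}

killer_criteria = [
    ("must weigh under 40 kg", lambda a: a["weight_kg"] < 40),
    ("must be hypoallergenic", lambda a: a["hypoallergenic"]),
]

survivors = [
    name for name, attrs in alternatives.items()
    if all(test(attrs) for _, test in killer_criteria)
]
print(survivors)   # -> ['Standard Poodle']
```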
Tradeoff study exercise, details
5. List evaluation criteria and weights of importance on the Criteria Description Worksheet (dogProb5.doc). 20 minutes; wait for solutions (dogSol5.doc); 5-minute discussion.
You can get criteria from your PAL
Tradeoff study exercise
6. Perform a tradeoff study using the Tradeoff Matrix Spreadsheet (dogProb6.xls). 30 minutes; wait for solutions; in 10 minutes discuss both dogSol6.xls and dogSol6.doc. For scoring functions, open the folder named SSF and use the tool named SSF.exe.
7. Fix the Do Nothing problem (dogProb7.doc and dogProb7.xls). 5 minutes; wait for solutions. In 5 minutes discuss the sensitivity analysis in dogSol7.doc and dogSol7.xls.
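Step 7 ends with a sensitivity analysis. One simple way to approximate it numerically is to perturb each weight a little, re-normalize, recompute the ratings, and check whether the preferred alternative changes. The sketch below does that for the sum-of-weighted-scores combining function with invented data; it is not the method built into the exercise spreadsheets.

```python
# Rough numerical sensitivity check: perturb each weight by +10 %,
# re-normalize, and see whether the preferred alternative changes.
# All data are invented for illustration.
weights = {"cost": 0.4, "taste": 0.35, "prep_time": 0.25}
scores = {
    "pizza": {"cost": 0.8, "taste": 0.7, "prep_time": 0.9},
    "sushi": {"cost": 0.4, "taste": 0.9, "prep_time": 0.6},
}

def best(wts):
    rate = lambda s: sum(wts[c] * s[c] for c in wts)
    return max(scores, key=lambda alt: rate(scores[alt]))

baseline_winner = best(weights)
for criterion in weights:
    perturbed = dict(weights)
    perturbed[criterion] *= 1.10                               # +10 % on one weight
    total = sum(perturbed.values())
    perturbed = {c: w / total for c, w in perturbed.items()}   # re-normalize
    winner = best(perturbed)
    flag = "ranking changes" if winner != baseline_winner else "no change"
    print(f"{criterion:10s} -> winner: {winner:6s} ({flag})")
```

Weights whose small perturbations flip the recommendation are the ones worth revisiting with the decision maker.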
Tradeoff study exercise
8. Recompute your tradeoff matrix using a combining function other than the sum of weighted scores (dogProb8.doc and dogProb8.xls). 15 minutes; wait for solutions. In 5 minutes discuss the sensitivity analysis in dogSol8.doc and also the solutions in dogSol8.xls.
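For step 8, one possible alternative combining function is compromise programming, which ranks alternatives by their weighted distance from the ideal score of 1.0 on every criterion (smaller distance is better). The sketch below uses one common formulation with invented data; the exercise spreadsheet may use a different set of combining functions.

```python
# Illustrative compromise-programming combining function: weighted L_p
# distance from the ideal point. Invented data; p is an analyst's choice.
weights = {"cost": 0.4, "taste": 0.35, "prep_time": 0.25}
scores = {
    "pizza": {"cost": 0.8, "taste": 0.7, "prep_time": 0.9},
    "sushi": {"cost": 0.4, "taste": 0.9, "prep_time": 0.6},
}

def compromise_distance(alt_scores, weights, p=2):
    """Weighted distance from the ideal point (score 1.0 on every criterion)."""
    return sum(weights[c] * (1.0 - alt_scores[c]) ** p for c in weights) ** (1.0 / p)

for alt, s in scores.items():
    print(alt, round(compromise_distance(s, weights), 3))
# The alternative with the smallest distance is preferred.
```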
Mathematical Summary of Tradeoff Techniques
Equations
• The following section uses algebraic equations to summarize the tradeoff methods we have just discussed. These slides are located at www.sie.arizona.edu/sysengr/slides/tradeoffMath.doc
• If you are equation intolerant, you can leave now and we won't be offended.
• Or, if in the middle of the presentation you find that you have exceeded your equation viewing limit, you may leave.
• Please fill out a course evaluation questionnaire before you go.
Acronym list
AHP   Analytic Hierarchy Process
BCS   Bowl Championship Series
BM    Basic Measure
CDR   Critical Design Review
CF    Combining Function
CMMI  Capability Maturity Model Integration
COTS  Commercial Off The Shelf
DAR   Decision Analysis and Resolution
DM    Decision Maker
DoF   Degree of Fulfillment
EV    Expected Value
IPT   Integrated Product Team
IQ    Intelligence Quotient
MAUT  Multi-Attribute Utility Technique
NFL   National Football League
NOP   Normal Operating Point
PAL   Process Asset Library
PC    Personal Computer
PDR   Preliminary Design Review
QFD   Quality Function Deployment
SEMP  Systems Engineering Management Plan
SRR   System Requirements Review
Wt    Weight
Course materials
• This slide show; we present it in Vista.
  – For the "Humans are not rational (2)" slide, bring two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
• Dog System Exercise
  – problems and solutions
  – this is 21 files plus one folder
  – we need computers for this exercise
  – load the files onto the desktop of the PCs before the class
• Mathematical Summary MS Word slides
• The student computers will need PowerPoint, MS Word and Excel.
• Optional handouts include Ben Franklin's letter and the GOAL/QPC Creativity Tools Memory Jogger.
History
• This course is based on material from Terry Bahill's Systems Engineering Process course at the University of Arizona.
• Bahill adapted it for BAE in the Fall of 2004, where it was reviewed by Rob Culver, Bill Wuersch and John Volanski, and it was piloted October 12-13, 2004.
• The human decision-making material was added at the UofA in Fall 2005.