Isabel Evans - Working Ourselves out of a Job: A Passion For Improvement - Eu... (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Working Ourselves out of a Job: A Passion For Improvement by Isabel Evans.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Anders Claesson - Test Strategies in Agile Projects - EuroSTAR 2010 (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Test Strategies in Agile Projects by Anders Claesson. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Ane Clausen - Success with Automated Regression Test revised (TEST Huddle)
EuroSTAR Software Testing Conference 2009 presentation on Success with Automated Regression Test revised by Ane Clausen. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Jarian van de Laar - Test Policy - Test Strategy (TEST Huddle)
EuroSTAR Software Testing Conference 2009 presentation on Test Policy - Test Strategy by Jarian van de Laar. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Michael Snyman - Software Test Automation Success (TEST Huddle)
EuroSTAR Software Testing Conference 2009 presentation on Software Test Automation Success by Michael Snyman. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Bjarne Mansson - Risk-based Testing, A Must For Medical Devices - EuroSTAR 2010 (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Risk-based Testing, A Must For Medical Devices by Bjarne Mansson.
See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Dirk Van Dael - Test Accounting - EuroSTAR 2010 (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Test Accounting by Dirk Van Dael. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
Stuart Reid - ISO 29119: The New International Software Testing Standard (TEST Huddle)
EuroSTAR Software Testing Conference 2008 presentation on ISO 29119: The New International Software Testing Standard by Stuart Reid. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
Ben Walters - Creating Customer Value With Agile Testing - EuroSTAR 2011 (TEST Huddle)
EuroSTAR Software Testing Conference 2011 presentation on Creating Customer Value With Agile Testing by Ben Walters. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
DoD Joint Weapons System Product Support Business Case Analysis Example (Ron Giuntini)
This is a cleaned, declassified full Business Case Analysis that Giuntini & Co. prepared for a Department of Defense Joint Weapons Systems deployment. This BCA follows all formal DoD template regulations and was highly regarded as a near perfect example within the applicable DoD branches involved.
A pre-study for selecting a supplier relationship management tool (Alaa Karam)
The architecture point of view dominates this pre-study. According to the Business Architect, the main task of a Business/IT architect is to provide a cost-efficient and accurate solution that meets the business requirements and is aligned with the business and IT strategies and constraints.
[Note: This is a partial preview. To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
Benchmarking is the process of continually searching for the best methods, practices and processes, and either adopting or adapting their good features and implementing them to become the “best of the best.” To become the best-in-class, organizations need to implement the right process to get there.
Based on the world-renowned Xerox Benchmarking Process model pioneered by Robert C. Camp, this presentation deck covers the benefits of benchmarking and the various types of benchmarking, explains how to identify what to benchmark, and provides detailed step-by-step guidance on how to systematically carry out a benchmarking project. It also includes useful tips on benchmarking, benchmarking etiquette and the critical success factors.
LEARNING OBJECTIVES
1. Gain a broad understanding of the key concepts of benchmarking.
2. Learn how to identify, assess and implement various types of benchmarking projects to meet your organization’s goals, based on the Xerox Benchmarking model.
3. Gain awareness of the code of conduct for benchmarking and make preparations to get the most out of a site visit.
4. Define the critical success factors in benchmarking implementation.
5. Kick-start benchmarking projects that are aligned to your company’s strategic goals.
CONTENTS
Introduction to Benchmarking
The Xerox Benchmarking Process
Step 1: What to benchmark?
Step 2: Whom to benchmark?
Step 3: Data collection
Step 4: Determine current performance “gap”
Step 5: Project future performance levels
Step 6: Communicate findings and gain acceptance
Step 7: Establish goals
Step 8: Develop action plans
Step 9: Implement actions and monitor progress
Step 10: Re-calibrate benchmarks
Benchmarking Roles and Responsibilities
Benchmarking Inspection Checklist (Toll-gate Review)
Benchmarking Etiquette
Benchmarking Site Visit
Benchmarking Pitfalls & Success
To download this complete presentation, please visit: http://www.oeconsulting.com.sg
This example is designed to give an idea of how TransparentChoice can be used to select the most efficient IT portfolio. This model is inspired by Gartner’s recommendations for picking IT projects.
Making Smart Choices: Strategies for CMMI Adoption (rhefner)
The CMMI® was written to apply to a variety of project environments -- defense, commercial; development, maintenance, services; small to large project teams. The authors used words like “adequate”, “appropriate”, “as needed”, and “selected”. When a project or organization adopts the CMMI model for process improvement, they (consciously or unconsciously) make choices about how it will be implemented – scope, scale, documentation, and decision-making to name a few. These choices have a profound effect on the speed and cost of CMMI® adoption. Rick Heffner describes the strategic implications of the CMMI on planning and implementing project processes. He identifies the decisions to be made, the options available, and the relationships between these options and project contexts and business objectives. Take away a deeper understanding of the model, and better strategies for its adoption. By understanding your options and making smart choices, CMMI® adopters can ensure that the promised benefits of CMMI®-based improvement are realized.
How to Evaluate Solutions and Build your Evaluation Committee (Blytheco)
In the fourth installment of the series "Are You Ready for Replatforming?", we take a look at a formalized process for creating criteria and steps for making an ERP or CRM solution transition, including who should be involved in the process and how they should participate.
Detailed concepts of the Plan Do Check Act Process – Critical to achieving an... (ASQ Buffalo NY)
The PDCA process has been the most widely used process and management system improvement methodology world-wide and the foundation of virtually every ISO standard developed.
PDCA (plan–do–check–act) is an iterative four-step management method used in business for the control and continuous improvement of processes and products. When used and managed properly, this process can go a long way in ensuring you meet your organizational goals.
Julie Gardiner - Branch out using Classification Trees for Test Case Design - ... (TEST Huddle)
EuroSTAR Software Testing Conference 2010 presentation on Branch out using Classification Trees for Test Case Design by Julie Gardiner. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
71. San Diego County Regional Airport Tradeoff Study
This tradeoff study has cost $17 million.
http://www.san.org/authority/assp/index.asp
http://www.san.org/airport_authority/archives/index.asp#master_plan
The most common term is Trade Study. I’ve also seen Trade-off Study. But I prefer Tradeoff Study.
Asterisks (*) in the title or in individual bullets indicate that there are comments for the instructor in the notes section.
This slide is intended for the instructor, not the students.
This slide is intended for the instructor, not the students.
Telephones are in your pockets and purses.
Slides with a red title are overview slides, each bullet will be discussed with several subsequent slides.
The purpose of these slides is to show the big picture and where tradeoff studies fit into the big picture.
The top level activity is CMMI, DAR is one process area of CMMI, and tradeoff studies are one technique of DAR.
When I give a quote without a source, I am usually the author. I just put the quote marks around it to make it seem more important.
The left column has the CMMI specific practices associated with DAR.
Perform Decision Analysis and Resolution PS0317
Perform Formal Evaluation PD0240
When designing a process, put as many things in parallel as possible.
I seldom make such bold statements.
The task of allocating resources is not a tradeoff study, but it certainly would use the results of a tradeoff study.
The quote is probably from CMMI.
Give the students a copy of the letter, which is available at www.sie.arizona.edu/sysengr/slides/tradeoffMath.doc, page 24.
Ref: Decide Formal Evaluation
Ref: Guide Formal Evaluations
Ref: Guide Formal Evaluations
Ref: Establish Evaluation Criteria
Some people will do a tradeoff study when buying a house or a car, but seldom for lesser purchases.
All companies should have a repository of good evaluation criteria that have been used.
Each would contain the following slots:
Name of criterion
Description
Weight of importance (priority)
Basic measure
Units
Measurement method
Input (with expected values or the domain)
Output
Scoring function (type and parameters)
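The repository entry described by the slots above can be sketched as a small record type. This is only an illustration: the field names, the "Cost" example and its linear scoring function are made up, not taken from any particular tool.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# One repository entry for an evaluation criterion, following the
# slots listed above. Field names are illustrative only.
@dataclass
class Criterion:
    name: str                                  # name of criterion
    description: str
    weight: float                              # weight of importance (priority)
    basic_measure: str
    units: str
    measurement_method: str
    input_domain: Tuple[float, float]          # input with expected values (the domain)
    output: str
    scoring_function: Callable[[float], float] # maps a measure to a 0-1 score

# Hypothetical "Cost" entry with a simple linear "less is better"
# scoring function over $0-$20.
cost = Criterion(
    name="Cost",
    description="Price of the meal in dollars",
    weight=0.4,
    basic_measure="price",
    units="USD",
    measurement_method="menu lookup",
    input_domain=(0.0, 20.0),
    output="score in [0, 1]",
    scoring_function=lambda price: max(0.0, 1.0 - price / 20.0),
)

print(cost.scoring_function(5.0))  # 0.75
```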
Evaluation criteria: Cost, Preparation Time, Tastiness, Novelty, Low Fat, Contains the Five Food Groups, Complements Merlot Wine, Distance to Venue, Length of Line, Messiness, Who You Are Eating With (if it’s your Mormon boss, you should forgo the beer)
If you get them wrong, you’ll get the rhinoceros instead of the chocolate torte.
*If these very important requirements are performance related, then they are called key performance parameters.
**Killer criteria for today’s lunch: must be vegetarian, non alcoholic, Kosher, diabetic,
*The Creativity Tools Memory Jogger, by D. Ritter & M. Brassard, GOAL/QPC 1998, explains several tools for creative brainstorming.
**If a requirement cannot be traded off then it should not be in the tradeoff study.
***The make-reuse-buy process is a part of the Decision Analysis and Resolution (DAR) process.
Candidate meals: pizza, hamburger, fish & chips, chicken sandwich, beer, tacos, bread and water.
Be sure that you consider left-overs in the refrigerator.
Ref: Select Evaluation Methods
Additional sources include customer statements, expert opinion, historical data, surveys, and the real system.
Ref: Evaluate Alternatives
Ref: Select Preferred Solutions
Ref: Expert Review of Trade off Studies
Note that this slide says that the formal evaluations should be reviewed.
It does not say that the results of the formal evaluations should be reviewed.
IPT stands for integrated product team or integrated product development team.
These results might be the preferred alternatives,
or they could be recommendations to expand the search, re-evaluate the original problem statement, or negotiate goals and capabilities with the stakeholders.
A most important part of these results is the sensitivity analysis.
Slide 46 lists some possible methods.
The title of this slide is the example that we will present in the next 18 slides.
In these next 18 slides, the phrases in pink will be the DAR specific practices (rectangular boxes of the process diagram) we are referring to.
Some people get confused by the recursion in this example.
The May-June 2007 issue of the American Scientist says recursive thinking is the only thing that distinguishes humans from animals.
I do a tradeoff study to select a tradeoff study tool.
*MAUT was originally called Multicriterion Decision Analysis. The first complete exposition of MCDA was given in 1976 by Keeney, R. L., & Raiffa, H. Decisions With Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, reprinted, Cambridge University Press, 1993.
**AHP is often implemented with the software tool Expert Choice.
Sorry if this is confusing, but this example is recursive.
MAUT and AHP are both the alternatives being evaluated and the methods being used to select the preferred alternatives.
In this example we are not using scoring functions, therefore the evaluation data are the Scores.
The evaluation data are derived from approximations, models, simulations or experiments on prototypes.
Typically the evaluation data are normalized on a scale of 0 to 1 before the calculations are done: for simplicity, we have not done that here.
The numbers in this example indicate that MAUT is twice as easy to use as AHP.
Weights are usually based on expert opinion or quantitative decision techniques.
Typically the weights are normalized on a scale of 0 to 1 before the calculations are done: I did not do that here.
How did we get the weights of importance? I pulled them out of the blue sky. Is there a systematic way to get weights? Yes, there are many. One is the AHP.
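The combination step described in the preceding notes can be sketched as a weighted sum, with the weights normalized to sum to one before combining. The criteria, scores and weights below are made-up illustrations, not data from the course example.

```python
# Minimal weighted-sum (MAUT-style) combination. All numbers are
# illustrative; weights are normalized before use, as the notes recommend.
weights = {"ease of use": 4, "cost": 2, "quality": 2}
scores = {
    "MAUT": {"ease of use": 0.8, "cost": 0.6, "quality": 0.7},
    "AHP":  {"ease of use": 0.4, "cost": 0.9, "quality": 0.7},
}

# Normalize the weights so they sum to 1.
total_w = sum(weights.values())
norm_w = {c: w / total_w for c, w in weights.items()}

# Alternative rating = sum over criteria of (weight * score).
ratings = {alt: sum(norm_w[c] * s[c] for c in norm_w)
           for alt, s in scores.items()}

for alt, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(alt, round(r, 3))
```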
If you had ten criteria, then this matrix would be ten by ten.
Remember the numbers in the right column. They will go into the matrix seven slides from here.
Expert Choice has two methods for normalization, and they often give slightly different numbers.
It might be difficult to square large matrices, so Saaty (1980) gave 4 approximation methods.
AHP, exact solution
Raise the preference matrix (with forced reciprocals) to arbitrarily large powers, and divide the sum of each row by the sum of the elements of the matrix to get a weights column. (Dr. Bahill’s example, with a power of 2)
To compute the Consistency Index:
Multiply preference matrix by weights column
Divide the elements of this new column by the elements in the weights column
Sum the components and divide by the number of components. This gives λmax (called the maximum or principal eigenvalue).
The closer λmax is to n, the number of rows in the preference matrix, the more consistent the result.
Deviation from consistency may be represented by the Consistency Index (C.I.) = (λmax – n)/(n – 1)
Calculating the average C.I. from many randomly generated preference matrices gives the Random Index (R.I.),
which depends on the number of preference matrix columns (or rows):
1,0.00; 2,0.00; 3,0.58; 4,0.90; 5,1.12; 6,1.24; 7,1.32; 8,1.41; 9,1.45; 10,1.49; 11,1.51; 12,1.48; 13,1.56; 14,1.57; 15,1.59.
The ratio of the C.I. to the average R.I. for the same order matrix is called the Consistency Ratio (C.R.). A Consistency Ratio of 0.10 or less is considered acceptable.
Saaty, T. L. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. New York, McGraw-Hill, 1980.
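The Consistency Ratio steps above can be sketched in code. The 3×3 preference matrix is an illustrative, perfectly consistent example; the weights are obtained with the squared-matrix approximation mentioned in the notes (a power of 2), and the Random Index values come from the table above.

```python
# Sketch of the Consistency Ratio calculation described in the notes.
def consistency_ratio(matrix):
    n = len(matrix)
    # Square the preference matrix (power of 2) and take normalized
    # row sums as the weights column.
    sq = [[sum(matrix[i][k] * matrix[k][j] for k in range(n))
           for j in range(n)] for i in range(n)]
    total = sum(sum(row) for row in sq)
    weights = [sum(row) / total for row in sq]
    # Multiply the preference matrix by the weights column,
    # divide element-wise by the weights, and average -> lambda_max.
    col = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam_max = sum(col[i] / weights[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)                # Consistency Index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Random Index (from the table above)
    return ci / ri                              # Consistency Ratio

# Perfectly consistent matrix: A = 2B, B = 2C, so A = 4C.
consistent = [[1,   2,   4],
              [1/2, 1,   2],
              [1/4, 1/2, 1]]
print(consistency_ratio(consistent))  # essentially 0.0
```

For a consistent matrix λmax equals n, so the ratio is zero; a C.R. above 0.10 would flag the judgments as too inconsistent.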
Saaty gives 4 approximation methods:
The crudest: Sum the elements in each row and normalize by dividing each sum by the total of all the sums, thus the results now add up to unity. The first entry of the resulting vector is the priority of the first activity (or criterion); the second of the second activity and so on.
Better: Take the sum of the elements in each column and form the reciprocals of these sums. To normalize so that these numbers add to unity, divide each reciprocal by the sum of the reciprocals.
Good: Divide the elements of each column by the sum of that column (i.e., normalize the column) and then add the elements in each resulting row and divide this sum by the number of elements in the row. This is a process of averaging over the normalized columns. (Dr. Goldberg’s example)
Good: Multiply the n elements in each row and take the nth root. Normalize the resulting numbers.
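Two of the approximation methods listed above can be sketched directly; the reciprocal matrix below is illustrative. For a perfectly consistent matrix the two methods agree exactly.

```python
import math

def weights_by_column_average(m):
    # "Good": normalize each column (divide by its sum), then average
    # the elements across each resulting row.
    n = len(m)
    col_sums = [sum(m[i][j] for i in range(n)) for j in range(n)]
    return [sum(m[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

def weights_by_geometric_mean(m):
    # "Good": multiply the n elements in each row, take the nth root,
    # then normalize the results to sum to unity.
    n = len(m)
    gm = [math.prod(row) ** (1.0 / n) for row in m]
    total = sum(gm)
    return [g / total for g in gm]

matrix = [[1,   2,   4],
          [1/2, 1,   2],
          [1/4, 1/2, 1]]

print(weights_by_column_average(matrix))
print(weights_by_geometric_mean(matrix))
# Both give weights proportional to 4 : 2 : 1 for this consistent matrix.
```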
Obviously you really want the inverse of price. All criteria must be phrased as more is better.
Filling in this table is an in-class exercise
All of the students should get this far.
If you think that tastiness is moderately less important than price, then you could put in 1/3 or -3 depending on the software you are using.
Some of the students might do this.
Remember the numbers in the right column. They will go into the matrix two slides from here.
Remember the numbers in the right column. They will go into the matrix on the next slide.
*The AHP software (Expert Choice) can also use the product combining function.
Of course there is AHP software (e. g. Expert Choice) that will do all of the math for you.
**The original data had only one significant figure, so these numbers should be rounded to one digit after the decimal point.
The AHP software computes an inconsistency index. If A is preferred to B, and B is preferred to C, then A should be preferred to C. AHP detects intransitivities and presents them as an inconsistency index.
The result is robust.
For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1.
This function gives more weight to the weights of importance.
We only care about absolute values.
If the sensitivity is positive it means when the parameter gets bigger, the function gets bigger.
If the sensitivity is negative it means when the parameter gets bigger, the function gets smaller.
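The sign convention above can be illustrated with a finite-difference estimate: perturb one parameter and watch the sign of the change in the performance index. The rating function and its numbers below are made up for illustration.

```python
# Illustrative performance index: a weighted sum of two fixed scores.
def rating(w_cost, w_taste, cost_score=0.6, taste_score=0.9):
    return w_cost * cost_score + w_taste * taste_score

def sensitivity(f, base, delta=1e-6):
    # Central finite-difference estimate of d f / d parameter.
    return (f(base + delta) - f(base - delta)) / (2 * delta)

# Sensitivity of the rating to the taste weight, cost weight held fixed.
s = sensitivity(lambda w: rating(0.4, w), 0.6)
print(s)  # positive: a bigger taste weight makes the rating bigger
```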
Improve the DAR process.
Add some other techniques, such as AHP, to the DAR web course, not done yet
Fix the utility curves document, done by Harley Henning Spring 2005
Add image theory to the DAR process, proposed for summer 2007
Change linkages in the documentation system, done Fall 2004
Create a course, Decision Making and Tradeoff Studies, done Fall 2004
This example should be familiar to the students.
It shows that tradeoff studies really are done.
The web site used to have a really good tradeoff study right up front.
You cannot read this slide.
It shows the tree structure of the criteria.
It is expanded in the next 4 slides.
This section is the heart of this course.
It is intended to teach the students how to do a good tradeoff study.
so that the decision maker can trust the results of a tradeoff study
The god Anubis weighs the heart of the dead against Maat's feather of truth.
If your heart doesn’t balance with the feather of truth, then the crocodile monster eats you up.
Back in the Image Theory section we said there were two types of decisions.
Adoption decisions determine whether to add new goals to the trajectory image or new plans to the strategic image. This could include Allocating resources.
Progress decisions determine whether a plan is making progress toward achieving a goal. This could include Making plans.
The complete design of a Pinewood Derby is given in chapter 5 of Chapman, W. L., Bahill, A. T., and Wymore, A.W., Engineering Modeling and Design, CRC Press Inc., Boca Raton, FL, 1992, which is located at
http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
This is only a fragment of the Pinewood Derby tradeoff study.
In football and baseball the managers do tradeoff studies to select each play,
except at the beginning of some football games where they have a preplanned sequence of plays.
In basketball they select plays with tradeoff studies only a few times per game.
One of my friends (from India) argued with me about the comment on selecting a husband or wife.
You should do tradeoff studies at the very beginning of the design process, but you also do tradeoff studies throughout the whole system life cycle.
The 80-20 principle was invented by Juran and attributed to Pareto in the 1st ed of Juran’s Quality Control Handbook.
Much later in his article, Mea Culpa, he comments on the development of his idea,
and notes that many quality colleagues urged him to correct the attribution.
The original data for this slide come from a Toyota auto manufacturing report, from around 1985.
The last bullet provides a segue to the next topic, “Well how do people think?”
Assume you are going to lunch in Little Italy or on Coronado Island and you don’t know any of the restaurants in the area.
You drive along until you get “close enough” and then decide to take the next parking space you see.
You don’t do a tradeoff study of parking lots and different on-street areas.
You park your car.
Then you walk along and look at restaurant-1. Let’s say that you decide that it is not satisfactory.
You look at restaurant-2. Let’s say that you decide that it is not satisfactory.
You look at restaurant-3. Let’s say that you find it to be satisfactory. But you keep on looking.
You look at restaurant-4 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-4.
You look at restaurant-5 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-5.
You look at restaurant-6 and you compare it to restaurant-3. Let’s say that you decide that restaurant-3 is better than restaurant-6.
Now let’s assume that your friends say that they are hungry and tired and they don’t want to look any more.
You probably go back to restaurant-3.
You never considered doing a tradeoff study of all six restaurants.
At the most you did pair-wise comparisons.
Driving down a freeway looking for a gas station, I might see a gas station with a price of $2.60 per gallon.
I would say that is too expensive. The next gas station might ask $2.65, I would also pass that one by.
However, I might start to run out of gas, and then see a station offering $2.70 per gallon.
I would take it, because the expense of going back to the first station would be too high.
T. D. Seeley, P. K. Visscher and K. M. Passino, Group Decision Making in Honey Bee Swarms, American Scientist, 94(3): 220-229, May-June 2006.
Customers of eBay might use either strategy.
At first I asked my wife and niece to look for Tinkertoy kits on eBay and let me know what was available.
Then I switched strategies and said, Buy any kit you see that contains a yellow figure or a red lid.
Often we need a burning platform to get people to move.
There is one goal and everyone agrees upon it.
DMs have unlimited information and the cognitive ability to use it efficiently. They know all of the opportunities open to them and all of the consequences.
The optimal course of action can be described and it will, in the long run, be more profitable than any other.
A synonym often used for prescriptive model is normative model. In contrast a descriptive model explains what people actually do.
Von Neumann and Morgenstern (1947)
Systems engineers do not seek optimal designs, we seek satisficing designs.
Systems engineers are not philosophers.
Philosophers spend endless hours trying to phrase a proposition so that it can have only one interpretation.
SEs try to be unambiguous, but not at the cost of never getting anything written.
H. A. Simon, A behavioral model of rational choice, Quarterly Journal of Economics, 59, 99-118, 1955.
Our first example of irrationality is that often we have wrong information in our heads.
What American city is directly north of Santiago Chile?
Most Americans would say that New Orleans or Detroit is north of Santiago, instead of Boston.
Or, if you travel from Los Angeles to Reno Nevada, in what direction would you travel?
Most Americans would suggest that Reno is northeast of LA, instead of northwest.
Which end of the Panama canal is farther West the Atlantic side or the Pacific side?
Most Americans would say the Pacific.
These examples were derived from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
The previous slide gave examples of one type of cognitive illusion.
In the next slides we will give examples of a few more types.
A couple dozen more types are given in
Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
Probably the most famous and most studied optical illusion was created by German psychiatrist Franz Müller-Lyer in 1889.
Which of the two horizontal line segments is longer?
Although your visual system tells you that the one on the left is longer, a ruler will confirm that they are equal in length.
Do you think that the slide's title is centered? It is.
Stare at the black cross. Where do the green dots come from?
This illusion is from http://www.patmedia.net/marklevinson/cool/cool_illusion.html
The illusion only works in PowerPoint presentation mode.
Concentrate on the black "+" in the centre of the picture. If you stare at it, the moving dot turns green.
After a short period, all the pink dots will slowly disappear, and you will see only a single green dot rotating.
Another good web site for visual illusions is http://www.socsci.uci.edu/~ddhoff/
The upper-left quadrant is defined as rational behavior.
EV means expected value. SEV is subjective expected value.
In the next slides we will show how human behavior differs from rational behavior.
Edwards, W., "An Attempt to Predict Gambling Decisions," Mathematical Models of Human Behavior, Dunlap, J.W. (Editor),
Dunlap and Associates, Stamford, CT, 1955, pp. 12-32.
People overestimate events with low probabilities, like being killed by a terrorist or in an airplane crash,
and underestimate high probability events, such as adults dying of cardiovascular disease.
The existence of state lotteries depends upon such overestimation of small probabilities.
At the right side of this figure,
the probability of a brand new car starting every time is very close to 1.0. But a lot of people put jumper cables in the trunk and buy memberships in AAA.
M. G. Preston and P. Baratta, An experimental study of the auction-value of an uncertain outcome, American Journal of Psychology, 61, pp. 183-193, 1948.
Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185.
Tversky and Kahneman, (1992)
Drazen Prelec, in D. Kahneman & A. Tversky (Eds.) “Choices, Values and Frames” (2000)
Animals exhibit similar behavior.
People overestimate low probabilities and do not distinguish much between intermediate probabilities. Rats show this pattern too (Kagel 1995).
People are more risk-averse when the set of gamble choices is better.
But humans also violate this pattern, and so do rats (Kagel 1995).
People also exhibit “context-dependence”: Whether A is chosen more often than B can depend on the
presence of an irrelevant third choice C (which is dominated and never chosen).
Context dependence means people compare choices within a set rather than assigning separate numerical utilities.
Honeybees exhibit the same pattern (Shafir, et al. 2002).
Animals are also risk averse, as defined about a dozen slides from here.
John Kagel, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995.
S. Shafir, T. M. Waite and B. H. Smith. “Context-dependent violations of rational choice in honeybees (Apis mellifera) and gray jays (Perisoreus
canadensis).” Behavioral Ecology and Sociobiology, 2002, 51, 180-187.
Every year 50 Americans die of cardiovascular disease for every one that dies of AIDS.
Humans are not good at computing probabilities, as is illustrated by the Monty Hall Paradox. This paradox was invented by Martin Gardner and published in his Scientific American column in 1959. It is called the Monty Hall paradox because of its resemblance to the TV show Let’s Make a Deal. I have taken this version from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
I am running a game that I can repeat hundreds of times.
On a table in front of me are a stack of ten-dollar bills and three identical boxes, each with a lid.
You are my subject.
Here are the rules for each game.
You leave the room and while you are out, I put a ten-dollar bill in one of the three boxes.
Then I close the lids on the boxes.
I know which box contains the ten-dollar bill, but you don’t.
Now I invite you back into the room and you try to guess which box contains the money.
If you guess correctly, you get to keep the ten-dollar bill.
Each game is divided into two phases.
In the first phase, you point to your choice.
(You cannot open, lift, weigh, shake or manipulate the boxes.)
The boxes remain closed.
After you make your choice, I open one of the two remaining boxes.
I will always open an empty box (remember that I know where the ten-dollar bill is).
Having seen one empty box (the one that I just opened) you now see two closed boxes, one of which contains the ten-dollar bill.
Leave this slide up for a while and let people discuss what they think.
This explanation is from Massimo Piattelli-Palmarini, Inevitable illusions: how mistakes of reason rule our minds, John Wiley & Sons, 1994.
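The counterintuitive answer (switching wins two-thirds of the time) is easy to verify empirically. Here is a minimal Monte Carlo sketch of the game exactly as described above; the box numbering is arbitrary:

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of the three-box game: returns True if the player wins."""
    boxes = [0, 1, 2]
    prize = rng.choice(boxes)        # I hide the ten-dollar bill
    first_pick = rng.choice(boxes)   # you point to a box
    # I always open an empty box that is not your pick (I know where the bill is).
    opened = rng.choice([b for b in boxes if b != prize and b != first_pick])
    if switch:
        # You switch to the one remaining closed box.
        final_pick = next(b for b in boxes if b not in (first_pick, opened))
    else:
        final_pick = first_pick
    return final_pick == prize

rng = random.Random(42)
trials = 100_000
stay_wins = sum(play(switch=False, rng=rng) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True, rng=rng) for _ in range(trials)) / trials
# Staying wins about 1/3 of the time; switching wins about 2/3.
```

Running this a hundred thousand times makes the point far faster than arguing about it.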
This table explains three bets: A, B and C. The p's are the probabilities, the x's are the outcomes, μ is the mean and σ is the standard deviation. This table shows, for example, that half the time bet C would pay $1 and the other half of the time it would pay $19. Thus, this bet has an expected value of $10 and a standard deviation of $9. This is a comparatively big spread, so the risk (or uncertainty) is said to be high. Most people prefer the A bet, the certain bet.
To model risk averseness across different situations the coefficient of variability is often better than variance.
Coefficient of variability = (Standard Deviation) / (Expected Value).
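Using bet C from the table ($1 or $19, each with probability one half), the coefficient of variability works out as follows; the helper is just the standard mean and standard-deviation arithmetic:

```python
import math

def bet_stats(outcomes, probs):
    """Mean, standard deviation, and coefficient of variability of a bet."""
    mean = sum(p * x for p, x in zip(probs, outcomes))
    variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
    sd = math.sqrt(variance)
    return mean, sd, sd / mean  # coefficient of variability = SD / EV

# Bet C: pays $1 or $19, each with probability 0.5.
mean_c, sd_c, cv_c = bet_stats([1, 19], [0.5, 0.5])
# mean = $10, standard deviation = $9, coefficient of variability = 0.9
```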
In choosing between alternatives that are identical with respect to quantity (expected value) and quality of reinforcement, but that differ with respect to probability of reinforcement, humans, rats (Battalio, Kagel and MacDonald, 1985), bumblebees (Real, 1991), honeybees (Shafir, Waite and Smith, 2002) and gray jays (Shafir, Waite and Smith, 2002) prefer the alternative with the lower variance.
To avoid the confusion caused by systems engineers and decision theorists using the word risk in two different ways, we can avoid the word risk altogether and instead use ambiguity, uncertainty and hazards.
J. H. Kagel, R. C. Battalio and L. Greene, Economic Choice Theory: An Experimental Analysis of Animal Behavior, Cambridge University Press, 1995.
A little while ago, a wild fire was heading toward our house. We packed our car with our valuables, but we did not have room to save everything, so I put my wines in the swimming pool. We put the dog in the car and drove off. When we came back, the house was burned to the ground, but the swimming pool survived. However, all of the labels had soaked off of the wine bottles. Tonight I am giving a dinner party to celebrate our survival. I am serving mushrooms that I picked in the forest while we were waiting for the fire to pass. There may be some hazard here, because I am not a mushroom expert. We will drink some of my wine: therefore, there is some uncertainty here. You know that none of my wines are bad, but some are much better than others. Finally I tell you that my sauce for the mushrooms contains saffron and oyster sauce. This produces ambiguity, because you probably do not know what these ingredients taste like. How would you respond to each of these choices?
Hazard: Would you prefer my forest-picked mushrooms or portabella mushrooms from the grocery store?
Uncertainty: Would you prefer one of my wines or a Kendall-Jackson merlot?
Ambiguity: Would you prefer my saffron and oyster sauce or marinara sauce?
Decisions involving these three concepts are probably made in different parts of the brain. Hsu, Bhatt, Adolphs, Tranel and Camerer [2005] used the Ellsberg paradox to explain the difference between ambiguity and uncertainty. They gave their subjects a deck of cards and told them it contained 10 red cards and 10 blue cards (the uncertain deck). Another deck had 20 red or blue cards, but the percentage of each was unknown (the ambiguous deck). The subjects could take their chances drawing a card from the uncertain deck: if the card was the color they predicted, they won $10; otherwise they got nothing. Or they could just take $3 and quit. Most people picked a card. Then the subjects were offered the same bets with the ambiguous deck. Most people took the $3, avoiding the ambiguous decision. Hsu et al. recorded functional magnetic resonance images (fMRI) of the brain while their subjects made these decisions. While contemplating decisions about the uncertain deck, the dorsal striatum showed the most activity, and while contemplating decisions about the ambiguous deck, the amygdala and the orbitofrontal cortex showed the most activity.
Ambiguity, uncertainty and hazards are three different things. But people prefer to avoid all three.
This slide also shows saturation.
This slide also shows the importance of the reference point: $10 to a poor man means a lot more than $10 to a rich man.
Kahneman, D. and Tversky, A., Prospect Theory: An Analysis of Decision under Risk, Econometrica 46 (2) (1979), 171-185.
Massimo would prefer that we label the ordinate and abscissa as subjective worth and numerical value.
The $2 bet means I put down a $2 bill and flip a coin to see if you get it or not.
The $1 bet means I give you one dollar and a state lottery ticket. If the lottery ticket is a winner, you keep the $1 million, else you keep the dollar bill.
The $3 bet has consequences that you might have to give me two million dollars.
The $1 bet has the highest utility for most engineers.
The message of this slide can be dramatically demonstrated with two $2 bills, a coin, two $1 bills, a lottery ticket and the last two slides of this presentation.
Savage (1954)
Kahneman got the Nobel Prize in 2002 for his part in developing Prospect Theory.
Prospect theory is often called a descriptive model for human decision making.
In the last two dozen slides, we showed how human behavior differed from rational behavior. Next we are going to show that tradeoff studies can help move you toward more rational decisions.
Evaluation data for evaluation criteria come from approximations, product literature, analysis, models, simulations, experiments and prototypes.
This is a template that can be used for criteria.
This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
A lot of confusion has been caused by failure to differentiate between the name of the criterion and the basic measure for that criterion.
As in this case, the words are often very similar.
At this point it might also be useful to differentiate between metric and measure.
Measure. A measure indicates the degree to which an entity possesses and exhibits a quality or an attribute. A measure has a specified method, which when executed produces values (or metrics) for the measure.
Metric. A measured, calculated or derived value (or number) used by a measure to quantify the degree to which an entity possesses and exhibits a quality or an attribute.
Measurement. A value obtained by measuring, which makes it a type of metric.
Spend some time on this criterion, because we will bring it back later.
Monotonic increasing, lower=0, baseline=90, slope=0.1, upper=100, plot limits 70 to 100.
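The quoted parameters (baseline = 90, where the score is 0.5, and slope = 0.1 at the baseline) can be sketched with a logistic curve. This is an assumption for illustration; the study's own standard scoring function may use a different formula, but the logistic with these parameters reproduces the example values quoted later in these notes (input 88% gives 0.31, input 91% gives 0.6):

```python
import math

def scoring_function(x, baseline=90.0, slope=0.1, lower=0.0, upper=100.0):
    """Monotonic increasing scoring function sketched as a logistic curve.
    The score is 0.5 at the baseline and has the given slope there;
    k = 4 * slope because a logistic's slope at its midpoint is k / 4."""
    x = min(max(x, lower), upper)  # clip the input to its valid range
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (x - baseline)))

s88 = scoring_function(88)  # about 0.31
s91 = scoring_function(91)  # about 0.6
```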
This example comes from the Pinewood Derby study located at http://www.sie.arizona.edu/sysengr/pinewood/pinewood.pdf
This second example was chosen to highlight the difference between the name of the criterion and the basic measure for that criterion.
This Pinewood Derby chapter is from W. L. Chapman, A. T. Bahill and A. W. Wymore, Engineering modeling and design, CRC Press, Boca Raton, 1992.
The reason we are using such an old reference is to show that we didn’t just jimmy up the example.
It has been around for a long time.
Of course, it depends on the circumstances.
If availability were a probabilistic value, then it could be used.
Perhaps like going to the library to get a copy of the latest best-selling book.
These are sometimes hierarchical, with attributes, criteria and then objectives.
But an SEI paper says criteria contain attributes and objectives.
Other MoPs could be overall GPA, GPA in the major, extracurricular activities, summer internships, number of undergraduate credits, number of graduate credits, honorary societies, special awards and semesters in the program.
From left to right, Moe Howard, Jerry (Curley) Howard and Larry Fine.
If you are not using a scoring function, then instead of Total Life Cycle Cost, use the negative or the reciprocal.
When we showed people the top curve and asked, “How would you feel about an alternative that gave 90% happy scouts?” they typically said, “It’s pretty good.”
In contrast, when we showed people the bottom curve and asked, “How would you feel about an alternative that gave 10% happy scouts?” they typically said, “It’s not very good.”
When we allowed them to change the parameters, they typically pushed the baseline for the Percent Unhappy Scouts scoring function to the left.
The solution to this problem is to group all of the husband’s criteria into one higher level criterion called power.
The deprecated words maximize and minimize should not be used in requirements, but they are OK in goals.
On the other hand we could rewrite this as
Selection criteria: The preferred alternative will be the one that produces the largest amount of food.
I would like to have a rich, intransitive uncle.
Assume that I have an Alfa Romeo and a BMW.
And my Uncle has a Corvette.
I would love to hear him say,
“I prefer your BMW to my Corvette, therefore I will give you $2000 and my Corvette for your BMW.”
Next he might say,
“I prefer your Alfa Romeo to my BMW, therefore I will give you $2000 and my BMW for your Alfa Romeo.”
And finally I would wait with bated breath for him to say,
“I prefer your Corvette to my Alfa Romeo, therefore I will give you $2000 and my Alfa Romeo for your Corvette.”
We would now have our original cars, but I would be $6000 richer.
I would call him Uncle Money Pump.
This example can start with any car and go in either direction. The only trick is that you must go in a circle.
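The money-pump cycle can be traced step by step. The $2000 per trade and the three cars are from the story above:

```python
# Start: I own the Alfa Romeo and the BMW; my uncle owns the Corvette.
my_cars = {"Alfa Romeo", "BMW"}
uncles_cars = {"Corvette"}
my_profit = 0

# Each trade: my intransitive uncle pays me $2000 and swaps the car he
# holds for the one of mine he "prefers".  (wants, gives) pairs below.
trades = [("BMW", "Corvette"), ("Alfa Romeo", "BMW"), ("Corvette", "Alfa Romeo")]
for wants, gives in trades:
    my_cars.remove(wants)
    uncles_cars.add(wants)
    uncles_cars.remove(gives)
    my_cars.add(gives)
    my_profit += 2000

# After one full circle everyone holds their original cars,
# but I am $6000 richer.
```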
The NAND operator is not associative.
The “A Prioritization Process” paper explains why each of these aspects is important.
Read that paper before discussing this slide.
Botta, Rick, and A. Terry Bahill, “A Prioritization Process,” Engineering Management Journal, 19:4 (2007), pp. 20-27.
Mnemonic: ordinal is ordering, as in rank ordering.
*Those bullets are ORed.
*The systems engineer should derive straw-man priorities for all of the criteria. These priorities shall be numbers (usually integers) in the range of 0 to 10, where 10 is the most important. Then he or she should meet with the customer (however many people that might be). For each criterion, the systems engineer should lead a discussion of the criteria in the above table and then try to get a consensus for the priority. In the first pass, he or she might ask each stakeholder to evaluate each criterion and take the average value. However, if the customer only looks at one or two criteria and says a criterion is a 10, then it's a 10.
*Yes rank ordering gives ordinal numbers not cardinal numbers, but often the technique works well.
*The systems engineer can help the customer make pair-wise comparisons of all the criteria and then use the analytic hierarchy process to derive the priorities. This would not be feasible without a commercial tool such as Expert Choice. This tool is discussed in Ref: COTS-Based Engineering Design of a Tradeoff Study Tool.
*One algorithmic technique is on Karl Wiegers’ web site.
*If all of the alternatives are very close on a criterion, then you might want to discount (give a low weight to) that criterion.
Many other methods for deriving weights exist, including: the ratio method [Edwards, 1977], tradeoff method [Keeney and Raiffa, 1976], swing weights [Kirkwood, 1992], rank-order centroid techniques [Buede, 2000], and paired comparison techniques discussed in Buede [2000] such as the Analytic Hierarchy Process [Saaty, 1980], trade offs [Watson and Buede, 1987], balance beam [Watson and Buede, 1987], judgments and lottery questions [Keeney and Raiffa, 1976]. These methods are more formal and some have an axiomatic basis. For a comparison of weighting techniques, see Borcherding, Eppel, and Winterfeldt [1991].
K. Borcherding, T. Eppel, D. von Winterfeldt, Comparison of weighting judgments in multiattribute utility measurement, Management Science 37: 1603-1619, 1991.
D. Buede, The Engineering Design of Systems, John Wiley, New York, 2000.
W. Edwards, How to Use Multiattribute Utility Analysis for Social Decision Making, IEEE Trans Syst Man Cybernetics, SMC-7: 326-340, 1977.
R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, John Wiley, New York, 1976.
C. W. Kirkwood, Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets, Duxbury Press, Belmont, 1997.
T. L. Saaty, The Analytical Hierarchy Process, McGraw-Hill, New York, 1980.
S. R. Watson, and D. M. Buede, Decision Synthesis: The Principles and Practice of Decision Analysis, Cambridge University Press, Cambridge, UK, 1987.
The method of swing weighting is based on comparisons: how does the swing from 0 to 10 on one preference scale compare to the 0-to-10 swing on another? Assessors should take into account both the difference between the least and most preferred options, and how much they care about that difference. For example, in purchasing a car, you might consider its cost to be important. However, in a particular tradeoff study for a new car, you might have narrowed your choice to a few cars. If they differ in price by only $400, you might not care very much about price. That criterion would receive a low weight because the difference between the highest- and lowest-price cars is so small.
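A minimal sketch of turning swing judgments into weights, assuming the common normalize-to-one convention; the criteria and the 0-to-10 ratings below are hypothetical, chosen to echo the car-purchase example:

```python
def swing_weights(swings):
    """Normalize 0-to-10 swing judgments into weights that sum to 1.
    'swings' maps each criterion to how much the assessor cares about
    moving it from its worst to its best level (most important = 10)."""
    total = sum(swings.values())
    return {criterion: s / total for criterion, s in swings.items()}

# Hypothetical judgments: the $400 price spread barely matters,
# so price gets a small swing rating and hence a small weight.
weights = swing_weights({"reliability": 10, "comfort": 6, "price": 1})
```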
D. Redelmeier, and E. Shafir, Medical decision making in situations that offer multiple alternatives, Journal of the American Medical Association, 273(4) (1995), 302-305.
A sacred cow is an idea that is unreasonably held to be immune to criticism.
Saving the spotted owl, gnatcatchers, the Ferruginous Pigmy Owl and putting out all forest fires have been sacred cows to environmentalists.
Most things that are termed politically correct are sacred cows.
In Tucson, all transportation proposals contain the light rail alternative, because the lobby for this technology is very strong.
*G. A. Miller, The magical number seven, plus or minus two: some limits on our capacity for processing information,
The Psychological Review, 1956, vol. 63, pp. 81-97, www.well.com/user/smalin/miller.html.
** D.A. Redelmeier and E. Shafir, Medical decision making in situations that offer multiple alternatives, JAMA, Jan. 25, 1995, 273 (4) 302-305.
CAIV is only used in the requirements phase. After the requirements are set it is too late.
Near the end of this process the data will be quantitative and objective.
But in the beginning they will be based on personal opinion of domain experts.
There are techniques to help get such data from the experts.
The literature on this topic is called preference elicitation (see Chen and Pu, 20xx).
Cardinal measures indicate size or quantity. They were introduced about 15 slides ago.
Fuzzy numbers will be discussed about 40 slides from here.
Input of 88% produces output of 0.31. Input of 91% produces output of 0.6.
The Bad example is just a linear transformation. You can do better than that.
The output is intended to be cardinal (not ordinal) numbers. That is, an output of 0.8 is intended to be twice as important as an output of 0.4.
The purpose of this slide is to show that different combining methods can produce different preferred alternatives.
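A toy illustration of how the sum and product combining functions can disagree; the two alternatives and their scores are hypothetical:

```python
def weighted_sum(scores, weights):
    """Sum combining function: sum of weight times score."""
    return sum(w * s for w, s in zip(weights, scores))

def weighted_product(scores, weights):
    """Product combining function: product of score raised to its weight."""
    result = 1.0
    for w, s in zip(weights, scores):
        result *= s ** w
    return result

weights = [0.5, 0.5]
alt_a = [0.9, 0.2]   # strong on one criterion, weak on the other
alt_b = [0.5, 0.5]   # balanced on both criteria

sum_winner = "A" if weighted_sum(alt_a, weights) > weighted_sum(alt_b, weights) else "B"
product_winner = "A" if weighted_product(alt_a, weights) > weighted_product(alt_b, weights) else "B"
# The sum prefers the lopsided alternative A (0.55 vs 0.50); the product
# punishes the weak criterion and prefers the balanced alternative B.
```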
When I was living in Pittsburgh, I went to the Carnegie Institute.
I saw the fossil skeleton of a Brontosaur (that is what it was called at that time).
I asked the guard, “How old are those dinosaur bones?”
He replied, “They are 70 million, four years and six months.”
“That is an awfully precise number,” I said. “How do you know their age so precisely? Is there a new form of radiocarbon dating?”
The guard answered, “Well, they told me that those dinosaur bones were 70 million years old when I started working here, and that was four and a half years ago.”
This story is an example of false precision.
Often students list their results with six digits after the decimal point, because that is the default on their calculators.
You should not accept the default value.
Deliberately choose the number of digits after the decimal point.
In my last slide I chose two, because that was necessary and sufficient to show the differences between the alternatives.
The number of digits to print can also be determined by the technique of significant figures.
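A common sketch of rounding to a chosen number of significant figures (the log10 trick); this is a generic helper for illustration, not code from the study:

```python
import math

def round_sig(x, sig=3):
    """Round x to 'sig' significant figures."""
    if x == 0:
        return 0.0
    # Position of the leading digit determines how many decimals to keep.
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

r1 = round_sig(123456, 3)     # 123000
r2 = round_sig(0.0012345, 2)  # 0.0012
```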
Please do not try to explain this equation.
It is only here in case someone asks about it.
SSF1 is the first of twelve Standard Scoring Functions.
If you could reduce the probability of loss of life for operators of your system from one in a million to one in ten million,
I’m sure your customer would be happy.
Using logarithms is a way to show this.
That slide is spoken, “You can add dollars and pounds, but you can’t add dollars and pounds.”
Therefore you need scoring functions in order to combine apples and oranges.
An atomic bomb (actually a thermonuclear weapon) costs a billion dollars and lasts a nanosecond.
Wymore (1993) calls the criteria space the buildability cotyledon.
These criteria are for selecting a printer for a computer.
Cost is the inverse of selling price, because I didn’t want to use scoring functions yet.
There will be lots of printers in the lower left area, but they are all inferior.
There will be no printers in the upper right corner, because this is the infeasible region.
The best alternatives will be on the quarter-circle.
We cover these slides real fast. The detail is not important.
By coincidence, (d Sum)/dx = - (d Product)/dx
Alternatives on a circle could be cost and pages per minute for a laser printer.
Alternatives on a straight line could be sharing a pie; pie for you and pie for me.
Alternatives on a hyperbola could be various soda pop packages or human muscle.
This sign was unknowingly based on a cartoon by Dana Fradon published in the New Yorker in 1976.
Clearview is the font now used by the U.S. Federal Highway Administration. This is an approximation of it.
The Sum is simpler if you are going to compute sensitivity functions, because it has fewer interaction terms.
The product combining function is often called the Nash Product after Nobel Laureate John Nash who used this function in 1950.
It is also called the Nash Bargaining Solution.
The following three items are analogous.
Risk is the probability of occurrence times the severity or consequences of the event.
In the sum combining function we use the input value times the weight.
Subjective expected utility is the probability times the utility.
Transmission of light in an optical system is the product of the individual optical element transmissions.
Probability chains are often multiplicative. For example, the probability of a missile kill is the product of probability of target detection, probability of successful launch, probability of successful guidance, probability of warhead detonation, probability of killing a critical area of the target.
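The multiplicative chain can be sketched directly; the stage probabilities below are hypothetical placeholders, not data from any real system:

```python
import math

# Hypothetical stage probabilities for the kill chain described above.
p_stages = {
    "target detection": 0.95,
    "successful launch": 0.98,
    "successful guidance": 0.90,
    "warhead detonation": 0.99,
    "critical-area hit": 0.85,
}

# The overall probability of a kill is the product of the stage probabilities.
p_kill = math.prod(p_stages.values())
# Note: the chain can never be better than its weakest stage.
```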
Minimax is not XOR, because it doesn’t alternate between criteria. It chooses just one criterion.
They change the algorithm every year.
See www.bcsfootball.org
In contrast, NASCAR uses the first 26 races to narrow down the field. After the first 26 races, the top ten drivers plus any other drivers within 400 points of the leader are selected to compete in the last ten races, which determine the champion.
The next dozen slides will discuss this combining function.
Which athlete has the most championship rings? Yogi Berra, with 10?
No, Bill Russell with 11 in the NBA and 2 in the NCAA, all as a player.
John Wooden has 12 as a college basketball coach.
Joe DiMaggio had 9 as a player.
Phil Jackson and Red Auerbach each have 9 NBA rings.
Bob Hayes is the only person with an Olympic gold medal and a Super Bowl ring.
The Pittsburgh Steelers won 4 in the 1970s.
Use minimin to design a bat for Alex Rodriguez, because he always hits the ball right on the sweet spot.
Use minimax for Terry Bahill. The ball won’t go as far for a perfect hit, but it will not be a disaster for a mishit.
This decision to build on the mountain top is not based on expected values.
Assume one violent thunderstorm is expected per decade.
The expected loss for the mountain top is $10K/year,
whereas the expected loss for the river bank is only $9K/year.
This slide uses the numbers from the previous slide.
Probability density functions are often used to help obtain evaluation data.
For instance, for a particular alternative, the average response time may be given by a certain type of a probability density function with a specified mean and variance.
In designing system experiments, we could say the system input shall be determined by a certain type of a probability density function with a specified mean and variance.
I don’t recommend using the product combining function for the whole database. I think it would be appropriate for a criterion of benefit-to-cost ratio.
In this tradeoff study the Cost and Performance criteria were summed together with weights that totaled 1.0.
Weight_cost × Cost Score + Weight_performance × Performance Score = Alternative Rating
Weight_cost + Weight_performance = 1.0
These functions were derived from simulations.
They show that for resource-poor packs the single elimination race is the best, whereas for resource-rich packs the round robins are best.
These functions were derived from prototype races.
They show that for resource-poor packs the double elimination race is the best, whereas for resource-rich packs the round robins are best.
For a tradeoff study with many alternatives, where the rankings change often, a better performance index is just the alternative rating of the winning alternative, F1.
This function gives more weight to the weights of importance.
The most important parameter is S11. Therefore, we should gather more experimental data and interview more domain experts for this parameter: we should spend extra resources on this parameter. The minus signs for S12 and S22 merely mean that an increase in either of these parameters will cause a decrease in the performance index.
Note that, for example, because the sensitivity of F with respect to Wt1 contains S11 and S12, there will be interaction terms.
We have k = 2 criteria (cost and quantity) and i = 8 alternatives.
The 3-liter bottle may not look like it is closest to the Ideal Point because the horizontal and vertical scales are not the same.
This table used the modified Minkowski metrics.
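A sketch of the distance-to-Ideal-Point calculation using a weighted Minkowski metric; the exact "modified" form used in the table is not given here, and the normalized (cost, quantity) scores, weights and alternative names below are hypothetical:

```python
def minkowski_distance(point, ideal, weights, p=2):
    """Weighted Minkowski distance from an alternative to the Ideal Point.
    p=1 gives city-block distance, p=2 Euclidean; larger p increasingly
    emphasizes the criterion with the largest shortfall."""
    return sum(w * abs(x - i) ** p
               for w, x, i in zip(weights, point, ideal)) ** (1 / p)

# Hypothetical normalized scores for (cost, quantity), ideal point (1, 1).
ideal = (1.0, 1.0)
weights = (0.5, 0.5)
alternatives = {"3-liter bottle": (0.9, 0.8), "six-pack": (0.6, 0.9)}

# The preferred alternative is the one closest to the Ideal Point.
closest = min(alternatives,
              key=lambda a: minkowski_distance(alternatives[a], ideal, weights))
```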
You do not have to present all three of these decision tree examples.
The baseball manager must make a decision about his pitcher.
He could use a tradeoff study, as illustrated above, or a decision tree as shown in the next slide.
In Abbott and Costello's famous routine “Who’s on First?”, Who was the first baseman and Tomorrow was the pitcher, but I’m getting too silly now.
These data are for Barry Bonds.
J. P. Reiter, Should teams walk or pitch to Barry Bonds? Baseball Research Journal, 32, (2004), 63-69.
J. F. Jarvis, An analysis of the intentional base-on-ball, Presented at SABR-29, Phoenix, AZ, 1999 ( http://knology.net/~johnfjarvis/IBBanalysis.html )
Maybe we should first ask if we are playing in San Francisco’s AT&T park, where the average wind speed is 10 mph from home plate toward right field.
Getting into the decision maker’s head is a segue to the next slide.
Reference for the Myers-Briggs model: D. Keirsey, Please Understand Me II, Prometheus Nemesis Book Company, 1998.
Faced with a decision between two packages of ground beef, one labeled “95% lean," the other “5% fat," which would you choose?
The meat is exactly the same, but most people would pick "95% lean."
The language used to describe options often influences what people choose, a phenomenon behavioral economists call the framing effect.
Some researchers have suggested that this effect results from unconscious emotional reactions.
This is like the Wheel of Fortune. You spin the wheel and see where the arrow points.
The black areas on the pie charts are the probabilities of winning: 0.09 and 0.94.
The expected values of the two bets are $5.103 and $5.076: this is close enough to be called equal.
Lichtenstein and Slovic (1971) reported that, when given a choice, most people preferred the P bet,
but wanted more money to sell the $ bet (median = $7.04) than the P bet ($4.00).
Attractiveness ratings (e.g., 0=very very unattractive to 80=very very attractive) showed an even stronger preference for the P bet.
This is stronger than the previous slide on phrasing, because the same subjects are changing their minds depending on the phrasing.
Lichtenstein and Slovic (1971). Reversals of preferences between bids and choices in gambling decisions. Journal of Experimental Psychology, 89, 46-55.
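The near-equal expected values quoted above ($5.076 and $5.103) can be reproduced from the win probabilities. The payoffs below are inferred assumptions chosen to match those numbers, since the notes do not state them:

```python
# P bet: high probability of winning a small prize.
# $ bet: low probability of winning a large prize.
# Payoffs are assumptions back-calculated from the stated expected values.
p_bet = {"p_win": 0.94, "payoff": 5.40}
dollar_bet = {"p_win": 0.09, "payoff": 56.70}

ev_p = p_bet["p_win"] * p_bet["payoff"]            # about $5.076
ev_dollar = dollar_bet["p_win"] * dollar_bet["payoff"]  # about $5.103
# The expected values are nearly equal, yet people choose the P bet
# but put a higher selling price on the $ bet: a preference reversal.
```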
You wrote down a lot of criteria, but obviously there were a lot of important ones that you neglected. The stomach test brought them to the surface.
You cannot use this test very often.
And it only works for really important things.
This test comes from Eb Rechtin.
Anywhere I put a use case name, I set it in the Verdana font.
Re: the title meta summary
Aristotle wrote his treatise on Physics. Then after that he wrote his treatise on Philosophy, which he called Metaphysics, or "after Physics."
Philosophy is at a higher level of abstraction than Physics.