Measuring Risk - What Doesn’t Work and What Does


Published on: May 18, 2010

The topics for this webinar include:

  • The Problem – why your method may be a “management placebo,” and why that is the biggest risk you have
  • Problems that many methods ignore – and problems some methods introduce
  • What Does Work – studies reveal that some methods show consistent, measurable improvements in the forecasts and decisions of managers
  • Examples of real improvements
  • Overview of the Applied Information Economics (AIE) process
  • Common objections to quantitative methods and the misconceptions behind them
  • Questions & answers

Published in: Business
  • mscottesq - if you email me at I will honor your request ...for the kids
  • Thanks Jody. While I have a personal interest in the subject, the presentation will be most valuable in helping the urban high school kids I teach in a Supplemental Educational Services program through a non-profit understand why the study of math is relevant. It shows real-world application of what are otherwise dry, academic principles. (Now, if I can only get the email from you to go through. Schools block 'social networking' sites, and availability of LI is spotty.)


  • Thanks Jody and shall watch the video.
  • I agree. If you would like to view the recorded version, it is here

    Doug Hubbard is not only a great writer but also a great speaker
  • As provocative as this presentation is, I like it. Slide 6 says that experts might not give the same answer to the same problem. I fully agree. Humans suffer from the butterfly effect, and small changes in the environment, mood or whatever factor might lead experts to different outcomes. To me, this presentation is a must-read for all

Transcript: Measuring Risk – What Doesn’t Work and What Does

    1. Applied Information Economics
       Measuring Risk – What Works and What Doesn’t
       © 2010 HDR and Aliado Accesso LLC
    2. Background
       • In the past 16 years, I have conducted 60 major risk/return analysis projects
       • I noticed that what were thought of as “impossible” measurements could actually be made
       • I also noticed that risk management and much decision analysis in business was largely unscientific and did not reflect the latest research
       • I wrote two books about this, published by John Wiley & Sons
    3. Challenges
       • How can we measure “intangibles”?
       • How do we know that our method of analyzing big decisions “works” (i.e., has a measurable improvement on our forecasts and decisions)?
       • How can we use proven, quantitative methods when, apparently, we lack sufficient data or the problem is too complex?
    4. Key Lesson: Skepticism
       • In defense of many popular methods for decision analysis, portfolio prioritization and metrics, you may have heard (or said) the following:
         – “Our method is structured and formal”
         – “It helps us build consensus”
         – “It’s easily understood and relatively fast”
         – “It is a proven method” (“proven” meaning somebody else did it this way and said they liked it)
       • If the method uses a weighted score, or labels risks as “high/medium/low,” then you should be suspicious.
    5. Analysis Placebos
       Studies have shown that it is very easy for a decision-making process to increase confidence in forecasts and decisions even if measured outcomes (returns on decisions, forecast accuracy, etc.) are not improved – or are even made worse:
       • Gathering more information makes you feel better but, at some point, begins to reduce decision quality while confidence continues to increase. (Tsai C., Klayman J., Hastie R., “Effects of amount of information on judgment accuracy and confidence,” Organizational Behavior and Human Decision Processes, Vol. 107, No. 2, 2008, pp. 97–105)
       • Interaction with others also increases decision confidence but, again, at some point decisions are not improved while confidence continues to increase. (Heath C., Gonzalez R., “Interaction with Others Increases Decision Confidence but Not Decision Quality: Evidence against Information Collection Views of Interactive Decision Making,” Organizational Behavior and Human Decision Processes, Vol. 61, No. 3, 1995, pp. 305–326)
       • Formal training in detecting lies makes individuals slightly worse at detecting lies in controlled experiments – but their confidence in their judgments increases dramatically. (Kassin S.M., Fong C.T., “‘I’m innocent!’: Effects of training on judgments of truth and deception in the interrogation room,” Law and Human Behavior, Vol. 23, 1999, pp. 499–516)
    6. Errors in Expert Judgment
       Human expertise is an important input, and it is hard to completely automate. But there are certain types of errors in human judgment we know how to measure and control for:
       • Overconfidence – experts’ chance of being right is much lower than they believe.
       • Inconsistency and influence by irrelevant factors – when given the same sets of problems to evaluate, experts have a hard time giving the same answers. Their memory is reconstructed so that they believe they always had one preference when in fact they didn’t. Factors which experts may insist have no bearing on their judgment show correlations with their judgments.
       • Misinterpretation – we tend to interpret cues about risks, measurements and decision problems in a way that is mathematically irrational.
       “Experience is inevitable. Learning is not.” – Paul Schoemaker
    7. Real Reasons Decisions Change
       Our self-image of how tolerant or averse we are toward risk is much more fluid than we think. We imagine our risk appetite to be a more permanent part of our character than it really is:
       • Fear and anger affect the perception of risk and risk tolerance (Lerner, Keltner, 2001).
       • A small study presented at the Cognitive Neuroscience Society meeting in 2009 by a grad student at the University of Michigan showed that simply being briefly exposed to smiling faces makes people more risk tolerant in betting games.
       • An NIH-funded study conducted by Brian Knutson of Stanford showed that emotional stimulation caused subjects to take riskier bets in a betting game.
       • Risk preferences show a strong correlation with testosterone levels – which change daily (Sapienza, Zingales, Maestripieri, 2009).
       • “Emerging preferences” affect our perception of risk and risk aversion, and those emerging preferences are perceived as something the decision maker always had (DeKay).
       • Research on effects like “anchoring” shows how exposure to an unrelated number prior to the decision affects the choice. This implies that even the order in which investments are presented can affect choices (Kahneman, Tversky).
       Controlling for this means 1) being aware of the issue, 2) documenting risk aversion with “risk boundaries,” and 3) taking multiple estimates of risks.
    8. Scale Errors
       Scales are simple, but our response behaviors when we use them are not. Typical scales combine several complex, subtle errors:
       • The use of scales simply obscures (doesn’t alleviate) the lack of information and potential disagreements – this creates an illusion of communication (Budescu).
       • Arbitrary changes to the scale (1 to 5 vs. 1 to 10) have unexpected effects on how people distribute their responses on a scale – which lead to major differences in outcomes (Fox).
       • Popular weighted scores add error to unaided human judgment. Scale error is added even if scales are “well defined,” by introducing an extreme rounding error. It is possible for one risk 10 or 50 times greater than another to end up in the same final group (Cox).
       [Chart: relative impact of sponsor level (C-Level, SVP, VP, Manager) on project failure – actual relative values based on historical project-failure data vs. a 1-to-4 scale; the error was just enough that even the rank order might be wrong]
    9. What Does Work?
       • “Calibrate” experts to realistically assess probabilities.
       • For certain problems, remove inconsistencies in judgment.
       • “Do the math” – don’t rely entirely on intuition.
         – Use the “calibrated” judgments of experts in Monte Carlo simulations.
         – Simple historical models and actual measurements usually outperform human judges.
         – Compute the “Expected Value of Information” to identify important measures.
       • Document basic decision criteria – especially risk vs. return.
       Each of these addresses known errors or has been tested in multiple controlled experiments with measurable results – not just anecdotal case studies with users’ reactions as an indicator of effectiveness.
       Don’t reinvent the wheel – scientifically proven, effective risk analysis methods have been applied to other equally difficult problems with limited historical data and lots of uncertainty. Examples: nuclear power, insurance of rare and complex events, oil exploration.
   10. Calibrated Probabilities
       • 1997: an experiment Hubbard conducted with Giga Information Group showed that people can be trained to realistically assess the probabilities of uncertain forecasts
       • Hubbard has calibrated hundreds of people since then
       • Calibrated probabilities are the basis for modeling the current state of uncertainty
       [Chart: assessed chance of being correct (50%–100%) vs. percent actually correct for Giga analysts and Giga clients, with statistical error ranges, the “ideal” confidence line, and the number of responses at each confidence level]
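The calibration check behind the chart on this slide can be sketched in a few lines: group each (stated confidence, correct?) response by confidence level and compare the stated level to the fraction actually correct. The response data below is made up for illustration; a well-calibrated expert’s 90% answers would be right about 90% of the time.

```python
from collections import defaultdict

def calibration_table(responses):
    """Group (stated confidence, correct?) pairs by confidence level and
    return the actual fraction correct at each level (illustrative sketch)."""
    buckets = defaultdict(lambda: [0, 0])   # confidence -> [hits, total]
    for conf, correct in responses:
        buckets[conf][0] += int(correct)
        buckets[conf][1] += 1
    return {conf: hits / total for conf, (hits, total) in sorted(buckets.items())}

# Made-up responses: 7/10 right at stated 90%, 3/5 right at stated 60%
data = ([(0.9, True)] * 7 + [(0.9, False)] * 3 +
        [(0.6, True)] * 3 + [(0.6, False)] * 2)
print(calibration_table(data))   # {0.6: 0.6, 0.9: 0.7} -- overconfident at 90%
```

Here the 60% answers are calibrated, but the 90% answers land at only 70% correct, the overconfidence pattern the slide’s chart shows for uncalibrated judges.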
   11. “Smoothing” Inconsistencies
       • No matter how much experience experts have, they appear to be unable to apply what they have learned consistently
       • Methods that statistically “smooth” their estimates show reduced error in several studies, for many different kinds of problems
       [Chart: reduction in forecasting error (0%–30%) compared to expert judgment, from the author’s studies (R&D portfolio priorities, battlefield fuel forecasts, IT portfolio priorities, movie box office forecasts – first and second estimates) and other published studies (cancer patient recovery, changes in stock prices, mental illness prognosis, psychology course grades, business failures)]
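The “smoothing” methods on this slide fit statistical models to the experts’ own judgments, which is beyond a short sketch. The toy simulation below only illustrates the underlying statistical point: combining independent noisy estimates of the same quantity reduces error (by roughly 1/√2 when averaging two), which is one reason smoothing inconsistent judgments helps.

```python
import random

def estimate_error(n_trials=50_000, noise=10.0, seed=1):
    """Illustration only (not the slide's actual method): compare the RMS
    error of a single noisy estimate vs. the average of two independent
    noisy estimates of the same true value."""
    rng = random.Random(seed)
    true_value = 100.0
    single_sq = avg_sq = 0.0
    for _ in range(n_trials):
        e1 = true_value + rng.gauss(0, noise)   # first noisy estimate
        e2 = true_value + rng.gauss(0, noise)   # second, independent estimate
        single_sq += (e1 - true_value) ** 2
        avg_sq += ((e1 + e2) / 2 - true_value) ** 2
    return (single_sq / n_trials) ** 0.5, (avg_sq / n_trials) ** 0.5

single_rmse, averaged_rmse = estimate_error()
print(single_rmse, averaged_rmse)   # averaged error is ~71% of single error
```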
   12. Quantitative Modeling: It Works
       • In the United Kingdom between 1844 and 1853, 149 insurance companies were formed. By the end of this period, just 59 survived. Those that failed tended to be those that did not use mathematically valid premium calculations (Buhlmann, 1997).
       • Over 150 studies have shown areas of judgment where historical models outperform expert judgment – even though the humans insist each item is unique and requires the “human touch” (Meehl, 1954; Dawes, 1996).
       • One researcher in the oil industry found a correlation between the use of quantitative risk analysis methods and financial performance – and the improvement in performance started when firms started using the quantitative methods (F. Macmillan, 2000).
       • Data at NASA from over 100 space missions showed that Monte Carlo simulations and historical models beat other methods for estimating cost, schedule and mission risks (published in The Failure of Risk Management and OR/MS Today).
       [Diagram: example simulation model – events and demand feeding “% orders lost” and lost revenue]
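A minimal Monte Carlo sketch in the spirit of the simple “lost revenue” model pictured on this slide: draw uncertain inputs from distributions, combine them, and read risk off the resulting distribution of outcomes. The distributions and parameters below are assumptions for illustration, not data from the presentation.

```python
import random

def simulate_lost_revenue(n_trials=100_000, seed=42):
    """Monte Carlo sketch: lost revenue = demand * fraction of orders lost.
    Both input distributions are illustrative assumptions."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        demand = rng.gauss(1_000_000, 150_000)   # annual demand in $ (assumed)
        pct_lost = rng.uniform(0.01, 0.05)       # fraction of orders lost (assumed)
        losses.append(demand * pct_lost)
    losses.sort()
    mean = sum(losses) / n_trials
    p90 = losses[int(0.90 * n_trials)]           # 90th-percentile loss
    return mean, p90

mean_loss, p90_loss = simulate_lost_revenue()
print(f"expected loss ~${mean_loss:,.0f}, 90th percentile ~${p90_loss:,.0f}")
```

Instead of a single-point estimate, the decision maker gets a whole distribution, so questions like “what is the chance we lose more than $X?” become answerable.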
   13. Red Herrings of Modeling
       • We need to be careful of red-herring arguments against models:
         – “We cannot model that… it is too complex.”
         – “Models will have error and therefore we should not attempt it.”
         – “We don’t have sufficient data to use for a model.”
         – “The model failed to predict X, therefore modeling has no value.”
       • Build on George E. P. Box: “Essentially, all models are wrong, but some are useful.”
         – Some models are more useful than others.
         – Everyone uses a model – even if it is intuition or “common sense.”
         – So the question is not whether a model is “right” or whether to use a model at all.
         – The question is whether one model measurably outperforms another.
         – A proposed model (quantitative or otherwise) should be preferred if the error reduction compared to the current model (expert judgment, perhaps) is enough to justify the cost of the new model.
   14. Applied Information Economics
       • AIE is a practical application of quantitative methods to decision analysis problems
       • Goal: optimizing uncertainty reduction – balancing measurably improved decisions against analysis effort
       • It answers two questions:
         – Given the current uncertainty, what is the best decision?
         – What additional analysis or measurements are justified?
       • Every component of the method is based on empirical research that shows it improves decisions
   15. Making the Best Bet
       [Flowchart of the AIE process:]
       1. Define the decision and identify relevant variables; set up the “business case” for the decision using these variables
       2. Model the current state of uncertainty – calibration training, then initially calibrated estimates and later actual measurements
       3. Compute the value of additional information – determine what to measure and how much effort to spend on measuring it
       4. Is there significant value to more information? If yes, measure where the information value is high – reduce uncertainty using any of the methods – and return to step 3; if no, continue
       5. Optimize the decision – use the quantified risk/return boundary of the decision makers to determine which decision is preferred
   16. A Few Examples
       AIE was applied initially to IT business cases. But over the last 16 years it has also been applied to other decision analysis problems in all areas of business cases, performance metrics, risk analysis, and portfolio prioritization:
       • IT: prioritizing IT portfolios; risk of software development; the value of better information; the value of better security; the risk of obsolescence and optimal technology upgrades; vendor selection; the value of infrastructure; performance metrics for the business value of applications
       • Engineering: the risks of major engineering projects; mining operations
       • Business: market forecasts; the risk/return of expanding operations; business valuations for venture capital and mergers and acquisitions; movie project selection
       • Environment: the value of safer drinking water; the value of “scrubbers” on smoke stacks; the value of better pesticide control for saving endangered species
       • Military: forecasting fuel for Marines in the battlefield; measuring the effectiveness of combat training to reduce roadside bomb/IED casualties; R&D portfolios
   17. Uncertainty, Risk & Measurement
       Measuring uncertainty, risk and the value of information are closely related concepts, important measurements themselves, and precursors to most other measurements:
       • The “measurement theory” definition of measurement: “A measurement is an observation that results in information (reduction of uncertainty) about a quantity.”
       • An actuary’s approach to risk measurement: “To quantify the probability and loss of an undesirable possibility.”
       • The value of a measurement: “The monetized reduction in risk from making decisions under less uncertainty.”
       • We model uncertainty statistically – with Monte Carlo simulations.
   18. The Impact of Computing Information Value
       • The value of information is computable: AIE uses a relatively simple (and 60-year-old) set of algorithms from decision theory to compute the value of information.
       • The priority of measurements is reversed: this calculation reveals that most organizations consistently focus on low-value measurements and ignore high-value measurements – the “measurement inversion.”
       • Only a few measurements are really needed: we also found that, if anything, fewer measurements were required after the information values were known.
       • Some additional empirical measurements are almost always needed: 97% of the models I built justified further measurement according to the information values.
       [Chart: traditional measurement priorities vs. the actual value of information]
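For a discrete decision, the decision-theory calculation this slide refers to is short: the Expected Value of Perfect Information is the expected payoff if you could learn the true state before choosing, minus the best expected payoff of choosing now. The two-action, two-state example below is hypothetical, chosen only to make the arithmetic easy to follow.

```python
def evpi(probabilities, payoff_matrix):
    """Expected Value of Perfect Information for a discrete decision.
    payoff_matrix[action][state] = payoff of taking that action in that state."""
    # Best expected payoff if we must act now, under current uncertainty
    best_now = max(
        sum(p * payoff for p, payoff in zip(probabilities, row))
        for row in payoff_matrix
    )
    # Expected payoff if we could learn the true state before choosing
    n_states = len(probabilities)
    with_perfect_info = sum(
        probabilities[s] * max(row[s] for row in payoff_matrix)
        for s in range(n_states)
    )
    return with_perfect_info - best_now

# Hypothetical: invest vs. don't; market up (60%) or down (40%)
p = [0.6, 0.4]
payoffs = [[200, -100],   # invest
           [0, 0]]        # don't invest
print(evpi(p, payoffs))   # 40.0: never pay more than 40 for this information
```

Acting now, “invest” earns 0.6·200 + 0.4·(−100) = 80; with perfect foreknowledge you would earn 0.6·200 + 0.4·0 = 120, so the information is worth at most 40.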
   19. Five Useful Assumptions
       • It’s been measured before
       • You have more data than you think
       • You need less data than you think
       • New data is more economical than you think
       • All measurements have error, but your subjective estimates of that error have even more error
       “It’s amazing what you can see when you look.” – Yogi Berra
   20. Measuring the “Impossible”
       Several clever sampling methods exist that can measure more with less data than you might think. Examples:
       • Estimating the number of tanks produced by the Germans in WWII
       • Clinical trials with extremely small samples
       • Measuring undetected computer viruses or hacking attempts
       • Estimating the population of fish in the ocean
       • Measuring unreported crimes or the size of the black market
       • Using “near misses” to measure catastrophic but rare events
       [Chart: WWII German tank production estimates]
   21. Quantifying Risk Aversion
       • The simplest element of Harry Markowitz’s Nobel Prize-winning method, Modern Portfolio Theory, is documenting how much risk an investor accepts for a given return.
       • The “investment boundary” states how much risk an investor is willing to accept for a given return.
       • For our purposes, we modified Markowitz’s approach a bit.
       [Chart: the acceptable risk/return boundary separating the investment region, with an example investment plotted]
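One way to operationalize a documented investment boundary is as a piecewise-linear curve mapping a risk level to the minimum acceptable return at that risk. The boundary points and the choice of risk metric below are invented for illustration; this is not Markowitz’s or the presenter’s exact formulation.

```python
def required_return(risk, boundary):
    """Minimum acceptable return at `risk`, linearly interpolating a
    risk/return boundary given as (risk, min_return) points (illustrative)."""
    pts = sorted(boundary)
    if risk <= pts[0][0]:
        return pts[0][1]
    for (r0, v0), (r1, v1) in zip(pts, pts[1:]):
        if risk <= r1:
            t = (risk - r0) / (r1 - r0)      # position between the two points
            return v0 + t * (v1 - v0)
    return pts[-1][1]                         # beyond the last documented point

# Hypothetical boundary: at 10% chance of loss demand 5% ROI; at 40%, 25%
boundary = [(0.10, 0.05), (0.20, 0.10), (0.40, 0.25)]
print(required_return(0.15, boundary))        # halfway between 5% and 10%: 0.075
```

An investment is then “inside the investment region” whenever its expected return meets or exceeds `required_return` at its assessed risk.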
   22. Cost vs. Value of AIE
       • The cost of analysis routinely comes in below 1%, and has always been under 2%, of the investment size – including initial training
       • This is still less than some industries spend on risk analysis of investments of similar size and risk
       • It is also sometimes less time-consuming than the previous non-quantitative analysis techniques used by the firm (one reason this analysis is efficient is that we conduct a value-of-information analysis – we only measure what is economically justified)
       • Using the standard VIA calculation for the value of the AIE analysis itself, AIE was the best investment of all the investments we analyzed – very conservative measures of payoffs show a $20 return for every $1 spent on AIE
   23. Final Tips
       • Learn how to think about uncertainty, risk and information value in a quantitative way
       • Assume it’s been measured before
       • You have more data than you think, and you need less data than you think
       • Methods that reduce your uncertainty are more economical than many managers assume
       • Don’t let “exception anxiety” cause you to avoid making any observations at all
       • Just do it
   24. Questions?
       Jody Keyser
       [email_address]
       1-888-373-0680
   25. Supplementary Material
   26. “Proper” Incentives
       • Incentives can also help build a culture of “high-performance forecasting”
       • A method called the “Brier score” can be used to compute rational incentives that optimize the calibration of probabilities (Murphy, Winkler)
       • Since the Brier score cannot be gamed in any way other than simply giving the best estimate of each probability, it is called a “proper” score
       • Brier score = Σᵢ (Outcome(Xᵢ) − P(Xᵢ))²
       • Other viable methods include “prediction markets”
       [Chart: results of the Brier score applied to weather forecasts]
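The Brier score on this slide is straightforward to compute. The sketch below uses the mean of the squared differences (the summed form on the slide divided by the number of forecasts); the forecasts themselves are made up. It shows why the score is “proper”: a constant 50% hedge earns 0.25, while confident, accurate probabilities score near 0.

```python
def brier_score(forecasts):
    """Mean Brier score over (assessed probability, outcome 0/1) pairs.
    Lower is better: 0 is perfect; a constant 50% forecast earns 0.25."""
    return sum((outcome - p) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability assigned, actual outcome 1/0) -- made-up forecasts
confident_and_right = [(0.9, 1), (0.8, 1), (0.1, 0)]
hedging = [(0.5, 1), (0.5, 1), (0.5, 0)]
print(brier_score(confident_and_right))   # (0.01 + 0.04 + 0.01) / 3 = 0.02
print(brier_score(hedging))               # 0.25
```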
   27. Prediction Markets
       • Simulated trading markets are a proven method of generating probabilities for uncertain events
       • Research shows that they work even without purely monetary reward systems
       (Source: Servan-Schreiber et al., Electronic Markets, Vol. 14, No. 3, September 2004)
   28. Increasing Value & Cost of Information
       • EVPI – Expected Value of Perfect Information
       • ECI – Expected Cost of Information
       • EVI – Expected Value of Information
       [Chart: dollar value/cost (from $0 to $$$) against certainty (low to high), showing EVPI as the “perfect information” ceiling, the EVI and ECI curves, and the range to aim for]
   29. The Value of the “First Few”
       Uncertainty reduces much faster over the first few observations than you might think.
       • Myth: when uncertainty is high, lots of data is needed to reduce it. Fact: just the opposite is true.
       • With a few samples there is still high uncertainty, but each new sample reduces uncertainty a lot – the first few samples reduce uncertainty the most when initial uncertainty is high.
       • As the number of samples increases, the 90% CI gets much narrower, but each new sample reduces uncertainty only slightly – beyond about 30 samples, you need to quadruple the sample size to cut the error by half.
       [Chart: 90% CI for the mean of the sample vs. the actual value (−100% to 100%) as the number of samples grows from 0 to 40]
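The “quadruple the sample to halve the error” rule follows from the 1/√n behavior of a confidence interval’s half-width. A minimal sketch, using the standard z-based interval for a mean with known standard deviation (the 10-unit standard deviation is an arbitrary example value):

```python
import math

def ci_halfwidth(sigma, n, z=1.645):
    """Half-width of a ~90% CI for a mean: z * sigma / sqrt(n)."""
    return z * sigma / math.sqrt(n)

w30 = ci_halfwidth(10, 30)
w120 = ci_halfwidth(10, 120)   # quadruple the sample size...
print(w120 / w30)              # ...and the half-width halves: 0.5
```

The same scaling explains the other direction too: going from 1 sample to 4 also halves the error, which is why the first few observations buy the most uncertainty reduction.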
   30. Bayesian Sampling
       • Bayesian inversion allows for extremely small sample sizes when we use some prior knowledge about what is likely.
       • This can be used any time the cost of a sample is extremely high – e.g., rocket launches, cancer patients, a complete inspection of a ship, plane or building, etc.
       [Chart: failure rate of a rocket – baseline distribution updated after 0 launches/0 failures, 1 launch/0 failures, 3 launches/0 failures, and 5 launches/1 failure]
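A common way to do this kind of small-sample Bayesian update (not necessarily the exact model behind the slide’s chart) is a beta-binomial: start with a Beta prior on the failure rate and add observed failures and successes to its parameters. The Beta(1, 9) baseline below is a hypothetical 10% prior failure rate.

```python
def posterior_failure_rate(prior_a, prior_b, launches, failures):
    """Beta-binomial update: with a Beta(a, b) prior on the failure rate,
    observing `failures` in `launches` gives a Beta(a + failures,
    b + successes) posterior; return its mean (illustrative sketch)."""
    a = prior_a + failures
    b = prior_b + (launches - failures)
    return a / (a + b)

# Hypothetical baseline: Beta(1, 9) prior, i.e. ~10% expected failure rate
print(posterior_failure_rate(1, 9, 0, 0))   # 0.1: no data yet, prior mean
print(posterior_failure_rate(1, 9, 3, 0))   # ~0.077 after 3 clean launches
print(posterior_failure_rate(1, 9, 5, 1))   # ~0.133 after 1 failure in 5
```

Even one or three observations visibly shift the estimate, which is the point of the slide: when samples cost millions, the prior lets each one carry real weight.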
   31. Issues with a “Risk Map”
       • Does your “risk map” look more like the top or the bottom chart? If more like the top, how do the errors mentioned earlier compare to the variance among the clustered responses?
       • Clustering means that all the errors mentioned before make up a large part of the difference between the scores of individual risks.
       • How does this address the measured response behaviors of overconfidence, partition dependence, framing, anchoring, etc.?
       • How does this address correlations, common-mode failures, and cascade failures? These factors can make a few “low risk” items add up to one very big risk.
       • Risk maps like this may be OK for initial brainstorming, but don’t make critical decisions based on them.
       [Charts: two likelihood-vs-impact risk maps]
   32. The Illusion of Cognition
       The “illusion of cognition” is a phrase in the decision psychology literature referring to the misconception that our choices are based on rational thinking. Risk assessment methods employ structures that can introduce these problems:
       • Anchoring: simply being exposed to arbitrary, even irrelevant numbers affects subsequent subjective estimates (Kahneman).
       • Influential inferiors: give one group two choices, A and B; give another group a third choice, C – clearly just an inferior version of B. The percentage of people who choose B increases in the group that saw C (Ariely).
       • Defaults: implicit or explicit “defaults” affect choices dramatically (Ariely).
       • Framing: presenting logically identical choices as loss avoidance vs. opportunity seeking changes preferences (Kahneman).
   33. First, Do No Harm
       • “Gut feel” is the baseline. Anything that “works” has to show an improvement on this. Measured sources of error: inconsistency, overconfidence, various biases, inaccurate estimates.
       • The worst case is not “gut feel” – some methods add more error.
       • The best case isn’t perfection – just measurably reduced error compared to gut feel.

       Method                             Gut Feel          Weighted Score                            Traditional Financial                        Quantitative Models
       Measured improvement to judgment?  Baseline          No: removes no errors, adds new errors    Maybe: decomposition helps; false precision  Yes: proven w/ controlled tests
       Does it quantify risk?             Only intuitively  No: attempts to describe risk             No: but may attempt to adjust for it         Yes
       Determines high-payoff measures?   No                No: turns some good measures into scores  No                                           Yes (w/ AIE)
       Net benefit?                       Baseline          No: probably worse                        Maybe                                        Best