Risk management
Risk research for defense procurement

Risk management Presentation Transcript

  • Risk Management for Forest Rangers 1
  • Jeran Binning 2
  • "AGAINST THE GODS" • "The story that I have to tell is marked all the way through by a persistent tension between those who assert that the best decisions are based on quantification and numbers, determined by the patterns of the past, and those who base their decisions on more subjective degrees of belief about the uncertain future. This is a controversy that has never been resolved." • — FROM THE INTRODUCTION TO "AGAINST THE GODS: THE REMARKABLE STORY OF RISK," BY PETER L. BERNSTEIN 3
  • Risk • Wei-ji: Chinese for "opportunity through danger." • "As long as we wish for safety, we will have difficulty pursuing what matters." - Peter Block • "Risk has a double-edged nature. Risk can cut, risk can heal." - James Neill 4
  • Risk 5
  • John Maynard Keynes • "There is no harm in being sometimes wrong - especially if one is promptly found out." 6
  • Predictive Tools: Value at Risk (VaR) 7
  • Value at Risk (VaR) • David Einhorn, who founded Greenlight Capital, a prominent hedge fund, wrote not long ago that VaR was "relatively useless as a risk-management tool and potentially catastrophic when its use creates a false sense of security among senior managers and watchdogs. This is like an air bag that works all the time, except when you have a car accident." • NY Times Magazine, p. 27, 4 January 2009 8
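Einhorn's air-bag analogy is easier to see with numbers. Below is a minimal sketch of the simplest variant, historical-simulation VaR; the function name and the return series are invented for illustration and are not taken from any particular risk system.

```python
# Historical-simulation Value at Risk (VaR): an illustrative sketch.
# `returns` is a hypothetical series of past daily portfolio returns.

def historical_var(returns, confidence=0.95):
    """Loss level not expected to be exceeded at the given confidence,
    read straight off the empirical distribution of past returns."""
    ordered = sorted(returns)                      # worst day first
    index = int((1 - confidence) * len(ordered))   # tail cutoff
    return -ordered[index]                         # report the loss as a positive number

returns = [0.01, -0.02, 0.005, -0.035, 0.012, -0.01, 0.02, -0.005, 0.0, -0.015]
var_95 = historical_var(returns, 0.95)   # here: 0.035, i.e., a 3.5% one-day loss
```

The number is silent about everything beyond the cutoff: a 3.5% VaR is consistent with a worst case of 4% or of 40%, which is precisely the false sense of security Einhorn warns about.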
  • Wall Street Journal • "Any of these metrics that work in a typical oscillating market…are not working right now," Mr. Rueckert said. • Among the other indicators that aren't working: 10-day, 50-day and 200-day moving averages, the put/call ratio and the idea of "capitulation." • "Capitulation" is the concept that stocks require a purgative, high-volume plunge to mark the bottom of the bear market. Guess what: the stock market has seen little other than purgative and high-volume plunges since the failure of Lehman Brothers hit the tape on Sept. 15, and there's no sign of a bottom yet. • In an attempt to debunk "the capitulation myth," Mr. Rueckert, of Birinyi Associates, found an item in the New York Times three days after the bottom of the 1982 bear market that promised the end would take the form of a "crushing…swift plunge." According to his analysis, bear markets usually end with a whimper rather than a bang. • To date, the best predictor of a market turn was probably an email that circulated among Wall Street traders on Oct. 27, the day of the interim bottom. The analysis was based on lunar cycles, a cornerstone of astrology. • Wall Street Journal, January 5th, 2009 9
  • Probabilities/Risk • "The major mistake that people make is that they are not very good at dealing with a lot of uncertainty. So, rather than a rational assessment of data and probabilities, they like stories and they make decisions based more on mental images rather than a sober assessment of their portfolio and how a particular stock fits into it." • James Scott is Managing Director of Global Public Markets for General Motors Asset Management and a member of its Management and Investment Committees. Before joining GMAM, he was President of Quantitative Management Associates, a subsidiary of Prudential Financial. Prior to that, Mr. Scott was a Professor at Columbia Business School. Mr. Scott holds a B.A. from Rice University and a Ph.D. in economics from Carnegie Mellon University. He serves as an Associate Editor of the Financial Analysts Journal and the Journal of Investment Management, as a Director of the Institute for Quantitative Research in Finance and as Research Director of the Heilbrunn Center for Graham and Dodd Investing at Columbia Business School. 10
  • John Maynard Keynes: Risk vs. Uncertainty • By "uncertain" knowledge … I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty…. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention…. About these matters, there is no scientific basis on which to form any calculable probability whatever. We simply do not know! 11
  • Predictive Tools • "Prediction is very difficult, especially about the future." - Niels Bohr, Physicist (1885-1962) • "It is tough to make predictions, especially about the future." - Yogi Berra, Baseball Savant 12
  • Predicting the Future • Presidents, Movies, and Influenza • Such markets have been created to predict the next president, Hollywood blockbusters, and flu outbreaks. The newest prediction market, launched in February 2008, focuses on predicting future events in the tech industry, such as whether Yahoo! will accept Microsoft's acquisition. But Ho and his co-author, Kay-Yut Chen, a principal scientist at Hewlett-Packard Laboratories, believe that prediction markets also are well-suited to forecasting demand for new product innovations, particularly in the high-tech arena. H-P tested prediction markets to forecast sales of several existing and new products and found that six of eight prediction markets were more accurate than official forecasts. "Prediction markets work because you get a lot of people and ask them to put their money where their mouth is," Chen says. Based on their analysis of several existing prediction markets, Ho and Chen provide a step-by-step guide for firms on how to create a prediction market. They suggest recruiting at least 50 participants and providing a strong monetary incentive to promote active trading. Ho and Chen recommend average compensation of at least $500 for each participant. The firm then creates ten different forecasts – either according to sales or units sold – and gives each participant a set number of shares and cash to trade, buy, and sell, according to their beliefs about which forecast is most accurate. After a product is launched and sales are observed, participants who own shares in the prediction that matches actual sales receive $1 a share. 13
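The settlement rule Ho and Chen describe (forecast buckets, $1 per share in the bucket that matches actual sales) can be sketched as follows; the bucket boundaries, traders, and holdings are made-up example data.

```python
# Settling an H-P-style sales prediction market: shares in the forecast
# bucket that contains actual sales pay $1 each; all other shares pay $0.
# Buckets, traders, and holdings below are invented for illustration.

def settle(holdings, actual_sales, buckets):
    """Return each trader's payout given actual sales."""
    winning = next(i for i, (low, high) in enumerate(buckets)
                   if low <= actual_sales < high)      # bucket containing the outcome
    return {trader: shares.get(winning, 0) * 1.0       # $1 per winning share
            for trader, shares in holdings.items()}

buckets = [(0, 100), (100, 200), (200, 300)]           # unit-sales ranges (3 shown, not 10)
holdings = {"alice": {0: 10, 1: 5}, "bob": {1: 20}}    # shares held per forecast bucket
payouts = settle(holdings, actual_sales=150, buckets=buckets)   # alice: 5.0, bob: 20.0
```

Because payouts depend only on the realized outcome, traders are rewarded for moving prices toward their true beliefs, which is the "put their money where their mouth is" mechanism Chen describes.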
  • Cal Berkeley • One novel way to improve such forecasts is a prediction market, says Teck-Hua Ho, the Haas School's William Halford Jr. Family Professor of Marketing. Ho recently coauthored an article titled "New Product Blockbusters: The Magic and Science of Prediction Markets" in the 50th anniversary issue of the Haas School business journal, California Management Review. A prediction market is an exchange in which participants vote on a possible outcome by buying and selling shares that correspond to a particular forecast, similar to trading in the stock market. Shares in a forecast that participants believe is most likely trade for a higher price than shares in a less likely scenario. "The key idea behind a prediction market is pooling the knowledge of many people within a company," Ho says. "It's a very powerful tool for firms with many different pockets of expertise or a widely dispersed or isolated workforce." 14
  • Phony Forecasting (or Nerds and Herds) • Extremistan might not be so bad if you could predict when outliers would occur and what their magnitude might be. But no one can do this precisely. • Consider hit movies: Screenwriter William Goldman is famous for describing the "secret" of Hollywood hits: "Nobody can predict one." • Similarly, no one knew whether a book by a mother on welfare about a boy magician with an odd birthmark would flop or make the author a billionaire. • Stock prices are the same way. Anyone who claims to be able to predict the price of a stock or commodity years in the future is a charlatan. • Yet the magazines are filled with the latest "insider" advice about what the market will do. Ditto for technology. • Do you know what the "next big thing" will be? No. No one does. Prognosticators generally miss the big important events – the black swans that impel history. 15
  • Astrology • Universum - C. Flammarion, woodcut, Paris 1888; coloring: Heikenwaelder Hugo, Wien 1998 16
  • Physics Envy• Social scientists have suffered from physics envy, since physics has been very successful at creating mathematical models with huge predictive value.• In financial economics, particularly in a field called risk management, the predictive value of the models is no different from astrology. Indeed it resembles astrology (without the elegance).• They give you an ex-post ad-hoc explanation.• Nassim Taleb 17
  • POLICY ANALYSIS MARKET • Strategic Insight: DARPA's Policy Analysis Market for Intelligence: Outside the Box or Off the Wall? by Robert Looney, Sept 2003 • Although the Policy Analysis Market appears to be a dead issue, it did break new ground in the country's search for better intelligence. The PAM idea embodied a solid body of theory and proven empirical capability. While one can quibble about how closely PAM markets would approximate the efficient market hypothesis, there is no doubt trading on many future events would come close enough to provide valuable intelligence. Thus, while it was a public relations disaster, some version of the program will likely be introduced on a restricted basis, perhaps along the lines suggested above, in an attempt to better tap the country's dispersed knowledge base, human insight, and analytical expertise. This solution is far from perfect, not allowing realization of the full potential of the program. • Lou Dobbs (2003) has perhaps best summed up this unfortunate episode: "We will never know if the Policy Analysis Market would have been successful. But if there were even a small chance that it could have been a useful tool, there should be, at a minimum, further discussion of the idea. This is, after all, not a matter of just partisan politics but one of national security. And forcing the resignations of those involved with the planning is a strong deterrent to progressive thinking, of which we have no surplus." 18
  • POLICY ANALYSIS MARKET• Poindexter also faced immense criticism from the media and politicians about the Policy Analysis Market project, a prediction market that would have rewarded participants for accurately predicting geopolitical trends in the Middle East. This was portrayed in the media as profiting from the assassination of heads of state and acts of terrorism due to such events being mentioned on illustrative sample screens showing the interface.• The controversy over the futures market led to a Congressional audit of the Information Awareness Office in general, which revealed a fundamental lack of privacy protection for American citizens.• Funding for the IAO was subsequently cut and Poindexter retired from DARPA on August 12, 2003. Wikipedia 19
  • Analysis of DoD Major Defense Acquisition Program Portfolios (FY 2008 dollars)
    Source: GAO analysis of DoD data.

                                                      FY 2000   FY 2005   FY 2007
    Number of programs                                     95        75        91
    Total planned commitments                          $790 B    $1.5 T    $1.6 T
    Commitments outstanding                            $380 B    $887 B    $858 B
    Portfolio performance:
      Change in RDT&E costs from first estimate           27%       33%       40%
      Change in acquisition cost from first estimate       6%       18%       26%
      Estimated total acquisition cost growth           $42 B    $202 B    $295 B
      Programs with >= 25% increase in Program
        Acquisition Unit Cost                             37%       44%       44%
      Average schedule delay delivering
        initial capability                             16 mos    17 mos    21 mos   20
  • DoD Risk Definition • "A measure of future uncertainties in achieving program goals and objectives within defined cost, schedule and performance constraints." • Each risk event has three components: − A future root cause; − The probability of the future root cause occurring; and − The consequence / impact if the root cause occurs.
  • Risk Identification • After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems. Hence, risk identification can start with the source of problems, or with the problem itself. • Source analysis: Risk sources may be internal or external to the system that is the target of risk management. Examples of risk sources are: stakeholders of a project, employees of a company or the weather over an airport. • Problem analysis: Risks are related to identified threats. For example: the threat of losing money, the threat of abuse of privacy information or the threat of accidents and casualties. The threats may exist with various entities, most importantly with shareholders, customers and legislative bodies such as the government. • When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; privacy information may be stolen by employees even within a closed network; lightning striking a Boeing 747 during takeoff may make all people onboard immediate casualties. • The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are: – Objectives-based risk identification: Organizations and project teams have objectives. Any event that may endanger achieving an objective partly or completely is identified as risk. – Scenario-based risk identification: In scenario analysis different scenarios are created. The scenarios may be the alternative ways to achieve an objective, or an analysis of the interaction of forces in, for example, a market or battle. 
Any event that triggers an undesired scenario alternative is identified as risk - see Futures Studies for the methodology used by Futurists. – Taxonomy-based risk identification: The taxonomy in taxonomy-based risk identification is a breakdown of possible risk sources. Based on the taxonomy and knowledge of best practices, a questionnaire is compiled. The answers to the questions reveal risks. Taxonomy-based risk identification in the software industry can be found in CMU/SEI-93-TR-6. • Common-risk checking: In several industries, lists of known risks are available. Each risk in the list can be checked for application to a particular situation. An example of known risks in the software industry is the Common Vulnerabilities and Exposures list found at http://cve.mitre.org • Risk charting: This method combines the above approaches by listing Resources at risk, Threats to those resources, Modifying Factors which may increase or reduce the risk, and Consequences one wishes to avoid. Creating a matrix under these headings enables a variety of approaches. One can begin with resources and consider the threats they are exposed to and the consequences of each. Alternatively one can start with the threats and examine which resources they would affect, or one can begin with the consequences and determine which combination of threats and resources would be involved to bring them about. 22
  • Assessment • Once risks have been identified, they must then be assessed as to their potential severity of loss and to the probability of occurrence. These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure, in the case of the probability of an unlikely event occurring. Therefore, in the assessment process it is critical to make the best educated guesses possible in order to properly prioritize the implementation of the risk management plan. • The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents. • Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for immaterial assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. • Nevertheless, risk assessment should produce such information for the management of the organization that the primary risks are easy to understand and that the risk management decisions may be prioritized. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is: rate of occurrence multiplied by the impact of the event equals risk (frequency × impact = risk). 23
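The frequency × impact rule above is simple to operationalize as a ranking of identified risks; the register entries below are invented solely to show the arithmetic.

```python
# Ranking risks by the rule above: risk = frequency x impact.
# Names, frequencies (per year), and impacts (dollars) are hypothetical.

risks = [
    {"name": "key supplier fails",  "frequency": 0.10, "impact": 500_000},
    {"name": "schedule slip",       "frequency": 0.60, "impact": 100_000},
    {"name": "requirements change", "frequency": 0.80, "impact": 50_000},
]

for r in risks:
    r["score"] = r["frequency"] * r["impact"]   # expected loss per year

# Highest expected loss first: this ordering drives treatment priority.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
```

Note that such point scores inherit all the estimation problems described above; in the Keynes sense, some of the "frequencies" plugged in are really uncertainties, not measurable probabilities.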
  • Assessment • Later research has shown that the financial benefits of risk management are less dependent on the formula used than on the frequency and manner in which risk assessment is performed. • In business it is imperative to be able to present the findings of risk assessments in financial terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. • The Courtney formula was accepted as the official risk analysis method for US governmental agencies. The formula proposes calculation of ALE (annualized loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis). 24
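The Courtney-style ALE comparison can be sketched as below. The dollar figures and rates are hypothetical, and the formula shown (single-loss expectancy times annual rate of occurrence) is the standard modern ALE form rather than Courtney's original notation.

```python
# Annualized loss expectancy (ALE) and a cost-benefit check on a control.
# All dollar values and occurrence rates are invented for illustration.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Expected loss per year = loss per incident x incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

ale_before = ale(200_000, 0.5)    # no control: one $200k incident every 2 years
ale_after  = ale(200_000, 0.25)   # with control: one incident every 4 years
control_cost = 30_000             # annual cost of implementing the control

net_benefit = (ale_before - ale_after) - control_cost   # 20,000 > 0
worth_it = net_benefit > 0        # cost-benefit says: implement the control
```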
  • Potential risk treatments• Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:• Avoidance (elimination) AVOID• Reduction (mitigation) / CONTROL• Retention (acceptance and budgeting) / ACCEPTANCE• Transfer (insurance or hedging) / TRANSFER• Ideal use of these strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. 25
  • Risk avoidance • Includes not performing an activity that could carry risk. – Examples: • not buying a property or business in order to avoid the liability that comes with it; • not flying in order to avoid the risk that the airplane will be hijacked. 26
  • Risk reduction – Involves methods that reduce the severity of the loss or the likelihood of the loss from occurring. Examples include sprinklers designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk but the cost may be prohibitive as a strategy. – Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration. – Outsourcing could be an example of risk reduction if the outsourcer can demonstrate higher capability at managing or reducing risks. In this case companies outsource only some of their departmental needs. For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a call center. 27
  • Risk retention • Involves accepting the loss when it occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. – All risks that are not avoided or transferred are retained by default. – This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. • War is an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. • Also, any amount of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great it would hinder the goals of the organization too much. 28
  • Risk transfer• Means causing another party to accept the risk, typically by contract or by hedging.• Insurance is one type of risk transfer that uses contracts.• Other times it may involve contract language that transfers a risk to another party without the payment of an insurance premium. – Liability among construction or other contractors is very often transferred this way.• On the other hand, taking offsetting positions in derivatives is typically how firms use hedging to financially manage risk. 29
  • Sunk Cost • In economics and business decision-making, sunk costs are costs that cannot be recovered once they have been incurred. Sunk costs are sometimes contrasted with variable costs, which are the costs that will change due to the proposed course of action, and which are costs that will be incurred if an action is taken. In microeconomic theory, only variable costs are relevant to a decision. Economics proposes that a rational actor does not let sunk costs influence one's decisions, because doing so would not be assessing a decision exclusively on its own merits. The decision-maker may make rational decisions according to their own incentives; these incentives may dictate different decisions than would be dictated by efficiency or profitability, and this is considered an incentive problem, distinct from a sunk cost problem. • For example, when one pre-orders a non-refundable and non-transferable movie ticket, the price of the ticket becomes a sunk cost. Even if the ticket-buyer decides that he would rather not go to the movie, there is no way to get back the money he originally paid. Therefore, the sunk cost of the ticket should have no bearing on the decision of whether or not to actually go to the movie. In other words, it is a fallacy to conclude that he should go to the movie so as to avoid "wasting" the cost of the ticket. • While sunk costs should not affect the rational decision maker's best choice, the sinking of a cost can. Until you commit your resources, the cost remains an avoidable fixed cost, and should be included in any decision-making process. If the cost is large enough, it could potentially alter your next best choice, or opportunity cost. For example, if you are considering pre-ordering movie tickets, but haven't actually purchased them yet, the cost to you remains avoidable. 
If the price of the tickets rises to an amount that requires you to pay more than the value you place on them, the cost should be figured into your decision-making, and you should reallocate your resources to your next best choice. 30
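The ticket example reduces to a one-line decision rule: compare the value of the activity against only the costs that are still avoidable. The dollar values below are invented.

```python
# Sunk vs. avoidable cost in the movie-ticket example above.
# Values are hypothetical; the rule: ignore costs you can no longer recover.

def should_attend(value_of_movie, avoidable_cost):
    """Go if the value at least covers the costs that can still be avoided."""
    return value_of_movie >= avoidable_cost

# Ticket already bought and non-refundable: its price is sunk, so the
# remaining avoidable cost is zero and attending is the better choice.
go_if_sunk = should_attend(value_of_movie=8.0, avoidable_cost=0.0)       # True

# Ticket not yet bought: the $12 price is still avoidable and counts.
go_if_unbought = should_attend(value_of_movie=8.0, avoidable_cost=12.0)  # False
```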
  • Opportunity Lost?– Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed.– Not entering a business to avoid the risk of loss also avoids the possibility of earning profits. 31
  • Portfolio Investment Management • Large-scale Defense infrastructure modernization programs such as Global Combat Support have complex inter-dependencies and long time horizons that render fully informed investment decisions difficult to achieve before substantial, and unrecoverable, resources are committed (sunk cost). – However complex these decisions, they nonetheless can be decomposed along three basic dimensions: – Uncertainty – Timing – Irreversibility • These primary parameters define the value of investment options available to a firm, regardless of whether it is in the public or private sector. • R. Suter, Managing Uncertainty and Risk in Public Sector Investments, Information Technology Systems, Inc., R Consulting. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School 32
  • Level of Activity over Life Cycle • [Chart: level of activity across the Initiate, Plan, Execute, Monitoring and Control, and Close phases, from start to finish.] Average duty cycle for DoD systems is ten years. 33
  • System of Systems Engineering (SoSE) • System of Systems Engineering (SoSE) methodology is heavily used in Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design of problems in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration and many other System of Systems application domains. SoSE is more than systems engineering of monolithic, complex systems because design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations in multiple levels and domains (as per [1] and [2]). Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements. • System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers for informed architecting of System-of-Systems problems. • Due to varied methodologies and domains of application in the existing literature, there is no single unified consensus on the processes involved in System-of-Systems Engineering. One proposed SoSE framework, by Dr. Daniel A. 
DeLaurentis, recommends a three-phase method in which an SoS problem is defined (understood), abstracted, modeled, and analyzed for behavioral patterns. 34
  • [image slide] 35
  • Complex system of systems [image slide] 36
  • [image slide] 37
  • Complex system of systems • Difficulty with systems of systems: – The technical complexity – The programmatic complexity of integrating software-intensive systems – The absence of accurate cost information at the onset of major systems/software programs 38
  • Portfolio Investment Management: Uncertainty • Unfortunately, algorithms capable of modeling the effects of these variables are relatively few, especially for the uncertainty and irreversibility of investment decisions (Dixit & Pindyck, 1994, p. 211). For large-scale information technology (IT) modernization programs, there are at least three sources of uncertainty (and, thus, risk): – The technical complexity – The programmatic complexity of integrating software-intensive systems – The absence of accurate cost information at the onset of major systems/software programs • Software-intensive systems are particularly sensitive to the systematic underestimation of risk, primarily because the level of complexity is hard to manage, let alone comprehend. Investment in software-intensive systems tends to be irreversible because it is spent on the labor required to develop the intellectual capital embedded in software. The outcome of software development is almost invariably unique, a one-of-a-kind artifact, despite the numerous efforts to develop reusable software. Unlike physical assets, the salvage value of software is zero because no benefit is realized until the system is deployed; and that labor, once invested, is unrecoverable. One result is an (implicit) incentive to continue projects that have little chance of success, despite significant cost overruns and schedule delays. • R. Suter, Managing Uncertainty and Risk in Public Sector Investments, Information Technology Systems, Inc., R Consulting. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School 39
  • [image slide] 40
  • Uncertainty • For large-scale information technology (IT) modernization programs, there are at least three sources of uncertainty (and, thus, risk): – The technical complexity – The programmatic complexity of integrating software-intensive systems – The absence of accurate cost information at the onset of major systems/software programs 41
  • Technological maturity 42
  • Common Software Risks that affect cost & schedule 43
  • Better Methods of Analyzing Cost Uncertainty Can Improve Acquisition Decision-Making • Cost estimation is a process that attempts to forecast the future expenditures for some capital asset, hardware, service, or capability. Despite being a highly quantitative field, cost estimation and the values it predicts are uncertain. An estimate is a possible or likely outcome, but not necessarily the outcome that will actually transpire. This uncertainty arises because estimators do not have perfect information about future events and the validity of assumptions that underpin an estimate. • Uncertainty may result from an absence of critical technical information, • the presence of new technologies or approaches that do not have historical analogues for comparison, • the evolution of requirements, or • changes in economic conditions. • The Office of the Secretary of Defense and the military departments have historically underestimated and underfunded the cost of buying new weapon systems (e.g., by more than 40 percent at Milestone II). • Much of this cost growth is thought to be the result of unforeseen (but knowable) circumstances when the estimate was developed. In the interest of generating more informative cost estimates, the Air Force Cost Analysis Agency and the Air Force cost analysis community want to formulate and implement a cost uncertainty analysis policy. • To help support this effort, RAND Project AIR FORCE (PAF) studied a variety of cost uncertainty assessment methodologies, examined how these methods and policies relate to a total portfolio of programs, and explored how risk information can be communicated to senior decision makers in a clear and understandable way. 44
  • Project AIR FORCE (USAF RAND project) recommends that any cost uncertainty analysis policy reflect the following: • A single uncertainty analysis method should not be stipulated for all circumstances and programs. It is not practical to prefer one specific cost uncertainty analysis methodology in all cases. Rather, the policy should offer the flexibility to use different assessment methods. These appropriate methods fall into three classes: historical, sensitivity, and probabilistic. Moreover, a combination of methods might be desirable and more effective in communicating risks to decision makers. • A uniform communications format should be used. PAF suggests a basic three-point format consisting of low, base, and high values as a minimum basis for displaying risk analysis. The advantages of such a format are that it is independent of the method employed and that it can be easily communicated to decision makers. • A record of cost estimate accuracy should be tracked and updated periodically. Comparing estimates with final costs will enable organizations to identify areas where they may have difficulty estimating and sources of uncertainty that were not adequately examined. • Risk reserves should be an accepted acquisition and funding practice. Establishing reserves to cover unforeseen costs will involve a cultural change within the Department of Defense and Congress. The current approach of burying a reserve within the elements of the estimate makes it difficult to do a retrospective analysis of whether the appropriate level of reserve was set, and to move reserves, when needed, between elements of a large program. • Effective cost uncertainty analysis will help decision makers understand the nature of potential risk and funding exposure and will aid in the development of more realistic cost estimates by critically evaluating program assumptions and identifying technical issues. RAND 45
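PAF's three-point display is easy to sketch. The program names and dollar figures below are invented, and the layout is only one plausible rendering of the low/base/high idea.

```python
# A minimal low/base/high (three-point) cost display, per the PAF suggestion.
# Programs and $M figures are hypothetical.

def three_point_row(program, low, base, high):
    """One fixed-width row: program name, then low/base/high estimates."""
    return f"{program:<22}{low:>10,}{base:>10,}{high:>10,}"

header = f"{'Program ($M)':<22}{'Low':>10}{'Base':>10}{'High':>10}"
rows = [("Airframe upgrade", 180, 220, 310),
        ("Avionics refresh",  90, 140, 260)]

print(header)
for program, low, base, high in rows:
    print(three_point_row(program, low, base, high))
```

The point of the format is that it is method-independent: the three numbers can come from historical, sensitivity, or probabilistic analysis without changing the display.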
  • Uncertainty 46
• COST ESTIMATING CHALLENGES
Developing a good cost estimate requires stable program requirements, access to detailed documentation and historical data, well-trained and experienced cost analysts, a risk and uncertainty analysis, the identification of a range of confidence levels, and adequate contingency and management reserves. Cost estimating is nonetheless difficult in the best of circumstances. It requires both science and judgment. And, since answers are seldom, if ever, precise, the goal is to find a "reasonable" answer. However, the cost estimator typically faces many challenges in doing so. These challenges often lead to bad estimates, which can be characterized as containing poorly defined assumptions.
OMB first issued the Capital Programming Guide as a Supplement to the 1997 version of Circular A-11,
• Part 3, still available on OMB's Web site at http://www.whitehouse.gov/omb/circulars/a11/cpgtoc.html.
• Our reference here is to the 2006 version, as we noted in the preface: Supplement to Circular A-11, Part 7,
• available at http://www.whitehouse.gov/omb/circulars/index.html. 47
  • John Wilder Tukey• "An appropriate answer to the right problem is worth a good deal more than an exact answer to an approximate problem." 48
  • Creating a range around a cost estimate 49
• The absence of accurate cost information at the onset of major systems/software programs
Measures of uncertainty for cost/schedule estimates, and the rate at which that uncertainty declines, are a key concern because they govern whether and to what extent confidence can be placed in cost and schedule estimates. The key to overcoming initial estimate uncertainty is the capability to harness and apply information as it becomes available, thus enabling a firm to capture the time value of that information.
Indeed, IT infrastructure modernization projects supported by a strong quality-assurance, systems-engineering culture (e.g., those that have institutionalized best-practice regimes such as CMMI, Six Sigma, or Agile methods) are likely to quickly reduce estimate errors incurred at project start-up. Firms without that culture tend to have limited information efficiency. (Drawing an analogy to thermodynamic systems, such firms constitute highly dissipative systems in that they exhibit a high degree of entropy, which takes the form of information disorganization.)
Unfortunately, traditional methods of discounting investment risk such as Net Present Value (NPV) do not account for irreversibility and uncertainty. In part, this is because NPV computes the value of a portfolio of investments as the maximized mean of discounted cash flows, on the assumption that the risk to underlying investment options can be replicated by assets in a financial market. NPV also implicitly assumes that the value of the underlying asset is known and accurate at the time the investment decision is made. These assumptions seldom apply for large-scale infrastructure-modernization programs, in either the public or the private sector. In addition, NPV investment is undertaken when the value of a unit of capital is at least as large as its purchase and installation costs. But this can be error prone, since opportunity costs are highly sensitive to the uncertainty surrounding the future value of the project due to factors such as the riskiness of future cash flows. These considerations also extend to econometric models, which exclude irreversibility, the incorporation of which transforms investment models into non-linear equations (Dixit & Pindyck, 1994, p. 421). Nonetheless, irreversibility constitutes both a negative opportunity cost and a lost-option value that must be included in the cost of investment.
R. Suter, "Managing Uncertainty and Risk in Public Sector Investments," Information Technology Systems, Inc., R Consulting. A paper presented at the 4th Annual Acquisition Research Symposium, Graduate School of Business & Public Policy, Naval Postgraduate School 50
  • Cost Estimating Process 51
• Risk Assessment on Costs: A Cost Probability Distribution
[Chart: combined cost modeling and technical risk. A cost estimating relationship of the form Cost = a + b·X^c is fit to historical data points, with standard-percent-error bounds around it (cost modeling uncertainty); the cost driver on the input axis, e.g. weight, carries its own technical risk.]
Jeff Kline, Naval Postgraduate School 52
• COST ESTIMATING METHODOLOGY vs. TIME OF USE
[Chart: estimating methods mapped onto the acquisition life cycle. Gross estimates (expert opinion, analogy) dominate pre-systems acquisition — Concept Refinement and Technology Development, through the Concept Decision, Design Readiness Review, and Milestones A and B; parametric methods carry System Development & Demonstration; detailed estimates (engineering, actuals) take over from Production & Deployment (Milestone C, LRIP/IOT&E, FRP Decision Review) through IOC, FOC, and Operations & Support/Sustainment.] 53
• SOFTWARE DEVELOPMENT CONE OF UNCERTAINTY
All software projects are subject to inherent errors in early estimates. The Cone of Uncertainty represents the best-case reduction in estimation error and improvement in predictability over the course of a project. Skillful project leaders treat the cone as a fact of life and plan accordingly.
[Chart: remaining variability in project scope narrows from 4x/0.25x at Initial Concept, through 2x/0.5x at Approved Product Definition, 1.5x/0.67x as requirements are completed, and 1.25x/0.8x at Detailed Design Complete, converging to 1.0x at Project Complete.]
Project predictability and control are attainable only through active, skillful, and continuous efforts that force the cone to narrow. The cone represents the best case; results can easily be worse. Estimates are possible anywhere in the cone, but organizational commitments tied to project completion should not be made until detailed requirements are complete, and only if work has been done to narrow the cone.
Source: Construx, Bellevue WA
• Software Cost Estimating
• All commercial models (COCOMO II, SEER-SEM, and PRICE-S) are productivity-based models, built around the same basic equation: Cost = Labor Rate ($/hr) × Software Size / Productivity.
• Maximize use of actual data for labor rate, productivity, and size.
• Good source for productivity rates: http://www.stsc.hill.af.mil/CrossTalk/2002/03/reifer.html
• COCOMO II does not capture requirements analysis and government V&V.
• As effort increases, schedule lengthens and productivity decreases; cost increases, along with the likelihood of rework.
• Schedule rule of thumb: Time ≈ 3.67 × Effort^(1/3)
• CAUTIONS:
  – Code reuse lowers cost; modification increases cost.
    • Per OSD/CAIG, modified code with more than 25% of the lines changed or added is considered new code (based on a NASA study).
    • With SEER-SEM, the cost of 99%-modified code < cost of new code.
  – Analogies: don't treat non-similar languages as equivalent. Example in PLCCE: SLOC = C + C++ + IDL + JAVA + XML
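As a sketch of the productivity-based model form and the schedule rule of thumb above: the coefficients `a` and `b`, the labor rate, and the hours-per-month figure below are illustrative assumptions, not calibrated values from COCOMO II, SEER-SEM, or PRICE-S.

```python
def effort_person_months(ksloc, a=2.94, b=1.10):
    """Nominal effort for `ksloc` thousand lines of code.

    `a` and `b` are placeholder constants; real calibrations
    depend on scale factors and cost drivers.
    """
    return a * ksloc ** b

def schedule_months(effort):
    """Slide's rule of thumb: Time ~ 3.67 * Effort^(1/3)."""
    return 3.67 * effort ** (1.0 / 3.0)

def cost_dollars(effort, labor_rate_per_hour, hours_per_month=152):
    """Labor Rate ($/hr) * effort, per the productivity-based form."""
    return effort * hours_per_month * labor_rate_per_hour

effort = effort_person_months(100)   # hypothetical 100-KSLOC program
print(round(effort, 1), "person-months")
print(round(schedule_months(effort), 1), "months")
```

Note how the cube-root schedule rule makes calendar time grow much more slowly than effort: doubling the team's workload does not halve the schedule.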
• Cost Risk Analysis: the process of quantifying uncertainty in a cost estimate.
• By definition a point estimate is precisely wrong:
  – Assessment of risk is not evident in a point estimate.
  – The influence of variables may not be understood by the decision maker.
• Cost risk predicts cost growth.
• Cost risk = cost estimating risk + schedule risk + technical risk + change in requirements/threat.
• Risk analysis adjusts the cost estimate to provide decision makers an understanding of funding risks.
[Chart: probability density function (PDF) and cumulative distribution function (CDF) of cost.] 56
• Simplified Cost Risk Simulation Model Methodology
If no actual data are available, perform the following steps (influenced by complexity, technology status, availability of actual data, or expert opinion):
  – Start from the basis of estimate, by WBS.
  – Assess risk categories for the data inputs: schedule, producibility, reliability.
  – Assign a risk rating to each element: none, low, medium, high, etc.
  – Select a statistical distribution and assign risk limits to it (e.g., +X; -X to +Y, etc.).
  – Run the model on the input PDFs.
[Chart: simulation output — the total-cost PDF and CDF.]
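The steps above can be sketched as a small Monte Carlo roll-up. The three WBS elements, their (low, most likely, high) limits, and the choice of a triangular distribution are hypothetical stand-ins for whatever the risk assessment actually assigns:

```python
import random

def simulate_total_cost(wbs_elements, n_trials=20000, seed=1):
    """Monte Carlo roll-up of WBS element costs.

    Each element is (low, most_likely, high); a triangular
    distribution stands in for the assigned risk limits.
    Returns the sorted total-cost samples (the empirical CDF).
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        totals.append(sum(rng.triangular(lo, hi, mode)
                          for lo, mode, hi in wbs_elements))
    totals.sort()
    return totals

def percentile(sorted_samples, p):
    """Read a confidence level off the empirical CDF."""
    idx = min(len(sorted_samples) - 1, int(p * len(sorted_samples)))
    return sorted_samples[idx]

# Hypothetical three-element WBS, (low, most likely, high) in $M
wbs = [(8, 10, 15), (18, 20, 30), (4, 5, 9)]
samples = simulate_total_cost(wbs)
print("50th percentile ($M):", round(percentile(samples, 0.50), 1))
print("80th percentile ($M):", round(percentile(samples, 0.80), 1))
```

The gap between the 50th- and 80th-percentile totals is exactly the kind of funding-exposure information a point estimate hides.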
• Example 1: Total Software Cost Estimate
[Chart: cumulative cost-risk curve (risk vs. $BY05K, $0–$100,000K), showing schedule slide and schedule risk. Marked points: PM Estimate/ICE 4/05 at roughly 40% confidence; Contract 6/05 and PM Estimate 11/05 near 0%; KTR EAC 3/07 (82% complete) at roughly 82%.]
• Example (Cont'd): Pre- & Post-Contract Software Data
[Chart: SLOC (units) and dollars ($M) vs. date, 2005–2007, from Software Metrics Reports. Marked series: 01-6 ICE 4/05 new SLOC (~300,000); PM SLOC Estimate 4/05 with 76% reuse code; Offeror SLOC Estimate 6/05 with 38% reuse code; Ktr EAC; PDR and CDR milestones.]
• Example (Cont'd): Schedule Risk
Software development schedule, in months:
  – 01-6 ICE (4/05): 25 (COCOMO equation), 30 (NCCA equation)
  – PM Estimate (4/05): 18
  – Contract, initial (6/05): 18
  – Contract, current (3/07, 82% complete): 31
  – 01-6 ICE, current (3/07): 35
• The Refining of a Life Cycle Cost Estimate
[Chart: LCCE cost-estimating uncertainty narrowing across program/system evolution — through MS A (concept trades, AoA), MS B (contractor selection, design reviews, CARD), and MS C (production, test & eval/design mods, logistics).] 61
• DIFFERENTIATING COST ANALYSIS AND COST ESTIMATING
Cost analysis, used to develop cost estimates for such things as hardware systems, automated information systems, civil projects, manpower, and training, can be defined as
1. the effort to develop, analyze, and document cost estimates with analytical approaches and techniques;
2. the process of analyzing and estimating the incremental and total resources required to support past, present, and future systems — an integral step in selecting alternatives; and
3. a tool for evaluating resource requirements at key milestones and decision points in the acquisition process.
Cost estimating involves collecting and analyzing historical data and applying quantitative models, techniques, tools, and databases to predict a program's future cost. More simply, cost estimating combines science and art to predict the future cost of something based on known historical data that are adjusted to reflect new materials, technology, software languages, and development teams.
Because cost estimating is complex, sophisticated cost analysts should combine concepts from such disciplines as accounting, budgeting, computer science, economics, engineering, mathematics, and statistics, and should even employ concepts from marketing and public affairs. And because cost estimating requires such a wide range of disciplines, it is important that the cost analyst either be familiar with these disciplines or have access to an expert in these fields. 63
• Jackson Lears analyzed why the dominant American "culture of control" denies the importance of luck.
• Drawing on a vast body of research, Lears ranges through the entire sweep of American history as he uncovers the hidden influence of risk taking, conjuring, soothsaying, and sheer dumb luck on our culture, politics, social lives, and economy.
T.J. Jackson Lears, Something for Nothing (2003) 65
• Illusion of Control
• In a series of experiments, Ellen Langer (1975) demonstrated first the prevalence of the illusion of control and second, that people were more likely to behave as if they could exercise control in a chance situation where "skill cues" were present. By skill cues, Langer meant properties of the situation more normally associated with the exercise of skill, in particular the exercise of choice, competition, familiarity with the stimulus, and involvement in decisions.
• One simple form of this fallacy is found in casinos: when rolling dice in craps, it has been shown that people tend to throw harder for high numbers and softer for low numbers.
• Under some circumstances, experimental subjects have been induced to believe that they could affect the outcome of a purely random coin toss. Subjects who guessed a series of coin tosses more successfully began to believe that they were actually better guessers, and believed that their guessing performance would be less accurate if they were distracted. 66
• Critique of Taleb
• Taleb's point is rather that most specific forecasting is pointless, as large, rare, and unexpected events (which by definition could not have been included in the forecast) will render the forecast useless.
• However, as Black Swans can be both negative and positive, we can try to structure our lives in order to minimize the effect of the negative Black Swans and maximize the impact of the positive ones. I think this is excellent advice on how to live one's life, and it seems equivalent, for example, to the focus on downside protection (rather than upside potential) that has led to the success of the value approach to investing. 67
• Risk = Variance
• Risk: Well, it certainly doesn't mean standard deviation. People mainly think of risk in terms of downside risk. They are concerned about the maximum they can lose. So that's what risk means.
• In contrast, the professional view defines risk in terms of variance, and doesn't discriminate gains from losses. There is a great deal of miscommunication and misunderstanding because of these very different views of risk. Beta does not do it for most people, who are more concerned with the possibility of loss.
• Daniel Kahneman
Daniel Kahneman is the Eugene Higgins Professor of Psychology at Princeton University and Professor of Public Affairs at the Woodrow Wilson School. Kahneman was born in Israel and educated at the Hebrew University in Jerusalem before taking his PhD at the University of California. He was the joint winner of the Nobel Prize in Economics in 2002 for his work on applying cognitive behavioural theories to decision making in economics. 68
• Cicero
Born: January 3, 106 B.C.E., Arpinum, Latium
Died: December 7, 43 B.C.E., Formiae, Latium
Roman orator and writer Marcus Tullius Cicero: "Probability is the very guide of life."
• The Drunkard's Walk, p. 31 69
• Probability
• "In no other branch of mathematics is it so easy to blunder as in probability theory."
  – Martin Gardner, "Mathematical Games," Scientific American, October 1959, pp. 180-182 70
• The Monty Hall problem
• Probability theory: the Monty Hall problem, birthday pairings, counting principles, conditional probability and independence, Bayes' rule, random variables and their distributions, the Gambler's Ruin problem, random walks, and Markov chains. 71
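The Monty Hall problem itself is easy to check by simulation; a minimal sketch:

```python
import random

def play(switch, rng):
    """One round of the Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay ~ {stay:.3f}, switch ~ {swap:.3f}")  # ~1/3 vs ~2/3
```

Staying wins only when the first pick was the car (probability 1/3); switching wins in every other case, so the simulated rates converge to 1/3 and 2/3.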
  • • Display aircraft movement 72
  • Probability Theory• Probability theory is the branch of mathematics concerned with analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion.• Although an individual coin toss or the roll of a die is a random event, if repeated many times the sequence of random events will exhibit certain statistical patterns, which can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem.• As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to description of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. 74
• Random Variables and the Probability Density Function (PDF)
• In mathematics, random variables are used in the study of chance and probability. They were developed to assist in the analysis of games of chance, stochastic events, and the results of scientific experiments by capturing only the mathematical properties necessary to answer probabilistic questions. Further formalizations have firmly grounded the entity in the theoretical domains of mathematics by making use of measure theory.
• Fortunately, the language and structure of random variables can be grasped at various levels of mathematical fluency. Set theory and calculus are fundamental.
• Broadly, there are two types of random variables: discrete and continuous. Discrete random variables take on one of a set of specific values, each with some probability greater than zero. Continuous random variables can be realized with any of a range of values (e.g., a real number between zero and one), and so there are various ranges (e.g., 0 to one half) that have a probability greater than zero of occurring.
• A random variable has either an associated probability distribution (discrete random variable) or probability density function (continuous random variable). 76
• Probability Density Function
• A probability density function (pdf) describes the relative likelihood that a continuous random variable takes on a given value; the probability that the variable falls in an interval is the integral of the pdf over that interval.
• The figure shows the pdf of a non-linear communications channel, i.e., the embedded output of a 2D system. It has been estimated using a characteristic-function estimator (the characteristic function is the Fourier transform of the pdf, so by estimating the characteristic function you can get an estimate of the pdf by an inverse FFT). 77
• Game theory is a branch of applied mathematics that is used in the social sciences (most notably economics), biology, engineering, political science, computer science (mainly for artificial intelligence), and philosophy. Game theory attempts to mathematically capture behavior in strategic situations, in which an individual's success in making choices depends on the choices of others. While initially developed to analyze competitions in which one individual does better at another's expense (zero-sum games), it has been expanded to treat a wide class of interactions, which are classified according to several criteria. Today, "game theory is a sort of umbrella or 'unified field' theory for the rational side of social science, where 'social' is interpreted broadly, to include human as well as non-human players (computers, animals, plants)" (Aumann 1987).
• Traditional applications of game theory attempt to find equilibria in these games: sets of strategies in which individuals are unlikely to change their behavior. Many equilibrium concepts have been developed (most famously the Nash equilibrium) in an attempt to capture this idea. These equilibrium concepts are motivated differently depending on the field of application, although they often overlap or coincide. This methodology is not without criticism, and debates continue over the appropriateness of particular equilibrium concepts, the appropriateness of equilibria altogether, and the usefulness of mathematical models more generally.
• Although some developments occurred before it, the field of game theory came into being with the 1944 book Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. 
Eight game theorists have won Nobel prizes in economics, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. 78
• Itō's lemma
• In mathematics, Itō's lemma is used in Itō stochastic calculus to find the differential of a function of a particular type of stochastic process. It is the stochastic calculus counterpart of the chain rule in ordinary calculus and is best memorized using the Taylor series expansion and retaining the second-order term related to the stochastic component change. The lemma is widely employed in mathematical finance.
• Itō's lemma is the version of the chain rule or change-of-variables formula which applies to the Itō integral. It is one of the most powerful and frequently used theorems in stochastic calculus. For a continuous d-dimensional semimartingale X = (X1,…,Xd) and twice continuously differentiable function f from Rd to R, it states that f(X) is again a semimartingale.
• This differs from the chain rule used in standard calculus due to the term involving the quadratic covariation [Xi, Xj]. The formula can be generalized to non-continuous semimartingales by adding a pure jump term to ensure that the jumps of the left- and right-hand sides agree. 79
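The statement the slide alludes to (the image of the formula did not survive) can be written out; a sketch in standard notation, first the scalar case and then the continuous d-dimensional semimartingale case described in the text:

```latex
% Scalar case: for an Ito process X with dX_t = \mu_t\,dt + \sigma_t\,dW_t,
df(X_t) = f'(X_t)\,dX_t + \tfrac{1}{2} f''(X_t)\,d[X]_t

% Continuous d-dimensional semimartingale X = (X^1,\dots,X^d),
% f : \mathbb{R}^d \to \mathbb{R} twice continuously differentiable:
f(X_t) = f(X_0)
  + \sum_{i=1}^{d} \int_0^t \frac{\partial f}{\partial x_i}(X_s)\,dX_s^i
  + \tfrac{1}{2} \sum_{i,j=1}^{d} \int_0^t
      \frac{\partial^2 f}{\partial x_i \partial x_j}(X_s)\,d[X^i, X^j]_s
```

The final double sum is the quadratic-covariation term [Xi, Xj] that the slide notes is absent from the ordinary chain rule.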
• EVENT
• In probability theory, an event is a set of outcomes (a subset of the sample space) to which a probability is assigned. Typically, when the sample space is finite, any subset of the sample space is an event (i.e., all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is infinite, most notably when the outcome is a real number. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events (see §2, below).
• A simple example
• If we assemble a deck of 52 playing cards with no jokers, and draw a single card from the deck, then the sample space is a 52-element set, as each individual card is a possible outcome. An event, however, is any subset of the sample space, including any single-element set (an elementary event, of which there are 52, representing the 52 possible cards drawn from the deck), the empty set (which is defined to have probability zero), and the entire set of 52 cards, the sample space itself (which is defined to have probability one). Other events are proper subsets of the sample space that contain multiple elements. So, for example, potential events include:
• "Red and black at the same time without being a joker" (0 elements),
• "The 5 of Hearts" (1 element),
• "A King" (4 elements),
• "A Face card" (12 elements),
• "A Spade" (13 elements),
• "A Face card or a red suit" (32 elements),
• "A card" (52 elements).
• Since all events are sets, they are usually written as sets (e.g., {1, 2, 3}), and represented graphically using Venn diagrams. Venn diagrams are particularly useful for representing events because the probability of the event can be identified with the ratio of the area of the event to the area of the sample space. (Indeed, each of the axioms of probability, and the definition of conditional probability, can be represented in this fashion.)
[Figure: a Venn diagram of an event. B is the sample space and A is an event; by the ratio of their areas, the probability of A is approximately 0.4.] 80
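The card-deck example can be checked mechanically: build the 52-outcome sample space, form events as subsets, and take probabilities as counting ratios.

```python
from fractions import Fraction

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]  # first two are red
deck = [(r, s) for r in ranks for s in suits]      # the 52-element sample space

def prob(event):
    """P(event) = |event| / |sample space| for equally likely outcomes."""
    return Fraction(len(event), len(deck))

kings       = [c for c in deck if c[0] == "K"]
faces       = [c for c in deck if c[0] in ("J", "Q", "K")]
spades      = [c for c in deck if c[1] == "spades"]
face_or_red = [c for c in deck if c[0] in ("J", "Q", "K")
               or c[1] in ("hearts", "diamonds")]

print(prob(kings))        # 1/13
print(prob(faces))        # 3/13
print(prob(spades))       # 1/4
print(len(face_or_red))   # 32, matching the slide's count
```

The union event "a face card or a red suit" has 26 red cards plus the 6 black face cards, giving the 32 elements listed above.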
• EVENT (continued)
• Events in probability spaces
• Defining all subsets of the sample space as events works well when there are only finitely many outcomes, but gives rise to problems when the sample space is infinite. For many standard probability distributions, such as the normal distribution, the sample space is the set of real numbers or some subset of the real numbers. Attempts to define probabilities for all subsets of the real numbers run into difficulties when one considers badly-behaved sets, such as those which are nonmeasurable. Hence, it is necessary to restrict attention to a more limited family of subsets. For the standard tools of probability theory, such as joint and conditional probabilities, to work, it is necessary to use a σ-algebra, that is, a family closed under countable unions and intersections. The most natural choice is the Borel σ-algebra, derived from unions and intersections of intervals. However, the larger class of Lebesgue measurable sets proves more useful in practice.
• In the general measure-theoretic description of probability spaces, an event may be defined as an element of a selected σ-algebra of subsets of the sample space. Under this definition, any subset of the sample space that is not an element of the σ-algebra is not an event, and does not have a probability. With a reasonable specification of the probability space, however, all events of interest will be elements of the σ-algebra. 81
• Law of Large Numbers
• The law was first described by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof, which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem," but it became generally known as "Bernoulli's Theorem" (not to be confused with the law in physics of the same name).
• In 1835, S.D. Poisson further described it under the name "La loi des grands nombres" ("The law of large numbers").[3] Thereafter, it was known under both names, but the "law of large numbers" is most frequently used.
• After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, and Kolmogorov. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law. These forms do not describe different laws but instead refer to different ways of describing the mode of convergence of the cumulative sample means to the expected value, and the strong form implies the weak. 82
• Law of Large Numbers
• Both versions of the law state that the sample average X̄n = (X1 + ... + Xn)/n converges to the expected value µ, where X1, X2, ... is an infinite sequence of i.i.d. random variables with finite expected value:
  – E(X1) = E(X2) = ... = µ < ∞.
• An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often used because it makes the proofs easier and shorter.
• The difference between the strong and the weak version is concerned with the mode of convergence being asserted.
• The weak law
• The weak law of large numbers states that the sample average converges in probability towards the expected value: for any ε > 0, Pr(|X̄n − µ| > ε) → 0 as n → ∞.
• Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value, that is, within the margin.
• Convergence in probability is also called weak convergence of random variables. This version is called the weak law because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as below.
• A consequence of the weak LLN is the asymptotic equipartition property.
• The strong law
• The strong law of large numbers states that the sample average converges almost surely to the expected value: Pr(limn→∞ X̄n = µ) = 1. Its proof is more complex than that of the weak law. This law justifies the intuitive interpretation of the expected value of a random variable as the "long-term average when sampling repeatedly".
• Almost sure convergence is also called strong convergence of random variables. This version is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). The strong law implies the weak law.
• The strong law of large numbers can itself be seen as a special case of the ergodic theorem. 83
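A quick illustration of the law with fair die rolls, whose expected value is 3.5: the sample average wanders for small n and settles near 3.5 as n grows.

```python
import random

def sample_mean_of_die_rolls(n, seed=0):
    """Average of n fair die rolls; the LLN says this converges to 3.5."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        total += rng.randint(1, 6)
    return total / n

for n in (10, 1_000, 100_000):
    print(n, round(sample_mean_of_die_rolls(n), 3))
```

Note the convergence is about the average, not the total: the running sum drifts arbitrarily far from 3.5n in absolute terms even as the mean converges.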
  • Bayesian Analysis• Bayesian inference uses aspects of the scientific method, which involves collecting evidence that is meant to be consistent or inconsistent with a given hypothesis. As evidence accumulates, the degree of belief in a hypothesis ought to change. With enough evidence, it should become very high or very low. Thus, proponents of Bayesian inference say that it can be used to discriminate between conflicting hypotheses: hypotheses with very high support should be accepted as true and those with very low support should be rejected as false. However, detractors say that this inference method may be biased due to initial beliefs that one holds before any evidence is ever collected. (This is a form of inductive bias).• Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before evidence has been observed and calculates a numerical estimate of the degree of belief in the hypothesis after evidence has been observed. (This process is repeated when additional evidence is obtained.) Bayesian inference usually relies on degrees of belief, or subjective probabilities, in the induction process and does not necessarily claim to provide an objective method of induction. Nonetheless, some Bayesian statisticians believe probabilities can have an objective value and therefore Bayesian inference can provide an objective method of induction 84
• The Reverend Thomas Bayes, F.R.S. (1701?-1761)
Bayes' Equation
To convert the probability of event A given event B to the probability of event B given event A, we use Bayes' theorem. We must know or estimate the probabilities of the two separate events.
  Pr(B|A) = Pr(A|B) Pr(B) / Pr(A)
where, by the law of total probability,
  Pr(A) = Pr(A|B) Pr(B) + Pr(A|¬B) Pr(¬B)
85
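A numeric sketch of the equation, using a hypothetical screening test (1% base rate, 95% detection rate, 5% false-positive rate); the numbers are illustrative only:

```python
def bayes(p_a_given_b, p_b, p_a_given_not_b):
    """Pr(B|A) via Bayes' theorem and the law of total probability."""
    p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
    return p_a_given_b * p_b / p_a

# B = "condition present", A = "test positive"
posterior = bayes(p_a_given_b=0.95, p_b=0.01, p_a_given_not_b=0.05)
print(round(posterior, 3))  # ≈ 0.161
```

Even with a 95%-accurate test, a positive result on a rare condition leaves only about a 16% chance the condition is present, because false positives from the much larger negative population dominate Pr(A).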
• Bayesian Analysis - Example of Bayesian search theory
• In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk, Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard, but an extensive search failed to discover the wreck. The US Navy's deep-water expert, John Craven, USN, believed that it was elsewhere, and he organized a search southwest of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, the Mizar, and he took advice from a firm of consultant mathematicians in order to maximize his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of the Scorpion.
• The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched.
• This sea grid was systematically searched in a manner which started with the high-probability regions first and worked down to the low-probability regions last. Each time a grid square was searched and found to be empty, its probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational challenge for the time, but it was eventually successful, and the Scorpion was found about 740 kilometers southwest of the Azores in October of that year.
• Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the wreck if it is there is q. If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of the wreck being in the square is p(1 − q) / (1 − pq). 86
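The update rule for a searched-but-empty square can be sketched directly; the example probabilities below are hypothetical:

```python
def update_after_empty_search(p, q):
    """Bayes update for a searched-but-empty square.

    p: prior probability the wreck is in the square
    q: probability of detecting it if it is there
    Returns the posterior probability for that square.
    """
    return p * (1 - q) / (1 - p * q)

def renormalized_other(p_other, p_searched, q):
    """Unsearched squares are revised upward by the same evidence."""
    return p_other / (1 - p_searched * q)

# Hypothetical square: prior 0.4, detection probability 0.8
p_post = update_after_empty_search(p=0.4, q=0.8)
print(round(p_post, 3))  # prior 0.4 falls to ~0.118
```

An empty search lowers the searched square's probability without zeroing it (the wreck might simply have been missed), and the probability mass it loses is redistributed to the unsearched squares.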
• Stochastic
• Stochastic is synonymous with "random." The word is of Greek origin and means "pertaining to chance" (Parzen 1962, p. 7).
• It is used to indicate that a particular subject is seen from the point of view of randomness.
• Stochastic is often used as the counterpart of the word "deterministic," which means that random phenomena are not involved.
• Therefore, stochastic models are based on random trials, while deterministic models always produce the same output for a given starting condition. 87
  • Randomness 88
  • Stochastic modeling• "Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s).• Its application initially started in physics (sometimes known as the Monte Carlo Method). It is now being applied in engineering, life sciences, social sciences, and finance. 89
  • Valuation• Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on.• So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency.• Deterministic approach: The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides either a point estimate (the best single estimate of the company's current solvency position) or multiple point estimates, depending on the problem definition. Selection and identification of parameter values are frequently a challenge to less experienced analysts. The downside of this approach is that it does not capture the fact that there is a whole range of possible outcomes, some more probable and some less so.• Stochastic modeling• A stochastic approach would be to set up a projection model which looks at a single policy, an entire portfolio or an entire company. But rather than setting investment returns according to their most likely estimate, for example, the model uses random variations to look at what investment conditions might be like.• Based on a set of random outcomes, the experience of the policy/portfolio/company is projected, and the outcome is noted. Then this is done again with a new set of random variables. 
In fact, this process is repeated thousands of times.• At the end, a distribution of outcomes is available which shows not only the most likely estimate but also what ranges are reasonable.• This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A deterministic simulation with varying scenarios for future investment return does not provide a good way of estimating the cost of providing this guarantee, because it does not allow for the volatility of investment returns in each future time period or the chance that an extreme event in a particular time period leads to an investment return less than the guarantee. Stochastic modeling builds volatility and variability (randomness) into the simulation and therefore provides a better representation of real life from more angles. 90
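A minimal sketch of such a stochastic projection, assuming (purely for illustration) normally distributed annual returns with a 7% mean and 10% volatility against a 5% per annum guarantee; none of these numbers come from any particular insurer's model:

```python
import random

def simulate_final_values(n_sims, years, mean=0.07, vol=0.10,
                          guarantee=0.05, seed=42):
    """Project a unit investment n_sims times under random annual returns;
    count how often the result falls short of the guaranteed growth."""
    rng = random.Random(seed)
    finals, breaches = [], 0
    floor = (1 + guarantee) ** years      # value promised by the guarantee
    for _ in range(n_sims):
        value = 1.0
        for _ in range(years):
            value *= 1 + rng.gauss(mean, vol)   # one random annual return
        finals.append(value)
        if value < floor:
            breaches += 1
    return finals, breaches / n_sims

finals, breach_rate = simulate_final_values(5000, 10)
# A deterministic "most likely" projection (1.07**10) would never show a
# breach; the stochastic run reveals how often the guarantee actually bites.
```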
  • Monte Carlo Simulations• Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business (for its use in the insurance industry, see stochastic modeling). A classic use is for the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions.• Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method is intended for financial analysts who want to construct stochastic or probabilistic financial models as opposed to the traditional static and deterministic models.• Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms.• Monte Carlo methods have also proven efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations which produce photorealistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, special effects in cinema, business, economics and other fields.• Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a random x has at least a 75% chance of proving that n is not prime. 
Hence, if n is not prime but x says that it might be, we have observed an event whose probability is at most 1 in 4. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and one answer without a guarantee ("n is probably prime"), together with a bound on how often the unguaranteed answer is wrong: in this case, at most 25% of the time. See also the Las Vegas algorithm for a related, but different, idea. 91
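The Rabin test described above can be sketched as the standard Miller-Rabin witness check; this is a textbook version for illustration, not tuned for production use:

```python
import random

def is_witness(x, n):
    """True if x proves (with certainty) that odd n > 3 is composite."""
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    y = pow(x, d, n)
    if y == 1 or y == n - 1:
        return False                  # x is not a witness
    for _ in range(s - 1):
        y = pow(y, 2, n)
        if y == n - 1:
            return False
    return True                       # n is certainly composite

def probably_prime(n, rounds=10, seed=1):
    """Monte Carlo primality test: "composite" is guaranteed; "probably
    prime" is wrong with probability at most (1/4)**rounds."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    rng = random.Random(seed)
    return not any(is_witness(rng.randrange(2, n - 1), n)
                   for _ in range(rounds))
```

With 10 rounds the unguaranteed answer is wrong at most once in about a million runs, which is exactly the one-in-a-million event described above.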
  • Fitting Lifetime Data to a Weibull Model• This Demonstration shows how to analyze lifetime test data by fitting the data to a Weibull distribution function.• The data are fitted on a log-log plot by a least-squares method.• The results are presented as Weibull distribution CDF and PDF plots. 92
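A hedged sketch of the fitting step, assuming the usual linearization ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta) with median-rank plotting positions; the Demonstration's exact method is not specified here, and the sample lifetimes are made up:

```python
import math

def fit_weibull(lifetimes):
    """Least-squares Weibull fit on linearized axes; returns (shape, scale).
    Uses median-rank plotting positions F_i = (i + 0.5) / n."""
    t = sorted(lifetimes)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1 - (i + 0.5) / n)) for i in range(n)]
    # ordinary least squares for y = beta * x + c
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    c = my - beta * mx
    eta = math.exp(-c / beta)         # since c = -beta * ln(eta)
    return beta, eta

# made-up lifetimes in hours, purely for illustration
shape, scale = fit_weibull([120, 95, 210, 160, 75, 130, 180, 145])
```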
  • Connecting the CDF and the PDF• The probability density function (PDF, upper plot) is the derivative of the cumulative distribution function (CDF, lower plot). This elegant relationship is illustrated here. The default plot of the PDF answers the question, "How much of the distribution of a random variable is found in the filled area; that is, how much probability mass is there between observation values equal to or more than 64 and equal to or fewer than 70?"• The CDF is more helpful. By reading the y axis you can estimate the probability of a particular observation within that range: take the difference between 90.8%, the probability of values below 70, and 25.2%, the probability of values below 63, to get 65.6%.• http://demonstrations.wolfram.com/ConnectingTheCDFAndThePDF/ 94
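The CDF-difference reading can be reproduced numerically. The normal distribution and its parameters below (mean 66, standard deviation 3.5) are illustrative assumptions, not the Demonstration's actual settings, so the percentages differ slightly:

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 66.0, 3.5                 # assumed parameters, for illustration only
# probability mass between 63 and 70 = difference of two CDF readings
mass = normal_cdf(70, mu, sigma) - normal_cdf(63, mu, sigma)
# the PDF is the derivative of the CDF: check by finite differences at x = 64
h = 1e-6
deriv = (normal_cdf(64 + h, mu, sigma) - normal_cdf(64 - h, mu, sigma)) / (2 * h)
```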
  • Wolfram Mathematica Player• I noticed you downloaded Mathematica Player. I assume you found lots of great Demonstrations to utilize within your curriculum, but if not (or if you had trouble figuring out how to use them), here's a video that will help you get started:• http://www.wolfram.com/videos/discoverdemonstrations• Most people find the deployment of existing Demonstrations extremely useful in illustrating concepts to their students, and often want to make their own models showing specific ideas interactively within the class. If that applies to you, here's a second video that teaches you how to make models:• http://www.wolfram.com/screencasts/makingmodels• I would be happy to walk you through the Demonstrations process if you have any questions or concerns. Please let me know how I can help make your classroom an interactive environment. If there are topics you'd like to see Demonstrations for in the future, I look forward to hearing those suggestions as well.• Sincerely,• Scott Rauguth• Academic Marketing Manager• Wolfram Research, Inc.• http://www.wolfram.com• P.S. Did you know that the Wolfram Education Group offers free online seminars for training and development of Mathematica proficiency, including Creating Demonstrations? Visit:• http://www.wolfram.com/seminars/s14.html 95
  • Power Law In Nature• An example of a statistical macroscopic relation is the distribution of the magnitude of earthquakes. If N(E) is the annual mean number of earthquakes (in a zone or worldwide) of size E (energy released), then empirically one finds N(E) ∝ E^(-B) over a wide range, with the constant B of order one. This relation is called the Gutenberg-Richter law and is obviously a statistical relation for observables: it does not specify when an earthquake of some magnitude will occur but only what the mean distribution in magnitude is. The Gutenberg-Richter law is a power law and is therefore scale-invariant: a change of scale in E can be absorbed into a normalization constant, leaving the form of the law invariant. The scale-invariance of the law implies a scale-invariance in the phenomena themselves: earthquakes happen on all scales and there is no typical or mean magnitude! There are many other natural phenomena which exhibit power laws over a wide range of the parameters: volcanic activity, solar flares, charge released during lightning events, length of streams in river networks, forest fires, and even the extinction rate of biological species! Some of these power laws refer to spatial scale-free structures, or fractals, while others refer to temporal events and are examples of the ubiquitous "one-over-f" phenomena (see chapter 2). Can the frequent appearance of such power laws in complex systems be explained in a simple way? Note that the systems mentioned above are examples of dissipative structures, with a slow but constant inflow of energy and its eventual dissipation. The systems are clearly out of equilibrium, since we know that equilibrium systems tend towards uniformity rather than complexity. On the other hand, the abovementioned systems display scale-free behaviour similar to that exhibited by equilibrium systems near a critical point of a second-order phase transition. 
However, while the critical point in equilibrium systems is reached only for some specific value of an external parameter, such as temperature, for the dissipative structures above the scale-free behaviour appears to be robust and does not seem to require any fine-tuning. Bak and collaborators proposed that many dissipative complex systems naturally self-organise to a critical state, with the consequent scale-free fluctuations giving rise to power laws. In short, the proposal is that self-organised criticality is the natural state of large complex dissipative systems, relatively independent of initial conditions. It is important to note that while the critical state in an equilibrium second-order phase transition is unstable (slight perturbations move the system away from it), the critical state of self-organised systems is stable: systems are continually attracted to it! The idea that many complex systems are in a self-organised critical state is intuitively appealing because it is natural to associate complexity with a state that is balanced at the edge between total order and total disorder (sometimes loosely referred to as the "edge of chaos"). Far from the critical point, one typically has a very ordered phase on one side and a greatly disordered phase on the other side. It is only at the critical point that one has large correlations among the different parts of a large system, thus making it possible to have novel emergent properties, and in particular scale-free phenomena. In addition to the examples mentioned above, self-organised criticality has also been proposed to apply to economics, traffic jams, forest fires and even the brain! 96
  • Power Law• An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail; to the left are the few that dominate (also known as the 80-20 rule).• A power law is any polynomial relationship that exhibits the property of scale invariance. The most common power laws relate two variables and have the form f(x) = a·x^k + o(x^k), where a and k are constants and o(x^k) is an asymptotically small function of x. Here, k is typically called the scaling exponent, denoting the fact that a power-law function (or, more generally, a kth-order homogeneous polynomial) satisfies the criterion f(cx) ∝ f(x), where c is a constant. That is, scaling the function's argument changes the constant of proportionality as a function of the scale change, but preserves the shape of the function itself. This relationship becomes clearer if we take the logarithm of both sides (or, graphically, plot on a log-log graph): log f(x) = k·log x + log a.• Notice that this expression has the form of a linear relationship with slope k; scaling the argument induces a linear shift of the function (up or down) and leaves both the form and the slope k unchanged.• Power-law relations characterize a staggering number of natural patterns, and it is primarily in this context that the term power law is used rather than polynomial function. For instance, inverse-square laws, such as gravitation and the Coulomb force, are power laws, as are many common mathematical formulae such as the quadratic law for the area of a circle. Also, many probability distributions have tails that asymptotically follow power-law relations, a topic that connects tightly with the theory of large deviations (also called extreme value theory), which considers the frequency of extremely rare events like stock market crashes and large natural disasters.• Scientific interest in power law relations, whether functions or distributions, comes primarily from the ease with which certain general classes of mechanisms can generate them. 
That is, the observation of a power-law relation in data often points to specific kinds of mechanisms that underlie the natural phenomenon in question, and can often indicate a deep connection with other, seemingly unrelated systems (for instance, see both the reference by Simon and the subsection on universality below). The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems power laws are often thought to be signatures of hierarchy and robustness. A few notable examples of power laws are the Gutenberg-Richter law for earthquake sizes, Pareto's law of income distribution, the structural self-similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is extremely active in many fields of modern science, including physics, computer science, and linguistics. 97
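Both defining properties, scale invariance and log-log linearity, can be checked in a few lines; the constants a, k, and c are arbitrary illustrations:

```python
import math

a, k, c = 2.0, -1.5, 10.0             # arbitrary constants

def f(x):
    return a * x ** k                  # a pure power law

xs = [1.0, 3.0, 7.5, 42.0]
# scale invariance: f(c*x) = c**k * f(x), the same factor for every x
ratios = [f(c * x) / f(x) for x in xs]
# log-log linearity: the slope between any two points equals k
slope = ((math.log(f(xs[1])) - math.log(f(xs[0])))
         / (math.log(xs[1]) - math.log(xs[0])))
```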
  • Extreme Value Theory• http://www.panix.com/~kts/Thesis/extre me/extreme1.html 98
  • Prospect Theory• Prospect theory was developed by Daniel Kahneman and Amos Tversky in 1979 99
  • Prospect Theory• Prospect theory was developed by Daniel Kahneman and Amos Tversky in 1979 as a psychologically realistic alternative to expected utility theory. It allows one to describe how people make choices in situations where they have to decide between alternatives that involve risk, e.g. in financial decisions. Starting from empirical evidence, the theory describes how individuals evaluate potential losses and gains. In the original formulation the term prospect referred to a lottery.• The theory describes such decision processes as consisting of two stages, editing and evaluation. In the first, possible outcomes of the decision are ordered following some heuristic. In particular, people decide which outcomes they see as basically identical, set a reference point, and consider lower outcomes as losses and larger ones as gains. In the following evaluation phase, people behave as if they would compute a value (utility) based on the potential outcomes and their respective probabilities, and then choose the alternative having the higher utility.• The formula that Kahneman and Tversky assume for the evaluation phase is (in its simplest form) U = w(p1)·v(x1) + w(p2)·v(x2) + ..., where x1, x2, ... are the potential outcomes and p1, p2, ... their respective probabilities. v is a so-called value function that assigns a value to an outcome. The value function (sketched in the Figure), which passes through the reference point, is s-shaped and, as its asymmetry implies, given the same variation in absolute value there is a bigger impact of losses than of gains (loss aversion). In contrast to expected utility theory, it measures losses and gains, but not absolute wealth. The function w is called a probability weighting function and expresses that people tend to overreact to small-probability events, but underreact to medium and large probabilities.• To see how Prospect Theory (PT) can be applied in an example, consider a decision about buying an insurance policy. 
Let us assume the probability of the insured risk is 1%, the potential loss is $1000 and the premium is $15. If we apply PT, we first need to set a reference point. This could be, e.g., the current wealth or the worst case (losing $1000). If we set the frame to the current wealth, the decision would be either to pay $15 for sure (which gives the PT-utility v(-15)) or to accept a lottery with outcomes $0 (probability 99%) or -$1000 (probability 1%), which yields the PT-utility w(0.01)·v(-1000). These expressions can be computed numerically. For typical value and weighting functions, the lottery's PT-utility may be the larger of the two, due to the convexity of v in losses, and hence the insurance looks unattractive. If we set the frame to -$1000, both alternatives are framed as gains. The concavity of the value function in gains can then lead to a preference for buying the insurance.• We see in this example that a strong overweighting of small probabilities can also undo the effect of the convexity of v in losses: the potential outcome of losing $1000 is overweighted.• The interplay of overweighting of small probabilities and the concavity-convexity of the value function leads to the so-called four-fold pattern of risk attitudes: risk-averse behavior in gains involving moderate probabilities and in small-probability losses; risk-seeking behavior in losses involving moderate probabilities and in small-probability gains. This explains why people, e.g., simultaneously buy lottery tickets and insurance, yet still invest money conservatively. 100
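A sketch of the current-wealth frame of this insurance example, assuming the standard Tversky-Kahneman (1992) functional forms and their estimated parameters (alpha = 0.88, lambda = 2.25, gamma = 0.69 for losses); these particular parameter choices are an assumption for illustration, not part of the original example:

```python
def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave in gains, convex and steeper in losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.69):
    """Probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# frame at current wealth: pay $15 for sure, or face a 1% chance of -$1000
utility_insure = value(-15)
utility_no_insure = weight(0.01) * value(-1000)
buy_insurance = utility_insure > utility_no_insure
```

With these particular estimates, the overweighting of the 1% probability dominates the convexity of v in losses, so the insurance comes out attractive, illustrating exactly the interplay described above.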
  • Kalman filter• The Kalman filter is an efficient recursive filter that estimates the state of a dynamic system from a series of incomplete and noisy measurements. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear-quadratic-Gaussian (LQG) control problem. The Kalman filter, the linear-quadratic regulator and the linear-quadratic-Gaussian controller are solutions to what are probably the most fundamental problems in control theory.• An example application would be providing accurate, continuously updated information about the position and velocity of an object given only a sequence of observations about its position, each of which includes some error. It is used in a wide range of engineering applications from radar to computer vision. Kalman filtering is an important topic in control theory and control systems engineering.• For example, in a radar application where one is interested in tracking a target, information about the location, speed, and acceleration of the target is measured with a great deal of corruption by noise at any time instant. The Kalman filter exploits the dynamics of the target, which govern its time evolution, to remove the effects of the noise and get a good estimate of the location of the target at the present time (filtering), at a future time (prediction), or at a time in the past (interpolation or smoothing). A simplified version of a Kalman filter is the alpha beta filter, still commonly used, which has static weighting constants instead of covariance matrices. 101
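A minimal scalar Kalman filter makes the idea concrete: estimating a static quantity from noisy measurements. The readings and noise variance are made-up numbers, and the static, no-process-noise model is a deliberate simplification of the general filter:

```python
def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1e6):
    """Scalar Kalman filter for a static state with no process noise."""
    est, var = init_est, init_var
    for z in measurements:
        k = var / (var + meas_var)    # Kalman gain: how much to trust the data
        est = est + k * (z - est)     # correct the estimate by the innovation
        var = (1 - k) * var           # uncertainty shrinks with each update
    return est, var

readings = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2]   # noisy observations of roughly 5.0
est, var = kalman_1d(readings, meas_var=0.25)
```

With a diffuse prior and no dynamics, the filter reduces to a running average whose variance falls like meas_var/n, which is a useful sanity check.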
  • Control theory: a feedback loop• Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems. The desired output of a system is called the reference. When one or more output variables of a system need to follow a certain reference over time, a controller manipulates the inputs to the system to obtain the desired effect on its output. 102
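A toy sketch of such a feedback loop: a proportional controller driving a trivially simple plant toward its reference. The plant model and gain are illustrative assumptions, not a real control design:

```python
def simulate_p_control(reference, steps=50, gain=0.5):
    """Proportional feedback: the controller pushes the output toward the
    reference; the 'plant' here simply accumulates the control input."""
    output = 0.0
    for _ in range(steps):
        error = reference - output    # compare measured output with reference
        u = gain * error              # controller manipulates the input
        output += u                   # toy plant response
    return output

final = simulate_p_control(reference=10.0)
```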
  • The Drunkard's Walk – Functional magnetic resonance imaging, for example, shows that risk and reward are assessed by parts of the dopaminergic system, a brain-reward circuit important for motivational and emotional processes.²• The images show, too, that the amygdala, which is also linked to our emotional state, especially fear, is activated when we make decisions couched in uncertainty.² 103
  • The Drunkard's Walk• Fortune is fair in potentialities; she is not fair in outcomes. pp 13• When we look at extraordinary accomplishments in sports, or elsewhere, we should keep in mind that extraordinary events can happen without extraordinary causes.• Random events often look like nonrandom events, and in interpreting human affairs we must take care not to confuse the two. pp 20 104
  • Random Walk 105
  • Randomness 106
  • Expectation• Expectation is an important concept not just in gambling but in all decision making. The Drunkard's Walk, pp 76 107
  • Expectation• "If you say that all risk is unknowable," Gregg Berman said, "you don't have the basis of any sort of a bet or a trade. You cannot buy and sell anything unless you have some idea of the expectation of how it will move."• NY Times Magazine, 5 January 2009, Risk Mismanagement. 108
  • Baseball Example• If they (baseball teams) don't have an equal chance . . . each outcome would have to be weighted by a factor describing its relative probability. If you do that and analyze the situation at the start of a series, you will discover that in a 7-game series there is a sizable chance that the inferior team will be crowned champion.• Pp 70-71, The Drunkard's Walk 109
  • Baseball Example• In a lopsided 2/3rds-probability case, for example, you'd have to play a series consisting of, at a minimum, the best of 23 games to determine the winner with what is called statistical significance, meaning the weaker team would be crowned champion 5% or less of the time.• And in the case of one's team having only a 55-45 edge (i.e., 55:45, just slightly imbalanced), the shortest statistically significant "world series" would be the best of 269 games.• Pp 70-71, The Drunkard's Walk 110
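The book's claims can be checked directly from the binomial distribution; playing all 7 games and requiring a majority of wins is probabilistically equivalent to the usual first-to-4 format:

```python
from math import comb

def series_win_prob(p, best_of=7):
    """Probability that a team with per-game win probability p takes the
    series: it must win a majority of best_of independent games."""
    need = best_of // 2 + 1
    return sum(comb(best_of, k) * p ** k * (1 - p) ** (best_of - k)
               for k in range(need, best_of + 1))

upset_lopsided = series_win_prob(1 / 3)   # weaker team in the 2/3 vs 1/3 case
upset_close = series_win_prob(0.45)       # weaker team in the 55:45 case
```

This gives roughly a 17% upset chance in the lopsided 2/3 vs 1/3 case and roughly 39% in the 55:45 case, which is why only a very long series is statistically significant.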
  • ―Law of Sample Space‖• In probability theory, the sample space or universal sample space, often denoted S, Ω, or U (for "universe"), of an experiment or random trial is the set of all possible outcomes. For example, if the experiment is tossing a coin, the sample space is the set {head, tail}. For tossing a single six-sided die, the sample space is {1, 2, 3, 4, 5, 6}. For some kinds of experiments, there may be two or more plausible sample spaces available. For example, when drawing a card from a standard deck of 52 playing cards, one possibility for the sample space could be the rank (Ace through King), while another could be the suit (clubs, diamonds, hearts, or spades). A complete description of outcomes, however, would specify both the denomination and the suit, and a sample space describing each individual card can be constructed as the Cartesian product of the two sample spaces noted above.• In an elementary approach to probability, any subset of the sample space is usually called an event. However, this gives rise to problems when the sample space is infinite, so that a more precise definition of event is necessary. Under this definition only measurable subsets of the sample space, constituting a σ-algebra over the sample space itself, are considered events. However, this has essentially only theoretical significance, since in general the σ-algebra can always be defined to include all subsets of interest in applications. 111
  • Sample Space• The sample space for a single coin toss is the set of two outcomes: {heads, tails}.• For two tosses, the sample space consists of the ordered pairs (heads, heads), (heads, tails), (tails, heads), (tails, tails): these four possibilities make up the sample space. 112
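The two coin-toss sample spaces can be enumerated directly; for a single toss there are two outcomes, and for two tosses the Cartesian product gives four:

```python
from itertools import product

one_toss = {"H", "T"}                      # sample space for a single toss
two_tosses = set(product(one_toss, repeat=2))
# the four equally likely outcomes: (H,H), (H,T), (T,H), (T,T)
```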
  • central limit theorem• The central limit theorem (CLT) states that the sum of a large number of independent and identically-distributed random variables will be approximately normally distributed (i.e., following a Gaussian distribution, or bell-shaped curve) if the random variables have a finite variance.• Formally, a central limit theorem is any of a set of weak-convergence results in probability theory. They all express the fact that any sum of many independent identically distributed random variables will tend to be distributed according to a particular "attractor distribution". 113
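A quick empirical check of the statement, using sums of uniform random variables (which have finite variance); the term and sample counts are arbitrary:

```python
import random

def sample_sums(n_terms, n_samples, seed=0):
    """Draw n_samples sums, each of n_terms independent Uniform(0,1) draws."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(n_terms))
            for _ in range(n_samples)]

sums = sample_sums(n_terms=12, n_samples=20000)
mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
# theory: mean = 12 * 1/2 = 6 and variance = 12 * 1/12 = 1, and the
# histogram of `sums` is already close to the normal bell shape
```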
  • "It ain't over till it's over." - Yogi Berra• John Maynard Keynes: "But this long run is a misleading guide to current affairs. In the long run we are all dead." Keynes identified three domains of probability: frequency probability; subjective or Bayesian probability; and events lying outside the possibility of any description in terms of probability (special causes), and based a probability theory thereon. 114
  • A Markov Chain Analysis of Blackjack Strategy 115
  • Markov Chains• Many statistical problems of practical interest are simply too complicated to explore analytically. In these cases, researchers often turn to simulation techniques in order to evaluate the expected outcomes. When approached creatively, however, these problems sometimes reveal a structure that is consistent with a much simpler mathematical framework, possibly permitting an analytical exploration of the problem. It is this situation that we have encountered with the game of blackjack.• Blackjack receives considerable attention from mathematicians and entrepreneurs alike, due to its simple rules, its inherent random nature, and the abundance of "prior" information available to an observant player. Indeed, many attempts have been made to propose card-counting systems that exploit such information to the player's advantage. Most card-counting systems include two aspects: a method for monitoring the cards played to watch for favorable situations, and a strategy for playing and betting depending on the current state of the game. Because blackjack is a complicated game, attempts to actually calculate the expected gain from a particular system often rely on simulation techniques. While such techniques may yield correct results, they may also fail to explore the interesting mathematical properties of the game.• Despite the apparent complexity, there is a great deal of structure inherent in both the blackjack rules and the card-counting systems. Exploiting this structure and elementary results from the theory of Markov chains, we present a novel framework for analyzing the expected advantage of a card-counting system entirely without simulation. 
The method presented here requires only a few mild simplifying assumptions, can account for many rule variations, and is applicable to a large class of counting systems.• As a specific example, we verify the reported advantage provided by one of the earliest systems, the Complete Point-Count System, introduced by Harvey Dubner in 1963 and discussed in Edward Thorp's famous book, Beat the Dealer. While verifying this analysis is satisfying, in our opinion the primary value of this work lies in the exposition of an interesting mathematical framework for analyzing a complicated "real-world" problem. 116
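Not the paper's blackjack chain, but a toy version of the same idea: computing an expected outcome exactly from a Markov chain's structure rather than by simulation. Here, a fair gambler's-ruin chain on states 0..5, whose absorption probability has the known closed form i/N:

```python
def absorption_prob(start, top, p=0.5, iters=5000):
    """P(reach `top` before 0) for a birth-death chain on 0..top, computed
    by iterating the chain's transition structure (no simulation involved)."""
    probs = {s: 0.0 for s in range(top + 1)}
    probs[top] = 1.0                   # absorbing boundary conditions
    for _ in range(iters):
        new = dict(probs)
        for s in range(1, top):
            # one step of the chain: up with probability p, down with 1 - p
            new[s] = p * probs[s + 1] + (1 - p) * probs[s - 1]
        probs = new
    return probs[start]

win = absorption_prob(start=2, top=5)  # fair game: closed form gives 2/5
```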
  • Parkinson's Law• Parkinson's Law is the adage that "work expands so as to fill the time available for its completion." A more succinct phrasing also commonly used is "work expands to fill the time available." It was first articulated by Cyril Northcote Parkinson, appearing as the first sentence of a humorous essay published in The Economist in 1955, later reprinted together with other essays in the book Parkinson's Law: The Pursuit of Progress (London, John Murray, 1958). He derived the dictum from his extensive experience in the British Civil Service.• The current form of the law is not that which Parkinson refers to by that name in the article. Rather, he assigns to the term a mathematical equation describing the rate at which bureaucracies expand over time. Much of the essay is dedicated to a summary of purportedly scientific observations supporting his law, such as the increase in the number of employees at the Colonial Office while Britain's overseas empire declined (indeed, he shows that the Colonial Office had its greatest number of staff at the point when it was folded into the Foreign Office because of a lack of colonies to administer). He explains this growth by two forces: (1) "An official wants to multiply subordinates, not rivals" and (2) "Officials make work for each other." He notes in particular that the total of those employed inside a bureaucracy rose by 5-7% per year "irrespective of any variation in the amount of work (if any) to be done."• In time, however, the first-referenced meaning of the phrase has dominated, and sprouted several corollaries: for example, the derivative relating to computers: "Data expands to fill the space available for storage". "Parkinson's Law" could be generalized further still as: "The demand upon a resource tends to expand to match the supply of the resource." An extension is often added to this, stating that "the reverse is not true." 
This generalization has become very similar to the economic law of demand: the lower the price of a service or commodity, the greater the quantity demanded.• Parkinson also proposed a rule about the efficiency of administrative councils. He defined a coefficient of inefficiency with the number of members as the main determining variable.• Application of Parkinson's Law• In project management, individual tasks with end-dates rarely finish early because the people doing the work expand the work to finish approximately at the end-date. Coupled with the Student syndrome, individual tasks are nearly guaranteed to be late.• Individuals see this arise in their daily activities as well. A person with many tasks to do will allocate their time so that all of the tasks are completed, while an elderly, retired person may take all day just to pick out a greeting card. In this case, the work expended upon each task expands to fill the available time for the task. This leads to the canard, "If you want something done, give it to a busy person".• As an individual's income rises, their cost of living and lifestyle rise to meet their income level.• Part of Cyril Northcote Parkinson's observations is that once a core organisation exists, it will perpetuate and expand itself regardless of the reason it came into being. 117
  • Yale Endowment• Because investment management involves as much art as science, qualitative considerations play an extremely important role in portfolio decisions. The definition of an asset class is quite subjective, requiring precise distinctions where none exist. Returns and correlations are difficult to forecast. Historical data provide a guide, but must be modified to recognize structural changes and compensate for anomalous periods. Finally, quantitative measures have difficulty incorporating factors such as market liquidity or the influence of significant, low-probability events. 118
  • Yale Endowment 119
  • Performance for Harvard, Yale Endowments in 2008
Harvard: 8.6%
Yale: 4.5%
Pretty impressive considering stocks were down more than 10% over the same time period. Below is a table of the five main asset classes over the past year and their total returns. The buy-and-hold allocation is the same allocation mentioned in my paper, namely a 20% allocation to the same five asset classes. GTAA is the timing model from the same paper. (Both are gross returns, so lop off about 50 bps for management fees.)
Harvard: 8.6%
Yale: 4.5%
B&H: 9.79% (no rebalance), 6.44% (monthly rebalance)
GTAA: 13.92% (no rebalance), 11.15% (monthly rebalance)
US Stocks: -13.12%
Foreign Stocks: -10.15%
Bonds: 12.76%
Commodities: 75.99%
REITs: -16.55% 120
  • Framing Effect
The framing of alternatives also affects decisions. For example, when people (including doctors) who are considering a risky medical procedure are told that 90 percent survive five years, they are far more likely to accept the procedure than when they are told that 10 percent do not survive five years. Because framing affects people's behavior, providing more information cannot remedy matters unless the information is presented in a fully neutral fashion. In some cases, additional information only increases people's anxiety and confusion, thereby reducing their welfare.
Rick Ferri is the founder and CEO of Portfolio Solutions, LLC, a Michigan-based investment firm, and is an adjunct professor of finance at Walsh College. He has written five investment books. Ferri worked as a stockbroker with two major Wall Street firms for 10 years before starting his firm in 1999. Portfolio Solutions manages close to $700 million in separate accounts for high net worth individuals, families, nonprofit organizations and corporate pension plans. 121
  • Rules of Thumb
Brewing Rule of Thumb: Before the invention of thermometers, the brewer tested the wort by placing his thumb in it. When he could reliably place his thumb in the wort without having to remove it because of the heat, the wort was cool enough to pitch the yeast.
• People rely on a limited number of heuristic principles (essentially rules of thumb) which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations.
• In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors.
• Daniel Kahneman 122
  • heuristic
• A heuristic (hyu̇-ris-tik) is a method to help solve a problem, commonly an informal one. The term is particularly used for a method that often rapidly leads to a solution that is usually reasonably close to the best possible answer. Heuristics are "rules of thumb": educated guesses, intuitive judgments or simply common sense.
• In more precise terms, heuristics stand for strategies using readily accessible, though loosely applicable, information to control problem-solving in human beings and machines. 123
  • Problems with Heuristics
• Heuristic algorithms are often employed because they may be seen to "work" without having been mathematically proven to meet a given set of requirements.
• Great care must be taken when employing a heuristic algorithm. One common pitfall in implementing a heuristic method to meet a requirement comes when the engineer or designer fails to realize that the current data set doesn't necessarily represent future system states.
• While the existing data can be pored over and an algorithm can be devised to successfully handle the current data, it is imperative to ensure that the heuristic method employed is capable of handling future data sets.
• This means that the engineer or designer must fully understand the rules that generate the data and develop the algorithm to meet those requirements, not just address the current data sets.
• A simple example of how heuristics can fail is to answer the question "What is the next number in this sequence: 1, 2, 4?" One heuristic might say that the next number is 8 because the numbers are doubling, leading to a sequence like 1, 2, 4, 8, 16, 32... Another, equally valid, heuristic would say that the next number is 7 because each number is being raised by one higher interval than the one before, leading to a series that looks like 1, 2, 4, 7, 11, 16...
• Statistical analysis must be conducted when employing heuristics to ensure that enough data points are utilized to make incorrect outcomes statistically insignificant. 124
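The two competing continuations described above can be made concrete. The functions below are an illustrative sketch only; the names and the seven-term horizon are my own choices, not from the slide:

```python
# Two equally plausible heuristics for continuing the sequence 1, 2, 4.
# Each rule fits the observed data, yet they diverge immediately on
# unseen terms -- the pitfall the slide describes.

def doubling(seq, n):
    """Assume each term is double the previous one."""
    out = list(seq)
    while len(out) < n:
        out.append(out[-1] * 2)
    return out

def growing_gaps(seq, n):
    """Assume the gap between consecutive terms grows by 1 each step."""
    out = list(seq)
    gap = out[-1] - out[-2]
    while len(out) < n:
        gap += 1
        out.append(out[-1] + gap)
    return out

print(doubling([1, 2, 4], 7))      # [1, 2, 4, 8, 16, 32, 64]
print(growing_gaps([1, 2, 4], 7))  # [1, 2, 4, 7, 11, 16, 22]
```

Both rules "work" on the current data set; only future data reveals which one was wrong.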
  • Rules of Thumb
• Financial – Rule of 72. A rule of thumb for exponential growth at a constant rate. An approximation of the "doubling time" formula used in population growth, which says to divide 70 by the percent growth rate (the exact constant is 69.31…, from the natural logarithm of 2, valid when the percent growth rate is small). In terms of money, it is frequently easier to use 72 (rather than 70) because it works better in the 4%-10% range where interest rates often lie. Therefore, divide 72 by the percent interest rate to determine the approximate amount of time to double your money in an investment. For example, at 8% interest, your money will double in approximately 9 years (72/8 = 9).
• Tailor's Rule of Thumb. A simple approximation that was used by tailors to determine the wrist, neck, and waist circumferences of a person through one single measurement of the circumference of that person's thumb. The rule states, typically, that twice the circumference of a person's thumb is the circumference of their wrist, twice the circumference of the wrist is the circumference of the neck, and twice around the neck is the person's waist. For example, if the circumference of the thumb is 4 inches, then the wrist circumference is 8 inches, the neck is 16 and the waist is 32. An interesting consequence is that — for those to whom the rule applies — this simple method can be used to determine if trousers will fit: the trousers are wrapped around the neck, and if the two ends barely touch, then they will fit. Any overlap or gap corresponds to the trousers being too loose or too tight, respectively.
• Marine Navigation. A ship's captain should navigate to keep the ship more than a thumb's width from the shore, as shown on the nautical chart being used. Thus, with a coarse-scale chart, which provides few details of nearshore hazards such as rocks, a thumb's width would represent a great distance, and the ship would be steered far from shore; whereas on a fine-scale chart, in which more detail is provided, a ship could be brought closer to shore.[1]
• Statistics. The statistical rule of thumb says that for most large data sets, 68% of data points will occur within one standard deviation of the mean, and 95% will occur within two standard deviations. Chebyshev's inequality is a more general rule along these same lines and applies to all data sets. 125
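The Rule of 72 lends itself to a quick check against the exact doubling-time formula ln(2)/ln(1 + r). This sketch (function names are mine) compares the approximation with the exact value across the 4%-10% band the slide mentions:

```python
import math

def doubling_time_exact(rate_pct):
    """Exact periods to double at compound growth of rate_pct per period."""
    return math.log(2) / math.log(1 + rate_pct / 100)

def doubling_time_rule72(rate_pct):
    """Rule-of-72 approximation: 72 divided by the percent rate."""
    return 72 / rate_pct

for r in (4, 6, 8, 10):
    print(f"{r}%: rule of 72 -> {doubling_time_rule72(r):.2f} yrs, "
          f"exact -> {doubling_time_exact(r):.2f} yrs")
```

At 8% the rule gives exactly 9 years against an exact value of about 9.01, which is why 72 is the convenient constant in this range.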
  • Rules of Thumb
"There is always a well-known solution to every human problem – neat, plausible, and wrong." – H. L. Mencken, Prejudices: Second Series, 1920
Tailor's Rule of Thumb. This is the fictional rule described by Jonathan Swift in his satirical novel Gulliver's Travels: "Then they measured my right Thumb, and desired no more; for by a mathematical Computation, that twice round the Thumb is once round the Wrist, and so on to the Neck and Waist, and by the help of my old Shirt, which I displayed on the Ground before them for a Pattern, they fitted me exactly." 126
  • Taleb says that Wall Street risk models, no matter how mathematically sophisticated, are bogus; indeed, he is the leader of the camp that believes that risk models have done far more harm than good. NY Times Magazine, 4 January 2009, "Risk Mismanagement" 127
  • And the essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can't see and therefore can never measure. The ones that seem so far outside the boundary of normal probability that you can't imagine they could happen in your lifetime — even though, of course, they do happen, more often than you care to realize. NY Times Magazine, 4 January 2009, "Risk Mismanagement" 128
  • "The Scandal of Prediction"
• "What is surprising is not the magnitude of our forecast errors," observes Mr. Taleb, "but our absence of awareness of it."
• We tend to fail--miserably--at predicting the future, but such failure is little noted nor long remembered. It seems to be of remarkably little professional consequence. 129
  • Black Swans
• "Black swans" are highly consequential but unlikely events that are easily explainable – but only in retrospect.
• Black swans have shaped the history of technology, science, business and culture.
• As the world gets more connected, black swans are becoming more consequential.
• The human mind is subject to numerous blind spots, illusions and biases.
• One of the most pernicious biases is misusing standard statistical tools, such as the "bell curve," that ignore black swans.
• Other statistical tools, such as the "power-law distribution," are far better at modeling many important phenomena.
• Expert advice is often useless.
• Most forecasting is pseudoscience.
• You can retrain yourself to overcome your cognitive biases and to appreciate randomness, but it's not easy.
• You can hedge against negative black swans while benefiting from positive ones. 130
  • Nassim Taleb
• Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the "logic of science"; it is the instrument of risk-taking; it is the applied tools of epistemology; you can't be a modern intellectual and not think probabilistically — but... let's not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let's face it: use of probabilistic methods for the estimation of risks did just blow up the banking system). 131
  • 132
  • Randomness
• One can study randomness at three levels: mathematical, empirical, and behavioral.
• Mathematical: The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem, because we've pretty much reached the point of small returns in what we can develop in that branch.
• Empirical: The second is the dynamics of the real world, the dynamics of history: what we can and cannot model, how we can get into the guts of the mechanics of historical events, whether quantitative models can help us and how they can hurt us.
• Behavioral: The third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not see, even if our logic recommends otherwise.
• We tend to overestimate causal relationships – when we meet someone who by playing Russian roulette became extremely influential, wealthy, and powerful, we still act toward that person as if he gained that status just by skills, even when we know there's been a lot of luck. Why?
• Because our behavior toward that person is going to be entirely determined by shallow heuristics and very superficial matters related to his appearance. 133
  • Observation bias
• It's what we call in some circles the observation bias, or the related data mining problem. When you look at anything, say the stock market, you see the survivors, the winners; you don't see the losers, because you don't observe the cemetery, and you will be likely to misattribute the causes that led to the winning.
— Nassim Taleb 134
  • Risk Bearing
Nassim Taleb: We have vital research in risk-bearing. The availability heuristic tells you that your perception of a risk is going to be proportional to how salient the event comes to your mind.
It can come in two ways: either because it compresses a vivid image, or because it's going to elicit an emotional reaction in you.
The latter is called the affect heuristic, recently developed as the "risk as feelings" theory.
We observe it in trading all the time. Basically you only worry about what you know, and typically once you know about something the damage is done. 135
  • Goldman Sachs' chief financial officer, David Viniar
• "The question is: how extreme is extreme?" Viniar said. "Things that we would have thought were so extreme have happened. We used to say, what will happen if every equity market in the world goes down by 30 percent at the same time? We used to think of that as an extreme event — except that now it has happened. Nothing ever happens until it happens for the first time."
• NY Times Magazine, 4 January 2009, "Risk Mismanagement." 136
  • Risk as feelings.• Virtually all current theories of choice under risk or uncertainty are cognitive and consequentialist. They assume that people assess the desirability and likelihood of possible outcomes of choice alternatives and integrate this information through some type of expectation-based calculus to arrive at a decision.• The authors propose an alternative theoretical perspective, the risk-as-feelings hypothesis, that highlights the role of affect experienced at the moment of decision making. Drawing on research from clinical, physiological, and other subfields of psychology, they show that emotional reactions to risky situations often diverge from cognitive assessments of those risks.• When such divergence occurs, emotional reactions often drive behavior. The risk-as-feelings hypothesis is shown to explain a wide range of phenomena that have resisted interpretation in cognitive-consequentialist terms.• Loewenstein GF, Weber EU, Hsee CK, Welch N. Department of Social and Decision Sciences, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-3890, USA. gl20@andrew.cmu.edu 137
  • Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality
Keywords: rationality • risk analysis • risk perception • the affect heuristic
Modern theories in cognitive psychology and neuroscience indicate that there are two fundamental ways in which human beings comprehend risk. The "analytic system" uses algorithms and normative rules, such as probability calculus, formal logic, and risk assessment. It is relatively slow, effortful, and requires conscious control. The "experiential system" is intuitive, fast, mostly automatic, and not very accessible to conscious awareness.
• The experiential system enabled human beings to survive during their long period of evolution and remains today the most natural and most common way to respond to risk. It relies on images and associations, linked by experience to emotion and affect (a feeling that something is good or bad). This system represents risk as a feeling that tells us whether it is safe to walk down this dark street or drink this strange-smelling water.
• Proponents of formal risk analysis tend to view affective responses to risk as irrational.
• Current wisdom disputes this view. The rational and the experiential systems operate in parallel and each seems to depend on the other for guidance. Studies have demonstrated that analytic reasoning cannot be effective unless it is guided by emotion and affect.
• Rational decision making requires proper integration of both modes of thought. Both systems have their advantages, biases, and limitations. Now that we are beginning to understand the complex interplay between emotion and reason that is essential to rational behavior, the challenge before us is to think creatively about what this means for managing risk.
• On the one hand, how do we apply reason to temper the strong emotions engendered by some risk events? 
On the other hand, how do we infuse needed "doses of feeling" into circumstances where lack of experience may otherwise leave us too "coldly rational"? This article addresses these important questions.
• DOI: 10.1111/j.0272-4332.2004.00433.x
• Paul Slovic, Melissa L. Finucane, Ellen Peters, and Donald G. MacGregor. Address correspondence to Paul Slovic, Decision Research, 1201 Oak Street, Eugene, OR 97401. 138
  • Affect Heuristic
• "Affect", in this context, is simply a feeling—fear, pleasure, humorousness, etc. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. Reading the words "lung cancer" usually generates an affect of dread, while reading the words "mother's love" usually generates an affect of affection and comfort. For the purposes of the psychological heuristic, affect is often judged on a simple diametric scale of "good" or "bad".
• The theory of the affect heuristic is that a human being's affect can influence their decision-making. The affect heuristic got recent attention when it was used to explain the unexpected negative correlation between benefit and risk perception. Finucane and others theorized in 2000 that a good feeling towards a situation (i.e., positive affect) would lead to a lower risk perception and a higher benefit perception, even when this is logically not warranted for that situation. This implies that a strong emotional response to a word or other stimulus might alter a person's judgment. He or she might make different decisions based on the same set of facts and might thus make an illogical decision.
• For example, in a blind taste test, a man might like Mirelli Beer better than Saddle Sweat Beer; however, if he has a strong gender identification, an advertisement touting Saddle Sweat as "a real man's brew" might cause him to prefer Saddle Sweat. Positive affect related to gender pride biases his decision sufficiently to overcome his cognitive judgment. 139
  • Black Swan Events
• "Much of what happens in history comes from Black Swan dynamics, very large, sudden, and totally unpredictable outliers, while much of what we usually talk about is almost pure noise.
• Our track record in predicting those events is dismal; yet by some mechanism called the hindsight bias we think that we understand them. We have a bad habit of finding laws in history (by fitting stories to events and detecting false patterns); we are drivers looking through the rear view mirror while convinced we are looking ahead." 140
  • Robert Shiller• Measuring risks, especially important long-term ones, is imprecise and difficult. Virtually none of the economic statistics reported in the media measure risk.• To fully comprehend risk, we must stretch our imagination to think of all the different ways that things can go wrong, including things that have not happened in recent memory.• We must protect ourselves against fallacies, such as thinking that just because a risk has not proved damaging for decades, it no longer exists. 141
  • Robert Shiller• Yet another psychological barrier is a sort of ego involvement in our own success.• Our tendency to take full credit for our successes discourages us from facing up to the possibility of loss or failure, because considering such prospects calls into question our self-satisfaction.• Indeed, self-esteem is one of the most powerful human needs: a view of our own success relative to others provides us with a sense of meaning and well-being. 142
  • Robert Shiller• So accepting the essential randomness of life is terribly difficult, and contradicts our deep psychological need for order and accountability.• We often do not protect the things that we have - such as our opportunities to earn income and accumulate wealth - because we mistakenly believe that our own natural superiority will do that for us. 143
  • Selection Bias
• This bias makes us miscompute the odds and wrongly ascribe skills. If you funded 1,000,000 unemployed people endowed with no more than the ability to say "buy" or "sell", odds are that you will break even in the aggregate, minus transaction costs, but a few will hit the jackpot, simply because the base cohort is very large. It will be almost impossible not to have small Warren Buffetts by luck alone. After the fact they will be very visible and will offer precise and plausible-sounding explanations about why they made it. It is difficult to argue with them; "nothing succeeds like success". These retrospective explanations are pervasive, but there are scientific methods to correct for the bias. This has not filtered through to the business world or the news media; researchers have evidence that professional fund managers are just no better than random and cost society money (the total of these transaction costs is in the hundreds of billions of dollars), but the public will remain convinced that "some" of these investors have skills. 144
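The thought experiment above is easy to simulate. The sketch below uses illustrative parameters of my own choosing (a 10-year horizon and a 50/50 chance of "beating the market" each year) to show that a million pure-luck managers reliably produce on the order of a thousand "stars" with perfect records:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

N_MANAGERS = 1_000_000
N_YEARS = 10  # each year a manager beats the market with probability 0.5

# Count pure-luck managers who compile a perfect 10-year record.
survivors = sum(
    1 for _ in range(N_MANAGERS)
    if all(random.random() < 0.5 for _ in range(N_YEARS))
)
print(f"Expected by chance: {N_MANAGERS * 0.5**N_YEARS:.0f}")
print(f"Observed 'star' managers: {survivors}")
```

Roughly a thousand managers end up looking like geniuses purely by luck; after the fact, each will have a compelling story.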
  • False Discovery Rate
• July 13, 2008
• STRATEGIES
• The Prescient Are Few
• By MARK HULBERT
• HOW many mutual fund managers can consistently pick stocks that outperform the broad stock market averages — as opposed to just being lucky now and then?
• Countless studies have addressed this question, and have concluded that very few managers have the ability to beat the market over the long term. Nevertheless, researchers have been unable to agree on how small that minority really is, and on whether it makes sense for investors to try to beat the market by buying shares of actively managed mutual funds.
• A new study builds on this research by applying a sensitive statistical test borrowed from outside the investment world. It comes to a rather sad conclusion: There was once a small number of fund managers with genuine market-beating abilities, as judged by having past performance so good that their records could not be attributed to luck alone. But virtually none remain today. Index funds are the only rational alternative for almost all mutual fund investors, according to the study's findings.
• The study, "False Discoveries in Mutual Fund Performance: Measuring Luck in Estimating Alphas," has been circulating for over a year in academic circles. Its authors are Laurent Barras, a visiting researcher at Imperial College's Tanaka Business School in London; Olivier Scaillet, a professor of financial econometrics at the University of Geneva and the Swiss Finance Institute; and Russ Wermers, a finance professor at the University of Maryland.
• The statistical test featured in the study is known as the "False Discovery Rate," and is used in fields as diverse as computational biology and astronomy. 
In effect, the method is designed to simultaneously avoid false positives and false negatives — in other words, conclusions that something is statistically significant when it is entirely random, and the reverse.
• Both of those problems have plagued previous studies of mutual funds, Professor Wermers said. The researchers applied the method to a database of actively managed domestic equity mutual funds from the beginning of 1975 through 2006. To ensure that their results were not biased by excluding funds that have gone out of business over the years, they included both active and defunct funds. They excluded any fund with less than five years of performance history. All told, the database contained almost 2,100 funds.
• The researchers found a marked decline over the last two decades in the number of fund managers able to pass the False Discovery Rate test. If they had focused only on managers running funds in 1990 and their records through that year, for example, the researchers would have concluded that 14.4 percent of managers had genuine stock-picking ability. But when analyzing their entire fund sample, with records through 2006, this proportion was just 0.6 percent — statistically indistinguishable from zero, according to the researchers.
• This doesn't mean that no mutual funds have beaten the market in recent years, Professor Wermers said. Some have done so repeatedly over periods as short as a year or two. But, he added, "the number of funds that have beaten the market over their entire histories is so small that the False Discovery Rate test can't eliminate the possibility that the few that did were merely false positives" — just lucky, in other words.
• Professor Wermers says he was surprised by how rare stock-picking skill has become. He had "generally been positive about the existence of fund manager ability," he said, but these new results have been a "real shocker."
• WHY the decline? Professor Wermers says he and his co-authors suspect various causes. 
One is high fees and expenses. The researchers' tests found that, on a pre-expense basis, 9.6 percent of mutual fund managers in 2006 showed genuine market-beating ability — far higher than the 0.6 percent after expenses were taken into account. This suggests that one in 10 managers may still have market-beating ability. It's just that they can't come out ahead after all their funds' fees and expenses are paid.
• Another possible factor is that many skilled managers have gone to the hedge fund world. Yet a third potential reason is that the market has become more efficient, so it's harder to identify undervalued or overvalued stocks. Whatever the causes, the investment implications of the study are the same: buy and hold an index fund benchmarked to the broad stock market.
• Professor Wermers says his advice has evolved significantly as a result of this study. Until now, he says, he wouldn't have tried to discourage a sophisticated investor from trying to pick a mutual fund that would outperform the market. Now, he says, "it seems almost hopeless."
• Mark Hulbert is editor of The Hulbert Financial Digest, a service of MarketWatch. E-mail: strategy@nytimes.com.
• Copyright 2008 The New York Times Company 145
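The article does not spell out the study's algorithm, but the generic Benjamini-Hochberg step-up procedure conveys the idea behind a False Discovery Rate test: it decides how many of the smallest p-values can be declared discoveries while controlling the expected share of false positives. The p-values below are invented for illustration:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses judged discoveries at FDR level q,
    using the Benjamini-Hochberg step-up procedure: find the largest
    rank k with p_(k) <= k*q/m, then accept the k smallest p-values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k_max = rank
    return sorted(order[:k_max])

# Hypothetical p-values for five fund managers' estimated alphas.
pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

Note that managers 2 and 3 are "significant" at the naive 0.05 level but do not survive the FDR correction: with many funds tested at once, some small p-values are expected by luck alone.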
  • Long Term Capital Management
http://www.dailymotion.com/video/x4seov_the-trillion-dollar-bet_news 146
  • Taxonomy-Based Risk Identification
M. Carr, S. Kondra, I. Monarch, F. Ulrich, C. Walker
• Technical Report CMU/SEI-93-TR-006
• This report describes a method for facilitating the systematic and repeatable identification of risks associated with the development of a software-dependent project. This method, derived from published literature and previous experience in developing software, was tested in active government-funded defense and civilian software development projects for both its usefulness and for improving the method itself. Results of the field tests encouraged the claim that the described method is useful, usable, and efficient. The report concludes with some macro-level lessons learned from the field tests and a brief overview of future work in establishing risk management on a firm footing in software development projects.
• © 2008 Carnegie Mellon University
• http://www.sei.cmu.edu/publications/documents/93.reports/93.tr.006.html
• Terms of Use 147
  • The Concept of Risk
Myth 1: Risk can be quantified as a single number given by the product of the probability of occurrence of a mishap and the severity of the potential outcome, R = P*C.
[Slide navigation: Introduction | Myth 1 | Myth 2 | Myth 3 | Way Ahead | References] 148
  • Issues with R = P*C
• Principally of actuarial value, i.e., insurance companies, regulatory agencies
• Limited insight into rare events and R&D projects
• Often misapplied to situations where "average" is not a good decision criterion: DoD acquisition; R&D programs/projects; extreme (low probability/high consequence) events – industrial accidents, terrorism threats, acts of nature; hard personal decisions
• Does not capture rational psychological influences of risk
• Inconsistent with "rational" human behavior 149
  • Reality  Risk is too complex a concept to be expressed by a single number or scalar  Critical information about extreme events is lost  Rationale people often act on the basis that risk depends more on the magnitude rather than the probability of an undesirable event  High probability/low consequence events ≠ Low probability/High consequence events Prospect theory, Kahneman and Tversky“One of the most dominant steps in the risk assessment process is thequantification of risk, yet the validity of the approach most commonly used toquantify risk –its expected value- has received neither the broad professionalscrutiny that it deserves nor the hoped-for wider mathematical challenges that itmandates.” Y.Y. Haimes [2004: 309] Gonick [1996]Introduction Myth 1 Myth 2 Myth 3 Way Ahead References150
  • Why a risk profile?
[Slide diagram: local trade boats in a 2-mile-wide channel]
• Consider the risk of a terrorist attack on a navy destroyer in a restricted waterway. Risk depends on many factors/variables:
– Probability that a specific type of attack occurs – THREAT
– Probability that the attack results in damage – VULNERABILITY
– Magnitudes and types of damage – CONSEQUENCES
• Consequences can range from negligible to catastrophic depending on:
– Type of attack
– State of readiness and capability of the ship protection system
– How promptly it is detected and the distance at which it is stopped
• Risk profiles provide a spectrum of possible consequences – deaths, injuries, damage – with each magnitude having its own probability of occurrence:
– x-axis: severity or consequence
– y-axis: probability of risk consequence > x-value 151
  • Generating a risk profile
• Analysis tools:
– Databases of past occurrences (frequency, damage, casualties…)
– Modeling & simulation
– Expert opinion
• Risk profile: an exceedance-probability curve with increasing uncertainty and a long tail extending toward severe consequences – characteristic of natural hazards, industrial and personal accidents, and terrorist acts. Requires a systematic approach.
• Attack-on-destroyer scenario
[Slide chart: exceedance-probability risk profile with damage (M$) on the x-axis, plus event-tree branch probabilities and Weibull damage distributions, e.g. Weib(0,10,20), Weib(5,15,40), Weib(40,120,300)] 152
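The exceedance-probability profile described above can be sketched directly from sampled outcomes. The damage figures below are hypothetical stand-ins, not the slide's actual scenario data:

```python
# Build an exceedance-probability risk profile from sampled damage
# outcomes (e.g., from simulation runs or incident databases).

def exceedance_profile(damages, thresholds):
    """Estimate P(Damage > x) from a sample, for each threshold x."""
    n = len(damages)
    return [(x, sum(d > x for d in damages) / n) for x in thresholds]

# Hypothetical damage outcomes from 10 scenario runs, in M$.
damages = [0, 0, 0, 5, 10, 10, 20, 40, 80, 140]
for x, p in exceedance_profile(damages, [0, 20, 40, 80, 120]):
    print(f"P(damage > {x:>3} M$) = {p:.2f}")
```

Plotting these (x, p) pairs gives the monotonically decreasing curve the slide describes; the long tail shows up as probability mass that persists at high damage values.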
  • Comparing risk profiles – simple case
• Options 1 & 2: identical benefits; risk profiles RP1 and RP2. Which would you select?
• RP1 dominates RP2. It represents the greater risk:
– RP1 has a greater probability of exceeding any given damage X
– RP1 has a higher expected loss and standard deviation
• Unfortunately, such simple situations are very rare!
[Slide charts: exceedance probability vs. damage (K$) for RP1 and RP2; bar chart of mean and standard deviation by risk profile ID] 153
  • Comparing risk profiles – typical case
• Options 1 & 3: identical benefits; risk profiles RP1 and RP3. Which would you select?
• RP1 does not dominate RP3:
– RP1 has a greater exceedance probability for small values of X
– RP3 has a greater exceedance probability for large values of X
– RP1 has a larger expected loss
– RP3 has a larger standard deviation
• Many "rational" decision makers will choose RP1 over RP3. It requires probabilistic thinking!
• Basis for Modern Portfolio Theory and Prospect Theory
[Slide charts: exceedance probability vs. damage (K$) for RP1 and RP3; bar chart of mean and standard deviation by risk profile ID] 154
  • Classical Safety Risk Assessment Matrix (CSRAM)
CSRAM adapted from Mil-Std-882D.
• Any mishap Ei is represented as a point: Ei = (Severity, Frequency)
• Risk may be expressed qualitatively. Example: E1 = IIB = (Critical, Probable), Risk = High
• Risk may be assessed quantitatively: R = S*F (or P*C). Example: E1 = ($500K, 0.05), Risk = $500,000 * 0.05 = $25,000
• Risk matrices must be tailored to different domains 155
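The slide's quantitative example can be computed directly. The one-cell qualitative lookup below is a sketch of the idea only, not the full Mil-Std-882D matrix:

```python
# Quantitative reading of a risk-matrix cell, per the slide's R = S*F.

def risk(severity_usd, probability):
    """Scalar risk: severity (dollars) times frequency/probability."""
    return severity_usd * probability

# Slide example: E1 = ($500K, 0.05)
e1 = risk(500_000, 0.05)
print(f"E1 risk = ${e1:,.0f}")  # E1 risk = $25,000

# Qualitative reading of the same mishap; only the single cell the
# slide names is included here, as an illustration of the lookup.
QUALITATIVE = {("Critical", "Probable"): "High"}
print(QUALITATIVE[("Critical", "Probable")])  # High
```

The two readings of the same cell show the trade-off the later slides discuss: the scalar is easy to compare but discards the severity/frequency split that the qualitative cell preserves.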
  • Classical Safety Risk Assessment Matrix (CSRAM) – some issues
• The assessment of "best point-estimate" is ambiguous:
1. "Measure of the most reasonable mishap…" [Mil-Std-882D, 2000]
2. "Worst credible consequence of a hazard…" [Arnason, 2002]
3. "…'worst credible' is a reasonable worst case, not the worst conceivable case." [Kaiser Permanente, 2002]
4. "…expected value of the potential outcomes…" [McCormick, 1981]
• Statements 1-3 are vague and at best subjective; they may lead to invalid assessments and flawed decisions
• "Expected value" provides a mathematically rigorous and practical definition for the point estimate, BUT:
– It characterizes risk in terms of a single number
– Critical information about extreme events is lost
– It is difficult to evaluate using engineering judgment
– Insufficient information for selecting robust solutions 156
  • Quantitative Probabilistic Risk Assessment Matrix (QPRAM). Risk assessment needs a simple and reliable approach: full risk profiles may be overwhelming to a lay decision-maker, while the classical risk matrix provides insufficient information for a sound decision. QPRAM is proposed as an improved approach: an extension of CSRAM that provides quantitative probabilistic information about a mishap's severity and probability. It retains the simplicity of a two-dimensional plot while conveying critical information about low-probability/high-severity events (three percentiles, consistent with the Direct Fractile Assessment (DFA) method), serves as a simple graphical tool for communicating key risk analysis results, and requires only a small effort differential relative to CSRAM. [E. Kujawski and G. Miller, 2006] 157
  • ASSESSING THE EXTREMES OF PROBABILITY DISTRIBUTIONS BY THE FRACTILE METHOD• Keywords: Probability Assessment; Decision Processes• When people are asked to express uncertainty in the form of probability distributions (by assessing several fractiles of the cumulative distributions), experiments have shown that most subjects exhibit systematic biases in describing the extremes or tails of the distributions.• This paper discusses methods designed to refine the assessments of such extremes of probability distributions.• The techniques were tested on large samples of assessors; the effectiveness of different methods is reported. The methods include asking for the assessment of additional fractiles in the extremes, asking for different fractiles in the extremes (.10 and .90 instead of .01 and .99 fractiles), making the assessment a two-step process by separating the questions about central fractiles from those requesting extreme values, and varying the order in which different fractile values are requested.• All these methods except the two-step process resulted in improved estimation of the extreme values of the distributions when compared to some of the early work in this field, where five fractile assessments were made.• J. E. Selvidge, The BDM Corporation. This research was supported in part by the Division of Research, Graduate School of Business Administration, Harvard University. Received: September 1, 1975. Accepted: March 2, 1980.• DOI: 10.1111/j.1540-5915.1980.tb01154.x 158
  • "Analyse this!" Reactions, April 2008, p39. CAT BONDS sponsored article. www.reactionsnet.com/digital• Richard Clinton, president of risk management consultants EQECAT, explains how technology is evolving to help insurers, reinsurers and risk managers quantify and transfer a wider range of risks.• What kind of technology is available to model natural and man-made hazards? The technology exists today to credibly quantify natural hazard risks, such as earthquake, hurricanes, winter storms and floods, as well as man-made risks such as explosions due either to industrial accidents or terrorist acts. These risks can be quantified on a site-specific level, sometimes with the help of engineering site evaluations. All sites can be combined into a portfolio of risks to quantify the total exposure in order to evaluate the various risk transfer strategies now available to risk managers.• Is it possible to model such risks in all areas of concentrated economic activity, as well as North America, Europe and Japan? With the growth of globalization, modellers have been pushed to expand and refine model capabilities to keep up with increasingly geographically diverse supply chains. Critical manufacturing and distribution facilities are frequently located in high hazard regions. At EQECAT, we've invested substantially to update and improve models for Central and South America, Eastern Europe, India, Taiwan, mainland China and other Pacific Rim areas, for example.• Modelling risks associated with natural hazards used to be the preserve of the insurance industry. Are big corporations also developing their own modelling capabilities? We serve the growing corporate market by providing global risk analyses and by enabling them to evaluate a variety of mitigation strategies ranging from engineered risk reduction initiatives to alternative risk transfer into the capital markets. We've also noted an increasing demand for modelling in certain financial sectors. For example, mortgage lending institutions are increasingly concerned about measuring the risk of mortgage defaults due to damage from natural hazard events as part of their underwriting and portfolio management practices. We have a web-based risk analysis service called Property RiskTM designed specifically to serve this growing market.• Can computer models help insurers and reinsurers better understand their business interruption exposures? BI and CBI are the most challenging aspect of CAT modeling. This is because the drivers of risk are more complex and have greater uncertainty. The difficulty of pricing the risk has led to a chronic shortage of capacity for coverage. We have leveraged our first-hand engineering familiarity with the operational environment for each occupancy class. Getting reliable BI numbers requires the insight of people who understand specific machinery, equipment and workflows, and what the critical drivers of risk are and their vulnerability to various hazards. Therefore, when this is a critical risk or there are unique interdependency risks, we strongly recommend that companies conduct a detailed engineering analysis in conjunction with 159
  • Behavioral economics topics• Models in behavioral economics are typically addressed to a particular observed market anomaly and modify standard neo-classical models by describing decision makers as using heuristics and being affected by framing effects. In general, economics sits within the neoclassical framework, though the standard assumption of rational behavior is often challenged.• Heuristics: Prospect theory - Loss aversion - Status quo bias - Gambler's fallacy - Self-serving bias - Money illusion• Framing: Cognitive framing - Mental accounting - Anchoring• Anomalies (economic behavior): Disposition effect - Endowment effect - Inequity aversion - Reciprocity - Intertemporal consumption - Present-biased preferences - Momentum investing - Greed and fear - Herd instinct - Sunk cost fallacy• Anomalies (market prices and returns): Equity premium puzzle - Efficiency wage hypothesis - Price stickiness - Limits to arbitrage - Dividend puzzle - Fat tails - Calendar effect 160
  • Tunneling• The mind uses many more simplifying schemas that can lead to error. Once people have theories, they seek confirming evidence; this is called "confirmation bias." They fall victim to "epistemic arrogance," becoming overconfident about their ideas and failing to account for randomness. To make their theories work, people "smooth out" the "jumps" in a time series or historical sequence, looking for and finding patterns that are not there.• Their conceptual categories will limit what they see; this is called "tunneling." They turn to "experts" for help, but often these expert opinions are no better – and often they are worse – than the "insights" gained from flipping a coin or hiring a trained chimp to throw darts at the stock listings. Worst of all, people steadily fail to consider "black swans," the highly consequential rare events that drive history. 161
  • "Mediocristan" or "Extremistan"?• "So the human mind tends to smooth away the rough features of reality. Does this matter? It can matter, and a lot, depending on whether you're in 'Mediocristan' or 'Extremistan.'"• Where are these strange places? Nowhere. They are actually memorable metaphors for remembering two wildly different classes of natural phenomena.• Mediocristan refers to phenomena you could describe with standard statistical concepts, like the Gaussian distribution, known as the "bell curve."• Extremistan refers to phenomena where a single, curve-distorting event or person can radically skew the distribution. Imagine citing Bill Gates in a comparison of executive incomes. 162
  • Extremistan• Take income. If 100 people report their income, and 99 of them make between $20,000 and $80,000 a year while the 100th makes $90 million, that one person WILL dramatically change the overall picture, seriously inflating the mean.• If, say, the first 50 people make $20,000 and the remaining 50 make $80,000, then the "average" person makes $50,000.• If that last person makes $90,000,000 instead of $80,000, then the "average" person makes $949,200! Thus, in Extremistan, one single, improbable event can drastically alter the entire picture. 163
  • Median and the Mean• That is why people use the median, not the mean, when discussing income. The mean is dragged up by a large outlier; the median is not. Nobody in their right mind would use the mean to describe average income. Taleb ignores such elementary points in his tirade, somewhat weakening his overall point. He could reply that he is discussing prediction, however, and that substituting the median does not change his central claim: prediction in Extremistan is still unjustified. 164
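The income arithmetic on the previous slides is easy to reproduce with the standard library, and it shows both points at once: the mean jumps by a factor of nearly twenty while the median does not move.

```python
import statistics

# The slide's income example: 50 people earning $20,000, 50 earning $80,000.
incomes = [20_000] * 50 + [80_000] * 50
print(statistics.mean(incomes))    # 50000 -- the "average" person
print(statistics.median(incomes))  # 50000

# Replace one $80,000 earner with a $90,000,000 outlier (Extremistan).
incomes[-1] = 90_000_000
print(statistics.mean(incomes))    # 949200 -- dragged up by one observation
print(statistics.median(incomes))  # 50000  -- unchanged
```

This is the practical difference between the two summary statistics: the mean is sensitive to a single curve-distorting observation, the median is not.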
  • In Nature's Casino, by MICHAEL LEWIS, NY Times Magazine, Published: August 26, 2007• To better judge the potential cost of catastrophe, Clark gathered very long-term historical data on hurricanes. "There was all this data that wasn't being used," she says. "You could take it, and take all the science that also wasn't being used, and you could package it in a model that could spit out numbers companies could use to make decisions. It just seemed like such an obvious thing to do."• She combined the long-term hurricane record with new data on property exposure -- building-replacement costs by ZIP code, engineering reports, local building codes, etc. -- and wound up with a crude but powerful tool, both for judging the probability of a catastrophe striking any one area and for predicting the losses it might inflict. 165
  • In Nature's Casino, by MICHAEL LEWIS, NY Times Magazine, Published: August 26, 2007• Clark wrote a paper with the unpromising title "A Formal Approach to Catastrophe Risk Assessment and Management." In it, she made the simple point that insurance companies had no idea how much money they might lose in a single storm… 166
  • In Nature's Casino, by MICHAEL LEWIS, NY Times Magazine, August 26, 2007• "…in 1842, when the city of Hamburg burned to the ground and bankrupted the entire German insurance industry many times over. Out of the ashes was born a new industry, called reinsurance. The point of reinsurance was to take on the risk that the insurance industry couldn't dilute through diversification -- say, the risk of an entire city burning to the ground or being wiped off the map by a storm.• The old insurance companies would still sell policies to the individual residents of Hamburg. But they would turn around and provide some of the premiums they collected to Cologne Re (short for reinsurance) in exchange for taking on losses over a certain amount.• Cologne Re would protect itself by diversifying at a higher level -- by selling catastrophic fire insurance to lots of other towns." 167
  • "They (the insurance companies) were comfortable with their own subjective judgment. Of course they were; they had made pots of money the past 20 years insuring against catastrophic storms. But -- and this was her real point -- there hadn't been any catastrophic storms! The insurers hadn't been smart. They had been lucky." In Nature's Casino, by MICHAEL LEWIS, NY Times Magazine, Published: August 26, 2007 168
  • "Ludic fallacy"• Now consider an alternative hypothesis: He got lucky. His putative "virtues" had nothing to do with his success. He is, essentially, a lottery winner. The public looks at his life and concocts a story about how brilliant he was, when, in fact, he was merely at the right place at the right time. This is the "ludic fallacy" (ludus means game in Latin).• People underestimate luck in life, though they ironically overestimate it in certain games of chance. Even the businessman himself falls victim to flawed thinking through the self-sampling bias. He looks at himself, a sample of one, and draws a sweeping conclusion, such as, "If I can do it, anyone can!" Notice that the same reasoning would apply had he merely bought a winning lottery ticket. "I'm a genius for picking 3293927! Those long odds didn't mean a darn thing. I mean, after all, I won, didn't I?" 169
  • Predictive Tools• "It is tough to make predictions, especially about the future." - Yogi Berra, Baseball Savant• "Prediction is very difficult, especially about the future." - Niels Bohr, Physicist (1885-1962) 170
  • Catastrophic Events: TAIL RISK• [Chart: exceedance probability (0 to 0.4) vs. damage (0 to 140 M$), showing a long tail extending toward severe consequences, characteristic of natural hazards, industrial and personal accidents, and terrorist acts.]• Perils modeled: CYCLONE, EARTHQUAKE, EQ-WORKERS COMP, FLOOD, HURRICANE, TERRORISM, TORNADO/HAIL, TYPHOON, WILDFIRE, WINDSTORM, WINTERSTORM 171
  • Flood Hazard• Estimation of Extreme Floods and Droughts• Floods and droughts are extreme events with great consequences for society. A wide variety of statistical techniques have been applied to the evaluation of the flood hazard. To estimate the severity of future floods and to allocate resources for their mitigation, it is necessary to make flood-frequency assessments. Historical records are used to provide such estimates. The required data include the amount of rainfall produced by the storm(s) in question, the upstream drainage area, and the topography, soil type, and vegetation in the drainage area. Unfortunately, two drawbacks are encountered when making quantitative estimates: the relatively short time span over which historical data are available, and the lack of any generally accepted basis for extrapolating them. An independent approach to reservoir storage is thus required to estimate the uncertain occurrences of floods. Rescaled-range analysis and fractional Brownian walks were proposed, and these studies introduced the possibility that the extremes of floods and droughts could be fractal. In fact, an extensive study of flood gauge records at more than 1000 water dams and reservoirs indicates a good correlation with fractal statistics.• The volumetric flow of the river is assumed to be a continuous function of time and is therefore treated as a Fourier time series. The characteristics of the series can be studied by determining its coefficients, which are associated with a normalized cumulative probability distribution function. By modeling this relation mathematically, one can define either a Brownian walk or a fractional Brownian walk. This technique, so-called power-law statistics, is expected to lead to a far more conservative estimate of future hazards.• If this technique were carried out in large-scale projects, loss of lives and property due to natural disasters might eventually be much reduced, although the reliability of such estimates is not firmly guaranteed. 173
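The rescaled-range (R/S) analysis mentioned above can be sketched in a few lines. This is a minimal illustration, not a calibrated flood-frequency tool: the "flow" series here is synthetic white noise, for which the Hurst exponent should come out near 0.5, whereas Hurst's original reservoir studies found persistent river records with H around 0.7.

```python
import math
import random

def rescaled_range(series):
    """R/S statistic for one window: range of the mean-adjusted
    cumulative sum, divided by the window's standard deviation."""
    n = len(series)
    m = sum(series) / n
    devs = [x - m for x in series]
    z, cum = 0.0, []
    for d in devs:          # cumulative departures from the mean
        z += d
        cum.append(z)
    r = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s if s > 0 else 0.0

def hurst_exponent(series, window_sizes=(16, 32, 64, 128, 256)):
    """Estimate H from the slope of log(R/S) versus log(window size)."""
    xs, ys = [], []
    for w in window_sizes:
        chunks = [series[i:i + w] for i in range(0, len(series) - w + 1, w)]
        rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
        xs.append(math.log(w))
        ys.append(math.log(rs))
    n = len(xs)             # least-squares slope of the log-log points
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(42)
# Synthetic "river flow" increments: uncorrelated noise, so H should be ~0.5.
flow = [random.gauss(0, 1) for _ in range(4096)]
print(round(hurst_exponent(flow), 2))
```

An estimate well above 0.5 on a real gauge record would indicate the long-run persistence that makes purely Gaussian flood-frequency extrapolation too optimistic.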
  • Pandemic 175
  • Flu Pandemic mortality by age group. The difference between the influenza mortality age-distributions of the 1918 epidemic and normal epidemics: deaths per 100,000 persons in each age group, United States, for the interpandemic years 1911–1917 (dashed line) and the pandemic year 1918 (solid line). 176
  • Influenza• The global mortality rate from the 1918/1919 pandemic is not known, but is estimated at 2.5 to 5% of the human population, with 20% or more of the world population suffering from the disease to some extent. Influenza may have killed as many as 25 million in its first 25 weeks (in contrast, AIDS killed 25 million in its first 25 years). Older estimates say it killed 40–50 million people[9] while current estimates say 50 million to 100 million people worldwide were killed.[10] This pandemic has been described as "the greatest medical holocaust in history" and may have killed more people than the Black Death.[11]• An estimated 7 million died in India, about 2.78% of India's population at the time. In the Indian Army, almost 22% of troops who caught the disease died of it. In the U.S., about 28% of the population suffered, and 500,000 to 675,000 died. In Britain as many as 250,000 died; in France more than 400,000. In Canada approximately 50,000 died. Entire villages perished in Alaska and southern Africa. Ras Tafari (the future Haile Selassie) was one of the first Ethiopians who contracted influenza but survived,[12] although many of his subjects did not; estimates for the fatalities in the capital city, Addis Ababa, range from 5,000 to 10,000, with some experts opining that the number was even higher,[13] while in British Somaliland one official on the ground estimated that 7% of the native population died from influenza.[14] In Australia an estimated 12,000 people died; in the Fiji Islands, 14% of the population died during only two weeks, and in Western Samoa 22%.• This huge death toll was caused by an extremely high infection rate of up to 50% and the extreme severity of the symptoms, suspected to be caused by cytokine storms.[9] Indeed, symptoms in 1918 were so unusual that initially influenza was misdiagnosed as dengue, cholera, or typhoid. One observer wrote, "One of the most striking of the complications was hemorrhage from mucous membranes, especially from the nose, stomach, and intestine. Bleeding from the ears and petechial hemorrhages in the skin also occurred."[10] The majority of deaths were from bacterial pneumonia, a secondary infection caused by influenza, but the virus also killed people directly, causing massive hemorrhages and edema in the lung.[8]• The unusually severe disease killed between 2 and 20% of those infected, as opposed to the more usual flu epidemic mortality rate of 0.1%.[8][10] Another unusual feature of this pandemic was that it mostly killed young adults, with 99% of pandemic influenza deaths occurring in people under 65, and more than half in young adults 20 to 40 years old.[15] This is unusual, since influenza is normally most deadly to the very young (under age 2) and the very old (over age 70), and may have been due to partial protection caused by exposure to the previous Russian flu pandemic of 1889.[16] 177
  • MODELS: RMS® Pandemic Influenza Model: Business Continuity Solutions• In the event of an influenza pandemic, businesses will play a key role in protecting employee health and safety as well as limiting the negative impact to the economy, society, and their business.• Most large companies have teams, made up of senior facilities and HR personnel and disaster-planning specialists, responsible for creating business continuity plans.• While most companies have a plan in place should a pandemic occur, most don't have the resources to estimate the impact on their employees and their businesses.• Many companies are using a single event to plan for a pandemic; an RMS business continuity report can help companies prepare for multiple potential scenarios and make better decisions by incorporating a risk analysis into their plan.• RMS ANALYSIS HIGHLIGHTS• Allows companies to understand the operational impact of the full range of influenza pandemic scenarios: company-specific calculation of the number of days lost due to staff sickness and to staff caretaker duties, and company-specific casualty estimates.• Provides detailed analysis of representative scenario results: quantifies the likelihood of different pandemic severities as well as the range of outcomes; event timelines detail the week-by-week impact of an event and how different choices (such as alternative strategies for company preparedness at different pandemic status levels) can change it; incorporates company-specific planning scenarios if a specific scenario is already used in company pandemic preparedness planning.• Presents key metrics and information sources that drive key decisions; examines the sensitivity to various disease characteristics (location of outbreak, infectiousness and lethality of the virus), vaccine availability, and national counter-measures.• Incorporates industry-specific issues such as lack of hospital resources.• SOFTWARE HIGHLIGHTS• Geographic scope: Australia, Austria, Belgium, Brazil, Canada, China, France, Germany, Hong Kong, India, Indonesia, Ireland, Italy, Japan, Malaysia, Netherlands, Philippines, Poland, Russia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, Turkey, United Kingdom, United States, Vietnam.• Stochastic model: nearly 2,000 unique scenarios, varying by virus characteristics, location of outbreak, demographic impact, and pandemic lifecycle.• Factors in vaccine production and efficacy as well as various national response measures; analysis profiles to specify analysis details and settings; results viewing for probabilistic analysis outputs and ELTs.• http://www.rms.com/InfectiousDiseaseRisk/• ©2007 Risk Management Solutions, Inc. Confidential 178
  • Terrorist Models SARA 179
  • Terrorist Models• SARA – Scan – Analyze – Respond – Assess• SARA – Select Target – Analyze & Assess – Research – Attack 180
  • Terrorist Models 181
  • Catastrophe Modeling• Finally, most rating agencies consider terrorism models too new and untested to price a catastrophe bond.• Reinsurers view terrorism models as not very reliable in predicting the frequency of terrorist attacks under a wide range of scenarios.• Furthermore, one of the major rating firms noted that the estimates derived from models developed by AIR Worldwide, EQECAT and Risk Management Solutions could vary by 200% or more. 182
  • Information Awareness Office• Conspiracy theories• The IAO's originally adopted logo, particularly the eye and pyramid, has long been associated with the symbolism of the Illuminati in various conspiracy theories surrounding the alleged New World Order. Interestingly, the motto, scientia potentia est, was first stated by Sir Francis Bacon. Sir Francis Bacon's connection to various secret societies, particularly the Freemasons, has often been suggested and discussed[27]. Some suggestions even go as far as to say he was once a leader within the Freemasons.• Connections between the IAO and the popular social networking site Facebook have also been suggested as a hidden continuation of the program's intention to collect information on citizens. Theorists point out that the privacy policy of Facebook has raised many concerns about possible data-mining and that the relationships of persons funding Facebook show close ties to people previously involved in the IAO. A short flash video released onto the internet, "Does what happens in the Facebook stay in the Facebook," talks of these connections, calling for public awareness on the issue and discussing implications.• Theorists also point out that the initials I.A.O. are also used as a magical formula in the Hermetic Order of the Golden Dawn, a notable and influential member of which was Aleister Crowley. Aleister Crowley claimed himself to be a Freemason, although this has been disputed.[28]• See also• Combat Zones That See, or CTS, a project to link up all security cameras citywide and "track everything that moves"• Echelon, NSA worldwide digital interception program• Carnivore, FBI US digital interception program• Intellipedia, a collection of wikis used by the U.S. intelligence community to "connect the dots" between pieces of intelligence• Government Information Awareness, which "acts as a framework for US citizens to construct and analyze a comprehensive database on our government"• Institutional memory• LifeLog, "an ontology-based (sub)system that captures, stores, and makes accessible the flow of one person's experience in and interactions with the world in order to support a broad spectrum of associates/assistants and other system capabilities"• Magic Lantern (software), the FBI's keystroke logging tool• Mass surveillance• Multistate Anti-Terrorism Information Exchange• Regulation of Investigatory Powers Act, UK legal provision for digital interception• Synthetic Environment for Analysis and Simulations• References• "TIA Lives On", National Journal, 23 February 2006, retrieved 27 July 2007• "Chief Takes Over at Agency To Thwart Attacks on U.S.", The New York Times• "Overview of the Information Awareness Office"• "You Are a Suspect"• "Big Brother ..."• Search Results - THOMAS (Library of Congress)• http://www.information-retrieval.info/docs/tia-exec-summ_20may2003.pdf• Department of Defense Appropriations Act, 2004, Pub. L. No. 108-87, § 8131, 117 Stat. 1054, 1102 (2003)• 149 Cong. Rec. H8755–H8771 (Sept. 24, 2003)• "Wanted: Competent Big Brothers", Newsweek, 8 February 2006, retrieved 27 July 2007 (http://nationaljournal.com/about/njweekly/stories/2006/0223nj1.htm)• "The Total Information Awareness Project Lives On", Technology Review, 26 April 2006, retrieved 27 July 2007 183
  • Information Awareness Office• John Poindexter, Overview of the Information Awareness Office (remarks as prepared for delivery by Dr. John Poindexter, Director, Information Awareness Office, at the DARPATech 2002 Conference) (August 2, 2002).• Media coverage• Harris, Shane (February 26, 2006). "TIA Lives On", The National Journal.• "Pentagon Defends Surveillance Program", The Washington Post (May 20, 2003).• Webb, Cynthia L. (May 21, 2003). "The Pentagon's PR Play", The Washington Post.• Bray, Hiawatha (April 4, 2003). "Mining Data to Fight Terror Stirs Privacy Fears", The Boston Globe, pp. C2.• McCullagh, Declan (January 15, 2003). "Pentagon database plan hits snag on Hill", CNET News.com.• Markoff, John (February 13, 2002). "Chief Takes Over at Agency To Thwart Attacks on U.S.", The New York Times (first mainstream media mention of the IAO).• Academic articles• K. A. Taipale (2003). "Data Mining and Domestic Security: Connecting the Dots to Make Sense of Data". Columbia Sci. & Tech. Law Review 5 (2): 1–83 (TIA discussed 39–50).• Robert Popp and John Poindexter (November/December 2006). "Countering Terrorism through Information and Privacy Protection Technologies". IEEE Security & Privacy: 18–26.• Critical views (established sources)• "TIA: Total Information Awareness", American Civil Liberties Union (January 16, 2004).• Charles V. Peña (November 22, 2002). "TIA: Information Awareness Office Makes Us a Nation of Suspects", Cato Institute.• "Total/Terrorism Information Awareness (TIA): Is It Truly Dead? EFF: It's Too Early to Tell", Electronic Frontier Foundation.• "Total 'Terrorism' Information Awareness (TIA): Latest News", Electronic Privacy Information Center.• "A Times Editorial: Unfocused data-mining", St. Petersburg Times (January 24, 2003).• Critical views• Russ Kick. "Information Awareness Office Website Deletes Its Logo", The Memory Hole. 184
  • Information Awareness Office• Proponent views• Mac Donald, Heather (January 27, 2003). "Total Misrepresentation", The Weekly Standard.• Levin, Jonathan (February 13, 2003). "Total Preparedness: The case for the Defense Department's Total Information Awareness project", National Review.• Taylor, Jr., Stuart (December 10, 2002). "Big Brother and Another Overblown Privacy Scare", The Atlantic.• Accord: Shane Ham & Robert D. Atkinson (2002). "Using Technology to Detect and Prevent Terrorism", Progressive Policy Institute.• "Safeguarding Privacy in the Fight Against Terrorism", DOD Technology and Privacy Advisory Committee (TAPAC) (March 2004).• Also:• Ignatius, David (August 1, 2003). "Back in the Safe Zone", The Washington Post, pp. A19 (discussing opposition to the IAO FutureMap project).• Retrieved from http://en.wikipedia.org/wiki/Information_Awareness_Office; this page was last modified on 15 August 2008, at 20:55. 185
  • "It aint over till its over‖ Yogi Berra• ―But this long run is a misleading guide to current affairs. In the long run we are all dead.‖ John Maynard Keynes identified three domains of probability: Frequency probability; Subjective or Bayesian probability; and Events lying outside the possibility of any description in terms of probability (special causes) and based a probability theory thereon. 186
  • Analysis of DOD Major Defense Acquisition Program Portfolios (FY 2008 dollars)

                                                      FY-2000         FY-2005        FY-2007
    Number of programs                                     95              75             91
    Total planned commitments                    $790 Billion   $1.5 Trillion  $1.6 Trillion
    Commitments outstanding                      $380 Billion    $887 Billion   $858 Billion
    Portfolio performance:
      Change in RDT&E costs from first estimate    27 percent      33 percent     40 percent
      Change in acquisition cost from
        first estimate                              6 percent      18 percent     26 percent
      Estimated total acquisition cost growth     $42 Billion    $202 Billion   $295 Billion
      Share of programs with 25 percent or more
        increase in program acquisition unit cost  37 percent      44 percent     44 percent
      Average schedule delay in delivering
        initial capabilities                        16 months       17 months      21 months

  • Source: GAO analysis of DOD data. 187
  • 21• From 13:16 to 16:29• 21 on Blu-ray, High-Def & PSP July 22nd, 2008• Starring Jim Sturgess, Kate Bosworth, with Laurence Fishburne and Kevin Spacey• Inspired by the true story of MIT students who mastered the art of card counting and took Vegas casinos for millions in winnings.• "Kevin Spacey and a gifted young cast deal a winning hand." - Peter Travers, Rolling Stone• © 2008 Sony Pictures Digital Inc. All Rights Reserved. 188
  • • http://www.youtube.com/watch?v=5e_NKJD7msg 189
  • Variable Change• Newton's Method – Solving nonlinear equations – The iteration is commonly credited to Newton, though Joseph Raphson published it first (next slide).• Law of Variable Change (the Monty Hall problem in the movie 21) – Door a = 33%; doors b or c together = 66%. When the host, who knows where the prize is, opens losing door c, that 66% collapses onto door b, so switching wins with probability 2/3. – The 50-50 answer holds only in the variant where the host opens a door at random and it merely happens to be losing; under the standard rules used in the movie, the switch really is favorable, and the logic, while counterintuitive, is not just there to confuse you. 190
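The 2/3-versus-50-50 dispute can be settled empirically. The short simulation below plays the standard game (the host always opens a losing, unchosen door) many times; the exact trial counts and seed are arbitrary choices for this illustration.

```python
import random

def play(switch, trials=100_000, rng=random.Random(1)):
    """Simulate the standard Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the prize
        pick = rng.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        host = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != host)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```

Sticking wins about a third of the time and switching about two thirds, matching the standard analysis rather than the 50-50 intuition.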
  • Joseph Raphson• Raphson's most notable work is Analysis Aequationum Universalis, published in 1690. It contains a method, now known as the Newton-Raphson method, for approximating the roots of an equation. Isaac Newton had developed a very similar formula in his Method of Fluxions, written in 1671, but this work would not be published until 1736, nearly 50 years after Raphson's Analysis. However, Raphson's version of the method is simpler than Newton's, and is therefore generally considered superior. For this reason, it is Raphson's version of the method, rather than Newton's, that is found in textbooks today.• Raphson was a staunch supporter of Newton's, as opposed to Gottfried Leibniz's, claim to be the inventor of calculus. In addition, Raphson translated Newton's Arithmetica Universalis into English. The two were not close friends, however, as is evidenced by Newton's inability to spell Raphson's name either correctly or consistently.• Born: c. 1648, Middlesex, England. Died: c. 1715, England. Residence: England. Nationality: English. Fields: Mathematics. Alma mater: University of Cambridge. Known for: Newton-Raphson method. Influences: Isaac Newton. 191
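The Newton-Raphson iteration described above, x_{n+1} = x_n - f(x_n)/f'(x_n), fits in a few lines. A minimal sketch, recovering sqrt(2) as the root of f(x) = x^2 - 2:

```python
def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the update falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of x^2 - 2 = 0, i.e. sqrt(2); convergence is quadratic,
# so only a handful of iterations are needed from x0 = 1.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # close to math.sqrt(2)
```

Starting from x0 = 1, the iterates go 1.5, 1.41667, 1.4142157, ..., doubling the number of correct digits each step, which is why the method dominates the textbooks Raphson's version ended up in.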
  • The Dilbert Rule of Project Management
  • Fischer Black and Myron Scholes• All this began to change in 1973, with the publication of the options-pricing model developed by Fischer Black and Myron Scholes and expanded on by Robert C. Merton. The new model enabled more-effective pricing and mitigation of risk. It could calculate the value of an option to buy a security as long as the user could supply five pieces of data: the risk-free rate of return (usually defined as the return on a three-month U.S. Treasury bill), the price at which the security would be purchased (usually given), the current price at which the security was traded (to be observed in the market), the remaining time during which the option could be exercised (given), and the security’s price volatility (which could be estimated from historical data and is now more commonly inferred from the prices of options themselves if they are traded). The equations in the model assume that the underlying security’s price mimics the random way in which air molecules move in space, familiar to engineers as Brownian motion.• The core idea addressed by Black-Scholes was optionality: Embedded in all instruments, capital structures, and business portfolios are options that can expire, be exercised, or be sold. In many cases an option is both obvious and bounded—as is, for example, an option to buy General Electric stock at a given price for a given period. Other options are subtler. In their 1973 paper Black and Scholes pointed out that the holders of equity in a company with debt in its capital structure have an option to buy back the firm from the debt holders at a strike price equal to the company’s debt. Similarly, the emerging field of real options identified those implicit in a company’s operations—for example, the option to cancel or defer a project based on information from a pilot. 
The theory of real options put a value on managerial flexibility—something overlooked in straightforward NPV calculations, which assume an all-or-nothing attitude toward projects.• The new model could hardly have come at a more propitious time, coinciding as it did with the spread of the handheld electronic calculator. Texas Instruments marketed an early version to financial professionals with the tagline “Now you can find the Black-Scholes value using our calculator.” The calculator's rapid acceptance by options traders fueled the growth in derivatives markets and the broad development of standard pricing models. Other technological advances quickly followed: In 1975 the first personal computers were launched. In 1979 Dan Bricklin and Bob Frankston released VisiCalc, the first spreadsheet designed to work on a personal computer, giving managers a simple tool with which to run what-if scenarios. The financial sector rapidly developed new instruments for managing different types of risk and began trading them on exchanges—notably the Chicago Board Options Exchange—and in over-the-counter derivatives markets.• Reprint: R0809G
  • A Few Thoughts…• ADM Roughead, CNO, 25 Nov 07 message on POM 10: "We cannot afford to overly invest in capability areas where excess capacity exists. We must be brutally honest in assessing where that is… We must not support programs that collectively are not affordable. Our approach must be balanced. We can not rely on 'raising the top line' as a tactic or a strategy. We must be innovative in how Navy services OPLANS rather than just pacing the threat with weapons systems, e.g. we must not be maneuvered into being the victim of anyone's competitive strategy… Our imperative is to deliver a balanced force while prudently managing risks and being exceptional stewards of taxpayer resources."• The Honorable John J. Young, Jr, USD (AT&L), 12 Nov 07 message to the AT&L community: "… We will only execute to budgeted levels, so my view is that program budget cuts will generally be translated into scope reductions. I do not favor translating budget cuts into program schedule extensions that assure cost growth in the future – cost growth also not programmed in the budget… All program managers should seek to obtain independent cost estimates and ensure that the program is budgeted to the independent cost estimate… Do not view the funds above the program office estimate as your funds. The funds belong to the defense enterprise and the taxpayer. It is not your money. You are being asked to develop and deliver a product and being given resources."
  • CRYSTAL BALL• "...I was able to identify a potential cost-savings of $2.8 million." - Jeff Blase, Sprint Corporation• FEATURES AND BENEFITS• Monte Carlo Simulation: Calculates multiple scenarios of a spreadsheet model automatically, freeing users from the constraints of estimates and best-guess values.• Distribution Gallery: An intuitive interface for selecting model input variables; includes 16 discrete and continuous distributions plus a custom distribution, so you don't have to enter the distribution formula into Excel.• Categories of Distributions: Create predefined distributions, modify existing ones, and organize them in custom categories; build your own library of distributions and re-use them from one project to the next.• Publish and Subscribe for Categories: Publish categories and share them with many users, letting teams share models and data to get work done faster.• Process Capability Features: Define spec limits (LSL, USL, and Target) in your forecasts, calculate capability metrics, and view simulation results and metrics together in one split-view chart, integrating simulation into Six Sigma and quality methodologies.• Forecast Charts: Graphically display simulation results and statistics; interactive charts let users track and analyze thousands of possible outcomes.• Split-View Charts: View forecast charts, descriptive statistics, and capability metrics side by side; enable up to 6 charts and tables in one view.• Sensitivity and Tornado Analyses: Two separate methods for identifying the most critical model input variables, letting users focus on high-risk inputs.• Distribution Fitting: Uses historical data to define assumptions; fits continuous and discrete distributions, so model inputs reflect real-world results.• Correlation: Models dependencies between uncertain input variables for more accurate modeling and forecasting.• Charting and Reporting: Automates report generation, including the ability to overlay forecasts and project trends through time.• Precision Control: Advanced simulation capabilities that increase accuracy and flexibility and save time.• Latin Hypercube Sampling: An alternative simulation method to Monte Carlo that samples regularly across the distribution; excellent for simulations with a restricted number of trials.• Data Extraction: Exports data from Crystal Ball memory so users can examine individual simulation results and transfer them to other software.• CB Tools: Macro-driven tools including scenario analysis, decision table, data analysis, tornado chart, correlation matrices, 2D simulation, batch fit, and bootstrap.• CB Predictor™: Uses established forecasting methods to identify and extrapolate trends in your historical data, producing insightful and accurate forecasts.• Developer Kits: The Crystal Ball and CB Predictor Developer Kits bring complete automation and control from within a Visual Basic for Applications (VBA) program or any other language outside of Excel that supports OLE 2 automation.• Microsoft Certification: Certified Excel macro provider, eliminating security concerns.• IMMEDIATE BENEFITS• Most Crystal Ball users will be proficient in under 30 minutes if they are familiar with Microsoft® Excel, and the award-winning documentation makes it easy to begin seeing benefits almost instantly. Since the program is a fully integrated Microsoft add-in that seamlessly extends Excel spreadsheet models, the functionality of Crystal Ball is just a few clicks away.• "Crystal Ball is an excellent teaching tool for spreadsheet simulation models and is easy to learn and use. I highly recommend it." -- Liam O'Neill, Assistant Professor, Policy Analysis and Management, Cornell University• You stop guessing. While everyone must take risks to succeed, blind risks too often lead to costly errors. Crystal Ball puts the odds in your favor by helping you choose the most promising calculated risks. Each time you perform a Crystal Ball simulation, you gain a richer understanding of the inherent risks.• "This software is the best tool I have seen for trying to predict future occurrences in a very uncertain time." -- Steven J. Campbell, Managing Director, Nelligan Power• You have a competitive advantage. With a Crystal Ball analysis, you know what your competitor does not: the probability of a particular outcome. Because Crystal Ball lets you quantify your risks, it can be a crucial tool for a successful negotiation.• "Your product made me a hero - my executive clients finally can use their computers intuitively to get to the meat of their business options." -- Dick Willis, President, Decision Graphics• You break free from the limitations of spreadsheets. Monte Carlo simulation frees you from the constraints of estimates and best-guess values. Why rely on a single, possibly misleading estimate when you can easily create and analyze thousands of potential outcomes? Plus, using Crystal Ball means you no longer need to create several spreadsheets to analyze multiple scenarios. 195
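The core idea behind tools like Crystal Ball (replacing single best-guess cells with distributions and simulating thousands of scenarios) can be sketched without Excel. A minimal standard-library Python illustration; the cost model, its distributions, and every parameter are invented for this example, not taken from Crystal Ball:

```python
import random

random.seed(42)

def project_cost():
    # Each input is a distribution rather than a single best-guess value.
    labor = random.triangular(80, 150, 100)      # low, high, mode (in $K)
    materials = random.gauss(50, 10)             # mean 50, sd 10 (in $K)
    overhead_rate = random.uniform(0.10, 0.25)   # fraction of labor cost
    return labor + materials + overhead_rate * labor

# Run many trials and summarize the resulting forecast distribution.
trials = sorted(project_cost() for _ in range(10_000))
mean_cost = sum(trials) / len(trials)
p90_cost = trials[int(0.90 * len(trials))]  # cost exceeded only 10% of the time
```

Instead of one "best guess" total, the analyst can now quote a mean, a 90th-percentile cost, or the probability of exceeding the budget, which is exactly the question the slide's vendors are selling an answer to.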
  • Hindsight Bias• The inclination to see events that have occurred as more predictable than they in fact were before they took place. Hindsight bias has been demonstrated experimentally in a variety of settings, including politics, games, and medicine.• In psychological experiments of hindsight bias, subjects also tend to remember their predictions of future events as having been stronger than they actually were, in those cases where those predictions turn out correct.• Prophecy that is recorded after the fact is an example of hindsight bias, given its own rubric: vaticinium ex eventu, "foretelling after the event."• One explanation of the bias is the availability heuristic: the event that did occur is more salient in one's mind than the possible outcomes that did not.• It has been shown that examining possible alternatives may reduce the effects of this bias. 196
  • Anchoring & Adjustment• Anchoring and adjustment: People who have to make judgments under uncertainty use this heuristic by starting with a certain reference point (anchor) and then adjusting it, often insufficiently, to reach a final conclusion.• Example: If you have to judge another person's productivity, the anchor for your final (adjusted) judgment may be your own level of productivity. Depending on your own level of productivity, you might therefore underestimate or overestimate the productivity of this person. 197
  • Availability Heuristic• This heuristic is used to evaluate the frequency or likelihood of an event on the basis of how quickly instances or associations come to mind. When examples or associations are easily brought to mind, the frequency or likelihood of the event tends to be overestimated.• Example: People overestimate the divorce rate if they can quickly think of examples of divorced friends. 198
  • @RISK 5.0.1 Update• @RISK performs risk analysis using Monte Carlo simulation to show you many possible outcomes in your Microsoft Excel spreadsheet—and tells you how likely they are to occur. This means you can judge which risks to take and which ones to avoid, allowing for the best decision making under uncertainty. With @RISK, you can answer questions like, "What is the probability of profit exceeding $10 million?" or "What are the chances of losing money on this venture?"• The all-new @RISK 5.0 for Excel has been redesigned from the ground up to provide unprecedented model sharing, a revamped interface, and more robust analyses—including new Six Sigma and insurance functions.• @RISK is used to analyze risk and uncertainty in a wide variety of industries. From the financial to the scientific, anyone who faces uncertainty in their quantitative analyses can benefit from @RISK.• Finance and securities: retirement planning, currency valuation, real options analysis, discounted cash flow analysis, value-at-risk, portfolio optimization• Insurance / reinsurance: loss reserves estimation, premium pricing• Oil / gas / energy: exploration and production, oil reserves estimation, capital project estimation, pricing, regulation compliance• Six Sigma / manufacturing: manufacturing quality control, customer service improvement, DMAIC, DFSS / DOE, Lean Six Sigma• Government and defense: resource allocation, war games, welfare and budgetary projections• Aerospace and transportation: cost estimating, highway planning and optimization, supply chain distribution 199
  • References• Reifer, D. J. Making the Software Business Case: Improvement by the Numbers. Addison-Wesley, 2001.• Reifer, D. J., J. Craver, M. Ellis, and D. Strickland, eds. "Is Ada Dead or Alive Within the Weapons System World?" CrossTalk, Dec. 2000: 22-24.• Ada and C++: A Business Case Analysis. U.S. Air Force, 1991.• Jensen, R. "Software Estimating Model Calibration." CrossTalk, July 2001: 13-18.• Reifer, D. J. "Comparative Accuracy Analysis of Cost Models to Activity-Based Costing for Large Scale Software Projects." Reifer Consultants, Inc., 1996.• Ferens, E., and D. Christensen, eds. Calibrating Software Cost Models to Department of Defense Databases -- A Review of Ten Studies. Air Force Research Laboratories, Feb. 1998.• Florac, W. A., and A. D. Carleton, eds. Measuring the Software Process. Addison-Wesley, 1999.• Brooks, F. The Mythical Man-Month, Anniversary Edition. Addison-Wesley, 1995.• Boehm, B. W., C. Abts, A. W. Brown, S. Chulani, B. K. Clark, E. Horowitz, R. Madachy, D. Reifer, and B. Steece, eds. Software Cost Estimation with COCOMO II. Prentice-Hall, 2000.• Kruchten, P. The Rational Unified Process. Addison-Wesley, 1998.• Royce, W. Software Project Management: A Unified Framework. Addison-Wesley, 1998.• Reifer, D. J. A Poor Man's Guide to Estimating Software Costs. 8th ed., Reifer Consultants, Inc., 2000.• Paulk, M. C., C. V. Weber, B. Curtis, and M. B. Chrissis, eds. The Capability Maturity Model: Guidelines for Improving the Software Process. Addison-Wesley, 1995.• Clark, B. "Quantifying the Effects on Effort of Process Improvement." IEEE Software, Nov./Dec. 2000: 65-70.
  • Pareto Principle• Adding up to 100 leads to a nice symmetry. For example, if 80% of effects come from the top 20% of sources, then the remaining 20% of effects come from the lower 80% of sources. This is called the "Pareto Principle", and it can be used to measure the degree of imbalance: a joint ratio of 96:4 is very imbalanced, 80:20 is significantly imbalanced (Gini index: 60%), 70:30 is moderately imbalanced (Gini index: 40%), and 55:45 is just slightly imbalanced.• The Pareto Principle is an illustration of a "power law" relationship, which also occurs in phenomena such as brush fires and earthquakes. Because it is self-similar over a wide range of magnitudes, it produces completely different outcomes from Gaussian-distribution phenomena. This fact explains the frequent breakdowns of sophisticated financial instruments, which are modeled on the assumption that a Gaussian relationship is appropriate to, for example, stock movement sizes.
  • Power Law• Scientific interest in power-law relations also derives from the ease with which certain general classes of mechanisms can generate them, so that the observation of a power-law relation in data often points to specific kinds of mechanisms that might underlie the natural phenomenon in question, and can indicate a deep connection with other, seemingly unrelated systems.• The ubiquity of power-law relations in physics is partly due to dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy or of specific stochastic processes.• A few notable examples of power laws are the Gutenberg-Richter law for earthquake sizes, Pareto's law of income distribution, structural self-similarity of fractals, and scaling laws in biological systems.• Research on the origins of power-law relations, and efforts to observe and validate them in the real world, is an active topic of research in many fields of science, including physics, computer science, linguistics, geophysics, sociology, economics, and more. 202
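The 80:20 joint ratio corresponds to a Pareto (power-law) distribution with shape parameter near 1.16 (log 5 / log 4). A standard-library Python sketch that draws samples and checks how much of the total is held by the top fifth of sources; sample size and seed are arbitrary:

```python
import random

random.seed(0)
ALPHA = 1.16  # Pareto shape; roughly log(5)/log(4) gives the classic 80:20 split

# paretovariate draws from a Pareto distribution with minimum value 1.
draws = sorted((random.paretovariate(ALPHA) for _ in range(100_000)), reverse=True)
top_20_share = sum(draws[: len(draws) // 5]) / sum(draws)
# With such a heavy tail, the top 20% of sources carry on the order of
# 80% of the total, unlike anything a Gaussian model would produce.
```

The same experiment with `random.gauss` yields a top-20% share only modestly above 20%, which is the contrast the slide draws between power-law and Gaussian phenomena.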
  • 203
  • Edward Tufte, Beautiful Evidence, p. 51, "Sparklines: Intense, Simple, Word-sized Graphics." Graphics Press LLC, PO Box 430, Cheshire, Connecticut 06410; second edition, Jan 2007. 204
  • DoD Software Data????? 205
  • Comparison of Major Contract Types (DSMC, May 2008)• Firm-Fixed-Price (FFP) — Principal risk to be mitigated: none; the contractor assumes all cost risk. Use when the requirement is well-defined, contractors are experienced in meeting it, market conditions are stable, and financial risks are otherwise insignificant. The contractor is obliged to provide an acceptable deliverable at the time, place, and price specified in the contract, and generally realizes an additional dollar of profit for every dollar that costs are reduced.• Fixed-Price with Economic Price Adjustment (FPEPA) — Principal risk: unstable market prices for labor or material over the life of the contract. Use when the market prices at risk are severable and significant, and the risk stems from industry-wide contingencies beyond the contractor's control. Elements: a fixed price with a ceiling on upward adjustment and a formula for adjusting the price up or down based on established prices, actual labor or material costs, or labor or material indices.• Fixed-Price Incentive Firm Target (FPIF) — Principal risk: moderately uncertain contract labor or material requirements. Use when a ceiling price can be established that covers the most probable risks inherent in the nature of the work, and the proposed profit-sharing formula would motivate the contractor to control costs and meet other objectives. Elements: a ceiling price, target cost, target profit, and a formula for establishing final profit and price; the contractor realizes a higher profit by completing the work below the ceiling price and/or by meeting objective performance targets.• Fixed-Price Award-Fee (FPAF) — Principal risk: the user will not be fully satisfied because of judgmental acceptance criteria. Use when judgmental standards can be fairly applied by the fee-determining official and the potential fee is large enough to provide a meaningful incentive. Elements: a firm fixed price plus standards for evaluating performance and procedures for calculating a fee based on performance against those standards.• Fixed-Price with Prospective Price Redetermination (FP3R) — Principal risk: costs of performance after the first year cannot be estimated with confidence. Use when the Government needs a firm commitment from the contractor to deliver the supplies or services during subsequent years and the dollars at risk outweigh the administrative burdens. Elements: a fixed price for the first period, proposed prices for subsequent periods (at least 12 months apart), and a timetable for pricing the next period(s).• Cost-Plus-Incentive-Fee (CPIF), Cost-Plus-Award-Fee (CPAF), Cost-Plus-Fixed-Fee (CPFF), Cost or Cost-Sharing (C or CS), and Time & Materials (T&M) — Principal risk: highly uncertain and speculative labor hours, labor mix, and/or material requirements necessary to perform the contract. The Government assumes the risks inherent in the contract, benefiting if the actual cost is lower than the expected cost and losing if the work cannot be completed within the expected cost of performance. In the cost-type arrangements the contractor is obliged to make a good faith effort to meet the Government's needs within the estimated cost. Use CPIF when an objective relationship can be established between the fee and measures of performance such as actual costs, delivery dates, and performance benchmarks; CPAF when objective incentive targets are not feasible for critical aspects of performance but judgmental standards can be fairly applied; CPFF when relating fee to performance (e.g., actual costs) would be unworkable or of marginal utility; C or CS when the contractor expects substantial compensating benefits for absorbing part of the costs and/or foregoing fee, or the vendor is a non-profit entity; and T&M when no other type of contract is suitable (e.g., because costs are too low to justify an audit of the contractor's indirect expenses). T&M elements include a ceiling price and a per-hour labor rate that also covers overhead and profit, with direct material costs reimbursed.• Note 1: Goodwill is the value of the name, reputation, location, and intangible assets of the firm.• Note 2: Comply with any USD(AT&L), DPAP or other memoranda that have not been incorporated into the DFARS or DoD Directives or Instructions. 206 DSMC MAY 2008
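The FPIF "formula for establishing final profit and price" can be made concrete. A sketch of the standard incentive-share arithmetic; the 80/20 government/contractor share ratio and all dollar figures below are invented for illustration:

```python
def fpif_price(target_cost, target_profit, contractor_share, ceiling_price, actual_cost):
    """Final FPIF price: the contractor keeps its share of any cost underrun
    (or absorbs its share of an overrun), and total price is capped at the ceiling."""
    profit = target_profit + contractor_share * (target_cost - actual_cost)
    return min(actual_cost + profit, ceiling_price)

# Illustrative: $100M target cost, $10M target profit, 80/20 share, $125M ceiling.
underrun_price = fpif_price(100, 10, 0.20, 125, actual_cost=90)   # contractor rewarded
overrun_price = fpif_price(100, 10, 0.20, 125, actual_cost=115)   # contractor penalized
```

On the $90M underrun the contractor's profit rises to $12M; on the $115M overrun it falls to $7M, and above the point where the ceiling binds the contract effectively becomes firm-fixed-price.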
  • 207
  • Karl Popper 208
  • 209
  • Level of Activity over Life Cycle (chart: level of activity vs. time, from Start to Finish, for the process groups Initiate, Plan, Execute, Monitoring and Control, and Close) 210
  • Heuristic• A heuristic is a strategy that can be applied to a variety of problems and that usually – but not always – yields a correct solution. People often use heuristics (or shortcuts) that reduce complex problem solving to more simple judgmental operations. Three of the most popular heuristics are discussed by Tversky and Kahneman (1974):• Representativeness heuristic: What is the probability that person A (Steve, a very shy and withdrawn man) belongs to group B (librarians) or C (exotic dancers)? In answering such questions, people typically evaluate the probabilities by the degree to which A is representative of B or C (Steve's shyness seems more representative of librarians than of exotic dancers) and sometimes neglect base rates (there are far more exotic dancers than librarians in a certain sample).• Availability heuristic: This heuristic is used to evaluate the frequency or likelihood of an event on the basis of how quickly instances or associations come to mind. When examples or associations are easily brought to mind, this fact leads to an overestimation of the frequency or likelihood of the event. Example: People overestimate the divorce rate if they can quickly think of examples of divorced friends.• Anchoring and adjustment: People who have to make judgments under uncertainty use this heuristic by starting with a certain reference point (anchor) and then adjusting it, often insufficiently, to reach a final conclusion. Example: If you have to judge another person's productivity, the anchor for your final (adjusted) judgment may be your own level of productivity. Depending on your own level of productivity, you might therefore underestimate or overestimate the productivity of this person. 211
  • The Trillion Dollar Bet http://www.dailymotion.com/video/x4seov_the-trillion-dollar-bet_news http://www.pbs.org/wgbh/commandingheights/shared/video/qt/mini_p03_14_a_56.html• The Black-Scholes formula of economics promised to revolutionize Wall Street. It was a simple idea that determined the stock price according to mathematical principles. Yet when it hit the trading floor, it caused more damage than good. Nova reexamines the minds behind the money in Nova: The Trillion Dollar Bet. Fischer Black, Myron Scholes, and Robert Merton came up with the model that earned them a Nobel Prize in 1973.• After basking in the acclaim and profit from a major award, Scholes and Merton joined a trading group. They intended to market the theory and shake up the marketplace. The success lasted a while until the machine crashed. While the formula took a pounding, the world economy trembled. Nova relives this close call as fortune-telling gets scientific. 212
  • The Formula That Shook The World• Let's not kid ourselves: The Black-Scholes option-pricing formula is a difficult concept to grasp. To begin to understand the explanation of the formula below, you might want to first review the section on call options, then have a look at the theory behind the formula. For a more comprehensive explanation of the formula, we recommend Chapter 20 of Investments, by Zvi Bodie, Alex Kane, and Alan Marcus (Irwin Press, 1996), and Finance, by Zvi Bodie and Robert Merton (Prentice Hall, 2000), the primary sources for this article. Theory behind the formula: Derived by economists Myron Scholes, Robert Merton, and the late Fischer Black, the Black-Scholes formula is a way to determine how much a call option is worth at any given time. The economist Zvi Bodie likens the impact of its discovery, which earned Scholes and Merton the 1997 Nobel Prize in Economics, to that of the discovery of the structure of DNA. Both gave birth to new fields of immense practical importance: genetic engineering on the one hand and, on the other, financial engineering. The latter relies on risk-management strategies, such as the use of the Black-Scholes formula, to reduce our vulnerability to the financial insecurity generated by a rapidly changing global economy. Here's the theory behind the formula: When a call option on a stock expires, its value is either zero (if the stock price is less than the exercise price) or the difference between the stock price and the exercise price of the option. For example, say you buy a call option on XYZ stock with an exercise price of $100. If at the option's expiration date the price of XYZ stock is less than $100, the option is worthless. If, however, the stock price is greater than $100—say $120—then the call option is worth $20. The higher the stock price, the more the option is worth.
The difference between the stock price and the exercise price is the "payoff" to the call option. The Black-Scholes formula was derived by observing that an investor can precisely replicate the payoff to a call option by buying the underlying stock and financing part of the stock purchase by borrowing. To understand this, consider our example of XYZ stock. Suppose that instead of owning the call option, you purchased a share of XYZ stock itself and borrowed the $100 exercise price. At the option's expiration date, you sell the stock for $120, you pay back the $100 loan, and you are left with the $20 difference less the interest on the loan. Note that at any price above the $100 exercise price, this equivalence exists between the payoff from the call option and the payoff from the so-called "replicating portfolio." But what about before the call option expires? Believe it or not, you can still match its future payoff by creating a replicating portfolio. However, to do so you must buy a fraction of a share of the stock and borrow a fraction of the exercise price. How do you know what these fractions are? That is what the Black-Scholes formula tells you. It states that the price of the call option, C, is equal to a fraction—N(d1)—of the stock's current price, S, minus a fraction—N(d2)—of the present discounted value of the exercise price. The fractions depend on five factors, four of which are directly observable. They are: the price of the stock; the exercise price of the option; the risk-free interest rate (the annualized, continuously compounded rate on a safe asset with the same maturity as the option); and the time to maturity of the option. The only unobservable is the volatility of the underlying stock price. If the current stock price is way above the exercise price, these fractions are close to 1, and therefore the call option is approximately the difference between the stock's current price and the present discounted value of the exercise price.
If, on the other hand, the current stock price is way below the exercise price, these fractions are close to zero, making the value of the call option very low. 213
  • Black–Scholes Model• The term Black–Scholes refers to three closely related concepts:• The Black–Scholes model is a mathematical model of the market for an equity, in which the equity's price is a stochastic process.• The Black–Scholes PDE is a partial differential equation which (in the model) must be satisfied by the price of a derivative on the equity.• The Black–Scholes formula is the result obtained by solving the Black–Scholes PDE for European put and call options.• Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options-pricing model, and coined the term "Black–Scholes" options pricing model, enhancing work that was published by Fischer Black and Myron Scholes. The paper was first published in 1973. The foundation for their research relied on work developed by scholars such as Louis Bachelier, Edward O. Thorp, and Paul Samuelson. The fundamental insight of Black–Scholes is that the option is implicitly priced if the stock is traded.• Merton and Scholes received the 1997 Nobel Prize in Economics for this and related work. Though ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish academy.• http://www.pbs.org/wgbh/nova/stockmarket/ 214
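The formula itself, C = N(d1)·S − N(d2)·K·e^(−rT), fits in a few lines. A Python sketch using only the standard library (the normal CDF is built from math.erf); the sample inputs at the end are purely illustrative:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call.
    S: current stock price; K: exercise price; r: risk-free rate;
    sigma: volatility of the stock; T: years to expiration."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # C = N(d1)*S minus N(d2) times the discounted exercise price.
    return norm_cdf(d1) * S - norm_cdf(d2) * K * exp(-r * T)

# Illustrative inputs: at-the-money call, 5% rate, 20% volatility, 1 year.
price = black_scholes_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)  # ~10.45
```

Note that only sigma, the volatility, is unobservable; the other four inputs are read straight off the market or the option contract, which is what made the formula practical on a handheld calculator.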
  • Fischer Black and Myron Scholes• In 1973, Fischer Black and Myron Scholes published their options-pricing model, later expanded on by Robert C. Merton. The new model enabled more-effective pricing and mitigation of risk. It could calculate the value of an option to buy a security as long as the user could supply five pieces of data: the risk-free rate of return (usually defined as the return on a three-month U.S. Treasury bill), the price at which the security would be purchased (usually given), the current price at which the security was traded (to be observed in the market), the remaining time during which the option could be exercised (given), and the security’s price volatility (which could be estimated from historical data and is now more commonly inferred from the prices of options themselves if they are traded). The equations in the model assume that the underlying security’s price mimics the random way in which air molecules move in space, familiar to engineers as Brownian motion. 215
  • Long-Term Capital Management• In its first four years, Long-Term Capital achieved phenomenal profits with virtually no downside. Thanks to its seemingly flawless computer models, as well as its formidable arbitrageurs — including two Nobel laureates and a former vice chairman of the Federal Reserve — it quadrupled its capital without having a single losing quarter. 216
• Long-Term Capital Management• Long-Term Capital's strategy was grounded in the notion that markets could be modeled. Thus, in August 1998, the hedge fund calculated that its daily "value at risk" (VaR), meaning the total it could lose, was only $35 million. Later that month, it dropped $550 million in a day. 217
• Long Term Capital Management• Such "risk management" calculations were and are a central tenet of modern finance.• "Risk" is said to be a function of potential market movement, based on historical market data. But this conceit is false, since history is at best an imprecise guide. 218
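A minimal sketch of the historical-simulation flavor of such a calculation (the return history here is simulated, and the 99% confidence level is an assumption):

```python
import random

random.seed(0)
# Hypothetical history of 1,000 daily portfolio returns; in practice these
# come from historical market data, which is exactly the critique above.
returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]

def historical_var(returns, portfolio_value, confidence=0.99):
    """One-day value at risk: the loss exceeded on only a (1 - confidence)
    fraction of the observed days."""
    worst = sorted(returns)[int((1 - confidence) * len(returns))]
    return -worst * portfolio_value

print(f"99% one-day VaR on $1M: ${historical_var(returns, 1_000_000):,.0f}")
```

Such a number says nothing about how bad the remaining 1% of days can be, which is the air-bag failure mode Einhorn describes and the trap Long-Term Capital fell into.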
• Long Term Capital Management • Rather than evaluate financial assets case by case, financial models rely on the notion of randomness, which has huge implications for diversification. It means two investments are safer than one, three safer than two. • The theory of option pricing, the Black–Scholes formula, is the cornerstone of modern finance and was devised by two Long-Term Capital partners, Robert C. Merton and Myron S. Scholes, along with one other scholar. It is based on the idea that each new price is random, like a coin flip. • Long-Term Capital's partners were shocked that their trades, spanning multiple asset classes, crashed in unison. But markets aren't so random. In times of stress, the correlations rise. People in a panic sell stocks — all stocks. Lenders who are under pressure tighten credit to all. • And Long-Term Capital's investments were far more correlated than it realized. In different markets, it made essentially the same bet: that risk premiums — the amount lenders charge for riskier assets — would fall. Was it so surprising that when Russia defaulted, risk premiums everywhere rose?• NY Times, Sept. 7, 2008, "Long-Term Capital: It's Short-Term Memory" 219
• Trillion Dollar Bet• In 1973, three brilliant economists, Fischer Black, Myron Scholes, and Robert Merton, discovered a mathematical Holy Grail that revolutionized modern finance. The elegant formula they unleashed upon the world was sparse and deceptively simple, yet it led to the creation of a multi-trillion dollar industry. Their bold ideas earned Scholes and Merton a Nobel Prize (Black died before the prize was awarded) and attracted the elite of Wall Street.• In 1993, Scholes and Merton joined forces with John Meriwether, the legendary bond trader of Salomon Brothers. With 13 other partners, they launched a new hedge fund, Long Term Capital Management, which promised to use mathematical models to make investors tremendous amounts of money. Their money machines reaped fantastic profits, until their theories collided with reality and sent the company spiraling out of control. The crisis threatened to bring markets around the world to the brink of collapse.• Join NOVA in the quest to turn finance into a science. Plus, trace the little-known history of predicting financial markets and go to work with some successful modern traders who rely on intuition as well as mathematical models.• Original broadcast date: 02/08/2000• Topic: mathematics, social sciences/miscellaneous 220
• Neuroeconomics & neuroscience• Neuroeconomics• Neuroeconomics combines neuroscience, economics, and psychology to study how people make decisions. It looks at the role of the brain when we evaluate decisions, categorize risks and rewards, and interact with each other.• Neuroeconomics & neuroscience• Neuroscience studies the nervous system, with broad areas such as the senses, movement, and internal regulation. Neuroeconomics is the subset that focuses on high-level concepts of personal choices and decisions, and how these are represented using our neurons and neuronal networks.• Neuroeconomics & economics• Economics studies choices and decisions, with broad areas such as macroeconomics for large groups and microeconomics for individuals. Neuroeconomics is the subset that focuses on personal choices and the mental changes that correlate with the choices and may even cause them. A key insight is that the biological substance of a living organism can be modeled as implementing an optimizing solution to some survival/reproductive challenge in the evolutionary environment.• Neuroeconomics & business research• Neuroeconomics also incorporates aspects of business research (e.g., consumer neuroscience, neurofinance, organizational decision making).• Neuroeconomics & psychology• Psychology studies thought and perception, with broad areas such as language, cognition, memory, group psychology and abnormal psychology. Neuroeconomics is the subset that focuses on thought about our choices, especially the cognition that happens when we understand our options and then choose one.• Neuroeconomics findings tend to confirm that emotions (among them hope and fear) are important factors in many economic choices.• Experiments• In a typical behavioral economics experiment, a subject is asked to make a series of economic decisions. 
For example, a subject may be asked whether they prefer to have 45 cents or a gamble with a 50% chance of one dollar and a 50% chance of nothing. The experimenter will then measure different variables in order to determine what is going on in the subject's brain as they make the decision. The simplest experiments record the subject's decisions over various design parameters (what about 42 cents?), and use the data to generate formal models that predict performance. This is the type of experiment for which Daniel Kahneman won the Nobel Prize in Economics.• Neuroeconomics extends the approach of behavioral economics by adding observation of the nervous system to the set of explanatory variables.• In neuroeconomic experiments, full brain scans are performed using fMRI or PET in order to compare the roles of the different brain areas that contribute to economic decision-making. Other experiments measure ERP (event-related potentials, which are closely related to EEG) and MEG (magnetoencephalograms) to measure the timecourses of different brain events that contribute to economic decision making.• The most complicated experiments involve direct recordings of neurons (and sometimes neurotransmitter concentrations) in monkeys and humans.• Criticisms• A prominent critic of neuroeconomics is Ariel Rubinstein. At the world congress of the Econometric Society in 2005 he referred to neuroeconomics as "a field that oversells itself" (see Rubinstein (2006)).• Gul and Pesendorfer (2005) have argued that the methodology of neuroeconomics answers irrelevant questions, in that it concentrates on what provides the most hedonic satisfaction to experimental subjects rather than on what economic outcome those subjects choose out of multiple options. However, neuroeconomic research has been able to provide more insight into some behavior that could not be adequately explained by other methods. 
[1]• Ramifications• Neuroeconomics, although a relatively recent approach to biology and human behavior, shows promise of contributing to knowledge in a wide range of areas. Neuroeconomic approaches have already been applied to issues as diverse as proving guilt beyond a reasonable doubt (no reference) and analyzing communications services demand [2].• Some consider that neuroeconomics could be a new source of manipulative tools for advertisers to influence buying decisions, and that consumers should be taught to identify them.• Deppe M; Schwindt W; Pieper A; Kugel H; Plassmann H; Kenning P; Deppe K; Ringelstein EB; "Anterior cingulate reflects susceptibility to framing during attractiveness evaluation." Neuroreport; July 2007; 18(11):1119-23.• Deppe M; Schwindt W; Kugel H; Plassmann H; Kenning P; "Nonlinear responses within the medial prefrontal cortex reveal when specific implicit information influences economic decision making." J Neuroimaging; April 2005; 15(2):171-82.• Deppe M; Schwindt W; Krämer J; Kugel H; Plassmann H; Kenning P; Ringelstein EB; "Evidence for a neural correlate of a framing effect: bias-specific activity in the ventromedial prefrontal cortex during credibility judgments." Brain Res Bull; November 2005; 67(5):413-21.• "Brain Research Bulletin - Special Issue on NeuroEconomics", 2005.• "NeuroEconomics: An overview from an economic perspective", 2005.• Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics, MIT Press, 2003.• Colin Camerer, George Loewenstein, and others, "Neuroeconomics: How neuroscience can inform economics", Journal of Economic Literature, 2005.• Paul J. Zak, Robert Kurzban, and others, "The Neurobiology of Trust", 1032:224–227 (2004).• Michael Kosfeld, Markus Heinrichs, Paul J. Zak, Urs Fischbacher, and Ernst Fehr, "Oxytocin Increases Trust in Humans", Nature, 435:473–476, 2 June 2005. 221
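The 45-cents-versus-gamble choice from the experiment described above can be made concrete. With a concave utility function (the square root below is an assumed, illustrative choice, not the experimenters' model), a risk-averse subject prefers the sure 45 cents even though the gamble has the higher expected value:

```python
from math import sqrt

sure_amount = 0.45
# The gamble: 50% chance of $1.00, 50% chance of nothing.
ev_gamble = 0.5 * 1.00 + 0.5 * 0.0              # expected value: $0.50
eu_gamble = 0.5 * sqrt(1.00) + 0.5 * sqrt(0.0)  # expected utility under u(x) = sqrt(x)
eu_sure = sqrt(sure_amount)                     # utility of the sure 45 cents

print(ev_gamble > sure_amount)  # True: pure expected value favors the gamble
print(eu_sure > eu_gamble)      # True: the risk-averse subject declines it
```

Varying the sure amount (the "what about 42 cents?" step) traces out where the subject's preference flips, which is exactly the data such experiments collect.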
• Best Practices• CKlaus@kaneva.com Jeran,• We probably use more of a roadmap/milestone model, with MS Project and Gantt charts to track progress, and a process similar to agile programming or Scrum. http://en.wikipedia.org/wiki/Scrum_(development)•• -----Original Message-----• From: Binning, Jeran [mailto:jeran.binning@dau.mil]• Sent: Tuesday, August 12, 2008 5:05 PM• To: Chris Klaus• Subject: FW: Software Metrics• Chris,• We attended an executive program together at Stanford in 2000; I am now a US Government employee teaching for the Department of Defense. I am conducting research to develop a set of high-level metrics for senior management to gain insight into program progress for software-intensive systems. Do you and your staff have a set of metrics that you follow for major programming efforts?• Best regards,• Jeran Binning• Professor• Defense Acquisition University• San Diego, CA• 619-417-2513• Jeran.Binning@dau.mil• cklaus@iss.net 222
  • Lean Software Development• Agile• http://www.rallydev.com/?ppc=google&k w=program_management&gclid=CLStk MuVz5UCFQQCagodjF0CiQ 223
• PMI Website• Dynamic Scheduling® with Microsoft® Office Project 2007• ... scheduling best practices with valuable recommendations as to why, when, and how to use the various ... to achieve the best results in practice. 224
  • PMI• ... Project Management Practices - finally, all the tools you need are available at the touch of a ... Winner of the 2003 Association Trends All Media Award (Gold) for Best Member Software Program! 225
• Scrum History• In 1986, Hirotaka Takeuchi and Ikujiro Nonaka described a new holistic approach which increases speed and flexibility in commercial new product development:[1] – They compare this new holistic approach, in which the phases strongly overlap and the whole process is performed by one cross-functional team across the different phases, to rugby, where the whole team "tries to go the distance as a unit, passing the ball back and forth". – The case studies come from the automotive, photocopier, computer and printer industries.• In 1991, DeGrace and Stahl, in Wicked Problems, Righteous Solutions,[2] referred to this approach as Scrum, a rugby term mentioned in the article by Takeuchi and Nonaka.• In the early 1990s, Ken Schwaber used an approach that led to Scrum at his company, Advanced Development Methods.• At the same time, Jeff Sutherland developed a similar approach at Easel Corporation and was the first to call it Scrum.[3]• In 1995 Sutherland and Schwaber jointly presented a paper describing Scrum at OOPSLA '95 in Austin, its first public appearance. Schwaber and Sutherland collaborated during the following years to merge the above writings, their experiences, and industry best practices into what is now known as Scrum.• In 2001 Schwaber co-authored a book describing the method.• Characteristics of Scrum• The Scrum process• Scrum is a process skeleton that includes a set of practices and predefined roles. The main roles in Scrum are the ScrumMaster, who maintains the processes and works similarly to a project manager; the Product Owner, who represents the stakeholders; and the Team, which includes the developers.• During each sprint, a 15-30 day period (length decided by the team), the team creates an increment of potentially shippable (usable) software. The set of features that go into each sprint comes from the product backlog, which is a prioritized set of high-level requirements of work to be done. 
Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting the Product Owner informs the team of the items in the product backlog that he wants completed. The team then determines how much of this they can commit to completing during the next sprint.[4] During the sprint, no one is able to change the sprint backlog, which means that the requirements are frozen for the sprint.• There are several implementations of systems for managing the Scrum process, ranging from yellow stickers and whiteboards to software packages. One of Scrum's biggest advantages is that it is very easy to learn and requires little effort to start using.• Scrum roles 226
  • • Martin Gardner Born: 21-Oct-1914 Birthplace: Tulsa, OK• Gender: Male Religion: Other Race or Ethnicity: White Sexual orientation: Straight Occupation: Mathematician• Nationality: United States Executive summary: Mathematical games, Scientific American• Military service: US Navy (1941-45)• Wife: Charlotte Greenwald (d. 2001)• University: BA Philosophy, University of Chicago, Chicago, IL (1936)• Scientific American Columnist Mathematical Games 1957-82 The Tulsa Tribune Reporter CSICOP Board Member False Memory Syndrome Foundation Advisory Board 227
• Risk analysis for revenue dependent infrastructure projects• Authors: Songer A. D.; Diekmann J.; Pecsok R. S.• Source: Construction Management and Economics, Volume 15, Number 4, 1 July 1997, pp. 377-382(6)• Publisher: Routledge, part of the Taylor & Francis Group• Abstract:• Recent trends in the construction industry indicate continued use of alternative procurement methods such as design-build, construction management, build-operate-transfer, and privatization. Increased use of these evolving methods produces higher levels of uncertainty with respect to long-term performance and profitability. The uncertainties inherent in implementing new procurement methods necessitate investigation of enhanced methods of pre-project planning and analysis. This is particularly true for revenue-dependent privatization projects such as toll roads. Poor initial performance of toll road projects suggests traditional methods of project analysis are inadequate. Sustaining investor and stakeholder support of privatized revenue-dependent projects is dependent upon successful financial performance. Enhanced risk analysis tools provide improved information for pre-project decision making and performance outcome. One such risk analysis method is Monte Carlo simulation. Monte Carlo methods are especially useful in evaluating which of several uncertain quantities most significantly contributes to the overall risk of the project. This paper demonstrates a Monte Carlo risk assessment methodology for revenue dependent infrastructure projects.• Keywords: PROJECT; FINANCE; MONTE; CARLO; PRIVATIZATION; RISK; ANALYSIS; COMPUTER• Language: English• Document Type: Research article 228
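A minimal sketch of the Monte Carlo approach the abstract describes, applied to a hypothetical toll road (every number below is assumed for illustration, not drawn from the paper):

```python
import random

random.seed(42)

def project_npv_once():
    # Hypothetical uncertain inputs, modeled as triangular(low, high, mode):
    traffic = random.triangular(20_000, 40_000, 28_000)  # vehicles/day
    toll = random.triangular(1.5, 3.0, 2.0)              # $/vehicle
    annual_cost = random.triangular(8e6, 15e6, 10e6)     # O&M $/year
    annual_revenue = traffic * toll * 365
    # Assumed 20-year horizon, 8% discount rate, $150M construction cost.
    return -150e6 + sum((annual_revenue - annual_cost) / 1.08**t
                        for t in range(1, 21))

trials = [project_npv_once() for _ in range(10_000)]
p_loss = sum(1 for v in trials if v < 0) / len(trials)
print(f"P(NPV < 0) = {p_loss:.2%}")
```

Re-running with one input held at its mode while the others vary is the simple way to see which uncertain quantity contributes most to overall project risk, which is the sensitivity question the paper highlights.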
• Cost Risk Analysis: the process of quantifying uncertainty in a cost estimate.• By definition a point estimate is precisely wrong – Assessment of risk is not evident in a point estimate – The influence of variables may not be understood by the decision maker• Cost risk predicts cost growth.• Cost risk = cost estimating risk + schedule risk + technical risk + change in requirements/threat• Risk analysis adjusts the cost estimate to provide decision makers an understanding of funding risks. [Chart: probability density function and cumulative density function of cost]
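The point made above, that a point estimate built from most-likely values understates probable cost, can be sketched with a simple simulation (the four risk components follow the slide's sum, but all ranges below are assumed, in $M):

```python
import random

random.seed(1)

def total_cost_once():
    # Each component drawn from an assumed triangular(low, high, mode);
    # the right-skewed ranges are what drive cost growth.
    estimating = random.triangular(90, 130, 100)
    schedule = random.triangular(0, 20, 5)
    technical = random.triangular(0, 25, 8)
    requirements = random.triangular(0, 15, 3)
    return estimating + schedule + technical + requirements

costs = sorted(total_cost_once() for _ in range(10_000))
point_estimate = 100 + 5 + 8 + 3      # most-likely values only: "precisely wrong"
p50 = costs[len(costs) // 2]          # median of the simulated distribution
p80 = costs[int(0.8 * len(costs))]    # 80th-percentile funding level
print(point_estimate, round(p50, 1), round(p80, 1))
```

The sorted `costs` list is the cumulative density function from the slide's chart; funding to the 80th percentile rather than the point estimate is one way decision makers use it.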
• market, while the other can create feedback loops that drive the market further and further from the equilibrium of the "fair price".• A specific example of this criticism is found in some attempted explanations of the equity premium puzzle. It is argued that the puzzle simply arises due to entry barriers (both practical and psychological) which have traditionally impeded entry by individuals into the stock market, and that returns between stocks and bonds should stabilize as electronic resources open up the stock market to a greater number of traders (see Freeman, 2004 for a review). In reply, others contend that most personal investment funds are managed through superannuation funds, so the effect of these putative barriers to entry would be minimal. In addition, professional investors and fund managers seem to hold more bonds than one would expect given return differentials.• Quantitative behavioral finance• Quantitative behavioral finance is a new discipline that uses mathematical and statistical methodology to understand behavioral biases in conjunction with valuation. Some of this endeavor has been led by Gunduz Caginalp (Professor of Mathematics and Editor of the Journal of Behavioral Finance during 2001-2004) and collaborators including Vernon Smith (2002 Nobel Laureate in Economics), David Porter, Don Balenovich,[8] Vladimira Ilieva, Ahmet Duran,[9] and Huseyin Merdan. 
Studies by Jeff Madura,[10] Ray Sturm[11] and others have demonstrated significant behavioral effects in stocks and exchange-traded funds.• The research can be grouped into the following areas:• Empirical studies that demonstrate significant deviations from classical theories• Modeling using the concepts of behavioral effects together with the non-classical assumption of the finiteness of assets• Forecasting based on these methods• Studies of experimental asset markets and use of models to forecast experiments• Behavioral economics topics• Models in behavioral economics are typically addressed to a particular observed market anomaly and modify standard neo-classical models by describing decision makers as using heuristics and being affected by framing effects. In general, economics sits within the neoclassical framework, though the standard assumption of rational behaviour is often challenged.• Heuristics• Prospect theory - Loss aversion - Status quo bias - Gambler's fallacy - Self-serving bias - Money illusion• Framing• Cognitive framing - Mental accounting - Anchoring• Anomalies (economic behavior)• Disposition effect - Endowment effect - Inequity aversion - Reciprocity - Intertemporal consumption - Present-biased preferences - Momentum investing - Greed and fear - Herd instinct - Sunk cost fallacy• Anomalies (market prices and returns)• Equity premium puzzle - Efficiency wage hypothesis - Price stickiness - Limits to arbitrage - Dividend puzzle - Fat tails - Calendar effect• Critical conclusions of behavioral economics• Critics of behavioral economics typically stress the rationality of economic agents (see Myagkov and Plott (1997) amongst others). 
They contend that experimentally observed behavior is inapplicable to market situations, as learning opportunities and competition will ensure at least a close approximation of rational behavior.• Others note that cognitive theories, such as prospect theory, are models of decision making, not generalized economic behavior, and are only applicable to the sort of once-off decision problems presented to experiment participants or survey respondents.• Traditional economists are also skeptical of the experimental and survey-based techniques which are used extensively in behavioral economics. Economists typically stress revealed preferences over stated preferences (from surveys) in the determination of economic value. Experiments and surveys must be designed carefully to avoid systematic biases, strategic behavior and lack of incentive compatibility, and many economists are distrustful of results obtained in this manner due to the difficulty of eliminating these problems.• Rabin (1998)[12] dismisses these criticisms, claiming that results are typically reproduced in various situations and countries and can lead to good theoretical insight. Behavioral economists have also incorporated these criticisms by focusing on field studies rather than lab experiments. Some economists look at this split as a fundamental schism between experimental economics and behavioral economics, but prominent behavioral and experimental economists tend to overlap techniques and approaches in answering common questions. For example, many prominent behavioral economists are actively investigating neuroeconomics, which is entirely experimental and cannot be verified in the field.• Other proponents of behavioral economics note that neoclassical models often fail to predict outcomes in real-world contexts. 
Behavioral insights can be used to update neoclassical equations, and behavioral economists note that these revised models not only reach the same correct predictions as the traditional models, but also correctly predict some outcomes where the traditional models failed.• Key figures in behavioral economics• Dan Ariely[13]• Gary Becker• Colin Camerer• Ernst Fehr• Kenneth L. Fisher• Daniel Kahneman• David Laibson• George Loewenstein• R. Duncan Luce• Matthew Rabin• Howard Rachlin• Herbert Simon• Paul Slovic 230
• During the classical period, economics had a close link with psychology. For example, Adam Smith wrote The Theory of Moral Sentiments, an important text describing psychological principles of individual behavior; and Jeremy Bentham wrote extensively on the psychological underpinnings of utility. Economists began to distance themselves from psychology during the development of neo-classical economics as they sought to reshape the discipline as a natural science, with explanations of economic behavior deduced from assumptions about the nature of economic agents. The concept of homo economicus was developed, and the psychology of this entity was fundamentally rational. Nevertheless, psychological explanations continued to inform the analysis of many important figures in the development of neo-classical economics such as Francis Edgeworth, Vilfredo Pareto, Irving Fisher and John Maynard Keynes.• Psychology had largely disappeared from economic discussions by the mid-20th century. A number of factors contributed to the resurgence of its use and the development of behavioral economics. Expected utility and discounted utility models began to gain wide acceptance, generating testable hypotheses about decision making under uncertainty and intertemporal consumption respectively. Soon a number of observed and repeatable anomalies challenged those hypotheses. Furthermore, during the 1960s cognitive psychology began to describe the brain as an information processing device (in contrast to behaviorist models). Psychologists in this field such as Ward Edwards,[2] Amos Tversky and Daniel Kahneman began to compare their cognitive models of decision making under risk and uncertainty to economic models of rational behavior. 
In mathematical psychology, there is a longstanding interest in the transitivity of preference and in what kind of measurement scale utility constitutes (Luce, 2000).[3]• An important paper in the development of the behavioral finance and economics fields was written by Kahneman and Tversky in 1979. This paper, "Prospect Theory: An Analysis of Decision under Risk", used cognitive psychological techniques to explain a number of documented divergences of economic decision making from neo-classical theory. Over time many other psychological effects have been incorporated into behavioral finance, such as overconfidence and the effects of limited attention. Further milestones in the development of the field include a well-attended and diverse conference at the University of Chicago,[4] a special 1997 edition of the Quarterly Journal of Economics (In Memory of Amos Tversky) devoted to the topic of behavioral economics, and the award of the Nobel Prize to Daniel Kahneman in 2002 "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty".[5]• Prospect theory is an example of generalized expected utility theory. Although not commonly included in discussions of the field of behavioral economics, generalized expected utility theory is similarly motivated by concerns about the descriptive inaccuracy of expected utility theory.• Behavioral economics has also been applied to problems of intertemporal choice. The most prominent idea is that of hyperbolic discounting, proposed by George Ainslie (1975), in which a high rate of discount is used between the present and the near future, and a lower rate between the near future and the far future. 
This pattern of discounting is dynamically inconsistent (or time-inconsistent), and therefore inconsistent with some models of rational choice, since the rate of discount between time t and t+1 will be low at time t-1, when t is the near future, but high at time t, when t is the present and time t+1 the near future. Related to the discussion of hyperbolic discounting is animal and human work on melioration theory and the matching law of Richard Herrnstein. These suggest that behavior is not based on expected utility; rather, it is based on previous reinforcement experience.• Methodology• At the outset, behavioral economics and finance theories were developed almost exclusively from experimental observations and survey responses, though in more recent times real-world data has taken a more prominent position. fMRI has also been used to determine which areas of the brain are active during various steps of economic decision making. Experiments simulating market situations such as stock market trading and auctions are seen as particularly useful, as they can be used to isolate the effect of a particular bias upon behavior; observed market behavior can typically be explained in a number of ways, and carefully designed experiments can help narrow the range of plausible explanations. Experiments are designed to be incentive-compatible, with binding transactions involving real money the norm.• Key observations• There are three main themes in behavioral finance and economics:[6]• Heuristics: People often make decisions based on approximate rules of thumb, not strictly rational analysis. See also cognitive biases and bounded rationality.• Framing: The way a problem or decision is presented to the decision maker will affect his action.• Market inefficiencies: There are explanations for observed market outcomes that are contrary to rational expectations and market efficiency. These include mis-pricings, non-rational decision making, and return anomalies. 
Richard Thaler, in particular, has described specific market anomalies from a behavioral perspective.• Recently, Barberis, Shleifer, and Vishny (1998),[7] as well as Daniel, Hirshleifer, and Subrahmanyam (1998), have built models based on extrapolation (seeing patterns in random sequences) and overconfidence to explain security market over- and underreactions, though such models have not been used in the money management industry. These models assume that errors or biases are correlated across agents, so that they do not cancel out in aggregate. This would be the case if a large fraction of agents look at the same signal (such as the advice of an analyst) or have a common bias.• More generally, cognitive biases may also have strong anomalous effects in aggregate if there is social contamination with a strong emotional content (collective greed or fear), leading to more widespread phenomena such as herding and groupthink. Behavioral finance and economics rests as much on social psychology within large groups as on individual psychology. However, some behavioral models explicitly demonstrate that a small but significant anomalous group can also have market-wide effects (e.g., Fehr and Schmidt, 1999).• Behavioral finance topics• Some central issues in behavioral finance are why investors and managers (and also lenders and borrowers) make systematic errors. It shows how those errors affect prices and returns (creating market inefficiencies). It shows also what managers of firms or other institutions, as well as other financial players, might do to take advantage of market inefficiencies.• Among the inefficiencies described by behavioral finance, underreactions or overreactions to information are often cited as causes of market trends and, in extreme cases, of bubbles and crashes. 
Such misreactions have been attributed to limited investor attention, overconfidence / overoptimism, and mimicry (herding instinct) and noise trading.• Other key observations made in behavioral finance literature include the lack of symmetry between decisions to acquire or keep resources, called colloquially the "bird in the bush" paradox, and the strong loss aversion or regret attached to any decision where some emotionally valued resources (e.g. a home) might be totally lost. Loss aversion appears to manifest itself in investor behavior as an unwillingness to sell shares or other equity, if doing so would force the trader to realise a nominal loss (Genesove & Mayer, 2001). It may also help explain why housing market prices do not adjust downwards to market clearing levels during periods of low demand.• Applying a version of prospect theory, Benartzi and Thaler (1995) claim to have solved the equity premium puzzle, something conventional finance models have been unable to do.• Presently, some researchers in experimental finance use 231
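The time inconsistency of hyperbolic discounting described earlier can be shown in a few lines. The discount function 1/(1 + k·d) is the standard textbook form; the value k = 1 is an assumed parameter:

```python
def hyperbolic_discount(delay, k=1.0):
    """Present value of a unit reward received after `delay` periods."""
    return 1.0 / (1.0 + k * delay)

# Trade-off between a reward at time t and one at t+1, from two vantage points:
seen_from_afar = hyperbolic_discount(2) / hyperbolic_discount(1)  # judged at t-1
seen_up_close = hyperbolic_discount(1) / hyperbolic_discount(0)   # judged at t

# From afar, the extra one-period wait costs little; up close, the same wait
# is discounted steeply, so preferences can reverse as t approaches.
print(round(seen_from_afar, 3), round(seen_up_close, 3))
```

An exponential discounter would get the same ratio from both vantage points, which is why hyperbolic discounting, not discounting per se, is what produces the inconsistency.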
• Behavioral economics• Behavioral economics and behavioral finance are closely related fields which apply scientific research on human and social cognitive and emotional biases to better understand economic decisions and how they affect market prices, returns and the allocation of resources. The fields are primarily concerned with the rationality, or lack thereof, of economic agents. Behavioral models typically integrate insights from psychology with neo-classical economic theory.• Academics are divided between considering behavioral finance as supporting some tools of technical analysis by explaining market trends, and considering some aspects of technical analysis as behavioral biases (representativeness heuristic, self-fulfilling prophecy).• Behavioral analysts are mostly concerned with the effects of market decisions, but also with those of public choice, another source of economic decisions with some similar biases.• Quantitative behavioral finance 232
• Credit• Description: Universum, C. Flammarion, woodcut, Paris 1888; coloring: Heikenwaelder Hugo, Vienna 1998.• Source: Heikenwaelder Hugo, Austria, Email: heikenwaelder@aon.at, www.heikenwaelder.at• Date: 1998• Author: Heikenwaelder Hugo, Austria, Email: heikenwaelder@aon.at, www.heikenwaelder.at• Permission (reusing this image): Heikenwaelder Hugo, Austria, Email: heikenwaelder@aon.at, www.heikenwaelder.at• "A montage by Camille Flammarion for his work L'Astronomie populaire, which appeared in 1880"; see: Jean Pierre Verdet, Der HIMMEL. Ordnung und Chaos der Welt. Ravensburg: Maier, 1991, p. 26. Note: the text "Urbi et orbi" does not appear in the original Flammarion woodcut. SteveMcCluskey 23:13, 25 February 2007 (UTC) 233
• Wikipedia• Creative Commons• Creative Commons License Deed• Attribution-Share Alike 2.5 Generic• You are free:• to Share — to copy, distribute and transmit the work• to Remix — to adapt the work• Under the following conditions:• Attribution. You must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work).• What does "Attribute this work" mean?• The page you came from contained embedded licensing metadata, including how the creator wishes to be attributed for re-use. You can use the HTML here to cite the work. Doing so will also include metadata on your page so that others can find the original work as well.• Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.• For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page.• Any of the above conditions can be waived if you get permission from the copyright holder.• Nothing in this license impairs or restricts the author's moral rights.• A new version of this license is available. You should use it for new works, and you may want to relicense existing works under it. No works are automatically put under the new license, however.• Disclaimer• The Commons Deed is not a license. 
It is simply a handy reference for understanding the Legal Code (the full license) — it is a human-readable expression of some of its key terms. Think of it as the user-friendly interface to the Legal Code beneath. This Deed itself has no legal value, and its contents do not appear in the actual license.• Creative Commons is not a law firm and does not provide legal services. Distributing of, displaying of, or linking to this Commons Deed does not create an attorney-client relationship.•• Your fair dealing and other rights are in no way affected by the above.• This is a human-readable summary of the Legal Code (the full license).• Learn how to distribute your work using this license 234
  • AnY qUEsTIoNs? 235
  • Shattering the Bell Curve The power law rules. by DAVID A. SHAYWITZ Tuesday, April 24, 2007 12:01 A.M. EDT• Life isn't fair. Many of the most coveted spoils--wealth, fame, links on the Web--are concentrated among the few. If such a distribution doesn't sound like the familiar bell-shaped curve, you're right.• Along the hilly slopes of the bell curve, most values--the data points that track whatever is being measured--are clustered around the middle. The average value is also the most common value. The points along the far extremes of the curve contribute very little statistically. If 100 random people gather in a room and the world's tallest man walks in, the average height doesn't change much. But if Bill Gates walks in, the average net worth rises dramatically. Height follows the bell curve in its distribution. Wealth does not: It follows an asymmetric, L-shaped pattern known as a "power law," where most values are below average and a few far above. In the realm of the power law, rare and extreme events dominate the action.• For Nassim Taleb, irrepressible quant-jock and the author of "Fooled by Randomness" (2001), the contrast between the two distributions is not an amusing statistical exercise but something more profound: It highlights the fundamental difference between life as we imagine it and life as it really is. In "The Black Swan"--a kind of cri de coeur--Mr. Taleb struggles to free us from our misguided allegiance to the bell-curve mindset and awaken us to the dominance of the power law.• The attractiveness of the bell curve resides in its democratic distribution and its mathematical accessibility. 
Collect enough data and the pattern reveals itself, allowing both robust predictions of future data points (such as the height of the next five people to enter the room) and accurate estimations of the size and frequency of extreme values (anticipating the occasional giant or dwarf).• The power-law distribution, by contrast, would seem to have little to recommend it. Not only does it disproportionately reward the few, but it also turns out to be notoriously difficult to derive with precision. The most important events may occur so rarely that existing data points can never truly assure us that the future won't look very different from the present. We can be fairly certain that we will never meet anyone 14 feet tall, but it is entirely possible that, over time, we will hear of a man twice as rich as Bill Gates or witness a market crash twice as devastating as that of October 1987.• The problem, insists Mr. Taleb, is that most of the time we are in the land of the power law and don't know it. Our strategies for managing risk, for instance--including Modern Portfolio Theory and the Black-Scholes formula for pricing options--are likely to fail at the worst possible time, Mr. Taleb argues, because they are generally (and mistakenly) based on bell-curve assumptions. He gleefully cites the example of Long Term Capital Management (LTCM), an early hedge fund that blew up after its Nobel laureate founders "allowed themselves to take a monstrous amount of risk" because "their models ruled out the possibility of large deviations."• 236
  • • Mr. Taleb is fascinated by the rare but pivotal events that characterize life in the power-law world. He calls them Black Swans, after the philosopher Karl Popper's observation that only a single black swan is required to falsify the theory that "all swans are white" even when there are thousands of white swans in evidence. Provocatively, Mr. Taleb defines Black Swans as events (such as the rise of the Internet or the fall of LTCM) that are not only rare and consequential but also predictable only in retrospect. We never see them coming, but we have no trouble concocting post hoc explanations for why they should have been obvious. Surely, Mr. Taleb taunts, we won't get fooled again. But of course we will.• Writing in a style that owes as much to Stephen Colbert as it does to Michel de Montaigne, Mr. Taleb divides the world into those who "get it" and everyone else, a world partitioned into heroes (Popper, Hayek, Yogi Berra), those on notice (Harold Bloom, necktie wearers, personal-finance advisers) and entities that are dead to him (the bell curve, newspapers, the Nobel Prize in Economics).• A humanist at heart, Mr. Taleb ponders not only the effect of Black Swans but also the reason we have so much trouble acknowledging their existence. And this is where he hits his stride. We eagerly romp with him through the follies of confirmation bias (our tendency to reaffirm our beliefs rather than contradict them), narrative fallacy (our weakness for compelling stories), silent evidence (our failure to account for what we don't see), ludic fallacy (our willingness to oversimplify and take games or models too seriously), and epistemic arrogance (our habit of overestimating our knowledge and underestimating our ignorance).• For anyone who has been compelled to give a long-term vision or read a marketing forecast for the next decade, Mr. Taleb's chapter excoriating "The Scandal of Prediction" will ring painfully true. 
"What is surprising is not the magnitude of our forecast errors," observes Mr. Taleb, "but our absence of awareness of it." We tend to fail--miserably--at predicting the future, but such failure is little noted nor long remembered. It seems to be of remarkably little professional consequence.• I suspect that part of the explanation for this inconsistency may be found in a study of stock analysts that Mr. Taleb cites. Their predictions, while badly inaccurate, were not random but rather highly correlated with each other. The lesson, evidently, is that it's better to be wrong than alone.• If we accept Mr. Taleb's premise about power-law ascendancy, we are left with a troubling question: How do you function in a world where accurate prediction is rarely possible, where history isn't a reliable guide to the future and where the most important events cannot be anticipated?• Mr. Taleb presents a range of answers--be prepared for various outcomes, he says, and don't rush for buses--but it's clear that he remains slightly vexed by the world he describes so vividly. Then again, beatific serenity may not be the goal here. As Mr. Taleb warns, certitude is likely to be found only in a fool's (bell-curve) paradise, where we choose the comfort of the "precisely wrong" over the challenge of the "broadly correct." Beneath Mr. Taleb's blustery rhetoric lives a surprisingly humble soul who has chosen to follow a demanding and somewhat lonely path. 237
  • • The Trillion Dollar Bet• Duration: 05:56 Recorded: 21 March 2008 Location: France• Source: Dailymotion 238
  • • eu·de·mon·ic• Pronunciation: [yoo-di-mon-ik]• adjective 1. pertaining or conducive to happiness 239
  • Cost Growth in DoD Programs (Source: GAO analysis of DOD data)
                                                     FY 2000   FY 2005   FY 2007
    Number of programs                                  95        75        91
    Total planned commitments                         $790 B    $1.5 T    $1.6 T
    Commitments outstanding                           $380 B    $887 B    $858 B
    Change in RDT&E costs from first estimate           27%       33%       40%
    Change in acquisition cost from first estimate       6%       18%       26%
    Estimated total acquisition cost growth            $42 B    $202 B    $295 B
    Programs with >= 25% increase in Program
      Acquisition Unit Cost                             37%       44%       44%
    Average schedule delay delivering
      initial capability                              16 mos    17 mos    21 mos   240
  • Risk management• A structured approach to managing uncertainty related to a threat; a sequence of human activities including:• risk assessment, development of strategies to manage it, and mitigation of risk using managerial resources.• Some traditional approaches to risk management focus on risks stemming from physical or legal causes (e.g. natural disasters or fires, accidents, death, and lawsuits).• Financial risk management, on the other hand, focuses on risks that can be managed using traded financial instruments.• The objective of risk management is to reduce different risks related to a pre-selected domain to the level accepted by society. It may refer to numerous types of threats caused by the environment, technology, humans, organizations, and politics. It also involves all means available to humans, or in particular, to a risk management entity (person, staff, organization). 241
  • Risk management• Some explanations• Steps in the risk management process • Limitations• Areas of risk management• Risk management and business continuity• General references• Further reading• External links 242
  • Some explanations• Risk management also faces difficulties allocating resources. This is the idea of opportunity cost.• Resources spent on risk management could have been spent on more profitable activities.• Again, ideal risk management minimizes spending while maximizing the reduction of the negative effects of risks 243
  • Some explanations• In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order.• In practice the process can be very difficult, and balancing between risks with a high probability of occurrence but lower loss versus a risk with high loss but lower probability of occurrence can often be mishandled. High probability/low consequence events ≠ Low probability/High consequence events Prospect theory, Kahneman and Tversky 244
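The prioritization idea above can be sketched as a simple expected-loss ranking. The risk names, probabilities, and dollar impacts below are invented for illustration:

```python
# Sketch of the prioritization idea: rank risks by expected loss
# (probability x impact). All names and numbers are illustrative only.
risks = [
    {"name": "schedule slip",  "probability": 0.6, "impact": 2.0},   # impact in $M
    {"name": "vendor failure", "probability": 0.1, "impact": 15.0},
    {"name": "rework",         "probability": 0.8, "impact": 0.5},
]

for r in risks:
    r["expected_loss"] = r["probability"] * r["impact"]

# Handle the largest expected losses first.
ranked = sorted(risks, key=lambda r: r["expected_loss"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: expected loss ${r["expected_loss"]:.2f}M')
```

Note that a pure expected-loss ranking treats a 10% chance of a $15M loss much like a 60% chance of a $2M loss, which is exactly the high-probability/low-consequence versus low-probability/high-consequence distinction the slide warns can be mishandled.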
  • Some Explanations• Intangible risk management identifies a new type of risk - a risk that has a 100% probability of occurring but is ignored by the organization due to a lack of identification ability. – For example, when deficient knowledge is applied to a situation, a knowledge risk materializes. – Relationship risk appears when ineffective collaboration occurs. – Process-engagement risk may be an issue when ineffective operational procedures are applied. These risks directly reduce the productivity of knowledge workers, decrease cost effectiveness, profitability, service, quality, reputation, brand value, and earnings quality. – Intangible risk management allows risk management to create immediate value from the identification and reduction of risks that reduce productivity. 245
  • Some ExplanationsHigh probability/low consequence events ≠Low probability/High consequence events Prospect theory, Kahneman and Tversky 246
  • Establish the context• Establishing the context involves: – Identification of risk in a selected domain of interest – Planning the remainder of the process. – Mapping out the following: • the social scope of risk management • the identity and objectives of stakeholders • the basis upon which risks will be evaluated, constraints. – Defining a framework for the activity and an agenda for identification. – Developing an analysis of risks involved in the process. – Mitigation of risks using available technological, human and organizational resources. 247
  • FAT TAILS• The Black-Scholes model of option pricing is based on a normal distribution and under-prices options that are far out of the money, since a 5 or 7 sigma event is more likely than the normal distribution predicts. • Applications in economics: • In finance, fat tails are considered undesirable because of the additional risk they imply. For example, an investment strategy may have an expected return, after one year, that is five times its standard deviation. Assuming a normal distribution, the likelihood of its failure (negative return) is less than one in a million; in practice, it may be higher. Normal distributions that emerge in finance generally do so because the factors influencing an asset's value or price are mathematically "well-behaved", and the central limit theorem provides for such a distribution. • However, traumatic "real-world" events (such as an oil shock, a large corporate bankruptcy, or an abrupt change in a political situation) are usually not mathematically well-behaved. • Fat tails in market return distributions also have some behavioral origins (investor excessive optimism or pessimism leading to large market moves) and are therefore studied in behavioral finance. • In marketing, the familiar 80-20 rule frequently found (e.g. "20% of customers account for 80% of the revenue") is a manifestation of a fat tail distribution underlying the data 248
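A small sketch of the "5-sigma" point above, assuming nothing beyond the Python standard library: it compares the probability of a move 5 units below center under a standard normal distribution against a standard Cauchy, the fat-tailed distribution discussed later in the deck.

```python
import math

def normal_tail(x):
    """P(Z < -x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def cauchy_tail(x):
    """P(X < -x) for a standard Cauchy (location 0, scale 1)."""
    return 0.5 + math.atan(-x) / math.pi

p_normal = normal_tail(5)   # roughly 3e-7: "less than one in a million"
p_cauchy = cauchy_tail(5)   # roughly 0.06: about 1 in 16
print(p_normal, p_cauchy, p_cauchy / p_normal)
```

Under the normal assumption the 5-sigma drop is essentially impossible; under the fat-tailed alternative it is a routine event, which is the distortion the slide describes.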
  • Wikipedia• The license Wikipedia uses grants free access to our content in the same sense that free software is licensed freely. This principle is known as copyleft. Wikipedia content can be copied, modified, and redistributed so long as the new version grants the same freedoms to others and acknowledges the authors of the Wikipedia article used (a direct link back to the article is generally thought to satisfy the attribution requirement). Wikipedia articles therefore will remain free under the GFDL and can be used by anybody subject to certain restrictions, most of which aim to ensure that freedom.• To this end, the text contained in Wikipedia is copyrighted (automatically, under the Berne Convention) by Wikipedia contributors and licensed to the public under the GNU Free Documentation License (GFDL). The full text of this license is at Wikipedia:Text of the GNU Free Documentation License. – Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. – A copy of the license is included in the section entitled "GNU Free Documentation License". – Content on Wikipedia is covered by disclaimers. 249
  • Macbeth• SEYTON The queen, my lord, is dead.• MACBETH She should have died hereafter; There would have been a time for such a word. To-morrow, and to-morrow, and to-morrow, Creeps in this petty pace from day to day To the last syllable of recorded time, And all our yesterdays have lighted fools The way to dusty death. Out, out, brief candle! Life's but a walking shadow, a poor player That struts and frets his hour upon the stage And then is heard no more: it is a tale Told by an idiot, full of sound and fury, Signifying nothing. 250
  • Incorporating psychological influences in probabilistic cost analysis • Abstract: Today's typical probabilistic cost analysis assumes an ideal project that is devoid of the human and organizational considerations that heavily influence the success and cost of real-world projects. • In the real world Money Allocated Is Money Spent (MAIMS principle); cost underruns are rarely available to protect against cost overruns while task overruns are passed on to the total project cost. Realistic cost estimates therefore require a modified probabilistic cost analysis that simultaneously models the cost management strategy including budget allocation. • Psychological influences such as overconfidence in assessing uncertainties, dependencies among cost elements, and risk are other important considerations that are generally not addressed. • It should then be no surprise that actual project costs often exceed the initial estimates and are delivered late and/or with a reduced scope. • This paper presents a practical probabilistic cost analysis model that incorporates recent findings in human behavior and judgment under uncertainty, dependencies among cost elements, the MAIMS principle, and project management practices. • Uncertain cost elements are elicited from experts using the direct fractile assessment method and fitted with three-parameter Weibull distributions. The full correlation matrix is specified in terms of two parameters that characterize correlations among cost elements in the same and in different subsystems. The analysis is readily implemented using standard Monte Carlo simulation tools such as @Risk and Crystal Ball®. • The analysis of a representative design and engineering project substantiates that today's typical probabilistic cost analysis is likely to severely underestimate project cost for probability of success values of importance to contractors and procuring activities. 
The proposed approach provides a framework for developing a viable cost management strategy for allocating baseline budgets and contingencies. Given the scope and magnitude of the cost-overrun problem, the benefits are likely to be significant. © 2004 Wiley Periodicals, Inc. Syst Eng 7: 000-000, 2004. Edouard Kujawski (1), Mariana L. Alvaro (2), William R. Edwards (1). (1) Engineering Division, Ernest Orlando Lawrence Berkeley National Laboratory, University of California, Berkeley, CA 94720. (2) Department of Statistics, California State University, Hayward, CA 94542. Email: Edouard Kujawski (e_kujawski@lbl.gov). This article is a US Government work and, as such, is in the public domain in the United States of America. 251
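As an illustration of the MAIMS principle described in the abstract, the sketch below simulates three cost elements drawn from three-parameter Weibull distributions and compares the naive total (where underruns offset overruns) against the MAIMS total (where each element spends at least its allocation). All locations, scales, shapes, and budgets are invented for illustration, and the correlations the paper models are omitted for brevity.

```python
import random

random.seed(42)

# (location, scale, shape, allocated budget) per cost element, in $M.
# Three-parameter Weibull = location shift + standard two-parameter Weibull.
elements = [
    (2.0, 1.5, 2.0, 3.0),
    (5.0, 2.0, 1.5, 6.5),
    (1.0, 0.8, 2.5, 1.6),
]

def draw_cost(loc, scale, shape):
    # random.weibullvariate(alpha, beta): alpha = scale, beta = shape.
    return loc + random.weibullvariate(scale, shape)

trials = 20000
naive, maims = [], []
for _ in range(trials):
    total_naive = 0.0
    total_maims = 0.0
    for loc, scale, shape, budget in elements:
        c = draw_cost(loc, scale, shape)
        total_naive += c                  # underruns offset overruns
        total_maims += max(c, budget)     # MAIMS: money allocated is money spent
    naive.append(total_naive)
    maims.append(total_maims)

mean_naive = sum(naive) / trials
mean_maims = sum(maims) / trials
print(f"naive mean total: {mean_naive:.2f}  MAIMS mean total: {mean_maims:.2f}")
```

Because max(cost, budget) never falls below the realized cost, the MAIMS expected total always exceeds the naive one, which is why analyses that ignore budget allocation tend to underestimate project cost.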
  • Benoit Mandelbrot, godfather of fractal geometry• Although Mandelbrot coined the term fractal, some of the mathematical objects he presented in The Fractal Geometry of Nature had been described by other mathematicians. Before Mandelbrot, they had been regarded as isolated curiosities with unnatural and non-intuitive properties. Mandelbrot brought these objects together for the first time and turned them into essential tools for the long-stalled effort to extend the scope of science to non-smooth objects in the real world. He highlighted their common properties, such as self-similarity (linear, non-linear, or statistical), scale invariance, and a (usually) non-integer Hausdorff dimension.• He also emphasized the use of fractals as realistic and useful models of many "rough" phenomena in the real world. Natural fractals include the shapes of mountains, coastlines and river basins; the structures of plants, blood vessels and lungs; the clustering of galaxies; and Brownian motion. Fractals are found in human pursuits, such as music, painting, architecture, and stock market prices. Mandelbrot believed that fractals, far from being unnatural, were in many ways more intuitive and natural than the artificially smooth objects of traditional Euclidean geometry:• Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line. —Mandelbrot, in his introduction to The Fractal Geometry of Nature• Mandelbrot has been called a visionary.[9] His informal and passionate style of writing and his emphasis on visual and geometric intuition (supported by the inclusion of numerous illustrations) made The Fractal Geometry of Nature accessible to non-specialists. The book sparked widespread popular interest in fractals and contributed to chaos theory and other fields of science and mathematics. 252
  • FRACTALS 253
  • FAT TAILS• Benoît Mandelbrot• A fat tail is a property of some probability distributions (alternatively referred to as heavy-tailed distributions) exhibiting extremely large kurtosis, particularly relative to the ubiquitous normal distribution, which is itself an example of an exceptionally thin-tailed distribution. Fat tail distributions have power law decay. More precisely, the distribution of a random variable X is said to have a fat tail if Pr[X > x] ~ x^(-α) as x → ∞, for some α > 0.• Some reserve the term "fat tail" for distributions only where 0 < α < 2 (i.e. only in cases with infinite variance).• Fat tails and risk estimate distortions• By contrast to fat tail distributions, the normal distribution posits that events deviating from the mean by five or more standard deviations ("5-sigma events") are extremely rare, with 10- or more sigma being practically impossible. On the other hand, fat tail distributions such as the Cauchy distribution (and all other stable distributions with the exception of the normal distribution) are examples of fat tail distributions that have "infinite sigma" (more technically: "the variance does not exist").• Thus when data naturally arise from a fat tail distribution, shoehorning in the normal distribution model of risk — and an estimate of the corresponding sigma based necessarily on a finite sample size — would severely understate the true risk. Many — notably Benoît Mandelbrot — have noted this shortcoming of the normal distribution model and have proposed that fat tail distributions such as the stable distribution govern asset returns frequently found in finance. 254
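The "infinite sigma" remark above can be demonstrated numerically: the sample standard deviation of normal data settles near 1 as the sample grows, while that of standard Cauchy data stays dominated by single extreme draws. A minimal sketch using only the standard library (sample sizes are arbitrary):

```python
import math
import random

random.seed(11)

def sample_std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def cauchy():
    # Inverse-CDF sampling: tan(pi * (U - 1/2)) is standard Cauchy.
    return math.tan(math.pi * (random.random() - 0.5))

normal = [random.gauss(0, 1) for _ in range(100000)]
fat = [cauchy() for _ in range(100000)]

for n in (1000, 10000, 100000):
    print(n, round(sample_std(normal[:n]), 2), round(sample_std(fat[:n]), 2))
```

The normal column converges toward 1; the Cauchy column jumps around and keeps growing with sample size, which is what "the variance does not exist" looks like in practice.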
  • Brownian Motion• http://xanadu.math.utah.edu/java/brownianmotion/1/• In 1827 the English botanist Robert Brown noticed that pollen grains suspended in water jiggled about under the lens of the microscope, following a zigzag path. Even more remarkable was the fact that pollen grains that had been stored for a century moved in the same way.• In 1889 G.L. Gouy found that the "Brownian" movement was more rapid for smaller particles (we do not notice Brownian movement of cars, bricks, or people). In 1900 F.M. Exner undertook the first quantitative studies, measuring how the motion depended on temperature and particle size.• The first good explanation of Brownian movement was advanced by Desaulx in 1877: "In my way of thinking the phenomenon is a result of thermal molecular motion in the liquid environment (of the particles)." This is indeed the case. A suspended particle is constantly and randomly bombarded from all sides by molecules of the liquid. If the particle is very small, the hits it takes from one side will be stronger than the bumps from the other side, causing it to jump. These small random jumps are what make up Brownian motion.• References: Encyclopaedia Britannica 1968, "Brownian Movement." 255
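Desaulx's explanation lends itself to a quick simulation: model the particle's position as a sum of many small random kicks and check that the mean squared displacement grows linearly with the number of steps, the diffusive signature of Brownian motion. Step sizes and counts below are arbitrary:

```python
import random

random.seed(1)

def walk(n_steps, sigma=1.0):
    """One 2-D random walk: each step is an independent Gaussian kick."""
    x = y = 0.0
    for _ in range(n_steps):
        x += random.gauss(0, sigma)
        y += random.gauss(0, sigma)
    return x, y

def msd(n_steps, n_walks=2000):
    """Mean squared displacement averaged over many independent walks."""
    total = 0.0
    for _ in range(n_walks):
        x, y = walk(n_steps)
        total += x * x + y * y
    return total / n_walks

m100 = msd(100)
m400 = msd(400)
print(m100, m400)  # roughly 200 and 800, i.e. 2 * n_steps * sigma^2
```

Quadrupling the number of steps roughly quadruples the mean squared displacement, so displacement itself grows only like the square root of time.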
  • Skewness/Kurtosis• Skewness is the degree of departure from symmetry of a distribution. A positively skewed distribution has a "tail" which is pulled in the positive direction. A negatively skewed distribution has a "tail" which is pulled in the negative direction.• Kurtosis is the degree of peakedness of a distribution. A normal distribution is a mesokurtic distribution. A pure leptokurtic distribution has a higher peak than the normal distribution and has heavier tails. A pure platykurtic distribution has a lower peak than a normal distribution and lighter tails.• Most departures from normality display combinations of both skewness and kurtosis different from a normal distribution. 256
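The definitions above correspond to the third and fourth standardized moments. A short sketch (standard library only, with arbitrary sample sizes) computes sample skewness and excess kurtosis for symmetric normal data and for positively skewed exponential data:

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: third standardized moment."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3, so a normal scores near 0."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 4 for x in xs) / (n * s ** 4) - 3.0

random.seed(7)
normal = [random.gauss(0, 1) for _ in range(50000)]
skewed = [random.expovariate(1.0) for _ in range(50000)]  # positively skewed, leptokurtic

print(skewness(normal), excess_kurtosis(normal))  # both near 0 (mesokurtic)
print(skewness(skewed), excess_kurtosis(skewed))  # near 2 and 6
```

The exponential sample shows both a positive tail pull (skewness) and heavier-than-normal tails (positive excess kurtosis), the combination the slide says most departures from normality display.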
  • Non Linearity• Taleb's father used this ditty to describe the nonlinearity of everyday life: "Ketchup in a bottle -- none will come, and then the lot'll." 257
  • Root Cause• In the practice known as Root Cause Analysis (RCA), we are generally looking for reasons to explain why a problem occurred. In most cases, we find that there are many reasons for any given problem. Some (or most?) of them may be far removed in time, space, and subject from the problem itself. We typically call such reasons Root Causes, and according to theory, correcting these Root Causes will prevent future occurrences of this problem, and potentially many others.• The basic RCA method is to simply ask "Why" over and over again until you arrive at a Root Cause. The real question then becomes: how do we know when to stop asking "Why"? At what point are we satisfied that we've identified a Root Cause? What is a Root Cause? These are questions that constantly spark disagreement among RCA practitioners. While there is some disagreement as to what constitutes a Cause, the real fireworks begin when you try to define the word Root.• Dictionary.com has a rather lengthy definition of Root. I won't reproduce it here, but it should suffice to say that there are many different definitions. However, there are a few common meanings that run through most of them:• Roots are frequently hidden under the surface.• Roots provide support or act as a base.• Roots relate to origins and sources.• Roots are primary and fundamental.• Roots are established and entrenched.• What about the etymology of Root? According to the Online Etymology Dictionary, Root comes from the Old Norse word rot for "underground part of a plant." The current meanings of Root make sense in this respect. The etymology tells us that when we use the word Root today, we are basically using it as a metaphor to suggest the qualities of plant roots. 
In addition to the list above, the following qualities come to mind.• Roots can spread out further than you expect.• Roots can be hard to find and harder to get rid of.• Roots that aren't removed may continue growing.• Roots are often very dirty.• When RCA practitioners talk about Root Causes, they are basically talking about Causes that have all the qualities listed above. They want you to understand that problems are like plants that you don't want, i.e. weeds. If you leave a weed alone, you will end up with more weeds. If you try to remove a weed by cutting it off at the surface, your weed will grow back. The part of a weed you have to kill or remove to prevent future weeds is the root. The best overall solution would be to treat the soil so weeds don't take root in the first place!• So, back to the real questions at hand: what is a Root Cause? At what point are you satisfied that you've found one? When can you stop asking "Why"? Here's a short answer: you're right next to a Root Cause for your problem when you reach a fundamental force, law, or limit that cannot be removed by any action taken within your system. The actual Root Cause is the contradiction between your system's values (purpose, rules, culture, etc.) and these fundamental forces, laws, or limits. 258
  • DoD Risk Management Process• [Process cycle diagram: Risk Identification -> Risk Analysis -> Risk Mitigation Planning -> Risk Mitigation Plan Implementation -> Risk Tracking.]
  • Risk Perspectives (Examples)• [Diagram: perspectives -- Cost, Technical, Schedule, Resources, Processes, Business, Management, User, Contractor -- arrayed along External vs. Internal and Near term vs. Long term axes.]
  • Risk Identification• Activities to identify and document risk events/sources of uncertainty and risk drivers, identify their root causes, and determine risk owners. An iterative process; it should involve IPT members, risk management team members, subject matter experts, contractors, customers, and stakeholders.• Typical methods of identifying risk events include:• Brainstorming• Interviews• Decomposition• Delphi technique• SWOT analysis• Checklists/historical data
  • Additional Risk Identification Methods – Product-Based (WBS) – Functional Analysis – Scenario – Analogous Systems – Subject Matter Experts/Studies – Lessons Learned – Templates (Willoughby) – Process *WBS – Work Breakdown Structure
  • RISK IDENTIFICATION• What are the risk events in the program?  Known / Knowns  Known / Unknowns  Unknown / Unknowns• Where are risk events located in the program? – Requirements, Technology, Design, T&E, M&S, Cost, Schedule, etc.
  • Product Based Identification• [WBS diagram: System -> Vehicle -> Hull, Turret, FCS.]• The WBS is an excellent basis for IPT assignments.• If multiple problems involve specific functional areas, a process evaluation may be in order.
  • Process Based Identification• [Flattened template diagram spanning seven areas: Funding (funds phasing); Design (mission profiles, design requirements, design policy, design processes, design analysis, trade studies, parts/material selection, software design, CAD/CAM, design for test, BIT, design limits, configuration management, design reviews); Test (integrated test, test analysis, failure reporting, test reports, software test, special test equipment, TAAF); Production (manufacturing plan, quality process, piece-part control, subcontractor control, defect control, tool planning, manufacturing screening, production breaks); Facilities (modernization, factory improvements, productivity center); Logistics (logistics support analysis, manpower and personnel requirements, support and test equipment, spares, training); Management (manufacturing strategy, data management, technical risk assessment, feedback).]• DoD 4245.7-M, Transition to Production
  • Scenario Based Identification• [Diagram: top-level decomposition of a mission into Prepare, Execute Mission, and Repair and Maintain; Execute Mission decomposes into Receive Order, Locate Target, Deploy, Attack, and Recover. Scenarios branch on the first step: order received and understood; order not received; order received but not understood.]
  • Risk Event Filters: PMO• [Chart: cost and schedule tracked against a planned profile, with achieved-to-date performance (technical) compared against objective and threshold values; milestone groups #1-18, #19-35, and #36-60 plotted across the months JUL through JUN.]
  • Risk Event Filters: People– Users– Relationships– Decision Makers/Authorities– Organizations– Availability– Talent/Skill Level/education– Experience– Motivation/Morale– Safety
  • Risk Event Filters: Process– Requirements– Threat– Time/Schedule– Cost Estimation/Control– Design– Budget– Logistics Management– Test and Evaluation– Project size/scope– Legal/Regulatory– Management– Procurement– Systems Engineering– Production
  • Risk Event Filters: Technology – Change – New or Obsolete – Adoption/Use – Integration/Interfaces – Team (Government/Contractor) technology expertise – Security – Architecture – Scalability
  • Sample Risk Management Plan Format  Strategy and Approach – Introduction – Program Summary – Definitions – Organization  Processes and Procedures – Risk Planning – Risk Identification – Risk Analysis – Risk Mitigation – Risk Tracking  Tools – Risk Management Information System – Documentation/Reports
  • Entropy 272
  • Entropy 273
  • KALMAN FILTER PERFORMANCE• Let's look at an example. The system represented by the equation on the slide was simulated on a computer with random bursts of acceleration which had a standard deviation of 0.5 feet/sec2. The position was measured with an error of 10 feet (one standard deviation). The figure shows how well the Kalman Filter was able to estimate the position, in spite of the large measurement noise. 274
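The slide's scenario can be reproduced with a hand-rolled two-state (position, velocity) Kalman filter. The process-noise and measurement-noise values mirror the slide (0.5 ft/sec2 acceleration bursts, 10 ft measurement error), while everything else -- the constant-velocity model, step count, and initial covariance -- is our own illustrative choice:

```python
import random

random.seed(3)

dt = 1.0
accel_std = 0.5    # process noise: random acceleration bursts, ft/sec^2
meas_std = 10.0    # measurement noise: position error, ft

# Filter state estimate [position, velocity] and its 2x2 covariance P.
est_x, est_v = 0.0, 0.0
P = [[100.0, 0.0], [0.0, 100.0]]

# Process-noise covariance Q for a random-acceleration model.
q11 = accel_std ** 2 * dt ** 4 / 4
q12 = accel_std ** 2 * dt ** 3 / 2
q22 = accel_std ** 2 * dt ** 2

true_x, true_v = 0.0, 1.0
sq_err_filter = sq_err_meas = 0.0
steps = 500
for _ in range(steps):
    # Simulate the true system with a random acceleration burst.
    a = random.gauss(0.0, accel_std)
    true_x += true_v * dt + 0.5 * a * dt * dt
    true_v += a * dt
    z = true_x + random.gauss(0.0, meas_std)   # noisy position measurement

    # Predict step: constant-velocity model F = [[1, dt], [0, 1]].
    est_x, est_v = est_x + est_v * dt, est_v
    P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q11,
          P[0][1] + dt * P[1][1] + q12],
         [P[1][0] + dt * P[1][1] + q12,
          P[1][1] + q22]]

    # Update step with H = [1, 0]: only position is observed.
    S = P[0][0] + meas_std ** 2
    k1, k2 = P[0][0] / S, P[1][0] / S          # Kalman gain
    resid = z - est_x
    est_x += k1 * resid
    est_v += k2 * resid
    P = [[(1 - k1) * P[0][0], (1 - k1) * P[0][1]],
         [P[1][0] - k2 * P[0][0], P[1][1] - k2 * P[0][1]]]

    sq_err_filter += (est_x - true_x) ** 2
    sq_err_meas += (z - true_x) ** 2

rmse_filter = (sq_err_filter / steps) ** 0.5
rmse_meas = (sq_err_meas / steps) ** 0.5
print(f"raw measurement RMSE: {rmse_meas:.1f} ft, Kalman estimate RMSE: {rmse_filter:.1f} ft")
```

With these settings the filter's position RMSE typically comes out well below the raw 10 ft measurement error, which is the point the slide makes.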
  • Rudolf Kalman• Rudolf Emil Kálmán - (b. May 19, 1930, Budapest) Mathematician: Developed the Kálmán Filter which is the "greatest discovery in statistics in our century." Kalman filtering is also the method used in GPS (Global Positioning Systems) for navigation.• Kalman emigrated to the US in 1943 and received the bachelor's and master's degrees in electrical engineering from the Massachusetts Institute of Technology in 1953 and 1954 respectively. He received the doctorate degree (D. Sci.) from Columbia University in 1957. In the early years of his career he held research positions at IBM and at the Research Institute for Advanced Studies in Baltimore. From 1962 to 1971, he was at Stanford University. In 1971, he became a graduate research professor and director of the Center for Mathematical System Theory at the University of Florida, recently retiring with emeritus status.• Kalman's contributions to control theory and to applied mathematics and engineering in general have been widely recognized. In 1985, he was one of four recipients of the Kyoto Prize, inaugurated in that year by the Inamori Foundation of Japan. The Kyoto prize, which in 1985 carried a cash award of 45 million yen (then about $200,000), is sometimes referred to as the "Japanese Nobel prize." It recognizes "outstanding intellectual or creative activities which have significantly enriched the human experience." Kalman received the prize in the field of advanced technology. Among the other honors Kalman has received are the Institute of Electrical and Electronics Engineers' highest award, the Medal of Honor (1974), and the American Mathematical Society's Steele Prize (1986), which recognized the fundamental importance of the papers on linear filtering Kalman published in 1960 and 1961. Kalman is a member of the French, Hungarian, and Russian Academies of Sciences and of the National Academy of Engineering, and is a Fellow of the American Academy of Arts and Sciences. 275
  • Judea Pearl states that heuristic methods are based upon intelligent search strategies for computer problem solving, using several alternative approaches. 276
  • Information Awareness Office• The Information Awareness Office (IAO) was established by the Defense Advanced Research Projects Agency (DARPA), the research and development agency of the United States Department of Defense, in January 2002 to bring together several DARPA projects focused on applying information technology to counter transnational threats to national security. The IAO mission was to "imagine, develop, apply, integrate, demonstrate and transition information technologies, components and prototype, closed-loop, information systems that will counter asymmetric threats by achieving total information awareness". Following public criticism that the development and deployment of these technologies could potentially lead to a mass surveillance system, the IAO was defunded by Congress in 2003, although several of the projects run under IAO have continued under different funding.• The IAO was established after Admiral John Poindexter, former United States National Security Advisor to President Ronald Reagan, and SAIC executive Brian Hicks approached the US Department of Defense with the idea for an information awareness program after the terrorist attacks of September 11, 2001.[1]• Poindexter and Hicks had previously worked together on intelligence-technology programs for the Defense Advanced Research Projects Agency. DARPA agreed to host the program and appointed Poindexter to run it in 2002.• The IAO began funding research and development of the Total Information Awareness (TIA) Program in February 2003 but renamed the program the Terrorism Information Awareness Program in May that year after an adverse media reaction to the program's implications for public surveillance. 
Although TIA was only one of several IAO projects, many critics and news reports conflated TIA with other related research projects of the IAO, with the result that TIA came in popular usage to stand for an entire subset of IAO programs.• The TIA program itself was the "systems-level" program of the IAO that intended to integrate information technologies into a prototype system to provide tools to better detect, classify, and identify potential foreign terrorists, with the goal of increasing the probability that authorized agencies of the United States could preempt adverse actions. As a systems-level program of programs, TIA's goal was the creation of a "counterterrorism information architecture" that integrated technologies from other IAO programs (and elsewhere, as appropriate). The TIA program was researching, developing, and integrating technologies to virtually aggregate data, to follow subject-oriented link analysis, to develop descriptive and predictive models through data mining or human hypothesis, and to apply such models to additional datasets to identify terrorists and terrorist groups.• Among the other IAO programs that were intended to provide TIA with component data aggregation and automated analysis technologies were the Genisys, Genisys Privacy Protection, Evidence Extraction and Link Discovery, and Scalable Social Network Analysis programs.• The first mention of the IAO in the mainstream media came from New York Times reporter John Markoff on February 13, 2002.[2] Initial reports contained few details about the program. In the following months, as more information emerged about the scope of the TIA project, civil libertarians became concerned over what they saw as the potential for the development of an Orwellian mass surveillance system.• On August 2, 2002, Dr. 
Poindexter gave a speech at DARPAtech 2002 entitled "Overview of the Information Awareness Office"[3] in which he described the TIA program.• On November 14, 2002, the New York Times published a column by William Safire in which he claimed "[TIA] has been given a $200 million budget to create computer dossiers on 300 million Americans."[4] Safire has been "credited" with triggering the anti-TIA movement.[5]• In addition to the program itself, the involvement of Poindexter as director of the IAO also raised concerns among some, since he had earlier been convicted of lying to Congress and of altering and destroying documents pertaining to the Iran-Contra Affair, although those convictions were later overturned on the grounds that the testimony used against him was protected. 277
  • Information Awareness Office• On January 16, 2003, Senator Russ Feingold introduced legislation to suspend the activity of the IAO and the Total Information Awareness program pending a Congressional review of the privacy issues involved.[6] A similar measure introduced by Senator Ron Wyden would have prohibited the IAO from operating within the United States unless specifically authorized to do so by Congress, and would have shut the IAO down entirely 60 days after passage unless either the Pentagon prepared a report to Congress assessing the impact of IAO activities on individual privacy and civil liberties or the President certified the program's research as vital to national security interests. In February 2003, Congress passed legislation suspending activities of the IAO pending a Congressional report of the office's activities (Consolidated Appropriations Resolution, 2003, No. 108–7, Division M, §111(b) [signed Feb. 20, 2003]).• In response to this legislation, DARPA provided Congress on May 20, 2003 with a report on its activities.[7] In this report, IAO changed the name of the program to the Terrorism Information Awareness Program and emphasized that the program was not designed to compile dossiers on US citizens, but rather to research and develop the tools that would allow authorized agencies to gather information on terrorist networks. Despite the name change and these assurances, critics continued to see the system as prone to potential misuse or abuse.• As a result, House and Senate negotiators moved to prohibit further funding for the TIA program by adding provisions to the Department of Defense Appropriations Act, 2004[8] (signed into law by President Bush on October 1, 2003). 
Further, the Joint Explanatory Statement included in the conference committee report specifically directed that the IAO as program manager for TIA be terminated immediately.[9]• Components of TIA projects that continue to be developed• Despite the withdrawal of funding for the TIA and the closing of the IAO, the core of the project survived.[10] Legislators included a classified annex to the Defense Appropriations Act that preserved funding for TIA's component technologies, if they were transferred to other government agencies. TIA projects continued to be funded under classified annexes to Defense and Intelligence appropriation bills. However, the act also stipulated that the technologies only be used for military or foreign intelligence purposes against foreigners.[11]• TIA's two core projects are now operated by the Advanced Research and Development Activity (ARDA) located among the 60-odd buildings of "Crypto City" at NSA headquarters in Fort Meade, MD. ARDA itself has been shifted from the NSA to the Disruptive Technology Office (run by the Director of National Intelligence). They are funded by the National Foreign Intelligence Program for foreign counterterrorism intelligence purposes.• One technology, now codenamed "Baseball", is the Information Awareness Prototype System, the core architecture meant to integrate all of TIA's information extraction, analysis, and dissemination tools. Work on this project is conducted by SAIC through Hicks & Associates, a consulting arm run by former Defense and military officials, which had originally been awarded a US$19 million IAO contract to build the prototype system in late 2002.[12]• The other project has been re-designated "TopSail" (formerly Genoa II) and would provide IT tools to help anticipate and preempt terrorist attacks. 
SAIC has also been contracted to work on TopSail, including a US$3.7 million contract in 2005.• IAO research• IAO research was conducted along five major investigative paths: secure collaboration problem solving; structured discovery; link and group understanding; context-aware visualization; and decision making with corporate memory.• Among the IAO projects that TIA was intended to integrate were:• Genisys, aimed at developing technologies for virtual data aggregation in order to support effective analysis across heterogeneous databases as well as unstructured public data sources, such as the World Wide Web.[13]• Genisys Privacy Protection, technology to ensure personal privacy and protect sensitive intelligence sources and methods in the context of increasing use of data analysis for detecting, identifying and tracking terrorist threats. These technologies were intended to enable greater access to data for security reasons while protecting privacy by providing critical data to analysts while not allowing access to unauthorized information, focusing on anonymized transaction data and exposing identity only if evidence warrants and appropriate authorization is obtained for further investigation, and ensuring that any misuse of data can be detected and addressed.• Genoa and Genoa II, focused on providing advanced decision-support and collaboration tools to rapidly deal with and adjust to dynamic crisis management and allow for inter-agency collaboration in real time.[14][15]• Evidence Extraction and Link Discovery (EELD), development of technologies and tools for automated discovery, extraction and linking of sparse evidence contained in large amounts of classified and unclassified data sources.[16] 278
  • Information Awareness Office• Scalable Social Network Analysis aimed at developing techniques based on social network analysis for modeling the key characteristics of terrorist groups and discriminating these groups from other types of societal groups.• Among the IAO projects focused on language translation were:• Effective Affordable Reusable Speech-to-text (EARS) to develop automatic speech-to-text transcription technology whose output is substantially richer and much more accurate than previously possible. This program focused on translating spoken language (whether from broadcasts, telephone intercepts, or otherwise) in multiple languages.[13]• Translingual Information Detection, Extraction and Summarization (TIDES) developing advanced language processing technology to enable English speakers to find and interpret critical information in multiple languages without requiring knowledge of those languages.[17]• Other IAO projects not directly related to TIA include:• Human Identification at a Distance (HumanID) to develop automated biometric identification technologies to detect, recognize and identify humans at great distances.[18]• Wargaming the Asymmetric Environment (WAE) focused on developing automated technology capable of identifying predictive indicators of terrorist activity or impending attacks by examining individual and group behavior in broad environmental context and examining the motivation of specific terrorists.[19]• Futures Markets Applied to Prediction (FutureMAP) was intended to harness collective intelligence by researching market-based techniques for avoiding surprise and predicting future events. 
The intent was to explore the feasibility of market-based trading mechanisms to predict political instability, threats to national security, and other major events in the near future.[20]• Babylon, to develop rapid, two-way, natural language speech translation interfaces and platforms for the warfighter for use in field environments for force protection, refugee processing, and medical triage.[21]• Communicator, to develop and demonstrate "dialogue interaction" technology that enables warfighters to talk with computers, such that information will be accessible on the battlefield or in command centers without ever having to touch a keyboard.[22]• Bio-Surveillance, to develop the necessary information technologies and resulting prototype capable of detecting the covert release of a biological pathogen automatically, and significantly earlier than traditional approaches.[23]• The IAO seal• The IAO used the eye of Providence from the Great Seal of the United States gazing at the Earth as its logo, with the Latin motto scientia est potentia, meaning "knowledge is power". The pyramid has 13 steps, the same as the one on the US one-dollar bill.• As criticism of TIA grew in late 2002, the pyramid logo was removed from the official IAO webpage and replaced with a new logo. In response to questions about its removal, the IAO responded in February 2003 with a "Statement regarding the meaning and use of the IAO logo" published as a FAQ.[24] The original descriptions of the IAO, TIA, and the biographies of senior staffers were also removed from the DARPA web site, although they remain widely available on the Internet.[25]• Concerns and criticism• Extensive criticism of the IAO in the traditional media and on the Internet has come from both the left and the right, from civil libertarians and libertarians alike, who believe that massive information aggregation and analysis technologies are a grave threat to privacy and civil liberties. 
Many fear that allowing a government to monitor all communications and map social networks would give it the ability to target dissenters or political threats. This concern is not unfounded: there are several precedents, for example COINTELPRO and other government programs that targeted peaceful political activists in the U.S. The IAO would greatly enhance the ability to identify, track, infiltrate, and target such groups. Such a system of surveillance is a necessary component of a strong totalitarian state.• Proponents believe that development of these technologies is inevitable and that designing systems and policies to control their use is a more effective strategy than simple opposition, which has resulted in research and development projects migrating into classified programs.• On November 27, 2002, San Francisco Weekly columnist Matt Smith decided to illustrate the perils of information proliferation to the IAO director, Adm. John Poindexter, by disclosing Poindexter's private home address and phone number, as well as those of Poindexter's next-door neighbors. This information quickly propagated through the Internet, and some protesters created web sites republishing this and other personal data.[26] 279
  • Prediction Markets at Slate. An account of current online prediction markets by Brendan I. Koerner.• So far the validation of such market predictions seems unsystematic, anecdotal, and based on economisting theology. It is difficult to figure out a good research design that would assess the quality of predictive information deriving from such betting.• Would the same claims for predictive accuracy be made for betting on horse racing? The stock market? Insurance? Gambling? When does participation in predictive markets differ from lotto, a stupidity tax? How do we get good comparison predictions made by other methods? In the face of no other information, the average of all independent predictions should in the long run do well. When is this the case? How well? Under what conditions? How to make quantitative estimates? Are betting pools like taking the average?• Does it matter who participates at what level of investment? What about dependencies among the choices of the participants? Purely strategic betting? How is the prediction affected by the framing of the betting question, as will surely be the case?• Should some predictions receive more weight than others in the aggregation, just as higher-quality studies receive more weight in meta-analysis?• Size of bet does not seem to be a good proxy for how informed the bet is; indeed there might be an inverse relationship! Can knowledge of the prediction affect the predicted outcome, as is often the case in human affairs?• How about a market predicting the conclusions of studies of prediction markets: want to bet on the findings of studies from the University of Chicago School of Business compared to those from a Department of Sociology in France? Or classical vs. behavioral economists?• Prediction markets seem a bit like factor analysis or data mining in statistics: techniques to try when you don't have any good ideas. – Edward Tufte, August 7, 2003 280
  • Prediction Markets• As an economist, I think Edward Tufte somewhat misunderstands the purpose of such markets. It is not to produce more accurate forecasts. The value of such markets is to more efficiently allocate risks. If a risk matters to an individual, say the arrest date of Saddam, and if you believe it will be later than most other people do, and if you are willing to pay something to avoid later arrest dates, and if somebody else is willing to pay to get later arrest dates, then all can gain by trading based on these conditions. Essentially, you would be paying somebody that is more willing to tolerate late arrest dates than you are. It is largely irrelevant whether your or their (or anybody else's) guesses are more accurate than any other prediction.• Having said that, Edward Tufte raises some valid concerns about the operation of such markets. To the extent that it satisfies a taste for gambling rather than real values placed on the risks, the market may be efficient but not serve the public interest. To the extent that bids can be manipulated, etc., this market may not work very well at all. But the evaluation of such markets should look at what their purpose is, the reallocation of risk, and not at the accuracy of different predictions of the future.• -- Dale Lehman (email), August 8, 2003• See also: risk allocation, http://www.businessdictionary.com/definition/risk-allocation.html 281
  • Sigmoid function: a curve having an "S" shape• A sigmoid function is a mathematical function that produces a sigmoid curve, a curve having an "S" shape. Often, sigmoid function refers to the special case of the logistic function. Another example is the Gompertz curve. It is used when modeling systems that saturate at large values of t.• Members of the sigmoid family• The logistic curve• In general, a sigmoid function is real-valued and differentiable, having either a non-negative or non-positive first derivative and exactly one inflection point. There are also two horizontal asymptotes, approached as the argument tends to ±∞.• Besides the logistic function, sigmoid functions include the ordinary arc-tangent, the hyperbolic tangent, and the error function, but also the Gompertz function, the generalised logistic function, and algebraic functions like x/√(1 + x²).• The integral of any smooth, positive, "bump-shaped" function will be sigmoidal; thus the cumulative distribution functions for many common probability distributions are sigmoidal.• See also• Logistic distribution• Logistic regression• Logit• Hyperbolic function• Weibull distribution• References• Tom M. Mitchell, Machine Learning, WCB-McGraw-Hill, 1997, ISBN 0-07-042807-7. In particular see "Chapter 4: Artificial Neural Networks" (in particular pp. 96–97), where Mitchell uses the terms "logistic function" and "sigmoid function" synonymously (he also calls it the "squashing function"); the sigmoid (aka logistic) function is used to compress the outputs of the "neurons" in multi-layer neural nets. 282
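A few of the sigmoid family members listed above can be written directly in Python; each is real-valued, differentiable, S-shaped, with one inflection point at the origin. The function names are illustrative only:

```python
import math

def logistic(x):
    """The logistic function, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def hyperbolic(x):
    """The hyperbolic tangent, range (-1, 1)."""
    return math.tanh(x)

def arctangent(x):
    """The ordinary arc-tangent, range (-pi/2, pi/2)."""
    return math.atan(x)

def algebraic(x):
    """The algebraic sigmoid x / sqrt(1 + x^2), range (-1, 1)."""
    return x / math.sqrt(1.0 + x * x)
```

All four are strictly increasing and approach their horizontal asymptotes as the argument tends to ±∞.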
  • S-Curve 283
  • Ergodic theory• Ergodic theory is a branch of mathematics that studies dynamical systems with an invariant measure and related problems. Its initial development was motivated by problems of statistical physics.• A central aspect of ergodic theory is the behavior of a dynamical system when it is allowed to run for a long period of time. This is expressed through ergodic theorems which assert that, under certain conditions, the time average of a function along the trajectories exists almost everywhere and is related to the space average. The two most important examples are the ergodic theorems of Birkhoff and von Neumann. For the special class of ergodic systems, the time average is the same for almost all initial points: statistically speaking, the system that evolves for a long time "forgets" its initial state.• Stronger properties, such as mixing and equidistribution, have also been extensively studied. The problem of metric classification of systems is another important part of the abstract ergodic theory.• An outstanding role in ergodic theory and its applications to stochastic processes is played by the various notions of entropy for dynamical systems.• Applications of ergodic theory to other parts of mathematics usually involve establishing ergodicity properties for systems of a special kind. In geometry, methods of ergodic theory have been used to study the geodesic flow on Riemannian manifolds, starting with the results of Eberhard Hopf for Riemann surfaces of negative curvature.• Markov chains form a common context for applications in probability theory. Ergodic theory has fruitful connections with harmonic analysis, Lie theory (representation theory, lattices in algebraic groups), and number theory (the theory of diophantine approximations, L-functions). 284
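The statement above, that for an ergodic system the time average along a trajectory matches the space average, can be illustrated with one of the simplest ergodic systems, an irrational rotation of the circle. This is a sketch; the choices α = √2 − 1 and f(x) = x are arbitrary:

```python
import math

def time_average(alpha, n, f=lambda x: x):
    """Average f along the orbit of the circle rotation x -> (x + alpha) mod 1,
    starting from x = 0."""
    x, total = 0.0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# For irrational alpha the rotation is ergodic, so the time average of
# f(x) = x approaches the space average over [0, 1), which is 1/2.
```

Running `time_average(math.sqrt(2) - 1.0, 100_000)` gives a value close to 0.5, the space average, regardless of the fact that the orbit is completely deterministic.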
  • Markov Chains• In mathematics, a Markov chain, named after Andrey Markov, is a stochastic process with the Markov property. Having the Markov property means that, given the present state, future states are independent of the past states. In other words, the description of the present state fully captures all the information that could influence the future evolution of the process. Future states will be reached through a probabilistic process instead of a deterministic one.• At each step the system may change its state from the current state to another state, or remain in the same state, according to a certain probability distribution. The changes of state are called transitions, and the probabilities associated with various state changes are called transition probabilities.• An example of a Markov chain is a simple random walk where the state space is a set of vertices of a graph and the transition steps involve moving to any of the neighbors of the current vertex with equal probability (regardless of the history of the walk). 285
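The definition above can be made concrete with a tiny two-state chain. The weather states and transition probabilities below are invented for illustration; the point is that tomorrow's distribution depends only on today's state, never on earlier history:

```python
import random

# Transition probabilities: rows are today's state, entries are
# the chance of each state tomorrow.
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def step(state):
    """One transition of the chain, sampled from row P[state]."""
    return "sunny" if random.random() < P[state]["sunny"] else "rainy"

def long_run_sunny_fraction(start, n, seed=0):
    """Fraction of the first n steps spent in the 'sunny' state."""
    random.seed(seed)
    state, sunny = start, 0
    for _ in range(n):
        state = step(state)
        sunny += (state == "sunny")
    return sunny / n
```

For this chain the stationary probability of "sunny" solves π = 0.9π + 0.5(1 − π), giving π = 5/6, and the simulated long-run fraction converges to that value regardless of the starting state.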
  • Markov Chains• Markov chain Monte Carlo (MCMC) methods (which include random walk Monte Carlo methods) are a class of algorithms for sampling from probability distributions based on constructing a Markov chain that has the desired distribution as its equilibrium distribution. The state of the chain after a large number of steps is then used as a sample from the desired distribution. The quality of the sample improves as a function of the number of steps.• Usually it is not hard to construct a Markov chain with the desired properties. The more difficult problem is to determine how many steps are needed to converge to the stationary distribution within an acceptable error. A good chain will have rapid mixing: the stationary distribution is reached quickly starting from an arbitrary position (described further under Markov chain mixing time).• Typical use of MCMC sampling can only approximate the target distribution, as there is always some residual effect of the starting position. More sophisticated MCMC-based algorithms such as coupling from the past can produce exact samples, at the cost of additional computation and an unbounded (though finite in expectation) running time.• The most common application of these algorithms is numerically calculating multi-dimensional integrals. In these methods, an ensemble of "walkers" moves around randomly. At each point where the walker steps, the integrand value at that point is counted towards the integral. The walker then may make a number of tentative steps around the area, looking for a place with a reasonably high contribution to the integral to move into next. Random walk methods are a kind of random simulation or Monte Carlo method. However, whereas the random samples of the integrand used in a conventional Monte Carlo integration are statistically independent, those used in MCMC are correlated. A Markov chain is constructed in such a way as to have the integrand as its equilibrium distribution. 
Surprisingly, this is often easy to do.• Multi-dimensional integrals often arise in Bayesian statistics, computational physics and computational biology, so Markov chain Monte Carlo methods are widely used in those fields. For example, see Gill[1] and Robert & Casella[2] 286
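A minimal random-walk MCMC sampler can be sketched as follows. This is the classic Metropolis algorithm with the standard normal as the target; the step size, sample count, and seed are arbitrary choices for illustration:

```python
import math
import random

def metropolis_normal(n, step=1.0, seed=42):
    """Random-walk Metropolis chain whose equilibrium distribution is the
    standard normal. Each proposal is accepted with probability
    min(1, p(x') / p(x)), where p(x) is proportional to exp(-x^2 / 2)."""
    random.seed(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + random.uniform(-step, step)
        # Acceptance ratio p(proposal) / p(x) for the standard normal target.
        if random.random() < math.exp((x * x - proposal * proposal) / 2.0):
            x = proposal  # accept; otherwise the chain stays where it is
        samples.append(x)  # successive samples are correlated, as noted above
    return samples
```

Discarding an initial burn-in (the "residual effect of the starting position" mentioned above) and averaging the remaining samples recovers the target's mean and variance to within sampling error.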
  • Markov Volatility Random Walks• http://demonstrations.wolfram.com/MarkovVolatilityRandomWalks/• A decent first approximation of real market price activity is a lognormal random walk. But with a fixed volatility parameter, such models miss several stylized facts about real financial markets. Allowing the volatility to change through time according to a simple Markov chain provides a much closer approximation to real markets. Here the Markov chain has just two possible states: normal or elevated volatility. Either state tends to persist, with a small chance of transitioning to the opposite state at each time-step. 287
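The model described above can be simulated directly. This sketch follows the same idea as the Wolfram demonstration, not its actual code; the volatilities, switching probability, and starting price are invented parameters:

```python
import math
import random

def regime_switching_path(n, s0=100.0, vols=(0.01, 0.04), p_switch=0.02, seed=7):
    """Lognormal random walk whose per-step volatility follows a two-state
    Markov chain (state 0 = normal, state 1 = elevated volatility).
    Either state persists, with a small chance p_switch of flipping
    to the opposite state at each time-step."""
    random.seed(seed)
    price, state, path = s0, 0, [s0]
    for _ in range(n):
        if random.random() < p_switch:
            state = 1 - state
        # Lognormal step: multiply by exp of a normal return draw.
        price *= math.exp(random.gauss(0.0, vols[state]))
        path.append(price)
    return path
```

The resulting path shows the volatility clustering that a fixed-volatility lognormal walk misses: quiet stretches punctuated by persistent high-volatility episodes.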
  • Copyright• Works produced by the Federal government (meaning works produced by an employee or officer of a federal agency as part of his official duties) are not entitled to copyright under US law. (See 17 U.S.C. § 105.) 288
  • Complex Systems of Systems• mechatronics: electrical, electronic and mechanical engineering• automation, robotics and computer vision• 3D imaging• signal diagnostics• interdependent networks probabilistic modeling• software engineering and knowledge-based computer systems• applied mathematics and operations research• spatial analysis modeling• system reliability and risk assessment• industrial engineering• business system modeling• architectural modeling• industrial lifecycle analysis• social science• production scheduling and control systems• virtual engineering• CSIRO's core technologies can lead to improved management and support of complex systems. SOURCE 290
  • Brownian Motion http://www.aip.org/history/einstein/brownian.htm• Robert Brown (1773-1858): The jiggling of pollen grains he saw is now called Brownian motion. • In 1827 the botanist Robert Brown noticed that if you looked at pollen grains in water through a microscope, the pollen jiggles about. He called this jiggling Brownian motion, but Brown couldn't understand what was causing it. He thought at first the pollen must be alive, but after testing the phenomenon with fine dust particles, he confirmed that the movement was not due to any living organism. Interestingly, much earlier, the Dutch physician Jan Ingenhousz had investigated a number of chemical and physical phenomena and described similar irregular motion of coal dust particles on the surface of alcohol in 1785, but as with Brown, the phenomenon did not attract much scientific attention. • John Dalton (1766-1844): Often called the father of modern chemistry, he was the principal proponent of an atomic theory and published the first table of relative atomic weights. • In 1800, John Dalton (1766-1844), a Quaker from Cumbria, became the Secretary of the Manchester Literary and Philosophical Society [see note 2]. Dalton became one of the most important chemists of his time and through his experimental work promoted the first systematic ideas of an atomic theory. As with all scientific theories, there were many people who contributed their views, and Dalton's achievements rested on those of a number of scientists from France and England [see note 3]. • The first person to describe the mathematics behind Brownian motion was the Danish astronomer Thorvald Thiele in 1880, and later, in 1900, Louis Bachelier, a French mathematician, wrote his PhD thesis on the Theory of Speculation, which was the first ever mathematical analysis of the stock and option markets. Bachelier's work also provided a mathematical account of Brownian motion. 291
  • Golf Video 292
  • Weibull distribution• In probability theory and statistics, the Weibull distribution[2] (named after Waloddi Weibull) is a continuous probability distribution. It is often called the Rosin–Rammler distribution when used to describe the size distribution of particles. The distribution was introduced by P. Rosin and E. Rammler in 1933.[3] The probability density function is: – f(x; k, λ) = (k/λ)(x/λ)^(k−1) e^(−(x/λ)^k) for x > 0, and f(x; k, λ) = 0 for x ≤ 0, where k > 0 is the shape parameter and λ > 0 is the scale parameter of the distribution. Its complementary cumulative distribution function is a stretched exponential.• The Weibull distribution is often used in the field of life data analysis due to its flexibility: it can mimic the behavior of other statistical distributions such as the normal and the exponential. If the failure rate decreases over time, then k < 1. If the failure rate is constant over time, then k = 1. If the failure rate increases over time, then k > 1.• An understanding of the failure rate may provide insight as to what is causing the failures:• Example of reliability: – A decreasing failure rate would suggest "infant mortality". That is, defective items fail early and the failure rate decreases over time as they fall out of the population. – A constant failure rate suggests that items are failing from random events. – An increasing failure rate suggests "wear out": parts are more likely to fail as time goes on.• When k = 1, the Weibull distribution reduces to the exponential distribution. When k = 3.4, the Weibull distribution appears similar to the normal distribution. 293
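The density and the failure-rate behavior described above are easy to check numerically. A minimal sketch (the function names are illustrative, and the hazard formula follows from pdf divided by the survival function):

```python
import math

def weibull_pdf(x, k, lam):
    """f(x; k, lam) = (k/lam) (x/lam)^(k-1) exp(-(x/lam)^k) for x > 0, else 0."""
    if x <= 0:
        return 0.0
    return (k / lam) * (x / lam) ** (k - 1) * math.exp(-((x / lam) ** k))

def failure_rate(x, k, lam):
    """Hazard h(x) = pdf / survival = (k/lam) (x/lam)^(k-1), for x > 0."""
    return (k / lam) * (x / lam) ** (k - 1)
```

Evaluating `failure_rate` at increasing x confirms the three regimes: decreasing for k < 1 ("infant mortality"), constant for k = 1 (random failures, where the pdf reduces to the exponential density), and increasing for k > 1 ("wear out").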
  • Risk Treatments• Risk avoidance – Includes not performing an activity that could carry risk. An example would be not buying a property or business in order to not take on the liability that comes with it. Another would be not flying in order to avoid the risk that the airplane might be hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.• Risk reduction – Involves methods that reduce the severity of the loss or the likelihood of the loss occurring. Examples include sprinklers designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage, and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy. • Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration. • Outsourcing could be an example of risk reduction if the outsourcer can demonstrate higher capability at managing or reducing risks. In this case companies outsource only some of their departmental needs. For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. 
This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a call center.• Risk retention – Involves accepting the loss when it occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that they either cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributable to war is retained by the insured. Also, any amount of potential loss (risk) over the amount insured is retained risk. This may also be acceptable if the chance of a very large loss is small or if the cost to insure for greater coverage amounts is so great it would hinder the goals of the organization too much.• Risk transfer – Means causing another party to accept the risk, typically by contract or by hedging. Insurance is one type of risk transfer that uses contracts. Other times it may involve contract language that transfers a risk to another party without the payment of an insurance premium. Liability among construction or other contractors is very often transferred this way. On the other hand, taking offsetting positions in derivatives is typically how firms use hedging to financially manage risk.• Some ways of managing risk fall into multiple categories. Risk retention pools are technically retaining the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This is different from traditional insurance, in that no premium is exchanged between members of the group up front, but instead losses are assessed to all members of the group. 307