The document discusses Douglas W. Hubbard's book on risk management. It is divided into three parts: 1) introducing the crisis in risk management, 2) flaws in popular risk management practices, and 3) fixing these issues. It examines common risk management methods and their effectiveness, including issues like cognitive biases and incomplete approaches. It outlines the major players in risk management like actuaries, physicists, economists, and consultants. Overall, the document provides an overview and critique of Hubbard's analysis of common risk management techniques and how to improve risk assessment.
This document provides an overview of risk and issue management best practices. It discusses key concepts like the differences between risks and issues, how to prioritize them, and the overall process of identifying, analyzing, taking action, monitoring, reviewing, and reporting on risks and issues over the lifecycle of a project. The goal is to familiarize workshop participants with a standardized terminology and approach to proactively manage risks and issues in order to minimize potential impacts on a project.
This document discusses risk management and analysis. It defines risk management as identifying, analyzing, and responding to risks. Risk analysis helps identify potential problems that could undermine projects or initiatives. The key steps of risk analysis include identifying threats, estimating the likelihood and impact of each threat, and developing risk mitigation strategies. Quantitative techniques like decision trees and expected monetary value analysis can also be used. Ongoing risk monitoring and control is important to evaluate risks and ensure responses remain effective.
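The expected monetary value (EMV) analysis mentioned above can be sketched in a few lines: weight each outcome's monetary impact by its probability and compare the options, as at a decision-tree chance node. The figures below are illustrative assumptions, not taken from the summarized document.

```python
# EMV = sum of (probability x monetary impact) over an option's outcomes.
# All probabilities and amounts here are hypothetical examples.

def emv(outcomes):
    """outcomes: list of (probability, monetary_impact) pairs."""
    return sum(p * impact for p, impact in outcomes)

# Hypothetical decision: accept a risk vs. transfer it via insurance.
accept = emv([(0.3, -100_000), (0.7, 0)])  # 30% chance of a 100k loss
insure = emv([(1.0, -20_000)])             # certain 20k premium

print(round(accept))  # -30000
print(round(insure))  # -20000

# The option with the higher (less negative) EMV is preferred.
best = max([("accept", accept), ("insure", insure)], key=lambda t: t[1])
print(best[0])  # insure
```

In a full decision tree the same calculation is applied at every chance node, rolling expected values back toward the root.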
This document summarizes a website that provides information and resources for project managers on risk management. It includes definitions of project risk, descriptions of the risk management process and tips for identifying, prioritizing, and managing risks. Specific topics covered include risk identification techniques, using a risk matrix, the risk register form, and different strategies for responding to risks such as mitigation, transfer, avoidance and acceptance. Flowcharts and diagrams are provided to illustrate risk management concepts and processes.
This document discusses risk management. It defines risk as a potential problem that may or may not occur. Risks are characterized by uncertainty and potential loss. Risks are categorized as project risks, technical risks, business risks, known risks, predictable risks, and unpredictable risks. The document outlines the steps of risk management as identifying risks, analyzing their probability and impact, ranking risks, and developing contingency plans for high probability/impact risks. It also describes how a risk table can be used to project risks by listing the risk summary, category, probability, impact, and pointer to the risk management plan.
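The risk table described above, which lists each risk's summary, category, probability, impact, and a pointer into the risk management plan, can be sketched as a list of records ranked by exposure (probability times impact) so that high-probability/high-impact risks get contingency plans first. The entries below are hypothetical examples, not taken from the summarized document.

```python
# Minimal risk table: summary, category, probability (0-1), impact (1-5 scale),
# and a pointer to the relevant section of the risk management plan (RMP).
# All entries are illustrative assumptions.

risks = [
    {"summary": "Key developer leaves", "category": "project",
     "probability": 0.3, "impact": 4, "rmp": "RMP section 2.1"},
    {"summary": "Requirements change late", "category": "business",
     "probability": 0.6, "impact": 3, "rmp": "RMP section 3.4"},
    {"summary": "Third-party API deprecated", "category": "technical",
     "probability": 0.2, "impact": 5, "rmp": "RMP section 4.2"},
]

def exposure(r):
    return r["probability"] * r["impact"]

# Rank so the highest-exposure risks surface first.
ranked = sorted(risks, key=exposure, reverse=True)
for r in ranked:
    print(f'{r["summary"]:30} {exposure(r):.2f}  {r["rmp"]}')
```

Sorting by exposure is the simplest ranking rule; real risk tables often also apply qualitative overrides for low-probability, catastrophic-impact items.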
This document discusses risk management in corporate projects. It defines risk as uncertainty that matters, which can include both threats and opportunities that positively or negatively impact project objectives. Risks come in four types: event risks involving uncertain future events, variability risks where certain events have uncertain characteristics, ambiguity risks where the characteristics of certain events are unknown, and emergent risks involving unknown unknowns. The key phases of risk management are identified as risk identification, assessment, response, and monitoring/control. Response strategies should address both threats and opportunities. Managing risk is important for achieving project objectives on time and on budget while also finding potential benefits.
A risk is defined as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives." Risk is inherent in any project, and project managers should assess risks continually and develop plans to address them. The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team to avoid having the analysis become stale and unreflective of actual project risks. Most critically, risk management plans include a risk strategy.
This module on Managing Risk discusses the different types of risk that management must take into account while implementing a project. Other topics covered in this module include the probability-impact matrix, risk quantification, mitigating/transferring risk, risk audits/reviews, a sample risk plan, and how to initiate risk management planning.
Risk Management Process Steps PowerPoint Presentation Slides, by SlideTeam
This complete deck covers all the important concepts and includes relevant templates that cater to your business needs. It contains PPT slides on Risk Management Process Steps with well-suited graphics and subject-driven content. The deck consists of fifty-four slides, all fully editable: you can change the colour, text, and font size, and add or delete content as required. Get access to this professionally designed deck by clicking the download button below.
This document discusses risk management for engineering projects. It defines risk as potential problems that could impact a project's budget, timeline or deliverables. The risk management process involves identifying risks, analyzing their likelihood and impact, planning strategies to avoid or minimize risks, and monitoring risks throughout the project. Common risk types are technology, people, organizational, tools and requirements risks. Risk analysis assesses the probability and consequences of each risk. Risk planning develops avoidance, minimization and contingency strategies. Risk monitoring tracks risks and determines if their likelihood or impact changes over time.
This document discusses risk analysis and management for projects. It defines risk as a potential problem that may or may not occur, and outlines why identifying and planning for risks is important for project success. The document then covers various aspects of risk analysis and management, including risk strategies, categories, identification, assessment, refinement, and developing plans to mitigate, monitor, and manage risks. The overall aim is to help project teams understand risks and put processes in place to avoid or minimize risks that could negatively impact a project.
1. The document discusses risk management standards and processes for construction project management. It outlines ISO 31000:2009 as the key risk management standard and describes the risk management process it establishes.
2. The risk management process involves establishing the context, identifying risks, analyzing and evaluating risks, treating risks, monitoring risks, and communicating about risks.
3. The document also discusses different risk management strategies like risk avoidance, reduction, sharing, and retaining and provides examples of each.
PROBLEMS ARE THE GOLDEN EGGS
Problems? Day by day in our professional lives we face many problems, yet often fail to recognize them, because we have become habituated to facing them. To solve a problem, you must first acknowledge that you are facing one; only then do you have a chance to solve it. Next, determine whether it is a repetitive problem or a new one, and on that basis take further steps: how to break it down, how to analyse it, how to find a countermeasure, how to check whether the countermeasure is suitable, and how to make it a standard. If you want to know more, go through my presentations.
This is my first presentation posted on SlideShare.
The document discusses risk management, including what it is, who uses it, and how it is applied in customs. Specifically:
- Risk management is a systematic process of identifying, analyzing, and responding to risks to reduce losses and take advantage of opportunities. It is used widely in both public and private sectors.
- The key steps in risk management are establishing the context, identifying and analyzing risks, evaluating risks, treating risks, and ongoing communication, monitoring and review.
- Customs administrations use risk management strategies to facilitate trade while maintaining control over cross-border movement of goods and people. It helps customs prioritize resources according to risk level.
This document outlines a risk management module that describes the risk management lifecycle and procedures for managing risk. It discusses introducing risk management and identifying risk categories. It then covers the full procedure for managing risk, including planning, identification, assessment, monitoring, and tracking. It also addresses stakeholder engagement, including risk appetite and tolerance. Finally, it discusses tools and practices for risk analysis, impact analysis, risk mitigation strategies, and qualitative and quantitative analysis. The overall document provides an overview of a comprehensive risk management process.
In a business environment, one of the essential competencies for an effective executive or manager is problem-solving skill. In this basic version, we attempt to give a holistic way of solving problems: step-by-step methodologies and the application of relevant tools and techniques at each step. It is surely useful for beginners.
This risk management outline provides a framework for identifying, assessing, prioritizing, and managing risks. It involves minimizing, monitoring, and controlling unfavorable events while maximizing opportunities. The document lists various types of internal and external risks categorized as strategic, operational, hazard, and financial risks. It also identifies risk categories related to products, manufacturing, quality, and projects. Metrics like likelihood and impact are used to assess risk levels, determine risk appetite, and establish risk tolerance limits. The outline provides templates for risk assessment plans and timelines to implement risk controls.
Risk management is the process of identifying, assessing and controlling threats to an organization's capital and earnings. These threats, or risks, could stem from a wide variety of sources, including financial uncertainty, legal liabilities, strategic management errors, accidents and natural disasters.
Risk management involves determining the probability and impact of process failures and mitigating risks likely to occur with severe impacts. An acceptable risk is determined by evaluating options and consequences to select the most acceptable one. Risk severity is the probability of an event multiplied by its potential negative impact. Ways to deal with risk include proactive risk management to reduce probabilities and impacts, and reactive crisis management with constrained options. The CAPA system connects to risk management by using risk assessments to prioritize CAPAs and elevate issues. An annual product review examines manufacturing, quality, and post-market records over the previous year to support management decisions.
Root Cause Analysis (RCA) is a problem-solving technique that seeks to identify the primary cause of a problem. By focusing on the root cause, organizations can prevent the problem from recurring and develop long-term solutions that improve efficiency, reduce costs, and increase customer satisfaction.
RCA uses tools such as the 5 Whys and Cause & Effect Diagram to identify the underlying causes of a problem. The 5 Whys technique involves asking "why" multiple times to dig deeper into the root cause. The Cause & Effect Diagram categorizes potential causes, such as people, process, and equipment, to identify root causes quickly.
This RCA presentation is designed to provide participants with a comprehensive understanding of Root Cause Analysis (RCA) as a problem-solving technique. The presentation highlights the importance of identifying the root cause of a problem and how RCA can be used to achieve this. Participants will learn how to apply common RCA tools such as the 5 Whys and Cause & Effect Diagram to identify the root cause of a problem. They will also gain knowledge on how to prioritize root causes using a Pareto Chart to focus on the most significant causes first. The presentation will also cover the pitfalls in root cause analysis, highlighting the importance of avoiding making assumptions, involving stakeholders, and making RCA an ongoing process. By the end of the presentation, participants will have a deep understanding of RCA and be equipped with the skills needed to identify and solve problems effectively.
LEARNING OBJECTIVES:
1. Understand the critical role of identifying root causes in effective problem-solving.
2. Apply 5 Whys and Cause & Effect Diagram for practical root cause analysis.
3. Learn to prioritize root causes using Pareto Charts for impactful solutions.
4. Recognize common pitfalls and strategies for overcoming them.
CONTENTS
1. Introduction to Root Cause Analysis
2. Overview of Problem Solving
3. 5 Whys
4. Cause & Effect Diagram
5. Root Cause Prioritization
6. Effective RCA Practices
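The Pareto-chart prioritization covered in the RCA material above can be sketched as a calculation: sort root causes by frequency and take the "vital few" that account for roughly 80% of occurrences. The cause names and counts below are made up for illustration.

```python
# Pareto prioritization: rank causes by count, accumulate until ~80% of
# total occurrences is covered. Names and counts are hypothetical.

causes = {
    "operator error": 42,
    "worn tooling": 25,
    "bad material": 18,
    "calibration drift": 9,
    "other": 6,
}

total = sum(causes.values())  # 100 occurrences in this example
cumulative = 0
vital_few = []
for cause, count in sorted(causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:
        break  # the remaining causes are the "trivial many"

print(vital_few)  # ['operator error', 'worn tooling', 'bad material']
```

Plotting the sorted counts as bars with a cumulative-percentage line gives the familiar Pareto chart; the cutoff threshold (80% here) is a judgment call, not a law.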
The document discusses the risk management process and administration. It begins by explaining the importance of understanding an organization's goals and the different types of risks it may face, such as property, liability, and human resources risks. It then defines three levels of risk impact, critical, important, and unimportant, based on the potential financial impact. The document also discusses measuring risk severity and frequency. It notes that risk management involves implementing programs to address identified risks using various techniques. Finally, it discusses how risk management allows decisions to be reviewed so mistakes can be discovered, and it corrects the misconceptions that risk management applies only to large organizations or that it minimizes the role of insurance.
The presentation about Project Risk Management conducted by Mr. Mohamad Boukhari for the project management community in Lebanon during PMI Lebanon Chapter monthly lecture.
The document provides an overview of project risk management processes and techniques. It discusses qualitative and quantitative risk analysis methods, such as probability/impact matrices and decision trees. Response strategies like risk avoidance, mitigation, and acceptance are also covered. The document aims to equip project managers with tools and best practices for identifying, assessing, and responding to risks throughout the project life cycle.
Risk management is the process of identifying, assessing, and planning for possible risks associated with activities and events. It aims to limit uncertainties, potential dangers, and loss. The document outlines types of risks like physical, emotional, financial, and reputational risks. It also discusses strategies for risk management like risk avoidance, reduction, and transference. The key is being proactive in considering risks and having plans to address them.
Risk Management Lessons from the Current Crisis, by Barry Schachter
The document discusses risk management lessons that can be learned from the current financial crisis. It argues that viewing risk management failures as firm-specific issues and proposing regulatory fixes focused on individual firms is misguided. Instead, risk management should be seen as an adaptive system across an interconnected network of firms. Systemic problems are better addressed by allowing endogenous adaptations to emerge rather than imposing uniform rules that could reduce diversity and flexibility.
Running Head: CURRENT TECHNIQUES IMPLEMENTED IN CONSTRUCTION INDUSTRY TO ELIMINATE SECURITY RISKS
Current Techniques Implemented in the Construction Industry to Eliminate Security Risks
Group 4
Balaram Chekuri
Laxmi Sravani Vallurpallis
Mohan Kadali
Shivasai Pabba
Vaagdevi Jali
University of the Cumberlands
ITS835-41 Enterprise Risk Management
Residency Assignment Research Paper
Professor Dr. James C. Hyatt
10/20/2019
Statement of the Problem and Its Setting
Risk management has been one of the breakthroughs of the modern world, and one of its intellectual achievements is the transformation in how risk is understood: moving from a world that described risk as fate to one that treats risk as an area of study. Risk management is the use of risk analysis to develop management strategies that reduce risk. In project management, techniques fall into two categories: qualitative techniques, which involve impact and probability assessment, expected-value calculations, and influence diagrams; and quantitative techniques, which address overall risk through a more numerical approach, using methods such as decision trees, Monte Carlo analysis, and sensitivity analysis (McNeil et al. 2015). Enormous risks are faced in various fields, and for each there are corresponding quantitative techniques that can be used to address them. The focus is on quantitative risk management techniques: grounded in scientific, mathematical, and statistical methods, they promise the thorough, detailed quantification and management of risk that is imperative for designing a response (Teixeira et al. 2015).
This paper, set in the construction industry, provides an analytic overview of different quantitative risk analysis techniques. The focus is on building on the existing quantitative techniques best suited to the construction industry worldwide, applied after a qualitative risk analysis has been completed, and on examining these techniques and their details further.
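The Monte Carlo analysis named above can be sketched as a simple cost-risk simulation: sample whether each risk in a register occurs on a given trial, add its sampled impact to a base cost, repeat many times, and read off percentiles of the resulting cost distribution. All numbers below are illustrative assumptions, not figures from the paper.

```python
# Monte Carlo cost-risk sketch. Each register entry is
# (probability of occurring, (min_impact, max_impact)); impacts are
# sampled uniformly when the risk fires. All values are hypothetical.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

BASE_COST = 1_000_000
risk_register = [
    (0.4, (50_000, 150_000)),    # e.g. weather delay
    (0.2, (100_000, 300_000)),   # e.g. subcontractor default
    (0.1, (200_000, 500_000)),   # e.g. design rework
]

def one_trial():
    cost = BASE_COST
    for p, (lo, hi) in risk_register:
        if random.random() < p:          # does this risk occur?
            cost += random.uniform(lo, hi)
    return cost

trials = sorted(one_trial() for _ in range(10_000))
p50 = trials[len(trials) // 2]           # median outcome
p80 = trials[int(len(trials) * 0.8)]     # budget with ~80% confidence

print(f"P50 cost: {p50:,.0f}")
print(f"P80 cost: {p80:,.0f}")
```

Real implementations typically replace the uniform impact distributions with triangular or PERT distributions elicited from experts, and model correlations between risks.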
Guiding Questions
· Does Your Risk Management Process Address Root Cause of Failure?
· Are there gaps in this field?
· What Does Your Business Performance Tell You About Risk?
· What Do Controls Tell You About Your Risks?
· Why should practitioners and researchers simplify the existing techniques?
· What are the research gaps in this field, and what areas of further research in construction project risk management would improve the existing techniques?
Assumptions
Project Risk manage ...
A risk is defined as “an uncertain event or condition that, if it occurs, has a positive and negative effect on a project’s objectives.” Risk is inherent with any project, and project managers should assess risk continually and develop plan to address them. The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team in order to avoid having the analysis become stale and not reflective of actual potential project risks. Most critical, risk management plans include a risk strategy.
This module on Managing Risk discusses different type of risk that needs to be taken into account by the management while implementing a project. The other topics converged in this module include probability-impact matrix, Risk Quantification; Mitigating/Transferring risk; Risk audits/Review; Sample Risk plan and how to initiate Risk Management Planning.
Risk Management Process Steps PowerPoint Presentation Slides SlideTeam
It covers all the important concepts and has relevant templates which cater to your business needs. This complete deck has PPT slides on Risk Management Process Steps PowerPoint Presentation Slides with well suited graphics and subject driven content. This deck consists of total of fifty four slides. All templates are completely editable for your convenience. You can change the colour, text and font size of these slides. You can add or delete the content as per your requirement. Get access to this professionally designed complete deck presentation by clicking the download button below.
This document discusses risk management for engineering projects. It defines risk as potential problems that could impact a project's budget, timeline or deliverables. The risk management process involves identifying risks, analyzing their likelihood and impact, planning strategies to avoid or minimize risks, and monitoring risks throughout the project. Common risk types are technology, people, organizational, tools and requirements risks. Risk analysis assesses the probability and consequences of each risk. Risk planning develops avoidance, minimization and contingency strategies. Risk monitoring tracks risks and determines if their likelihood or impact changes over time.
This document discusses risk analysis and management for projects. It defines risk as a potential problem that may or may not occur, and outlines why identifying and planning for risks is important for project success. The document then covers various aspects of risk analysis and management, including risk strategies, categories, identification, assessment, refinement, and developing plans to mitigate, monitor, and manage risks. The overall aim is to help project teams understand risks and put processes in place to avoid or minimize risks that could negatively impact a project.
1. The document discusses risk management standards and processes for construction project management. It outlines ISO 31000:2009 as the key risk management standard and describes the risk management process it establishes.
2. The risk management process involves establishing the context, identifying risks, analyzing and evaluating risks, treating risks, monitoring risks, and communicating about risks.
3. The document also discusses different risk management strategies like risk avoidance, reduction, sharing, and retaining and provides examples of each.
PROBLEMS ARE THE GOLDEN EGGS
problems??? day by day in our proffessional life we faces so many problems, but didn't recognize about the problem. Because we are habituate to facing to problems, if we want to solve the problems, first we can feel YES am facing a problem then you have a chance to solve it... after that we should find is it REPEATATIVE problem or New problem, on the bases of the issue we can take further steps, how to break it. how to analyse, how to find countermeasure, how to check is it suitable or not, how to make standard.... if you want to know gothrough my presentations..
This is my first presentation posted in Slideshare
The document discusses risk management, including what it is, who uses it, and how it is applied in customs. Specifically:
- Risk management is a systematic process of identifying, analyzing, and responding to risks to reduce losses and take advantage of opportunities. It is used widely in both public and private sectors.
- The key steps in risk management are establishing the context, identifying and analyzing risks, evaluating risks, treating risks, and ongoing communication, monitoring and review.
- Customs administrations use risk management strategies to facilitate trade while maintaining control over cross-border movement of goods and people. It helps customs prioritize resources according to risk level.
This document outlines a risk management module that describes the risk management lifecycle and procedures for managing risk. It discusses introducing risk management and identifying risk categories. It then covers the full procedure for managing risk, including planning, identification, assessment, monitoring, and tracking. It also addresses stakeholder engagement, including risk appetite and tolerance. Finally, it discusses tools and practices for risk analysis, impact analysis, risk mitigation strategies, and qualitative and quantitative analysis. The overall document provides an overview of a comprehensive risk management process.
In a business environment ,one of the essential competency for effective executive or manager is problem solving skill.In this basic version, we attempted to give holistic way of solving the problems step by step methodologies and application of of relevant tools & techniques in each step .It is surely useful for beginners.
This risk management outline provides a framework for identifying, assessing, prioritizing, and managing risks. It involves minimizing, monitoring, and controlling unfavorable events while maximizing opportunities. The document lists various types of internal and external risks categorized as strategic, operational, hazard, and financial risks. It also identifies risk categories related to products, manufacturing, quality, and projects. Metrics like likelihood and impact are used to assess risk levels, determine risk appetite, and establish risk tolerance limits. The outline provides templates for risk assessment plans and timelines to implement risk controls.
Risk management is the process of identifying, assessing and controlling threats to an organization's capital and earnings. These threats, or risks, could stem from a wide variety of sources, including financial uncertainty, legal liabilities, strategic management errors, accidents and natural disasters.
what is the definition of risk management
risk management services
risk management certification
risk management for project management
risk management terms
celgene risk management
risk management framework
risk management jobs
business research topics for mba
mba topics for presentation
mba project topics
mba research topics in management
dissertation topics for mba
mba finance research topics
mba topics on strategic management
thesis topic for mba
Risk management involves determining the probability and impact of process failures and mitigating risks likely to occur with severe impacts. An acceptable risk is determined by evaluating options and consequences to select the most acceptable one. Risk severity is the probability of an event multiplied by its potential negative impact. Ways to deal with risk include proactive risk management to reduce probabilities and impacts, and reactive crisis management with constrained options. The CAPA system connects to risk management by using risk assessments to prioritize CAPAs and elevate issues. An annual product review examines manufacturing, quality, and post-market records over the previous year to support management decisions.
[To download this presentation, visit:
https://www.oeconsulting.com.sg/training-presentations]
Root Cause Analysis (RCA) is a problem-solving technique that seeks to identify the primary cause of a problem. By focusing on the root cause, organizations can prevent the problem from recurring and develop long-term solutions that improve efficiency, reduce costs, and increase customer satisfaction.
RCA uses tools such as the 5 Whys and Cause & Effect Diagram to identify the underlying causes of a problem. The 5 Whys technique involves asking "why" multiple times to dig deeper into the root cause. The Cause & Effect Diagram categorizes potential causes, such as people, process, and equipment, to identify root causes quickly.
This RCA presentation is designed to provide participants with a comprehensive understanding of Root Cause Analysis (RCA) as a problem-solving technique. The presentation highlights the importance of identifying the root cause of a problem and how RCA can be used to achieve this. Participants will learn how to apply common RCA tools such as the 5 Whys and Cause & Effect Diagram to identify the root cause of a problem. They will also gain knowledge on how to prioritize root causes using a Pareto Chart to focus on the most significant causes first. The presentation will also cover the pitfalls in root cause analysis, highlighting the importance of avoiding making assumptions, involving stakeholders, and making RCA an ongoing process. By the end of the presentation, participants will have a deep understanding of RCA and be equipped with the skills needed to identify and solve problems effectively.
LEARNING OBJECTIVES:
1. Understand the critical role of identifying root causes in effective problem-solving.
2. Apply 5 Whys and Cause & Effect Diagram for practical root cause analysis.
3. Learn to prioritize root causes using Pareto Charts for impactful solutions.
4. Recognize common pitfalls and strategies for overcoming them.
CONTENTS
1. Introduction to Root Cause Analysis
2. Overview of Problem Solving
3. 5 Whys
4. Cause & Effect Diagram
5. Root Cause Prioritization
6. Effective RCA Practices
The document discusses the risk management process and administration. It begins by explaining the importance of understanding an organization's goals and the different types of risks it may face, such as property, liability, and human resources risks. It then defines three levels of risk impact, critical, important, and unimportant, based on the potential financial impact. The document also discusses measuring risk severity and frequency. It notes that risk management involves implementing programs to address identified risks using various techniques. Finally, it discusses how risk management allows decisions to be reviewed so that mistakes can be discovered, and it corrects the misconceptions that risk management applies only to large organizations or merely minimizes the role of insurance.
A presentation about Project Risk Management, conducted by Mr. Mohamad Boukhari for the project management community in Lebanon during the PMI Lebanon Chapter monthly lecture.
The document provides an overview of project risk management processes and techniques. It discusses qualitative and quantitative risk analysis methods, such as probability/impact matrices and decision trees. Response strategies like risk avoidance, mitigation, and acceptance are also covered. The document aims to equip project managers with tools and best practices for identifying, assessing, and responding to risks throughout the project life cycle.
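For instance, expected monetary value (EMV) analysis, one of the quantitative techniques mentioned above, weights each outcome's monetary impact by its probability. The decision and all figures in this sketch are invented for illustration:

```python
def emv(outcomes):
    """Expected monetary value: sum of probability * payoff.
    outcomes is a list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical decision: accept a risk vs. buy insurance (figures invented).
accept_risk = [(0.9, 0.0), (0.1, -500_000.0)]   # 10% chance of a 500k loss
buy_insurance = [(1.0, -60_000.0)]              # certain premium of 60k

print("EMV, accept risk:  ", emv(accept_risk))
print("EMV, buy insurance:", emv(buy_insurance))
# EMV favours accepting the risk here, but a risk-averse organization
# may still prefer the certain, smaller loss.
```

In a decision tree, each branch of a chance node gets exactly this treatment, and the branch with the best EMV is chosen at each decision node.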
Risk management is the process of identifying, assessing, and planning for possible risks associated with activities and events. It aims to limit uncertainties, potential dangers, and loss. The document outlines types of risks like physical, emotional, financial, and reputational risks. It also discusses strategies for risk management like risk avoidance, reduction, and transference. The key is being proactive in considering risks and having plans to address them.
Risk Management Lessons From The Current Crisis, by Barry Schachter
The document discusses risk management lessons that can be learned from the current financial crisis. It argues that viewing risk management failures as firm-specific issues and proposing regulatory fixes focused on individual firms is misguided. Instead, risk management should be seen as an adaptive system across an interconnected network of firms. Systemic problems are better addressed by allowing endogenous adaptations to emerge rather than imposing uniform rules that could reduce diversity and flexibility.
Current Techniques Implemented in Construction Industry to Eliminate Security Risks (.docx), by healdkathaleen
Running Head: CURRENT TECHNIQUES IMPLEMENTED IN CONSTRUCTION INDUSTRY TO ELIMINATE SECURITY RISKS 2
Current Techniques Implemented in the Construction Industry to Eliminate Security Risks
Group 4
Balaram Chekuri
Laxmi Sravani Vallurpallis
Mohan Kadali
Shivasai Pabba
Vaagdevi Jali
University of the Cumberlands
ITS835-41 Enterprise Risk Management
Residency Assignment Research Paper
Professor Dr. James C. Hyatt
10/20/2019
Statement of the Problem and Its Setting
Risk management is one of the breakthroughs of the modern world, and among its intellectual achievements is the transformation of risk: moving from a world that described risk as fate to one that treats risk as an area of study. Risk management is the use of risk analysis to develop management strategies that reduce risk. In project management there are generally two categories of technique: qualitative, which involves impact and probability assessment, expected value calculations and influence diagrams; and quantitative, which typically focuses on overall risk, is managed with a more numerical approach, and includes techniques such as decision trees, Monte Carlo analysis and sensitivity analysis (McNeil et al. 2015). Every field faces substantial risk, and in each case there are corresponding quantitative techniques that can be used to address it. The focus here is on quantitative risk management techniques; grounded in scientific, mathematical and statistical methods, they promise the thorough and detailed quantification and management of risk that is essential for designing a response (Teixeira et al. 2015).
This paper provides an analytic overview of different quantitative risk analysis techniques in the construction industry. It focuses on identifying which existing quantitative techniques are best suited to the construction industry worldwide once a qualitative risk analysis has been completed, and it examines those techniques in detail.
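Of the quantitative techniques just named, sensitivity analysis is the simplest to sketch. The toy construction cost model, parameter names and ranges below are invented for illustration and do not come from the cited sources:

```python
# One-at-a-time sensitivity analysis on a toy construction cost model.
def project_cost(labour_rate, material_price, delay_weeks):
    """Toy model: total cost in dollars (coefficients are invented)."""
    return 1000 * labour_rate + 500 * material_price + 20_000 * delay_weeks

base = {"labour_rate": 45.0, "material_price": 80.0, "delay_weeks": 2.0}
ranges = {"labour_rate": (40.0, 55.0),
          "material_price": (70.0, 95.0),
          "delay_weeks": (0.0, 6.0)}

swings = {}
for name, (lo, hi) in ranges.items():
    # Vary one input across its range while holding the others at base values.
    costs = [project_cost(**{**base, name: v}) for v in (lo, hi)]
    swings[name] = max(costs) - min(costs)  # width of the "tornado" bar

# Rank the drivers by how much each one alone swings the total cost.
for name in sorted(swings, key=swings.get, reverse=True):
    print(f"{name:15s} swing = {swings[name]:10,.0f}")
```

The ranking tells the analyst which uncertain inputs deserve the most attention; here schedule delay dominates the cost risk.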
Guiding Questions
· Does Your Risk Management Process Address Root Cause of Failure?
· Are there gaps in this field?
· What Does Your Business Performance Tell You About Risk?
· What Do Controls Tell You About Your Risks?
· Why should practitioners and researchers simplify the existing techniques?
· What research gaps exist in this field, and what areas of further research in construction project risk management would improve the existing techniques?
Assumptions
Project Risk manage ...
[Figure 1. Risk Assessment Matrix Chart, based on the overall scenario]
[Figure 1. The overall scenario of risk management analysis, based on survey and guidelines]
Safety Risk Management
Risk management is an activity that integrates the recognition of risk, risk assessment, the development of strategies to manage it, and the mitigation of risk using managerial resources. Traditional risk management focuses on risks stemming from physical or legal causes (e.g. natural disasters, fires, accidents, death). Financial risk management, on the other hand, focuses on risks that can be managed using traded financial instruments. The objective of risk management is to reduce the various risks in a pre-selected domain to an acceptable level. It may address numerous types of threats caused by the environment, technology, humans, organizations and politics. The paper describes the steps in the risk management process, the methods used in each step, and provides some examples of risk and safety management.
The risk management steps are:
1. Establishing goals and context,
2. Identifying risks,
3. Analysing the identified risks,
4. Assessing or evaluating the risks,
5. Treating or managing the risks,
6. Monitoring and reviewing the risks and the risk environment regularly, and
7. Continuously communicating, consulting with stakeholders and reporting.
Some of the risk management tools are described in (IEC 2008) and (Oehmen 2005).
From the overall visualisation of safety risk management, the figure above summarises the risk outcomes across the different zones or fields of work.
The common concept in all definitions is uncertainty of outcomes. Where they differ is in how they characterize outcomes. Some describe risk as having only adverse consequences, while others are neutral.
One description of risk is the following: risk refers to the uncertainty that surrounds future events and outcomes. It is the expression of the likelihood and impact of an event with the potential to influence the achievement of an organization's objectives.
The phrase "the expression of the likelihood and impact of an event" implies that, as a minimum, some form of quantitative or qualitative analysis is required for making decisions
concerning major risks or threats to the achievement of an organization's objectives. For each risk, two calculations are required: its likelihood or probability; and the extent of the impact or consequences.
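A minimal sketch of those two calculations, assuming an illustrative 1-5 scale for likelihood and impact and arbitrary banding thresholds (neither scale nor thresholds are from the source):

```python
# Risk = likelihood x impact, each scored on an assumed 1-5 scale;
# the banding thresholds are arbitrary illustrative choices.
def risk_rating(likelihood, impact):
    score = likelihood * impact
    if score >= 15:
        band = "high"
    elif score >= 6:
        band = "medium"
    else:
        band = "low"
    return score, band

print(risk_rating(4, 5))  # (20, 'high')
print(risk_rating(2, 2))  # (4, 'low')
```

A qualitative analysis replaces the numeric product with a lookup in a likelihood/impact matrix, but the structure of the calculation is the same.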
Establish goals and context: the purpose of this planning stage is to understand the environment in which the organization operates, which means thoroughly understanding both the external environment and the internal culture of the organization.
Identify the risks: using the information gained from the context, particularly as cat.
This book review summarizes Douglas Hubbard's book "The Failure of Risk Management: Why It’s Broken and How to Fix It". The review provides an overview of the book's main arguments. Hubbard argues that current risk management practices have not established adequate standards and metrics to properly measure and mitigate risk. He attributes this to a lack of rigorous quantitative models and the over-reliance on subjective analysis. Hubbard advocates for developing empirical risk measurement approaches, learning from uncertainty systems with calibrated probabilities, and establishing high standards for risk analysis certification to improve practices. The goal is for firms to generate calibrated risk analysis cultures that can diagnose problems and fix issues to minimize downtime during crises.
A case study in Enterprise Risk Management (ERM) showing the paired comparison method used to evaluate risk, allocate ERM resources, and highlight the different perspectives or contexts for different levels of company management.
Management of Risks and Implication on the Nigerian Manufacturing Sector, by Alexander Decker
This document discusses risk management in the Nigerian manufacturing sector. It identifies key risks like employees, suppliers, customers, and competitors. It examines how risks should be assessed and prioritized, with larger risks and those that can't be avoided or transferred retained. Improperly assessing and prioritizing risks can waste time on unlikely events. The study was conducted through surveys and interviews of five manufacturing companies in Nigeria to understand their risk management practices.
Contemporary Approaches in Management of Risk in Engineering Organizations (.docx), by oswald1horne84988
Contemporary Approaches in Management of Risk in Engineering Organizations
Assignment-1
Literature review
Student name: Hari Kiran Penumudi
student id: 217473484
Table of Contents
INTRODUCTION
OBJECTIVES & DELIVERABLES
REVIEW OF LITERATURE
Risk and Risk Management
Risk Management Frameworks
Importance of Risk Management in Engineering
GENERAL PROBLEM STATEMENT
RESEARCH STRATEGY
RESOURCES REQUIREMENTS
PROJECT PLANNING
REFERENCES
Contemporary Approaches in Management of Risk in Engineering Organizations
Introduction
The term 'risk', as defined by the Oxford English Dictionary, is the possibility of meeting with danger or suffering harm. Risk is a serious issue that every organization has to deal with in its everyday operations. However, the nature and magnitude of risks vary widely from organization to organization and often depend on the type of organization. Therefore, organizations, irrespective of their type of operations, keep a risk management team that looks after every risk to which the organization is vulnerable. Organizations in the field of engineering also come across inherent risks that negatively impact their operations. Engineering may be defined as the process of applying science to the practical purposes of designing structures, systems, machines and similar things. Like in every other organization, risk assessment and management is therefore an integral part of engineering organizations. Since the work of engineering is mostly complex, the risks in this area are also very complicated. If risks in the engineering field are not mitigated effectively, they may produce long-term dangers that affect both the organization's services and society as a whole. Hence, the activity of risk management within engineering organizations must be undertaken seriously and measured thoroughly in order to reduce the threat of risks. Amyotte et al. (2006) put it simply: within engineering practice, an inbuilt risk is always present. Studies have found that despite the knowledge of the inherent risks within the field and activity of engineering, organizations do little to impart knowledge about risk management to their engineers. From this arises the need for education regarding risk management approaches. Therefore, this paper tries to find out approaches to the management of risks and the importance of these approaches within the area of engineering. Drawing on contemporary evidence from the literature on risk management approaches, the paper examines how those approaches can be helpful for
This document discusses concepts and approaches related to risk assessment and management. It begins by outlining the key objectives of risk assessment, which include identifying hazards, analyzing and evaluating risk likelihood and impacts, determining control measures, and documenting findings. It then describes various risk classification systems that categorize risks based on factors like political, economic, social, technological, legal, and environmental considerations. Finally, it discusses analyzing risks by defining likelihood and impacts, identifying risk causes and consequences, and considering an organization's risk appetite when evaluating risks. The overall goal of the document is to provide an introduction to performing comprehensive risk assessments.
This document outlines the five step process for hazard identification, risk assessment, and management: 1) identify hazards, 2) determine consequences, 3) determine likelihood, 4) assess risk, and 5) manage risk. It describes how to identify hazards and determine consequences based on 5 factors. It also explains how to determine likelihood and assess risk by multiplying consequence and likelihood scores. Finally, it discusses managing risk through a hierarchy of controls from elimination to personal protective equipment.
This document outlines the five step process to conduct hazard identification, risk assessment, and management: 1) identify hazards, 2) determine consequences, 3) determine likelihood, 4) assess risk level by multiplying consequence and likelihood scores, and 5) manage risk through a hierarchy of controls from elimination to personal protective equipment. It provides details on how to implement each step, including factors to consider for determining consequences and likelihood, and levels of risk control strategies.
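The five steps can be sketched end to end. The 1-5 scoring scale, the banding thresholds, and the mapping from risk level to a control in the hierarchy are all illustrative assumptions, not prescriptions from the document:

```python
# Hierarchy of controls, from most to least effective.
HIERARCHY = ["elimination", "substitution", "engineering controls",
             "administrative controls", "personal protective equipment"]

def assess_risk(consequence, likelihood):
    """Steps 2-4: score each factor 1-5, then multiply (step 4)."""
    score = consequence * likelihood
    if score >= 15:
        level = "extreme"
    elif score >= 10:
        level = "high"
    elif score >= 5:
        level = "moderate"
    else:
        level = "low"
    return score, level

def manage_risk(level):
    """Step 5 (illustrative mapping): the higher the risk, the further
    up the hierarchy the chosen control should sit."""
    preferred = {"extreme": 0, "high": 1, "moderate": 2, "low": 3}
    return HIERARCHY[preferred[level]]

score, level = assess_risk(consequence=5, likelihood=4)
print(score, level, "->", manage_risk(level))  # 20 extreme -> elimination
```

In practice a high score triggers consideration of every control in order, settling for a lower rung only when the higher ones are infeasible; the fixed mapping above is a simplification.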
This essay presents the rationale for risk analysis techniques, with the purpose of estimating costs and risks and establishing measures of protection against damage.
The document discusses findings from a study on whistleblower incentives and protection in finance departments. Three key themes emerged: 1) lack of ethical leadership discourages whistleblowing due to fear of retaliation, 2) mutual mistrust between leaders and staff prevents reporting of unethical behaviors, and 3) without whistleblower protections, corruption continues harming social welfare. The findings validate anticipated themes from literature and suggest finance departments should implement stronger incentives and protections to curb unethical practices through whistleblowing.
A brief and clear argumentation in favour of the personalisation approach in risk management procedures in large companies.
Taken from "Making better risk management decisions" by J. Birkinshaw and H. Jenkins.
Risk identification is the initial step of risk management that involves imagining potential future losses. It is a creative process that can use techniques like brainstorming, counterfactual thinking, and divergent thinking. Risk identification also involves analyzing past issues, current trends, and considering cascading failures, known unknowns, and failures of imagination to comprehensively identify risks. The goal is to create a complete list of risks without repetition.
In spring 2016, PwC investigated the current state and future direction of stress testing. We surveyed 55 insurers operating in the US about their stress testing framework and the specific stresses that they test. We also engaged in more detailed dialogue with a number of insurers in the US and globally, as well as with some North American insurance regulators.
Advantages of Regression Models Over Expert Judgement for Characterizing Cybe..., by Thomas Lee
Expert judgment is the foundation of many risk assessment methodologies. But research robustly documents the inaccuracy of expert judgment with regard to rare events, and large data breach events are rare. Regression models, which are statistical characterizations of cross-company historical events, are substantially more accurate than expert judgment, or even than models built on a foundation of expert judgment.
This chapter discusses modeling decision processes and decision support systems. It covers the typical modeling process of identifying a problem, analyzing requirements, and identifying variables and relationships. A common error in problem definition is a premature focus on solutions rather than fully defining the problem. Tools for structuring problems include influence diagrams and decision trees. Probability can be estimated through frequency, subjectively, or through logic. Techniques for forecasting probabilities include direct estimation, odds forecasting, and comparison forecasting. Sensitivity analysis and value analysis are also discussed.
An analysis of the value of external studies to risk managers, and how to improve them
Once again during the last part of the year, academic institutions, consultancy firms, think-tanks and insurance companies are publishing studies on top risks. But what is the value of these studies to risk managers? The impact on the media and social networks surely justifies the marketing value for the organisations funding the reports. However, the impact on street-level risk managers is to be discussed.
Scoring and Predicting Risk Preferences, by Gurdal Ertek
This study presents a methodology to determine risk scores of individuals for a given financial risk preference survey. To this end, we use a regression-based iterative algorithm to determine the weights for survey questions in the scoring process. Next, we generate classification models to classify individuals into risk-averse and risk-seeking categories, using a subset of survey questions. We illustrate the methodology through a sample survey with 656 respondents. We find that the demographic (indirect) questions can be almost as successful as the risk-related (direct) questions in predicting risk preference classes of respondents. Using a decision-tree based classification model, we discuss how one can generate actionable business rules based on the findings.
http://research.sabanciuniv.edu.
3. The book is divided into three parts:
(1) the first part introduces the crisis in risk management;
(2) the second deals with why some popular risk management practices are flawed;
(3) the third discusses what needs to be done to fix these.
4. Code of Hammurabi – compensation or indemnification for those harmed by bandits or floods.
Careful selection of debtors – called underwriting in insurance.
Development of probability theory and statistics.
5. There are several risk management methodologies and techniques in use; a quick search will reveal some of them. Hubbard begins his book by asking the following simple questions about these:
Do these risk management methods work?
Would any organization that uses these techniques know if they didn't work?
What would be the consequences if they didn't?
6. His contention is that for most organizations the answers to the first two questions are negative.
To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut the lines to all of them. This is an example of common mode failure – a single event causing multiple systems to fail. The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn't been inspected properly (i.e. human error). The probability estimate hadn't considered human oversight, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
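To see why the independence assumption matters so much, compare the joint failure probability with and without a common cause. The numbers below are illustrative assumptions, not figures from the accident report:

```python
# Three redundant systems, each failing independently with probability p.
p = 1e-3                 # assumed per-system failure probability
independent = p ** 3     # joint failure if truly independent: ~1e-9
print(f"independent joint failure: {independent:.1e}")

# A single common-mode event (e.g. debris severing all three lines)
# with probability q defeats the redundancy entirely.
q = 1e-5                 # assumed common-cause probability
total = 1 - (1 - independent) * (1 - q)   # either route causes total failure
print(f"with a common mode:        {total:.1e}")
# With these assumed numbers the common cause dominates by roughly four
# orders of magnitude: a one-in-a-billion style figure is an artifact
# of assuming independence.
```

Any neglected common cause that is even modestly probable swamps the product of small independent probabilities, which is exactly what the one-in-a-billion estimate missed.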
7. Following a very brief history of risk management from historical times to the present, Hubbard
presents a list of common methods of risk management. These are:
Expert intuition – essentially based on “gut feeling”
Expert audit – based on expert intuition of independent consultants. Typically involves the
development of checklists and also uses stratification methods (see next point)
Simple stratification methods – risk matrices are the canonical example of stratification
methods.
Weighted scores – assigned scores for different criteria (scores usually assigned by expert
intuition), followed by weighting based on perceived importance of each criterion.
Non-probabilistic financial analysis – techniques such as computing the financial consequences
of best and worst case scenarios
Calculus of preferences – structured decision analysis techniques such as multi-attribute utility
and analytic hierarchy process. These techniques are based on expert judgements. However, in
cases where multiple judgements are involved these techniques ensure that the judgements are
logically consistent (i.e. do not contradict the principles of logic).
Probabilistic models – involves building probabilistic models of risk events. Probabilities can be
based on historical data, empirical observation or even intuition. The book essentially builds a
case for evaluating risks using probabilistic models, and provides advice on how these should be
built
8. The book also discusses the state of risk management practice (at
the end of 2008) as assessed by surveys carried out by
The Economist, Protiviti and Aon Corporation. Hubbard notes that
the surveys are based largely on self-assessments of risk
management effectiveness. One cannot place much confidence in
these because self-assessments of risk are subject to well known
psychological effects such as cognitive biases (tendencies to base
judgments on flawed perceptions) and the Dunning-Kruger effect
(overconfidence in one’s abilities).
The acid test for any assessment is whether or not it uses sound
quantitative measures. Many of the firms surveyed fail on this
count: they do not quantify risks as well as they claim they do.
Assigning weighted scores to qualitative judgements does not
count as a sound quantitative technique – more on this later.
9. The Dunning–Kruger effect is a cognitive bias in which unskilled people make
poor decisions and reach erroneous conclusions, but their incompetence
denies them the metacognitive ability to recognize their mistakes.[1]
The unskilled therefore suffer from illusory superiority, rating their ability as
above average, much higher than it actually is, while the highly skilled
underrate their own abilities, suffering from illusory inferiority.
Actual competence may weaken self-confidence, as competent individuals
may falsely assume that others have an equivalent understanding.
As Kruger and Dunning conclude, "the miscalibration of the incompetent
stems from an error about the self, whereas the miscalibration of the highly
competent stems from an error about others" (p. 1127).[2] The effect is about
paradoxical defects in cognitive ability, both in oneself and as one compares
oneself to others.
10. So, what are some good ways of measuring
the effectiveness of risk management?
Hubbard lists the following:
Statistics based on large samples
Direct evidence
Component testing
Check of completeness
11. Statistics based on large samples – the use of this depends on the availability of historical or
other data that is similar to the situation at hand.
Direct evidence – this is where the risk management technique actually finds some problem that
would not have been found otherwise. For example, an audit that unearths dubious financial
practices
Component testing – even if one isn’t able to test the method end-to-end, it may be possible to
test specific components that make up the method. For example, if the method uses computer
simulations, it may be possible to validate the simulations by applying them to known situations.
Check of completeness – organisations need to ensure that their risk management methods
cover the entire spectrum of risks, else there’s a danger that mitigating one risk may increase the
probability of another. Further, as Hubbard states, “A risk that’s not even on the radar cannot be
managed at all.” As far as completeness is concerned, there are four perspectives that need to be
taken into account. These are:
Internal completeness – covering all parts of the organisation
External completeness – covering all external entities that the organisation interacts with.
Historical completeness – this involves covering worst case scenarios and historical data.
Combinatorial completeness – this involves considering combinations of events that may occur together;
those that may lead to the common-mode failures discussed earlier.
12. Hubbard begins this section by identifying
the four major players in the risk
management game.
These are:
Actuaries
Physicists and Mathematicians
Economists
Management Consultants
13. These are perhaps the first modern professional risk
managers. They use quantitative methods to manage
risks in the insurance and pension industry.
Although the methods actuaries use are generally sound,
the profession is slow to pick up new techniques.
Further, many investment decisions that insurance
companies make do not come under the purview of
actuaries.
So, actuaries typically do not cover the entire spectrum of
organizational risks.
14. Many rigorous risk management techniques came out of statistical
research done during the second world war. Hubbard therefore
calls this group War Quants.
One of the notable techniques to come out of this effort is the
Monte Carlo Method – originally proposed by Nick Metropolis,
John von Neumann and Stanislaw Ulam as a technique to calculate the
averaged trajectories of neutrons in fissile material (see
this article by Nick Metropolis for a first-person account of how
the method was developed).
Hubbard believes that Monte Carlo simulations offer a sound,
general technique for quantitative risk analysis. Consequently he
spends a fair few pages discussing these methods, albeit at a very
basic level. More about this later.
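The basic mechanics of a Monte Carlo risk model can be sketched in a few lines. In this sketch the two risk events, their probabilities, and their loss ranges are all invented for illustration; they are not figures from Hubbard's book:

```python
import random

def simulate_annual_loss(n_trials=100_000, seed=42):
    """Monte Carlo sketch: total annual loss from two hypothetical risk events.

    Event A: 10% chance per year, loss uniform between $1m and $5m.
    Event B: 2% chance per year, loss uniform between $10m and $50m.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        total = 0.0
        if rng.random() < 0.10:          # event A occurs this simulated year
            total += rng.uniform(1e6, 5e6)
        if rng.random() < 0.02:          # event B occurs this simulated year
            total += rng.uniform(10e6, 50e6)
        losses.append(total)
    losses.sort()
    mean = sum(losses) / n_trials
    p95 = losses[int(0.95 * n_trials)]   # 95th-percentile annual loss
    return mean, p95

mean, p95 = simulate_annual_loss()
print(f"expected annual loss: ${mean/1e6:.1f}m, 95th percentile: ${p95/1e6:.1f}m")
```

Because each trial is an independent sample of "a possible year", the sorted results yield any percentile of the loss distribution directly, which is the kind of output Hubbard favours over single-point scores.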
15. Risk analysts in investment firms often use quantitative techniques from
economics. Popular techniques include modern portfolio theory and models
from options theory (such as the Black-Scholes model). The problem is that
these models are often based on questionable assumptions.
For example, the Black-Scholes model assumes that the rate of return on a stock
is normally distributed (i.e. its value is lognormally distributed) – an assumption
that’s demonstrably incorrect, as witnessed by the events of the last few years.
Another way in which economics plays a role in risk management is through
behavioural studies, in particular the recognition that decisions regarding future
events (be they risks or stock prices) are subject to cognitive biases. Hubbard
suggests that the role of cognitive biases in risk management has been
consistently overlooked.
See my post entitled Cognitive biases as meta-risks and its follow-up for more on
this point.
16. In Hubbard’s view, management consultants and
standards institutes are largely responsible for many of
the ad-hoc approaches to risk management.
A particular favorite of these folks is ad-hoc scoring
methods that involve ordering of risks based on subjective
criteria. The scores assigned to risks are thus subject to
cognitive bias.
Even worse, some of the tools used in scoring can end up
ordering risks incorrectly.
Bottom line: many of the risk analysis techniques used by
consultants and standards bodies have no sound justification.
18. Following the discussion of the main players in the risk arena, Hubbard discusses
the confusion associated with the definition of risk.
There are a plethora of definitions of risk, most of which originated in academia.
Hubbard shows how some of these contradict each other while others are
downright non-intuitive and incorrect.
In doing so, he clarifies some of the academic and professional terminology
around risk.
As an example, he takes exception to the notion of risk as a “good thing” – as in
the PMI definition, which views risk as “an uncertain event or condition that, if it
occurs, has a positive or negative effect on a project objective.”
This definition contradicts common (dictionary) usage of the term risk (which
generally includes only bad stuff). Hubbard’s opinion on this may raise a few
eyebrows (and hackles!) in project management circles, but I reckon he has a
point.
‘The story that I have to tell is marked all the way through by a
persistent tension between those who assert that the best
decisions are based on quantification and numbers, determined
by the patterns of the past, and those who base their decisions
on more subjective degrees of belief about the uncertain future.
This is a controversy that has never been resolved.’
— FROM THE INTRODUCTION TO ‘‘AGAINST THE GODS: THE REMARKABLE STORY OF RISK,’’ BY PETER L. BERNSTEIN
http://www.mckinseyquarterly.com/Peter_L_Bernstein_on_risk_2211
21. Frank H. Knight was one of the founders of the so-called Chicago school of
economics, of which Milton Friedman and George Stigler were the leading
members from the 1950s to the 1980s.
Knight made his reputation with his book Risk, Uncertainty, and Profit, which
was based on his Ph.D. dissertation. In it Knight set out to explain why “perfect
competition” would not necessarily eliminate profits.
His explanation was “uncertainty,” which Knight distinguished from risk.
According to Knight, “risk” refers to a situation in which the probability of an
outcome can be determined, and therefore the outcome insured against.
“Uncertainty,” by contrast, refers to an event whose probability cannot be
known.
Knight argued that even in long-run equilibrium, entrepreneurs would earn
profits as a return for putting up with uncertainty. Knight’s distinction between
risk and uncertainty is still taught in economics classes today.
22. [To differentiate] the measurable uncertainty
and an unmeasurable one we may use the
term “risk” to designate the former and the
term uncertainty for the latter.
23. Probability, then, is concerned with
professedly uncertain [emphasis added]
judgments.
The word risk has acquired no technical
meaning in economics, but signifies here as
elsewhere [emphasis added] chance of
damage or loss.
24. If you wish to converse with me, define your terms
Voltaire
25. Uncertainty. The lack of complete certainty
—that is, the existence of more than one
possibility. The “true”
outcome/state/result/value is not known.
Measurement – A set of probabilities assigned to a
set of possibilities. For example, there is a 60%
chance of rain tomorrow, and a 40% chance it
won’t.
26. By “uncertain” knowledge … I do not mean merely to distinguish what is
known for certain from what is only probable. The game of roulette is not
subject, in this sense, to uncertainty…. The sense in which I am using the
term is that in which the prospect of a European war is uncertain, or the
price of copper and the rate of interest twenty years hence, or the
obsolescence of a new invention…. About these matters, there is no
scientific basis on which to form any calculable probability whatever.
We simply do not know!
27. A state of uncertainty where some of the
possibilities involve loss, injury, catastrophe, or
other undesirable outcome (i.e. something bad
could happen in the future).
Measurement of Risk
A set of possibilities, each with quantifiable
probabilities and quantified losses. For example, “we
believe there is a 40% chance a proposed oil well will
be dry, with a loss of $12m in exploratory drilling costs.”
28. Risk: Well, it certainly doesn't mean standard
deviation. People mainly think of risk in terms of
downside risk. They are concerned about the
maximum they can lose. So that's what risk means.
In contrast, the professional view defines risk in terms
of variance, and doesn't discriminate gains from
losses. There is a great deal of miscommunication and
misunderstanding because of these very different
views of risk. Beta does not do it for most people, who
are more concerned with the possibility of loss.
Daniel Kahneman
29. Measuring risks, especially important long-term ones, is
imprecise and difficult. Virtually none of the economic
statistics reported in the media measure risk.
To fully comprehend risk, we must stretch our
imagination to think of all the different ways that things
can go wrong, including things that have not happened in
recent memory.
We must protect ourselves against fallacies, such as
thinking that just because a risk has not proved damaging
for decades, it no longer exists.
30. Yet another psychological barrier is a sort of ego
involvement in our own success.
Our tendency to take full credit for our successes
discourages us from facing up to the possibility of loss or
failure, because considering such prospects calls into
question our self-satisfaction.
Indeed, self-esteem is one of the most powerful human
needs: a view of our own success relative to others
provides us with a sense of meaning and well-being.
31. So accepting the essential randomness of life is
terribly difficult, and contradicts our deep
psychological need for order and accountability.
We often do not protect the things that we have -
such as our opportunities to earn income and
accumulate wealth - because we mistakenly
believe that our own natural superiority will do
that for us.
32. Risk has to include some probability of loss—
this excludes Knight’s definition.
Risk involves only losses (not gains)---this
excludes PMI’s definition
Outside of finance, volatility may not
necessarily entail risk---this excludes
considering volatility alone as synonymous
with risk.
33. Risk is not just the product of probability and loss.
Multiplying them together unnecessarily presumes
that the decision maker is risk neutral. Keep risk as a
vector quantity where probability and magnitude of
loss are separate until we compare it to the risk
aversion of the decision maker.
Risk can be made of discrete or continuous losses and
associated probabilities. We do not need to make the
distinctions sometimes made in construction
engineering that risk is only discrete events.
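The risk-neutrality point can be made concrete with a small sketch. The two risks below have identical probability-times-loss, yet a risk-averse decision maker values them very differently; all figures and the exponential utility function are invented for illustration:

```python
import math

# Two hypothetical risks with identical expected loss but very different shapes.
risk_a = (0.40, 1e6)    # 40% chance of losing $1m
risk_b = (0.0004, 1e9)  # 0.04% chance of losing $1bn

def expected_loss(risk):
    p, loss = risk
    return p * loss  # collapsing to one number presumes risk neutrality

def certainty_equivalent(risk, tolerance=5e6):
    """Certainty-equivalent loss under an (illustrative) exponential utility
    u(x) = -exp(-x / tolerance); a smaller tolerance means more risk averse."""
    p, loss = risk
    expected_utility = p * -math.exp(loss / tolerance) + (1 - p) * -1.0
    return -tolerance * math.log(-expected_utility)

print(expected_loss(risk_a), expected_loss(risk_b))  # both about $400k
print(round(certainty_equivalent(risk_a)))  # roughly -$424k
print(round(certainty_equivalent(risk_b)))  # roughly -$961m: potentially ruinous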
34. According to the peak-end rule, we judge our past experiences almost entirely
on how they were at their peak (pleasant or unpleasant) and how they ended.
Other information is not lost, but it is not used. This includes net pleasantness
or unpleasantness and how long the experience lasted.
In one experiment, one group of people were subjected to loud, painful noises.
In a second group, subjects were exposed to the same loud, painful noises as
the first group, after which somewhat less painful noises were appended. This
second group rated the experience of listening to the noises as much less
unpleasant than the first group, despite having been subjected to more
discomfort than the first group, as they experienced the same initial duration,
and then an extended duration of reduced unpleasantness.
This heuristic was first suggested by Daniel Kahneman and others. He argues
that because people seem to perceive not the sum of an experience but its
average, it may be an instance of the representativeness heuristic.
35. Why we shouldn’t trust the numbers in our head.
Peak end rule. We tend to remember extremes and
not the mundane.
Misconceptions of chance
▪ (H=heads, T=Tails): HHHTTT or HTHTTH?
▪ Actually they are equally likely. But since the first “appears”
to be less random than the second, it must be less likely.
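The equal likelihood of the two sequences is easy to verify by enumerating every outcome (a quick sketch):

```python
from itertools import product

# Every specific sequence of six fair-coin flips has the same probability.
seqs = ["".join(s) for s in product("HT", repeat=6)]
p = 1 / len(seqs)
print(len(seqs), p)   # 64 sequences, each with probability 0.015625
print(p == 0.5 ** 6)  # True: "HHHTTT" is exactly as likely as "HTHTTH"
```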
36. In my opinion, the most important sections of
the book are chapters 6 and 7, where
Hubbard discusses why “expert knowledge
and opinions” (favoured by standards and
methodologies) are flawed and why a very
popular scoring method (risk matrices) is
“worse than useless.” See my posts on
the limitations of scoring techniques and Cox
’s risk matrix theorem for detailed discussions
of these points.
37. A major problem with expert estimates is overconfidence.
To overcome this, Hubbard advocates using calibrated
probability assessments to quantify analysts’ abilities to
make estimates. Calibration assessments involve getting
analysts to answer trivia questions and eliciting confidence
intervals for each answer. The confidence intervals are
then checked against the proportion of correct answers.
Essentially, this assesses experts’ abilities to estimate by
tracking how often they are right. It has been found that
people can improve their ability to make subjective
estimates through calibration training – i.e. repeated
calibration testing followed by feedback. See this site for
more on probability calibration.
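Scoring a calibration test is straightforward: elicit a 90% confidence interval for each trivia question, then count how often the true answer lands inside. A sketch with hypothetical answers:

```python
def calibration_score(assessments):
    """Fraction of true values falling inside the stated 90% confidence intervals.

    `assessments` is a list of (low, high, true_value) triples; a well-calibrated
    estimator should capture the truth about 90% of the time.
    """
    hits = sum(1 for low, high, truth in assessments if low <= truth <= high)
    return hits / len(assessments)

# Hypothetical trivia responses: (stated low, stated high, actual value)
answers = [
    (1850, 1900, 1876),   # e.g. "year the telephone was patented"
    (300, 600, 980),      # overconfident: truth falls outside the interval
    (5, 15, 11),
    (1000, 3000, 2450),
    (40, 60, 70),         # overconfident again
]
print(calibration_score(answers))  # 0.6, well below the 0.9 a calibrated expert shows
```

Repeated rounds of this scoring with feedback is essentially what calibration training consists of.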
38. Next Hubbard tackles several “red herring”
arguments that are commonly offered as
reasons not to manage risks using rigorous
quantitative methods. Among these are
arguments that quantitative risk analysis is
impossible because:
Unexpected events cannot be predicted.
Risks cannot be measured accurately.
39. Hubbard states that the first objection is invalid because
although some events (such as spectacular stock market
crashes) may have been overlooked by models, it doesn’t
prove that quantitative risk as a whole is flawed.
As he discusses later in the book, many models go wrong
by assuming Gaussian probability distributions where
fat-tailed ones would be more appropriate. Of course,
given limited data it is difficult to figure out which
distribution is the right one.
So, although Hubbard’s argument is correct, it offers little
comfort to the analyst who has to model events before
they occur.
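The Gaussian-versus-fat-tail point can be made numerically. The sketch below uses a Student-t distribution with 3 degrees of freedom as a stand-in for a fat-tailed alternative (an illustrative choice, not one prescribed by Hubbard) and counts how often each model produces events beyond four "standard deviations":

```python
import random

def tail_exceedance(n=200_000, threshold=4.0, seed=1):
    """Count |x| > threshold events under a Gaussian versus a fat-tailed
    Student-t with 3 degrees of freedom. Illustrative only."""
    rng = random.Random(seed)
    normal_hits = t_hits = 0
    for _ in range(n):
        if abs(rng.gauss(0, 1)) > threshold:
            normal_hits += 1
        # Student-t draw: z / sqrt(chi2_df / df); chi-square(k) is Gamma(k/2, 2)
        z = rng.gauss(0, 1)
        chi2 = rng.gammavariate(3 / 2, 2.0)
        if abs(z / (chi2 / 3.0) ** 0.5) > threshold:
            t_hits += 1
    return normal_hits / n, t_hits / n

p_normal, p_t = tail_exceedance()
print(p_normal, p_t)  # the fat-tailed model produces far more extreme events
```

Under the Gaussian, four-sigma events are vanishingly rare; under the fat-tailed alternative they are routine, which is why the choice of distribution matters so much to a risk model.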
41. As far as the second is concerned, Hubbard has written another book on how just about any business
variable (even intangible ones) can be measured.
The book makes a persuasive case that most quantities of interest can be measured, but there are
difficulties.
First, figuring out the factors that affect a variable is not a straightforward task. It depends, among other
things, on the availability of reliable data, the analyst’s experience etc.
Second, much depends on the judgement of the analyst, and such judgements are subject to bias.
Although calibration may help reduce certain biases such as overconfidence, it is by no means a panacea
for all biases.
Third, risk-related measurements generally involve events that are yet to occur.
Consequently, such measurements are based on incomplete information. To make progress one often
has to make additional assumptions which may not be justifiable a priori.
43. Cost analysis, used to develop cost estimates for such things as hardware systems,
automated information systems, civil projects, manpower, and training, can be defined as
1. the effort to develop, analyze, and document cost estimates with analytical
approaches and techniques;
2. the process of analyzing and estimating the incremental and total resources
required to support past, present, and future systems—an integral step in selecting
alternatives; and
3. a tool for evaluating resource requirements at key milestones and decision points in the
acquisition process.
Cost estimating involves collecting and analyzing historical data and applying quantitative
models, techniques, tools, and databases to predict a program’s future cost.
More simply, cost estimating combines science and art to predict the future cost of
something based on known historical data that are adjusted to reflect new materials,
technology, software languages, and development teams.
Because cost estimating is complex, sophisticated cost analysts should combine concepts
from such disciplines as accounting, budgeting, computer science, economics,
engineering, mathematics, and statistics and should even employ concepts from
marketing and public affairs. And because cost estimating requires such a wide range of
disciplines, it is important that the cost analyst either be familiar with these disciplines
or have access to an expert in these fields.
44. They are often used without empirical data or validation – i.e. their inputs and results are not
tested through observation.
They are generally used piecemeal – i.e. used in some parts of an organisation only, and often to
manage low-level, operational risks.
They frequently focus on variables that are not important (because these are easier to measure)
rather than those that are important. Hubbard calls this perverse occurrence measurement
inversion. He contends that analysts often exclude the most important variables because these
are considered to be “too uncertain.”
They use inappropriate probability distributions. The Normal distribution (or bell curve) is not
always appropriate. For example, see my posts on the
inherent uncertainty of project task estimates for an intuitive discussion of the form of the
probability distribution for project task durations.
They do not account for correlations between variables. Hubbard contends that many analysts
simply ignore correlations between risk variables (i.e. they treat variables as independent when
they actually aren’t). This almost always leads to an underestimation of risk because correlations
can cause feedback effects and common mode failures.
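The correlation point can be demonstrated with a small simulation: two skewed loss drivers that load on a common shock have a noticeably fatter-tailed total than the same drivers treated as independent. All numbers here are invented for illustration:

```python
import math
import random

def total_loss_quantile(shared_loading, n=100_000, q=0.99, seed=7):
    """99th-percentile total of two lognormal-style loss drivers.

    Each driver is exp(shared_loading * common_shock + residual noise); with
    shared_loading = 0 the drivers are independent.
    """
    rng = random.Random(seed)
    resid = (1 - shared_loading ** 2) ** 0.5
    totals = []
    for _ in range(n):
        common = rng.gauss(0, 1)
        x1 = shared_loading * common + resid * rng.gauss(0, 1)
        x2 = shared_loading * common + resid * rng.gauss(0, 1)
        totals.append(math.exp(x1) + math.exp(x2))  # skewed losses
    totals.sort()
    return totals[int(q * n)]

q_indep = total_loss_quantile(0.0)
q_corr = total_loss_quantile(0.8)
print(q_indep, q_corr)  # the correlated total has the larger 99th percentile
```

Treating the drivers as independent understates the extreme quantile, which is exactly the underestimation of risk described above.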
45. It turns out that many phenomena can be modeled by this kind of long-tailed distribution. Some of the
better known long-tailed distributions include lognormal and power law distributions.
A quick, informal review of project management literature revealed that lognormal distributions are
more commonly used than power laws to model activity duration uncertainties.
This may be because lognormal distributions have a finite mean and variance whereas power law
distributions can have infinite values for both (see this presentation by Michael Mitzenmacher, for
example). [An aside: if you're curious as to why infinities are possible in the latter, it is because power
laws decay more slowly than lognormal distributions – i.e. they have "fatter" tails, and hence enclose
larger (even infinite) areas.]
In any case, regardless of the exact form of the distribution for activity durations, what’s important and
non-controversial is the short cutoff, the peak, and the long, decaying tail. These characteristics are true
of all probability distributions that describe activity durations.
46. There’s one immediate consequence of the long tail: if you
want to be really, really sure of completing any activity, you
have to add a lot of “air” or safety because there’s a chance that
you may “slip in the shower” so to speak. Hence, many activity
estimators add large buffers to their estimates.
Project managers who suffer the consequences of the resulting
inaccurate schedule are thus victims of the tail.
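The size of that buffer is easy to see by sampling a long-tailed duration distribution. The sketch below uses a lognormal with invented parameters (a 10-day median and sigma of 0.5), lognormal being the common choice noted earlier:

```python
import math
import random

def duration_percentiles(median_days=10.0, sigma=0.5, n=100_000, seed=3):
    """Sample activity durations from a lognormal (illustrative parameters)
    and compare the median with a high-confidence estimate."""
    rng = random.Random(seed)
    mu = math.log(median_days)  # for a lognormal, exp(mu) is the median
    samples = sorted(rng.lognormvariate(mu, sigma) for _ in range(n))
    p50 = samples[n // 2]
    p95 = samples[int(0.95 * n)]
    return p50, p95

p50, p95 = duration_percentiles()
print(f"median ~{p50:.1f} days, 95% confidence needs ~{p95:.1f} days "
      f"({(p95 / p50 - 1) * 100:.0f}% buffer)")
```

With these parameters, an estimator who wants 95% confidence must quote more than double the median duration, which is the "air" referred to above.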
50. One can study randomness at three levels: mathematical, empirical, and behavioral.
Mathematical
The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem because we've pretty much
reached small returns in what we can develop in that branch.
Empirical
The second one is the dynamics of the real world, the dynamics of history, what we can and cannot model, how we can get into the guts of
the mechanics of historical events, whether quantitative models can help us and how they can hurt us.
Behavioral
And the third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not
see, even if our logic recommends otherwise.
▪ We tend to overestimate causal relationships
▪ When we meet someone who by playing Russian roulette became extremely influential, wealthy, and
powerful, we still act toward that person as if he gained that status just by skills, even when you know
there's been a lot of luck. Why?
Because our behavior toward that person is going to be entirely determined by shallow heuristics and very
superficial matters related to his appearance.
Nassim Taleb
52. Adopt the language, tools and philosophy of uncertain systems. To do this he
recommends:
Using calibrated probabilities to express uncertainties. Hubbard believes that any person who
makes estimates that will be used in models should be calibrated. He offers some suggestions
on how people can improve their ability to estimate through calibration – discussed earlier and on
this web site.
Employing quantitative modeling techniques to model risks. In particular, he advocates the
use of Monte Carlo methods to model risks. He also provides a list of commercially available
PC-based Monte Carlo tools. Hubbard makes the point that modeling forces analysts to
decompose the systems of interest and understand the relationships between their
components (see point 2 below).
Developing an understanding of the basic rules of probability including independent events,
conditional probabilities and Bayes’ Theorem. He gives examples of situations in which these
rules can help analysts extrapolate.
To this, I would also add that it is important to understand the idea that an
estimate isn’t a number, but a probability distribution – i.e. a range of numbers,
each with a probability attached to it.
53. Build, validate and test models using reality as the
ultimate arbiter. Models should be built iteratively,
testing each assumption against observation. Further,
models need to incorporate mechanisms (i.e. how and
why the observations are what they are), not just raw
observations. This is often hard to do, but at the very
least models should incorporate correlations between
variables. Note that correlations are often (but not
always!) indicative of an underlying mechanism. See
this post for an introductory example of Monte Carlo
simulation involving correlated variables.
54. In the penultimate chapter of the book, Hubbard fleshes out the
characteristics or traits of good risk analysts. As he mentions several
times in the book, risk analysis is an empirical science – it arises from
experience.
So, although the analytical and mathematical (modeling) aspects of risk
are important, a good analyst must, above all, be an empiricist – i.e.
believe that knowledge about risks can only come from observation of
reality.
In particular, testing models by seeing how well they match historical
data and tracking model predictions are absolutely critical aspects of a
risk analyst’s job.
Unfortunately, many analysts do not measure the performance of their
risk models. Hubbard offers some excellent suggestions on how analysts
can refine and improve their models via observation.
57. Both versions of the law state that the sample average
X̄n = (X1 + X2 + ... + Xn) / n
converges to the expected value µ, where X1, X2, ... is an infinite sequence of i.i.d. random variables
with finite expected value E(X1) = E(X2) = ... = µ < ∞.
An assumption of finite variance Var(X1) = Var(X2) = ... = σ² < ∞ is not necessary. Large or
infinite variance will make the convergence slower, but the LLN holds anyway. This assumption is often
used because it makes the proofs easier and shorter.
The difference between the strong and the weak version is concerned with the mode of convergence being asserted.
The weak law
The weak law of large numbers states that the sample average converges in probability towards the
expected value: for every ε > 0, Pr(|X̄n − µ| > ε) → 0 as n → ∞.
Interpreting this result, the weak law essentially states that for any nonzero margin specified, no matter how small, with a
sufficiently large sample there will be a very high probability that the average of the observations will be close to the
expected value, that is, within the margin.
Convergence in probability is also called weak convergence of random variables. This version is called the weak law
because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as
below.
A consequence of the weak LLN is the asymptotic equipartition property.
The strong law
The strong law of large numbers states that the sample average converges almost surely to the expected
value: Pr(lim n→∞ X̄n = µ) = 1. The proof is more complex than that of the weak law. This law justifies
the intuitive interpretation of the expected value of a random variable as the "long-term average when
sampling repeatedly".
Almost sure convergence is also called strong convergence of random variables. This version is called the strong law
because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability).
The strong law implies the weak law.
The strong law of large numbers can itself be seen as a special case of the ergodic theorem.
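The weak law is easy to observe numerically: the running average of fair-coin flips (expected value 0.5) tightens around 0.5 as the sample grows. A minimal sketch:

```python
import random

rng = random.Random(0)
flips = [rng.random() < 0.5 for _ in range(1_000_000)]  # fair coin, E[X] = 0.5
for n in (100, 10_000, 1_000_000):
    print(n, sum(flips[:n]) / n)  # sample averages settle toward 0.5
```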
58. Bayesian inference uses aspects of the scientific method, which involves collecting
evidence that is meant to be consistent or inconsistent with a given hypothesis. As
evidence accumulates, the degree of belief in a hypothesis ought to change. With enough
evidence, it should become very high or very low. Thus, proponents of Bayesian inference
say that it can be used to discriminate between conflicting hypotheses: hypotheses with
very high support should be accepted as true and those with very low support should be
rejected as false. However, detractors say that this inference method may be biased due to
initial beliefs that one holds before any evidence is ever collected. (This is a form of
inductive bias).
Bayesian inference uses a numerical estimate of the degree of belief in a hypothesis before
evidence has been observed and calculates a numerical estimate of the degree of belief in
the hypothesis after evidence has been observed. (This process is repeated when additional
evidence is obtained.) Bayesian inference usually relies on degrees of belief, or subjective
probabilities, in the induction process and does not necessarily claim to provide an
objective method of induction. Nonetheless, some Bayesian statisticians believe
probabilities can have an objective value and therefore Bayesian inference can provide an
objective method of induction.
59. To convert the probability of event A given event B to
the Probability of event B given event A, we use Bayes’
theorem. We must know or estimate the Probabilities
of the two separate events.
Pr(B|A) = Pr(A|B) Pr(B) / Pr(A)
where, by the Law of Total Probability,
Pr(A) = Pr(A|B) Pr(B) + Pr(A|¬B) Pr(¬B)
The Reverend Thomas Bayes, F.R.S. --- 1701?-1761
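As a numerical check of the two formulas above, the sketch below inverts a conditional probability with invented numbers: B is "component is defective" (1% base rate) and A is "test flags it" (90% true-positive rate, 5% false-positive rate):

```python
def bayes(pr_a_given_b, pr_a_given_not_b, pr_b):
    """Pr(B|A) via Bayes' theorem; Pr(A) comes from the Law of Total Probability."""
    pr_a = pr_a_given_b * pr_b + pr_a_given_not_b * (1 - pr_b)
    return pr_a_given_b * pr_b / pr_a

# 1% defect rate, 90% true-positive rate, 5% false-positive rate (all invented)
print(bayes(0.90, 0.05, 0.01))  # about 0.154: most flagged items are still fine
```

The counterintuitively low result is exactly why inverting conditional probabilities by gut feel tends to go wrong.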
60. ▪ Example of Bayesian search theory
In May 1968 the US nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of
Norfolk, Virginia. The US Navy was convinced that the vessel had been lost off the Eastern seaboard, but an
extensive search failed to discover the wreck. The US Navy's deep water expert, John Craven, USN, believed
that it was elsewhere and he organized a search south west of the Azores based on a controversial approximate
triangulation by hydrophones. He was allocated only a single ship, the Mizar, and he took advice from a firm of
consultant mathematicians in order to maximize his resources. A Bayesian search methodology was adopted.
Experienced submarine commanders were interviewed to construct hypotheses about what could have caused
the loss of the Scorpion.
The sea area was divided up into grid squares and a probability assigned to each square, under each of the
hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to
produce an overall probability grid. The probability attached to each square was then the probability that the
wreck was in that square. A second grid was constructed with probabilities that represented the probability of
successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This
was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives
the probability of finding the wreck in each grid square of the sea if it were to be searched.
This sea grid was systematically searched in a manner which started with the high probability regions first and
worked down to the low probability regions last. Each time a grid square was searched and found to be empty its
probability was reassessed using Bayes' theorem. This then forced the probabilities of all the other grid squares
to be reassessed (upwards), also by Bayes' theorem. The use of this approach was a major computational
challenge for the time but it was eventually successful and the Scorpion was found about 740 kilometers
southwest of the Azores in October of that year.
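The grid update described above can be sketched as follows; the prior grid, the detection probabilities, and the searched square are invented for illustration:

```python
# Bayesian search update: square i is searched and found empty.
# p[i] = prior probability the wreck is in square i
# q[i] = probability of finding the wreck in square i given it is there
# (in the Scorpion search, q was a known function of water depth).

def search_square(p, q, i):
    """Bayes update of the whole grid after an empty search of square i."""
    evidence = 1 - p[i] * q[i]               # Pr(search of i comes up empty)
    new_p = [pj / evidence for pj in p]      # unsearched squares scale up
    new_p[i] = p[i] * (1 - q[i]) / evidence  # searched square scales down
    return new_p

p = [0.5, 0.3, 0.2]   # illustrative prior grid (highest-probability square first)
q = [0.8, 0.5, 0.3]   # illustrative detection probabilities
p = search_square(p, q, 0)
print([round(x, 3) for x in p])
```

Repeating this after every empty search, always moving to the currently most probable square, is the search strategy the slide describes.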
61. Stochastic is synonymous with "random." The word is of
Greek origin and means "pertaining to chance"
(Parzen 1962, p. 7). It is used to indicate that a
particular subject is seen from the point of view of
randomness.
Stochastic is often used as the counterpart of the word
"deterministic," which means that random phenomena
are not involved. Therefore, stochastic models are
based on random trials, while deterministic models
always produce the same output for a given starting
condition.
64. "Stochastic" means being or having a random variable.
A stochastic model is a tool for estimating probability
distributions of potential outcomes by allowing for random
variation in one or more inputs over time. The random
variation is usually based on fluctuations observed in historical
data for a selected period using standard time-series
techniques. Distributions of potential outcomes are derived
from a large number of simulations (stochastic projections)
which reflect the random variation in the input(s).
Its application initially started in physics (sometimes known as
the Monte Carlo Method). It is now being applied in
engineering, life sciences, social sciences, and finance.
65. Valuation
Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry,
however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now
until the claim, investment returns during that period, and so on.
So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with
the best estimate for assets and liabilities, and therefore for the company's level of solvency.
Deterministic approach
The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial
analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on.
The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a
point estimate (the best single estimate of the company's current solvency position) or multiple point estimates, depending
on the problem definition. Selecting and identifying parameter values is frequently a challenge for less experienced analysts.
The downside of this approach is that it does not capture the fact that there is a whole range of possible outcomes, some
more probable and some less.
Stochastic modeling
A stochastic approach would be to set up a projection model that looks at a single policy, an entire portfolio or an entire
company. But rather than setting investment returns according to their most likely estimate, for example, the model uses
random variations to look at what investment conditions might be like.
Based on a set of random outcomes, the experience of the policy/portfolio/company is projected and the outcome is noted.
Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times.
At the end, a distribution of outcomes is available which shows not only the most likely estimate but also what ranges are
reasonable.
This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A
deterministic simulation with varying scenarios for future investment returns does not provide a good way of estimating the
cost of providing this guarantee, because it does not allow for the volatility of investment returns in each future time
period, or for the chance that an extreme event in a particular time period leads to an investment return below the
guarantee. Stochastic modeling builds volatility and variability (randomness) into the simulation and therefore provides
a better representation of real life from more angles.
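A minimal sketch of this kind of stochastic projection, assuming normally distributed annual returns with made-up mean and volatility (a real valuation model would use calibrated return distributions):

```python
# Stochastic projection sketch: simulate many random return paths and
# estimate the expected cost of a 5% p.a. minimum-return guarantee.
# The 7% mean / 15% volatility figures are illustrative, not calibrated.
import random

random.seed(1)
N, YEARS, GUARANTEE = 10_000, 10, 0.05

shortfalls = []
for _ in range(N):
    fund = guaranteed = 1.0
    for _ in range(YEARS):
        fund *= 1 + random.gauss(0.07, 0.15)  # one random annual return
        guaranteed *= 1 + GUARANTEE           # guaranteed floor grows at 5%
    shortfalls.append(max(guaranteed - fund, 0.0))  # cost of topping up

print(f"expected guarantee cost per unit invested: {sum(shortfalls) / N:.3f}")
```

A deterministic best-estimate projection (fund growing at 7% every year) would report this cost as zero, which is exactly the weakness the slide describes.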
66. Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled
degrees of freedom, such as liquids, disordered materials, strongly coupled solids, and cellular structures
(see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling
phenomena with significant uncertainty in inputs, such as the calculation of risk in
business (for its use in the insurance industry, see stochastic modeling). A classic use is for the
evaluation of definite integrals, particularly multidimensional integrals with complicated boundary
conditions.
Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate
investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method is
intended for financial analysts who want to construct stochastic or probabilistic financial models as
opposed to the traditional static and deterministic models.
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied
fields, and have diverse applications from complicated quantum chromodynamics calculations to designing
heat shields and aerodynamic forms.
Monte Carlo methods have also proven efficient in solving coupled integro-differential equations of
radiation fields and energy transport, and thus these methods have been used in global illumination
computations which produce photorealistic images of virtual 3D models, with applications in video games,
architecture, design, computer generated films, special effects in cinema, business, economics and other
fields.
Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find
the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a
random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime, but x says that it
might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime"
when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind
produces one answer with a guarantee (n is composite, and x proves it so) and another without one (but
with a guarantee of not getting this answer when it is wrong too often: in this case, at most 25% of the
time). See also Las Vegas algorithm for a related, but different, idea.
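The primality example above can be sketched with the Miller-Rabin test, the usual modern form of Rabin's algorithm; each random witness catches a composite n with probability at least 3/4, so k rounds leave at most a (1/4)^k chance of wrongly reporting "probably prime":

```python
# Miller-Rabin probabilistic primality test (sketch of the Monte Carlo
# algorithm described above).
import random

def miller_rabin(n, rounds=10):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # write n - 1 = 2**r * d with d odd
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # witness found: n is certainly composite
    return True            # "probably prime" (error probability <= 4**-rounds)

print(miller_rabin(97), miller_rabin(91))   # 97 is prime, 91 = 7 * 13
```

A "composite" answer comes with a proof; a "probably prime" answer carries only the 4^-rounds error bound, which is the one-sided guarantee the slide describes.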
67. This Demonstration shows how to analyze
lifetime test data by fitting it to a Weibull
distribution.
The data are fit on a log-log plot by a least-squares
fitting method.
The results are presented as Weibull distribution
CDF and PDF plots.
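The fitting technique the Demonstration uses can be sketched as follows; the failure times and the median-rank plotting positions are illustrative assumptions. The Weibull CDF F(t) = 1 - exp(-(t/eta)^beta) linearizes to ln(-ln(1 - F)) = beta ln t - beta ln eta, so an ordinary least-squares line on those axes yields both parameters:

```python
# Least-squares Weibull fit on linearized (log-log) axes.
# Failure times below are made up for illustration.
import math

times = sorted([42.0, 55.0, 63.0, 71.0, 80.0, 94.0, 110.0, 130.0])
n = len(times)

xs = [math.log(t) for t in times]
# median-rank estimate of F for each ordered failure time
ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]

# ordinary least-squares slope (beta) and derived scale (eta)
mx, my = sum(xs) / n, sum(ys) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
eta = math.exp(mx - my / beta)   # intercept = -beta * ln(eta)

print(f"shape beta ~ {beta:.2f}, scale eta ~ {eta:.1f}")
```

beta > 1 indicates wear-out failures, beta < 1 infant mortality; eta is the characteristic life (the time by which about 63.2% of units have failed).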
68. The probability density function (PDF, upper plot) is the derivative
of the cumulative distribution function (CDF, lower plot). This elegant
relationship is illustrated here. The default plot of the PDF answers
the question, "How much of the distribution of a random variable
is found in the filled area; that is, how much probability mass is
there between observation values equal to or more than 64 and
equal to or fewer than 70?"
The CDF is more helpful. By reading the y axis you can estimate the
probability of a particular observation within that range: take the
difference between 90.8%, the probability of values below 70, and
25.2%, the probability of values below 63, to get 65.6%.
69. http://demonstrations.wolfram.com/ConnectingTheCDFAndThePDF/
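The CDF-difference reading described above can be reproduced with a normal distribution; the mean and standard deviation below are assumptions chosen only so the numbers land near the slide's 25.2% and 90.8% figures:

```python
# P(63 < X < 70) as a difference of CDF values.
# mu and sigma are illustrative, picked to roughly match the slide.
from statistics import NormalDist

d = NormalDist(mu=65.34, sigma=3.5)   # assumed distribution
lo, hi = d.cdf(63.0), d.cdf(70.0)     # ~0.252 and ~0.908
print(f"P(63 < X < 70) = {hi - lo:.3f}")
```

Subtracting the two CDF readings gives about 0.656, matching the 65.6% in the text.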
70. I noticed you downloaded Mathematica Player. I assume you found lots of great Demonstrations to utilize within
your curriculum, but if not (or if you had trouble figuring out how to use them), here's a video that will help you get
started:
http://www.wolfram.com/videos/discoverdemonstrations
Most people find the deployment of existing Demonstrations extremely useful in illustrating concepts to their
students, and often want to make their own models showing specific ideas interactively within the class. If that
applies to you, here's a second video that teaches you how to make models:
http://www.wolfram.com/screencasts/makingmodels
I would be happy to walk you through the Demonstrations process if you have any questions or concerns. Please
let me know how I can help make your classroom an interactive environment. If there are topics you'd like to see
Demonstrations for in the future, I look forward to hearing those suggestions as well.
Sincerely,
Scott Rauguth
Academic Marketing Manager
Wolfram Research, Inc.
http://www.wolfram.com
P.S. Did you know that the Wolfram Education Group offers free online seminars for training and development of
Mathematica proficiency, including Creating Demonstrations? Visit:
http://www.wolfram.com/seminars/s14.html
71. An example of a statistical macroscopic relation is the distribution of the magnitude of earthquakes. If N(E) is the annual
mean number of earthquakes (in a zone or worldwide) of size E (energy released), then empirically one finds, over a wide
range,

N(E) ∝ E^(−β)    (7.1)

with β a constant. The relation (7.1) is called the Gutenberg-Richter law and is obviously a statistical relation for observables:
it does not specify when an earthquake of some magnitude will occur but only what the mean distribution of their
magnitudes is.
The Gutenberg-Richter law is a power law and is therefore scale-invariant: a change of scale in E can be absorbed in a
normalization constant, leaving the form of the law invariant. The scale-invariance of the law implies a scale-invariance in the
phenomena themselves: earthquakes happen on all scales and there is no typical or mean magnitude! There are many other
natural phenomena which exhibit power laws over a wide range of the parameters: volcanic activity, solar flares, charge
released during lightning events, length of streams in river networks, forest fires, and even the extinction rate of biological
species! Some of these power laws refer to spatial scale-free structures, or fractals, while some others refer to temporal
events and are examples of the ubiquitous "one-over-f" phenomena (see chapter 2).
Can the frequent appearance of such power laws in complex systems be explained in a simple way? Note that the systems
mentioned above are examples of dissipative structures, with a slow but constant inflow of energy and its eventual
dissipation. The systems are clearly out of equilibrium, since we know that equilibrium systems tend towards uniformity
rather than complexity. On the other hand, the abovementioned systems display scale-free behaviour similar to that
exhibited by equilibrium systems near a critical point of a second-order phase transition. However, while the critical point in
equilibrium systems is reached only for some specific value of an external parameter, such as temperature, for the dissipative
structures above the scale-free behaviour appears to be robust and does not seem to require any fine-tuning.
Bak and collaborators proposed that many dissipative complex systems naturally self-organise to a critical state, with the
consequent scale-free fluctuations giving rise to power laws. In short, the proposal is that self-organised criticality is the
natural state of large complex dissipative systems, relatively independent of initial conditions. It is important to note that
while the critical state in an equilibrium second-order phase transition is unstable (slight perturbations move the system
away from it), the critical state of self-organised systems is stable: systems are continually attracted to it! The idea that many
complex systems are in a self-organised critical state is intuitively appealing because it is natural to associate complexity with
a state that is balanced at the edge between total order and total disorder (sometimes loosely referred to as the "edge of
chaos"). Far from the critical point, one typically has a very ordered phase on one side and a greatly disordered phase on the
other side. It is only at the critical point that one has large correlations among the different parts of a large system, thus
making it possible to have novel emergent properties, and in particular scale-free phenomena. In addition to the examples
mentioned above, self-organised criticality has also been proposed to apply to economics, traffic jams, forest fires and even
the brain!
72. An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail, to the left are the few that dominate
(also known as the 80-20 rule).
A power law is any polynomial relationship that exhibits the property of scale invariance. The most common power laws relate two variables and
have the form

f(x) = a x^k + o(x^k),

where a and k are constants, and o(x^k) is an asymptotically small function of x. Here, k is typically called the scaling exponent, denoting the fact
that a power-law function (or, more generally, a kth-order homogeneous polynomial) satisfies the criterion

f(cx) = c^k f(x) ∝ f(x),

where c is a constant. That is, scaling the function's argument changes the constant of proportionality as a function of the scale change, but
preserves the shape of the function itself. This relationship becomes clearer if we take the logarithm of both sides (or, graphically, plot on a
log-log graph):

log f(x) = k log x + log a.

Notice that this expression has the form of a linear relationship with slope k; scaling the argument induces a linear shift (up or down) of the
function and leaves both the form and the slope k unchanged.
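Both properties, the constant rescaling factor and the straight line on a log-log plot, can be checked numerically; the values of a, k, and c below are arbitrary:

```python
# Scale invariance of a power law f(x) = a * x**k:
# rescaling x by c multiplies f by the constant c**k, and
# log f is linear in log x with slope k. Constants are arbitrary.
import math

a, k, c = 2.0, -1.5, 10.0
f = lambda x: a * x ** k

# rescaling the argument only changes the constant of proportionality
ratios = [f(c * x) / f(x) for x in (1.0, 3.0, 7.0)]
print(ratios)                 # every ratio equals c**k

# log-log linearity: the slope between any two points is k
slope = (math.log(f(5.0)) - math.log(f(2.0))) / (math.log(5.0) - math.log(2.0))
print(round(slope, 6))        # -> -1.5
```

This is why power-law data is usually identified by looking for a straight line on a log-log plot.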
Power-law relations characterize a staggering number of natural patterns, and it is primarily in this context that the term power law is used rather
than polynomial function. For instance, inverse-square laws, such as gravitation and the Coulomb force are power laws, as are many common
mathematical formulae such as the quadratic law of area of the circle. Also, many probability distributions have tails that asymptotically follow
power-law relations, a topic that connects tightly with the theory of large deviations (also called
extreme value theory), which considers the frequency of extremely
rare events like stock market crashes, and large natural disasters.
Scientific interest in power law relations, whether functions or distributions, comes primarily from the ease with which certain general classes of
mechanisms can generate them. That is, the observation of a power-law relation in data often points to specific kinds of mechanisms that underlie
the natural phenomenon in question, and can often indicate a deep connection with other, seemingly unrelated systems (for instance, see both
the reference by Simon and the subsection on universality below). The ubiquity of power-law relations in physics is partly due to
dimensional constraints, while in complex systems, power laws are often thought to be signatures of hierarchy and robustness. A few
notable examples of power laws are the Gutenberg-Richter law for earthquake sizes, Pareto's law of income distribution, or structural self-
similarity of fractals, and scaling laws in biological systems. Research on the origins of power-law relations, and efforts to observe and
validate them in the real world, is extremely active in many fields of modern science, including physics, computer science, and linguistics.
74. When NASA missions are under tight time
and budget constraints, they tend to cut
component tests more than anything else.
And less testing means more failures.
76. United Airlines Flight 232 was a scheduled flight from
Stapleton International Airport in Denver, Colorado, to
O'Hare International Airport in Chicago, with
continuing service to Philadelphia International Airport.
On July 19, 1989, the DC-10 (Registration N1819U)
operating the route crash-landed in Sioux City, Iowa,
after suffering catastrophic failure of its tail-mounted
engine, which led to the loss of all flight controls.
111 people died in the accident, while 185 survived.
79. Investigators were able to recover the aircraft's tailcone as well as half of the fan
containment ring. Also found were fan blade fragments and parts of the
hydraulic lines. Three months after the accident, two pieces of the engine fan
disk were found in the fields near where the first pieces were located. Together
the pieces made up nearly the entire fan disk assembly.
Two large fractures were found in the disk, indicating overstress failure.
Metallurgical examination showed that the primary fracture had resulted from a
fatigued section on the inside diameter of the disk.
Further examination showed that the fatigue had originated at a small cavity on the
surface of the disk, apparently a manufacturing defect.
The 17-year-old disk had undergone routine maintenance and had six times been
subjected to fluorescent penetrant inspections. Investigators concluded that human error,
in failing to properly identify the fatigued area before the accident, was responsible.
81. In 1971 a Pan American 747 struck approach light structures for the reciprocal runway as it lifted off the runway at San Francisco Airport. Major
damage to the belly and landing gear resulted, which caused the loss of hydraulic fluid from three of its four flight control systems. The fluid which
remained in the fourth system gave the captain very limited control of some of the spoilers, ailerons, and one inboard elevator. That was sufficient
to circle the plane while fuel was dumped and then to make a hard landing. There were no fatalities, but there were some injuries.[31]
In 1981, Eastern Airlines Flight 935, operated by a Lockheed L-1011 suffered a similar kind of massive failure of its tail mounted number two engine.
The shrapnel from that engine inflicted damage on all four of its hydraulic systems, which were also close together in the tail structure. Fluid was
lost in three of the four systems. While the fourth hydraulic system was impacted with shrapnel too, it was not punctured. The hydraulic pressure
remaining in that fourth system enabled the captain to land the plane safely with some limited use of the outboard spoilers, the inboard ailerons,
and the horizontal stabilizer, plus differential engine power of the remaining two engines. There were no injuries.[32]
In 1985 Japan Airlines flight 123, a Boeing 747, suffered a rupture of the pressure bulkhead in its tail section. The damage was extensive and caused
the loss of fluid in all four of its hydraulic control systems. The pilots were able to keep the plane airborne for almost 30 minutes using differential
engine power, but eventually control was lost, and the plane crashed in mountainous terrain. There were only 4 survivors among the 524 on board.
This accident is the deadliest single-aircraft accident in history.[33]
In 1994, RA85656, a Tupolev Tu-154 operating as Baikal Airlines Flight 130, crashed near Irkutsk shortly after departing from Irkutsk Airport, Russia.
Damage to the starter caused a fire in engine number two (located in the rear of fuselage). High temperatures during the fire destroyed the tanks
and pipes of all three hydraulic systems. The crew lost control of the aircraft. The unmanageable plane, at a speed of 275 knots, hit the ground at a
dairy farm and burned. All passengers and crew, as well as a dairyman on the ground, died.[34]
In 2003, OO-DLL, a DHL Airbus A300 was struck by a surface-to-air missile shortly after departing from Baghdad International Airport, Iraq. The
missile struck the port side wing, rupturing a fuel tank and causing the loss of all three hydraulic systems. With the flight controls disabled, the crew
was able to use differential thrust to execute a safe landing at Baghdad. This is the first and only documented time anyone has managed to land a
transport aircraft safely without working flight controls.[35]
The disintegration of a turbine disc, leading to loss of control, was a direct cause of two major aircraft disasters in Poland:
On March 14, 1980, LOT Polish Airlines Flight 007, an Ilyushin Il-62, attempted a go-around when the crew experienced troubles with a gear
indicator. When thrust was applied, low pressure turbine disc in engine number 2 disintegrated because of material fatigue; parts of the disc
damaged engines number 1 and 3 and severed control pushers for both horizontal and vertical stabilizers. After 26 seconds of uncontrolled
descent, the aircraft crashed, killing all 87 people on board.[36]
On May 9, 1987, improperly assembled bearings in engine number 2 on LOT Polish Airlines Flight 5055 overheated and exploded during cruise over
Lipniki village, causing the shaft to break in two; this caused the low pressure turbine disc to spin to enormous speeds and disintegrate, damaging
engine number 1 and cutting the control pushers. The crew managed to return to Warsaw, using nothing but trim tabs to control the Il-62M, but on
the final approach, the trim controlling links burned and the crew completely lost control over the aircraft. Soon after, it crashed on the outskirts of
Warsaw; all 183 on board perished. Had the plane stayed airborne for 40 seconds more, it would have been able to reach the runway.[37]
82. It was featured in an episode of Seconds From
Disaster on the National Geographic Channel
and MSNBC Investigates on the MSNBC news
channel.
The History Channel distributed a
documentary named Shockwave; a portion of
Episode 7 (originally aired January 25, 2008)
detailed the events of the crash.
84. Transparency
"Sunlight is said to be the best of disinfectants."
Louis Dembitz Brandeis was an Associate Justice of the
Supreme Court of the United States from 1916 to 1939.
85. Brandeis made his famous statement that "sunlight is said to be the best of
disinfectants" in a 1913 Harper's Weekly article entitled "What Publicity Can Do."
But it was an image that had been in his mind for decades.
Twenty years earlier, in a letter to his fiancée, Brandeis had expressed an interest in
writing "a sort of companion piece" to his influential article on "The Right to
Privacy," but this time he would focus on "The Duty of Publicity."
He had been thinking, he wrote, "about the wickedness of people shielding
wrongdoers & passing them off (or at least allowing them to pass themselves off) as
honest men."
He then proposed a remedy: "If the broad light of day could be let in upon men's
actions, it would purify them as the sun disinfects."
Interestingly, at that time the word "publicity" referred both to something like what
we think of as "public relations" and to the practice of making information widely
available to the public (Stoker and Rawlins, 2005).
That latter definition sounds a lot like what we now mean by transparency.
86. All documents to be made available to the public
Public hearings
Independent peer reviews
87. The decision to go ahead with a project
should, wherever possible, be made
contingent on the willingness of private
financiers to participate without a sovereign
guarantee.
88. Infrastructure grants will let local officials
spend the funds at their discretion, but every
dollar they spend on one type of
infrastructure reduces their ability to fund
another.
90. "In no other branch of mathematics is it so
easy to blunder as in probability theory."
Martin Gardner, "Mathematical Games," Scientific American, October 1959, pp. 180-182
91.
93. The Senate committee hearings that Pecora
led probed the causes of the Wall Street
Crash of 1929 and launched a major reform
of the American financial system.
"Pitch darkness was among the bankers'
strongest allies."
94. “Economists for decades have shown that
transparency lowers margins, leads to
greater liquidity and more competition in the
marketplace…Transparent pricing is also a
critical feature of lowering the risk at banks,
and at the derivatives clearinghouses as
well." Gary Gensler, Chairman of the Commodity Futures Trading
Commission, NY Times, 27 November 2011
95. Spurred by these revelations, the United
States Congress enacted the Glass–Steagall
Act, the Securities Act of 1933 and the
Securities Exchange Act of 1934.
96. Judgment Under Uncertainty:
Heuristics and Biases. Amos Tversky
and Daniel Kahneman
Science, Volume 185, 1974
Research for DARPA N00014-73C-0438, monitored by ONR and the
Research and Development Authority of
Hebrew University, Jerusalem, Israel.
97. Biases in the evaluation of compound events
are particularly significant in the context of
planning. The successful completion of an
undertaking, such as the development of a
new product, typically has a conjunctive
character: for the undertaking to succeed,
each of a series of events must occur. Even
when each of these events is very likely, the
overall probability of success can be quite low
if the number of events is large.
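The compounding effect described above can be made concrete with a short calculation; the 95% per-event probability and the step counts are illustrative:

```python
# Conjunctive events: a project succeeds only if EVERY step succeeds,
# so the overall probability is the product of the per-step probabilities.
for steps in (5, 10, 20, 45):
    p_success = 0.95 ** steps
    print(f"{steps:2d} steps at 95% each -> overall {p_success:.1%}")
```

With 20 steps the overall chance of success is already down to about 36%, and with 45 steps it falls below 10%, even though every individual step looks very likely; this is why plans built from long chains of "very likely" events are systematically overconfident.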
103. “The new program baseline projects total acquisition costs of $395.7 billion, an
increase of $117.2 billion (42%) from the prior 2007 baseline. Full rate production
is now planned for 2019, a delay of 6 years from the 2007 baseline. Unit costs per
aircraft have doubled since start of development in 2001…. Since 2002, the total
quantity through 2017 has been reduced by three-fourths, from 1,591 to 365.
Affordability is a key challenge…. Overall performance in 2011 was mixed as the
program achieved 6 of 11 important objectives…. Late software releases and
concurrent work on multiple software blocks have delayed testing and training.
Development of critical mission systems providing core combat capabilities
remains behind schedule and risky…. Most of the instability in the program has
been and continues to be the result of highly concurrent development, testing,
and production activities. Cost overruns on the first four annual procurement
contracts total more than $1 billion and aircraft deliveries are on average more
than 1 year late. Program officials said the government’s share of the cost growth
is $672 million; this adds about $11 million to the price of each of the 63 aircraft
under those contracts."
106. In well-run firms in the private sector,
occasional problems are reluctantly
tolerated, but not disclosing them to
management is a crime.
107. "Unless you can point the finger at
the man who is responsible when
something goes wrong, then you
never had anyone really
responsible."
▪ Hyman G. Rickover, Admiral, USN
▪ Director of Naval Reactors
108. Fought in 406 BC during the Peloponnesian War just east of the
island of Lesbos. In the battle, an Athenian fleet commanded by eight
strategoi defeated a Spartan fleet under Callicratidas. The battle was
precipitated by a Spartan victory which led to the Athenian fleet under
Conon being blockaded at Mytilene; to relieve Conon, the Athenians
assembled a scratch force composed largely of newly constructed
ships manned by inexperienced crews.
This inexperienced fleet was thus tactically inferior to the Spartans,
but its commanders were able to circumvent this problem by
employing new and unorthodox tactics, which allowed the Athenians
to secure a dramatic and unexpected victory.
The news of the victory itself was met with jubilation at Athens, and
the grateful Athenian public voted to bestow citizenship on the slaves
and metics who had fought in the battle. Their joy was tempered,
however, by the aftermath of the battle, in which a storm prevented
the ships assigned to rescue the survivors of the 25 disabled or sunken
Athenian triremes from performing their duties, and a great number of
sailors drowned. A fury erupted at Athens when the public learned of
this, and after a bitter struggle in the assembly six of the eight
generals who had commanded the fleet were tried as a group and
executed.
109. Generals were frequently subject to impeachment and
prosecution in the courts. Penalties ranged from fines and
banishment to execution. The fines imposed could be truly
monumental, sums large enough to swallow up the estates of the
very richest Athenians.
In 430 BC Pericles himself was removed summarily from office
by the assembly and fined.
After the victorious naval battle of Arginusae in 406 BC, all
eight generals in command on the day were tried and sentenced
to death for failing to rescue survivors, though not all came
home to accept the penalty.
110. A storm had prevented the victorious admirals from
picking up the crews of the sunken ships. Many of those
sailors drowned, and for this the admirals were held
responsible.
09/04/16 Jeran Binning jeran.binning@dau.mil 110
111. The alignment of interests and incentives is elusive
because today's acquisition culture lacks meaningful
consequences for failure.
112. "Dans ce pays-ci, il est bon de tuer de temps en temps un amiral pour encourager les autres."
The king did not exercise the royal prerogative of mercy, and John Byng was shot on 14 March 1757 in the
Solent, on the forecastle of HMS Monarch, by a platoon of musketeers.
Byng's execution was satirized by Voltaire in his novel Candide.
In Portsmouth, Candide witnesses the execution of an officer by firing squad, and is told that
"in this country, it is wise to kill an admiral from time to time to encourage the others."
113. "What is surprising is not the magnitude of our
forecast errors," observes Mr. Taleb, "but our
absence of awareness of it."
We tend to fail--miserably--at predicting the
future, but such failure is little noted nor long
remembered. It seems to be of remarkably little
professional consequence.
114. "Black swans" are highly consequential but unlikely events that are easily
explainable – but only in retrospect.
• Black swans have shaped the history of technology, science, business and
culture.
• As the world gets more connected, black swans are becoming more
consequential.
• The human mind is subject to numerous blind spots, illusions and biases.
• One of the most pernicious biases is misusing standard statistical tools, such as
the “bell curve,” that ignore black swans.
• Other statistical tools, such as the "power-law distribution," are far better at
modeling many important phenomena.
• Expert advice is often useless.
• Most forecasting is pseudoscience.
• You can retrain yourself to overcome your cognitive biases and to appreciate
randomness, but it's not easy.
• You can hedge against negative black swans while benefiting from positive ones.
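The bullet about the "bell curve" versus power laws can be made concrete with a small sketch. The distributions and parameters below are illustrative choices of mine, not figures from Taleb's book: a ten-standard-deviation event is effectively impossible under a normal distribution, yet quite ordinary under a fat-tailed power law.

```python
import math

def normal_tail(x):
    # P(X > x) for a standard normal ("bell curve"),
    # computed via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pareto_tail(x, alpha=2.0, xmin=1.0):
    # P(X > x) for a Pareto power-law distribution with tail index alpha
    return (xmin / x) ** alpha if x >= xmin else 1.0

# A 10-sigma move: the bell curve calls it essentially impossible,
# while this power law gives it roughly a 1-in-100 chance.
print(normal_tail(10.0))   # vanishingly small, roughly 8e-24
print(pareto_tail(10.0))   # ~0.01
```

The contrast is the point: a model built on the normal distribution assigns black-swan-sized events probabilities so small they are treated as never happening, which is exactly the misuse of statistical tools the slide warns about.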
115. "Much of what happens in history
comes from 'Black Swan dynamics',
very large, sudden, and totally
unpredictable 'outliers', while much of
what we usually talk about is almost
pure noise.
• Our track record in predicting those
events is dismal; yet by some
mechanism called the hindsight bias
we think that we understand them.
We have a bad habit of finding 'laws'
in history (by fitting stories to events
and detecting false patterns); we are
drivers looking through the rear view
mirror while convinced we are
looking ahead."
116. The term Black–Scholes refers to three closely related concepts:
The Black–Scholes model is a mathematical model of the market for an equity, in which the equity's
price is a stochastic process.
The Black–Scholes PDE is a partial differential equation which (in the model) must be satisfied by the
price of a derivative on the equity.
The Black–Scholes formula is the result obtained by solving the Black-Scholes PDE for European put
and call options.
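For reference, the PDE and formula described above can be written out. This is the standard textbook form (with S the equity price, V(S,t) the derivative's price, r the risk-free rate, and σ the volatility), not text taken from the slide itself:

```latex
% Black–Scholes PDE, satisfied by the price V(S,t) of a derivative on an equity S
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0
% Solving it for a European call with strike K and expiry T gives the formula
C = S\,N(d_1) - K e^{-rT} N(d_2), \qquad
d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \quad
d_2 = d_1 - \sigma\sqrt{T}
```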
Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the
options-pricing model, and he coined the term "Black–Scholes" options-pricing model, enhancing work
that was published by Fischer Black and Myron Scholes. The paper was first published in 1973. The
foundation for their research relied on work developed by scholars such as Louis Bachelier, Edward O.
Thorp, and Paul Samuelson. The fundamental insight of Black–Scholes is that the option is implicitly
priced if the stock is traded.
Merton and Scholes received the 1997 Nobel Prize in Economics for this and related work. Though
ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the
Swedish academy.
http://www.pbs.org/wgbh/nova/stockmarket/
117. In 1973, Fischer Black and Myron Scholes published their
options-pricing model, which Robert C. Merton then expanded on.
The new model enabled more effective pricing and mitigation
of risk. It could calculate the value of an option to buy a security as
long as the user could supply five pieces of data: the risk-free rate of
return (usually defined as the return on a three-month U.S. Treasury
bill), the price at which the security would be purchased (usually
given), the current price at which the security was traded (to be
observed in the market), the remaining time during which the option
could be exercised (given), and the security’s price volatility (which
could be estimated from historical data and is now more commonly
inferred from the prices of options themselves if they are traded).
The equations in the model assume that the underlying security’s price
mimics the random way in which air molecules move in space, familiar
to engineers as Brownian motion.
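The five inputs listed above are all the formula needs, so the computation fits in a few lines. A minimal sketch in Python for a European call (function and variable names are my own, not from any particular library):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, via the error function (no external library needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, t, r, sigma):
    """European call value from the five inputs the slide lists:
    spot   - current price at which the security trades (observed in the market)
    strike - price at which the security would be purchased (given)
    t      - remaining time, in years, during which the option can be exercised
    r      - risk-free rate of return (e.g. the three-month T-bill yield)
    sigma  - the security's annualized price volatility
    """
    d1 = (log(spot / strike) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)

# An at-the-money one-year call: 5% risk-free rate, 20% volatility
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.20), 2))  # 10.45
```

Note that the hardest input in practice is sigma; as the slide says, it is now usually inferred from traded option prices (implied volatility) rather than estimated from historical data.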
118. “But this long run is a misleading guide to current
affairs. In the long run we are all dead.”
John Maynard Keynes
Keynes identified three domains of probability:
frequency probability;
subjective or Bayesian probability;
and events lying outside the possibility of any description
in terms of probability (special causes).
He based a probability theory thereon.
"It ain't over till it's over”
Yogi Berra
119. The Harken deal was a smaller-scale version of the
accounting scandals at WorldCom, Enron and
other firms. Bush's purchase and sale of the Texas
Rangers baseball team reveals other characteristic
features of the past several decades of American
capitalism: the plundering of public assets for
private gain, the confluence of political and
economic power, and the defrauding of the American
people.
By the time he cashed out in 1998, Bush’s return
on his original $600,000 investment in the Rangers
was 2,400 percent.
120. Where did all of this money come from and what did Bush do to get it? Much of the story was first
reported nationally by Joe Conason in a February 2000 article for Harper's Magazine. A report from the
public interest group Center for Public Integrity, and recent columns on July 16 in the New York Times by
Paul Krugman and Nicholas Kristof, have filled in some of the details.

A free stadium, and some choice land on the side

The same factors that propelled Bush virtually overnight from failed oil man to wealthy corporate
executive—family connections and the desire of rich Texas businessmen to exploit the Bush name—opened
the way for him to buy a stake in the professional baseball team. Bill DeWitt, part owner of Spectrum 7,
which had bought Bush's own company several years earlier and then later sold out to Harken, offered the
son of the then-US president a chance to join in a bid for the Rangers. In 1989 a deal was reached in which
Richard Rainwater, a wealthy Texas financier, joined Bush and several other investors in buying the team.

Bush himself did not have a large fortune at the time, and only bought a two percent share, financed
with a $500,000 loan from a bank on whose board of directors he had once served. Bush used the proceeds
from his questionable sale of Harken stock to repay this loan. Bush's formal title was "managing partner."
He served essentially as a public face, whose main responsibility was to attend the home baseball games.
Edward Rose, another wealthy Texas investor and Rainwater's associate, was responsible for the actual
business operations of the team.

The top priority for the new Rangers owners in increasing the value of their
holdings was to acquire a new stadium. They had no intention of paying for the stadium themselves, so they
threatened to move the team if the city of Arlington did not foot the bill. The city government readily agreed
to a generous deal. Reached in the fall of 1990, it guaranteed that the city would pay $135 million of an
estimated cost of $190 million. The remainder was raised through a ticket surcharge. Thus, local taxpayers
and baseball fans financed the entire cost of the stadium.

Moreover, the owners were allowed to buy back
the stadium for a mere $60 million, which was deducted from ticket revenues at a rate of no more than $5
million per year. The Rangers syndicate was also given a property tax exemption and a sales tax exemption on
products purchased for use in the stadium. City residents ended up subsidizing these tax breaks for the
Rangers owners by paying higher local rates. This plan was sold to Arlington voters with Bush's help. At the
end of the day, the owners of the Rangers, including Bush, got a stadium worth nearly $200 million without
putting down a penny of their own money.

But the boondoggle did not end there. As part of the deal, the
Rangers syndicate got a sizable chunk of land in addition to the stadium. This land naturally increased in
value as a result of the stadium's construction. To oblige the owners, Ann Richards, the Democratic
governor of Texas at the time, signed into law an extraordinary measure that set up the Arlington Sports
Facilities Development Authority (ASFDA), which was granted the power to seize privately owned land
deemed necessary for stadium construction.
Editor's Notes
His contention is that for most organisations the answers to the first two questions are negative. To answer the third question, he gives the example of the crash of United Flight 232 in 1989. The crash was attributed to the simultaneous failure of three independent (and redundant) hydraulic systems. This happened because the systems were located at the rear of the plane and debris from a damaged turbine cut the lines to all of them. This is an example of common-mode failure: a single event causing multiple systems to fail. The probability of such an event occurring was estimated to be less than one in a billion. However, the reason the turbine broke up was that it hadn't been inspected properly (i.e. human error). The probability estimate hadn't considered human oversight, which is far more likely than one in a billion. Hubbard uses this example to make the point that a weak risk management methodology can have huge consequences.
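The one-in-a-billion arithmetic behind this note can be reproduced in a few lines. The failure probabilities below are invented for illustration, not taken from the DC-10's actual certification data:

```python
# Illustrative (invented) per-flight failure probabilities
p_single = 1.0e-3   # one hydraulic system failing on its own
p_common = 1.0e-4   # one event (e.g. an uncontained turbine burst after a
                    # missed inspection) taking out all three at once

# Treating the three redundant systems as independent multiplies the small
# probabilities together -- this is where "less than one in a billion"
# estimates come from.
p_independent = p_single ** 3

# A common-mode cause bypasses the redundancy entirely, so even a modest
# common-cause probability dominates the combined estimate.
p_total = p_common + (1.0 - p_common) * p_independent

print(p_independent)  # about 1e-9
print(p_total)        # about 1e-4: five orders of magnitude worse
```

This is Hubbard's point in miniature: the independence assumption, not the arithmetic, is where the risk model fails.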
Daniel Kahneman is the Eugene Higgins Professor of Psychology at Princeton University and Professor of Public Affairs at the Woodrow Wilson School. Kahneman was born in Israel and educated at the Hebrew University in Jerusalem before taking his PhD at the University of California. He was the joint winner of the Nobel Prize in Economics in 2002 for his work on applying cognitive and behavioural theories to decision making in economics.
Kahneman and Tversky, "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology 3 (1972): 430-454.
"Brandeis And The History Of Transparency," Sunlight Foundation (intern post), May 26, 2009, 10:47 a.m.
Slide 120's text continues: According to documents obtained by the Center for Public Integrity, the Rangers owners would locate a piece of land they wanted, offer a price far below the market value, and if the owners of the land parcel refused, bring in the ASFDA to condemn the land.