This document discusses examples of past catastrophic events caused by gaps or failures in organizational knowledge. It examines case studies such as the sinking of the Titanic after an iceberg warning telegram went undelivered, the Challenger space shuttle disaster resulting from O-ring failure, and more. The document argues that small knowledge gaps can cascade through critical processes and lead to major impacts. It advocates combining risk management and knowledge management to build resilience by identifying weak signals, questioning assumptions, and integrating knowledge into critical processes. Learning from failures is key to avoiding future knowledge collapses.
GameDay: Creating Resiliency Through Destruction (LISA11), by Jesse Robbins
Jesse Robbins (Cofounder of Opscode) explains GameDay, an exercise designed to increase Resilience through large-scale fault injection across critical systems.
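The mechanics of a GameDay can be sketched in miniature: deliberately break one component, then check whether the surrounding system degrades gracefully and recovers. The snippet below is a toy illustration of that fault-injection loop; the service names and classes are hypothetical, not Opscode's actual tooling.

```python
import random

class Service:
    """A trivially simulated service that fault injection can take down."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def call(self):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name}: ok"

def handle_request(primary, fallback):
    """Graceful degradation: try the primary service, fall back on failure."""
    try:
        return primary.call()
    except ConnectionError:
        return fallback.call()

def game_day(services, fallback, rng):
    """Inject a fault into one randomly chosen service and record whether
    each request path survived -- the essence of a GameDay drill."""
    victim = rng.choice(services)
    victim.healthy = False          # the injected fault
    results = {}
    for svc in services:
        try:
            results[svc.name] = handle_request(svc, fallback)
        except ConnectionError:
            results[svc.name] = "FAILED - no resilience"
    victim.healthy = True           # restore service after the drill
    return victim.name, results

rng = random.Random(0)              # seeded so the drill is repeatable
services = [Service("api"), Service("search")]
victim, results = game_day(services, Service("cache"), rng)
# Every request survives because a fallback absorbs the injected fault.
```

In a real GameDay the "fault" is injected into production-scale infrastructure and the "fallback" is whatever redundancy the architecture actually provides; the point of the exercise is discovering the paths where no fallback exists.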
The document discusses the chaotic nature of information and how it relates to knowledge, predictability, and control. While information provides insights, it is inherently chaotic and incomplete. This chaos can be ignored, monitored and controlled, or embraced. Ignoring chaos can have lethal consequences, as when British Petroleum failed to plan adequately for the risks of an oil spill. Attempting to manage chaos can likewise provide a false sense of security. Ultimately, the document argues we will never have full knowledge or predictability due to information chaos, and that the best approach is to embrace uncertainty.
We all have to endure failure from time to time, whether it’s underperforming at a job interview, flunking an exam, or losing a football game. But for people working in safety-critical industries, getting it wrong can have deadly consequences. Consider the shocking fact that preventable medical error is the third-biggest killer in the United States, causing more than 400,000 deaths every year. More people die from mistakes made by doctors and hospitals than from traffic accidents. And most of those mistakes are never made public, because of malpractice settlements with nondisclosure clauses.
Behavioral Economics at Work, by Nunnally, Steadman, and Baxter (Las Vegas final)
The document summarizes a presentation on behavioral economics and judgment risk given by Tyler Nunnally, the founder and CEO of Upside Risk. The presentation discusses concepts from behavioral economics like heuristics and biases that can lead to judgment errors, and examines how risk appetite can impact decision making and business performance. Best practices for managing judgment risk and reducing biases are also covered.
The document discusses the issue of preventable mishaps in the Navy. It notes that mishap rates were much higher in the past when mishaps were seen as an inevitable cost. However, over the past 50+ years, standardized procedures and improved training have led to significantly lower mishap rates. The document examines Navy mishap data from 2006-2010 and concludes that most mishaps were theoretically preventable if known risks had been properly controlled. It provides some examples of mishap reports that deemed incidents either preventable or not completely preventable. The discussion questions at the end prompt reflection on preventable mishaps witnessed and how risks can be mitigated.
Any emergency situation forces trauma into the mind and heart of... (excerpt from Anti-Terrorism Risk Assessments, Chapter Ten: Preparing and Conducting Mock Training)
“Any emergency situation forces trauma into the mind and heart of all involved with a
shock of new information, images and experiences. One tool in reducing trauma is to
provide realistic training. Training to the emergency appears to inoculate the worker
from many critical stress reactions and creates a sound response during the real crisis.”
—The Author
Preparing and executing safe and effective mock scene training scenarios is the
theme of this chapter. Prep-scenarios, drills, and spontaneous tests, while highly
effective in increasing overall safety, call for close examination, analysis, and cautious
planning prior to engaging in the exercise. We will discuss a number of the positive
and challenging aspects of effective training scenarios, whether the security specialist
is conducting a simple evacuation drill or a complex mass-casualty exercise. This
chapter will discuss the following key points:
• The value of mock scene scenarios and drills
• Preparing for a training and testing scenario
• Fundamental training scenarios
• Realism over drama
• Site and personnel preparation
• Selection of appropriate role players
• Key contacts prior to a scenario
• Occupational standards of safety and executing the mock scenario
THE VALUE OF MOCK SCENE SCENARIOS AND DRILLS
In addition to confidence building and increasing the safety of all personnel, the use
of mock scenarios can reduce legal liability and increase defensibility in cases involving
civil and criminal litigation. The training scenario will often expose vulnerabilities
while affirming the strength of current security and safety practices. It is during
these often dramatic exercises where we find both what we had hoped would be a
solid company practice and the exposure of more work to be done by the agency and
its personnel. While the number of constructive aspects of conducting mock training
exercises far outweighs the dangers, many agencies avoid the drill for reasons some
CEOs would rather not discuss—that being the reality that every exercise will expose
flaws in current policy and procedures. This raw truth is enough for many to skip this
critical part of an anti-terror risk assessment and target hardening. History teaches,
however, time and time again that tender truths and perceived flaws are not an
excuse for avoiding the improvement of target security and protection procedures
related to the safety of our workforce. It is a bitter pill that every agency interested in
excellence will be willing to swallow when faced with the probable alternatives. The
failure to train and test through mock scenarios—small and/or large—may indeed be
fatal to both the workforce and the agency’s financial future.
It is the added task of the proficient security specialist to present mock scene
practice scenarios as a necessary part of security despite the apparent paradox t.
Futurists need to advocate for change to cope with how the world is expected to change. We do this in the context of a constant need for growth. However, we need to understand that growth through increasing efficiency comes at the cost of an increase in the risk of failure. Smarter systems and policy design can help us manage such risks. This article examines how.
The document discusses principles of resilience in emergency preparedness. It argues that rigid exercises do not fully prepare responders for real disasters, as real events are unpredictable. International cooperation and more flexible "demonstrations" that incorporate failures and collaboration are better for developing resilience. The Strong Angel exercises showed the importance of layering communications, transportation, and power resources, as well as using open-source, redundant, and diverse tools. Face-to-face relationships and frequent communication also improve response. Media training is important to avoid potential consequences of poor interactions.
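The value of layering redundant, diverse resources can be quantified: if independent channels each fail with some probability, the chance that all fail at once shrinks multiplicatively. A minimal sketch follows; the channel names and failure probabilities are illustrative assumptions, not Strong Angel data, and the calculation assumes the layers fail independently (which is exactly why diverse tools matter: similar tools tend to fail together).

```python
from math import prod

def outage_probability(failure_probs):
    """Probability that every independent, redundant layer fails at once."""
    return prod(failure_probs)

# Illustrative layers: satellite phone, VHF radio, mesh Wi-Fi.
layers = [0.10, 0.20, 0.30]
p_total_outage = outage_probability(layers)
availability = 1 - p_total_outage
print(f"all-layers outage: {p_total_outage:.3f}")    # 0.006
print(f"combined availability: {availability:.3f}")  # 0.994
```

Three individually unreliable channels yield better than 99% combined availability, which is the arithmetic behind layering communications, transportation, and power rather than betting on one "best" system.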
The Key to Great Teams: Understanding the Human Operating System (Atlassian)
How do organizations become adaptive, agile, and resilient? What fosters trust and makes people feel safe to speak their minds? Stefan Knecht from it-economics – a 2016 Best Workplaces in Germany award recipient – says the master key lies in a social operating system used for over 300,000 years: the human operating system (OS H). Recent findings from behavioral, cognitive, and organizational sciences help us understand parts of this system – like what makes humans tick and how groups successfully form and perform. This talk will explain the idea behind OS H and teach you how you can employ the concept in your own organization to build trust, resiliency, adaptivity, and more. Come away with a solid understanding of social needs and the company dynamics you can help create to meet these needs. Even the best tools and technology will only get you part of the way to building world-class teams.
Stefan Knecht, Manager, it-economics GmbH
The document discusses five false assumptions companies often make about crisis management:
1. Having an operational plan is not the same as being prepared for an organizational crisis that violates public trust.
2. One size does not fit all - operational plans cannot contain an organizational crisis in the same way.
3. People are not entirely rational and emotions often overwhelm reason in a crisis.
4. Experienced executives may default to habitual responses that make the crisis worse rather than knowing what to do.
5. Behaviors that led to success in normal times will not necessarily work during a crisis when the rules have changed.
Failure: Learn from It (HBR.ORG reprint)
Near misses, or small failures that cause no immediate harm, often precede major crises and disasters. However, people tend to ignore or normalize near misses due to cognitive biases. The document discusses three examples where organizations failed to learn from numerous near misses: Apple's antenna issues with the iPhone 4, Toyota's unintended acceleration complaints, and JetBlue's risky strategy of keeping planes on the tarmac during bad weather. In each case, latent errors combined with enabling conditions to eventually cause major crises that could have been prevented by addressing issues flagged in prior near misses. The document advocates learning from near misses to improve operations and avoid potential catastrophes.
2006 StrongAngel III - integrated disaster response demonstration in San Diego. Directed by mentor Dr. Eric Rasmussen, MD, MDM, FACP http://about.me/EricRasmussenMD
Safety Management Systems (SMS) and Decision MakingIHSTFAA
The document summarizes some key limitations of traditional safety programs:
1. Traditional safety programs are limited in their understanding of exactly what risks and threats create accidents, relying instead on "educated guesses" based on personal experiences.
2. They have no method of tracking safety implementations to measure return on investment and effectiveness.
3. They take a reactive approach rather than conducting analysis of the nature and prioritization of risks to proactively address safety issues.
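A proactive alternative to "educated guesses" is a risk register that scores each hazard by likelihood and severity and works the list top-down. The sketch below shows the idea in its simplest form; the hazards and scores are made up for illustration, not drawn from any safety program's data.

```python
def prioritize(risks):
    """Rank hazards by expected impact (likelihood x severity), highest
    first, so mitigation effort targets the largest risks rather than
    whichever incident happened most recently."""
    return sorted(risks,
                  key=lambda r: r["likelihood"] * r["severity"],
                  reverse=True)

# Hypothetical register: 1-5 scales for likelihood and severity.
risk_register = [
    {"hazard": "bird strike",        "likelihood": 4, "severity": 3},
    {"hazard": "fuel contamination", "likelihood": 1, "severity": 5},
    {"hazard": "rotor wear",         "likelihood": 3, "severity": 5},
]
for r in prioritize(risk_register):
    print(r["hazard"], r["likelihood"] * r["severity"])
# rotor wear (15) first, then bird strike (12), then fuel contamination (5)
```

Re-scoring the register after each mitigation also gives the tracking that traditional programs lack: the before/after scores are a crude but measurable return on each safety investment.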
The document discusses business continuity planning and management to prepare for unexpected events known as "black swans" that can severely impact business operations. It defines black swans as rare but high-impact events like natural disasters, market crashes, or data leaks. Having a proper business continuity plan (BCP) and continuity management (BCM) process in place can help businesses recover more quickly. The document outlines the key components of an effective BCP/BCM system, including understanding risks, developing strategies and procedures to address risks, implementing and testing the plans, and maintaining the plans over time. Most businesses acknowledge risks but few have fully developed continuity plans, and plans often focus only on information technology rather than all aspects of operations.
1. The document discusses security risk management and outlines maturity levels of organizations in their approach to security risk management. It describes four levels - from initial/ad hoc implementation to optimizing where security risk management is fully integrated.
2. Key barriers to effective security risk management implementation are identified as unrealistic expectations, lack of clear vision and not treating implementation as a dedicated project. Guiding principles of direction, systems and execution are outlined to help integration.
3. Different industry sectors have varying needs for security investments depending on risk levels. Most organizations take on more risk than realized, over-engineer risks, or are too risk averse due to human cognitive limitations unless a structured risk management process is followed.
1. What are the differences between the individual rights perspective and the public order perspective?
2. What are the components of the criminal justice system and how do they work together? Please explain.
3. What is meant by the term due process of the law?
4. What does the term multiculturalism mean, and how does it affect criminal justice?
Chapter 2
1. What are the special categories of crime? Please explain why they are important.
2. Describe the history of the NCVS and explain how it differs from the UCR.
3. What is the Dark Figure of Crime and why is knowledge of it important?
4. What was the last crime added to the UCR and when was it added? What is your personal opinion based on your research concerning why it was added?
Bloomberg Businessweek
Magazine
How Failure Breeds Success
Posted on July 09, 2006
http://www.businessweek.com/stories/2006-07-09/how-failure-breeds-success
COVER STORY PODCAST: Ever heard of Choglit? How about OK Soda or Surge? Long after "New
Coke" became nearly synonymous with innovation failure, these products joined Coca-Cola Co.'s
(KO) graveyard of beverage busts. Choglit, in case you blinked and missed it, was a chocolate-flavored
milk drink test-marketed with Nestlé (NSRGY) in 2002. OK Soda, unveiled in 1994, tried to capture
Generation X with edgy marketing. The "OK Manifesto," parts of which were printed on cans in an
attempt at hipster irony, asked: "What's the point of OK Soda?" It turned out customers wondered the
same thing. And while Surge did well initially, this me-too Mountain Dew later did anything but. Sales
began drying up after five years.
Given that history, failure hardly seems like a subject Chairman and CEO E. Neville Isdell would want
to trot out in front of investors. But Isdell did just that, deliberately airing the topic at Coke's annual
meeting in April. "You will see some failures," he told the crowd. "As we take more risks, this is
something we must accept as part of the regeneration process."
Warning Coke investors that the company might experience some flops is a little like warning
Atlantans they might experience afternoon thunderstorms in July. But Isdell thinks it's vital. He
wants Coke to take bigger risks, and to do that, he knows he needs to convince employees and
shareholders that he will tolerate the failures that will inevitably result. That's the only way to change
Coke's traditionally risk-averse culture. And given the importance of this goal, there's no podium too
big for sending the signal. "Using [the annual meeting] occasion elevates the statement to another
order of importance," Isdell said in an interview with BusinessWeek.
CLOSE TO BLASPHEMY
While few CEOs are as candid about the potential for failure as Isdell, many are wrestling with the
same problem, trying to get their organizations to cozy up to the risk-taking that innovation requires.
A warning: It's not going to be an easy shift. After years of cost-cutting initiat ...
1) The document discusses the need for holistic, whole-of-government intelligence that considers a wide range of threats and policies and is focused on decision support rather than secret sources or individual disciplines.
2) It argues that intelligence should not be divided into separate functions but viewed as a "scheme of things entire" and that supporting judgment should be a core function.
3) The document presents a preliminary holistic analytic model and identifies gaps in considering issues like poverty, disease, and environmental threats that are essential to future-proofing analysis.
While quality systems and procedures are important, three case studies show that culture is the most critical factor for success. General Bill Creech transformed the US Air Force's culture to prioritize quality through passionate leadership and initiatives to recognize all staff, not just pilots. Roger Milliken created a culture of relentless focus on quality at Milliken & Co. through sustained engagement and prioritizing those promoting quality culture. Dr. Peter Pronovost achieved dramatic error reductions at Johns Hopkins by tackling the medical hierarchy's culture and empowering nurses to intervene with doctors. In each case, implementing systems was less impactful than changing the underlying culture through strong, committed leadership.
Leadership and innovation presentation to UiO Green IT SchoolRick Wheatley
In October 2013 I gave a presentation to the University of Oslo's Green IT School. The topic was on innovation and leadership in business given the evolving context we live in - where some issues are becoming existential.
Doing business in an environment that is volatile, unpredictable, complex and ambiguous demands a different kind of leadership; a different sense of calm if you will. Where does this come from? This was my attempt to relate a view on the contextual picture along with some principles of 'leadership from the future' that Veronica Lie, a Xyntéo colleague, and I wrote about in the run up to the 2013 Performance Theatre in Istanbul, Turkey - amazingly enough held at the precise time of the riots at Taksim square.
Enjoy - questions and comments appreciated.
Original article available here: http://issuu.com/xynteo/docs/pages_from_leadership_paper
This document discusses strategies for succeeding under uncertainty. It provides examples of companies that were unprepared for uncertainties and suffered losses as a result. Cisco was unprepared for the collapse of the internet bubble, while Ericsson lacked a "Plan B" when a key supplier's factory was destroyed. Even Nobel Prize winners at Long-Term Capital Management could not anticipate market forces. The document argues that embracing uncertainty is necessary to profit from opportunities and avoid threats. Companies that are flexible and can envision multiple potential futures may be better equipped to adapt when uncertainties emerge.
The document discusses the key elements of risk management including risk identification, evaluation, control, and financing. It also discusses the concept of a "black swan" event, which is defined as a rare, highly improbable event with severe consequences. Specifically, a black swan is unpredictable, has major impacts, and is often rationalized with the benefit of hindsight. Examples of past black swan events like economic crises are provided. The document also discusses approaches to prepare for and respond to potential black swan events through principles like establishing response goals and empowering local leadership.
Here are some potential pros and cons of using firefighting robots:
Pros:
- Robots could perform firefighting tasks that are too dangerous for humans, such as searching collapsed buildings or battling intense flames. This reduces risks to human firefighters.
- Robots have enhanced capabilities compared to humans, such as night vision, thermal imaging, ability to withstand high temperatures, and ability to access small spaces. This could help locate and suppress fires.
- Robots do not require rest, food, or bathroom breaks. They can operate continuously for as long as their batteries last. This allows round-the-clock firefighting without risking human fatigue or safety.
Cons:
- Robots are not as intelligent as humans
Cyber Security: The Strategic View
By: Kah-Kin Ho, Head of Cyber Security Business Development Threat Response, Intelligence and Development (TRIAD)
This session begins by giving an overview of how Cisco sees the challenges and opportunities of cyber security for the Government which include areas such as recent development on applicability of International Law to Cyber conflict, the evolving role of the Government as the legitimate security provider, Public-Private Partnership issues, and the evolving technical, social and political threat landscape. Cisco recognizes that cyber security begins at the policy level and translates through to the operational and system level. We will discuss why an intelligence-led network-centric approach that focuses on enforcing policy, enhancing situational awareness, and providing the insight necessary to tackle threats before they impact information and infrastructure assets is key to Cyber Security.
The document discusses principles of resilience in emergency preparedness. It argues that rigid exercises do not fully prepare responders for real disasters, as real events are unpredictable. International cooperation and more flexible "demonstrations" that incorporate failures and collaboration are better for developing resilience. The Strong Angel exercises showed the importance of layering communications, transportation, and power resources, as well as using open-source, redundant, and diverse tools. Face-to-face relationships and frequent communication also improve response. Media training is important to avoid potential consequences of poor interactions.
The Key to Great Teams: Understanding the Human Operating SystemAtlassian
How do organizations become adaptive, agile, and resilient? What fosters trust and makes people feel safe to speak their minds? Stefan Knecht from it-economics – a 2016 Best Workplaces in Germany award recipient – says the master key lies in a social operating system used for over 300,000 years: the human operating system (OS H). Recent findings from the behavioral, cognitive, and organizational sciences help us understand parts of this system – like what makes humans tick and how groups successfully form and perform. This talk will explain the idea behind OS H and teach you how to employ the concept in your own organization to build trust, resiliency, and adaptivity. Come away with a solid understanding of social needs and the company dynamics you can help create to meet those needs. Even the best tools and technology will only get you part of the way to building world-class teams.
Stefan Knecht, Manager, it-economics GmbH
The document discusses five false assumptions companies often make about crisis management:
1. Having an operational plan is not the same as being prepared for an organizational crisis that violates public trust.
2. One size does not fit all - operational plans cannot contain an organizational crisis in the same way.
3. People are not entirely rational and emotions often overwhelm reason in a crisis.
4. Experienced executives may default to habitual responses that make the crisis worse rather than knowing what to do.
5. Behaviors that led to success in normal times will not necessarily work during a crisis when the rules have changed.
HBR.ORG reprint, "Failure: Learn From It" (JeanmarieColbert3)
Near misses, or small failures that cause no immediate harm, often precede major crises and disasters. However, people tend to ignore or normalize near misses due to cognitive biases. The document discusses three examples where organizations failed to learn from numerous near misses: Apple's antenna issues with the iPhone 4, Toyota's unintended acceleration complaints, and JetBlue's risky strategy of keeping planes on the tarmac during bad weather. In each case, latent errors combined with enabling conditions to eventually cause major crises that could have been prevented by addressing issues flagged in prior near misses. The document advocates learning from near misses to improve operations and avoid potential catastrophes.
2006 StrongAngel III - integrated disaster response demonstration in San Diego. Directed by mentor Dr. Eric Rasmussen,MD,MDM,FACP http://about.me/EricRasmussenMD
Safety Management Systems (SMS) and Decision MakingIHSTFAA
The document summarizes some key limitations of traditional safety programs:
1. Traditional safety programs are limited in their understanding of exactly what risks and threats create accidents, relying instead on "educated guesses" based on personal experiences.
2. They have no method of tracking safety implementations to measure return on investment and effectiveness.
3. They take a reactive approach rather than conducting analysis of the nature and prioritization of risks to proactively address safety issues.
The document discusses business continuity planning and management to prepare for unexpected events known as "black swans" that can severely impact business operations. It defines black swans as rare but high-impact events like natural disasters, market crashes, or data leaks. Having a proper business continuity plan (BCP) and continuity management (BCM) process in place can help businesses recover more quickly. The document outlines the key components of an effective BCP/BCM system, including understanding risks, developing strategies and procedures to address risks, implementing and testing the plans, and maintaining the plans over time. Most businesses acknowledge risks but few have fully developed continuity plans, and plans often focus only on information technology rather than all aspects of operations.
1. The document discusses security risk management and outlines maturity levels of organizations in their approach to security risk management. It describes four levels - from initial/ad hoc implementation to optimizing where security risk management is fully integrated.
2. Key barriers to effective security risk management implementation are identified as unrealistic expectations, lack of clear vision and not treating implementation as a dedicated project. Guiding principles of direction, systems and execution are outlined to help integration.
3. Different industry sectors have varying needs for security investments depending on risk levels. Most organizations take on more risk than realized, over-engineer risks, or are too risk averse due to human cognitive limitations unless a structured risk management process is followed.
1. What are the differences between the individual rights perspect.docxjackiewalcutt
1. What are the differences between the individual rights perspective and the public order perspective?
2. What are the components of the criminal justice system and how do they work together? Please explain.
3. What is meant by the term due process of the law?
4. What does the term multiculturalism mean, and how does it affect criminal justice?
Chapter 2
1. What are the special categories of crime? Please explain why they are important.
2. Describe the history of the NCVS and explain how it is different from the UCR?
3. What is the Dark Figure of Crime and why is knowledge of it important?
4. What was the last crime added to the UCR and when was it added? What is your personal opinion based on your research concerning why it was added?
Bloomberg Businessweek
Magazine
How Failure Breeds Success
Posted on July 09, 2006
http://www.businessweek.com/stories/2006-07-09/how-failure-breeds-success
COVER STORY PODCAST: Ever heard of Choglit? How about OK Soda or Surge? Long after "New
Coke" became nearly synonymous with innovation failure, these products joined Coca-Cola Co.'s
(KO) graveyard of beverage busts. Choglit, in case you blinked and missed it, was a chocolate-flavored
milk drink test-marketed with Nestlé (NSRGY) in 2002. OK Soda, unveiled in 1994, tried to capture
Generation X with edgy marketing. The "OK Manifesto," parts of which were printed on cans in an
attempt at hipster irony, asked: "What's the point of OK Soda?" It turned out customers wondered the
same thing. And while Surge did well initially, this me-too Mountain Dew later did anything but. Sales
began drying up after five years.
Given that history, failure hardly seems like a subject Chairman and CEO E. Neville Isdell would want
to trot out in front of investors. But Isdell did just that, deliberately airing the topic at Coke's annual
meeting in April. "You will see some failures," he told the crowd. "As we take more risks, this is
something we must accept as part of the regeneration process."
Warning Coke investors that the company might experience some flops is a little like warning
Atlantans they might experience afternoon thunderstorms in July. But Isdell thinks it's vital. He
wants Coke to take bigger risks, and to do that, he knows he needs to convince employees and
shareholders that he will tolerate the failures that will inevitably result. That's the only way to change
Coke's traditionally risk-averse culture. And given the importance of this goal, there's no podium too
big for sending the signal. "Using [the annual meeting] occasion elevates the statement to another
order of importance," Isdell said in an interview with BusinessWeek.
CLOSE TO BLASPHEMY
While few CEOs are as candid about the potential for failure as Isdell, many are wrestling with the
same problem, trying to get their organizations to cozy up to the risk-taking that innovation requires.
A warning: It's not going to be an easy shift. After years of cost-cutting initiat ...
1) The document discusses the need for holistic, whole-of-government intelligence that considers a wide range of threats and policies and is focused on decision support rather than secret sources or individual disciplines.
2) It argues that intelligence should not be divided into separate functions but viewed as a "scheme of things entire" and that supporting judgment should be a core function.
3) The document presents a preliminary holistic analytic model and identifies gaps in considering issues like poverty, disease, and environmental threats that are essential to future-proofing analysis.
While quality systems and procedures are important, three case studies show that culture is the most critical factor for success. General Bill Creech transformed the US Air Force's culture to prioritize quality through passionate leadership and initiatives to recognize all staff, not just pilots. Roger Milliken created a culture of relentless focus on quality at Milliken & Co. through sustained engagement and prioritizing those promoting quality culture. Dr. Peter Pronovost achieved dramatic error reductions at Johns Hopkins by tackling the medical hierarchy's culture and empowering nurses to intervene with doctors. In each case, implementing systems was less impactful than changing the underlying culture through strong, committed leadership.
Leadership and innovation presentation to UiO Green IT SchoolRick Wheatley
In October 2013 I gave a presentation to the University of Oslo's Green IT School. The topic was on innovation and leadership in business given the evolving context we live in - where some issues are becoming existential.
Doing business in an environment that is volatile, unpredictable, complex and ambiguous demands a different kind of leadership; a different sense of calm if you will. Where does this come from? This was my attempt to relate a view on the contextual picture along with some principles of 'leadership from the future' that Veronica Lie, a Xyntéo colleague, and I wrote about in the run up to the 2013 Performance Theatre in Istanbul, Turkey - amazingly enough held at the precise time of the riots at Taksim square.
Enjoy - questions and comments appreciated.
Original article available here: http://issuu.com/xynteo/docs/pages_from_leadership_paper
This document discusses strategies for succeeding under uncertainty. It provides examples of companies that were unprepared for uncertainties and suffered losses as a result. Cisco was unprepared for the collapse of the internet bubble, while Ericsson lacked a "Plan B" when a key supplier's factory was destroyed. Even Nobel Prize winners at Long-Term Capital Management could not anticipate market forces. The document argues that embracing uncertainty is necessary to profit from opportunities and avoid threats. Companies that are flexible and can envision multiple potential futures may be better equipped to adapt when uncertainties emerge.
The document discusses the key elements of risk management including risk identification, evaluation, control, and financing. It also discusses the concept of a "black swan" event, which is defined as a rare, highly improbable event with severe consequences. Specifically, a black swan is unpredictable, has major impacts, and is often rationalized with the benefit of hindsight. Examples of past black swan events like economic crises are provided. The document also discusses approaches to prepare for and respond to potential black swan events through principles like establishing response goals and empowering local leadership.
Here are some potential pros and cons of using firefighting robots:
Pros:
- Robots could perform firefighting tasks that are too dangerous for humans, such as searching collapsed buildings or battling intense flames. This reduces risks to human firefighters.
- Robots have enhanced capabilities compared to humans, such as night vision, thermal imaging, ability to withstand high temperatures, and ability to access small spaces. This could help locate and suppress fires.
- Robots do not require rest, food, or bathroom breaks. They can operate continuously for as long as their batteries last. This allows round-the-clock firefighting without risking human fatigue or safety.
Cons:
- Robots are not as intelligent as humans
Cyber Security: The Strategic View
By: Kah-Kin Ho, Head of Cyber Security Business Development Threat Response, Intelligence and Development (TRIAD)
This session begins by giving an overview of how Cisco sees the challenges and opportunities of cyber security for the Government which include areas such as recent development on applicability of International Law to Cyber conflict, the evolving role of the Government as the legitimate security provider, Public-Private Partnership issues, and the evolving technical, social and political threat landscape. Cisco recognizes that cyber security begins at the policy level and translates through to the operational and system level. We will discuss why an intelligence-led network-centric approach that focuses on enforcing policy, enhancing situational awareness, and providing the insight necessary to tackle threats before they impact information and infrastructure assets is key to Cyber Security.
2. Examples of catastrophic knowledge collapse
What (really) is COOP?
Disaster versus “business as usual”
Building resilience into “business as usual”
Combine Risk Management and KM
Knowledge Enabling of Processes
Identifying “weak signals” in a noisy environment.
7. 1. All were avoidable.
2. All are examples of catastrophic knowledge collapse.
Want proof?
12. In each case, a knowledge gap resulted in a catastrophic cascade: a "knowledge collapse."
How?
13. Small gaps cascade to produce big impacts. These can be performance gaps or knowledge gaps.
For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the battle was lost.
For want of a battle the kingdom was lost.
And all for the want of a horseshoe.
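The compounding in the nail rhyme can be put in toy numbers. Under the (purely illustrative) assumption that a process chains many steps and each step's knowledge resource is available with some fixed probability, the reliabilities multiply, so a tiny per-step gap collapses end-to-end reliability:

```python
# Toy illustration (not from the deck): small per-step knowledge
# gaps compound multiplicatively across a chained process.
def chain_success(per_step_reliability: float, steps: int) -> float:
    """Probability that every step in the chain succeeds."""
    return per_step_reliability ** steps

# A mere 1% gap at each of 100 handoffs leaves the end-to-end
# process succeeding barely a third of the time.
print(round(chain_success(0.99, 100), 3))  # → 0.366
```

The numbers (1%, 100 handoffs) are arbitrary; the point is the shape of the curve, which is why small gaps can cascade into kingdom-sized losses.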
14. "Process breakdown due to knowledge resource failure."
Today, organizations "run on" knowledge.
Knowledge is as essential as gas and oil to a car, or food and water to an organism.
In other words, the flow of knowledge is essential to operation; stopping the flow causes problems, and if enough problems occur, processes break down.
15. Complacency breeds neglect, and ...
Failure to detect "weak signals"
Acceptance of faulty assumptions
Failure to implement risk management
Disconnects between COOP and Business as Usual
16. Traditionally "disaster-oriented."
Views its role only in situations of catastrophe
Usually provides alternative sites and support
Usually ignores “business as usual.”
18. Decision and performance support systems that are integrated with knowledge and information sources
Combine KM & Risk Management
Learn from Failure
Comprehensive "Who Knows What, Where" resource
Active, dynamic social networking systems
19. Disasters teach more than successes.
"Never let a good crisis go to waste."
Allow learning by exposing failure to the light of day.
Knowledge management can help by ...
Identifying "weak signals"
Institutionalizing adaptive habits
Building knowledge into critical processes
21. Broad, William J. "Taking Lessons From What Went Wrong." The New York Times, July 19, 2010.
Carl, Joseph W., Lt Col USAF (Ret), and Freeman, George, Col, USAFR (Ret). "Nonstationary Root Causes of Cobb's Paradox." Defense Acquisition Review Journal, Vol. 17, No. 55, July 2010, pp. 337-
25. In 1995, Martin Cobb worked for the Secretariat of the Treasury Board of Canada. He attended The Standish Group's CHAOS University, where the year's 10 most complex information technology (IT) projects are analyzed and discussed. The 10 most complex IT projects studied by The Standish Group in 1994 were all in trouble: eight were over schedule, on average by a factor of 1.6, and over budget by a factor of 1.9; the other two were cancelled and never delivered anything. That led Cobb to state his now-famous paradox (Cobb, 1995): "We know why [programs] fail; we know how to prevent their failure—so why do they still fail?"
Editor's Notes
My name is Neil Olonoff, and I am the Knowledge Management Lead for HQDA G-4 (Logistics) in the Pentagon. I’m also the Co-Chair of the Federal KM Working Group, we are 700 Federal employees and contractors who are working for KM in the Government.
What is catastrophic knowledge collapse? Small knowledge problems can result in process breakdown
What (really) is COOP?
Disaster scenarios versus “business as usual.” A continuum.
Building resilience into “business as usual”
Combine Risk Management and KM
Knowledge Enabling of Processes
Identifying Weak Signals
Weak signals in a noisy environment? I've got a couple of dogs, and I like to walk them near a lake near my house. There's a little creekbed with a bridge, and the dogs run in the creekbed. I'm not what you'd call a naturalist, but I do look around to see if there's anything that might bite me, anyway. Well, in the little creekbed, just after a big rainstorm, I saw the oddest little creature. It was orange, with yellow spots. Salamander, I thought. But looking closer I saw it had a spiky little body, kind of fat instead of skinny. And a spiky head and a spiky pointed tail. It wasn't moving, so I grabbed a stick to poke it. It just turned over. The back was cream-colored and it didn't move at all, and that's when I realized it was a child's plastic toy dinosaur.
Weak signals against a noisy environment: if I'd seen it on a sidewalk instead of a creek bed, I wouldn't have given it a second thought.
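The "weak signal against a noisy background" idea can be made concrete with a toy anomaly detector: flag any reading that deviates from its recent baseline by more than a few standard deviations. This sketch is my own illustration (the function name, window, and threshold are invented for the example), not a tool from the deck:

```python
from statistics import mean, stdev

def weak_signals(readings, window=10, threshold=3.0):
    """Flag indices whose value deviates from the trailing
    baseline by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A flat "noisy environment" with one out-of-pattern reading.
series = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.2, 9.9, 10.0,
          10.1, 9.9, 25.0, 10.0, 10.1]
print(weak_signals(series))  # → [12]: only the spike stands out
```

The same value on a noisier baseline would not be flagged, which is the creekbed-versus-sidewalk point: whether a signal is "weak" depends on the background it appears against.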
On April __, the Titanic sank.
It hit an iceberg. How many died? As we all know, it resulted in the tragic death of Leonardo DiCaprio and hundreds of other non-essential personnel.
The space shuttle Challenger blasted off on a very cool morning. The O-rings on the booster rockets gave way, resulting in the explosion that killed the crew.
9/11. I work in a location just adjacent to the area that was struck. You could say that the reverberations of this event still are being felt, especially in the political and military events that are occurring today in Iraq, Afghanistan and elsewhere.
It seems that this week they'll finally be able to cap the well after ___ weeks of oil spilling into the Gulf of Mexico.
I understand that this was just one of several telegrams. But this was the crucial one that, if seen by the captain, would have avoided catastrophe.
The Challenger: NASA contractor Morton Thiokol and NASA engineers knew about, and had documented, the danger of booster-rocket O-ring blowback when launching in cool weather. But because the President wanted to talk about the "teacher in space" in the State of the Union speech, NASA executives overruled the engineers.
Bin Ladin Determined To Strike in US was the President's Daily Brief given to U.S. President George W. Bush on August 6, 2001. The President's Daily Brief (PDB) is a brief of important classified information on national security collected by various U.S. intelligence agencies given to the president and a select group of senior officials. The brief warned of terrorism threats from Osama bin Laden and al-Qaeda over a month before the September 11, 2001 attacks.[1]
May 29, 2010
Documents Show Early Worries About Safety of Rig
By IAN URBINA
WASHINGTON — Internal documents from BP show that there were serious problems and safety concerns with the Deepwater Horizon rig far earlier than those the company described to Congress last week.
The problems involved the well casing and the blowout preventer, which are considered critical pieces in the chain of events that led to the disaster on the rig.
The documents show that in March, after several weeks of problems on the rig, BP was struggling with a loss of “well control.” And as far back as 11 months ago, it was concerned about the well casing and the blowout preventer.
On June 22, for example, BP engineers expressed concerns that the metal casing the company wanted to use might collapse under high pressure.
“This would certainly be a worst-case scenario,” Mark E. Hafle, a senior drilling engineer at BP, warned in an internal report. “However, I have seen it happen so know it can occur.”
The company went ahead with the casing, but only after getting special permission from BP colleagues because it violated the company’s safety policies and design standards. The internal reports do not explain why the company allowed for an exception. BP documents released last week to The Times revealed that company officials knew the casing was the riskier of two options.
Though his report indicates that the company was aware of certain risks and that it made the exception, Mr. Hafle, testifying before a panel on Friday in Louisiana about the cause of the rig disaster, rejected the notion that the company had taken risks.
“Nobody believed there was going to be a safety issue,” Mr. Hafle told a six-member panel of Coast Guard and Minerals Management Service officials.
“All the risks had been addressed, all the concerns had been addressed, and we had a model that suggested if executed properly we would have a successful job,” he said.
Mr. Hafle, asked for comment by a reporter after his testimony Friday about the internal report, declined to answer questions.
BP’s concerns about the casing did not go away after Mr. Hafle’s 2009 report.
In April of this year, BP engineers concluded that the casing was “unlikely to be a successful cement job,” according to a document, referring to how the casing would be sealed to prevent gases from escaping up the well.
The document also says that the plan for casing the well is “unable to fulfill M.M.S. regulations,” referring to the Minerals Management Service.
A second version of the same document says “It is possible to obtain a successful cement job” and “It is possible to fulfill M.M.S. regulations.”
Andrew Gowers, a BP spokesman, said the second document was produced after further testing had been done.
On Tuesday Congress released a memorandum with preliminary findings from BP’s internal investigation, which indicated that there were warning signs immediately before the explosion on April 20, including equipment readings suggesting that gas was bubbling into the well, a potential sign of an impending blowout.
A parade of witnesses at hearings last week told about bad decisions and cut corners in the days and hours before the explosion of the rig, but BP’s internal documents provide a clearer picture of when company and federal officials saw problems emerging.
In addition to focusing on the casing, investigators are also focusing on the blowout preventer, a fail-safe device that was supposed to slice through a drill pipe in a last-ditch effort to close off the well when the disaster struck. The blowout preventer did not work, which is one of the reasons oil has continued to spill into the gulf, though the reason it failed remains unclear.
Federal drilling records and well reports obtained through the Freedom of Information Act and BP’s internal documents, including more than 50,000 pages of company e-mail messages, inspection reports, engineering studies and other company records obtained by The Times from Congressional investigators, shed new light on the extent and timing of problems with the blowout preventer and the casing long before the explosion.
Kendra Barkoff, a spokeswoman for the Interior Department, declined to answer questions about the casings, the blowout preventer and regulators’ oversight of the rig because those matters are part of a continuing investigation.
The documents show that in March, after problems on the rig that included drilling mud falling into the formation, sudden gas releases known as “kicks” and a pipe falling into the well, BP officials informed federal regulators that they were struggling with a loss of “well control.”
On at least three occasions, BP records indicate, the blowout preventer was leaking fluid, which the manufacturer of the device has said limits its ability to operate properly.
“The most important thing at a time like this is to stop everything and get the operation under control,” said Greg McCormack, director of the Petroleum Extension Service at the University of Texas, Austin, offering his assessment about the documents.
He added that he was surprised that regulators and company officials did not commence a review of whether drilling should continue after the well was brought under control.
After informing regulators of their struggles, company officials asked for permission to delay their federally mandated test of the blowout preventer, which is supposed to occur every two weeks, until the problems were resolved, BP documents say.
At first, the minerals agency declined.
“Sorry, we cannot grant a departure on the B.O.P. test further than when you get the well under control,” wrote Frank Patton, a minerals agency official. But BP officials pressed harder, citing “major concerns” about doing the test the next day. And by 10:58 p.m., David Trocquet, another M.M.S. official, acquiesced.
“After further consideration,” Mr. Trocquet wrote, “an extension is approved to delay the B.O.P. test until the lower cement plug is set.”
When the blowout preventer was eventually tested again, it was tested at a lower pressure — 6,500 pounds per square inch — than the 10,000-pounds-per-square-inch tests used on the device before the delay. It tested at this lower pressure until the explosion.
A review of Minerals Management Service’s data of all B.O.P. tests done in deep water in the Gulf of Mexico for five years shows B.O.P. tests rarely dropped so sharply, and, in general, either continued at the same threshold or were done at increasing levels.
The manufacturer of the blowout preventer, Cameron, declined to say what the appropriate testing pressure was for the device.
In an e-mail message, Mr. Gowers of BP wrote that until their investigation was complete, it was premature to answer questions about the casings or the blowout preventer.
Even though the documents asking regulators about testing the blowout preventer are from BP, Mr. Gowers said that any questions regarding the device should be directed to Transocean, which owns the rig and, he said, was responsible for maintenance and testing of the device. Transocean officials declined to comment.
Bob Sherrill, an expert on blowout preventers and the owner of Blackwater Subsea, an engineering consulting firm, said the conditions on the rig in February and March and the language used by the operator referring to a loss of well control “sounds like they were facing a blowout scenario.”
Mr. Sherrill said federal regulators made the right call in delaying the blowout test, because doing a test before the well is stable risks gas kicks. But once the well was stable, he added, it would have made sense for regulators to investigate the problems further.
In April, the month the rig exploded, workers encountered obstructions in the well. Most of the problems were conveyed to federal regulators, according to federal records. Many of the incidents required that BP get a permit for a new tactic for dealing with the problem.
One of the final indications of such problems was an April 15 request for a permit to revise its plan to deal with a blockage, according to federal documents obtained from Congress by the Center for Biological Diversity, an environmental advocacy group.
In the documents, company officials apologized to federal regulators for not having mentioned the type of casing they were using earlier, adding that they had “inadvertently” failed to include it. In the permit request, they did not disclose BP’s own internal concerns about the design of the casing.
Less than 10 minutes after the request was submitted, federal regulators approved the permit.
Robbie Brown contributed reporting from Kenner, La., and Andy Lehren from New York.
Small gaps cascade to produce big impacts. These can be performance or knowledge gaps
This characterizes the cascading 2nd order and 3rd order effects.
For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the battle was lost.
For want of a battle the kingdom was lost.
And all for the want of a horseshoe.
What Does “Knowledge Collapse” Really Mean?
“Process breakdown due to knowledge resource failure.”
Today, organizations “run on” knowledge
Knowledge is as essential as gas and oil in a car; food and water to an organism
In other words, the flow of knowledge is essential to operation; stopping the flow causes problems; if enough problems occur, processes break down.
How “gaps” occur
Complacency breeds neglect, and ..
Failure to ”detect weak signals”
Acceptance of faulty assumptions
Failure to implement risk management
Disconnects between COOP and Business as Usual
The difference between “business as usual” and a disaster is often a matter of degree in terms of the “number of processes impacted.”
In other words, disaster often is different from business as usual in terms of the degree, but not the type of dysfunction.
The implications of this are:
It is possible to plan and prepare for disaster by building resilience into "business as usual," because the same processes are involved in both.
Building Resilience into “Business as Usual”
Decision and performance support systems that are integrated with knowledge and information sources
Combine KM & Risk Management
See the article “Fusing Risk Management & Knowledge Management” by ..
Knowledge Based Risks (KBR’s)
Infuse knowledge into work processes
- Close knowledge gaps by:
-providing broader access to risk information
-capture and transfer what you learn along the way
Identify Risks
- Lessons learned – often these were risks that challenged an earlier program
KBRs provide that, plus …
Analysis and planning info
Four Practices
1. Pause and Learn (based on US Army’s After Action Review program)
2. Knowledge Sharing Forums
3. Experience based training using case studies
4. Web-enabled teams
Learn from Failure
Comprehensive “who knows what, where” resource
Active, dynamic social networking systems
July 19, 2010
Taking Lessons From What Went Wrong
By WILLIAM J. BROAD
Disasters teach more than successes.
While that idea may sound paradoxical, it is widely accepted among engineers. They say grim lessons arise because the reasons for triumph in matters of technology are often arbitrary and invisible, whereas the cause of a particular failure can frequently be uncovered, documented and reworked to make improvements.
Disaster, in short, can become a spur to innovation.
There is no question that the trial-and-error process of building machines and industries has, over the centuries, resulted in the loss of much blood and many thousands of lives. It is not that failure is desirable, or that anyone hopes for or aims for a disaster. But failures, sometimes appalling, are inevitable, and given this fact, engineers say it pays to make good use of them to prevent future mistakes.
The result is that the technological feats that define the modern world are sometimes the result of events that some might wish to forget.
“It’s a great source of knowledge — and humbling, too — sometimes that’s necessary,” said Henry Petroski, a historian of engineering at Duke University and author of “Success Through Failure,” a 2006 book. “Nobody wants failures. But you also don’t want to let a good crisis go to waste.”
Now, experts say, that kind of analysis will probably improve the complex gear and procedures that companies use to drill for oil in increasingly deep waters. They say the catastrophic failure involving the Deepwater Horizon oil rig in the Gulf of Mexico on April 20 — which took 11 lives and started the worst offshore oil spill in United States history — will drive the technological progress.
“The industry knows it can’t have that happen again,” said David W. Fowler, a professor at the University of Texas, Austin, who teaches a course on forensic engineering. “It’s going to make sure history doesn’t repeat itself.”
One possible lesson of the disaster is the importance of improving blowout preventers — the devices atop wells that cut off gushing oil in emergencies. The preventer on the runaway well failed. Even before the disaster, the operators of many gulf rigs had switched to more advanced preventers, strengthening this last line of defense.
Of course, an alternative to improving a particular form of technology might be to discard it altogether as too risky or too damaging.
Abandoning offshore drilling is certainly one result that some environmentalists would push for — and not only because of potential disasters like the one in the gulf. They would rather see technologies that pump carbon into the atmosphere, threatening to speed global climate change, go extinct than evolve.
In London on June 22 at the World National Oil Companies Congress, protesters from Greenpeace interrupted an official from BP, the company that dug the runaway well. Planetary responsibility, a protestor shouted before being taken away, “means stopping the push for dangerous drilling in deep waters.”
The history of technology suggests that such an end is unlikely. Devices fall out of favor, but seldom if ever get abolished by design. The explosion of the Hindenburg showed the dangers of hydrogen as a lifting gas and resulted in new emphasis on helium, which is not flammable, rather than ending the reign of rigid airships. And engineering, by definition, is a problem-solving profession. Technology analysts say that constructive impulse, and its probable result for deep ocean drilling, is that innovation through failure analysis will make the wells safer, whatever the merits of reducing human reliance on oil. They hold that the BP disaster, like countless others, will ultimately inspire technological advance.
The sinking of the Titanic, the meltdown of the Chernobyl reactor in 1986, the collapse of the World Trade Center — all forced engineers to address what came to be seen as deadly flaws.
“Any engineering failure has a lot of lessons,” said Gary Halada, a professor at the State University of New York at Stony Brook who teaches a course called “Learning from Disaster.”
Design engineers say that, too frequently, the nature of their profession is to fly blind.
Eric H. Brown, a British engineer who developed aircraft during World War II and afterward taught at Imperial College London, candidly described the predicament. In a 1967 book, he called structural engineering “the art of molding materials we do not really understand into shapes we cannot really analyze, so as to withstand forces we cannot really assess, in such a way that the public does not really suspect.”
Among other things, Dr. Brown taught failure analysis.
Dr. Petroski, at Duke, writing in “Success Through Failure,” noted the innovative corollary. Failures, he said, “always teach us more than the successes about the design of things. And thus the failures often lead to redesigns — to new, improved things.”
One of his favorite examples is the 1940 collapse of the Tacoma Narrows Bridge. The span, at the time the world’s third-longest suspension bridge, crossed a strait of Puget Sound near Tacoma, Wash. A few months after its opening, high winds caused the bridge to fail in a roar of twisted metal and shattered concrete. No one died. The only fatality was a black cocker spaniel named Tubby.
Dr. Petroski said the basic problem lay in false confidence. Over the decades, engineers had built increasingly long suspension bridges, with each new design more ambitious.
The longest span of the Brooklyn Bridge, which opened to traffic in 1883, was 1,595 feet. The George Washington Bridge (1931) more than doubled that distance to 3,500 feet. And the Golden Gate Bridge (1937) went even farther, stretching its middle span to 4,200 feet.
“This is where success leads to failure,” Dr. Petroski said in an interview. “You’ve got all these things working. We want to make them longer and more slender.”
The Tacoma bridge not only possessed a very long central span — 2,800 feet — but its concrete roadway consisted of just two lanes and its deck was quite shallow. The wind that day caused the insubstantial thoroughfare to undulate wildly up and down and then disintegrate. (A 16-millimeter movie camera captured the violent collapse.)
Teams of investigators studied the collapse carefully, and designers of suspension bridges took away several lessons. The main one was to make sure the road’s weight and girth were sufficient to avoid risky perturbations from high winds.
Dr. Petroski said the collapse had a direct impact on the design of the Verrazano-Narrows Bridge, which opened in 1964 to link Brooklyn and Staten Island. Its longest span was 4,260 feet — making it, at the time, the world’s longest suspension bridge and potentially a disaster-in-waiting.
To defuse the threat of high winds, the designers from the start made the roadway quite stiff and added a second deck, even though the volume of traffic was insufficient at first to warrant the lower one. The lower deck remained closed to traffic for five years, opening in 1969.
“Tacoma Narrows changed the way that suspension bridges were built,” Dr. Petroski said. “Before it happened, bridge designers didn’t take the wind seriously.”
Another example in learning from disaster centers on an oil drilling rig called Ocean Ranger. In 1982, the rig, the world’s largest, capsized and sank off Newfoundland in a fierce winter storm, killing all 84 crew members. The calamity is detailed in a 2001 book, “Inviting Disaster: Lessons from the Edge of Technology,” by James R. Chiles.
The floating rig, longer than a football field and 15 stories high, had eight hollow legs. At the bottom were giant pontoons that crewmen could fill with seawater or pump dry, raising the rig above the largest storm waves — in theory, at least.
The night the rig capsized, the sea smashed in a glass porthole in the pontoon control room, soaking its electrical panel. Investigators found that the resulting short circuits began a cascade of failures and miscalculations that resulted in the rig’s sinking.
The lessons of the tragedy included remembering to shut watertight storm hatches over glass windows, buying all crew members insulated survival suits (about $450 each at the time) and rethinking aspects of rig architecture.
“It was a terrible design,” said Dr. Halada of the State University of New York. “But they learned from it.”
Increasingly, such tragedies get studied, and not just at Stony Brook. The Stanford University Center for Professional Development offers a graduate certificate in advanced structures and failure analysis. Drexel University offers a master’s degree in forensic science with a focus on engineering.
So too, professional engineering has produced a subspecialty that investigates disasters. One of the biggest names in the business is Exponent, a consulting company based in Menlo Park, Calif. It has a staff of 900 specialists around the globe with training in 90 engineering and scientific fields.
Exponent says its analysts deal with everything from cars and roller coasters to oil rigs and hip replacements. “We analyze failures and accidents,” the company says, “to determine their causes and to understand how to prevent them.”
Forensic engineers say it is too soon to know what happened with Deepwater Horizon, whose demise flooded the gulf with crude oil. They note that numerous federal agencies are involved in a series of detailed investigations, and that President Obama has appointed a blue-ribbon commission to make recommendations on how to strengthen federal oversight of oil rigs.
But the engineers hold, seemingly with one voice, that the investigatory findings will eventually improve the art of drilling for oil in deep waters — at least until the next unexpected tragedy, and the next lesson in making the technology safer.
One lesson might be to build blowout preventers with more than one blind shear ram. In an emergency, the massive blades of these devices slice through the drill pipe to cut off the flow of gushing oil. The Deepwater Horizon had just one, while a third of the rigs in the gulf now have two.
Perhaps regulators will decide that rig operators, whatever the cost, should install more blind shear rams on all blowout preventers.
“It’s like our personal lives,” said Dr. Fowler of the University of Texas. “Failure can force us to make hard decisions.”
Project Success Criteria used by Standish
1. User Involvement
2. Executive Management Support
3. Clear Statement of Requirements
4. Proper Planning
5. Realistic Expectations
6. Smaller Project Milestones
7. Competent Staff
8. Ownership
9. Clear Vision & Objectives
10. Hardworking, Focused Staff
Cobb’s Paradox
In 1995, Martin Cobb worked for the Secretariat of the Treasury Board
of Canada. He attended The Standish Group’s CHAOS University, where the
year’s 10 most complex information technology (IT) projects are analyzed
and discussed. The 10 most complex IT projects studied by The Standish
Group in 1994 were all in trouble: eight were over schedule, on average
by a factor of 1.6 and over budget by a factor of 1.9; the other two were
cancelled and never delivered anything. That led Cobb to state his now-famous
paradox (Cobb, 1995): “We know why [programs] fail; we know how
to prevent their failure—so why do they still fail?”
Cobb’s Paradox states, “We know why [programs] fail; we
know how to prevent their failure—so why do they still fail?”
One possibility is that we do not really know why programs fail
and there is no paradox. Another possibility is that some of the
problems that lead to program failure may not be susceptible
to practical solution, so that continued failure is not paradoxical.
This article defines what we mean by nonstationary
root causes of program failures, and identifies 10 such causes.
Requirements volatility, funding stability, process immaturity,
and lack of discipline are often cited among the reasons. The
article ends with recommended approaches to mitigate the
effects of influences from the environment that change over
time—nonstationary effects.
In 2007, the many examples of government project failures led then-
Under Secretary of Defense for Acquisition, Technology and Logistics
John Young to issue a memorandum that requires prototyping and
competition on all major programs up to Milestone B (Young, 2007).
Young’s memorandum was a propitious start. But is it likely to be sufficient
to solve all the problems that lead to project failure?
This article summarizes the number and spectrum of project failures,
and makes the case that project failures cannot be attributed solely to
mismanagement on the part of project managers. Rather, it appears
improbable that all project managers of large complex projects could
produce similar failures. The prevailing perception throughout the
acquisition community is that program and project managers know why
projects fail and how to prevent them from failing. The authors discuss
the concept of other influences from the environment that change over
time—nonstationary effects—that may be the root cause of these numerous
project failures.
Background
In 2006, a Government Accountability Office report (GAO, 2006)
highlighted several government project failures.
In the last 5 years, the Department of Defense (DoD) has doubled
its planned investments in new weapon systems from about $700
billion in 2001 to nearly $1.4 trillion in 2006. While the weapons
that DoD develops have no rival in superiority, weapon systems
acquisition remains a long-standing, high-risk area. GAO's reviews
over the past 30 years have found consistent problems with
weapon acquisitions such as cost increases, schedule delays, and
performance shortfalls.
The report goes on to state that this huge increase in spending over the
past 5 years “has not been accompanied by more stability, better outcomes,
or more buying power for the acquisition dollar.” Examples of this huge
increase in spending follow:
• Capable satellites, potential overrun of $1.4 billion
• Satellite payload cost and schedule overruns greater than
$1.1 billion
• Radar contract projected to overrun target cost by up to 34
percent
• Advanced Precision Kill Weapon System (Joint Attack
Munition Systems), curtailment of initial program in January
2005 due to development cost overruns, projected schedule
slip of 1–2 years, unsatisfactory contract performance, and
environmental issues
• C-5 Avionics Modernization Program, $23 million cost overrun
• C-5 Reliability Enhancement and Re-engineering Program,
$209 million overrun
• F-22A, increase in the costs of avionics since 1997 by
more than $951 million or 24 percent, and other problems
discovered late in the program.
On March 31, 2006, Comptroller General of the United States David M.
Walker stated in congressional testimony:
The cost of developing a weapon system continues to often
exceed estimates by approximately 30 percent to 40 percent.
This in turn results in fewer quantities, missed deadlines, and
performance shortfalls. In short, the buying power of the weapon
system investment dollar is reduced, the warfighter gets less than
promised, and opportunities to make other investments are lost.
This is not to say that the nation does not get superior weapons
in the end, but that at twice the level of investment. DoD has an
obligation to get better results. In the larger context, DoD needs to
make changes…consistent with getting the desired outcomes from
the acquisition process.
Cobb’s Paradox
The Standish Group uses project success criteria from surveyed IT
managers to create a success-potential chart. The success criteria are
shown in the Table, where they are ranked according to their perceived
importance. There seems to be an assumption that all the criteria are
stationary—that they are assumed to be present on any specific project to
some degree and do not change over time except potentially for the better
with conscious effort. A little more formally, a process or system is said to
be stationary if its behavioral description does not change over time, and
nonstationary if its behavioral description does change over time.
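The distinction above can be illustrated with a small simulation (a sketch of my own, not drawn from the article): a stationary process, whose statistical behavior is the same early and late in the series, versus a nonstationary one, whose mean drifts over time. All names here are illustrative.

```python
import random

random.seed(42)

def stationary(n):
    # White noise: mean and variance do not change over time
    return [random.gauss(0, 1) for _ in range(n)]

def nonstationary(n, drift=0.05):
    # Same noise plus a drifting mean: the behavioral
    # description (here, the mean) changes over time
    return [drift * t + random.gauss(0, 1) for t in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

n = 2000
s = stationary(n)
ns = nonstationary(n)

# Compare the first and second halves of each series:
# a stationary series has similar half-means; a drifting one does not.
print(abs(mean(s[:n // 2]) - mean(s[n // 2:])))   # small
print(abs(mean(ns[:n // 2]) - mean(ns[n // 2:])))  # large
```

The analogy to the article's argument: a success criterion treated as stationary (e.g., "executive support") is assumed to hold steadily across a program's life, while a nonstationary influence shifts underneath the plan, so controls calibrated to early behavior stop working later.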