The document provides an overview of a presentation on evolving cyber risk from a multidisciplinary perspective. It discusses emerging cyber risks like hacking of industrial control systems, medical devices, and driverless cars. It acknowledges that emerging risks are difficult to anticipate because they involve things we are unaware of or combinations of known risks in new ways. Emerging risks often have complex and opaque relationships that challenge traditional risk modeling approaches. The presentation emphasizes some cognitive and group biases that can cause emerging risks to be overlooked or underestimated. It concludes by noting the importance of considering alternative perspectives to account for the context-dependence of risk.
Man’s dreams of ‘intelligences and robots’ go back thousands of years to the worship of gods and statues; mythologies; talismans and puppets; people, places and objects with supposed magical and (often) judgemental/punitive abilities. But it wasn’t until the electronic revolution beginning in 1915, accelerated by WWII, that we saw the realisation of two game-changing machines: Colossus (the decoding machine of Bletchley Park, 1943) and ENIAC (artillery computation and nuclear bomb design at the University of Pennsylvania, 1946).
And so in 1950 the modern AI movement was optimistically projecting that machines would be capable of ‘almost anything’ by 1960/70. Unfortunately, there was no understanding of the complexity to be addressed, and all the projections were wildly wrong, leading to a deep trough of disparagement and disillusionment lasting some 30 years. However, 70 years on, the original AI optimism and projections of what might be have at last been largely achieved, with AI outgunning humans at every board and card game, including Poker and Go, and of course at general knowledge, medical diagnosis, and image and information pattern recognition…
Even a partial experience of this nectar is enough to make life shine, and for it to come, continual purification of the mind is necessary. This purification of the mind takes place through remembrance of the Name (namasmarana), and even that purification cannot be recognised or demonstrated by any outward sign!
Almost 70 years since the first computer bug was discovered, there have been decades of research on Information Security theory and practice. Yet, despite vast amounts of money being spent, innumerable academic papers, mainstream media obsession, and entire industries being formed, we are left with the impression that the risk is growing, not receding. Why? Some argue a lack of data, but data clearly exists; we’re likely generating it, in some areas, faster than humans will ever be able to process it. Perhaps, after all of this effort, we’ve managed to box ourselves into metaphors and first principles that might be inappropriately constraining how we think about “Information Security Risk”. In fact, it’s worth noting that we can’t even agree on whether there is a space between “Cyber” and “Security” when it’s written out. This talk will take an anecdotal look at “Information Security Risk” and “What IS Cyber Security?”, and use that perspective to suggest areas of research that are either lacking or should be made more accessible to the markets, industries, and individuals driving risk management change. In an industry filled with data, perhaps an examination of empty space might be helpful.
Modern Network Security Threats
Data confidentiality, integrity, availability
Risks, threats, vulnerabilities and countermeasures
Methodology of a structured attack
Security model (McCumber cube)
Security policies, standards and guidelines
Selecting and implementing countermeasures
Network security design
Network security initiatives and network security specialists can be found in private and public, large and small companies and organizations
The reactionary state of the industry means that we quickly identify the ‘root cause’ in terms of ‘human error’, an object onto which to attribute and shift blame. Hindsight bias often confuses our personal narrative with the truth, an objective fact that we as investigators can never fully know. The poor state of self-reflection, limited human factors knowledge, and the nature of resource constraints further incentivize this vicious pattern. This approach results in unnecessary and unhelpful assignment of blame, isolation of the engineers involved, and ultimately a culture of fear throughout the organization. Mistakes will always happen.
Rather than failing fast and encouraging experimentation, the traditional process often discourages creativity and kills innovation. As an alternative to simply reacting to failures, the security industry has been overlooking valuable chances to further understand and nurture ‘accidents’ or ‘mistakes’ as opportunities to proactively strengthen system resilience. Expose the failures, build resilient systems, and develop an "Applied Security" model to minimize the impact of failures. In this session we will discuss the role of ‘human error’, root cause, and resilience engineering in our industry, and how we can use new techniques such as Chaos Engineering to make a difference.
Security-focused Chaos Engineering proposes that the only way to understand this uncertainty is to confront it objectively by introducing controlled signals. During this session we will cover some key concepts in Safety & Resilience Engineering based on Sidney Dekker’s 30 years of research into airline accident investigations, and how new techniques such as Chaos Engineering are making a difference in improving our ability to learn from incidents proactively, before they become destructive.
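The "controlled signals" idea above can be sketched in miniature: wrap a dependency so it fails at a known rate, then check that the system's steady-state hypothesis ("callers always get a usable profile") still holds under injected faults. Everything here (the `fetch_profile` service, the cache fallback, the failure rate) is an illustrative assumption, not any real chaos-engineering tooling.

```python
import random

# Hypothetical service: returns cached data when the primary lookup fails.
def fetch_profile(user_id, lookup, cache):
    try:
        return lookup(user_id)
    except ConnectionError:
        # Graceful degradation: fall back to the (possibly stale) cache.
        return cache.get(user_id, {"id": user_id, "source": "default"})

# Chaos experiment: wrap the dependency so it fails with a known probability,
# then verify the steady-state hypothesis against many calls.
def flaky(func, failure_rate=0.3, rng=random.Random(42)):
    def wrapper(*args):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return func(*args)
    return wrapper

real_lookup = lambda uid: {"id": uid, "source": "db"}
cache = {7: {"id": 7, "source": "cache"}}

chaotic = flaky(real_lookup)
results = [fetch_profile(7, chaotic, cache) for _ in range(100)]
assert all(r["id"] == 7 for r in results)   # hypothesis holds under fault injection
sources = {r["source"] for r in results}
print(sorted(sources))  # → ['cache', 'db']  (both primary and fallback paths exercised)
```

The point of the exercise is not the toy code but the discipline: faults are injected deliberately and in a controlled, observable way, so the fallback path is verified before a real outage forces it.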
Beyond the Knowledge Base: Turning Data into Wisdom – an ITSM Academy Webinar (ITSM Academy, Inc.)
Many organizations perceive that Knowledge Management begins and ends with a Knowledge Base. However, a more robust Knowledge Management process exists: the KM process is a pipeline to Continual Service Improvement. This presentation provides insight and methods for developing and implementing a more comprehensive Knowledge Management process, leading to improvement throughout the enterprise. It covers the design of the KM process, DIKW and its uses, the KM-CSI connection, knowledge repositories and much more.
ERM in general
Observations from the CAS ERM Online Course
Issues in advancing ERM
ERM as complex systems analysis
ERM as an evolutionary process
ERM as subject to behavioral patterns
Conclusion
This keynote explores the future of risk analysis by revisiting the history of forecasting, mankind's eternal desire for quality datasets, and the strategic trends that will help us deal with risk in an increasingly connected, complex world. Learn more at https://www.competitivefutures.com and https://www.ericgarland.co
The Rising Tide Lifts All Boats: The Advancement of Science in Cybersecurity (laurieannwilliams)
Stolen passwords, compromised medical records, taking the internet out through video cameras: cybersecurity breaches are in the news every day. Despite all this, the practice of cybersecurity today is generally reactive rather than proactive. That is, rather than improving their defenses in advance, organizations react to attacks once they have occurred by patching the individual vulnerabilities that led to those attacks. Researchers engineer solutions to the latest form of attack. What we need, instead, are scientifically founded design principles for building in security mechanisms from the beginning, giving protection against broad classes of attacks. Through scientific measurement, we can improve our ability to make decisions that are evidence-based, proactive, and long-sighted. Recognizing these needs, the US National Security Agency (NSA) devised a new framework for collaborative research, the “Lablet” structure, with the intent to more aggressively advance the science of cybersecurity. A key motivation was to catalyze a shift in relevant areas towards a more organized and cohesive scientific community. The NSA named Carnegie Mellon University, North Carolina State University, and the University of Illinois at Urbana-Champaign its initial Lablets in 2011, and added the University of Maryland in 2014.
This talk will reflect on the structure of the Lablets’ collaborative research efforts, lessons learned in bringing more scientific approaches to cybersecurity, research results in solving five hard security problems, and the methods being used to measure the scientific progress of the Lablet research.
ISO/IEC 27032 vs. ISO 31000 – How do they help towards Cybersecurity Risk Man... (PECB)
Organizations need to implement a risk management strategy in order to mitigate, and whenever possible, eliminate cyber risks and threats.
ISO/IEC 27032 and ISO 31000 combined help you to manage cyber risks.
Amongst others, the webinar covers:
• ISO/IEC 27032 vs. ISO 31000
• IRTVH Assessment Framework
Presenters:
Sherifat Akinwonmi
Sherifat is a Cyber Security professional with over 12 years of experience across diverse industries including Agriculture, Oil & Energy Services, Pharmaceuticals, Financial and IT services.
She is part of the top 20 Canadian Women in Cybersecurity – ITWC. She is also a Business Information Security Officer (BISO) with one of the top banks in North America.
Sherifat is a member of several boards, including the Advisory Board for Canadian Women in Cybersecurity and the Girls & Women Technological Empowerment Organization (GWTEO).
She has a great passion and interest in enabling women in their professional careers. She volunteers her time mentoring young people to launch their careers in Technology and supports the less privileged.
Geary Sikich
Geary Sikich is a Senior Crisis Management Consultant at Health Care Service Corporation (HCSC). Prior to joining HCSC, Geary was a Principal with Logical Management Systems, Corp., a management consulting and executive education firm with a focus on enterprise risk management, contingency planning, executive education and issues analysis. Geary developed LMSCARVER™, the “Active Analysis” framework, which directly links key value drivers to operating processes and activities. LMSCARVER™ provides a framework that enables a progressive approach to business planning, scenario planning, performance assessment and goal setting.
Prior to founding Logical Management Systems, Corp. in 1985, Geary held a number of senior operational management positions in a variety of industry sectors. Geary served in the U.S. Army, where he was responsible for the initial concept design and testing of the U.S. Army's National Training Center and other related activities. Geary holds an M.Ed. in Counseling and Guidance from the University of Texas at El Paso and a B.S. in Criminology from Indiana State University.
Geary has developed and taught courses for Norwich University, the University of Nevada, Reno, George Washington University and the University of California, Berkeley. He is active in Executive Education, where he has developed and delivered courses in enterprise risk management, contingency planning, performance management and analytics. Geary is a frequent speaker on business continuity issues and business performance management.
Date: October 12, 2022
NASA (National Aeronautics and Space Administration)
NASA Discovery Strategic Foresight
Helping Aviation Find Problems Worth Solving
By Dr. Pankaj Dhussa
Similar to 2016-10-21 CAS Annual Meeting Slides (20)
3. Antitrust Notice
• The Casualty Actuarial Society is committed to adhering strictly to the letter and spirit of the
antitrust laws. Seminars conducted under the auspices of the CAS are designed solely to
provide a forum for the expression of various points of view on topics described in the
programs or agendas for such meetings.
• Under no circumstances shall CAS seminars be used as a means for competing companies
or firms to reach any understanding – expressed or implied – that restricts competition or in
any way impairs the ability of members to exercise independent business judgment
regarding matters affecting competition.
• It is the responsibility of all seminar participants to be aware of antitrust regulations, to
prevent any written or verbal discussions that appear to violate these laws, and to adhere in
every respect to the CAS antitrust compliance policy.
4. Other “Housekeeping”
• Information presented today reflects the opinions of the
presenters and is not intended to, nor should it be, imputed to
any of their employers.
5. Introduction to Session
• Learning Objectives
– Understanding emerging risk in the cyber space
• Deconstructing “emerging risk”
• Cognitive/perceptual “blind spots” and how those may impact decision making
• Applications/implications for insurance
– Capacity
– SIRs/Deductibles
– Rethinking emerging risk in the cyber arena
6. Introduction to Dr. Kathleen Locklear
• Adjunct Professor at University of Maryland, University College
– Dissertation: “Emerging Risk: A Systems Thinking and Complexity
Approach”
– Research Interests: emerging risk, systems thinking, complexity
theory, risk perception
• Senior Director, Global Insurance – Teva Pharmaceuticals
– World’s largest generics pharma producer; $19.7B revenue (FY 2015)
• RIMS Strategic Risk Management Committee Member
7. Introduction to Michael Solomon
• FCAS, MAAA, CERA
• 1st Prize, Society of Actuaries/Casualty Actuarial Society Joint Risk Section Cybersecurity Call for Essays
• 1st Prize, Professionally Speaking Toastmasters public speaking
competition
• CAMAR Vice President
• Member, Committee for P&C focused ERM Seminars
• Member, CAS/CIA/SOA Impairment Project Oversight Group
12. “We also know there are known unknowns; that is to
say, we know there are some things we do not know.
But there are also unknown unknowns – the ones we
don’t know we don’t know.”
United States Secretary of Defense, Donald Rumsfeld
Press Briefing, February 12, 2002
13. Category 1~ Framing the Problem
• Things we don’t know… but we’re aware of it!
– Emerging Risk as an “information gap”
– Solutions:
• Ask others
• Fill in the data points, gaps
14. Category 2~ Framing the Problem
• Things we don’t know… and we’re unaware of it!
– Much more difficult
– NOT merely an information gap that can be filled
– Emergent nature presents specific challenges and limitations
15. Category 2~ A Closer Look
(Things we don’t know… and we’re unaware of it!)
• Emerging risk~ “A novel manifestation of risk, of a type that
has not been experienced before” (Locklear, 2011)
– Pure- never experienced before, at all, by anyone
• example: nanotechnology, fracking, genetically modified crops
– Hybrid- blends together known risk types in new ways (combinations)
to produce outcomes that haven’t been experienced before
• Example: Zoonotic disease + global warming = (zika?)
• Example: Overstressed power grids + greater dependence on
telecommunications + {X Factor} = ???
16. Challenges of Emerging Risk
• ‘Relational Complexity’: Growing difficulty in determining
relationships among causal factors, making risk more ‘opaque’
– Richardson, Cilliers & Lissack (2001)
• Under these conditions, causes and effects no longer have simple, linear
connections
• It’s more difficult to ascertain interactions between elements
• Implication: Challenges for appropriately pricing and structuring insurance where the “cause” (i.e., the “trigger”, for insurance) is not always apparent
17. Challenges of Emerging Risk
• Amplification/Cascade potential: Seemingly simple root causes can
trigger events which cascade through a network and are amplified to
produce an extreme event
– Example: August 2003 mega-power outage
• Ohio, Maryland, New York, Toronto
• Overstressed lines failed in Ohio after contacting overgrown tree limbs (Holbrook, 2010)
• Expected outcome was a minor, local outage
• Implication: Appropriate pricing for insurance, as well as capacity, are challenged when
a seemingly minor event is amplified
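The amplification/cascade dynamic above can be made concrete with a toy load-redistribution model: when one line fails, its load shifts to the survivors, which can push them over capacity in turn. The numbers and the even-redistribution rule here are purely illustrative, not a model of the 2003 grid.

```python
# Toy cascade model: each line carries load up to a capacity; when a line
# fails, the total load of all failed lines is redistributed evenly to the
# surviving lines, which can push further lines over capacity.
# All numbers are illustrative.
def cascade(loads, capacities, initial_failure):
    failed = {initial_failure}
    while True:
        survivors = [i for i in range(len(loads)) if i not in failed]
        if not survivors:
            return failed                      # total blackout
        shed = sum(loads[i] for i in failed)   # load with nowhere to go
        per_line = shed / len(survivors)
        newly_failed = {i for i in survivors
                        if loads[i] + per_line > capacities[i]}
        if not newly_failed:
            return failed                      # cascade has stopped
        failed |= newly_failed

loads      = [80, 85, 90, 70, 60]    # current flow on each line
capacities = [100, 100, 100, 95, 90]

# A single overgrown-tree contact takes out line 2 ...
print(sorted(cascade(loads, capacities, initial_failure=2)))
# → [0, 1, 2, 3, 4]  (the whole toy grid fails)
```

The expected outcome of losing one line is a minor, local outage; because every survivor was already running near capacity, the single failure is amplified into a system-wide one, which is the insurance-pricing problem the slide describes.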
18. Challenges of Emerging Risk
• Emerging risk is often opaque, clouded within a complex web of
causal factors, until it escalates into an extreme event
– In an environment of great complexity, emerging risk may remain
“hidden” (latent)
– Modern structures, like the “Internet of Things” provide rich
environments for this period of latency
– Implication: Insurance/risk management need to use different tools
• Environmental scanning can help identify early “signals of change” (Ashley & Morrison,
1997) which if overlooked, ignored or downplayed, can allow emerging risk to
continue along its development trajectory
19. Challenges of Emerging Risk
• May involve rapid and widespread deployment of new/novel
technology/modalities
• By the time an issue is identified, the problem is already
extensive
20. “Classic” Example of Emerging Risk
• Asbestos (“Emerged” risk)
– Naturally occurring, used as far back as ancient Greece
– Industrial revolution, insulator for furnaces
– Subsequently used far and wide
– Then problems grew apparent (1960s)
– Extensive litigation, ongoing abatement problems
– Classic illustration of unexpected impact for insurers
– NOTE: Hindsight is 20/20!
21. Challenges of Emerging Risk
• Lack of historical data OR data that is not entirely relevant
– The capabilities of traditional risk management tools (quantitative, predictive)
are being stretched when applied to emerging risk
• Traditional modeling does not “fit” the challenges of “unknown-unknowns”
• Implications: In order to optimize approaches, including insurance, “non-traditional” tools may be needed
– Environmental scanning, systems thinking
22. Our “Human” Challenges
• Tendency to focus within the “comfort zone”
– Risks that are well known, well understood
– Lots of data available for analysis
– “Pure” risks that either happen or don’t (e.g., fire), with no up-side potential
• We tend to heavily value corroborating factual information and
discount outliers, non-conforming information
23. More “human” challenges
• Recent movie “Everest”
• Roberto, M. A. (2002). Lessons from Everest: The interaction of
cognitive bias, psychological safety, and system complexity.
California Management Review, 45(1), 136-158.
24. “Human” challenges: Lessons from the movie ‘Everest’
• Commitment escalation- continuing to invest resources,
commitment to a course of action that increasingly appears questionable
at best (Staw, 1987)
– Led climbers to ignore rules and place themselves in increasing danger
• Recency bias- tendency to focus on more recent events
– Hindered the judgment of the expedition leaders, who had experienced good weather on Everest in recent prior years, causing them to underestimate the severity of the storm despite historical data showing that the conditions on May 10, 1996 were anything but abnormal.
25. More “human” challenges
• Groupthink
– Defective decision making that occurs when conformity pressures of a group
lead to faulty decisions, made in an effort to preserve group harmony.
– Defective ‘groupthink’ decision making is characterized by the following
attributes: poor information searching; selective bias in information
processing; incomplete surveying of objectives and alternatives; failure to
re-examine choices and rejected alternatives; and failure to develop
contingency plans (Janis & Mann, 1977, p. 132).
• “a disease of insufficient search for information, alternatives and
modes of failure” (McCauley, 1998, p. 144)
26. Something to Consider:
Lessons from the Black Swan (Taleb, 2007)
• Extreme outlier (unpredictable)
• Thought not to exist (improbable)
• Outside the boundaries of “normal” expectations
• It can’t be… therefore it isn’t
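The "outside the boundaries of normal expectations" point can be shown numerically: under a thin-tailed (normal) assumption the worst observed loss stays close to the mean, while a heavy-tailed world of similar average scale routinely produces outliers far beyond anything the normal model would predict. The distributions and parameters below are illustrative only.

```python
import random
import statistics

rng = random.Random(0)

# Thin-tailed world: 100,000 "losses" drawn from a normal distribution.
normal_losses = [rng.gauss(10, 2) for _ in range(100_000)]

# Heavy-tailed world: Pareto-distributed losses (alpha = 1.5), scaled so the
# average is of a similar order of magnitude.
pareto_losses = [rng.paretovariate(1.5) * 4 for _ in range(100_000)]

for name, losses in [("normal", normal_losses), ("pareto", pareto_losses)]:
    mean = statistics.fmean(losses)
    worst = max(losses)
    # Under thin tails, the worst case sits a few standard deviations from
    # the mean; under heavy tails it can be orders of magnitude larger.
    print(f"{name:>6}: mean={mean:6.1f}  worst observed={worst:10.1f}")
```

A model calibrated to the normal sample would treat the Pareto world's worst observation as effectively impossible ("it can't be… therefore it isn't"), which is precisely Taleb's Black Swan trap.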
28. CONCEPTUALIZING EMERGING CYBER RISK
• “FOOD FOR THOUGHT”
• SCADA
• HACKING OF MEDICAL DEVICES
• HACKING OF DRIVERLESS CARS
29. Hacking in action-
Use of smart phones
• Max Cornelisse, Netherlands
– hacking train schedule board
– turning building lights on/off
– raising/lowering drawbridge
– changing digital highway road sign
– Videos available on YouTube
– Real, not real?
30. SCADA
• Supervisory Control and Data Acquisition
– Refers to industrial control systems (ICS)
• Computer systems that monitor and control industrial, infrastructure
or facility-based processes
• System collects data from various sensors at factory, plant or other
remote locations
• Sends data to central computer that manages and controls the data
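The data flow just described (remote sensors are polled and readings flow to a central computer that monitors and controls) can be sketched as follows; every device name, tag and control limit here is hypothetical.

```python
# Minimal sketch of the SCADA data flow described above: remote sensors are
# polled each cycle, and readings are sent to a central supervisor that
# checks them against control limits and raises alarms.
class Sensor:
    def __init__(self, tag, read_fn):
        self.tag, self.read_fn = tag, read_fn

    def poll(self):
        return {"tag": self.tag, "value": self.read_fn()}

class Supervisor:
    def __init__(self, limits):
        self.limits = limits          # tag -> (low, high)
        self.alarms = []

    def ingest(self, reading):
        low, high = self.limits[reading["tag"]]
        if not (low <= reading["value"] <= high):
            self.alarms.append(reading)

sensors = [
    Sensor("pump_pressure_psi", lambda: 182.0),
    Sensor("tank_level_pct",    lambda: 97.5),   # above its high limit
]
scada = Supervisor({"pump_pressure_psi": (50, 200), "tank_level_pct": (10, 95)})

for s in sensors:                     # one polling cycle
    scada.ingest(s.poll())

print([a["tag"] for a in scada.alarms])  # → ['tank_level_pct']
```

The security-relevant observation is that the same central point that makes monitoring convenient also means a compromised supervisor, or spoofed sensor readings, can silently suppress or fabricate alarms across every remote site it manages.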
31. Possible SCADA Hacking Scenarios
• Power outages (blackouts, across grids)
• Waste water mixed with drinking water
• Disruption of manufacturing lines
• Transportation disruption/shut down
• NOTEWORTHY
– Potential for impact far away from the compromised source itself
– Amplification of impact
– Cascade effect
32. Actual SCADA Incidents
• Thirteen assembly lines shut down at Daimler-Chrysler, Zotob
worm, 2005
• Springfield, Illinois public water supply pump burned out after
being cycled on and off repeatedly (November 2011) through
an IP address in Russia
33. Actual SCADA Incidents
• Ohio Davis-Besse nuclear power plant safety monitoring system
off line for 5 hours, January 2003, Slammer worm
• Brisbane hacker used radio transmissions to create raw sewage
overflows on Sunshine coast (2000)
• CSX Transportation computers infected by virus (August 2003)
halting train traffic in Washington, D.C.
34. Emerging threats to critical infrastructure
• A computer virus attacked a turbine control system at a U.S.
power plant
– A third party technician had unknowingly used an infected USB drive
on the network
– Plant was down for three weeks
35. Insulin Pump Hacking
• “J&J Warns Insulin Pump Vulnerable to Cyber Hacking”
– October 4, 2016, Wall Street Journal
• OneTouch Ping uses unencrypted radio signal
• Hacker in close proximity could use equipment to detect
signal and program the device
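The weakness described above is an unauthenticated radio command channel. As a hedged sketch (the message format, key handling and pairing scheme here are invented for illustration and are not the real device protocol), a shared-key HMAC shows how a pump could reject forged dose commands:

```python
import hmac
import hashlib

# Hypothetical pairing key shared by the remote and the pump at setup time.
SHARED_KEY = b"example-device-pairing-key"

def sign(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    # Tag each command with an HMAC-SHA256 over the message bytes.
    return hmac.new(key, command, hashlib.sha256).digest()

def pump_accepts(command: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sign(command, key), tag)

legit_cmd = b"DOSE:0.5"
tag = sign(legit_cmd)
assert pump_accepts(legit_cmd, tag)        # genuine remote: accepted

forged_cmd = b"DOSE:25.0"                  # attacker in radio range, no key
assert not pump_accepts(forged_cmd, tag)   # wrong tag: rejected
print("forged command rejected")
```

A real design would also need replay protection (e.g. a message counter or nonce inside the signed payload), since an attacker could otherwise record a validly tagged command and re-send it later.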
36. Driverless Car Technology
• Apps to unlock doors, start cars
– “Now is the transitional period, and it’s kind of ugly. They’re old-school industries. They were mechanic or electronic kinds of systems, and now they’re software-based companies—and they haven’t realized they’re software-based companies, and that’s sort of the problem.”
• Craig Smith, founder of Open Garages
http://www.vocativ.com/332734/driverless-car-hack/ (June 29, 2016)
• DECISION ALGORITHMS TO AVOID CRASHES
– SACRIFICE DRIVER TO SAVE GROUPS OF PEOPLE
40. Superstorm Sandy
• Worth thinking about
– Some flood zone maps haven’t been updated in over 30 years
• NYC FEMA map last updated 1983
– Unintended consequences of coastal replenishment after other
storms? (emerging risk)
– “Only” a Cat 1 storm and not a classic “named storm”
41. Conclusion
• “The task is not to get it right but to get it less wrong, not to
disprove existing understandings but to recognize their
context-dependence, not to discover what is, but to construct
from conflicting understandings previously unconceived
alternative understandings.” Grobstein, 2010
43. Suggested Readings
• Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. New York: Harper Collins.
• Bazerman, M.H. & Watkins, M.D. (2004). Predictable Surprises: The disasters you should have seen coming and how to
prevent them. Boston: Harvard Business School Press.
• Friedman, T.L. (2005). The world is flat: A brief history of the twenty-first century (1st ed.). New York, NY: Farrar, Straus
and Giroux.
• Haeckel, S.H. (2004). Peripheral vision: Sensing and acting on weak signals- Making meaning out of apparent noise. Long
Range Planning, 37(2), 181-189.
• Hubbard, D.W. (2010). How to measure anything: Finding the value of intangibles. Hoboken, NJ: John Wiley & Sons.
• Taleb, N.N. (2007). The black swan. New York, NY: Random House.