This document provides an overview of safety engineering concepts and processes. It discusses safety-critical systems and the importance of considering software safety. Safety is defined as a system's ability to operate without danger of injury or damage. Key concepts covered include safety requirements, hazard identification and analysis, risk assessment and reduction strategies, and safety engineering processes. Safety-critical systems must be designed and developed following strict processes to ensure all hazards are identified and mitigated.
This document discusses safety engineering for systems that contain software. It covers topics like safety-critical systems, safety requirements, and safety engineering processes. Safety is defined as a system's ability to operate, whether normally or abnormally, without causing harm. For safety-critical systems like aircraft or medical devices, software is often used for control and monitoring, so software safety is important. Hazard identification, risk assessment, and specifying safety requirements to mitigate risks are key parts of the safety engineering process. The goal is to design systems where failures cannot cause injury, death, or environmental damage.
The document discusses dependability in systems. It covers topics like dependability properties, sociotechnical systems, redundancy and diversity, and dependable processes. Dependability reflects how trustworthy a system is and includes attributes like reliability, availability, and security. Dependability is important because system failures can have widespread impacts. Both hardware and software failures and human errors can cause systems to fail. Techniques like redundancy, diversity, and formal methods can help improve dependability. Regulation is also discussed as many critical systems require approval from regulators.
This document summarizes Chapter 12 of a textbook on dependability and security specification. It discusses risk-driven specification, including identifying risks, analyzing risks, and defining requirements to reduce risks. It also covers specifying safety requirements by identifying hazards, assessing hazards, and analyzing hazards to discover root causes. The goal is to specify requirements that ensure systems function dependably and securely without failures causing harm.
This document summarizes key concepts from Chapter 15 on resilience engineering. It discusses resilience as the ability of systems to maintain critical services during disruptions like failures or cyberattacks. Resilience involves recognizing issues, resisting failures when possible, and recovering quickly through activities like redundancy. The document also covers sociotechnical resilience, where human and organizational factors are considered, and characteristics of resilient organizations like responsiveness, monitoring, anticipation, and learning.
This document provides an overview of key topics from Chapter 11 on security and dependability, including:
- The principal dependability properties of availability, reliability, safety, and security.
- Dependability covers attributes like maintainability, repairability, survivability, and error tolerance.
- Dependability is important because system failures can have widespread effects and undependable systems may be rejected.
- Dependability is achieved through techniques like fault avoidance, detection and removal, and building in fault tolerance.
This document discusses the concept of dependability in computer systems. It defines dependability as the extent to which a system operates as expected without failure. Dependability is determined by attributes like availability, reliability, safety, and security. The document outlines these principal properties and how they are related. It also discusses how dependability is perceived subjectively and how availability and reliability can be quantified.
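The quantification of availability and reliability mentioned above can be sketched with two commonly used metrics. This is a hedged illustration using standard conventions (MTBF/MTTR-based availability and probability of failure on demand), not formulas taken from the document itself:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the system is operational,
    computed from mean time between failures (MTBF) and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def probability_of_failure_on_demand(failures: int, demands: int) -> float:
    """POFOD: a common reliability metric for systems that act on demand,
    such as protection systems."""
    return failures / demands

# Example: a system that runs 1000 hours between failures and takes
# 2 hours to repair is available about 99.8% of the time.
print(round(availability(1000, 2), 4))            # 0.998
print(probability_of_failure_on_demand(2, 1000))  # 0.002
```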
The chapter discusses software evolution, including that software change is inevitable due to new requirements, business changes, and errors. It describes how organizations must manage change to existing software systems, which represent huge investments. The majority of large software budgets are spent evolving, rather than developing new, systems. The chapter outlines the software evolution process and different approaches to evolving systems, including addressing urgent changes. It also discusses challenges with legacy systems and their management.
This document provides an overview of topics in chapter 13 on security engineering. It discusses security and dependability, security dimensions of confidentiality, integrity and availability. It also outlines different security levels including infrastructure, application and operational security. Key aspects of security engineering are discussed such as secure system design, security testing and assurance. Security terminology and examples are provided. The relationship between security and dependability factors like reliability, availability, safety and resilience is examined. The document also covers security in organizations and the role of security policies.
The document discusses critical systems where failures can have severe consequences. It defines four dimensions of dependability - availability, reliability, safety, and security. Development methods for critical systems aim to avoid mistakes, detect and remove errors, and limit damage from failures. The dependability of a system reflects how much users trust that it will operate as expected without failures.
This document discusses systems of systems and complexity. It begins by defining systems of systems and providing examples. Key characteristics of systems of systems include operational and managerial independence of elements, and evolutionary development. The document then covers sources of complexity, including technical, managerial and governance complexity. It discusses how reductionism has traditionally been used to manage complexity in engineering but has limitations for large systems of systems.
The document discusses techniques for achieving dependable software systems. It covers redundancy and diversity approaches including N-version programming where multiple versions of software are developed independently. Dependable system architectures like protection systems and self-monitoring architectures that use redundancy are described. The document emphasizes that a well-defined development process is important for minimizing faults and notes validation activities should include requirements reviews, testing, and change management.
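The N-version programming approach described above can be sketched as a majority voter over independently developed implementations. This is a minimal illustration; the voter and the three toy "versions" are hypothetical, not code from the document:

```python
from collections import Counter

def majority_vote(results):
    """Return the output that a majority of versions agree on;
    raise if no majority exists."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("No majority agreement among versions")
    return value

# Three independently developed "versions" of the same computation.
def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + 1   # a faulty version

outputs = [v(3) for v in (version_a, version_b, version_c)]
print(majority_vote(outputs))  # 9 -- the faulty version is outvoted
```

The key assumption behind N-version programming is that independently developed versions fail independently, so a single faulty version is outvoted by the others.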
This document discusses requirements specification for critical systems. It covers dependability requirements, risk-driven specification, safety specification, security specification, system reliability specification, and non-functional reliability requirements. For risk-driven specification, it describes the stages of risk identification, analysis and classification, decomposition, and risk reduction assessment. It provides examples of applying this process to an insulin pump. For safety specification, it discusses safety requirements, the safety life cycle, and the IEC 61508 standard. For security specification, it outlines a similar process to safety with stages of asset identification, threat analysis, and security requirements specification. It also discusses different types of security requirements.
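The risk analysis and classification stage is often presented as a likelihood/severity matrix. The following is a hedged sketch; the category names, scores, and thresholds are illustrative assumptions, not values from the IEC 61508 standard or the insulin pump example:

```python
LIKELIHOOD = {"improbable": 1, "probable": 2, "frequent": 3}
SEVERITY = {"minor": 1, "serious": 2, "catastrophic": 3}

def classify_risk(likelihood: str, severity: str) -> str:
    """Classify a risk as acceptable, ALARP (as low as reasonably
    practicable), or intolerable based on a likelihood x severity score."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score <= 2:
        return "acceptable"
    if score <= 4:
        return "ALARP"
    return "intolerable"

# Hypothetical risks from an insulin-pump-style analysis:
print(classify_risk("improbable", "serious"))     # acceptable
print(classify_risk("probable", "catastrophic"))  # intolerable
```

Intolerable risks must be eliminated or reduced by design; ALARP risks are reduced where the cost of reduction is proportionate.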
System dependability is a composite system property that reflects the degree of trust users have in a system. It is determined by availability, reliability, safety, and security. Dependability is subjective as it depends on user expectations - a system deemed dependable by one user may be seen as unreliable by another if it does not meet their expectations. Formal specifications of dependability do not always capture real user experiences.
Static analysis, reliability testing, and security testing are techniques for validating critical systems. Additional validation processes are required for critical systems due to the high costs and consequences of failure. Validation costs for critical systems are significantly higher than for non-critical systems, typically taking up more than 50% of total development costs. The outcome of the validation process is evidence that demonstrates the system's level of dependability.
The document discusses specifications for dependability and security. It covers topics like risk-driven specification, safety specification, and security specification. It emphasizes that critical systems specification should be risk-driven as risks pose a threat to the system. The risk-driven approach aims to understand risks faced by the system and define requirements to reduce these risks through phased risk analysis including preliminary, life cycle, and operational risk analysis. Safety specification identifies protection requirements to ensure system failures do not cause harm, with risk identification, analysis, and reduction mirroring hazard identification, assessment, and analysis. An example of a safety-critical insulin pump system is provided to illustrate dependability requirements and risk analysis.
The document provides an overview of key security engineering activities that should be integrated into the software development lifecycle (SDLC). It discusses securing each phase of development through threat modeling, secure coding practices like code reviews, and security testing. The goal is to build security into applications from the start to help prevent vulnerabilities and deliver more robust products.
Covers security and privacy issues for software product developers including attacks and defenses, encryption, authentication, authorisation and data protection
This document discusses sociotechnical systems and systems engineering. It defines sociotechnical systems as systems that include both technical systems (e.g. hardware and software) as well as operational processes and people. Sociotechnical systems have emergent properties that depend on the interactions between system components. They are also non-deterministic since human behavior introduces unpredictability. Developing sociotechnical systems requires an interdisciplinary approach involving areas like software engineering, organizational design, and human factors.
The document discusses chapter 7 of a software engineering textbook which covers design and implementation. It begins by outlining the topics to be covered, including object-oriented design using UML, design patterns, and implementation issues. It then discusses the software design and implementation process, considerations around building versus buying systems, and approaches to object-oriented design using UML.
This document discusses safety standards for critical systems and proposes a new concept called Assured Reliability and Resilience Level (ARRL). It notes that while safety standards aim to reduce risk, their requirements differ across domains. The document argues that Safety Integrity Levels (SIL) alone are not sufficient and that Quality of Service is a more holistic criterion. It also notes standards provide little guidance on composing systems from components. The ARRL concept aims to address these issues and complement SIL by considering factors like component trustworthiness and fault behavior. The document suggests ARRL could help foster cross-domain safety engineering.
This document provides an overview of key topics in distributed software engineering. It discusses distributed systems issues, architectural patterns for distributed systems like client-server and peer-to-peer, and software as a service. Some important considerations for designing distributed systems include transparency, openness, scalability, security, and failure management. Middleware helps manage communication and interoperability between diverse components in a distributed system.
This document summarizes key topics from a lecture on security engineering:
1. It discusses security engineering and management, risk assessment, and designing systems for security. Application security focuses on design while infrastructure security is a management problem.
2. It outlines guidelines for secure system design including basing decisions on security policies, avoiding single points of failure, balancing security and usability, validating all inputs, and designing for deployment and recoverability.
3. It also covers risk management, assessing threats, and designing architectures with layered protection and distributed assets to minimize the effects of attacks.
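The "validate all inputs" guideline from the list above can be illustrated with a small allow-list validator: inputs are accepted only if they match an explicit pattern, rather than being sanitized after the fact. This is a hedged sketch; the field name and validation rule are assumptions, not from the lecture:

```python
import re

# Allow-list: 3-32 characters, letters, digits, and underscore only.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    """Accept only inputs matching the explicit allow-list pattern;
    reject everything else rather than trying to clean it up."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"invalid username: {raw!r}")
    return raw

print(validate_username("alice_01"))         # alice_01
try:
    validate_username("alice'; DROP TABLE")  # rejected, not sanitized
except ValueError as e:
    print(e)
```

Allow-listing is generally preferred over deny-listing because it fails closed: anything not explicitly permitted is rejected.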
This document discusses using dynamic adaptive systems in safety-critical domains. It begins by introducing safety-critical cyber-physical systems and how dynamic adaptivity could provide benefits like increased fault tolerance and deployability. However, adaptivity also introduces challenges for testing and certification. The document then discusses using the Architecture Analysis and Design Language (AADL) to model and analyze dynamic adaptive safety-critical systems. It considers issues like what constitutes sufficient pre-deployment testing of such systems and how failures from untested configurations can be mitigated. Overall, the document explores how to incorporate safety-critical concerns into the design of dynamic adaptive systems.
ARRL: A Criterion for Composable Safety and Systems Engineering, by Vincenzo De Florio
While safety engineering standards define rigorous and controllable processes for system development, the standards differ significantly across domains. This paper focuses on the aviation, automotive, and railway standards, all of which serve the transportation market. The differences have many causes, ranging from historical legacy, heuristics and established practices, and legal frameworks to the psychological perception of safety risks. We argue that Safety Integrity Levels alone are not sufficient as a top-level requirement for developing a safety-critical system, and that Quality of Service is a more general criterion that better reflects the trustworthiness perceived by users. In addition, safety engineering standards provide very little guidance on composing safe systems from components, even though composition is established engineering practice. This paper develops a novel concept, the Assured Reliability and Resilience Level, as a criterion that reflects industrial practice and complements the Safety Integrity Level concept.
Program robustness matters more than ever because of the role software plays in everyday life. Many papers have defined it, measured it, and put it into context. In this paper, we explore the different definitions of program robustness and the techniques used to achieve or measure it. From the large body of work on robustness, we selected papers that explicitly discuss program or software robustness. These papers characterize robustness as the absence of ungraceful failures. A variety of techniques exist for creating or measuring robust programs, but there remains substantial room for further research in this area.
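One robustness technique, graceful failure, can be sketched as wrapping a fragile operation so that invalid input yields a defined fallback rather than a crash. This is a minimal illustration; the function name and fallback behavior are hypothetical, not drawn from the surveyed papers:

```python
def parse_ratio(text: str, default: float = 0.0) -> float:
    """Parse a 'numerator/denominator' string robustly: any malformed
    input or division by zero returns a defined default instead of
    propagating an exception (failing gracefully, not ungracefully)."""
    try:
        numerator, denominator = text.split("/")
        return float(numerator) / float(denominator)
    except (ValueError, ZeroDivisionError):
        return default

print(parse_ratio("3/4"))      # 0.75
print(parse_ratio("3/0"))      # 0.0  (division by zero handled)
print(parse_ratio("garbage"))  # 0.0  (malformed input handled)
```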
This document discusses approaches for specifying dependability and security requirements, including risk-driven, safety, and reliability specifications. It covers topics such as identifying risks and hazards, assessing their likelihood and impacts, analyzing root causes using techniques like fault trees, and defining requirements to reduce risks and prevent accidents. Safety requirements for an insulin pump example are provided. The key points are that risk analysis identifies risks that could lead to accidents, hazards are decomposed to discover their causes, and safety requirements ensure hazards do not occur or are limited if they do.
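The fault-tree analysis mentioned above can be sketched as AND/OR gates combining basic events into a top-level hazard. This is a hedged illustration; the insulin-pump event names and tree structure are invented for the example, not taken from the document:

```python
def or_gate(*events: bool) -> bool:
    """Hazard occurs if any input event occurs."""
    return any(events)

def and_gate(*events: bool) -> bool:
    """Hazard occurs only if all input events occur."""
    return all(events)

# Hypothetical fault tree for the hazard "insulin overdose":
# overdose occurs if the dose computation is wrong, OR if the
# sensor fails AND the monitoring that should detect it also fails.
def insulin_overdose(incorrect_dose_computation: bool,
                     sensor_failure: bool,
                     monitoring_failure: bool) -> bool:
    return or_gate(
        incorrect_dose_computation,
        and_gate(sensor_failure, monitoring_failure),
    )

print(insulin_overdose(False, True, False))  # False: monitoring catches the fault
print(insulin_overdose(False, True, True))   # True: both protections fail
```

Walking the tree from the top hazard down to the basic events is what exposes root causes, and each AND gate marks a place where a single protection requirement can break the chain.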
This document discusses resilience engineering and designing resilient systems. It covers topics such as resilience, cybersecurity threats and controls, resilience planning, sociotechnical resilience, and resilient systems design. The key ideas are that resilience involves maintaining critical system services during disruptions, using defensive layers and redundancy to limit failures, and designing systems and processes to recognize, resist, recover from, and reinstate after problems.
The document discusses security engineering and covers topics such as security requirements, secure system design, security testing and assurance. It defines security engineering as tools, techniques and methods to develop systems that can resist malicious attacks. It also discusses security dimensions of confidentiality, integrity and availability. Finally, it provides an overview of the preliminary risk assessment process for defining security requirements.
Safety, Risk, Hazard and Engineer’s Role Towards Safety, by Ali Sufyan
1. Safety engineering aims to identify hazards and ensure systems can operate safely without risk of injury, death, or environmental damage.
2. Engineers must consider safety in various fields such as aerospace, automotive, chemical, nuclear and ensure proper safety measures are implemented to prevent accidents from failures, errors or external threats.
3. Safety is critical in systems where failure could be catastrophic, like aircraft control systems, and engineers are responsible for thorough hazard analysis and mitigation of risks.
This document provides an overview of topics covered in Chapter 14 on Security Engineering. It discusses security engineering and how it is concerned with applying security to applications, as well as security risk assessment and designing systems based on risk assessments. The document outlines the importance of security management, as well as risk management approaches like preliminary risk assessment, life cycle risk assessment, and operational risk assessment. It also discusses designing systems for security through approaches like incorporating security into architectural design, following best practices, and minimizing vulnerabilities introduced during deployment. Finally, the document discusses system survivability and delivering essential services even when under attack.
The document discusses how to specify requirements for critical systems based on risk analysis. It explains how to identify risks, analyze and classify them, then derive safety, security, and reliability requirements to reduce risks. For reliability, it describes metrics like probability of failure on demand and mean time to failure that can be used to specify quantitative reliability levels. The goal is to develop requirements that eliminate intolerable risks and minimize other risks given cost and schedule constraints.
The document discusses critical systems specification, including risk-driven specification, safety specification, security specification, and software reliability specification. It covers topics like risk identification and analysis, safety requirements generation from risk analysis, derivation of security requirements, and metrics used for reliability specification like probability of failure on demand and rate of fault occurrence. The slides provide examples of how these techniques are applied to a hypothetical insulin pump system.
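As a rough illustration of how the two reliability metrics mentioned above are computed, here is a minimal sketch; the failure counts and run times are invented for illustration, not taken from the slides:

```python
# Probability of failure on demand (POFOD): the fraction of service
# requests (demands) that result in failure.
# Mean time to failure (MTTF): the average observed interval between failures.

def pofod(failures: int, demands: int) -> float:
    """POFOD = failed demands / total demands."""
    return failures / demands

def mttf(uptimes_hours: list[float]) -> float:
    """MTTF = mean of the observed time-to-failure intervals."""
    return sum(uptimes_hours) / len(uptimes_hours)

# Hypothetical observations: 2 failures in 10,000 demands, and
# three failure-free runs of 400, 500 and 600 hours.
print(pofod(2, 10_000))             # 0.0002
print(mttf([400.0, 500.0, 600.0]))  # 500.0
```

A POFOD of 0.0002 means roughly one failure per 5,000 demands, which is the kind of quantitative target such a specification would state.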
eHealth - Medical Systems Interoperability & Mobile Health, by ulmedical
The Medical Device industry is rapidly adopting technologies that enable communication and connectivity of health products and systems to improve both speed and quality of care as well as patient safety. The users (i.e. hospitals and others) are demanding an approach that will support interoperability among multiple independently sourced medical devices. Industry will require standardization to support such interoperability. Government and regulators, on behalf of the patients and in compliance with their mission to protect public health, as well as users and manufacturers, require that such interoperability is safe. This complimentary webinar will introduce the eHealth sector and applications, outline the challenges and risks inherent in connecting heterogeneous equipment into medical device systems, and provide insights into how manufacturers can demonstrate compliance with the rapidly changing regulatory landscape for interoperable medical devices.
This webinar was presented by UL eHealth experts on October 30, 2013.
Safety is an important consideration in process design. Safety integrity level (or SIL) is often used to describe process safety requirements. However, there are often misconceptions or misunderstandings surrounding SIL. While the general subject, functional safety and SIL, can be highly technical, the general ideas can be distilled down to a few readily understandable concepts. In this paper, we will discuss what SIL is, why it is important, what certification means, and the implications and benefits of that certification to the end user.
The document describes a seminar on software for embedded safety critical systems held in Toulouse, France in January 2014. The seminar included 10 sessions covering various topics related to software in safety critical domains such as aeronautics, automotive, space, etc. The sessions addressed issues like software assurance levels, standards, development processes, verification, and new technologies. Experts from companies like Airbus, Continental, and ONERA presented on topics specific to their domains. The seminar aimed to discuss challenges in developing software for critical systems and recognize best practices defined in international standards.
This document provides an overview and definitions related to Safety Instrumented Systems (SIS). It discusses the need for SIS to protect personnel, equipment, and the environment from hazardous events in industries like chemical and oil & gas. SIS are designed to reduce the likelihood or impact of emergencies. The document defines common SIS terms and describes the basic components and purpose of SIS, which include sensors to detect process parameters, a logic solver to determine necessary actions, and final control elements like valves to isolate the process. It also discusses the concept of layers of protection to prevent and mitigate hazardous events, with SIS comprising the final active prevention layer.
Personality and Individual Differences: Determinants of Personality - Major P..., by RAJESHSKR
The document discusses an engineering module on safety and workplace rights. It covers topics like risk assessment, reducing risk, acceptable risk, voluntary risk, job-related risks, and analytical methods for testing safety like scenario analysis, failure mode and effects analysis, fault tree analysis, and event tree analysis. The document provides examples and explanations of these various safety concepts and methods. It emphasizes that safety should be an integral part of engineering design and discusses an engineer's responsibility to ensure safety.
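To make the fault tree analysis idea concrete, here is a minimal sketch of how basic-event probabilities combine into a top-event probability; the event names and probabilities are hypothetical, and independence of basic events is assumed:

```python
from math import prod

def and_gate(probs):
    """All inputs must fail: multiply independent failure probabilities."""
    return prod(probs)

def or_gate(probs):
    """Any input failing causes the output: 1 - P(no input fails)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event: "alarm fails to sound" =
#   (sensor fails OR wiring fails) AND backup battery flat
p_sensor, p_wiring, p_battery = 0.01, 0.005, 0.02
p_top = and_gate([or_gate([p_sensor, p_wiring]), p_battery])
print(round(p_top, 6))  # 0.000299
```

Event tree analysis works in the opposite direction, tracing forward from an initiating event through the success or failure of each protective barrier.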
Safety is defined as a system's ability to operate without causing human injury or environmental damage. Safety and reliability are related but distinct, as a reliable system can still be unsafe if requirements are incorrect. There are three approaches to safety critical systems development: hazard avoidance through design, hazard detection and removal before accidents occur, and damage limitation to minimize harm from any accidents.
Drager Fixed Gas Detector - Functional Safety & Gas Detection Systems - SIL B..., by Thorne & Derrick UK
A process is assumed to be safe if the actual risk is decreased below the level of acceptable risk through risk-reducing measures. Safety instrumented systems use functional safety to automatically activate safety measures and avoid dangerous situations. The required reliability of protection systems depends on the safety integrity level (SIL), which is determined through risk analysis of potential consequences, exposure to hazards, and the possibility of avoiding hazardous events. Gas detection systems must activate safety countermeasures if gas concentrations exceed defined levels. Their safety function is to trigger gas alarms, and upon failure they must go to a safe state of equivalent alarm activation. The probability of failure for safety functions is evaluated to ensure protection systems meet the necessary SIL level through factors like proof testing and the distinction between detectable and undetectable failures.
Depending on the nature of the task, the level of safety management training required will vary from general safety familiarization to expert level for safety specialists, for example:
a) Corporate safety training for all staff,
b) Training aimed at management’s safety responsibilities,
c) Training for operational personnel (such as pilots, maintenance engineers, dispatchers / FOO’s and personnel with apron or ramp duties), and
d) Training for aviation safety specialists (such as the Safety Management System and Flight Data Analysts).
The scope of SMS training must be appropriate to each individual’s roles and responsibilities within the operation. Training should follow a building-block approach. As part of the ICAO requirements, an operator must provide training to its operational personnel (including cabin crew), managers and supervisors, senior managers, and the accountable executive for the SMS.
Training should address the specific role that cabin crew members play in the operation. This includes, but is not limited to training with regards to:
a) Unit 1 SMS fundamentals and overview of the operator’s SMS;
b) Unit 2 Safety policy;
c) Unit 3 Hazard identification and reporting; and
d) Unit 4 Safety Communication.
e) Unit 5 Review of Company Safety Management
f) Unit 6 Review of Safety Reporting
The base content comes from many sources, all aligned to the ICAO syllabus requirements, and was created for an international operational airline.
If you are a startup airline, or looking to align courses with your specific operational standards, please take a look and get in touch: leave a message at pghclearningsolutions@gmail.com and I will contact you so we can discuss your requirements, send you examples and, if required, provide my editable masters, which you can customize to meet your own specific operational training requirements.
An incident response plan (IRP) is a set of written instructions for.pdf, by aradhana9856
An incident response plan (IRP) is a set of written instructions for detecting, responding to and
limiting the effects of an information security event. Incident response plans provide instructions
for responding to a number of potential scenarios, including data breaches, denial of
service/distributed denial of service attacks, firewall breaches, virus or malware outbreaks or
insider threats. Without an incident response plan in place, organizations may either not detect
the attack in the first place, or not follow proper protocol to contain the threat and recover from it
when a breach is detected.
According to the SANS Institute, there are six key phases of an incident response plan:
1. Preparation: Preparing users and IT staff to handle potential incidents should they arise
2. Identification: Determining whether an event is indeed a security incident
3. Containment: Limiting the damage of the incident and isolating affected systems to prevent
further damage
4. Eradication: Finding the root cause of the incident, removing affected systems from the
production environment
5. Recovery: Permitting affected systems back into the production environment, ensuring no
threat remains
6. Lessons learned: Completing incident documentation, performing analysis to ultimately learn
from incident and potentially improve future response efforts
It is important that an incident response plan is formulated, supported throughout the
organization, and regularly tested. A good incident response plan can minimize not only the
effects of the actual security breach, but also the negative publicity.
From a security team perspective, it does not matter whether a breach occurs (as such
occurrences are an eventual part of doing business using an untrusted carrier network, such as the
Internet), but rather, when a breach occurs. Do not think of a system as weak and vulnerable; it is
important to realize that given enough time and resources, someone can break into even the most
security-hardened system or network. You do not need to look any further than the Security
Focus website at http://www.securityfocus.com/ for updated and detailed information concerning
recent security breaches and vulnerabilities, from the frequent defacement of corporate
webpages, to the 2002 attacks on the root DNS nameservers[1].
The positive aspect of realizing the inevitability of a system breach is that it allows the security
team to develop a course of action that minimizes any potential damage. Combining a course of
action with expertise allows the team to respond to adverse conditions in a formal and responsive
manner.
The incident response plan itself can be separated into four phases:
Immediate action to stop or minimize the incident
Investigation of the incident
Restoration of affected resources
Reporting the incident to the proper channels
This document discusses the key aspects of system dependability, including availability, reliability, safety, and security. It notes that dependability reflects the degree to which users trust a system and defines it as covering attributes like availability, reliability, and security. It also discusses factors that influence perceptions of reliability and availability, such as usage patterns, outage length and number of users affected.
The document discusses safety systems used in industrial plants, including emergency shutdown systems (ESD), process shutdown systems (PSD), and fire and gas control systems (F&G). It defines these terms and describes their objectives, typical components, and functions. Safety is measured by factors like average probability of failure on demand (PFDavg) and risk reduction factor (RRF). The document also covers related topics like hazard analysis, risk, reliability, availability, and definitions of key safety terminology.
accident prevention and theories of accidents, by atheeshsep24
1. Several theories of accident causation are described, including the Domino Theory, Human Factors Theory, Accident/Incident Theory, Epidemiological Theory, and Systems Theory.
2. The Domino Theory proposes that accidents are caused by a series of preceding factors, and removing the central unsafe act or hazardous condition can prevent accidents.
3. The Human Factors Theory attributes accidents to a chain of events ultimately resulting from human error due to overload, inappropriate responses, or inappropriate activities.
The document discusses configuration management (CM) which involves managing changing software systems through policies, processes and tools. Key CM activities include version management to track changes made by different developers, system building to create executable systems, change management to track requests for changes, and release management. CM is important for team projects and agile development where components change frequently. Version control systems are used to identify, store and control access to different component versions.
The document discusses quality management in software development. It covers topics such as software quality, standards, reviews, quality management in agile development, and software measurement. Specifically, it describes that quality management is concerned with ensuring a required level of quality is achieved. It establishes organizational processes and standards to lead to high quality software. Quality management also involves applying specific quality processes and checking that planned processes are followed.
Project planning involves breaking down work into tasks assigned to team members, anticipating problems, and creating a project plan. The plan is used to communicate work and assess progress. Planning occurs at proposal, startup, and periodically throughout the project. At startup, more details are known and a plan is created for resource allocation. During development, the plan is regularly revised based on new information and experience. Agile planning uses iterative increments and flexible plans that can accommodate changing priorities and requirements.
The document discusses several aspects of software project management including risk management, managing people, and teamwork. It describes the risk management process of identifying, analyzing, planning for, and monitoring risks. Examples of different types of project, product, and business risks are provided. The document also discusses the importance of people management in projects and different personality types and motivations that managers should consider. Motivation factors like an individual's needs hierarchy and creating a balanced environment are addressed.
The document summarizes topics related to real-time software engineering including embedded system design, architectural patterns for real-time software, timing analysis, and real-time operating systems. It discusses key characteristics of embedded systems like responsiveness, the need to respond to stimuli within specified time constraints, and how real-time systems are often modeled as cooperating processes controlled by a real-time executive. The document also outlines common architectural patterns for real-time systems including observe and react, environmental control, and process pipeline.
This document provides an overview of systems of systems (SoS). It defines a SoS as a system containing two or more independently managed elements. Key characteristics of SoS include operational and managerial independence of elements. The document discusses challenges in engineering SoS due to lack of single control. It also describes common SoS development processes like conceptual design, system selection, and architectural design. Testing SoS is difficult as requirements may be undefined and constituent systems can change. The document advocates node and web architectures with collaboration incentives for SoS.
This document discusses systems engineering and the process of developing sociotechnical systems. It covers key topics like conceptual design, procurement, and the stages of systems engineering. Sociotechnical systems are complex and have emergent properties due to interactions between technical, human, and organizational factors. Success is difficult to define as stakeholders may have different views. Conceptual design develops an initial vision of the system purpose before detailed requirements. Procurement decisions involve choosing between custom development or commercial off-the-shelf systems.
This document discusses service-oriented software engineering and related topics. It covers service-oriented architectures, RESTful services, service engineering, and service composition. Key points include:
- Service-oriented architectures allow distributed systems to be developed where components are independent services. Standard protocols support service communication and information exchange.
- RESTful services provide a simpler alternative to SOAP/WSDL for implementing web services, using resources and standard HTTP methods like GET and POST.
- Service engineering is the process of developing reusable services, including identifying service candidates, designing service interfaces, and implementing and deploying services.
- Identifying appropriate service candidates involves understanding business processes and entities that could be supported by reusable services.
This document discusses various topics related to distributed software engineering including distributed systems, client-server computing, architectural patterns for distributed systems, and software as a service. It covers key characteristics of distributed systems like resource sharing, openness, concurrency, scalability, and fault tolerance. Some important design issues for distributed systems are also outlined such as transparency, openness, scalability, security, quality of service, and failure management. Common models of interaction in distributed systems including remote procedure calls and message passing are described. The roles of middleware and common architectural patterns like client-server, multi-tier, and distributed components are summarized.
Component-based software engineering (CBSE) is an approach that relies on reusable software components. It emerged due to limitations of object-oriented development in supporting effective reuse. CBSE uses independent and interchangeable components that communicate through well-defined interfaces. Middleware provides support for component interoperability. CBSE processes involve both developing components for reuse and developing systems using existing reusable components.
The document discusses various topics related to software reuse, including application frameworks, software product lines, and application system reuse. It describes application frameworks as reusable architectures made up of abstract and concrete classes that are extended to create applications. Software product lines are families of applications with a common architecture that can be configured for different contexts. Application system reuse involves adapting generic application systems through configuration for specific customers. The document outlines several benefits and challenges to software reuse approaches.
This document provides an overview of reliability engineering topics including software reliability, fault tolerance, and reliability requirements. It discusses key concepts such as availability, reliability, faults, errors and failures. It also describes different fault-tolerant system architectures and reliability metrics including probability of failure on demand, rate of occurrence of failures, and availability. Functional reliability requirements and examples are also presented relating to checking requirements, recovery requirements, redundancy requirements and development process requirements.
This chapter discusses dependable systems and covers topics like dependability properties, sociotechnical systems, redundancy and diversity, dependable processes, and formal methods for dependability. It defines dependability as reflecting a user's degree of trust in a system operating as expected without failure. Dependability encompasses attributes like reliability, availability, and security. Formal methods that use mathematical modeling can help reduce errors and improve dependability. Developing dependable systems also requires consideration of the sociotechnical context and dependable engineering processes.
This document discusses software evolution and maintenance. It covers topics like the inevitability of software change, legacy systems, and evolution processes. Software evolution involves implementing changes to existing systems to address new requirements, errors, or other issues. Most software budgets are spent evolving existing systems rather than developing new ones. Legacy systems rely on outdated technologies and can be difficult and expensive to change or replace. Effective evolution processes are needed to manage software changes over a system's lifetime.
The document discusses various topics related to software testing, including different types of testing (unit testing, component testing, system testing), test-driven development, and goals and processes for validation and defect testing. It provides examples and guidelines for testing individual components, interfaces, and integrated systems to discover errors and ensure software meets requirements.
This document discusses topics related to software design and implementation, including object-oriented design using UML, design patterns, and implementation issues. It provides details on the design and implementation process for a weather station system, including identifying system objects and classes, developing design models like sequence and state diagrams, and specifying interfaces. Design patterns are also introduced as a way to reuse solutions to common problems.
The document discusses architectural design, including:
- Architectural design determines how a software system is organized and structured. It identifies the main components and relationships.
- Architectural views show different perspectives of a system, such as logical, process, development, and physical views. Common patterns like model-view-controller and layered architectures are also covered.
- Architectural decisions impact system characteristics like performance, security, and maintainability. Common application architectures are also discussed.
This document discusses system modeling and different types of models used in system modeling. It covers context models, interaction models, structural models, behavioral models, and model-driven engineering. Some key points include:
- System modeling involves developing abstract models of a system from different perspectives or views. Models are often developed using the Unified Modeling Language (UML).
- Common model types include use case diagrams, sequence diagrams, class diagrams, state diagrams, and activity diagrams.
- Structural models show the organization and structure of a system. Behavioral models show the system's dynamic behavior and responses to events.
- Model-driven engineering is an approach where models rather than code are the primary outputs and code is generated from these models.
The document discusses requirements engineering processes. It covers topics such as functional and non-functional requirements, requirements elicitation, specification, validation and change. Requirements elicitation involves discovering requirements through interviews, ethnography and scenarios/stories with stakeholders. Requirements must be specified precisely and consistently. Non-functional requirements constrain the system and can be more critical than functional requirements. An iterative spiral process is used involving elicitation, analysis, validation and specification.
The document discusses agile software development methods. It covers topics like agile methods, techniques, and project management. Rapid and iterative development is emphasized to quickly adapt to changing requirements. Methods like Extreme Programming (XP) use practices like user stories, test-driven development, pair programming, and continuous refactoring to develop working software in short iterations.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions, by Victor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Comparative analysis between traditional aquaponics and reconstructed aquapon..., by bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw..., by IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
3. Safety
Safety is a property of a system that reflects the system’s
ability to operate, normally or abnormally, without danger
of causing human injury or death and without damage to
the system’s environment.
It is important to consider software safety as most
devices whose failure is critical now incorporate
software-based control systems.
3
Chapter 12 Safety Engineering
04/11/2014
4. Software in safety-critical systems
The system may be software-controlled so that the
decisions made by the software and subsequent actions
are safety-critical. Therefore, the software behaviour is
directly related to the overall safety of the system.
Software is extensively used for checking and monitoring
other safety-critical components in a system. For
example, all aircraft engine components are monitored
by software looking for early indications of component
failure. This software is safety-critical because, if it fails,
other components may fail and cause an accident.
5. Safety and reliability
Safety and reliability are related but distinct
In general, reliability and availability are necessary but not
sufficient conditions for system safety
Reliability is concerned with conformance to a given
specification and delivery of service
Safety is concerned with ensuring that the system cannot cause
damage, irrespective of whether or not it conforms to its
specification.
System reliability is essential for safety but is not enough
Reliable systems can be unsafe
6. Unsafe reliable systems
There may be dormant faults in a system that are
undetected for many years and only rarely arise.
Specification errors
If the system specification is incorrect then the system can
behave as specified but still cause an accident.
Hardware failures generating spurious inputs
Hard to anticipate in the specification.
Context-sensitive commands i.e. issuing the right
command at the wrong time
Often the result of operator error.
8. Safety critical systems
Systems where it is essential that system operation is
always safe i.e. the system should never cause damage
to people or the system’s environment
Examples
Control and monitoring systems in aircraft
Process control systems in chemical manufacture
Automobile control systems such as braking and engine
management systems
9. Safety criticality
Primary safety-critical systems
Embedded software systems whose failure can cause the
associated hardware to fail and directly threaten people. Example
is the insulin pump control system.
Secondary safety-critical systems
Systems whose failure results in faults in other (socio-technical)
systems, which can then have safety consequences.
• For example, the Mentcare system is safety-critical as failure may
lead to inappropriate treatment being prescribed.
• Infrastructure control systems are also secondary safety-critical
systems.
10. Hazards
Situations or events that can lead to an accident
Stuck valve in reactor control system
Incorrect computation by software in navigation system
Failure to detect possible allergy in medication prescribing
system
Hazards do not inevitably result in accidents – accident
prevention actions can be taken.
11. Safety achievement
Hazard avoidance
The system is designed so that some classes of hazard simply
cannot arise.
Hazard detection and removal
The system is designed so that hazards are detected and
removed before they result in an accident.
Damage limitation
The system includes protection features that minimise the
damage that may result from an accident.
12. Safety terminology
Accident (or mishap): An unplanned event or sequence of events which results in human death or injury, damage to property, or to the environment. An overdose of insulin is an example of an accident.
Hazard: A condition with the potential for causing or contributing to an accident. A failure of the sensor that measures blood glucose is an example of a hazard.
Damage: A measure of the loss resulting from a mishap. Damage can range from many people being killed as a result of an accident to minor injury or property damage. Damage resulting from an overdose of insulin could be serious injury or the death of the user of the insulin pump.
Hazard severity: An assessment of the worst possible damage that could result from a particular hazard. Hazard severity can range from catastrophic, where many people are killed, to minor, where only minor damage results. When an individual death is a possibility, a reasonable assessment of hazard severity is 'very high'.
Hazard probability: The probability of the events occurring which create a hazard. Probability values tend to be arbitrary but range from 'probable' (say a 1/100 chance of the hazard occurring) to 'implausible' (no conceivable situations are likely in which the hazard could occur). The probability of a sensor failure in the insulin pump that results in an overdose is probably low.
Risk: A measure of the probability that the system will cause an accident. The risk is assessed by considering the hazard probability, the hazard severity, and the probability that the hazard will lead to an accident. The risk of an insulin overdose is probably medium to low.
13. Normal accidents
Accidents in complex systems rarely have a single cause
as these systems are designed to be resilient to a single
point of failure
Designing systems so that a single point of failure does not
cause an accident is a fundamental principle of safe systems
design.
Almost all accidents are a result of combinations of
malfunctions rather than single failures.
It is probably the case that anticipating all problem
combinations, especially in software-controlled systems, is
impossible, so achieving complete safety is impossible.
Accidents are inevitable.
14. Software safety benefits
Although software failures can be safety-critical, the use
of software control systems contributes to increased
system safety
Software monitoring and control allows a wider range of
conditions to be monitored and controlled than is possible using
electro-mechanical safety systems.
Software control allows safety strategies to be adopted that
reduce the amount of time people spend in hazardous
environments.
Software can detect and correct safety-critical operator errors.
16. Safety specification
The goal of safety requirements engineering is to identify
protection requirements that ensure that system failures
do not cause injury or death or environmental damage.
Safety requirements may be ‘shall not’ requirements i.e.
they define situations and events that should never
occur.
Functional safety requirements define:
Checking and recovery features that should be included in a
system
Features that provide protection against system failures and
external attacks
18. Hazard identification
Identify the hazards that may threaten the system.
Hazard identification may be based on different types of
hazard:
Physical hazards
Electrical hazards
Biological hazards
Service failure hazards
Etc.
19. Insulin pump risks
Insulin overdose (service failure).
Insulin underdose (service failure).
Power failure due to exhausted battery (electrical).
Electrical interference with other medical equipment
(electrical).
Poor sensor and actuator contact (physical).
Parts of machine break off in body (physical).
Infection caused by introduction of machine (biological).
Allergic reaction to materials or insulin (biological).
20. Hazard assessment
The process is concerned with understanding the
likelihood that a risk will arise and the potential
consequences if an accident or incident should occur.
Risks may be categorised as:
Intolerable. Must never arise or result in an accident
As low as reasonably practical (ALARP). Must minimise the
possibility of the risk arising, given cost and schedule constraints
Acceptable. The consequences of the risk are acceptable and no
extra costs should be incurred to reduce hazard probability
22. Social acceptability of risk
The acceptability of a risk is determined by human,
social and political considerations.
In most societies, the boundaries between the risk regions
(intolerable, ALARP, acceptable) are pushed upwards with time,
i.e. society becomes less willing to accept risk
For example, the costs of cleaning up pollution may be less than
the costs of preventing it but this may not be socially acceptable.
Risk assessment is subjective
Risks are identified as probable, unlikely, etc. This depends on
who is making the assessment.
23. Hazard assessment
Estimate the risk probability and the risk severity.
It is not normally possible to do this precisely so relative
values are used such as ‘unlikely’, ‘rare’, ‘very high’, etc.
The aim must be to exclude risks that are likely to arise
or that have high severity.
24. Risk classification for the insulin pump
Each identified hazard is listed with its hazard probability, accident severity, estimated risk, and acceptability:
1. Insulin overdose computation: probability Medium, severity High, estimated risk High, Intolerable
2. Insulin underdose computation: probability Medium, severity Low, estimated risk Low, Acceptable
3. Failure of hardware monitoring system: probability Medium, severity Medium, estimated risk Low, ALARP
4. Power failure: probability High, severity Low, estimated risk Low, Acceptable
5. Machine incorrectly fitted: probability High, severity High, estimated risk High, Intolerable
6. Machine breaks in patient: probability Low, severity High, estimated risk Medium, ALARP
7. Machine causes infection: probability Medium, severity Medium, estimated risk Medium, ALARP
8. Electrical interference: probability Low, severity High, estimated risk Medium, ALARP
9. Allergic reaction: probability Low, severity Low, estimated risk Low, Acceptable
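The classification above can be transcribed into a small data table so that triage (deciding which hazards demand design action) can be automated. A minimal sketch: the rows come from the risk classification table, but the function name and triage rule are illustrative, not part of any real pump software.

```python
# Risk classification for the insulin pump, transcribed from the table above.
# Each entry: (hazard, probability, severity, estimated risk, acceptability).
RISK_TABLE = [
    ("Insulin overdose computation",          "Medium", "High",   "High",   "Intolerable"),
    ("Insulin underdose computation",         "Medium", "Low",    "Low",    "Acceptable"),
    ("Failure of hardware monitoring system", "Medium", "Medium", "Low",    "ALARP"),
    ("Power failure",                         "High",   "Low",    "Low",    "Acceptable"),
    ("Machine incorrectly fitted",            "High",   "High",   "High",   "Intolerable"),
    ("Machine breaks in patient",             "Low",    "High",   "Medium", "ALARP"),
    ("Machine causes infection",              "Medium", "Medium", "Medium", "ALARP"),
    ("Electrical interference",               "Low",    "High",   "Medium", "ALARP"),
    ("Allergic reaction",                     "Low",    "Low",    "Low",    "Acceptable"),
]

def hazards_requiring_action() -> list[str]:
    """Intolerable risks must be designed out; ALARP risks must be
    reduced as far as cost and schedule allow. Both need action."""
    return [hazard for (hazard, _p, _s, _r, acceptability) in RISK_TABLE
            if acceptability in ("Intolerable", "ALARP")]

print(hazards_requiring_action())
```

Note that the estimated risk is not a mechanical function of probability and severity (row 3 combines Medium and Medium into Low), which is why the table records the assessor's judgement explicitly rather than computing it.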
25. Hazard analysis
Concerned with discovering the root causes of risks in a
particular system.
Techniques have been mostly derived from safety-critical
systems and can be
Inductive, bottom-up techniques. Start with a proposed system
failure and assess the hazards that could arise from that failure;
Deductive, top-down techniques. Start with a hazard and deduce
what the causes of this could be.
26. Fault-tree analysis
A deductive top-down technique.
Put the risk or hazard at the root of the tree and identify
the system states that could lead to that hazard.
Where appropriate, link these with ‘and’ or ‘or’
conditions.
A goal should be to minimise the number of single
causes of system failure.
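The tree structure described above can be sketched as nested AND/OR gates over basic events; evaluating the tree against the set of events that have occurred shows whether the root hazard fires. This is an illustrative data structure only, with invented event names loosely based on the insulin pump example, not the notation of any particular FTA tool.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """A node in a fault tree: an 'or' gate fires if any child fires,
    an 'and' gate only if all children fire. Leaves are basic-event names."""
    kind: str                                     # "and" or "or"
    children: list = field(default_factory=list)  # Gate or str (basic event)

def fires(node, events: set) -> bool:
    if isinstance(node, str):                     # leaf: basic event
        return node in events
    results = [fires(child, events) for child in node.children]
    return all(results) if node.kind == "and" else any(results)

# Root hazard: incorrect insulin dose delivered.
incorrect_dose = Gate("or", [
    "incorrect blood sugar measurement",
    "delivery system failure",
    Gate("and", ["timer failure", "no independent clock check"]),
])

print(fires(incorrect_dose, {"delivery system failure"}))  # True: single cause
print(fires(incorrect_dose, {"timer failure"}))            # False: AND gate needs both
```

Leaves that sit directly under an 'or' gate at the root are single points of failure; minimising the number of such leaves is exactly the design goal stated above.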
27. An example of a software fault tree (figure not reproduced here)
28. Fault tree analysis
Three possible conditions that can lead to delivery of
incorrect dose of insulin
Incorrect measurement of blood sugar level
Failure of delivery system
Dose delivered at wrong time
By analysis of the fault tree, root causes of these
hazards related to software are:
Algorithm error
Arithmetic error
29. Risk reduction
The aim of this process is to identify dependability
requirements that specify how the risks should be
managed and ensure that accidents/incidents do not
arise.
Risk reduction strategies
Hazard avoidance;
Hazard detection and removal;
Damage limitation.
30. Strategy use
Normally, in critical systems, a mix of risk reduction
strategies are used.
In a chemical plant control system, the system will
include sensors to detect and correct excess pressure in
the reactor.
However, it will also include an independent protection
system that opens a relief valve if dangerously high
pressure is detected.
31. Insulin pump - software risks
Arithmetic error
A computation causes the value of a variable to overflow or
underflow;
A possible defence is to include an exception handler for each
type of arithmetic error.
Algorithmic error
Compare dose to be delivered with previous dose or safe
maximum doses. Reduce dose if too high.
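Both defences can be sketched in code: trap arithmetic failures during the dose computation and fail safe, then clamp the computed dose against the previous dose and a safe maximum. The computation, the limits, and all names here are invented for illustration; a real pump's dose algorithm is far more involved.

```python
SAFE_MAX_DOSE = 4.0   # illustrative maximum single dose (units of insulin)
MAX_INCREASE = 2.0    # illustrative maximum jump over the previous dose

def compute_dose(blood_sugar: float, previous_dose: float) -> float:
    # Arithmetic-error defence: trap overflow/underflow/division errors.
    try:
        raw = 10.0 / blood_sugar          # placeholder computation; may raise
    except ArithmeticError:               # covers ZeroDivisionError, OverflowError
        return 0.0                        # fail safe: deliver nothing; alarm raised elsewhere
    # Algorithmic-error defence: compare with previous dose and safe maximum,
    # reducing the dose if it is too high.
    dose = min(raw, previous_dose + MAX_INCREASE, SAFE_MAX_DOSE)
    return max(dose, 0.0)

print(compute_dose(2.0, 1.0))   # raw 5.0, clamped to previous + MAX_INCREASE = 3.0
print(compute_dose(0.0, 1.0))   # arithmetic error -> fail-safe 0.0
```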
32. Examples of safety requirements
SR1: The system shall not deliver a single dose of insulin that is greater than a
specified maximum dose for a system user.
SR2: The system shall not deliver a daily cumulative dose of insulin that is greater
than a specified maximum daily dose for a system user.
SR3: The system shall include a hardware diagnostic facility that shall be
executed at least four times per hour.
SR4: The system shall include an exception handler for all of the exceptions that
are identified in Table 3.
SR5: The audible alarm shall be sounded when any hardware or software
anomaly is discovered and a diagnostic message, as defined in Table 4, shall be
displayed.
SR6: In the event of an alarm, insulin delivery shall be suspended until the user
has reset the system and cleared the alarm.
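SR1 and SR2 are 'shall not' requirements, but they still imply run-time checks at the point of delivery. A hedged sketch of one way a delivery routine might enforce them; the class, the limit values, and the exception are invented for illustration.

```python
class DoseLimitError(Exception):
    """Raised when a requested delivery would violate SR1 or SR2."""

class InsulinDelivery:
    def __init__(self, max_single_dose: float, max_daily_dose: float):
        self.max_single = max_single_dose   # SR1: per-user single-dose maximum
        self.max_daily = max_daily_dose     # SR2: per-user daily cumulative maximum
        self.delivered_today = 0.0          # reset at the start of each day

    def deliver(self, dose: float) -> None:
        if dose > self.max_single:
            raise DoseLimitError("SR1: single dose exceeds specified maximum")
        if self.delivered_today + dose > self.max_daily:
            raise DoseLimitError("SR2: daily cumulative maximum would be exceeded")
        self.delivered_today += dose        # actual hardware delivery would happen here

pump = InsulinDelivery(max_single_dose=4.0, max_daily_dose=25.0)
pump.deliver(3.0)
print(pump.delivered_today)   # 3.0
```

The point of the sketch is that the checks are placed in the one routine through which every delivery must pass, so the 'shall not' condition cannot be bypassed by any caller.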
34. Safety engineering processes
Safety engineering processes are based on reliability
engineering processes
Plan-based approach with reviews and checks at each stage in
the process
General goal of fault avoidance and fault detection
Must also include safety reviews and explicit identification and
tracking of hazards
35. Regulation
Regulators may require evidence that safety engineering
processes have been used in system development
For example:
The specification of the system that has been developed and
records of the checks made on that specification.
Evidence of the verification and validation processes that have
been carried out and the results of the system verification and
validation.
Evidence that the organizations developing the system have
defined and dependable software processes that include safety
assurance reviews. There must also be records that show that
these processes have been properly enacted.
36. Agile methods and safety
Agile methods are not usually used for safety-critical
systems engineering
Extensive process and product documentation is needed for
system regulation. Contradicts the focus in agile methods on the
software itself.
A detailed safety analysis of a complete system specification is
important. Contradicts the interleaved development of a system
specification and program.
Some agile techniques such as test-driven development
may be used
37. Safety assurance processes
Process assurance involves defining a dependable
process and ensuring that this process is followed during
the system development.
Process assurance focuses on:
Do we have the right processes? Are the processes appropriate
for the level of dependability required? They should include
requirements management, change management, reviews and
inspections, etc.
Are we doing the processes right? Have these processes been
followed by the development team?
Process assurance generates documentation
Agile processes therefore are rarely used for critical systems.
38. Processes for safety assurance
Process assurance is important for safety-critical
systems development:
Accidents are rare events so testing may not find all problems;
Safety requirements are sometimes ‘shall not’ requirements so
cannot be demonstrated through testing.
Safety assurance activities may be included in the
software process that record the analyses that have
been carried out and the people responsible for these.
Personal responsibility is important as system failures may lead
to subsequent legal actions.
39. Safety related process activities
Creation of a hazard logging and monitoring system.
Appointment of project safety engineers who have
explicit responsibility for system safety.
Extensive use of safety reviews.
Creation of a safety certification system where the safety
of critical components is formally certified.
Detailed configuration management (see Chapter 25).
40. Hazard analysis
Hazard analysis involves identifying hazards and their
root causes.
There should be clear traceability from identified hazards
through their analysis to the actions taken during the
process to ensure that these hazards have been
covered.
A hazard log may be used to track hazards throughout
the process.
41. A simplified hazard log entry
Hazard Log Page 4: Printed 20.02.2012
System: Insulin Pump System
Safety Engineer: James Brown
File: InsulinPump/Safety/HazardLog
Log version: 1/3
Identified hazard: Insulin overdose delivered to patient
Identified by: Jane Williams
Criticality class: 1
Identified risk: High
Fault tree identified: YES (Date: 24.01.07, Location: Hazard Log, Page 5)
Fault tree creators: Jane Williams and Bill Smith
Fault tree checked: YES (Date: 28.01.07, Checker: James Brown)
42. Hazard log (2)
System safety design requirements
1. The system shall include self-testing software that will test the sensor system, the
clock, and the insulin delivery system.
2. The self-checking software shall be executed once per minute.
3. In the event of the self-checking software discovering a fault in any of the system
components, an audible warning shall be issued and the pump display shall indicate
the name of the component where the fault has been discovered. The delivery of
insulin shall be suspended.
4. The system shall incorporate an override system that allows the system user to modify
the computed dose of insulin that is to be delivered by the system.
5. The amount of override shall be no greater than a pre-set value (maxOverride),
which is set when the system is configured by medical staff.
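Requirements 1 to 3 above translate naturally into a periodic self-test that names the faulty component and forces delivery to be suspended. A sketch assuming hypothetical per-component check functions; the component names follow requirement 1, everything else is invented.

```python
# Hypothetical per-component checks; each returns True if the component is healthy.
def check_sensor(): return True
def check_clock(): return True
def check_delivery_system(): return False   # simulate a fault for the example

COMPONENT_CHECKS = {
    "sensor system": check_sensor,
    "clock": check_clock,
    "insulin delivery system": check_delivery_system,
}

def self_test() -> list[str]:
    """Run once per minute (requirement 2). Returns the names of faulty
    components; any non-empty result must suspend delivery (requirement 3)."""
    return [name for name, check in COMPONENT_CHECKS.items() if not check()]

faults = self_test()
if faults:
    delivery_suspended = True
    print(f"AUDIBLE WARNING: fault in {faults[0]}")  # display names the component
```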
43. Safety reviews
Driven by the hazard register.
For each identified hazard, the review team should assess
the system and judge whether or not the system can
cope with that hazard in a safe way.
44. Formal verification
Formal methods can be used when a mathematical
specification of the system is produced.
They are the ultimate static verification technique that
may be used at different stages in the development
process:
A formal specification may be developed and mathematically
analyzed for consistency. This helps discover specification errors
and omissions.
Formal arguments that a program conforms to its mathematical
specification may be developed. This is effective in discovering
programming and design errors.
45. Arguments for formal methods
Producing a mathematical specification requires a
detailed analysis of the requirements and this is likely to
uncover errors.
Concurrent systems can be analysed to discover race
conditions that might lead to deadlock. Testing for such
problems is very difficult.
They can detect implementation errors before testing
when the program is analyzed alongside the
specification.
46. Arguments against formal methods
Require specialized notations that cannot be understood
by domain experts.
Very expensive to develop a specification and even more
expensive to show that a program meets that
specification.
Proofs may contain errors.
It may be possible to reach the same level of confidence
in a program more cheaply using other V & V
techniques.
47. Formal methods cannot guarantee safety
The specification may not reflect the real requirements of
system users. Users rarely understand formal notations
so they cannot directly read the formal specification to
find errors and omissions.
The proof may contain errors. Program proofs are large
and complex, so, like large and complex programs, they
usually contain errors.
The proof may make incorrect assumptions about the
way that the system is used. If the system is not used as
anticipated, then the system’s behavior lies outside the
scope of the proof.
48. Model checking
Involves creating an extended finite state model of a
system and, using a specialized system (a model
checker), checking that model for errors.
The model checker explores all possible paths through
the model and checks that a user-specified property is
valid for each path.
Model checking is particularly valuable for verifying
concurrent systems, which are hard to test.
Although model checking is computationally very
expensive, it is now practical to use it in the verification
of small to medium sized critical systems.
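The exhaustive exploration described above can be illustrated in a few lines of explicit-state checking: a breadth-first search over a transition relation, verifying that an invariant property holds in every reachable state. Real model checkers (SPIN, NuSMV, and similar tools) add temporal logic and state-space reduction; this sketch, with an invented toy model, shows only the core idea.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore every reachable state; return a counterexample path to the
    first state violating the invariant, or None if the property holds."""
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path                       # counterexample trace
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                               # invariant holds in all reachable states

# Toy model: a counter that steps by 1 or 2 while below 4 (so it can reach 5).
def successors(n):
    return [n + 1, n + 2] if n < 4 else []

violation = check_invariant(0, successors, lambda n: n <= 4)
print(violation)   # [0, 1, 3, 5]: a trace showing how the bound is exceeded
```

A model checker's key practical advantage, visible even in this sketch, is that a failed property comes with a concrete execution trace that developers can replay.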
50. Static program analysis
Static analysers are software tools for source text
processing.
They parse the program text and try to discover
potentially erroneous conditions and bring these to the
attention of the V & V team.
They are very effective as an aid to inspections - they
are a supplement to but not a replacement for
inspections.
51. Automated static analysis checks
Data faults: variables used before initialization; variables declared but never used; variables assigned twice but never used between assignments; possible array bound violations; undeclared variables.
Control faults: unreachable code; unconditional branches into loops.
Input/output faults: variables output twice with no intervening assignment.
Interface faults: parameter-type mismatches; parameter number mismatches; non-usage of the results of functions; uncalled functions and procedures.
Storage management faults: unassigned pointers; pointer arithmetic; memory leaks.
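Several of the data-fault checks in the table reduce to simple traversals of the program's syntax tree. A minimal sketch of one of them ('variables declared but never used') using Python's own ast module; a production analyser would additionally track scopes, attributes, and control flow.

```python
import ast

def unused_assignments(source: str) -> set[str]:
    """Report names that are assigned but never read, one of the
    'data faults' classes listed in the table above."""
    tree = ast.parse(source)
    assigned, read = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):    # name on the left of '='
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):   # name being read
                read.add(node.id)
    return assigned - read

code = "dose = 3\nlimit = 5\nprint(dose)"
print(unused_assignments(code))   # {'limit'}
```

As the slides note, such checks supplement inspections: the tool finds the mechanical fault patterns cheaply, leaving reviewers free to judge the logic.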
52. Levels of static analysis
Characteristic error checking
The static analyzer can check for patterns in the code that are
characteristic of errors made by programmers using a particular
language.
User-defined error checking
Users of a programming language define error patterns, thus
extending the types of error that can be detected. This allows
specific rules that apply to a program to be checked.
Assertion checking
Developers include formal assertions in their program
that describe relationships which must hold. The static
analyzer symbolically executes the code and highlights
potential problems.
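As a small illustration of assertion checking, the sketch below states an invariant as a Java `assert` statement (enabled at runtime with `java -ea`). An assertion-checking analyzer would try to verify such assertions symbolically. The `DoseAssertions` class and its dose-limiting rule are invented for this example, loosely echoing the insulin pump used later in the chapter.

```java
// Sketch of assertion checking: the developer states a relationship that
// must always hold; a static analyzer can attempt to verify it, and at
// runtime it acts as a safety check.
public class DoseAssertions {
    static final int MAX_DOSE = 5;   // illustrative limit

    static int limitDose(int computedDose) {
        int dose = computedDose > MAX_DOSE ? MAX_DOSE : computedDose;
        // Formal assertion: whatever was computed, the delivered dose
        // never exceeds the safe maximum.
        assert dose <= MAX_DOSE : "unsafe dose " + dose;
        return dose;
    }

    public static void main(String[] args) {
        System.out.println(limitDose(3));   // within the limit
        System.out.println(limitDose(9));   // clamped to MAX_DOSE
    }
}
```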
53. Use of static analysis
Particularly valuable when a language such as C is used,
which has weak typing so that many errors go
undetected by the compiler.
Particularly valuable for security checking – the static
analyzer can discover areas of vulnerability such as
buffer overflows or unchecked inputs.
Static analysis is now routinely used in the development
of many safety and security critical systems.
55. Safety and dependability cases
Safety and dependability cases are structured
documents that set out detailed arguments and evidence
that a required level of safety or dependability has been
achieved.
They are normally required by regulators before a
system can be certified for operational use. The
regulator’s responsibility is to check that a system is as
safe or dependable as is practical.
Regulators and developers work together and negotiate
what needs to be included in a system
safety/dependability case.
56. The system safety case
A safety case is:
A documented body of evidence that provides a convincing and
valid argument that a system is adequately safe for a given
application in a given environment.
Arguments in a safety case can be based on formal
proof, design rationale, safety proofs, etc. Process
factors may also be included.
A software safety case is usually part of a wider system
safety case that takes hardware and operational issues
into account.
57. The contents of a software safety case
System description: An overview of the system and a
description of its critical components.
Safety requirements: The safety requirements abstracted
from the system requirements specification. Details of
other relevant system requirements may also be
included.
Hazard and risk analysis: Documents describing the
hazards and risks that have been identified and the
measures taken to reduce risk. Hazard analyses and
hazard logs.
Design analysis: A set of structured arguments (see
Section 15.5.1) that justify why the design is safe.
Verification and validation: A description of the V & V
procedures used and, where appropriate, the test plans
for the system. Summaries of the test results, showing
defects that have been detected and corrected. If formal
methods have been used, a formal system specification
and any analyses of that specification. Records of static
analyses of the source code.
58. The contents of a software safety case (continued)
Review reports: Records of all design and safety reviews.
Team competences: Evidence of the competence of all of
the team involved in safety-related systems development
and validation.
Process QA: Records of the quality assurance processes
(see Chapter 24) carried out during system development.
Change management processes: Records of all changes
proposed, actions taken and, where appropriate,
justification of the safety of these changes. Information
about configuration management procedures and
configuration management logs.
Associated safety cases: References to other safety
cases that may impact this safety case.
59. Structured arguments
Safety cases should be based around structured
arguments that present evidence to justify the assertions
made in these arguments.
The argument explains why a claim about system safety
or security is justified by the available evidence.
61. Insulin pump safety argument
Arguments are based on claims and evidence.
Insulin pump safety:
Claim: The maximum single dose of insulin to be delivered
(currentDose) will not exceed maxDose.
Evidence: Safety argument for the insulin pump (discussed later).
Evidence: Test data for the insulin pump. The value of currentDose
was correctly computed in 400 tests.
Evidence: The static analysis report for the insulin pump software
revealed no anomalies that affected the value of currentDose.
Argument: The evidence presented demonstrates that the
maximum dose of insulin that can be computed is equal to maxDose.
62. Structured safety arguments
Structured arguments that demonstrate that a system
meets its safety obligations.
It is not necessary to demonstrate that the program
works as intended; the aim is simply to demonstrate
safety.
Generally based on a claim hierarchy.
You start at the leaves of the hierarchy and demonstrate safety.
This implies the higher-level claims are true.
63. A safety claim hierarchy for the insulin pump
64. Software safety arguments
Safety arguments are intended to show that the system
cannot reach an unsafe state.
These are weaker than correctness arguments which
must show that the system code conforms to its
specification.
They are generally based on proof by contradiction:
Assume that an unsafe state can be reached;
Show that this is contradicted by the program code.
A graphical model of the safety argument may be
developed.
65. Construction of a safety argument
Establish the safe exit conditions for a component or a
program.
Starting from the END of the code, work backwards until
you have identified all paths that lead to the exit of the
code.
Assume that the safe exit condition is false.
Show that, for each path leading to the exit, the
assignments made along that path contradict the
assumption of an unsafe exit from the component.
66. Insulin dose computation with safety checks
// The insulin dose to be delivered is a function of the blood sugar level,
// the previous dose delivered, and the time of delivery of the previous dose.
currentDose = computeInsulin();
// Safety check: adjust currentDose if necessary.
// if statement 1
if (previousDose == 0) {
    if (currentDose > maxDose / 2)
        currentDose = maxDose / 2;
} else {
    if (currentDose > previousDose * 2)
        currentDose = previousDose * 2;
}
// if statement 2
if (currentDose < minimumDose)
    currentDose = 0;
else if (currentDose > maxDose)
    currentDose = maxDose;
administerInsulin(currentDose);
67. Informal safety argument based on demonstrating contradictions
68. Program paths
Neither branch of if-statement 2 is executed:
This can only happen if currentDose is >= minimumDose
and <= maxDose.
The then-branch of if-statement 2 is executed:
currentDose = 0.
The else-branch of if-statement 2 is executed:
currentDose = maxDose.
In all cases, the postconditions contradict the unsafe
condition that the dose administered is greater than
maxDose.
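The contradiction argument above can also be checked mechanically: run the dose-limiting logic from slide 66 over a bounded range of inputs and confirm that the unsafe condition (administered dose greater than maxDose) is never reachable. The variable names follow the earlier listing; the `DoseSafetyCheck` class, the `safeDose` wrapper, and the concrete numeric limits are assumptions made for this sketch.

```java
// Exhaustive check of the safety property for the dose-limiting code:
// for every computed/previous dose in a bounded range, the administered
// dose never exceeds maxDose.
public class DoseSafetyCheck {
    static final int minimumDose = 1;   // illustrative values
    static final int maxDose = 5;

    // The safety checks from slide 66, applied to a computed dose.
    static int safeDose(int currentDose, int previousDose) {
        if (previousDose == 0) {
            if (currentDose > maxDose / 2) currentDose = maxDose / 2;
        } else if (currentDose > previousDose * 2) {
            currentDose = previousDose * 2;
        }
        if (currentDose < minimumDose) currentDose = 0;
        else if (currentDose > maxDose) currentDose = maxDose;
        return currentDose;
    }

    public static void main(String[] args) {
        // Try every combination of computed and previous dose in range.
        for (int computed = 0; computed <= 100; computed++) {
            for (int previous = 0; previous <= 100; previous++) {
                if (safeDose(computed, previous) > maxDose)
                    throw new AssertionError("unsafe state reached");
            }
        }
        System.out.println("dose never exceeds maxDose");
    }
}
```

This bounded exhaustive check is in the spirit of the model checking discussed earlier: it demonstrates the safety property for all states in the explored range, whereas the structured argument demonstrates it for all inputs.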
69. Key points
Safety-critical systems are systems whose failure can
lead to human injury or death.
A hazard-driven approach is used to understand the
safety requirements for safety-critical systems. You
identify potential hazards and decompose these (using
methods such as fault tree analysis) to discover their
root causes. You then specify requirements to avoid or
recover from these problems.
It is important to have a well-defined, certified process
for safety-critical systems development. This should
include the identification and monitoring of potential
hazards.
70. Key points
Static analysis is an approach to V & V that examines
the source code of a system, looking for errors and
anomalies. It allows all parts of a program to be checked,
not just those parts that are exercised by system tests.
Model checking is a formal approach to static analysis
that exhaustively checks all states in a system for
potential errors.
Safety and dependability cases collect the evidence that
demonstrates a system is safe and dependable. Safety
cases are required when an external regulator must
certify the system before it is used.