Dependable Systems - Hardware Dependability with Diagnosis (13/16)
1. Dependable Systems
Hardware Dependability - Diagnosis
Dr. Peter Tröger
Sources:
Siewiorek, Daniel P.; Swarz, Robert S.: Reliable Computer Systems. 3rd ed. Wellesley, MA: A. K. Peters, Ltd., 1998. ISBN 1-56881-092-X
Some images (C) Elena Dubrova, ESDLab, Kungl Tekniska Högskolan
2. Dependable Systems Course PT 2014
Fault Diagnosis
• Consists of fault detection (what?) and fault location (where?)
• Fault detection
• Replication checks - Test operation / execution against alternate implementation
• Timing checks - Test operation / execution against timing constraints
• Reversal checks - Make use of operation reversibility
• Coding checks - Utilize redundant (but different) representation of data
• Reasonableness checks - Check against known system / data properties
• Structural checks - Ensure consistent structure of data, diagnostic tests, ...
• Diagnostic checks - Use set of inputs for which the outputs are known
• Algorithmic checks - Check invariants of an algorithm
3. Dependable Systems Course PT 2014
Fault Detection
• Replication check
• Perform test against the same / alternate implementation
• Identical / partial copies of unit - assumes correct design and independent failure
• Separate and different copies - assumes design faults
• Repeated execution - assumes transient fault (see the sketch below)
• Works well, but expensive
• Implementable in a spatial redundancy architecture
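A minimal Python sketch of the repeated-execution variant (all names hypothetical): the same deterministic operation is run several times and the results compared; a mismatch points to a transient fault.

```python
def replication_check(operation, *args, runs=2):
    """Repeated-execution replication check: re-run the same operation
    and compare results. A mismatch indicates a transient fault
    (assumes a correct design and deterministic operation)."""
    results = [operation(*args) for _ in range(runs)]
    if any(r != results[0] for r in results):
        raise RuntimeError("replication check failed: results differ")
    return results[0]

# Usage: guard a computation suspected of transient faults
value = replication_check(sum, [1, 2, 3])
```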
4. Dependable Systems Course PT 2014
Fault Detection
• Timing checks
• Monitor execution based on timing constraints
• Watchdog - Progress resets the timer; an expired timer assumes a broken component (see the sketch below)
• Detection of crash, overload, infinite loop; frequency depends on application
• Broadcast timeout - Component broadcasts a message on progress; receivers have a deadline for the next message
• Acknowledge timeout - Peer should react to a request with an answer message
Tandem: Heartbeat broadcast every second, check on every second message
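A minimal Python watchdog sketch (names hypothetical), assuming the monitored component calls kick() on every progress step; an expired timer declares the component broken.

```python
import threading

class Watchdog:
    """Timing check: progress resets the timer; an expired timer
    assumes the monitored component is broken."""
    def __init__(self, timeout_s, on_expiry):
        self.timeout_s = timeout_s
        self.on_expiry = on_expiry   # handler for crash/overload/infinite loop
        self.timer = None

    def kick(self):
        # Called by the component on every progress step.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout_s, self.on_expiry)
        self.timer.daemon = True
        self.timer.start()

# Usage: wd = Watchdog(1.0, lambda: print("component assumed broken"))
# then call wd.kick() on every progress event
```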
5. Dependable Systems Course PT 2014
Fault Detection
• Reversal checks
• One-to-one relationship between inputs and outputs
• Calculate the inputs from the output via the reverse function, then compare
• Re-read data after write, mathematical functions
• Diagnostic checks
• Check known output for some test inputs
• Typical approach in built-in hardware testing
• Load tests - run at saturation levels, trigger abnormal environmental conditions
• Plausibility checks
• Based on knowledge about system design and data types - range checks, consistency checks, type checks
Example: √(x²) = x (reversal check for the squaring operation)
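A minimal Python sketch of this reversal check (names hypothetical): the input is recomputed from the output via the inverse function and compared, with a tolerance for floating-point rounding.

```python
import math

def square_with_reversal_check(x, tol=1e-9):
    """Reversal check: recompute the input from the output via the
    inverse function (sqrt) and compare. Note sqrt(x*x) == |x|."""
    y = x * x
    if abs(math.sqrt(y) - abs(x)) > tol:
        raise RuntimeError("reversal check failed")
    return y

# Usage:
print(square_with_reversal_check(-3.0))  # 9.0
```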
6. Dependable Systems Course PT 2014
Fault Detection - Coding Checks
• Coding techniques help with fault detection, diagnosis and tolerance
• General idea: Informational redundancy
• Code defines a subset of all possible information words as valid information
• Each valid information is a code word
• Error detection mechanism: Is the given word a (valid) code word?
• Number space - All numbers that can be represented in a numerical system
• Code space - Subset of a number space
• Applications: Data storage, ALU, communication buses, self-checking hardware, ...
• Ranking of codes: Detection coverage, overhead, complexity added, ...
8. Dependable Systems Course PT 2014
Hamming Distance (Richard Hamming, 1950)
• Hamming distance: Number of positions on which two words differ
• Alternative definition: Number of substitutions necessary to turn A into B
• Alternative definition: Number of errors that transform A into B
• Minimum Hamming distance: Minimum distance between any two code words
• To detect d single-bit errors, a code must have a min. distance of at least d + 1
• Only if at least d+1 bits change can the transmitted word turn into another (valid) code word; up to d flipped bits always yield a detectable non-code word
• To correct d single-bit errors, the minimum distance must be 2d+1
[Example figure (C) Wikipedia]
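A minimal Python sketch of these definitions (names hypothetical): the distance between two equal-length words, and the minimum distance over all pairs of code words.

```python
def hamming_distance(a, b):
    """Number of positions on which two equal-length words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    """Minimum Hamming distance between any two code words."""
    words = list(code)
    return min(hamming_distance(a, b)
               for i, a in enumerate(words)
               for b in words[i + 1:])

# Usage:
print(hamming_distance("10011", "11010"))            # 2
print(minimum_distance(["000", "011", "101", "110"]))  # 2 -> detects single-bit errors
```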
9. Dependable Systems Course PT 2014
Parity Codes
• Add parity bit to the information word
• Even parity: if the number of ones is odd, set the parity bit to one to make the total number of ones even
• Odd parity: if the number of ones is even, set the parity bit to one to make the total number of ones odd
• Variants
• Bit-per-word parity
• Bit-per-byte parity (example: Pentium data cache)
• Bit-per-chip parity
• ...
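A minimal Python sketch of bit-per-word parity (names hypothetical), operating on the ones count of an information word given as a bit string.

```python
def parity_bit(bits, even=True):
    """Even parity: make the total number of ones even;
    odd parity: make it odd."""
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

def check_parity(word, even=True):
    """The coded word (information + parity bit) is valid iff its
    ones count matches the chosen parity."""
    ones = word.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

# Usage: "1011" has three ones, so even parity appends "1"
print(parity_bit("1011"))          # '1'
print(check_parity("1011" + "1"))  # True
```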
11. Dependable Systems Course PT 2014
m-out-of-n Code
• Data word of length n (including code bits); ensure exactly m ones are in the word
• Example: 2-out-of-5 code
• Often used in telecommunication and in the US postal system
• 5 bits allow 10 combinations with 2-out-of-5, so all decimal digits are representable (see the table below)
• Can detect more than only single-bit errors
Decimal Digit   2-out-of-5 Code
0               0 0 0 1 1
1               0 0 1 0 1
2               0 0 1 1 0
3               0 1 0 0 1
4               0 1 0 1 0
5               0 1 1 0 0
6               1 0 0 0 1
7               1 0 0 1 0
8               1 0 1 0 0
9               1 1 0 0 0
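A minimal Python validity check for an m-out-of-n code, here defaulting to the 2-out-of-5 code in the table above (names hypothetical).

```python
def is_valid_m_out_of_n(word, m=2, n=5):
    """A word is a code word iff it has length n and exactly m ones."""
    return len(word) == n and word.count("1") == m

# Usage:
print(is_valid_m_out_of_n("00011"))  # True  (decimal digit 0)
print(is_valid_m_out_of_n("00111"))  # False (three ones -> error detected)
```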
12. Dependable Systems Course PT 2014
Code Properties
Code          Bits/Word   Possible Words   Coverage
Even Parity   4           8                Any single-bit error; no double-bit errors; not the all-0s or all-1s errors
Odd Parity    4           8                Any single-bit error; no double-bit errors; including the all-0s and all-1s errors
2-out-of-4    4           6                Any single-bit error; 33% of double-bit errors
13. Dependable Systems Course PT 2014
Checksumming
• General idea: Multiply the components of a data word (e.g., by their position) and add them up to a checksum
• Good for large blocks of data, low hardware overhead
• Might require substantial time and memory overhead
• 100% coverage for single faults
• Example: ISBN-10 check digit (see the sketch below)
• The final digit of a 10-digit ISBN is a check digit, computed as follows:
• Multiply all data digits by their position number, starting from the right
• Take the sum of the products and set the check digit so that the total MOD 11 = 0
• A check digit value of 10 is represented by X
• More complex approaches (e.g., CRC) achieve better coverage at the cost of more CPU time
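A minimal Python sketch of the ISBN-10 check-digit computation described above (names hypothetical). Counting positions from the right, the check digit has weight 1, so the nine data digits carry weights 10 down to 2 from the left.

```python
def isbn10_check_digit(data_digits):
    """data_digits: the nine data digits, leftmost first.
    Weights run from 10 (leftmost) down to 2; the check digit
    (weight 1) is chosen so that the total is 0 mod 11.
    A check value of 10 is written as 'X'."""
    assert len(data_digits) == 9
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), data_digits))
    check = (-total) % 11
    return "X" if check == 10 else str(check)

# Usage: ISBN 0-306-40615-? -> check digit 2
print(isbn10_check_digit("030640615"))  # '2'
```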
14. Dependable Systems Course PT 2014
Hamming Code
• Every data word with n bits is extended by k control bits
• Control bits follow the standard parity approach (even parity), implementable with XOR
• Inserted at power-of-two positions
• Parity bit pX is responsible for all positions whose index has the X-th least significant bit set (e.g., p1 is responsible for the 'red dot' positions in the figure)
• Parity bits check overlapping parts of the data bits; computation and check are cheap, but the data overhead is significant
• Minimum Hamming distance of three (single-bit correction)
• Application in DRAM memory (see the sketch below)
[Figure taken from Wikimedia Commons]
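A minimal Python sketch of a Hamming(7,4) encoder following the scheme above (names hypothetical): parity bits sit at positions 1, 2, and 4, each covering the positions whose index has the corresponding bit set.

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming code word.
    Positions 1..7; parity bits at the power-of-two positions
    1, 2, 4 (even parity, computed with XOR)."""
    d = [int(b) for b in data]              # data bits go to positions 3, 5, 6, 7
    code = [0] * 8                          # index 0 unused
    code[3], code[5], code[6], code[7] = d
    for p in (1, 2, 4):
        # parity bit p covers every position whose index has bit p set
        for i in range(1, 8):
            if i != p and (i & p):
                code[p] ^= code[i]
    return "".join(str(b) for b in code[1:])

# Usage:
print(hamming74_encode("1011"))  # '0110011'
```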
16. Dependable Systems Course PT 2014
Hamming Code
• Hamming codes are known to produce 10% to 40% data overhead
• The syndrome determines which bit (if any) is corrupted
• Example: ECC memory
• The majority of 'one-off' soft errors in DRAM stems from background radiation
• Denser packaging and lower voltages (for higher frequencies) aggravate the problem
• Most common approach is the SECDED Hamming code
• "Single error correction, double error detection" (SECDED)
• Hamming code with an additional parity bit
• Modern BIOS implementations raise threshold-based warnings on frequent correctable errors
(C) Z. Kalbarczyk
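A minimal Python sketch of syndrome decoding for the Hamming(7,4) word encoded above (names hypothetical): the syndrome is the sum of the failing parity positions and points directly at the corrupted bit, giving single-bit correction. A full SECDED code would add one more overall parity bit, not shown here.

```python
def hamming74_correct(word):
    """Compute the syndrome of a 7-bit Hamming code word and correct
    a single-bit error. Syndrome 0 means no (detectable) error;
    otherwise it is the position of the corrupted bit."""
    bits = [0] + [int(b) for b in word]    # positions 1..7
    syndrome = 0
    for p in (1, 2, 4):
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= bits[i]          # XOR over all covered positions
        if parity:                         # parity check p failed
            syndrome += p
    if syndrome:
        bits[syndrome] ^= 1                # flip the corrupted bit
    return "".join(str(b) for b in bits[1:]), syndrome

# Usage: flip bit 5 of '0110011' and let the syndrome locate it
print(hamming74_correct("0110111"))  # ('0110011', 5)
```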