The document discusses metric-driven formal verification and observation coverage analysis. It describes OneSpin Solutions, a provider of formal verification tools, and its OneSpin 360 tool, which measures verification progress through observation coverage analysis and offers features such as assertion debugging and formal scoreboarding. The document argues that formal verification needs a progress metric to measure coverage and determine when verification is complete, and proposes combining control coverage, which measures stimulus quality, with observation coverage, which measures checker quality. Observation coverage is demonstrated on an example using the Quantify MDV metric, which checks how design modifications impact property proofs. Merging observation coverage with control coverage yields a fuller picture of verification progress.
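The summary names the key mechanism only abstractly. As a rough, language-neutral sketch in Python (all names invented; the real Quantify MDV metric operates on RTL designs and formal property proofs, not Python functions), observation coverage can be pictured as mutation analysis against proven properties: modify the design, re-check the properties, and count how many modifications at least one property detects.

```python
# Conceptual sketch of observation coverage as mutation analysis
# against checked properties. Purely illustrative.

def design(a, b, sel):
    """Toy 'design': a 2-to-1 multiplexer."""
    return a if sel else b

# Properties the checkers/assertions encode (here: plain predicates).
properties = [
    lambda d: d(1, 0, True) == 1,   # sel=1 routes input a
    lambda d: d(0, 1, False) == 1,  # sel=0 routes input b
]

# Mutations model design modifications (faults) injected one at a time.
mutations = [
    lambda a, b, sel: a,                 # ignore sel, always pass a
    lambda a, b, sel: b if sel else a,   # invert the select
    lambda a, b, sel: a if sel else b,   # equivalent mutant (undetectable)
]

detected = sum(
    any(not prop(mutant) for prop in properties)
    for mutant in mutations
)
print(f"observation coverage: {detected}/{len(mutations)} mutations detected")
```

A mutation that no property detects points at design behavior no checker observes, which is exactly the checker-quality gap that observation coverage is meant to expose.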
Basics of Functional Verification - Arrow Devices
Are you new to functional verification? Or do you need a refresher? This presentation takes you through the basics of functional verification - overall scope and process with examples. Also included are some tips on do's and don'ts!
Must.kill.mutants. TopConf Tallinn 2016Gerald Muecke
Mutation testing has been around for almost 20 years. Originating in academic research, it has found its way into the developer's toolbox: it is easy to set up and use, and it produces valuable results. But what is mutation testing? It is a practice for determining the actual value of an automated test suite and automatically exploring parts of the code that are still untested, unveiling surprises even to experienced test automation developers. Given a test suite that runs successfully, mutation testing injects changes into the production code based on a set of rules and reruns the tests to determine whether they fail. Depending on the size of the code base, the execution time increases exponentially due to the sheer number of permutations, requiring thorough planning, focus, and prioritization.
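As a minimal, self-contained sketch of the mechanic described above (illustrative only, not any particular mutation-testing tool): parse the code under test, flip one operator, rerun the test, and see whether the suite notices.

```python
# Minimal mutation-testing loop: rewrite one operator in the code
# under test, rerun the test, and report whether the mutant is killed.
# Requires Python 3.9+ for ast.unparse.
import ast

source = """
def price_with_tax(price, rate):
    return price + price * rate
"""

def run_test(code):
    """Returns True if the (invented) test passes against the given source."""
    ns = {}
    exec(compile(code, "<mutant>", "exec"), ns)
    return ns["price_with_tax"](100, 0.2) == 120.0

class AddToSub(ast.NodeTransformer):
    """Mutation rule: turn every '+' into '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

original = ast.parse(source)
mutant = AddToSub().visit(ast.parse(source))
ast.fix_missing_locations(mutant)

assert run_test(original)                     # the suite is green on real code
killed = not run_test(ast.unparse(mutant))    # rerun against the mutant
print("mutant killed" if killed else "mutant survived: test gap!")
```

A surviving mutant means the suite never exercised the mutated behavior, which is the "untested code" signal the abstract describes.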
assertYourself - Breaking the Theories and Assumptions of Unit Testing in Flexmichael.labriola
This document discusses automated testing in Flex. It begins by explaining why automated testing is important, such as reducing costs from software errors and allowing developers to change code without fear of breaking other parts of the project. It then covers topics like writing unit tests, using theories and data points to test over multiple values, and writing integration tests. The document emphasizes that writing testable code is key, and provides some principles for doing so, such as separating construction from application logic and using interfaces. It also discusses using fakes, stubs and mocks to isolate units for testing.
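The talk targets Flex, but the isolation principle it names is language-agnostic. A minimal Python sketch, with PaymentService and FakeGateway invented for the example:

```python
# Isolating a unit with a fake collaborator instead of the real one.

class FakeGateway:
    """Stands in for a real payment gateway; records calls for inspection."""
    def __init__(self):
        self.charges = []
    def charge(self, amount):
        self.charges.append(amount)
        return True

class PaymentService:
    # Construction is separated from application logic: the collaborator
    # is injected, so tests pass a fake and production passes the real one.
    def __init__(self, gateway):
        self.gateway = gateway
    def pay(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

def test_pay_charges_gateway():
    gateway = FakeGateway()
    service = PaymentService(gateway)
    assert service.pay(42) is True
    assert gateway.charges == [42]   # behavior verified through the fake

test_pay_charges_gateway()
print("ok")
```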
Design thinking is an iterative problem-solving process of discovery that employs various techniques to gain insight and produce innovative solutions for any type of challenge, academic or non-academic (organizational or business).
The slides of my session "Unit vs. Integration Tests" I gave at our Softwerkskammer Meetup Munich.
Abstract:
Unit and integration test fanboys have been fighting each other since the early days of TDD. Nevertheless, in recent years the test pyramid has become the common-sense strategy for automated tests, synthesizing both approaches in an economical ratio. Unfortunately, in practice the vague and abstract concept leaves us alone with a lot of remaining questions.
I will start the session by introducing the test pyramid strategy and the strengths and weaknesses of the different kinds of tests, followed by implementation approaches valuable in real life. Then we will split up into small groups to discuss what works well in our current projects and work together on how to approach the remaining challenges. In the end we will come together again and exchange our solutions with the full audience.
Are Your Continuous Tests Too Fragile for Agile?Erika Barron
With a fragile test suite, the Continuous Testing that's vital to agile just isn't feasible. If you truly want to automate the execution of a broad test suite—embracing unit, component, integration, functional, performance, and security testing—during continuous integration, discover the tips to ensure your test suite is up to the task.
Logically-componentized: Tests need to be logically-componentized so you can assess the impact at change time. When tests fail and they're logically correlated to components, it is much easier to establish priority and associate tasks to the correct resource.
Incremental: Tests can be built on each other, without impacting the integrity of the original or new test case.
Repeatable: Tests can be executed over and over again with each incremental build, integration, or release process.
Presented at Better Software Conference East 2014 (Agile Development Conference East 2014).
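A hedged sketch of how "logically-componentized" and "repeatable" might look in practice (the component registry and test names are invented; a real suite would use a framework's tagging features, such as pytest markers):

```python
# Sketch: tagging tests by component so failures map to owners, and
# seeding randomness so every run is repeatable.
import random

TESTS = []

def component(name):
    """Decorator: register a test under a logical component."""
    def wrap(fn):
        TESTS.append((name, fn))
        return fn
    return wrap

@component("parser")
def test_parses_empty():
    assert "".split(",") == [""]

@component("pricing")
def test_discount_is_deterministic():
    rng = random.Random(1234)        # fixed seed: identical on every run
    assert 0 <= rng.random() < 1

for comp, test in TESTS:
    try:
        test()
        print(f"[{comp}] {test.__name__}: pass")
    except AssertionError:
        # Failures are reported per component, so triage can route the
        # task to the right owner (the 'logically-componentized' idea).
        print(f"[{comp}] {test.__name__}: FAIL")
```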
Agile Test Management and Reporting—Even in a Non-Agile ProjectTechWell
This document summarizes an upcoming presentation on "Agile Test Management and Reporting - Even in a Non-Agile Project" by Paul Holland from Testing Thoughts. The presentation will discuss using a whiteboard and spreadsheet to plan and track testing work instead of metrics like test cases and pass/fail percentages. Tracking actual effort spent rather than numbers provides better visibility into testing progress and issues. Sample tools like a whiteboard layout and spreadsheet report are shown to illustrate this approach.
More Related Content
Similar to TRACK H: Formal metric driven verification/ Raik Brinkmann
SystemVerilog Assertions (SVA) are used to validate the behavior of a design. Assertions are pieces of verification code that monitor a design for compliance with its specification, and they can find bugs earlier and faster. There are two main types of assertions: immediate assertions, which follow simulation event semantics, and concurrent assertions, which are based on clock semantics and can specify behavior over time. Concurrent assertions use sequences and properties to describe complex behaviors and are well suited to formal analysis methods. SVA provides a native assertion framework in SystemVerilog, allowing simple integration with designs.
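SVA itself is SystemVerilog, but the semantics of a concurrent assertion can be sketched language-neutrally. Assuming a toy handshake rule roughly equivalent to `assert property (@(posedge clk) req |-> ##[1:3] ack);`, a checker over a sampled trace looks like this (trace values invented):

```python
# What a concurrent assertion checks, sketched over a sampled trace:
# "whenever req is high, ack must follow within 1 to 3 clock cycles".

trace = [  # one dict of sampled signal values per clock cycle
    {"req": 1, "ack": 0},
    {"req": 0, "ack": 0},
    {"req": 0, "ack": 1},   # ack 2 cycles after req: ok
    {"req": 1, "ack": 0},
    {"req": 0, "ack": 0},
    {"req": 0, "ack": 0},
    {"req": 0, "ack": 0},   # no ack within 3 cycles: violation
]

def check_req_ack(trace, lo=1, hi=3):
    failures = []
    for t, cycle in enumerate(trace):
        if cycle["req"]:                       # antecedent matched at t
            window = trace[t + lo : t + hi + 1]
            if not any(c["ack"] for c in window):
                failures.append(t)
    return failures

print("violations at cycles:", check_req_ack(trace))   # -> [3]
```

An immediate assertion, by contrast, is just a point-in-time check inside procedural code, akin to a bare Python `assert`.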
Finding Bugs Faster with Assertion Based Verification (ABV)DVClub
1) Assertion-based verification introduces assertions into a design to improve observability and controllability during simulation and formal analysis.
2) Assertions define expected behavior and can detect errors by monitoring signals within a design.
3) An assertion-based verification methodology leverages assertions throughout the verification flow from module to system level using various tools like simulation, formal analysis, and acceleration for improved productivity, quality, and reduced verification time.
TRACK H: On-the-fly design exploration framework for simulation/ lior Altmanchiportal
The document proposes a framework to improve the bug fix verification process by allowing designers to instantly check the effect of fixes through on-the-fly expression calculation, without needing to recompile or resimulate the entire design. This is done by analyzing code changes, determining affected statements, and generating virtual signals to show expression results on the waveform. The goals are to minimize design re-spins caused by bugs found late and speed up the fix verification process. Initial results showed the approach works for simple logic changes. Future work aims to support more complex constructs and better compare multiple fixes.
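The framework's internals are not given in this summary, but its central trick, evaluating a candidate fix as a new expression over already-recorded signal values instead of resimulating, can be sketched as follows (signal names and values invented):

```python
# Evaluate a candidate RTL fix as an expression over recorded waveform
# data, producing a 'virtual signal' without recompiling or resimulating.
recorded = {                      # per-cycle values dumped from the failing run
    "valid": [0, 1, 1, 0, 1],
    "ready": [1, 1, 0, 0, 1],
    "full":  [0, 0, 1, 1, 0],
}

def virtual_signal(expr, waves):
    """Evaluate expr once per cycle over the recorded signals."""
    cycles = len(next(iter(waves.values())))
    return [
        int(eval(expr, {"__builtins__": {}},
                 {name: vals[t] for name, vals in waves.items()}))
        for t in range(cycles)
    ]

# Buggy original vs. proposed fix, compared side by side on the waveform:
print(virtual_signal("valid and ready", recorded))               # [0, 1, 0, 0, 1]
print(virtual_signal("valid and ready and not full", recorded))  # same trace here
```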
How to Handle Asynchronous Behaviors Using SVADVClub
SystemVerilog assertions are inherently synchronous due to the language definition. This makes them difficult to use for checking asynchronous behaviors like resets or communication across clock domains. However, with the proper techniques, SVA can effectively check both synchronous and asynchronous properties. Key techniques include using "disable iff" to terminate assertions when asynchronous signals occur, and delaying input sampling through constructs like program blocks, sequence events, or #0 delays to sample inputs after asynchronous signals have taken effect.
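A minimal sketch of the "disable iff" idea in language-neutral form: an in-flight obligation is abandoned, not failed, when the asynchronous reset fires (single outstanding request, invented trace):

```python
# 'disable iff (rst)' in spirit: a pending req->ack obligation is
# cancelled, rather than reported as a failure, while reset is asserted.

trace = [
    {"req": 1, "ack": 0, "rst": 0},
    {"req": 0, "ack": 0, "rst": 1},  # async reset hits mid-transaction
    {"req": 0, "ack": 0, "rst": 0},
    {"req": 1, "ack": 0, "rst": 0},
    {"req": 0, "ack": 1, "rst": 0},  # normal completion
]

def check(trace, timeout=3):
    failures, pending = [], None     # pending = cycle the open req started
    for t, c in enumerate(trace):
        if c["rst"]:
            pending = None           # the 'disable iff' behavior: abandon
            continue
        if pending is not None and c["ack"]:
            pending = None           # obligation satisfied
        elif pending is not None and t - pending >= timeout:
            failures.append(pending) # deadline passed without ack
            pending = None
        if c["req"]:
            pending = t              # new obligation opens
    return failures

print("violations opened at cycles:", check(trace))   # -> []
```

Without the reset clause, the request at cycle 0 would be reported as a timeout, which is exactly the false failure "disable iff" exists to suppress.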
This document summarizes a presentation on verification challenges and technologies. It discusses the basics of verification, verification methodologies, and skills needed for verification jobs. It covers simulation-based verification techniques like testbenches, and limitations of simulation like lack of timing information. It also discusses functional coverage to track whether test plans have been fully executed.
The document discusses various types of testing including functional testing, integration testing, regression testing, smoke testing, performance testing, and exploratory testing. It provides examples and explanations of each type of testing. Functional testing involves testing application components independently, while integration testing checks for dependencies between modules. Regression testing re-tests applications after changes to check for bugs in unaffected areas. Smoke testing checks for blocker bugs before deep testing, and performance testing evaluates response time and stability under various loads. Exploratory testing explores applications without predefined requirements or test cases.
This document provides an introduction to verification and the Universal Verification Methodology (UVM). It discusses different types of verification including simulation, functional coverage, and code coverage. It describes how simulators work and their limitations in approximating physical hardware. It also covers topics like event-driven simulation, cycle-based simulation, co-simulation, and different types of coverage metrics used to ensure a design is fully tested.
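To make the event-driven simulation point concrete: such a simulator is, at its core, a time-ordered queue of events, and quiet stretches of time are skipped rather than evaluated cycle by cycle. A toy kernel (all names invented):

```python
# Toy event-driven simulation kernel: a time-ordered event queue.
# Only things that actually change schedule work, which is the key
# difference from cycle-based evaluation of everything every cycle.
import heapq
import itertools

queue, counter = [], itertools.count()   # counter breaks timestamp ties

def schedule(time, action):
    heapq.heappush(queue, (time, next(counter), action))

def run(until=100):
    while queue and queue[0][0] <= until:
        time, _, action = heapq.heappop(queue)
        action(time)

def toggle_clock(period):
    state = {"clk": 0}
    def tick(now):
        state["clk"] ^= 1
        print(f"t={now}: clk={state['clk']}")
        schedule(now + period, tick)     # re-arm the next edge
    return tick

schedule(0, toggle_clock(5))
run(until=20)    # prints clock edges at t=0, 5, 10, 15, 20
```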
Testing As A Bottleneck - How Testing Slows Down Modern Development Processes...TEST Huddle
We often claim the purpose of testing is to verify that software meets a desired level of quality. Frequently, the term “testing” is associated with checking for functional correctness. However, in large, complex software systems with an established user base, it is also important to verify system constraints such as backward compatibility, reliability, security, accessibility, and usability. Kim Herzig from Microsoft explores these issues in the latest TEST Huddle webinar.
TRACK H: Using Formal Tools to Improve the Productivity of Verification at ST...chiportal
Formal verification was used to verify three projects at STMicroelectronics:
1) A Sensor Control Block, where 3 bugs were found including issues with an APB interface and interrupt properties.
2) A Clock and Reset Manager block where the specification was unclear but formal analysis helped extract timing properties.
3) Point-to-point connectivity checks across a subsystem where 2564 connections were formally verified.
Overall, formal verification provided time savings over constrained random testing, helped address incomplete specifications, and improved quality.
The document discusses addressing the time/quality trade-off in view maintenance when querying linked data. It proposes optimizing maintenance to satisfy either quality constraints within the lowest response time or time constraints with the highest response quality. It describes summarizing a dataset to estimate query freshness and challenges with building individual summaries for each maintenance plan. The conclusion notes next steps are designing a more realistic dataset and comparing histogram and predicate multiplication approaches.
Dealing with the Three Horrible Problems in VerificationDVClub
1) There are three major problems in verification: specifying the properties to check, specifying the environment, and computational complexity of achieving high coverage.
2) The author proposes using "perspectives" to address these problems by focusing verification on specific aspects or classes of properties of a design using minimal formalization, rather than trying to tackle all issues at once.
3) This approach reduces complexity by omitting irrelevant details, targeting properties designers care about, and allowing verification to keep pace with frequent design changes.
This document discusses various techniques for software testing, including static testing, black box testing, and white box testing. Static testing involves non-execution techniques like reviews of documentation. Black box testing focuses on functional requirements without knowledge of internal structures, using techniques like equivalence partitioning, boundary value analysis, and state transition testing. White box testing uses internal program structure, exercising all independent paths and logical decisions using techniques like statement coverage, branch coverage, and condition coverage. The document also covers topics like cyclomatic complexity, control flow graphs, and experienced-based testing methods like error guessing and exploratory testing.
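To ground two of the black-box techniques named above, here is equivalence partitioning combined with boundary-value analysis for an invented rule, "a valid applicant age is 18 through 65":

```python
# Equivalence partitioning + boundary value analysis for the rule
# 'applicant age must be between 18 and 65 inclusive' (invented example).

def is_eligible(age):
    return 18 <= age <= 65

# One representative per partition, plus values at and around each boundary.
cases = {
    -1: False,   # invalid partition: below range (representative)
    17: False,   # just below lower boundary
    18: True,    # on lower boundary
    19: True,    # just above lower boundary
    40: True,    # valid partition representative
    64: True,    # just below upper boundary
    65: True,    # on upper boundary
    66: False,   # just above upper boundary
    120: False,  # invalid partition: above range (representative)
}

for age, expected in cases.items():
    assert is_eligible(age) == expected, f"age={age}"
print("all boundary/partition cases pass")
```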
White box testing is a software testing technique that tests internal coding and infrastructure. It involves writing test cases that exercise the paths in the code to help identify missing logic or errors. The document discusses various white box testing techniques like statement coverage, decision coverage, loop coverage, condition coverage, and path coverage. It also discusses performing white box testing at the unit, integration, and system levels. The session will cover white box testing at the unit level using control flow analysis techniques like building control flow graphs and analyzing possible paths.
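A small demonstration of why those coverage criteria are not interchangeable: one test can reach 100% statement coverage of the function below while never exercising the false outcome of its branch (function invented for the example):

```python
# Statement coverage can be 100% while branch coverage is not: a single
# test that takes the 'if' branch executes every statement, but the
# implicit else (condition false) outcome is never exercised.

def apply_discount(total, is_member):
    discount = 0
    if is_member:            # branch: True tested below, False initially not
        discount = total * 0.1
    return total - discount

assert apply_discount(100, True) == 90     # executes every statement

# Branch coverage also demands the false outcome. Nothing is wrong here,
# but in buggier code the untested path is exactly where defects hide:
assert apply_discount(100, False) == 100   # the previously missing case
print("both branch outcomes now covered")
```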
This document contains an agenda for a presentation on verification topics including basics, challenges, technologies, strategies, methodologies, and skills needed for corporate jobs. It also includes details about the presenter such as their name, role at Mentor Graphics, contact information, and background. The document dives into various aspects of verification like simulation, testbenches, formal verification, and limitations of simulation.
Basic Engineering Design (Part 6): Test and EvaluateDenise Wilson
The document describes the process of testing and evaluating components in the engineering design cycle. It emphasizes beginning with testing critical components, like sensors, in isolated and controlled environments to characterize performance before moving to more complex system-level testing. Testing should progress from controlled laboratory settings to realistic operating environments to verify functionality. Both critical and supporting components require testing to validate they meet design specifications.
This document discusses verification and validation (V&V) and developing a V&V plan using model-based systems engineering. It explains that V&V activities should occur early in the lifecycle during requirements analysis and system design. It also discusses preparing for V&V by developing an ontology, defining verifiable requirements, and creating a V&V plan. The document shows how the LML schema can be extended to support V&V and describes characteristics of good requirements that make them verifiable. Finally, it demonstrates how to develop a test plan and test cases using MBSE and simulate test execution.
Unit 2 covers white box testing techniques including control flow testing and data flow testing. Control flow testing aims to execute all statements, branches, and paths in the code. Different coverage criteria like statement coverage and branch coverage are discussed. Data flow testing checks for data flow anomalies like variables being defined but not used or used but not defined. A data flow graph example is provided to illustrate data flow terminologies like all-c-uses criterion. Advantages of white box testing include thorough testing of all code paths while disadvantages include complexity, time consumption, and requiring specialized resources.
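A crude sketch of the data-flow idea: walk the program's syntax tree and flag variables that are defined but never used, one of the anomalies mentioned above. The analyzed snippet is invented, and real data-flow testing tracks def-use pairs along execution paths rather than over whole functions:

```python
# Crude data-flow check: report variables assigned but never read.
import ast

snippet = """
def f(x):
    unused = x * 2      # defined, never read: anomaly
    y = x + 1
    return y
"""

tree = ast.parse(snippet)
defined, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            defined.add(node.id)    # a definition site
        else:
            used.add(node.id)       # a use site

print("defined but never used:", defined - used)   # -> {'unused'}
```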
Approximate Continuous Query Answering Over Streams and Dynamic Linked Data SetsSoheila Dehghanzadeh
To perform complex tasks, RDF Stream Processing Web applications evaluate continuous queries over streams and quasi-static (background) data. While the former are pushed into the application, the latter are continuously retrieved from the sources. As the background data grow in volume and become distributed over the Web, the cost of retrieving them increases and applications become unresponsive.
In this paper, we address the problem of optimizing the evaluation of these queries by leveraging local views on background data. Local views enhance performance, but require maintenance processes, because changes in the background data sources are not automatically reflected in the application.
We propose a two-step query-driven maintenance process to maintain the local view: it exploits information from the query (e.g., the sliding window definition and the current window content) to maintain the local view based on user-defined Quality of Service constraints.
Experimental evaluation shows the effectiveness of the approach.
Prof. Zhihua Wang, Tsinghua University, Beijing, China chiportal
This document discusses the design considerations for wireless transceivers used in implantable medical devices (IMDs). It covers topics such as frequency band selection, power requirements, antenna design challenges due to the human body environment, and the need for both high burst data rates and long-term low data rate connections. The goal is to discuss the technical challenges in developing efficient, reliable wireless communication systems for implantable medical applications.
Prof. Steve Furber, University of Manchester, Principal Designer of the BBC M...chiportal
The document discusses progress towards developing intelligent machines, including deep learning networks that have transformed machine learning. It describes the Human Brain Project, a €1 billion EU initiative to simulate the human brain through building supercomputers like SpiNNaker with hundreds of thousands of processor cores. While general human-level artificial intelligence has not been achieved, machines are beginning to sense and understand their environment, like driverless cars, and understanding the brain could further accelerate this progress and its consequences.
Prof. Steve Furber, University of Manchester, Principal Designer of the BBC M...chiportal
This document summarizes the SpiNNaker project, which aims to build a massively parallel supercomputer inspired by the brain's architecture. It discusses how SpiNNaker represents over 65 years of progress in computing efficiency. The SpiNNaker architecture uses low-power ARM processors and multicast routing to enable modeling large networks of neurons in real-time, representing up to 1% of the human brain. Recent SpiNNaker machines constructed for the Human Brain Project include a 500,000-core system that can simulate 500 million neurons and 500 billion synapses.
The document discusses handling memory accesses for big data workloads. It proposes using an architecture called a "funnel" to more efficiently process "non-temporal" or "read-once" memory accesses that exhibit no data reuse. The funnel would be placed close to data storage to bypass moving all data to DRAM, reducing bandwidth bottlenecks and energy wasted on unnecessary data movement. It provides analytical models showing the funnel can improve performance and energy efficiency by focusing expensive DRAM accesses only on data exhibiting temporal locality. Open questions remain around software models, shared data handling, and hardware implementation of computational capabilities at the funnel.
(1) Faraday provides an ESL SystemC model-based virtual platform service to help with early software development. (2) The virtual platform allows software development to begin earlier compared to traditional design flows that rely on hardware prototypes. (3) Faraday has developed several virtual platforms using ARM CPU models and IP models from Faraday and Synopsys to help customers with software boot, driver development, and application development.
Prof. Danny Raz, Director, Bell Labs Israel, Nokia chiportal
SDN and NFV aim to revolutionize traditional network architecture by decoupling the data and control planes and implementing network functions through software on commercial off-the-shelf servers. While this promises benefits like increased flexibility and reduced costs, challenges remain around performance, reliability, and complexity of operation. Realizing the full potential of SDN and NFV will depend on overcoming technical hurdles in efficient implementation and hardware/software support.
Marco Casale-Rossi, Product Mktg. Manager, Synopsyschiportal
This document discusses trends and challenges in physical chip design over the next decade. It notes that while Moore's Law of transistor density doubling every two years remains intact, the cost aspect may be under threat. Emerging technologies below 10nm feature complex multi-patterning and 3D structures. Routing is increasingly difficult due to shrinking metal pitches. Interconnect delay dominates total delay, with resistance varying over 1000x between metal layers. Heterogeneous integration and 2.5D/3D packaging will require new design approaches handling non-Manhattan routing. Physical design innovation will be critical to enable emerging nodes and differentiate mature nodes.
This document describes a method for simulating electrostatic discharge (ESD) protection circuits using empirical models of ESD devices. The method combines regular SPICE models of ESD transistors with curves based on transmission line pulsing (TLP) measurements. The models trigger bipolar behavior based on simulated terminal voltages and TLP data. Simulation results matched TLP curves and demonstrated checking ESD current and voltage clamping. The method allows verifying ESD protection in complex chip designs.
Eddy Kvetny, System Engineering Group Leader, Intelchiportal
This document discusses approaches to offloading processing tasks from a host or AP to improve power efficiency. It describes traditional offloading through embedding dedicated hardware as well as limitations. An approach called "refined offloading" is proposed to move tasks out of the main OS environment through embedding, virtualization, a hybrid approach, or an isolated execution environment. Key criteria for choosing the best approach include power budget, complexity, memory needs, event rate, platform support, cost, and functional scalability.
Dr. John Bainbridge, Principal Application Architect, NetSpeed chiportal
Dr. John Bainbridge presented on NetSpeed's configurable, coherent system-on-chip interconnect for heterogeneous multiprocessing and storage applications. The interconnect provides flexibility to customize the cache hierarchy and optimize latency through physically distributed coherency controllers. It also scales coherency bandwidth through address-sliced coherency controllers and uses advanced directory techniques to avoid address conflicts and reduce dynamic power.
Xavier van Ruymbeke, App. Engineer, Arterischiportal
This document discusses enhancing data reliability in data center flash storage controllers through network-on-chip (NoC) interconnect data protection features. It describes the increasing complexity of flash controller designs, which raises the probability of on-chip errors. Implementing data protection directly in the NoC interconnect using techniques like parity checking, error correction codes, and logic duplication can help make the system more reliable compared to software-only solutions. The document provides examples of different data protection techniques that can be applied to transaction payloads, packet headers, and ARM Cortex cores to safeguard data as it travels across the on-chip network.
The document discusses how big data tools can be used to simplify debugging by extracting data from large simulation log files and presenting it graphically. Specifically, it proposes indexing simulation log files using Lucene to enable fast searching and extraction of relevant records. This would allow engineers to quickly find error messages and events within log files that can reach several gigabytes in size. Graphical representation of the log file data is presented as a more intuitive way to analyze logs and trace problems compared to navigating raw text. The goal is to harness big data techniques to shorten debugging time and increase productivity for verifying complex chip designs.
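The talk uses Lucene; the underlying idea, index once and then answer keyword queries without rescanning gigabytes of text, fits in a few lines of dependency-free Python (log records invented):

```python
# Build an inverted index over simulation log records so that keyword
# lookups (e.g. every 'ERROR' line) don't rescan the whole file.
from collections import defaultdict

log_lines = [
    "10ns INFO  reset released",
    "55ns ERROR fifo overflow in dma_engine",
    "60ns INFO  packet sent",
    "90ns ERROR parity mismatch on bus",
]

index = defaultdict(list)            # token -> list of line numbers
for lineno, line in enumerate(log_lines):
    for token in line.lower().split():
        index[token].append(lineno)

def search(token):
    """Answer a keyword query from the index, not from the raw text."""
    return [log_lines[i] for i in index[token.lower()]]

print(search("ERROR"))    # both error records, found without a full scan
```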
This document discusses embedded systems design and hardware-software codesign. It describes why codesign is important to reduce time to market, achieve better design, and explore alternative designs while meeting constraints. Various codesign approaches are presented, including using bus functional models, instruction set simulators, and carbon models in simulation tools. The document focuses on the Proteus VSM tool for embedded systems design, describing its microcontroller and peripheral models, visual firmware design, and example applications. References for further information are provided at the end.
This document summarizes GUC's zero-defect methodology for automotive and other applications requiring high reliability. It discusses GUC's comprehensive reliability management approach that handles reliability at all stages from design to production. This includes techniques like design-for-reliability, design-for-manufacturing, design-for-testability, tight process control, outlier screening, and statistical testing to achieve a failure rate of less than 5 FITs and defects of less than 5-10 DPPM. The document also outlines GUC's use of process monitoring, design robustness, package selection, and other methods to manage process variations and ensure product reliability.
The document describes HEAT, a hardware-enabled algorithmic tester for validating 2.5D HBM solutions. HEAT allows for at-speed functional testing of an HBM test chip through traffic generation and single-cycle data integrity checks. It also enables performance measurement, power-aware design, minimal package I/O count, fallback chip booting, functional debugging, user interface debugging, and testing the logic die before assembly.
Gert Goossens,Sen. Director, ASIP Tools, Synopsyschiportal
This document discusses using an application-specific processor (ASIP) to accelerate Robust Header Compression (ROHC). It describes how the ASIP methodology was used to design a customized processor that significantly improved performance over a general purpose CPU. The ASIP achieved up to 87% faster cycle counts and up to 7.9x speedup for specific data processing compared to software implementations. In conclusion, the ASIP approach enabled both control and data processing to be accelerated like fixed hardware, but with the flexibility of a programmable processor.
Tuvia Liran, Director of VLSI, Nano Retinachiportal
Miniature power sources such as solid state batteries, super capacitors, and nuclear batteries are emerging technologies that can power devices for the Internet of Things and autonomous systems. Solid state batteries offer high charge density, safety, and long life in a miniature package, but have limited capacity. Super capacitors provide virtually unlimited cycling but have limited energy storage. Nuclear batteries using radioactive isotopes can power devices for 10-20 years but have low power output. Emerging technologies for miniature power sources will enable further implementation of autonomous wireless devices and sensors.
Sagar Kadam, Lead Software Engineer, Open-Siliconchiportal
The document discusses trust-based IoT security mechanisms for ARM-based systems of things. It covers IoT architecture and security threats. It proposes using a SHUBHAM FPGA platform with a Cortex-M4F and cryptographic IP to provide features like secure boot, firmware over-the-air updates, and data security for sensors. Implementing this security would require additional gates and memory but help protect against attacks.
Ronen Shtayer,Director of ASG Operations & PMO, NXP Semiconductorchiportal
The document discusses the road ahead for securely connected cars. It summarizes that NXP is a leader in automotive semiconductors, including communications processors, RF power transistors, and automotive safety. It outlines NXP's role in enabling innovations in areas like infotainment, secure car access, vehicle networking, safety, and advanced driver assistance. The document also discusses trends like seamless connectivity and advanced driver assistance systems. It focuses on the role of vehicle-to-everything communication and security in connecting cars to infrastructure and ensuring safety.
This document summarizes a presentation on a mm-wave low-power transceiver for wireless interconnects. Key points include:
- A 120 GHz transceiver was designed in 28nm CMOS to enable wireless interconnects with data rates up to 80 Gbps and power efficiency below 4 pJ/bit.
- The transceiver uses frequency multiplication, passive quadrature generation, and downconversion mixing. On-chip measurements showed a receiver noise figure below 12 dB and transmitter output power over 2 dBm across the band.
- Initial over-the-air tests at a distance of 26 cm achieved 15 Gbps without equalization using BPSK modulation, demonstrating the viability of wireless interconnects.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
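One concrete guardrail implied by the abstract, sketched under stated assumptions: whatever generates the markup (the generate() function below is a hypothetical stand-in for a model call, not a real API), parse and sanity-check the output before it enters the pipeline.

```python
# Guarding AI-generated markup: validate well-formedness and check that
# no content was lost before accepting the model's output.
import xml.etree.ElementTree as ET

def generate(plain_text):
    """Hypothetical stand-in for an LLM call that wraps text in markup."""
    return f"<para><emph>{plain_text}</emph></para>"

def accept_markup(plain_text):
    candidate = generate(plain_text)
    try:
        root = ET.fromstring(candidate)        # well-formedness check
    except ET.ParseError as err:
        raise ValueError(f"model produced invalid XML: {err}")
    assert "".join(root.itertext()) == plain_text   # no content lost
    return candidate

print(accept_markup("metric driven verification"))
```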
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
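Full RAG's internals are not spelled out in this summary; as a generic, heavily simplified retrieve-then-rerank skeleton (toy corpus, lexical-overlap scoring standing in for real embedding models, document age standing in for real-time context):

```python
# Generic retrieve-then-rerank skeleton. Corpus, scoring, and the
# recency-aware reranker are all stand-ins for production components.

corpus = [
    {"text": "user bought hiking boots last week", "age_days": 7},
    {"text": "user browsed tents yesterday",       "age_days": 1},
    {"text": "user returned a kayak last year",    "age_days": 400},
]

def score(query, doc):
    """Crude lexical overlap standing in for embedding similarity."""
    q, d = set(query.lower().split()), set(doc["text"].lower().split())
    return len(q & d) / (len(q) or 1)

def rerank(query, docs):
    """Enrichment step: among equally relevant docs, prefer fresher ones."""
    return sorted(docs, key=lambda doc: (-score(query, doc), doc["age_days"]))

query = "tents and hiking gear for user"
retrieved = sorted(corpus, key=lambda d: -score(query, d))[:2]   # retrieval
context = rerank(query, retrieved)                               # reranking
prompt = "Recommend products given: " + "; ".join(d["text"] for d in context)
print(prompt)    # this augmented prompt then goes to the generator model
```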
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
2. May 1, 2013
OneSpin Solutions - Who we are…
• OneSpin provides enduring solutions that enable the most thorough and easiest-to-use logic verification available.
• OneSpin Solutions was spun out of Infineon in 2005; the company is a completely independent entity owned by Azini Capital and OneSpin's employees.
• Technologies include assertion-based verification (ABV) and equivalence checking (EC); the original technologies have been augmented and enhanced to provide the most thorough formal capabilities available.
• Production-proven tools used in thousands of designs.
• Engineering HQ based in Munich, Germany; sales offices in the US, France, Germany, Israel, Japan, and Turkey.
3. May 1, 2013
Formal Verification with OneSpin 360

Unique productivity features:
• Unique push-button observation coverage analysis for verification progress
• Faster assertion development in timing-diagram style using transaction-level assertions
• 100% functional coverage using unique gap-free verification
• Unique structural assertion debugger and active value/driver tracing through RTL
• Incremental compilation of assertions for quick turnaround
• Automatic clock and reset detection for easy design setup

Large selection of push-button design verification solutions included:
• HDL linting
• Structural assertion synthesis
• Coverage closure for simulation
• SoC connectivity verification
• Register verification
• Formal scoreboarding
• Protocol verification IP
• X-propagation verification

Push-button equivalence checking:
• Verification of logic, sequential, and power optimizations
• Supporting RTL and gate level for FPGA and ASIC
• Integrated with major vendors and many synthesis flows
4. May 1, 2013
Outline
Importance of Progress Metrics
Coverage Metrics
Control vs. Observation Coverage
Practical Observation Coverage
Combining Control and Observation Coverage
Examples
Summary
5. May 1, 2013
Why do we need a metric?

Verification environment (simplified):
[Figure: tests/scenarios drive the DUV through a bus functional model (BFM) in simulation, or through constraints and a formal engine (stimulus/scenario exclusion) in formal verification; checkers (Check #1..#4) observe the DUV in both flows.]

Verification process (Plan-Do-Check-Act):
• Plan: verification planning
• Do: write tests/properties; verify and fix the DUV, tests, and properties
• Check: are we done, and are we (still) making progress?
• Act: adapt the plan, or get ready for tape-out

A progress metric is a key component of the verification flow: it answers the "Check" step of the PDCA cycle.
6. May 1, 2013
Standard Formal ABV

Progress of formal verification?
• Unclear how many assertions to write and where to put them
• Unclear verification progress
• Unclear how formal verification affects overall verification quality

Integration with simulation?
• Weak integration into the verification flow and sign-off
• Additional effort on top of the existing simulation effort
• Formal used as a "point tool" for verifying specific aspects (e.g., hard-to-test corner cases)

The result is low acceptance of formal ABV. We need a practical progress metric!
7. May 1, 2013
Progress Metric & Coverage Taxonomy

[Figure: taxonomy of progress metrics. Anecdotal: bug rate (bugs fixed over time, approaching 100%). Analytical: coverage. Coverage splits into structural and functional. Structural coverage targets implementation artefacts: code coverage (line/statement/block, expression/branch/condition, FSM, toggle, latch) and circuit coverage. Functional coverage targets scenarios (verification plan) and assertion completeness (gap-free analysis).]

A progress metric should answer:
• Have I written enough tests/properties?
• How much has been verified?
• Where are the gaps in my verification?
• Are we done and ready for tape-out?
8. May 1, 2013
Observation & Control Coverage

[Figure: coverage metrics link the verification plan, stimulus generation, checkers/assertions, the DUV, and the coverage report, asking: How good are my test vectors and constraints? How good are my checkers and assertions? How much of my DUV is verified? When is my verification finished?]

Control / simulation coverage:
• Focused on quality of stimuli
• But how to judge the quality of checkers?

Observation coverage:
• Focused on quality of checkers
• Exposes unverified DUV parts
9. May 1, 2013
Been there? Done what? Example: Statement Coverage

Control coverage ("Been there!"):
• Has the statement been reached?
• Idea:
  – If a statement has not been reached during verification, it can't break a check.
  – If a statement has been reached, would some check fail?
• Can measure quality of stimuli.

Observation coverage ("Done that!"):
• Has the statement been verified?
• Idea: if a statement is modified, some check should fail. (But would some check fail if the statement cannot be reached?)
• Can measure quality of checkers.

Active statement, as reached by the stimuli:

  case (state)
    …
    burst:
      if (cancel_i)
        done_o <= 1
    …

Mutated statement; the location is observed if some check now fails:

  case (state)
    …
    burst:
      if (cancel_i)
        done_o <= X
    …
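To make the two questions concrete, here is a minimal, self-contained completion of the slide's fragment; the module name, state encoding, and assertion are my own assumptions, not taken from the deck. Control coverage asks whether the burst branch is ever activated; observation coverage asks whether the check would fail if the assignment were mutated.

  // Hypothetical completion of the slide's fragment, for illustration only.
  module burst_ctrl (
    input  logic       clk,
    input  logic       reset,
    input  logic       cancel_i,
    input  logic [1:0] state,      // assumption: 2'd1 encodes the 'burst' state
    output logic       done_o
  );
    localparam logic [1:0] BURST = 2'd1;

    always_ff @(posedge clk or posedge reset)
      if (reset)
        done_o <= 1'b0;
      else if (state == BURST && cancel_i)
        done_o <= 1'b1;            // mutation target: the "done_o <= X" of the slide

    // This check observes the assignment above: mutating its right-hand
    // side makes the proof fail, provided the branch is reachable at all.
    A_cancel: assert property (@(posedge clk) disable iff (reset)
                               (state == BURST && cancel_i) |=> done_o);
  endmodule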
10. May 1, 2013
Control Coverage for Formal

No explicit stimulus generation for formal:
• Formal considers all possible input stimuli by default (exhaustive verification).

However, control coverage is still an issue, because:
• Formal requires constraints excluding illegal environment behavior
• Constraints may accidentally be too strong and exclude vital functionality (see the sketch below)
• Constrained functionality can neither be controlled nor observed

(Structural) control coverage quantifies the degree to which locations of a design have been activated during verification. It classifies each location as controllable (reached, unreached, or unknown) or uncontrollable (dead or constrained).
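As a concrete illustration of the over-constraining pitfall, the sketch below is entirely my own (req is a hypothetical environment signal): the assumption is stronger than intended because it also forbids the legal idle cycle, so any DUV logic handling the idle case becomes constrained and can no longer be controlled or observed.

  module overconstrain_example (
    input logic       clk,
    input logic [1:0] req   // hypothetical request lines from the environment
  );
    // Too strong: $onehot(req) also excludes the legal idle case req == 2'b00,
    // so idle-handling logic in the DUV becomes unreachable ("constrained").
    ASM_req: assume property (@(posedge clk) $onehot(req));

    // Intended constraint: at most one request per cycle, idle allowed.
    // ASM_req_ok: assume property (@(posedge clk) $onehot0(req));
  endmodule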
11. May 1, 2013
(Formal) Observation Coverage

Verification tries to answer the question:
• "Does a design satisfy a specification?"
(Observation) coverage tries to answer the question:
• "What causes a design to satisfy a specification?"

Causality: "When do we say A is a cause of B?"
• Common approach, counterfactual causality: "A is a cause of B if, had A not happened, then B would not have happened."
• Applied here: a code line is a cause of a design satisfying an assertion if, had the line behaved differently, the assertion could have failed. (Source: H. Chockler, IBM)

(Structural) observation coverage quantifies the degree to which locations of a design have been responsible for making checks pass. It classifies each location as observable (observed, unobserved, or unknown) or unobservable.
12. May 1, 2013
Practical Observation Coverage

Mutation coverage:
• Particular changes depending on an error model, usually static
• For example, replacing "a <= b;" with "a <= 1'b1;"
• Problems:
  • Can lead to vacuously holding assertions
  • Requires several modifications at each location, hence high run time

"Quantify MDV" metric:
• Uses abstraction by free variables, allowing dynamic behavioral change
• For example, replacing "a <= b;" with "a <= free_b;" (see the sketch below)
• Advantages:
  • No unintended vacuity
  • One modification per location leads to faster results
  • Multiple locations can be checked at the same time to prove "unobserved"
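The following sketch shows one way to picture the free-variable replacement by hand; the module and the tracking assertion are illustrative assumptions, and OneSpin's tool performs this instrumentation internally rather than requiring edited source. free_b is left undriven at the formal top level, so the proof engine treats it as an unconstrained value on every cycle; if a previously proven check now fails, the mutated location counts as observed.

  module mdv_mutant_sketch (
    input  logic clk,
    input  logic reset,
    input  logic b,
    input  logic free_b,      // free variable standing in for 'b'
    output logic a
  );
    always_ff @(posedge clk or posedge reset)
      if (reset) a <= 1'b0;
      else       a <= free_b; // original location: a <= b;

    // Hypothetical check over the original behavior. With the free variable
    // in place it fails, so this location is observed by the check.
    A_track: assert property (@(posedge clk) disable iff (reset)
                              1'b1 |=> a == $past(b));
  endmodule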
13. May 1, 2013
Formal Observation Coverage Example

  module select1(i, a, b, c, z, clk, reset);
    input clk;
    input reset;
    input [2:0] i;
    input a;
    input b;
    input c;
    output reg z;

    always @(posedge clk or posedge reset)
      if (reset)
        z <= 1'b0;            // L1 (reset case)
      else
        begin
          case (i)
            3'b001: z <= a;     // L2
            3'b010: z <= b;     // L3
            3'b100: z <= c;     // L4
            default: z <= 1'b1; // L5
          endcase
        end

    // if there is no reset, then 'a' is stored in 'z' if 'i' is 3'b001
    A: assert property
      ( @(posedge clk)
        disable iff (reset)
        i == 3'b001 |=> z == $past(a)
      );
  endmodule

Q: Which assignment locations Lx in design M are observed by the proven assertion A?

1. Modify each location L1..L5 of M, producing mutants M1..M5. Mutant Mk replaces the right-hand side of the assignment at Lk with a free input <input>:
• M1: z <= <input>; in the reset branch (L1)
• M2: 3'b001: z <= <input>; (L2)
• M3: 3'b010: z <= <input>; (L3)
• M4: 3'b100: z <= <input>; (L4)
• M5: default: z <= <input>; (L5)

2. Re-check property A for each of M1..M5:
• Assertion A holds on M1: L1 not observed
• Assertion A fails on M2: L2 is observed
• Assertion A holds on M3: L3 not observed
• Assertion A holds on M4: L4 not observed
• Assertion A holds on M5: L5 not observed

A: The locations Lx for which A fails after replacing the assignment with a free input; here, only L2 is observed.
14. May 1, 2013
Merging Observation and Control

Observation Coverage    Control Coverage    Merged Result
observed                reached (1)         covered
unknown                 reached             reached
unobserved              reached             unobserved
unknown                 unreached           unreached
unobserved              unreached           uncovered
unobservable (2)        constrained         constrained
unobservable (2)        dead                dead

(1) If a location is observed, it is also reached and thus controllable.
(2) If a location is uncontrollable, it is also unobservable.

[Figure: the observation classification (observable: observed, unobserved, unknown; unobservable) next to the control classification (controllable: reached, unreached, unknown; uncontrollable: dead, constrained), with arrows marking the two implications above.]
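For illustration, the merge table can be written down as executable code. The sketch below is mine; the enum and function names are not tool output or a tool API. It encodes exactly the seven rows above, with the two implication footnotes as comments.

  // Hedged sketch: the merge table expressed as a SystemVerilog function.
  package quantify_merge_pkg;
    typedef enum { OBS_OBSERVED, OBS_UNKNOWN, OBS_UNOBSERVED, OBS_UNOBSERVABLE } obs_e;
    typedef enum { CTL_REACHED, CTL_UNREACHED, CTL_CONSTRAINED, CTL_DEAD } ctl_e;
    typedef enum { MRG_COVERED, MRG_REACHED, MRG_UNOBSERVED, MRG_UNREACHED,
                   MRG_UNCOVERED, MRG_CONSTRAINED, MRG_DEAD } mrg_e;

    function automatic mrg_e merge_location (obs_e o, ctl_e c);
      case (c)
        CTL_CONSTRAINED: return MRG_CONSTRAINED; // uncontrollable => unobservable
        CTL_DEAD:        return MRG_DEAD;        // uncontrollable => unobservable
        CTL_UNREACHED:   return (o == OBS_UNOBSERVED) ? MRG_UNCOVERED
                                                      : MRG_UNREACHED;
        default: // CTL_REACHED
          case (o)
            OBS_OBSERVED:   return MRG_COVERED;  // observed => also reached
            OBS_UNOBSERVED: return MRG_UNOBSERVED;
            default:        return MRG_REACHED;  // observation still unknown
          endcase
      endcase
    endfunction
  endpackage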
15. May 1, 2013
Interfacing with UCIS

UCIS:
• Coverage standard evolved from Mentor Graphics' UCDB
• Supports only control coverage, not observation coverage

How to interface?
• Use user-defined structures to store Quantify MDV results
• Use result merging to enhance standard UCIS coverage

Result merging from UCIS to Quantify MDV:
• Covered bins -> reached locations

Result merging from Quantify MDV to UCIS:
• Dead/constrained -> coverage excludes
• Reached locations -> covered bins
• Covered locations -> covered bins

Conclusions on UCIS interfacing:
• Merging results with UCIS is possible and useful today
• Direct support for observation coverage in UCIS is desirable
16. May 1, 2013
Simple Quantify MDV Example

Mixed control and observation coverage results indicate:
• verification holes, pointing to missing assertions
• constrained code, pointing to over-constraining

[Figure: annotated source listing plus management overview, highlighting a verification hole, verified code, constrained code, and dead code.]
17. May 1, 2013
Industry Examples
Design           LOCs    #Assertions   #Locations   Runtime
FIFO             321     30            21           100s
FSM-DDR2-Read    839     6             93           106s
vCore-Processor  295     8             86           204s
SQRT             383     2             35           257s
IFX-Aurix-1 *)   25563   85            2316         4d
IFX-Aurix-2      27374   157           1993         5d
IFX-Aurix-3      57253   253           5309         7d **)

*) "Quantification of Formal Properties for Productive Automotive Microcontroller Verification", Holger Busch, DVCon, San Jose, February 26, 2013
**) interrupted at 80% completion
18. May 1, 2013
Summary

Coverage is an important aspect of verification.

New metric introduced: "Quantify MDV"
• Combines observation and control coverage
• Agnostic to assertion style, assertion language, design style, and design language
• Scales to large, industrially relevant designs
• Useful both as quick feedback on formal verification quality and as a criterion for (formal) verification closure
• Can be combined with simulation-based verification and metrics
20. May 1, 2013
What about redundancy?

Redundant code cannot be observed!
• Redundant code will be marked uncovered if not treated specially.

What are sources of redundancy?
• Redundant logic not driving any outputs
• Safety logic meant to implement fail-safe devices (see the sketch below)

How to mitigate?
• Identify redundant code and mark it
• Remove redundant code before running coverage
• Selectively activate/de-activate redundant parts
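The sketch below (my own example, not from the deck) shows why safety redundancy defeats observation coverage: in the fault-free formal model the shadow comparator provably mirrors the main one, so unless a check explicitly watches mismatch_o, the shadow logic can never cause any assertion to fail and would be reported as uncovered.

  module safety_dup (
    input  logic       clk,
    input  logic       reset,
    input  logic [7:0] a, b,
    output logic       eq_o,
    output logic       mismatch_o   // intended to fire only under hardware faults
  );
    logic eq_main, eq_shadow;

    always_ff @(posedge clk or posedge reset)
      if (reset) begin
        eq_main   <= 1'b0;
        eq_shadow <= 1'b0;
      end else begin
        eq_main   <= (a == b);
        eq_shadow <= (a == b);      // redundant fail-safe copy
      end

    assign eq_o       = eq_main;
    assign mismatch_o = eq_main ^ eq_shadow; // constant 0 in the fault-free model
  endmodule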
21. May 1, 2013
Functional vs. Structural

Coverage Type   Functional                            Structural
Control         How much of the specified behavior    How much of the DUV implementation
                has been activated?                   has been activated?
Observation     How much of the specified behavior    How much of the DUV implementation
                has been verified?                    has been verified?

Functional vs. structural:
• Functional relates to the specification
  – Measures completeness of requirements verification
  – Can identify gaps in the verification plan
• Structural relates to the DUV
  – Measures completeness of design-intent verification
  – Can identify unverified parts of the DUV

Control vs. observation:
• Control relates to activation
  – Measures quality of stimuli
  – Can identify unreachable code / over-constraining
• Observation relates to causality
  – Measures quality of checkers
  – Can identify verification gaps

Implications:
• Control coverage doesn't imply any observation coverage.
• Observation coverage doesn't imply any functional coverage.
• 100% observation coverage implies 100% control coverage.
• 100% functional coverage plus all assertions proven implies 100% observation coverage (if the DUV contains no dead code, no constrained code, and no redundant code).
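To make the functional/structural distinction concrete, the sketch below states a check and a cover goal purely against the specification; the signal names req_i/gnt_o and the two-cycle bound are hypothetical. Structural coverage would then report which implementation locations such a proof actually observes.

  module spec_level_checks (
    input logic clk,
    input logic reset,
    input logic req_i,
    input logic gnt_o
  );
    // Functional check, phrased against the spec, independent of the RTL:
    // "a request is granted within two cycles".
    A_serve: assert property (@(posedge clk) disable iff (reset)
                              req_i |-> ##[1:2] gnt_o);

    // Functional (scenario) coverage: has the specified back-to-back
    // request scenario actually been exercised?
    C_b2b: cover property (@(posedge clk) disable iff (reset)
                           req_i ##1 req_i);
  endmodule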
22. May 1, 2013
Verification Environment with Exhaustive Coverage Analysis

[Figure: the verification environment (tests/scenarios, checkers, DUV) is derived from the spec via the verification plan; coverage analysis combines control coverage, observation coverage, and functional coverage, each with a functional and a structural view.]

Together, the three coverage views answer:
• Have I written enough stimuli?
• Which parts of my DUV have been exercised?
• Did I write enough checks?
• Which parts of my DUV have been checked?
• Are all specified functions implemented?
• Are all specified functions verified?
23. May 1, 2013
What does block-level coverage mean in the larger context?

[Figure: high-level/RTL design and verification flow. Design refinement proceeds from system-level design through architecture-level design to block-level design (RTL); verification closure proceeds through block-level, sub-system-level, and system-level simulation. Formal ABV with coverage analysis contributes block-level results to a UCDB, enabling aggregation of module/block-level coverage results with the higher-level context.]