This document summarizes the verification methodology landscape. It discusses languages, methodologies, tools and standards used for hardware verification including OVM, VMM, and eRM. It also covers topics like interoperability between methodologies and convergence of approaches.
The document provides an overview of the engineering services offered by Sonalysts including systems engineering, RF engineering, reliability testing, and automatic test equipment. It describes capabilities in areas such as specification development, environmental testing, mechanical and electrical engineering, prototype and production engineering, and reliability testing. Application areas mentioned include acoustic sensor heads, antennas, power supplies, and integrated submarine imaging systems.
The document discusses Altreonic NV's approach to safety engineering. Altreonic views safety as an emergent property of a quality engineering process rather than simply the prevention of harm, and its methodology focuses on developing simple, elegant system architectures through a formalized engineering process involving requirements capture, specification, modeling, verification, and certification. It also outlines some of the challenges in ensuring safety for complex systems with multiple stakeholders and technical domains.
The document discusses developing a successful Selenium automation program by overcoming impediments such as unrealistic objectives and high startup costs, addressing challenges like managing object references and coding standards, and implementing solutions like a centralized object repository and using design patterns to develop maintainable test scripts. It also covers the need for robust reporting and metrics to measure the effectiveness of the automation program.
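A minimal sketch of the centralized-object-repository and page-object ideas, using the Python Selenium bindings; the locators, login page, and URL are illustrative and not taken from the document:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Centralized object repository: locators live in one place, so a UI change
    # means one edit here rather than edits scattered across every test script.
    OBJECT_REPOSITORY = {
        "login.username": (By.ID, "username"),
        "login.password": (By.ID, "password"),
        "login.submit":   (By.CSS_SELECTOR, "button[type='submit']"),
    }

    class LoginPage:
        """Page object: exposes user actions, hides locator details from tests."""
        def __init__(self, driver):
            self.driver = driver

        def _find(self, key):
            by, value = OBJECT_REPOSITORY[key]
            return self.driver.find_element(by, value)

        def login(self, user, password):
            self._find("login.username").send_keys(user)
            self._find("login.password").send_keys(password)
            self._find("login.submit").click()

    if __name__ == "__main__":
        driver = webdriver.Chrome()                # any WebDriver implementation works
        driver.get("https://example.com/login")    # placeholder URL
        LoginPage(driver).login("demo", "secret")
        driver.quit()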
The document describes Cogent ATE's Leopard A Series Analog and Mixed-Signal Test System. It aims to provide low-cost, high-performance multi-site testing through its Floating Quad-Site Testing architecture. This allows independent testing of up to 4 devices simultaneously while avoiding interference through electrically isolated test sites. The system supports a wide range of analog and mixed-signal devices and can scale from single-site to multi-site testing through its Automatic Test Replication technology.
The document discusses optimization techniques for embedded systems to improve memory usage and performance. It covers profiling tools, code optimization, RAM optimization, and techniques like reducing memory footprint, resolving bottlenecks, and code refactoring. The document provides examples measuring tasks on a microcontroller and modifying the code to improve efficiency through algorithm changes, compiler optimizations, and assembly optimizations.
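As a language-neutral illustration of the algorithm-level changes mentioned (not the microcontroller code from the document), the sketch below trades a small precomputed lookup table for a per-sample transcendental call, a common way to exchange a little memory for speed:

    import math

    # Built once at start-up; 256 entries of one period of a sine wave.
    SINE_TABLE = [math.sin(2 * math.pi * i / 256) for i in range(256)]

    def sine_slow(phase):
        # Straightforward version: recomputes sin() on every call.
        return math.sin(2 * math.pi * phase / 256)

    def sine_fast(phase):
        # Optimized version: one table lookup, no floating-point sin() at run time.
        return SINE_TABLE[phase & 0xFF]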
The document discusses enhanced equipment quality assurance (EEQA) and equipment health monitoring (EHM) methods to ensure reliable semiconductor manufacturing equipment. It provides:
1) An overview of the EEQA and EHM projects, including goals to reduce equipment variability and efficiently track performance.
2) Details on EEQA approaches like collecting equipment data to validate functional capabilities and monitor variations.
3) The 2011 EHM project timeline and objectives to demonstrate fingerprinting effectiveness using an equipment data model.
4) An equipment fingerprinting pilot to refine use cases and demonstrate the fingerprinting process using real manufacturing data.
Continuous delivery (CD) allows software updates to be released frequently by having each code change trigger automated builds, tests, and deployments. This document discusses best practices for implementing CD for Alfresco solutions, including using consistent project templates built with Maven or Gradle, packaging modules as AMPs, externalizing configurations, supporting multi-module deployments, using deployment frameworks like Chef, and deploying to test instances on private clouds. Common pitfalls to avoid are unrealistic time planning, lack of involvement from system admins, and developers not understanding the importance of green builds.
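A heavily simplified sketch of the change-triggered pipeline idea: every commit runs build, test, and deploy steps in order and stops on the first failure. The Maven goals, the Alfresco MMT invocation, and the paths are placeholders, not commands taken from the document:

    import subprocess, sys

    PIPELINE = [
        ["mvn", "clean", "package"],                       # build the AMP module
        ["mvn", "verify"],                                 # run automated tests
        ["java", "-jar", "alfresco-mmt.jar", "install",    # deploy to a test WAR
         "target/my-module.amp", "/opt/test/alfresco.war"],
    ]

    def run_pipeline():
        for step in PIPELINE:
            print("running:", " ".join(step))
            if subprocess.call(step) != 0:
                sys.exit("pipeline failed at: " + " ".join(step))  # keep the build red
        print("green build: ready to promote")

    if __name__ == "__main__":
        run_pipeline()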
The document discusses STMicroelectronics' deployment of functional qualification methodologies using Certitude mutation analysis. It outlines ST's initial engagement with Certess in 2004 and how they have expanded usage of the technology to now cover 80% of ST's IPs. The document also provides details on ST's functional qualification methodology, sharing of best practices, detection strategies used, and two case studies on measuring quality of third-party IPs and detecting issues in a video codec design.
This document provides guidance on how to estimate the size, effort, cost, and schedule of a software project using a COCOMO II-based estimation toolkit. The key steps include:
1) Estimating the project size in function points and source lines of code based on features and components.
2) Calculating effort based on the size, selected scaling factors, and effort multipliers.
3) Distributing the effort across project phases and calculating costs using selected rate tiers.
4) Generating a project plan and schedule based on resources and potential delays to determine if the estimated duration meets client needs.
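The arithmetic behind steps 1)-4) follows the standard COCOMO II form; the sketch below uses the published COCOMO II.2000 equations with illustrative inputs (the toolkit's own calibration, rate tiers, and plan generation are not reproduced here, and the SCED driver's special handling is omitted for simplicity):

    # COCOMO II.2000 effort and schedule equations with the published default
    # calibration constants; the scale-factor ratings and effort multipliers
    # passed in below are illustrative, not the toolkit's values.
    A, B, C, D = 2.94, 0.91, 3.67, 0.28

    def cocomo_ii(ksloc, scale_factors, effort_multipliers):
        E = B + 0.01 * sum(scale_factors)      # exponent from the five scale factors
        effort_pm = A * (ksloc ** E)           # nominal effort in person-months
        for em in effort_multipliers:          # apply the cost-driver multipliers
            effort_pm *= em
        F = D + 0.2 * (E - B)                  # schedule exponent
        tdev_months = C * (effort_pm ** F)     # calendar schedule in months
        return effort_pm, tdev_months

    # Example: 40 KSLOC, five scale factors at illustrative ratings, two cost drivers.
    effort, schedule = cocomo_ii(40, [3.72] * 5, [1.10, 0.95])
    print(f"effort: {effort:.1f} person-months, schedule: {schedule:.1f} months")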
Benetel provides wireless product design and testing services for companies in the wireless industry. It designs wireless products and systems and develops automated test systems. It has expertise in various wireless technologies and standards. Benetel partners with other test equipment companies and has experience delivering test solutions to original equipment manufacturers and contract manufacturers globally.
High Availability and Disaster Recovery with Novell Sentinel Log Manager (Novell)
Novell Sentinel Log Manager can be implemented in a high availability cluster using the SUSE Linux Enterprise 11 High Availability Extension. This approach, combined with Sentinel Log Manager backup scripts, can also provide a disaster recovery solution.
This session explains the architecture of the high availability and disaster recovery solution available with Sentinel Log Manager, as well as implementation details.
Probe Card Cost Drivers from Architecture to Zero Defects - IEEE Semiconductor Wafer Test Workshop 2011 presentation by Ira Feldman (www.hightechbizdev.com)
Fielding Systems-of-Systems, Riding the agile sw tiger (Sergey Tozik)
The document describes a presentation about challenges integrating software-intensive systems-of-systems (SoS) and a proposed methodology. The presentation discusses the complexity of SoS, challenges with agile development and unexpected dependencies. It introduces the MaSK methodology using exploratory modeling, knowledge generation techniques, and a knowledge factory workflow to systematically gather and analyze information to improve understanding and fielding of SoS.
Cambridge Consultants Innovation Day 2012: Minimising risk and delay for comp... (Cambridge Consultants)
The document discusses systems engineering and its application to complex projects. It emphasizes taking a multidisciplinary approach, involving stakeholders, defining problems before solutions, and testing iteratively. Project teams at Cambridge Consultants use systems engineering practices like modeling requirements, designing configurable systems, and verifying solutions meet needs. The presentation provides tips for managing complex projects and minimizing risks.
The document discusses how the transition to wafer-level chip scale package (WLCSP) devices impacts testing costs. It notes that traditional test flows for packaged chips involve multiple discrete steps and machines, while WLCSP requires redefining the test flow to a single integrated platform to accommodate the new package type and simplify returns processing. Adopting a combined tester and inspection platform that can handle multiple input and output options could significantly reduce capital equipment, floor space needs, staffing requirements and inventory costs compared to the traditional multi-machine approach.
This document outlines an agenda and introduction for a HyperStudy training session. The morning session will cover introductions to HyperStudy, exercises in design of experiments, approximations, and exercises. The afternoon session will cover optimization, exercises in optimization, stochastic studies, and exercises in stochastic studies. HyperStudy allows users to perform design of experiments, approximations, optimization, and stochastic studies to analyze engineering designs under various conditions.
This document discusses identification, pricing, and tracking solutions for increased productivity, quality control, and inventory control throughout the production process. Labels are available in various colors and can include custom prints and preprints. They are used at multiple stages including order entry, receiving raw materials, work in process, quality control, inventory control, packing, and shipping. Labels help properly route materials and products, identify orders and jobs, and maintain records.
Ira Feldman's presentation about cost drivers for the design and fabrication of semiconductor wafer test probe cards. Presented at the IEEE Semiconductor Wafer Test Workshop, June 2011.
DELTA presented ways to minimize ASIC test costs using their European and Asian resources. They analyzed 28 ASIC projects and found test and QA activities accounted for 15-30% of total costs. DELTA proposed a strategy using their European resources for prototyping and medium volumes, and transferring knowledge to Asia for high volume testing, allowing test costs as low as 6.1 cents per chip depending on commitments made. A case study showed how optimizing test systems and processes reduced hourly rates for an RF and baseband chip from $3,940 and $6,000 originally to $2,450 and $2,680, respectively.
Liberty Mutual Information Systems uses open source tools to help Liberty Mutual Group exceed their business objectives by delivering high-value, market-responsive IT solutions. Richard Thompson discusses why open source tools are useful during various phases of development like unit testing, configuration management, and continuous integration. He provides examples of specific open source tools used for tasks like test reporting, static analysis, performance testing, and more. Thompson also outlines lessons for successfully implementing open source tools, like considering community size and support when selecting tools.
The document discusses noise, vibration, and harshness (NVH) foam equipment used in automotive applications. It notes that automakers use acoustic foam to minimize noise from tires, wind, exhaust, and engines. Two common approaches are using acoustic baffles or injected foam, with injected foam providing better sealing performance. Challenges with NVH equipment include high upfront costs, potential space constraints, and high costs of ownership. The document presents the Graco HFR metering system as offering more advanced and reliable technology for NVH foam applications at a lower initial investment compared to traditional custom systems.
The document discusses design for reliability (DFR) topics including the need for DFR, the DFR process, terminology, Weibull plotting, system reliability, DFR testing, and accelerated testing. It provides details on the DFR process, common reliability terminology such as reliability, failure rate, mean time to failure, and the bathtub curve. It also explains the exponential distribution and Weibull plotting, which are important reliability analysis tools.
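For reference, the two distributions named in the summary have simple closed forms; the sketch below evaluates them with illustrative parameters, not values from the document:

    import math

    def reliability_exponential(t, failure_rate):
        """Exponential model R(t) = exp(-lambda*t); constant failure rate, MTTF = 1/lambda."""
        return math.exp(-failure_rate * t)

    def reliability_weibull(t, beta, eta):
        """Two-parameter Weibull R(t) = exp(-(t/eta)**beta).
        beta < 1: infant mortality, beta = 1: constant rate, beta > 1: wear-out."""
        return math.exp(-((t / eta) ** beta))

    failure_rate = 1e-4                      # failures per hour (illustrative)
    print("MTTF:", 1 / failure_rate, "hours")
    print("R(1000 h), exponential:", reliability_exponential(1000, failure_rate))
    print("R(1000 h), Weibull wear-out:", reliability_weibull(1000, beta=2.0, eta=5000))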
Evidence-based software process recovery uses data from software repositories to understand the actual development process used by a team. This allows comparison of the proposed process with the recovered process. Topic modeling of commits can identify developer topics like reliability, maintainability, and portability over time. Release patterns showing activity in source code, tests, builds and documentation near releases can also be recovered. Process recovery provides an objective view of the actual development process.
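A minimal sketch of the commit topic-modeling step, assuming an LDA model from scikit-learn and invented commit messages; the study's own tooling and corpus are not shown here:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    commit_messages = [
        "fix null pointer crash in parser to improve reliability",
        "refactor build scripts for portability to windows",
        "add unit tests and clean up module interfaces for maintainability",
        "guard against timeout to improve reliability of network retry",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(commit_messages)

    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    lda.fit(doc_term)

    # Print the top words per topic; labels such as "reliability" or
    # "maintainability" are assigned by a person reading these word lists.
    terms = vectorizer.get_feature_names_out()
    for i, weights in enumerate(lda.components_):
        top = [terms[j] for j in weights.argsort()[-4:][::-1]]
        print(f"topic {i}: {', '.join(top)}")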
This document discusses INCHRON GmbH, a company that provides tools for modeling embedded systems and analyzing timing behavior. It highlights key features of INCHRON's task-centric modeling tools, including flexible processing of C code and task models, easy-to-use interfaces, and the ability to generate response times from simulation and validation. The document also outlines INCHRON's integrated workflow with IBM Rational tools and how their modeling approach allows for fast iterations to optimize designs through what-if analysis.
The document discusses efficient verification methodology. It recommends defining a conceptual framework or methodology to standardize some aspects while allowing diversity. The methodology should define interfaces and transactions upfront using an interface definition language to generate verification components and reusable assertions. It also recommends modeling systems at the transaction level using executable specifications to frontload the verification schedule.
This document provides an overview of IBM's mainline functional verification of its POWER7 processor core. It first gives background on the history and roadmap of POWER processors. It then outlines the verification methodology, execution, advances, and concludes with a summary. The POWER7 is IBM's next generation processor that features a multi-core design, on-chip eDRAM, power optimization, and memory subsystem improvements. It follows over 20 years of POWER processors and continues IBM's leadership in this area.
The document discusses various metrics used to measure CPU verification progress including architectural verification, uArchitecture verification, formal verification, and system level verification. It outlines metrics such as functional coverage conditions, bug rates, RTL lines of change, and a health of the model score. Secondary metrics include cycles run, licenses used, and bugs caught at different levels.
1) The document discusses the importance of attitude in validation work, noting that attitude is more important than tools or techniques.
2) It emphasizes that nothing is perfect and all designs have bugs or shortcomings due to compromises, schedules, and unknowns. Accidents are inevitable in engineering work which pushes designs to their limits.
3) The document provides several examples of past engineering failures to illustrate issues like normalization of deviance, unexpected interactions in complex systems, and overreliance on untested assumptions. It stresses the importance of questioning everything, fighting urges to relax requirements, and trusting nothing without proper testing.
This document describes a case study of a staged migration from e to SystemVerilog at a company designing SERDES chips. It discusses advantages such as reduced risk and the ability to train staff in small groups. Technical challenges addressed include coordinating simulation timelines and communicating between testbench parts written in different languages. Solutions involved making the SystemVerilog testbench (SVTB) the master, writing testcases as if the migration were already complete, and using Verilog to pass information between the languages; a proof of concept demonstrated the approach. To support multiple simulators, a tool connecting the Pioneer-based SVTB to DUTs running in other simulators was used, avoiding lowest-common-denominator issues.
The document discusses three major problems in verification: specifying properties to check, specifying the environment, and computational complexity. It then presents several approaches to addressing these problems, including using coverage metrics tailored to detection ability, sequential equivalence checking to avoid testbenches, and "perspective-based verification" using minimal abstract models focused on specific property classes. This allows verification earlier in design when changes are more tractable and catches bugs before implementation.
The document discusses the history and evolution of 3D graphics technologies including OpenGL and DirectX, provides an overview of GPU programming models and architectures, and explores how GPUs are increasingly being used for general purpose computing beyond just graphics through technologies like CUDA and OpenCL. It also highlights how GPUs can provide significant performance gains for parallel applications compared to CPUs.
This document discusses the challenges of pre-silicon validation for Intel Xeon processors. It notes that Xeon validation teams have relatively small sizes compared to the scope of validation required. Key challenges include reusing design components from previous projects, managing cross-site teams, and dealing with ever-growing design complexity that strains simulation and formal verification methods. Specific issues involve integrating disparate design tools and environments, understanding the original intent when reusing unfinished code, minimizing duplicated stimulus code, managing the overhead of coverage instrumentation, and ensuring tests are portable between pre-silicon and post-silicon validation.
The document describes Cisco's Base Environment methodology for digital verification. It aims to standardize the verification process, promote reuse, and improve predictability. The methodology defines a common testbench topology and infrastructure that is vertically scalable from unit to system level and horizontally scalable across projects. It provides templates, scripts, verification IP and documentation to help teams set up verification environments quickly and leverage existing best practices. The standardized approach facilitates extensive code and test reuse and delivers benefits such as faster ramp-up times, improved planning, and higher return on verification IP development.
Pivotal Labs Open View Presentation: Quality Assurance and Developer Testing (guestc8adce)
1) The document discusses moving from a traditional development model where QA finds bugs after development is complete, to a model where quality is the focal point and everyone is responsible for testing.
2) It advocates for automating tests and running them frequently during development to find bugs early and have confidence in changes.
3) The goal is to develop software quickly with low defect rates by shifting left the work of finding bugs through techniques like test-driven development.
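As an illustration of points 2) and 3), the sketch below (invented for this summary, not taken from the presentation) shows a small production function together with the automated tests that run on every change, so a defect is caught immediately rather than in a downstream QA phase:

    def apply_discount(price, percent):
        """Production code under test."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount():          # runs in CI on every commit (e.g. via pytest)
        assert apply_discount(100.0, 20) == 80.0
        assert apply_discount(19.99, 0) == 19.99

    def test_apply_discount_rejects_bad_percent():
        try:
            apply_discount(50.0, 150)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for percent > 100")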
As device software complexity grows and test cycles shrink, the risk of untested code resulting in defects in the field increases every day.
Wind River Test Management is a test management solution that identifies high-risk segments in production code, enabling change-based, optimized testing using real-time instrumentation of devices under test.
Wind River Test Management provides the following:
• Coverage and performance metrics on the same code you ship to customers
• Optimized test suite generator that runs only the tests needed to validate changes
• Full-featured lab management system and a universal, open test execution engine to run any type of test on any device
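The change-based testing idea in the second bullet can be illustrated with a small generic sketch (not Wind River's implementation): given a map from source files to the tests that exercise them, only the tests affected by a change set are scheduled to run.

    COVERAGE_MAP = {                      # built from instrumentation/coverage data
        "src/parser.c":  {"test_parser_basic", "test_parser_errors"},
        "src/network.c": {"test_retry", "test_timeout"},
        "src/ui.c":      {"test_render"},
    }

    def tests_for_change(changed_files):
        selected = set()
        for path in changed_files:
            selected |= COVERAGE_MAP.get(path, set())
        return sorted(selected)

    print(tests_for_change(["src/parser.c"]))
    # ['test_parser_basic', 'test_parser_errors']  -> only the affected tests run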
Keynote, ISSRE-13, St. Malo, France, November 4, 2004.
Outline: 21st Century IT Trends, Mobile Technology Crisis, Test Effectiveness Levels, Level 4 Case Study, Reliability Arithmetic, Test Performance Envelope.
The presentation was given at a seminar of the InfinIT interest group Softwaretest on 28 September 2010.
Read more about the interest group at http://www.infinit.dk/dk/interessegrupper/softwaretest/softwaretest.htm
A comprehensive formal verification solution for ARM-based SoC design (chiportal)
This document discusses Jasper's formal verification solutions for ARM processor-based system-on-chip (SoC) designs. It describes how Jasper can be used at the IP level to verify ARM Cortex processors and at the system level to verify aspects of full SoCs such as protocol verification, deadlock detection, and connectivity verification. Customers mentioned include Ericsson, Apple, Sony, and AMCC.
Achieving Very High Reliability for Ubiquitous Information Technology (Bob Binder)
1) The document discusses achieving very high reliability for ubiquitous information technology through full test automation.
2) It outlines the new IT reality of growing usage, mobility, and need for high reliability of "five nines" or 99.999% uptime.
3) The strategy proposed is taking a full end-to-end testing approach through automated test generation and execution to achieve the reliability needed for ubiquitous IT to scale to millions of users.
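As a concrete reading of the "five nines" target in point 2), the short calculation below converts availability percentages into yearly downtime budgets; the figures follow directly from the definition and are not taken from the talk:

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for nines, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{nines} ({availability:.3%}): about {downtime:.1f} minutes of downtime per year")

    # Five nines works out to roughly 5.3 minutes of allowed downtime per year.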
Quality Best Practices & Toolkit for Enterprise Flex (François Le Droff)
Quality Best Practices & Toolkit for Enterprise Flex
Presentation given at the French Flex user group "les tontons flexeurs" on 21 July 2009.
Authors: Xavier Agnetti, François Le Droff (and Alex Uhlmann)
Copyright: Adobe
Embedded Instrumentation: Critical to Validation and Test in the Electronics ... (guestb993cd99)
The document discusses embedded instrumentation and its importance in electronics validation and testing. It introduces traditional, virtual, and embedded instrumentation approaches. It then outlines existing technical challenges in validation and testing. Embedded instrumentation embeds test functions directly on chips and boards to solve accessibility, complexity, and cost issues. The document provides examples like Intel's IBIST and PLX's visionPAK and discusses how embedded instrumentation is a growing market trend, providing opportunities for standards-based software companies.
“Specification by Example” is a set of process patterns that helps validate the application with faster feedback and minimal documentation. With Specification by Example, teams write just enough documentation to facilitate change effectively in short iterations or in flow-based development.
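A small example of what an executable specification can look like, assuming pytest and an invented shipping-fee rule (not an example from the source): the table of concrete examples doubles as living documentation and as an automated regression test.

    import pytest

    def shipping_fee(order_total):
        """Business rule under specification: orders of 50 or more ship free."""
        return 0 if order_total >= 50 else 5

    @pytest.mark.parametrize(
        "order_total, expected_fee",
        [
            (10, 5),     # small order pays the flat fee
            (49.99, 5),  # just below the threshold still pays
            (50, 0),     # at the threshold shipping is free
            (120, 0),    # large order ships free
        ],
    )
    def test_shipping_fee_examples(order_total, expected_fee):
        assert shipping_fee(order_total) == expected_fee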
Controller Migration & Connectivity 11.10.09 (mgk918)
The document discusses controller migration and connectivity solutions from Online Development Inc. (OLDI). It summarizes OLDI's products including:
- cATM controller-to-controller modules that connect different controller brands without programming.
- eATM modules that connect ControlLogix PACs to databases and enterprise systems like IBM, Microsoft, and Oracle without programming.
- The eATM tManager, a new product that provides connectivity and configuration for eATM modules to transfer data between controllers and enterprise systems.
Implementing Test Automation in Agile Projects (Dominik Dary)
All new features at eBay Europe are developed using Scrum. One key success factor for those projects is having a reliable end-to-end test automation safety net. This presentation illustrates how, in addition to a robust automation toolset, it is essential to have an integrated approach to test automation design:
Test Aspects - Test Aspects are used to do the functional design of the end-to-end automation test cases. Since this is done upfront, the tester is able to focus on the what rather than the how.
Modeling of the Biz Domain Layer - The Biz Domain Layer is an abstraction layer above the user interface that is implemented in the test code. This layer is divided into pages and flows which are then used in the tests.
Test Implementation - Tests are written in Java, stored in SVN, and executed using the WebDriver Grid (Selenium 2). Test execution can be triggered by all team members via a continuous integration server (Hudson).
Lean Test Automation – it is important to retain and maintain the quality of the tests. Key success factors for this are: Code Reviews, Software Craftsmanship, Test Aspect Reviews and the “Definition of Done”.
Following an integrated approach to test automation ensures high efficiency, low overhead and easier maintenance.
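A hedged Python sketch of the pages-and-flows layering described above (the original implementation is in Java; class and method names here are invented): tests speak only the flow vocabulary, flows compose pages, and only pages know about UI locators.

    from selenium.webdriver.common.by import By

    class SearchPage:                          # page: wraps one screen of the UI
        def __init__(self, driver):
            self.driver = driver

        def search_for(self, term):
            box = self.driver.find_element(By.NAME, "q")
            box.send_keys(term)
            box.submit()

    class ResultsPage:
        def __init__(self, driver):
            self.driver = driver

        def result_titles(self):
            return [e.text for e in self.driver.find_elements(By.CSS_SELECTOR, ".result .title")]

    class SearchFlow:                          # flow: a business-level sequence of pages
        def __init__(self, driver):
            self.driver = driver

        def search(self, term):
            SearchPage(self.driver).search_for(term)
            return ResultsPage(self.driver)

    def test_search_returns_results(driver):   # 'driver' supplied by the test harness
        results = SearchFlow(driver).search("laptop")
        assert results.result_titles(), "expected at least one search result"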
This document discusses verification methodologies like OVM, VMM, and eRM. It provides an overview of their key features such as constrained random stimulus, coverage-driven verification, and reusable verification environments. It also examines standards development efforts and looks at trends like growing adoption of verification IP and efforts to improve interoperability between methodologies.
This document introduces Arena simulation software by providing an agenda, overview of the company Rockwell Automation and the Arena team, benefits of simulation and Arena, and how Arena can be used to model business processes. It discusses how Arena uses flowchart-based modeling for ease of use, cost savings from simulation projects, and how simulation is a team effort involving various roles.
Performance Testing Mobile and Multi-Tier Applications (Bob Binder)
Invited Talk, Chicago Quality Assurance Association, Chicago, June 26, 2007. Overview of performance testing strategy for handheld devices and multi-tier systems.
Implementing Test Automation in Agile Projects (Michael Palotas)
This document discusses test automation practices at eBay. It begins by providing facts about eBay as a company and platform. It then outlines eBay's approach to test automation, which involves designing automated tests using test aspects, modeling the business domain layer, and implementing tests using Selenium. The document advocates for a lean approach to test automation to avoid technical debt and waste. It emphasizes automating regression tests first before expanding to other test types and executing tests in parallel using a Selenium Grid for faster feedback.
This slide deck goes through the technology involved in automating tests throughout the design cycle (MIL, SIL, HIL, and test cells). It also touches on topics like lights-out testing and links to requirements databases.
CloudTest Lite is the newest member of SOASTA’s growing line of CloudTest editions. It is an enterprise-class offering that enables rapid test creation and real-time resolution for performance testing early and often throughout the development lifecycle. Delivering internal testing behind the firewall on a single server, customers can execute performance tests of up to 100 concurrent virtual users in development, QA, staging or production. With CloudTest Lite, customers can:
- Test Web and mobile applications, including applications using the latest technologies from HTML5 to REST Web services
- Quickly build tests with visual test creation tools
- Integrate application, system, and network monitoring data
- Analyze results in real-time through an interactive, integrated dashboard
- Easily upgrade to a more scalable CloudTest edition to meet expanding testing requirements
The document discusses the Appliance Factory Company and its software UShareSoft. UShareSoft aims to simplify how software appliances are built, maintained, and deployed through its platform. It provides tools to reduce complexity in software integration, deployments, maintenance, and delivery. UShareSoft can generate appliance images in various formats for physical, virtual, and cloud environments automatically. This helps lower costs compared to traditional IT project implementations.
The document discusses how shaders are created and validated for graphics processing units (GPUs). Shaders are created by applications and sent to the GPU through graphics APIs and drivers. They are then executed by the GPU's shader processors. The validation process uses layered testbenches at the sub-block, block, and system levels for maximum controllability and observability. It also employs a reference model methodology using C++ models and hardware emulation to debug designs faster than simulation alone. This methodology helps improve the schedule and find bugs earlier in the development cycle.
The document is a presentation on verification of graphics ASICs given by Shaw Yang and Gary Greenstein of AMD. The presentation covers an overview of AMD, GPU systems, 3D graphics basics including vertices, polygons, pixels and textures, verification challenges related to size and complexity, and approaches used including layered code and testbenches, hardware emulation, and functional coverage.
The document discusses the importance of using verification metrics to predict the functional closure of a CPU design project and discusses challenges in relying solely on metrics. It outlines two key types of metrics - verification test plan based metrics that track testing progress and health of the design metrics that assess bug rates and stability. Examples are provided on using bug rate data and breaking bugs down by design unit to help evaluate the progress and health of a verification effort.
The document discusses the challenges of validating next generation CPUs. It notes that validation is increasingly critical for product success but requires constant innovation. Design complexity is growing exponentially, requiring up to 70% of resources for functional validation. The number of pre-silicon logic bugs found per generation has also increased significantly. Shorter timelines and cross-site development further complicate the validation process.
The document discusses validation and design in small teams with limited resources. It proposes constraining designs to a single clock rate, using FIFO interfaces between blocks, and separating algorithm from IO verification to simplify validation. This approach allows designs to be completed more quickly with fewer verification engineers through standardized, repeatable validation methods at the cost of optimal performance.
Verification challenges have increased with the globalization of chip design. Time zone differences and documentation issues can reduce efficiency, but greater collaboration across sites can also lead to new ideas. AMD addresses these challenges through a Verification Center of Expertise (COE) that coordinates methodologies across multiple sites. The COE develops tools and techniques while partnering with project teams to jointly improve processes over time through continuous review and rotation of engineers between the COE and projects.
Greg Tierney of Avid presented on their experiences using SystemC for design verification. SystemC provides hardware constructs and simulation capabilities in C++. Avid chose SystemC to enhance their existing C++ verification code and take advantage of its industry acceptance and built-in verification features. SystemC helped Avid solve issues like crossing language boundaries between HDL modules and testbenches, connecting ports and channels, implementing randomization, using multi-threaded processes, and defining module hierarchies. However, Avid also encountered issues with SystemC like slow compile/link times and limitations in its foreign language interface.
Bob Colwell documented notes from a meeting discussing the need for better software visualization tools to help localize bugs, diagnose problems, and monitor software behavior. The notes also reflect on important words in science according to Isaac Newton and reference a book about creative analogies. Finally, they caution against agreeing to sign a document just because a product is shipping.
The document outlines the verification strategy for a PCI-Express device. It discusses an overview of the PCI-Express protocol, including terminology, hierarchy, and functions at the various layers. It emphasizes the importance of design-for-verification, using techniques like modular architectures, standardized interfaces, and reference models to aid functional verification closure and compliance testing. Performance verification is also highlighted as critical given the real-time requirements of the standard.
The document discusses verification strategies for PCI-Express. It outlines the PCI-Express protocol and highlights challenges in verifying chips that implement open standards. The verification paradigm focuses on functionality, performance, interoperability, reusability, scalability, and comprehensiveness using techniques like constrained-random testing, assertions, reference models, emulation, and compliance checkers. The goal is to deliver compliant and high-performing chips with zero bugs through an effective verification methodology.
The document discusses methodologies for improving verification efficiency at Cisco. It advocates separating testbench creation into three stages: component design, testbench integration, and testcase creation. It also recommends using standardized methodologies like testflow to synchronize component behavior, reusing unit-level component models and checkers, linking transactions between checkers, and generating common testbench infrastructure from templates to reduce duplication of effort. The key is pushing reusable behavior into components and standardizing common elements to maximize efficiency.
This document discusses the importance of pre-silicon verification for post-silicon validation. It notes that post-silicon validation schedules are growing due to increasing design complexity, while pre-silicon verification investment and methodologies have not kept pace. The document highlights mixed-signal verification, power-on/reset verification, and design-for-testability verification as key focus areas needed to improve pre-silicon verification and enable faster post-silicon validation. It provides examples of mixed-signal and power-on bugs that were found post-silicon due to insufficient pre-silicon verification of these areas. The document argues that pre-silicon verification must move beyond functional verification alone and take ownership of mixed-signal effects.
This document discusses challenges in low-power design and verification. It addresses why low-power is now a priority given trends in mobile applications. Key challenges include increased leakage due to process scaling, accounting for active leakage, and handling process variations. The document also discusses low-power design methodologies, including multiple power domains, voltage scaling, and clock gating. Verification challenges are presented, such as needing good test patterns and coordination across design domains. Overall power analysis is more complex than timing analysis due to its pattern dependence and need to optimize for performance per watt.
Verilog-AMS allows for mixed-signal modeling and simulation in a single language. It provides benefits like simplified mixed-signal modeling, decreased simulation time, and improved mixed-signal verification. Previous solutions involved using two simulators or approximating analog circuits, which caused issues like slow simulation and lack of analog results. Verilog-AMS uses constructs from Verilog and Verilog-A to model both analog and digital content together. This avoids issues with interface elements between domains.
This document discusses the verification of Intel's Atom processor. It describes the key verification challenges, methodology used, and results. The main challenges were verifying a new microarchitecture with aggressive schedules and limited resources. The methodology involved cluster-level validation, functional coverage, architectural validation, and formal verification. Metrics like coverage, bug rates, and a "health of model" indicator were used. The results showed a successful pre-silicon verification with few escapes and debug/survivability features working as intended. Key learnings included the importance of keeping the full-chip design healthy early and putting equal focus on testability features.
The document discusses verification strategies based on Sun Tzu's classic book "The Art of War". Some key points:
1. Sun Tzu emphasized understanding the objective conditions and subjective opinions of competitors to determine strategic positioning. This relates to verification where it is important to understand the design and "Murphy the Designer".
2. Sun Tzu's 13 chapters provide guidance on tactics like laying plans, attacking weaknesses, maneuvering, and using intelligence sources. These lessons can help verification engineers successfully navigate different stages of a competitive campaign against bugs and errors.
3. Effective verification requires knowing the design, understanding one's own verification process, preparing appropriate tools, and using feedback to improve. Coverage metrics alone do not tell the whole story.
Here are the key challenges faced in low power design without a common power format:
1. Domain definitions, level shifters, isolation cells, and other low power techniques are specified differently in each tool using tool-specific commands files and languages. This makes cross-tool consistency and validation difficult.
2. Power functionality cannot be easily verified at the RTL level without changing the RTL code, since power domains and low power techniques are not represented. This limits verification coverage.
3. Iteration between design creation and verification is difficult, since changes to the low power implementation require updates to multiple tool-specific specification files rather than a single cross-tool definition. This impacts design schedule and risks inconsistencies.
4.
This document discusses various metrics used to measure the progress and health of CPU verification. It describes architectural verification to ensure implementation meets specifications, as well as unit architecture and system level verification. Key metrics include pass rates for legacy tests, functional coverage, bug rates, lines of code changes, and a health of the model score to measure convergence. Secondary metrics like cycles run, bugs found at different levels, and test bench quality are also outlined.
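Purely as an illustration of how such metrics might roll up into a single score, the sketch below computes a weighted composite; the weights, inputs, and normalization are invented here and are not the scheme used in the document:

    METRIC_WEIGHTS = {
        "legacy_test_pass_rate": 0.30,   # fraction of legacy tests passing
        "functional_coverage":   0.30,   # fraction of coverage conditions hit
        "bug_rate_trend":        0.25,   # 1.0 when the incoming bug rate has flattened
        "rtl_churn_stability":   0.15,   # 1.0 when lines-of-RTL change is near zero
    }

    def health_of_model(metrics):
        """Weighted average of normalized metrics (each expected in 0.0..1.0)."""
        return sum(METRIC_WEIGHTS[name] * metrics[name] for name in METRIC_WEIGHTS)

    weekly_snapshot = {
        "legacy_test_pass_rate": 0.98,
        "functional_coverage":   0.91,
        "bug_rate_trend":        0.70,
        "rtl_churn_stability":   0.85,
    }
    print(f"health of model: {health_of_model(weekly_snapshot):.2f}")   # 0.87 for this snapshot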
This document discusses Freescale's verification of the QorIQ communication platform containing the CoreNet fabric using SystemVerilog. It describes the verification challenges, methodology used, and verification IP developed. Key aspects included developing a SystemVerilog testbench, CoreNet VIP, and hierarchical verification. This approach successfully verified the CoreNet platform and resulted in first silicon sampling to customers within 3 weeks with no major functional bugs found.
The document discusses verification challenges for modern wireless system-on-chips (SoCs). It describes how SoCs now include multiple processors, modems, multimedia components, and peripherals, making verification much more complex. Traditional "golden vector" verification is insufficient, as it lacks reactivity, coverage metrics, and visibility into hardware-software interactions. The document advocates for model-based verification using system models, constraints, assertions and other techniques to achieve a higher level of integration and achieve full functional coverage. This modern approach allows testing across different levels of abstraction and integration.