This document discusses coverage in hardware verification. It defines key coverage terms like coverage model, coverage space, and coverage point. It describes different coverage techniques like code coverage, functional coverage, assertion coverage, and transaction coverage. The document provides guidance on planning a coverage strategy, including identifying what to cover, when to add coverage points, and avoiding common pitfalls. It outlines the steps to implement coverage, from planning to collecting data to analysis. The goal is to help engineers get started with and effectively use coverage to improve verification.
2. “You have this awesome generation that pseudo-randomly creates all sorts of good scenarios. You also have created equally awesome scoreboard and temporal checker infrastructure that will catch all the bugs. Next, you run it like mad, with all sorts of seeds to hit as much of the verification space as possible.”
Peet James, Verification Plans
3. Given all that, what really happened?
• Where did all those transactions go?
• Which lines of RTL were exercised?
• Which sections of the specification were tested?
• Which corner cases of my implementation were hit?
• What was the distribution of transaction types issued?
• Do I need to create new tests?
• Can I stop running simulations?
Coverage helps provide the answers!
Coverage is a piece of the puzzle, not the final answer
4. Coverage provides…
• An understanding of which portions of the design have been exercised
• Increased observability of simulation behavior
• Feedback on which tests are effective
• Feedback to direct future verification efforts
5. Agenda
• Coverage terms and tools
• How to get started
• Coverage planning
• Coverage execution
• Coverage analysis
• Coverage results
7. Coverage terms
• Coverage strategy
− Approach defined to utilize coverage technology
− Generate, gather, and analyze coverage data
• Coverage model
− Collection of coverage spaces
− Definition of one or more coverage spaces of interest
• Coverage space
− Set of coverage points associated with a single aspect of the design and a single coverage technology
• Coverage technology
− Specific mechanism such as code, functional, assertion, transaction
• Coverage point
− A specific named aspect of the design behavior
− FCP (functional coverage point), line of code, state transition, transaction or sequence of transactions
• Coverage data
− Raw data collected from all coverage points and coverage spaces
• Coverage results
− Interpretation of coverage data in context of coverage model
8. Coverage model vs. coverage tools
[Diagram] The coverage model is the WHAT: it spans the design from high level (architecture, specification, bug rates, simulation cycles) down to low-level design detail, drawing on transaction, functional, code, and assertion coverage. Coverage tools are the HOW.
9. Code coverage
• Line/block, branch, path, expression, state
• Measures controllability aspect of our stimulus
− i.e. What lines of code have we exercised
• Does not connect us to the actual functionality of the chip
− No insight into functional correctness
• Takes a blind approach to coverage (low observability)
− Activating an erroneous statement does not mean the error will propagate to an observable point during the course of a simulation (sketched below)
• Generates a lot of data
− Difficult to interpret what is significant and what is not
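A minimal sketch of that low-observability point, using a hypothetical module and signal names (not from the slides): both branches are executed and counted as covered, yet the buggy one never propagates to an output any checker observes.

module cov_observability_sketch (
  input  logic       clk,
  input  logic       sel_error,
  input  logic [7:0] a, b,
  output logic [7:0] result
);
  logic [7:0] scratch;

  always_ff @(posedge clk) begin
    if (sel_error)
      scratch <= a - b;   // wrong operation: executed and "covered", but never checked
    else
      scratch <= a + b;
    result <= a + b;      // the output never depends on scratch, so the bug stays invisible
  end
endmodule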
10. Assertion coverage
Assertions monitor and report undesirable behavior
• Ensures that preconditions of an assertion check have been met
// SVA: if asserting stop or flush, no new request
assert property (@(posedge clk) disable iff (!rst_n)
  (Flush | SMQueStop)   // precondition
  |-> !SMQueNew)        // check
  else $error("Illegal behavior");
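As a companion sketch (same hypothetical signals, meant to sit next to the assertion above in the same module), a cover directive on the precondition tells us whether the check was ever exercised rather than merely never violated:

// Cover the assertion's precondition: if this never fires, the stimulus
// never really tested the assert above.
cover property (@(posedge clk) disable iff (!rst_n)
  Flush | SMQueStop);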
11. Functional coverage
• Similar in nature to assertions
Assertions monitor and report undesirable behavior
Functional coverage monitors and reports desirable behavior
• Functional coverage
− Specific design details
− Corner cases of interest to engineers
− Architectural features
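A minimal covergroup sketch of such corner cases, assuming a hypothetical queue with a fill count q_count and a wr_ptr_wrapped flag (names and bins are illustrative, not from the slides):

module queue_corner_cov #(parameter int QDEPTH = 16) (
  input logic                    clk,
  input logic [$clog2(QDEPTH):0] q_count,
  input logic                    wr_ptr_wrapped
);
  covergroup queue_corners_cg @(posedge clk);
    cp_depth : coverpoint q_count {
      bins empty       = {0};
      bins almost_full = {[QDEPTH-2:QDEPTH-1]};
      bins full        = {QDEPTH};
      bins working     = default;
    }
    cp_wrap : coverpoint wr_ptr_wrapped;    // has the write pointer wrapped yet?
    full_x_wrap : cross cp_depth, cp_wrap;  // corner case: full queue after a wrap
  endgroup

  queue_corners_cg cg = new();
endmodule

The cross records the kind of engineer-identified corner case that code coverage alone would never surface.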
12. Transaction coverage
• A transaction is the logging of any data structure
− A packet on a bus
− Does not have to be a system packet
• Example transaction coverage points
− All transaction types were seen on each interface
− Transactions with specific data were seen
• Source, destination, address, address ranges
• Sequences of transactions
− Have recording monitor watch for sequence
− Implement advanced queries to look for sequence
• Two parts to transaction coverage
− Record the right data
− Correct queries
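One possible shape for the "record the right data" half, sketched with a hypothetical bus_txn record and a coverage class that a recording monitor would call once per logged transaction (all names are illustrative):

typedef enum {READ, WRITE, ATOMIC} txn_kind_e;

class bus_txn;
  txn_kind_e   kind;
  logic [3:0]  src, dst;
  logic [31:0] addr;
endclass

class txn_coverage;
  bus_txn t;

  covergroup txn_cg;
    cp_kind : coverpoint t.kind;   // all transaction types seen on this interface?
    cp_src  : coverpoint t.src;    // every source seen?
    cp_dst  : coverpoint t.dst;    // every destination seen?
    cp_addr : coverpoint t.addr {
      bins low_mem = {[32'h0000_0000 : 32'h0000_FFFF]};
      bins cfg     = {[32'hF000_0000 : 32'hFFFF_FFFF]};
      bins rest    = default;
    }
    kind_x_addr : cross cp_kind, cp_addr;   // each type seen to each address range?
  endgroup

  function new();
    txn_cg = new();
  endfunction

  // Called by the recording monitor for every transaction it logs.
  function void sample_txn(bus_txn txn);
    t = txn;
    txn_cg.sample();
  endfunction
endclass

The "correct queries" half then amounts to reading the bin and cross reports, or querying the recorded transactions for sequences.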
13. EDA tools
• Code, FCPs, and transactions are recorded into vendor-specific databases
− Tools are provided to look at coverage data
− Report engines provide text reports
• Debug tools for FCPs and assertions
• Tools to encourage coverage-driven methodologies
• Coverage is still a young technology
− Tools still expanding set of capabilities
− Development areas such as data aggregation, multiple view extraction
14. How do I get started with this coverage stuff?
15. Coverage roadmap – getting started
[Flow diagram]
Planning: 1. Choose specification form (PSL / SVA / OVL) → 2. Identify coverage model (from spec and design)
Execution: 3. Implement coverage model (code / assertion / FCP / txn)
Consumption: 4. Collect data (code, FCP, txn) → 5. Analyze data (coverage tools) → 6. React to data (adjust stimulus)
16. Coverage planning
[Coverage roadmap diagram repeated as a section divider (see slide 15)]
17. Coverage planning
Start looking at coverage up front!
Coverage results are only as good as the coverage model
• Identify content of the coverage model
− Coverage types to be used
• Identify required tools
• Coverage infrastructure
• Coverage execution
• Maintenance and tool enhancements
• Define coverage goals and metrics
• Coverage reviews
18. Who, what, when, where, why
• Who creates coverage model?
− Who analyses the data? Logic and DV engineers
− Who owns coverage?
• What to cover in the model? Concern areas
• When to add coverage points? Add with RTL
− When to analyze coverage data? Analyze continuously
• Where to look for ideas? Spec, design, assertions, test plan
• Why mess with coverage? Because…
19. For FCPs, ask yourself…
− What should be covered?
− Where is best place to put FCPs?
− When to look for condition?
− Why have coverage point?
20. Watch out for…
• Too much data
− Need information, not data
− Need supporting tools to get correct views of data
• Ineffective use of coverage
− FCPs that fire every clock cycle (see the sketch below)
− Duplication of coverage with different tools
• Reading too much into grading tests
− Random tests produce different results with different seeds
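A hedged sketch of that first pitfall (hypothetical signals, fragments meant to live inside a module where MAX_RETRIES is a parameter): the first cover point fires on essentially every cycle and yields only noise, while the second witnesses a condition worth knowing about.

// Ineffective: true on virtually every out-of-reset cycle -- pure noise.
ineffective_cp : cover property (@(posedge clk) rst_n);

// Useful: qualified by the rare condition it is meant to witness.
retry_limit_cp : cover property (@(posedge clk) disable iff (!rst_n)
  (retry_count == MAX_RETRIES) && grant);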
21. Cost of coverage
• Plan for the costs of using coverage
− Get solid infrastructure setup
− Plan for slower simulations
• Some level of cost is acceptable
− Getting value back for investment
• Be smart
− Architect coverage plan up front to ensure success
22. Coverage execution
[Coverage roadmap diagram repeated as a section divider (see slide 15)]
23. Describing coverage model
• Code coverage
− RTL code, pragmas
• Assertion and functional coverage
− Use assertion language or library (PSL, SVA, OVL)
• Transaction
− Use hooks into Transaction Level Modeling
// SVA cover example
always @(posedge clk) begin
  if (reset_n)
    myQfull: cover (q_full)
      $info("queue was full");
end

// PSL cover example
default clock = (posedge clk);
sequence qFullCondition = {reset_n ? q_full : 1'b0};
cover qFullCondition;
24. Data collection
• Collect data across volume simulation
• Aggregate multiple databases
• Location of coverage data repository
• Manage volume of data
25. Coverage analysis
[Coverage roadmap diagram repeated as a section divider (see slide 15)]
26. The analysis
• Easy to generate a ton of data
− Want information, not data
Need to organize the data
Can’t look at it all at once
Determine views needed
27. Views of coverage data
• Un-hit coverage
• Functionality groups
• Block, chip, system
• Current milestone functionality
• Instance or module specific
• Across environments, time, model releases
• Cross views
28. Our use of coverage
• Aggregate data for each verification environment
• Views: Verification effectiveness
− Verification environment
• Views: TR readiness
− Major sub-blocks and chip
• Filtering infrastructure
− Milestone specific functionality
− Unreachable
• Aggregate coverage data across windows of time
• Metrics provided for each team and full chips
29. Analysis is done… now what?
• Understand all un-hit coverage
• Fill coverage holes
• Look for hard to hit coverage
• Track coverage metrics
Don’t play games with metrics just to get coverage goals met
Really understand the results
30. Coverage results
[Coverage roadmap diagram repeated as a section divider (see slide 15)]
31. Success stories
• They exist!
• Check them out in Assertion-Based Design
32. HP coverage data
• SX1000 chipset
− 6,500 FCPs
• SX2000 chipset
− 25,000 FCPs
• Current efforts
− 135,000 assertions and 650,000 FCPs
− 56,000 transaction points
• Coverage goals
− 100% coverage with understood exceptions
− Team defined goals per milestone
33. Resources
− J. Bergeron, Writing Testbenches: Functional Verification of HDL Models, Second Edition, Kluwer Academic Publishers, 2003.
− H. Foster, A. Krolnik, D. Lacey, Assertion-Based Design, Second Edition, Kluwer Academic Publishers, 2004.
− P. James, Verification Plans: The Five-Day Verification Strategy for Modern Hardware Verification Languages, Kluwer Academic Publishers, 2004.
− A. Piziali, Functional Verification Coverage Measurement and Analysis, Kluwer Academic Publishers, 2004.
− B. Cohen, Using PSL/Sugar with Verilog and VHDL: Guide to Property Specification Language for ABV, VhdlCohen Publishing, 2003.
David Lacey, Hewlett Packard, david.lacey@hp.com
Rob Porter, Hewlett Packard, robert.porter@hp.com