Sun Tzu's The Art of War provides guidance for verification engineers in their battle against Murphy the Designer. Key lessons include knowing your enemy (understanding the design and the best verification approaches), knowing yourself (improving processes and understanding strengths and weaknesses), and preparing yourself with the right tools while maintaining flexibility. Using feedback based on objective metrics is also important for success.
The document discusses verification strategies based on Sun Tzu's classic book "The Art of War". Some key points:
1. Sun Tzu emphasized understanding the objective conditions and subjective opinions of competitors to determine strategic positioning. This relates to verification where it is important to understand the design and "Murphy the Designer".
2. Sun Tzu's 13 chapters provide guidance on tactics like laying plans, attacking weaknesses, maneuvering, and using intelligence sources. These lessons can help verification engineers successfully navigate different stages of a competitive campaign against bugs and errors.
3. Effective verification requires knowing the design, understanding one's own verification process, preparing appropriate tools, and using feedback to improve. Coverage metrics alone do not tell the whole story.
This document discusses managing technical issues on complex, high-risk projects. It emphasizes understanding risk through tools like risk matrices and fault trees to assess impact on cost, schedule, and safety. Issues should be addressed according to project phase and how they affect engineering, assembly, testing, operations, and integration teams. Thorough documentation and flexible requirements help manage the large volume of issues while allowing for unique project challenges. The goal is to choose solutions that minimize overall risk by fixing, mitigating, or accepting problems based on likelihood and consequences.
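As a rough illustration of the risk-matrix idea described above, the sketch below scores issues by likelihood and consequence and maps the score to a fix/mitigate/accept decision. The class, thresholds, and example issues are assumptions made for illustration, not taken from the document.

```python
# Hedged sketch: a minimal risk-matrix scoring helper, assuming a 5x5
# likelihood/consequence scale. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    consequence: int   # 1 (negligible) .. 5 (catastrophic)

def risk_score(issue: Issue) -> int:
    """Classic risk-matrix score: likelihood x consequence."""
    return issue.likelihood * issue.consequence

def triage(issue: Issue) -> str:
    """Map the score to a fix / mitigate / accept decision (thresholds are illustrative)."""
    score = risk_score(issue)
    if score >= 15:
        return "fix"        # high risk: eliminate the problem
    if score >= 6:
        return "mitigate"   # medium risk: reduce likelihood or consequence
    return "accept"         # low risk: document and move on

issues = [Issue("connector corrosion", 4, 4), Issue("label typo", 2, 1)]
for i in issues:
    print(i.name, risk_score(i), triage(i))
```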
1) The document outlines a whole class literacy profile created by four first grade teachers at W.D. Richards Elementary School to develop independent, motivated, and lifelong readers and writers.
2) It discusses how assessments will drive instruction through small group work, individual instruction, and focus lessons based on student needs.
3) The assessments will evaluate student interests, skills, comprehension strategies, and writing to determine individual goals and guide literacy instruction.
This document outlines how two teachers at Yorktown Elementary School created a Classroom Literacy Profile (CLP) to better monitor student progress and drive classroom instruction. The CLP includes data from mandated assessments given throughout the year, as well as more frequent informal assessments collected by the teachers. All assessment data and teacher notes are stored in a binder kept in the classroom. The teachers plan to use the CLP each week during planning to inform both classroom lessons and targeted interventions through the school's RtI program. They are excited to implement the CLP this year and view it as a flexible tool that can evolve over time based on student needs.
The document discusses reaching coverage closure in verification by extracting important uncovered events from large coverage models and using techniques like manual analysis of coverage views, automatic hole analysis, and semi-automatic hole queries to group events and focus on areas of low coverage. It also addresses the challenge of providing concise yet informative coverage reports when dealing with thousands or millions of potential coverage points.
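The hole-analysis idea can be shown with a small sketch: given a cross-coverage space and the events actually observed, uncovered events that share attribute values are grouped into a few aggregated holes. The coverage model, attribute names, and data below are invented for illustration.

```python
# Hedged sketch of "hole analysis" on a cross-coverage model: group uncovered
# cross-product bins that share attribute values so that many individual
# holes collapse into a few human-readable ones.

from itertools import product
from collections import defaultdict

# A toy cross-coverage model: opcode x size x alignment.
opcodes = ["read", "write", "atomic"]
sizes = [1, 2, 4, 8]
aligns = ["aligned", "unaligned"]
space = set(product(opcodes, sizes, aligns))

# Pretend these are the only events observed in simulation.
covered = {(o, s, a) for (o, s, a) in space if not (o == "atomic" and a == "unaligned")}
holes = space - covered

# Group holes by the attribute values they share; a group that spans a full
# sub-space (e.g. every size) is reported as one aggregated hole.
by_pair = defaultdict(list)
for o, s, a in holes:
    by_pair[(o, a)].append(s)

for (o, a), missed in sorted(by_pair.items()):
    if len(missed) == len(sizes):
        print(f"hole: opcode={o}, align={a}, size=ALL ({len(missed)} events)")
    else:
        print(f"hole: opcode={o}, align={a}, sizes={sorted(missed)}")
```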
Ashley Kitt and Liz Phillips are special education teachers at Churubusco Middle School who service 28 students with disabilities. They created a Classroom Literacy Profile (CLP) to better organize their assessment of students and monitor progress more accurately and efficiently. The school has approximately 300 students in grades 6-8, with 30% receiving special education services. The teachers' goals are to monitor student progress more efficiently through better organization and incorporate additional assessments in literacy. Their CLP outlines current and new assessments they administer, as well as their organization and implementation plans to assess students at the beginning and end of year, and monthly.
Ms. Oeding created a classroom literacy profile (CLP) for her fifth grade class for the upcoming school year. She collaborated with a third grade teacher to develop the CLP. Their CLP includes four main components: DIBELS assessments to monitor comprehension every two weeks, comprehension assessments Ms. Oeding has used in the past, the CMRSU comprehension strategy assessment, and the Motivation for Reading Questionnaire to understand students' interests. Ms. Oeding found resources for the CLP through the course textbooks, discussions with other teachers, and collaborating with her third grade colleague.
Raj Dayal presented recommendations for managing the deployment of SystemVerilog Assertions (SVA) in projects. Some key points:
1. Assertions should be written by designers, verification engineers, and system engineers to fully specify the intended behavior. Training is important.
2. Assertions can be selectively included or excluded at compile-time using macros. They can also be individually turned on or off at run-time.
3. Assertions should be named meaningfully to aid debugging. Conventions like project_name_assert_label avoid conflicts.
4. Too many assertions can slow down simulation. A guideline is to limit the number compiled into the simulation model to fewer than 5000.
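A minimal sketch of how points 3 and 4 might be checked mechanically: a small lint pass that flags assertion labels not matching a project_name_assert_label convention and warns when the assertion count approaches the 5000 guideline. The project prefix, regular expression, and file handling are assumptions; real SVA parsing needs a proper front end.

```python
# Hedged sketch: lint SystemVerilog sources for assertion naming and count.
# The convention and limit come from the recommendations above; everything
# else (prefix, regex, CLI) is illustrative.

import re
import sys
from pathlib import Path

PROJECT = "myproj"                                   # assumed project prefix
LABEL_RE = re.compile(rf"^\s*({PROJECT}_assert_\w+)\s*:\s*assert\b")
MAX_ASSERTIONS = 5000                                # guideline quoted above

def lint(paths):
    labels, violations = [], []
    for path in paths:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
            if ": assert" in line or ":assert" in line:
                m = LABEL_RE.match(line)
                if m:
                    labels.append(m.group(1))
                else:
                    violations.append(f"{path}:{lineno}: assertion label does not match convention")
    if len(labels) > MAX_ASSERTIONS:
        violations.append(f"{len(labels)} assertions exceed the {MAX_ASSERTIONS} guideline")
    return labels, violations

if __name__ == "__main__":
    found, problems = lint(sys.argv[1:])
    print(f"{len(found)} labelled assertions")
    for p in problems:
        print("WARNING:", p)
```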
This document discusses amorphous voltage islands, an approach to lowering power consumption by using multiple, granular voltage domains. It aims to reduce both active (AC) and leakage (DC) power through voltage scaling while minimizing performance degradation. The document outlines the topic and references previous work on voltage islands and level converters. It will describe implementing amorphous voltage islands in a PowerPC 405 core design using a 130nm process.
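For context on why voltage scaling helps both power components, here is a back-of-the-envelope sketch using the standard first-order relations (dynamic power roughly proportional to C·V²·f, leakage roughly proportional to V·I_leak). All numbers are made up; they are not from the document or the PowerPC 405 design.

```python
# Hedged sketch: first-order comparison of a block at nominal supply versus
# the same block placed on a lower-voltage island. Values are illustrative only.

def dynamic_power(c_eff, vdd, freq, activity=0.2):
    """AC power ~ activity * C * V^2 * f (first-order approximation)."""
    return activity * c_eff * vdd**2 * freq

def leakage_power(vdd, i_leak):
    """DC power ~ V * I_leak; real leakage also depends strongly on Vt and temperature."""
    return vdd * i_leak

nominal = dynamic_power(1e-9, 1.2, 500e6) + leakage_power(1.2, 5e-3)
island  = dynamic_power(1e-9, 0.9, 500e6) + leakage_power(0.9, 3e-3)  # lower-Vdd island
print(f"nominal block: {nominal*1e3:.1f} mW, low-voltage island: {island*1e3:.1f} mW")
```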
This document summarizes the cell verification process used at Sony, Toshiba, and IBM for a new microprocessor. A hierarchical verification methodology was used, breaking the design into partition, island, unit, and block levels. Key metrics like code coverage, passing rates, reviews, and bug rates were tracked at each level. Overall, the methodology was effective, finding 95% of bugs at lower levels and only 3.5% remaining at the full-chip level.
This document outlines a classroom literacy profile created by two 4th grade teachers to better monitor student literacy levels and progress through various assessments. It details the current assessments in place like ISTEP, NWEA, and DRA as well as new assessments like Rigby benchmarks that will be added. It also discusses new interventions and strategies for comprehension and vocabulary that will be implemented. The classroom literacy profile aims to organize existing assessment data to drive literacy instruction and allow teachers to collaborate on student successes and challenges over multiple years.
The document discusses next generation emulation techniques for improving verification speed and efficiency compared to first generation emulation and simulation. It provides two case studies: an AXI switch performance validation using an FPGA-based emulator setup, and a project at CMU involving ProtoFLEX. The document argues that next generation approaches like standard interfaces, transactors, and synthesizable testbenches help eliminate bottlenecks and allow emulation to be used earlier in the design cycle.
The document discusses steps for structuring decision problems:
1. Filter and operationalize objectives by classifying them as means or fundamental objectives and how they will be measured.
2. Structure the elements of the decision problem in a logical framework using influence diagrams to represent decisions, uncertain events, and consequences and their logical relationships.
3. Fill in the details of the influence diagram by precisely defining decisions and uncertain events, specifying probability distributions, and how consequences will be measured against the objectives.
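A minimal sketch of the influence-diagram structure described in these steps, representing decision, chance, and value nodes and the arcs between them. The node names, probabilities, and arcs are invented for illustration.

```python
# Hedged sketch: an influence diagram as a tiny directed graph with three node
# types (decision, chance, value). Real tools add utility functions and solve
# the diagram; this only captures the structure.

nodes = {
    "launch_date":   {"type": "decision"},
    "market_demand": {"type": "chance", "p": {"high": 0.4, "low": 0.6}},
    "profit":        {"type": "value"},
}
# Arcs record what each node's outcome or value depends on.
arcs = [("launch_date", "profit"), ("market_demand", "profit")]

def parents(node):
    return [src for src, dst in arcs if dst == node]

for name, attrs in nodes.items():
    print(f"{attrs['type']:8s} {name:14s} depends on {parents(name) or 'nothing'}")
```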
The webinar discussed good practices in promoting microinsurance products. It provided a 10 step framework for developing a successful promotional campaign including selecting target audiences, defining goals and messages, developing creative elements, and choosing communication channels. Three case presenters then shared examples from Haiti, South Africa, and Guatemala. Their promotional campaigns utilized a variety of channels such as radio, print, online, and in-person events to raise awareness and drive sales of microinsurance policies tailored to low-income customers.
Agile2009 - How to sell a traditional client on an Agile project plan (OpenSource Connections)
12 suggestions for how to convince traditional clients to agree to an Agile project plan. Presented by Arin Sime of OpenSource Connections at Agile 2009 in Chicago.
The document outlines the key steps in an analytical problem solving process: 1) clarify the problem, 2) investigate causes, 3) identify decision criteria, 4) identify solutions, 5) evaluate solutions, 6) implement a solution, and 7) follow up and measure. It emphasizes that clarifying the problem is the most important first step, and provides tools like the 5 Ws, 5 Whys, and SWOT analysis to help define and understand the problem. The document also provides a real-world example of using the 5 Whys technique to uncover the root cause of late product shipments.
Delivering value early and often gives us the best opportunity to beat the competition to market, realize revenue, and discover insights we can use to improve.
The document provides an overview of overcoming project failure and secrets to mastering success. It discusses the importance of reputation for project managers and how to respond to failure with a growth mindset. It outlines 10 common mistakes project managers make, including preparing an overly ambitious schedule, pretending to know more than you do, and ignoring problems. The document also discusses how to come back from failure through strategies like being honest about mistakes, having patience, and focusing on growth. It emphasizes that mindset is important and shares a parting quote about not being defeated by encountering defeats.
QASymphony Webinar - "How to Start, Grow & Perfect Exploratory Testing on your Team" (QASymphony)
This webinar defines a clear path to success with exploratory testing, no matter what stage of the testing process you are currently in. Learn how to make the internal “sell” to get exploratory testing off the ground, and then how to standardize and scale exploratory testing for the enterprise. Whether your organization is waterfall, agile, or somewhere in between, a properly implemented exploratory testing process is sure to increase the value of your testing team.
1) A company's response to a scandal must be carefully calibrated based on factors like the brand, nature of the event, and parties being blamed rather than a blanket approach.
2) A four-step framework is proposed for crafting an effective response: assess the incident and spill over/rebound effects, acknowledge the problem without premature statements, formulate a strategic response depending on if allegations are true or false, and implement response tactics based on customer perspective.
3) Executives cannot rely solely on preventative measures and must be prepared to respond to scandals, such as having a crisis team and contingency budget ready.
AgileCville: How to sell a traditional client on an Agile project plan (OpenSource Connections)
This document provides strategies for selling an Agile project plan to a traditional client. It begins with explaining why Agile needs to be sold and defines a traditional environment. It then discusses 11 strategies for persuading clients, including running a trial sprint, using case studies and metrics to show successes, finding a champion, and comparing Agile to other methodologies. It stresses the importance of continuing to promote Agile's benefits throughout the project. The document aims to help consultants overcome clients' fears of Agile and replace traditional upfront documentation with iterative development.
Test Strategy - The real silver bullet in testing by Matthew Eakin (QA or the Highway)
This document provides an overview of creating a testing strategy. It begins with explaining why a testing strategy is important, as testing accounts for a large portion of IT budgets. It then discusses the key questions a testing strategy should answer: what to test, where to test, when to test, how to test, and who will test.
The document outlines a process for creating a testing strategy, including assessing the current state, defining a future vision, and creating a roadmap to get from the current to the future state. It provides examples of what to include under each section of the strategy, such as system architecture under "what to test" and test environments under "where to test". Overall, the document provides guidance on developing a testing strategy.
Intersection18: From a "Simple" App Challenge for Astronauts to an Enterprise... (Intersection Conference)
How a “simple” app implementation for a client in the space industry helped our team identify, isolate, and rethink the client's procedures and communications. Sometimes we focus on the tree, in this case a space rocket, and miss the forest... in our case the galaxy. We will go through the tools used for building the app and how they unveiled pain points and challenges within the organization itself. Sometimes we need to build a tool to explain what major changes an enterprise should face.
The document discusses the importance of advocacy planning. It notes that advocacy planning is important because it helps: 1) head in the right direction by breaking goals into manageable steps, 2) use scarce resources wisely through strategic choices, and 3) counter any potential opposition. The document also outlines common problems with poor advocacy planning like unclear objectives. Finally, it presents the advocacy planning cycle as a series of logical questions to analyze issues, set goals, develop strategies, and monitor outcomes.
This document outlines steps for effective decision making, including defining the problem, determining requirements, establishing goals, identifying alternatives, selecting a decision making tool, evaluating feedback, and committing to a decision. It distinguishes between programmed and non-programmed decisions, and lists tips for making decisions, such as avoiding snap judgments, visualizing outcomes, and basing choices on objective facts. The overall goal is to guide decision-makers through a transparent process to select the best alternative.
This is the deck we used for a webinar presentation, along with HR.com, on how to handle and successfully manage individual and organizational transitions on the job.
This document provides guidance on planning responses for case study exams. It discusses the different types of issues examiners may present, including corporate governance, competitive threats, inability to deliver results, survival threats, strategy implementation, altered strategy, and strategic proposals. The document recommends a 6-step approach to planning: 1) identify the event, 2) consider the issue, 3) evaluate the impact, 4) identify alternative actions, 5) analyze advantages and disadvantages, and 6) decide on recommendations. It then provides an example mini-case scenario about the resignation of a commercial director to demonstrate how to apply the planning process.
The document describes Cisco's Base Environment methodology for digital verification. It aims to standardize the verification process, promote reuse, and improve predictability. The methodology defines a common testbench topology and infrastructure that is vertically scalable from unit to system level and horizontally scalable across projects. It provides templates, scripts, verification IP and documentation to help teams set up verification environments quickly and leverage existing best practices. The standardized approach facilitates extensive code and test reuse and delivers benefits such as faster ramp-up times, improved planning, and higher return on verification IP development.
This document discusses the challenges of pre-silicon validation for Intel Xeon processors. It notes that Xeon validation teams have relatively small sizes compared to the scope of validation required. Key challenges include reusing design components from previous projects, managing cross-site teams, and dealing with ever-growing design complexity that strains simulation and formal verification methods. Specific issues involve integrating disparate design tools and environments, understanding the original intent when reusing unfinished code, minimizing duplicated stimulus code, managing the overhead of coverage instrumentation, and ensuring tests are portable between pre-silicon and post-silicon validation.
The document discusses how shaders are created and validated for graphics processing units (GPUs). Shaders are created by applications and sent to the GPU through graphics APIs and drivers. They are then executed by the GPU's shader processors. The validation process uses layered testbenches at the sub-block, block, and system levels for maximum controllability and observability. It also employs a reference model methodology using C++ models and hardware emulation to debug designs faster than simulation alone. This methodology helps improve the schedule and find bugs earlier in the development cycle.
The document is a presentation on verification of graphics ASICs given by Shaw Yang and Gary Greenstein of AMD. The presentation covers an overview of AMD, GPU systems, 3D graphics basics including vertices, polygons, pixels and textures, verification challenges related to size and complexity, and approaches used including layered code and testbenches, hardware emulation, and functional coverage.
The document discusses the importance of using verification metrics to predict the functional closure of a CPU design project and discusses challenges in relying solely on metrics. It outlines two key types of metrics - verification test plan based metrics that track testing progress and health of the design metrics that assess bug rates and stability. Examples are provided on using bug rate data and breaking bugs down by design unit to help evaluate the progress and health of a verification effort.
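A small sketch of the two breakdowns mentioned above, bug rate per work week and bugs per design unit, computed from a list of bug records. The records and unit names are fabricated to show the shape of the calculation only.

```python
# Hedged sketch: summarize bug data as a weekly rate and a per-unit breakdown.

from collections import Counter

bugs = [  # (work_week, design_unit) -- invented records
    (10, "fetch"), (10, "lsu"), (11, "fetch"), (11, "fetch"),
    (12, "decode"), (12, "lsu"), (13, "lsu"),
]

per_week = Counter(week for week, _ in bugs)
per_unit = Counter(unit for _, unit in bugs)

print("bugs per work week:", dict(sorted(per_week.items())))
print("bugs per design unit:", dict(per_unit.most_common()))

# A falling weekly rate together with an even unit breakdown is one signal of
# convergence; a single unit dominating late in the project is a red flag.
```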
The document discusses efficient verification methodology. It recommends defining a conceptual framework or methodology to standardize some aspects while allowing diversity. The methodology should define interfaces and transactions upfront using an interface definition language to generate verification components and reusable assertions. It also recommends modeling systems at the transaction level using executable specifications to frontload the verification schedule.
The document discusses the challenges of validating next generation CPUs. It notes that validation is increasingly critical for product success but requires constant innovation. Design complexity is growing exponentially, requiring up to 70% of resources for functional validation. The number of pre-silicon logic bugs found per generation has also increased significantly. Shorter timelines and cross-site development further complicate the validation process.
The document discusses validation and design in small teams with limited resources. It proposes constraining designs to a single clock rate, using FIFO interfaces between blocks, and separating algorithm from IO verification to simplify validation. This approach allows designs to be completed more quickly with fewer verification engineers through standardized, repeatable validation methods at the cost of optimal performance.
Verification challenges have increased with the globalization of chip design. Time zone differences and documentation issues can reduce efficiency, but greater collaboration across sites can also lead to new ideas. AMD addresses these challenges through a Verification Center of Expertise (COE) that coordinates methodologies across multiple sites. The COE develops tools and techniques while partnering with project teams to jointly improve processes over time through continuous review and rotation of engineers between the COE and projects.
Greg Tierney of Avid presented on their experiences using SystemC for design verification. SystemC provides hardware constructs and simulation capabilities in C++. Avid chose SystemC to enhance their existing C++ verification code and take advantage of its industry acceptance and built-in verification features. SystemC helped Avid solve issues like crossing language boundaries between HDL modules and testbenches, connecting ports and channels, implementing randomization, using multi-threaded processes, and defining module hierarchies. However, Avid also encountered issues with SystemC like slow compile/link times and limitations in its foreign language interface.
Bob Colwell documented notes from a meeting discussing the need for better software visualization tools to help localize bugs, diagnose problems, and monitor software behavior. The notes also reflect on important words in science according to Isaac Newton and reference a book about creative analogies. Finally, they caution against agreeing to sign a document just because a product is shipping.
The document outlines the verification strategy for a PCI-Express device. It discusses the PCI-Express protocol overview including terminology, hierarchy and functions at various layers. It emphasizes the importance of design-for-verification using techniques like modular architectures, standardized interfaces and reference models to aid in functional verification closure and compliance testing. Performance verification is also highlighted as critical given the real-time requirements of the standard.
The document discusses verification strategies for PCI-Express. It outlines the PCI-Express protocol and highlights challenges in verifying chips that implement open standards. The verification paradigm focuses on functionality, performance, interoperability, reusability, scalability, and comprehensiveness using techniques like constrained-random testing, assertions, reference models, emulation, and compliance checkers. The goal is to deliver compliant and high-performing chips with zero bugs through an effective verification methodology.
The document discusses methodologies for improving verification efficiency at Cisco. It advocates separating testbench creation into three stages: component design, testbench integration, and testcase creation. It also recommends using standardized methodologies like testflow to synchronize component behavior, reusing unit-level component models and checkers, linking transactions between checkers, and generating common testbench infrastructure from templates to reduce duplication of effort. The key is pushing reusable behavior into components and standardizing common elements to maximize efficiency.
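As a hedged illustration of generating common testbench infrastructure from templates, the sketch below emits a skeleton testbench file per block from a string template. The template text, block names, and file layout are assumptions, not Cisco's actual flow.

```python
# Hedged sketch: stamp out per-block testbench skeletons from one template so
# the shared scaffolding is written once. A real flow would use richer
# templates and emit full SystemVerilog/UVM environments.

from string import Template

TB_TEMPLATE = Template("""\
module ${block}_tb;
  // clock/reset, interface instances and component hookup would go here
  ${block}_if  u_if();
  ${block}     u_dut(.bus(u_if));
endmodule
""")

for block in ["rx_dma", "tx_dma"]:
    with open(f"{block}_tb.sv", "w") as f:
        f.write(TB_TEMPLATE.substitute(block=block))
    print(f"generated {block}_tb.sv")
```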
This document discusses the importance of pre-silicon verification for post-silicon validation. It notes that post-silicon validation schedules are growing due to increasing design complexity, while pre-silicon verification investment and methodologies have not kept pace. The document highlights mixed-signal verification, power-on/reset verification, and design-for-testability verification as key focus areas needed to improve pre-silicon verification and enable faster post-silicon validation. It provides examples of mixed-signal and power-on bugs that were found post-silicon due to insufficient pre-silicon verification of these areas. The document argues that pre-silicon verification must move beyond functional verification alone and take ownership of areas such as mixed-signal effects, power-on/reset behavior, and design-for-testability.
This document discusses challenges in low-power design and verification. It addresses why low-power is now a priority given trends in mobile applications. Key challenges include increased leakage due to process scaling, accounting for active leakage, and handling process variations. The document also discusses low-power design methodologies, including multiple power domains, voltage scaling, and clock gating. Verification challenges are presented, such as needing good test patterns and coordination across design domains. Overall power analysis is more complex than timing analysis due to its pattern dependence and need to optimize for performance per watt.
Verilog-AMS allows for mixed-signal modeling and simulation in a single language. It provides benefits like simplified mixed-signal modeling, decreased simulation time, and improved mixed-signal verification. Previous solutions involved using two simulators or approximating analog circuits, which caused issues like slow simulation and lack of analog results. Verilog-AMS uses constructs from Verilog and Verilog-A to model both analog and digital content together. This avoids issues with interface elements between domains.
This document discusses the verification of Intel's Atom processor. It describes the key verification challenges, methodology used, and results. The main challenges were verifying a new microarchitecture with aggressive schedules and limited resources. The methodology involved cluster-level validation, functional coverage, architectural validation, and formal verification. Metrics like coverage, bug rates, and a "health of model" indicator were used. The results showed a successful pre-silicon verification with few escapes and debug/survivability features working as intended. Key learnings included the importance of keeping the full-chip design healthy early and putting equal focus on testability features.
Here are the key challenges faced in low power design without a common power format:
1. Domain definitions, level shifters, isolation cells, and other low-power techniques are specified differently in each tool using tool-specific command files and languages. This makes cross-tool consistency and validation difficult.
2. Power functionality cannot be easily verified at the RTL level without changing the RTL code, since power domains and low power techniques are not represented. This limits verification coverage.
3. Iteration between design creation and verification is difficult, since changes to the low power implementation require updates to multiple tool-specific specification files rather than a single cross-tool definition. This impacts design schedule and risks inconsistencies.
This document discusses various metrics used to measure the progress and health of CPU verification. It describes architectural verification to ensure implementation meets specifications, as well as unit architecture and system level verification. Key metrics include pass rates for legacy tests, functional coverage, bug rates, lines of code changes, and a health of the model score to measure convergence. Secondary metrics like cycles run, bugs found at different levels, and test bench quality are also outlined.
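One hedged way to combine such metrics into a single "health of the model" number is a weighted score like the sketch below. The inputs, weights, and normalization are illustrative assumptions; the document does not specify how the score is computed.

```python
# Hedged sketch: a composite "health of the model" score from the kinds of
# metrics listed above. Each team would calibrate its own weights.

def health_of_model(pass_rate, func_cov, bug_rate_trend, code_churn):
    """
    pass_rate, func_cov: fractions in [0, 1]
    bug_rate_trend: bugs/week now divided by bugs/week at the peak (lower is better)
    code_churn: changed lines this week divided by a project-specific budget (lower is better)
    """
    weights = {"pass": 0.35, "cov": 0.35, "bugs": 0.20, "churn": 0.10}
    score = (weights["pass"] * pass_rate
             + weights["cov"] * func_cov
             + weights["bugs"] * (1.0 - min(bug_rate_trend, 1.0))
             + weights["churn"] * (1.0 - min(code_churn, 1.0)))
    return round(100 * score)

print(health_of_model(pass_rate=0.97, func_cov=0.88, bug_rate_trend=0.3, code_churn=0.2))
```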
36. Certitude Metrics - ST References
Global Metric
Represents the overall quality of the verification environment
ST reference: 75%, but usually higher
Activation Score
Measures the ability of the test suite to exercise all the RTL of the IP
Similar to code coverage
ST reference: 95%, with 100% explained
Any missing percentage should be studied in depth and either fixed or explained
Propagation Score
Measures the ability of the test suite to propagate mutations to the outputs of the IP
ST reference: 80%, but should probably be raised to 90% by adding more test scenarios
Detection Score
Measures the ability of the environment to catch errors
ST reference: 90%, but usually higher
37. Case study 1: 3rd Party - IP qualification
• Application: 3rd party IP
• HDL directed environment
• ~300 tests, 30 minutes
• Code coverage ~100%
• Scores (ST reference / ST average / this 3rd party IP):
  Activation Score (A/F): 95% / 97% / 97%
  Propagation Score (P/A): 80% / 90% / 80%
  Detection Score (D/P): 90% / 93% / 85%
  Global Metric (D/F): 75% / 80% / 66%
  (The slide shows these values on a radar chart comparing ST reference, ST average, and the 3rd party IP.)
• Challenges
  • Convince the 3rd party IP provider
  • High revenue, high visibility chip; reduce respin risk
• Results
  • Helped us push the IP provider to improve its verification environment and monitor progress
  • Low detection score highlighted manual waveform checks
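A hedged sketch of how the four Certitude scores relate, using the A/F, P/A, D/P, and D/F ratios from the slides above. The fault counts below are invented; the thresholds are the quoted ST reference values.

```python
# Hedged sketch: compute the four scores from fault counts and compare them
# against the ST reference thresholds quoted on slide 36.

faults = 1000      # F: faults injected
activated = 970    # A: faults whose code was exercised
propagated = 800   # P: faults whose effect reached an output
detected = 700     # D: faults flagged by a checker or test failure

activation    = activated / faults        # A/F
propagation   = propagated / activated    # P/A
detection     = detected / propagated     # D/P
global_metric = detected / faults         # D/F (== activation * propagation * detection)

st_ref = {"activation": 0.95, "propagation": 0.80, "detection": 0.90, "global": 0.75}
for name, value in [("activation", activation), ("propagation", propagation),
                    ("detection", detection), ("global", global_metric)]:
    status = "ok" if value >= st_ref[name] else "below ST reference"
    print(f"{name:11s} {value:.0%}  ({status})")
```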