1. The document discusses the importance of having a skeptical and questioning attitude when it comes to engineering and design.
2. It provides examples of past failures, such as the Hyatt Regency walkway collapse and Therac-25 radiation accidents, that were caused by a lack of questioning assumptions or not properly testing designs.
3. The author advocates listening to your "inner validator" and questioning everything, as complex systems can fail in unexpected ways if assumptions and requirements are not thoroughly tested.
1) The document discusses the importance of attitude in validation work, noting that attitude is more important than tools or techniques.
2) It emphasizes that nothing is perfect and all designs have bugs or shortcomings due to compromises, schedules, and unknowns. Accidents are inevitable in engineering work which pushes designs to their limits.
3) The document provides several examples of past engineering failures to illustrate issues like normalization of deviance, unexpected interactions in complex systems, and overreliance on untested assumptions. It stresses the importance of questioning everything, fighting urges to relax requirements, and trusting nothing without proper testing.
This document discusses trends in the design verification industry from 1980 to the present. It notes that ASIC designs are down, fewer large companies are headquartered in the area, and consulting fees and signing authority have decreased. Hardware is seen as more static due to standardized processes and fewer design options. Software is less static with many languages and free options. The author argues that while tools like synthesis, simulation and formal verification have helped, the industry lacks major innovations. Suggestions are made to focus on saving costs through efficient projects and dropping excess features rather than fretting over offshoring or wasted projects of the past.
What to Expect for Big Data and Apache Spark in 2017 (Databricks)
Big data remains a rapidly evolving field with new applications and infrastructure appearing every year. In this talk, Matei Zaharia will cover new trends in 2016 / 2017 and how Apache Spark is moving to meet them. In particular, he will talk about work Databricks is doing to make Apache Spark interact better with native code (e.g. deep learning libraries), support heterogeneous hardware, and simplify production data pipelines in both streaming and batch settings through Structured Streaming.
Speaker: Matei Zaharia
Video: http://go.databricks.com/videos/spark-summit-east-2017/what-to-expect-big-data-apache-spark-2017
This talk was originally presented at Spark Summit East 2017.
The document discusses best practices for developing tests and assessments. It provides guidance on writing different types of test items, including binary choice, matching, and multiple choice questions. For each item type, examples of both faulty and improved items are given to demonstrate how to avoid common pitfalls in writing clear, unambiguous test questions. The document emphasizes using simple language, avoiding negatives, and ensuring response options are logical and mutually exclusive.
How can we prevent accidents caused by human error? This presentation deals with typical examples of severe accidents related to human errors, and shows methods to prevent them.
Artificial Intelligence Robotics (AI) PPT by Aamir Saleem Ansari
Artificial intelligence (AI) is the intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at an arbitrary goal. Colloquially, the term "artificial intelligence" is likely to be applied when a machine uses cutting-edge techniques to competently perform or mimic "cognitive" functions that we intuitively associate with human minds, such as "learning" and "problem solving". The colloquial connotation, especially among the public, associates artificial intelligence with machines that are "cutting-edge" (or even "mysterious"). This subjective borderline around what constitutes "artificial intelligence" tends to shrink over time; for example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence" as it is nowadays a mundane routine technology. Modern examples of AI include computers that can beat professional players at Chess and Go, and self-driving cars that navigate crowded city streets.
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches or on the use of a particular tool or towards the accomplishment of particular applications.
So you’re a big data and distributed systems “expert”: you’ve collected 500 billion data points, thrown them into the sci-lib-of-the-week, you’re using Hadoop, backing onto those cool AWS GPU instances, and you’ve let it grind away for days until it’s spit out the answer to life, the universe and everything. But is it really better than a coin toss?
How do you validate whether your data analysis algorithm works? Are you learning a solution to your problems or just the data you already have? What problems can you encounter when analysing your data? How do you solve them, and what can you do easily under the time pressures of a business environment?
Chaos Engineering: When should you release the monkeys? (ThoughtWorks)
Chaos Engineering is listed as 'Trial' in the ThoughtWorks Tech Radar, but what is it really and how is it different from traditional testing? When and why should you get started with Chaos Engineering and is Chaos Monkey the right place to start when you do?
The document discusses negative results in using Monte-Carlo tree search (MCTS) to play the game of Go. It summarizes that:
1) MCTS has achieved great success in games but faces challenges in Go situations requiring abstract thinking and divide-and-conquer strategies.
2) Trivial situations in Go like "semeai" (liberty racing) are solved poorly by MCTS due to an inability to generalize across similar situations.
3) The paper examines techniques like parallelization, machine learning, genetic programming, and nested MCTS that have not fully addressed these challenges, showing the importance of continued work on these open problems.
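The core MCTS loop the paper builds on is easy to sketch. The following toy UCT implementation is not from the paper, and runs on a Nim-style counting game rather than Go; it only illustrates the select/expand/rollout/backpropagate cycle that all of the techniques above extend:

```python
import math
import random

class Node:
    """One node of a UCT tree for a Nim-like game: a pile of stones,
    players alternate taking 1-3, and taking the last stone wins."""
    def __init__(self, pile, to_move, move=None, parent=None):
        self.pile, self.to_move = pile, to_move
        self.move, self.parent = move, parent
        self.children = []
        self.untried = list(range(1, min(3, pile) + 1))
        self.visits = 0
        self.wins = 0.0  # from the view of the player who moved INTO this node

    def uct_child(self, c=1.4):
        # UCB1: exploitation term plus exploration bonus
        return max(self.children, key=lambda ch:
                   ch.wins / ch.visits + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts_best_move(pile, iters=4000):
    root = Node(pile, to_move=0)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCT while the node is fully expanded
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried child
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.pile - m, 1 - node.to_move, move=m, parent=node)
            node.children.append(child)
            node = child
        # 3. Rollout: play randomly to the end of the game
        p, tm = node.pile, node.to_move
        winner = 1 - tm if p == 0 else None
        while winner is None:
            p -= random.randint(1, min(3, p))
            if p == 0:
                winner = tm
            tm = 1 - tm
        # 4. Backpropagation: credit the win along the path to the root
        while node is not None:
            node.visits += 1
            if winner == 1 - node.to_move:
                node.wins += 1
            node = node.parent
    # Return the most-visited move (the "robust child")
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game the losing positions are the multiples of 4, so from a pile of 5 the search should converge on taking 1 stone; the rollout's inability to share knowledge between similar positions is exactly the generalization weakness the summary describes for semeai.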
Disappointing results & open problems in Monte-Carlo Tree Search (Olivier Teytaud)
This document summarizes negative results from experiments using Monte-Carlo tree search (MCTS) techniques to play the game of Go. The key points are:
1) MCTS achieved early successes in Go but struggles with situations requiring abstract thinking and divide-and-conquer strategies.
2) Attempts to address these weaknesses through parallelization, machine learning, genetic programming, and nested MCTS provided only marginal improvements over random play or shallow search depths.
3) The results highlight important challenges that MCTS does not currently solve and indicate the most important areas of research are improving abstract reasoning and combination of local fights, not hardware or search improvements.
This document provides the rules and questions for different rounds of a science and technology quiz competition. The top 6 teams that answer the most questions correctly will advance to the finals. Questions cover topics in physics, chemistry, history of science and technology, and current events related to science and business. Correct answers in the finals rounds earn significantly more points than in the preliminary rounds.
The document discusses innovation and the process of developing new ideas. It notes that ideas do not come from laptops alone, and that innovation thrives when ideas can connect and recombine in serendipitous ways. The concept of the "adjacent possible" is introduced to describe this phenomenon. Various approaches to problem solving, testing ideas, and learning from failures are also presented.
The document discusses two feedback systems in the brain - the hedonic feedback system related to pain and pleasure, and the attentional feedback system related to boredom and excitement. It proposes that these systems interact, with the hedonic system generating preferences and values unconsciously in the background. Experiments show emotions and unconscious processing play a role in decision making, and that intense feelings can become boring over time as the attentional system loses its memory trace of them.
Introduction to sensation and perception (Lance Jones)
This slideshow was created with images from the web. I claim no copyright or ownership of any images. If a copyright owner of any image objects to the use in this slideshow, contact me to remove it. This is for a course in Introductory Psychology using Wayne Weiten's "Psychology: Themes and Variations" 8th ed. Published by Cengage. Images from the text are copyrighted by Cengage.
The Ludic Fallacy Applied to Automated Planning (Luke Dicken)
This is a short talk I gave to the Strathclyde Planning Group on deficiencies I can see in the way we think and reason about planning in non-deterministic environments. PPDDL - the accepted standard - is overly simplistic and can get us into hot water because we focus on solving the PPDDL problem, rather than the Real World problem it models.
The breakout session that followed was very useful for generating a lot of ideas about different threads we could use to attack the weaknesses of PPDDL and work being done around the edges, which I hope to summarise at some point.
How I Learned to Stop Worrying and Love the ENC (Shaun Mouton)
Working with Puppet Enterprise over the years, we've used quite a few tools and workflows to manage our consumption of modules from the Forge, but few things have been as valuable as Adrien Thebo's r10k, and later the Vagrant plugin Oscar.
Puppet Camp Austin 2015: How I learned to stop worrying and love the ENC (Puppet)
This document outlines a presentation about an organization's initial efforts using Puppet Enterprise, including implementing roles and profiles and adoption challenges. It describes three parts of the presentation: 1) protecting infrastructure with Puppet, 2) stumbling blocks faced during adoption like complex scripts, and 3) how newer Puppet Enterprise consoles make management easier. The presentation aims to make attendees comfortable allowing the console to manage infrastructure and understand code-based methodologies.
Virtual reality originated as an idea by Morton Heilig in the 1960s to simulate environments that interact with human senses. Ivan Sutherland continued developing the concept using computer graphics and head-mounted displays. In the late 1960s, the US military and NASA recognized VR's potential and helped advance the technology for flight simulation training. Today, VR is used across many fields including education, training, medicine, and more, and its future applications may involve direct integration with the human body and nervous system.
The document discusses how manufacturers must prepare for unpredictable changes by adopting a scenario planning approach. It notes that changes like new technologies happen faster and more extensively than expected, creating tsunami-like disruptions. To cope, companies need flexible organizational structures and should consider multiple potential futures rather than relying on predictions or current assumptions. Scenario planning can help companies systematically envision different technological and market scenarios to guide strategic planning.
This document discusses copyright infringement and its key aspects. It addresses whether a work is subject to copyright, ownership of copyright, and what constitutes primary infringement such as copying or adapting a substantial part of a work. Non-literal copying can also infringe if there is resemblance and a causal link between works. Secondary infringement involves commercial dealing in unauthorized copies or communicating a copyrighted work to the public for business purposes. The document provides examples of court cases related to assessing infringement.
This document discusses failure through quotes and anecdotes from various entrepreneurs and designers. It describes high-profile failures like the crash of Air France Flight 447 due to pilot error. Designers like Milton Glaser and Paula Scher discuss how failure can aid development and the importance of distinguishing failure from bad luck. The document also examines how systems can be designed to prevent failures and gives examples of preventable medical errors and plane crashes.
Unraveling mysteries of the Universe at CERN, with OpenStack and Hadoop (Piotr Turek)
I will talk about the challenges faced, lessons learned and fun I had while reinventing the way offline data analysis is done at one of LHC (Large Hadron Collider) experiments. A journey, which took us to another land: of contemporary Big Data stack, and which finally married those two. Did it make any sense in the end? Come and you will know.
Among other things you will learn:
• the why, what and how of data analysis at CERN
• why latency variability in large distributed systems matters (literally ;))
• why using C++ as a scripting language is both the best and the worst idea ever
• how to implement a reliable Hadoop cluster provisioning mechanism on OpenStack
• how to marry a huge data analysis framework written in C++ with Hadoop 2
• what is the moral of this story
This document summarizes a presentation about implementing private clouds. It discusses key concepts in building reliable private clouds such as isolation, concurrency, failure detection, fault identification, live upgrades, and stable storage. It also covers challenges such as complexity, automation, configuration management, continuous delivery, communities of practice, testing, monitoring, skills like web operations, networking and storage, and high availability even during failures. The document emphasizes that building reliable clouds at scale is difficult and requires addressing many technical challenges.
This document outlines a data science competition to build a spam detector using email data. Participants will be provided with training data containing 600 emails and their corresponding labels (spam or not spam). They will use this data to build a model to classify new emails as spam or not spam. The goal is to correctly classify as many new test emails as possible. Visualization and interpretation of results will be important for evaluating model performance and identifying ways to improve the spam detection.
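As a rough illustration of the kind of baseline a participant might start from (this is not part of the competition materials), a from-scratch multinomial Naive Bayes classifier over word counts, with Laplace smoothing, could look like:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Train a multinomial Naive Bayes model on (text, 0/1-label) pairs,
    where label 1 means spam."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter(labels)
    for doc, y in zip(docs, labels):
        word_counts[y].update(doc.lower().split())
    vocab = set(word_counts[0]) | set(word_counts[1])
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Return the label with the highest log-posterior."""
    word_counts, class_counts, vocab = model
    n_docs = sum(class_counts.values())
    best_label, best_lp = None, -math.inf
    for y in class_counts:
        lp = math.log(class_counts[y] / n_docs)  # log prior
        total = sum(word_counts[y].values())
        for w in doc.lower().split():
            # Laplace-smoothed log likelihood of each word
            lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = y, lp
    return best_label
```

With only 600 training emails, holding out a validation split and inspecting the confusion matrix (rather than raw accuracy alone) is what the "visualization and interpretation" requirement is getting at.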
A Kanban Case Study At MoneySuperMarket (ThoughtWorks)
The document discusses Kanban concepts for software development including making all work visible through a Kanban board, limiting work-in-progress to increase flow, and using queues, buffers, and limits to manage workflow from analysis through deployment and into a portfolio. It also touches on techniques like maximizing throughput, pulling work rather than pushing it, reducing multitasking, enhancing teamwork, and the maxim "stop starting, start finishing."
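The pull-and-WIP-limit mechanics above can be sketched in a few lines. This `KanbanBoard` class is a hypothetical illustration of the rules, not code from the case study:

```python
class KanbanBoard:
    """Minimal Kanban board: named columns with WIP limits; work moves
    downstream only when the destination column pulls it and has capacity."""
    def __init__(self, limits):
        self.limits = dict(limits)  # column name -> WIP limit (None = unlimited)
        self.columns = {name: [] for name in self.limits}

    def _has_capacity(self, column):
        limit = self.limits[column]
        return limit is None or len(self.columns[column]) < limit

    def add(self, item, column):
        """Add new work to a column, respecting its WIP limit."""
        if not self._has_capacity(column):
            raise ValueError(f"WIP limit reached in {column!r}")
        self.columns[column].append(item)

    def pull(self, item, src, dst):
        """The downstream column pulls work; nothing is ever pushed into it."""
        if not self._has_capacity(dst):
            raise ValueError(f"WIP limit reached in {dst!r}")
        self.columns[src].remove(item)
        self.columns[dst].append(item)
```

Because `pull` refuses to move work when the destination is full, a blocked downstream stage immediately backs pressure up the board instead of letting work pile up invisibly.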
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.
Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation.
Why is Software Testing Important to a business?
Software testing is a process to determine the quality of the software developed by a developer or programmer. It is a methodical study intended to evaluate quality-related information about the product. Understanding the key features and advantages of software testing helps businesses in their day-to-day activities.
Testing can be done in two ways: manual testing and automated testing. Manual software testing is done by human testers, who exercise the software by hand and report the bugs they find. In automated testing, tests are executed by a computer using software such as WinRunner, LoadRunner, etc.
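A minimal example of what an automated test looks like in practice, using Python's standard `unittest` module and a hypothetical `apply_discount` function (tools like WinRunner and LoadRunner target GUI and load testing respectively, but the principle of machine-executed checks is the same):

```python
import unittest

def apply_discount(price, percent):
    """Function under test (hypothetical): apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # 25% off 200.0 should yield 150.0
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        # A 0% discount must leave the price unchanged
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Out-of-range percentages are an error, not silently clamped
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest`; once written, these checks cost nothing to re-run on every change, which is the core economic argument for automation.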
IP Reuse Impact on Design Verification Management Across the Enterprise (DVClub)
The document discusses challenges with IP reuse dependency management across hardware design projects. It notes that verification reuse is often neglected and that finding and fixing issues on complex projects can be difficult without proper dependency tracing of IP instances, designs, and versions. The presentation recommends establishing processes and checklists for IP verification and design history tracking to facilitate reuse. It also shares survey results about the organizational impacts of improved IP reuse dependency management, such as more efficient engineering resource usage and 30% faster time to market.
The document describes Cisco's Base Environment methodology for digital verification. It aims to standardize the verification process, promote reuse, and improve predictability. The methodology defines a common testbench topology and infrastructure that is vertically scalable from unit to system level and horizontally scalable across projects. It provides templates, scripts, verification IP and documentation to help teams set up verification environments quickly and leverage existing best practices. The standardized approach facilitates extensive code and test reuse and delivers benefits such as faster ramp-up times, improved planning, and higher return on verification IP development.
Why is Software Testing Important to a business?
Software testing is a process to determine the quality of the software developed by a developer or programmer. It is a methodological study intended to evaluate the quality-related information of the product. Understanding of the important features and advantages of software testing helps businesses in their day-to-day activities.
Testing can be done in two ways, manual testing and automated testing. Manual software testing is done by human testers, who manually check the code and report bugs in it. In case of automated testing, testing is performed by a computer using software such as WinRunner, LoadRunner, etc.
IP Reuse Impact on Design Verification Management Across the EnterpriseDVClub
The document discusses challenges with IP reuse dependency management across hardware design projects. It notes that verification reuse is often neglected and that finding and fixing issues on complex projects can be difficult without proper dependency tracing of IP instances, designs, and versions. The presentation recommends establishing processes and checklists for IP verification and design history tracking to facilitate reuse. It also shares survey results about the organizational impacts of improved IP reuse dependency management, such as more efficient engineering resource usage and 30% faster time to market.
The document describes Cisco's Base Environment methodology for digital verification. It aims to standardize the verification process, promote reuse, and improve predictability. The methodology defines a common testbench topology and infrastructure that is vertically scalable from unit to system level and horizontally scalable across projects. It provides templates, scripts, verification IP and documentation to help teams set up verification environments quickly and leverage existing best practices. The standardized approach facilitates extensive code and test reuse and delivers benefits such as faster ramp-up times, improved planning, and higher return on verification IP development.
Intel Xeon Pre-Silicon Validation: Introduction and ChallengesDVClub
This document discusses the challenges of pre-silicon validation for Intel Xeon processors. Some key challenges include: reusing design components from previous projects which may have incomplete or poorly written code; managing cross-site validation teams; developing sufficient stimulus and checking while minimizing overhead; achieving high functional coverage within tight validation windows; and ensuring tests can be ported between pre-silicon and post-silicon environments. The validation process aims to quickly comprehend new features and design changes while validating the full chip design before tapeout.
The document discusses how shaders are created and validated for graphics processing units (GPUs). Shaders are created by applications and sent to the GPU through graphics APIs and drivers. They are then executed by the GPU's shader processors. The validation process uses layered testbenches at the sub-block, block, and system levels for maximum controllability and observability. It also employs a reference model methodology using C++ models and hardware emulation to debug designs faster than simulation alone. This methodology helps improve the graphics development schedule.
This document appears to be a presentation given by AMD on verification challenges for graphics ASICs. The presentation covers an overview of AMD, GPU systems, 3D graphics basics, and verification challenges. It discusses the size and complexity of GPUs, layered code and testbenches used for verification, and the use of hardware emulation and functional coverage.
1. The document discusses methodologies for hardware verification and developing an efficient verification flow.
2. It recommends defining a conceptual framework for the flow to standardize some aspects while allowing for diversity and innovation.
3. Using transaction level modeling and assertions in early stages like the specification model can help validation before the RTL design stage. Assertions can be written at different levels from the specification to the RTL and testbench.
Praveen Vishakantaiah, President of Intel India, discussed the challenges of validating next generation CPUs. Validation is increasingly complex due to factors like rising design complexity from multi-core processors and chipset integration, as well as shorter time to market windows. Validation efforts are also not scaling incrementally with post-silicon development. Addressing these challenges requires experienced architects and validators working closely together, instrumentation of design models to enable validation, reuse of validation tools, and scaling of emulation and formal verification techniques. Validation is critical to meeting customer satisfaction and business goals around schedule and costs.
This document discusses using the IP-XACT standard to address challenges in verification automation. IP-XACT allows generating verification platforms, register tests, and other elements from a single IP description. It standardizes IP information exchange and reduces duplication. Using IP-XACT, a verification flow is proposed where the testbench, models, and register tests are automatically generated from an IP-XACT file, improving consistency and reducing turnaround times. IP-XACT is now an IEEE standard developed by the SPIRIT consortium to describe IPs in a vendor-neutral way and enable maximum automation.
Validation and Design in a Small Team EnvironmentDVClub
The document discusses validation and design in small teams with limited resources. It proposes constraining designs to a single clock rate, standardized interfaces, and automated test cases to streamline verification. This reduces complexity and verification costs, allowing designs to be completed more quickly despite limited experience. Standardizing interfaces and separating algorithm from implementation verification improves efficiency enough to overcome typical verification to design ratios.
This document discusses trends in mixed signal validation. It begins with an overview of mixed signal systems that contain both analog and digital components. The evolution of mixed signal validation is then described, from early approaches that simulated analog and digital components separately to modern tools that can jointly simulate both domains using languages like Verilog-AMS. The key steps in mixed signal validation are outlined, including modeling components in Verilog-AMS, validating blocks, and performing system-level validation. Throughout, the importance of accurate models for verification is emphasized. Examples of mixed signal modeling and a charge pump PLL validation environment are also provided.
Verification teams at chip design companies now work globally, presenting communication challenges. Time zone differences make real-time collaboration difficult, and documentation through tools like TWiki can suffer if not well-organized. However, global teams also provide benefits by making more people and creative ideas available. Companies like AMD are addressing these issues through centers of expertise that standardize methodologies, tools, and components to facilitate collaboration across sites, while still allowing projects flexibility and innovation. Regular reviews help continuously improve processes as new techniques are adopted or abandoned.
Greg Tierney of Avid presented on their experiences using SystemC for design verification. Some key points:
1) Avid chose SystemC to enhance their existing C++ verification code and take advantage of its built-in verification capabilities like randomization and multi-threading.
2) SystemC helped Avid solve problems like connecting entire HDL modules to their testbench and monitoring foreign signals.
3) While SystemC provided benefits, Avid also encountered issues with its compile/link performance and large library size. Overall, Avid found SystemC reliable for design verification over three years of use.
This document provides an overview of the verification strategy for PCI-Express. It discusses the PCI-Express protocol, including the physical, data link, transaction, and software layers. It outlines the verification paradigm, including functional verification using constrained random testing, assertions, asynchronous/power domain simulations, and performance verification. It also discusses compliance verification through electrical, data link, transaction, and system architecture checklists. Finally, it discusses design for verification through a modular and scalable architecture to promote reusability and reduce verification effort and complexity.
SystemVerilog Assertions (SVA) in the Design/Verification ProcessDVClub
1) Visual SVA tools like Zazz allow designers to create complex SystemVerilog assertions through a graphical interface, addressing issues with SVA syntax.
2) Zazz also enables debugging assertions as they are created by generating constrained random tests, improving assertion quality before use in verification.
3) Using assertions improved the author's verification and debugging process, identifying errors sooner and in corner cases, and provided additional value to IP customers through early fault detection.
The document discusses methodologies for improving efficiency in verification testing at Cisco, including using reusable components from other projects, avoiding duplicate specifications, providing flexible testbenches, and automating tasks. It provides examples used at Cisco such as separating testbench creation into three stages, using testflow to synchronize component behavior, reusing unit-level checkers, linking transactions between checkers, and generating common infrastructure from templates to reduce designer effort. The biggest efficiency gains come from methodologies that push shared behavior into reusable components and standardize common elements.
1) Pre-silicon verification is increasingly important for post-silicon validation as design complexity grows and schedules shrink. Bugs that escape pre-silicon verification can significantly impact post-silicon schedules and effort.
2) Mixed-signal effects, power-on/reset sequences, and design-for-testability features need to be verified pre-silicon to avoid difficult to reproduce bugs during post-silicon validation.
3) Case studies demonstrate how low investment in pre-silicon verification of areas like power-on/reset sequences and design-for-testability features can lead to longer post-silicon schedules due to unexpected bugs.
The document discusses Sun Microsystems' UltraSPARC T1 processor. It provides an overview of the processor's features, including its implementation of chip multi-threading with up to 8 cores and 32 threads. It describes the processor's design choices such as shared caches and memory controllers. It also discusses Sun's strategy for verifying the processor's architecture and microarchitecture through directed testing, coverage metrics, and other techniques. Finally, it notes some of the benefits of chip multi-threading for performance, cost, reliability, and power efficiency.
Intel Atom Processor Pre-Silicon Verification ExperienceDVClub
This document discusses the verification methodology and results for the Intel Atom processor. It describes the challenges of verifying a new microarchitecture with power management features on an aggressive schedule. The methodology involved cluster-level validation with functional coverage, architectural validation using an instruction set generator, and power management validation. Verification metrics like coverage and bug rates were tracked. The results included booting Windows and Linux 10 hours after receiving silicon, with few functional bugs found post-silicon that weren't corner cases. Debug and survivability features helped reduce escapes.
This document discusses using assertions in analog mixed-signal (AMS) verification. It describes how assertions can be used to check interface assumptions, power mode transitions, and timing relationships for AMS blocks. Assertions provide compact and precise checks that can be reused across different verification methodologies. The document also provides an example of using Verilog-AMS monitors to digitize continuous signals from an AMS model so they can be checked using SystemVerilog assertions.
This document discusses challenges and requirements for low-power design and verification. It begins with an overview of how leakage is significantly increasing due to process scaling and how active power is now a major portion of power budgets. New strategies are needed to address process variations and enhance scaling approaches. The verification flows must support multi-voltage domain analysis and rule-based checking across voltage states while capturing island ordering and microarchitecture sequence errors. Low-power implementation introduces challenges for design representation, implementation across tools, and verification. Methodologies and design flows must be adapted to account for power and ground nets becoming functional signals.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
2. Attitude
I could talk about techniques, tools,
FV environments, algorithms, machinery,
languages, suites, training,
but I think attitude is more important than any of those
3. No Perfect Designs
4/4/07 Bob Colwell
Nothing is perfect, everything has bugs
– Shortcomings, compromises, defects, design errata, gaffes, goofs, fumbles, errors, boneheaded mistakes, bobbles, bungles, boo-boos
– But not all bugs are equal!
Can’t test to saturation: schedule matters too
Why is everything always so darned buggy?
– Software…need say no more…
– Why did Titanic not have waterproof compartments?
– Why did Ford Pinto have gas tank in back?
– Why did Challenger fly with leaky O-rings?
– Why did torpedoes not explode in WWII?
Entropy has a preferred direction
Only genius could paint Mona Lisa, but any small child can destroy it quickly
1000 ways to do things wrong, 1 or 2 that work
5. Accidents Are Inevitable
– It’s the nature of engineering to push designs to edge of failure (schedule, reliability, thermals, materials, tools, judgment of unknowns)
– P(accident) = ε, for ε ≠ 0
– World rewards this behavior
Cool new features + first to market often preferred to dependability
Other markets (life-support) make (or should make) this trade-off differently!
6. Isn’t that just Murphy’s Law?
Close. But Murphy is not quite right.
1. #Near-misses >> #disasters
2. Competent design/test finds simple errors
3. Complex sequences & unlikely event cascades survive to prod’n
7. Failures Getting Worse
Mechanical things usually fail predictably due to physics
– Wings bend, bridges groan, engines rattle, knees ache
– By contrast, computer-based things fail “all over the place”
Helpful Engineering Attitude:
1. Nature does not want your engineered system to work; will actively work against you
2. Your design will do only what you’ve constrained it to do, only as long as it has to
3. Watch out for… normalization of deviance (Challenger O-rings, Apollo 1 fire)
8. The Steely-Eyed Missile Validator
Apollo 12
2nd try to land on moon, launched 11/14/69
36 seconds after liftoff, spacecraft struck by lightning => power surge
– All telemetry went haywire; book said to abort liftoff
– Both spacecraft pilot and mission controller were furiously considering that option
– But John Aaron was on shift, and thought he’d seen this malfunction before
During testing 1 year earlier, Aaron observed test that went off into weeds
– Aaron took it on himself to investigate this – led him to obscure SCE subsystem
In critical “abort or not” few seconds, with lives on line, Aaron made one of most famous calls in NASA history
– “Flight, try SCE to ‘Aux’”
– Neither Flight nor spacecraft pilot Conrad knew what that even meant, but Alan Bean tried it
– Telemetry came right back, vaulted Aaron into validation stardom
He could have blown off earlier test, but he didn’t
His inner validator wanted to know “what just happened?”
Isaac Asimov once said 3 most important words in science are “What was THAT?”
9. Complexity Implies Surprises
…and surprises are bad
Chaos effects in complex µP’s
– Decomposability is a fundamental tenet of complex system design
– Butterfly wings ruin decomposability
– “Improve design, get slower performance” not at all uncommon
We must stop designing large systems as though small ones simply scale up
– lesson from comm engineers: assume errors
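The comm engineers’ lesson — assume errors will happen rather than hoping they won’t — can be sketched in a few lines of Python: the sender appends a CRC-32 to every frame, and the receiver verifies it before trusting the payload. This is an illustrative sketch, not anything from the talk; the function names are mine, and only the standard-library `zlib.crc32` is used.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def receive_frame(frame: bytes) -> bytes:
    """Verify the CRC before trusting the payload; assume errors happen."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupted frame detected")
    return payload

frame = make_frame(b"telemetry")
assert receive_frame(frame) == b"telemetry"       # clean link: payload accepted

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]  # flip one bit "in transit"
try:
    receive_frame(corrupted)
except ValueError:
    pass                                          # single-bit error is caught
```

The design choice mirrors the slide’s point: the receiver’s correctness does not depend on the channel behaving, because the error case is designed in from the start.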
10. Thinking about validation
Ability to think in analogies is highest form of intelligence
– IQ tests like “a:b :: c:d”
– Hofstadter’s book: numerical sequences
Analogies may illuminate a subject in a way that direct introspection cannot
– They drive our minds to their creative limits
11. Listen to Your Inner Validator
0, 1, 2, …?
You knew it wouldn’t be 3, didn’t you?
– You sensed something’s not quite as it seems
Answer: 0, 1, 2, 720!, …
= 0, 1, 2, 6!!
= 0, 1!, 2!!, 3!!!, …
That was the voice of your inner validator that you were hearing
D. Hofstadter, Fluid Concepts and Creative Analogies
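The trick in the puzzle is reading n followed by n factorial signs as nested application — 2!! = (2!)! and 3!!! = ((3!)!)! = (6!)! = 720! — rather than the conventional double-factorial. A quick sketch (the function name is mine, not Hofstadter’s) to check the pattern:

```python
from math import factorial

def nested_factorial(n: int, k: int) -> int:
    """Apply factorial k times: nested_factorial(3, 3) is ((3!)!)!.
    Note this is NOT the usual double-factorial n!! = n*(n-2)*..."""
    result = n
    for _ in range(k):
        result = factorial(result)
    return result

# The slide's sequence: term n is n with n nested factorials applied.
seq = [nested_factorial(n, n) for n in range(3)]  # [0, 1, 2]
# Term 3 is 3!!! = ((3!)!)! = (6!)! = 720!, an integer with over 1700 digits
```

The first three terms collapse to 0, 1, 2 (since 1! = 1 and (2!)! = 2), which is exactly why the innocent-looking "0, 1, 2, …" hides a surprise at the fourth term.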
13. What Happened?
Spec was marginal
40’ threaded rods “too hard”, changed to 2x20’ by contractor
No simulation, no test
Who goofed?
Engineer, contractor, inspector…everyone
15. Question Everything
Test assumptions as well as design
– If assumptions are broken, design surely is too
– Try to “catch the field goals”
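In software terms, “test assumptions as well as design” can mean checking a routine’s preconditions explicitly rather than trusting its callers. A minimal hypothetical sketch (not from the talk): a binary search whose design assumes sorted input, and which tests that assumption instead of silently returning garbage when it is broken.

```python
def binary_search(xs: list, target) -> int:
    """Return the index of target in xs, or -1 if absent.
    The design assumes xs is sorted — so test the assumption too."""
    assert all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1)), \
        "broken assumption: input not sorted, so the design surely is too"
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo if lo < len(xs) and xs[lo] == target else -1

assert binary_search([1, 3, 5, 7], 5) == 2
assert binary_search([1, 3, 5, 7], 4) == -1
```

A validation suite that only checks outputs on well-formed inputs never exercises the assumption; the explicit check turns a silent wrong answer into a loud failure.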
16. Fight Urge to Relax Requirements
Challenger
– Not ok to slip design assumptions (launch temp, # of unburnt O-rings) to suit desires
Airbus
– Blaming pilot not reasonable explanation; pilot is part of system design
Runway “incursions” up 71% since ‘93
– Near-misses are trying to tell us something
Diane Vaughan, The Challenger Launch Decision, Chicago Press 1996; Nancy Leveson, Safeware, Addison-Wesley 1995
17. If You Didn’t Test It, It Doesn’t Work
Mir: fire extinguishers bolted to wall
– Still had strong metal launch straps
– Had never been needed before, so never tested
– Discovered with a roaring fire several feet away
4/4/07 Bob Colwell
Complexity Makes Everything Worse
Some things must be complicated to do their job
– Our brains, for example
But complex sequences are root of most disasters
– Challenger, Bhopal, Chernobyl, FDIV, Exxon Valdez
Where does complexity come from? Why does it keep increasing? Where are the limits?
– Pentium 4
– “in the small” vs “in the large” design (micros vs comm systems)
What to do? Vigilance, testing, awareness…we are all validators
What To Do
Get the spec right
Design for correctness but… design knowing perfection is unattainable
Users are part of the system
Formal methods
Pre-production testing and validation
Post-production testing and verification
Education of the public
Roles
Engineers must stand their ground
– There are always doubts, incomplete data; don’t let ‘em use those against you
Judgment is crucially needed -- YOURS
– Remember the Challenger
“My God, Thiokol, when do you want me to launch? Next April?”
– Be careful with “data”
“Risk assessment data is like a captured spy; if you torture it long enough, it will tell you anything you want to know…” (Wm. Ruckelshaus)
– Crushing, conflicting demands are norm
Design must push the envelope w/o ceding responsibility
Validation establishes whether they've pushed it too far
Management must beware overriding tech judgment
Public must understand limits of human design process
All players must value roles of others!
[diagram: engineer, mgt, HR]
Roles cont.
Management
– wants to assume a product is safe
– knows nothing’s ever perfect; comes a time to “shoot the engineers” or they’ll never stop tinkering
Validators
– want to prove a product is safe
– assume it is not by default
– only informed arbiters of when product is ready
– don’t fall for “might as well sign, we’re…
Future Directions: Public Expectations
Andy Grove’s FDIV epiphany
Paradoxically, the more high tech, the more the public expects of a product
Users caused Chernobyl, TMI by going “off book”, but prevented many other disasters with real-time creativity…lessons are subtle
Takes exquisite understanding & judgment to discern accidents from reasonable risk-taking and bonehead errors or incompetence
This is what a jury must do. How?
Can’t keep trending this way
Future of Validation:
Multiple Culture Changes Needed
Public needs to stop expecting perfection
Design teams must explicitly limit complexity and avoid auto-scale-up assumptions
Companies must mature past point of viewing validation as an unpleasant overhead
– does your company have “Validation Fellows”?
Validation is a profession of its own.
Cultivate the Validation Attitude!