Functionality testing involves developing test cases for new code based on software function specifications, marketing requirement specifications, and the developer's code. Test cases are the foundation of quality assurance and should draw on equivalence classes, boundary values, decision tables, state transitions, and all-pairs techniques to ensure thorough coverage. Quality functionality testing requires understanding the purpose of new features, communicating with developers, designing test cases thoroughly, executing tests carefully, and reviewing the results.
Components - Crossing the Boundaries while Analyzing Heterogeneous Component-... (ICSM 2011)
Paper: "Crossing the Boundaries while Analyzing Heterogeneous Component-Based Software Systems"
Authors: Amir Reza Yazdanshenas, Leon Moonen
Session: Research Track Session 7: Components
Functional testing is a type of software testing that validates software functions or features based on requirements specifications. It involves testing correct and incorrect inputs to check expected behaviors and outputs. There are different types of functional testing including unit testing, integration testing, system testing, and acceptance testing. Testers write test cases based on requirements and specifications to test the functionality of software under different conditions.
An easy software interface to guide the user through program creation: intuitive and easy to use; 5x7 full-color touch panel display; inviting to look at; most options are preset and allow toggling between options; testing capability through a preview screen.
Get the Balance Right: Acceptance Test Driven Development, GUI Automation and... (Michael Larsen)
The document discusses different testing approaches including Acceptance Test Driven Development (ATDD), Test Driven Development (TDD), GUI automation, and exploratory testing. It explains that ATDD and TDD are design processes that help ensure software meets project needs, while testing involves asking questions of a product. GUI automation can simulate user actions but is fragile. Exploratory testing involves testing design and execution together in a flexible way. The document argues that these approaches work best in balance and that exploration is important at all levels, including with automation. It emphasizes putting the customer first and seeing the approaches as interdependent parts of an overall quality process.
Black Box Testing Techniques by Sampath M (Forziatech)
This document provides an overview of black box testing techniques. It begins with an agenda that outlines topics like quality assurance, testing types (functional vs non-functional), and black box testing. The document then defines black box testing as testing an application's functionality without knowing its internal structure or code. Common black box techniques are described like equivalence partitioning, boundary value analysis, decision tables, and error guessing. Examples of using these techniques for testing valid and invalid salary, date, and month fields in a talent management application are also provided.
This document provides an introduction to automation testing. It discusses the need for automation testing to improve speed, reliability and test coverage. The document outlines when tests should be automated such as for regression testing or data-driven testing. It also discusses automation tool options and the process for automating tests. While automation testing provides benefits like time savings, it also has limitations such as the need for programming skills and maintenance of test code. Key challenges of automation testing include unrealistic expectations of tools and dependency on third party integrations.
This document discusses automation testing. It begins by defining automation testing and listing its benefits, which include saving time and money, improving accuracy, and increasing test coverage. It then covers levels of automation testing, frameworks, approaches like record and playback, modular scripting, and keyword-driven testing. The document also discusses the automation testing lifecycle, how to choose a testing tool, types of tools, when to automate and who should automate, supporting practices, and skills needed for automation testing.
On codes, machines, and environments: reflections and experiences (Vincenzo De Florio)
Code explicitly refers to a reference machine and, implicitly, to a set of conditions often called the system model and the fault model.
If one wants to guarantee an agreed-upon quality of service, one needs to either make assumptions about those conditions or adapt to them.
In this lecture I present this problem and a number of solutions, both practical and theoretical, that I have devised in the course of my career.
Although the main accent is on programming languages, here I provide links and references to other approaches that operate at algorithmic- and system-level.
The document provides information on software testing techniques including boundary value analysis, equivalence class testing, and decision table-based testing. It discusses topics like boundary value testing, robustness testing, worst-case testing, and special value testing. Examples are provided to illustrate test cases generated for problems related to triangles, date functions, and sales commissions using techniques like boundary value analysis, equivalence class testing, and decision tables. Guidelines for applying these techniques effectively are also outlined.
This document discusses code analysis and techniques for predicting runtime errors in source code. It describes existing solutions like detecting uninitialized variables, overflows, divide by zeros, incorrect argument data types. It also discusses detecting out-of-bounds array and pointer references, memory allocation/deallocation errors, and memory leaks. The document outlines the design of a code analyzer that takes C code as input, performs lexical and syntax analysis to generate intermediate code, and then uses the intermediate code to predict possible runtime errors. Further work mentioned includes evaluating the intermediate code to perform data and control flow analysis for error prediction.
Nanometer chip testing faces new challenges due to increasing process variations, new failure mechanisms, and higher costs. Solutions include new fault models like bridge fault testing, delay fault testing to check timing at high speeds, scan compression to reduce test data volume, and scan-based diagnostics to improve yield learning. Effective solutions require close collaboration between test, technology, and automated test equipment experts.
1) Testing at the nanometer scale presents new challenges due to increasing process variations, complex signal integrity issues, and new defect mechanisms.
2) New test techniques are needed to detect failures such as small delay defects and high-resistance bridges. Approaches such as bridge fault testing and delay fault testing generate significantly more test patterns.
3) Solutions to reduce cost and power consumption during testing include scan compression techniques, preventing unnecessary switching during scan shifts, and developing power-aware test patterns.
1) JustRunIt is an experiment-based infrastructure for managing virtualized data centers that uses VM cloning and workload replay to conduct management experiments in a sandbox.
2) Case studies show JustRunIt can determine optimal resource allocations to meet performance targets with minimal resources, outperforming highly accurate modeling.
3) JustRunIt can also evaluate hardware upgrades by running experiments on upgraded sandbox hardware.
The document discusses the importance of measuring metrics related to code quality and health in order to transition from working software to high quality software. It provides examples of metrics that can be measured such as coding standard violations, duplicated code, dead code, code coverage by tests, number of open tasks, complexity of expressions, and modularity. It explains what different trends in these metrics may indicate and considerations for utilizing the metrics such as tools, policies, and developer skills. Regular measurement and response to trends helps avoid issues like increasing technical debt and the "Broken Window Syndrome."
Python is an interpreted, open source programming language that is simple, powerful, and preinstalled on many systems. It has less syntax than other languages and a plethora of penetration testing tools have already been created in Python. Python code is translated and executed by an interpreter one statement at a time, allowing it to be run from the command prompt, through command prompt files, or in an integrated development environment. The language uses whitespace and comments to make code more readable. It can perform basic operations like printing, taking user input, performing conditionals and loops, defining reusable functions, and importing additional modules.
Effective Test Suites for Mixed Discrete-Continuous Stateflow Controllers (Lionel Briand)
The document describes algorithms for generating effective test suites for mixed discrete-continuous controllers modeled in Stateflow. It introduces the challenges of testing cyber-physical systems with both discrete and continuous behaviors. It then presents six test generation algorithms, including ones based on input diversity, state/transition coverage, and output diversity/stability/continuity. An evaluation of these algorithms on three industrial case studies examines their fault detection abilities, how they compare to each other, and how test suite size impacts results. The best performing algorithms focused on maximizing differences between output signals.
Dynamic testing analyzes the dynamic behavior of code by executing it with different inputs and checking the outputs. There are two main types: black box testing which tests functionality without viewing internal structure, and white box testing which tests based on internal structure. Black box techniques include boundary value analysis, equivalence partitioning, error guessing, cause-effect graphing, and state transition testing. White box techniques include code coverage and complexity analysis. Dynamic testing can find errors not detected through static analysis but takes more time than static testing.
White box testing is a software testing technique that tests internal coding and infrastructure. It involves writing test cases that exercise the paths in the code to help identify missing logic or errors. The document discusses various white box testing techniques like statement coverage, decision coverage, loop coverage, condition coverage, and path coverage. It also discusses performing white box testing at the unit, integration, and system levels. The session will cover white box testing at the unit level using control flow analysis techniques like building control flow graphs and analyzing possible paths.
Test Driven Development With YUI Test, Ajax Experience 2008 (Nicholas Zakas)
This document discusses test-driven development (TDD) using the YUI Test framework. It introduces TDD principles like writing tests before code and iterating through failing tests, passing tests, and refactoring. It then covers key aspects of YUI Test like writing unit tests, simulating events, handling asynchronous code, and hooking into the test runner. The document provides examples and recommendations for effectively using TDD and YUI Test in web development.
This document discusses using test behavior metrics to build pre-release and post-release defect prediction models. It finds that test failure metrics like unreliable test cases executed, test failure bursts, and number of failing execution contexts can predict defects with high precision and recall. Models built using these metrics performed better than coverage-based models and were more accurate for pre-release defects compared to post-release defects. The most influential metrics for prediction were the relative number of unreliable tests, test failure bursts, and number of failing execution contexts.
Cobol is a robust, English-like programming language used widely in enterprise applications. It turned 50 years old in 2009 and remains heavily used due to the large amount of existing Cobol code and the challenges of migrating data to new systems. Cobol uses a structured programming style with four divisions - identification, environment, data, and procedure. The data and procedure divisions declare variables and contain the program logic. Cobol supports common control structures like conditional statements and loops. Records allow grouping of related data fields.
The document summarizes experiences with test automation and discusses three test automation frameworks developed over 10 years for marketplace systems. It describes the process of contributing chapters to a book on test automation experiences. The frameworks aimed to automate an increasing number of tests to support agile development practices. The first framework developed tests quickly under time pressure with a thin abstraction layer. The second framework focused more on the test tool than tests. The third framework aimed for a thicker abstraction layer but architects became bottlenecks. ROI analysis showed savings from automation increased with larger test batches.
The document discusses code coverage and provides guidance on how to properly use and interpret code coverage metrics and data. It cautions against aiming for 100% coverage and instead advocates using coverage information to identify areas for improving test cases and finding dead code. The document also warns of potential misuses of code coverage like assuming it guarantees quality or that test generation tools can replace manual testing.
This document discusses software test design, including seatwork activities, premidterm exams, equivalence partitioning, boundary value analysis, black box and white box testing techniques. It provides examples of statement coverage and branch coverage testing. It also discusses condition coverage testing and illustrates a control flow graph. Finally, it provides references to three premid articles on testing, maintenance, and project management.
This document discusses various functional testing techniques, including:
- Boundary value analysis, which tests inputs at minimum, maximum, and nominal values to find faults.
- Equivalence class testing, which divides the input domain into classes and tests one representative from each class.
- Decision table testing, which represents logical relationships between inputs and outputs in a table to derive test cases.
The techniques aim to design test cases that have a higher probability of failure and cover all possible program functionality through a black box approach. Functional testing treats the program as a black box and ignores internal structure.
The value of "a.value" will be printed to the VBA Immediate window when that line is executed. The Debug.Print statement sends its output to the Immediate window, which is useful for inspecting variable values while code is running without stopping the execution.
This document discusses several bugs found in various networking devices and products. For each bug, it provides details about the issue and asks how test cases could be designed to catch the bug, why it was not caught internally, and what test strategies could cover it. It aims to analyze customer-found bugs to improve testing methods.
Why we didn't catch that application bugs (gaoliang641)
The document discusses 30 different software bugs found in various systems. For each bug, it provides a brief description of the issue and asks how test cases could be designed to catch the bug, why it was not caught internally, and what test strategies could help cover such bugs in the future. The bugs covered a wide range of systems, including WebEx, Taobao, Google Docs, Cisco devices, banking systems, travel sites, Microsoft Office, and more.
Release engineering involves managing the delivery of high quality software releases through processes like release planning, branch management, building, testing, and source code control. It aims to make releases predictable and of high quality by facilitating activities such as compiling code, verifying functionality, controlling branching/merging of codelines, and following best practices.
Regression testing is important to ensure new software changes do not break existing functionality. Automating regression testing helps manage the large number of test cases needed and speeds up release cycles. Key aspects of managing regression include establishing a baseline, comparing new results to the baseline, debugging failures efficiently, and automating testing processes to reduce human effort and testing time.
The document discusses system and solution testing. It provides an example of how unit tests that pass can fail during system testing. It defines system testing as testing at a product level to find bugs not discoverable through feature testing. Solution testing is defined as customer-oriented end-to-end application testing. The document outlines some key differences between feature, system, and solution testing and discusses common bugs found through system testing.
This document outlines a performance evaluation framework for testers. It defines attributes of good testers, types of metrics to measure performance, and a quality review process. Performance is measured both quantitatively using metrics like bugs found and test cases run, and qualitatively through reviews of bug and test case quality. Testers are evaluated based on their role, with defined metrics for developers, regression engineers, tools engineers and system testers.
This document discusses the relationship between testers and developers and how to improve interaction between the two roles. It notes that while they sometimes have a "love-hate" relationship as they have different expertise and goals, they ultimately depend on each other to ensure high quality software. The document provides tips for when testers and developers should interact, such as during design, test case reviews, and bug fixing. It also recommends ways to build trust between testers and developers through clear communication and establishing processes for quality control.
This document discusses career paths for testing engineers. It begins by describing a typical interview where the candidate's lack of technical skills is apparent. It then discusses a tester's concerns about their career progression and perceptions of testers. The document outlines stories of individuals who grew their careers in testing over long periods of time, taking on roles such as test automation engineer and testing director. It provides advice on making good career choices by gaining experience in one's current role and waiting for opportunities, rather than changing roles frequently or due to money alone. Specific career paths are suggested such as build master, release engineer, and testing management. The document emphasizes that experience over time strengthens one's position and makes them competitive for career growth.
This document discusses agile testing practices used on a large, mission critical project for the Israeli Air Force's information management system. Key points include:
- The entire project used extreme programming (XP) and involved no separation between development and testing teams. Developers performed all testing and regression.
- Testing was integrated into the development process with testing beginning from the first line of code. The size of tests and functionality developed were equal at each iteration.
- Bugs were prioritized and fixed immediately within each iteration to ensure all work was fully tested before being counted as completed. This approach helped keep the project on track and of high quality.
The document provides guidance for a QA manager's role in project management. It outlines responsibilities like managing the QA team, interacting with other teams, and ensuring work is on time and high quality. It also describes tasks for project planning such as understanding requirements, defining deliverables, scheduling testing activities, and communicating with stakeholders. Standard templates, reviews, reports, and post-mortem analyses are recommended to help manage the quality of the work.
This document discusses exploratory testing (E.T.) and how to effectively implement it. It begins by defining E.T. as simultaneous learning, test design, and execution. It then contrasts E.T. with scripted testing, noting that E.T.'s goal is to find bugs while scripted testing aims to measure coverage. The document provides tips for doing good E.T., such as keeping notes and using different testing styles. It also discusses managing E.T., including using sessions and balancing E.T. with other testing. Lessons learned emphasize the benefits of pair testing and that E.T. requires skilled testers and planning to be successful.
The document provides guidance on best practices for bug filing and management. It discusses how to write high-quality bug reports that are reproducible by developers. It emphasizes the importance of thoroughly documenting steps to reproduce issues and providing all relevant information. The document also covers defect tracking metrics and how they can be used to assess testing progress and product quality.
Lessons learned on localization testing (gaoliang641)
Localization testing began 25 years ago with Windows. Early localization caused regression issues. Now single worldwide binaries are used with different fonts and keyboards. There are 3 types of localization defects: functionality, usability, and linguistic quality. Usability defects make up 90% of issues. Lessons include checking translation completeness, identifying string locations, considering cultural differences in shortcuts, and testing various resolutions.
Lessons learned on software testing automation (gaoliang641)
This document outlines 14 lessons learned about automation testing. Key points include that automation requires resources to develop, maintain and run scripts; automated tests can miss bugs if not run manually as well; a framework is needed to manage thousands of scripts; scripts should use standard languages and be data-driven and independent of testbeds; and logs are more important than scripts for debugging failures. Separating script writers from runners and using databases to store results are also advised.
This document provides guidance on how to become a testing expert. It discusses typical interview questions faced by testers and the dilemmas of both testers and management in hiring strong testing candidates. It outlines the career journey of one individual who grew from their first testing role to leadership positions. Key lessons include gaining experience over time and having the ability to find critical bugs quickly. Attributes of a testing expert are developing comprehensive testing strategies and managing the entire release process.
Protocol Security Testing best practice (gaoliang641)
This document discusses different types of boundary value testing for protocol parsing code, including:
1. Value boundary testing to ensure proper functionality at the minimum, maximum, and boundary values of input data.
2. Logic boundary testing to check error handling and protocol parsing.
3. Performance boundary testing to evaluate how a system performs under attack.
It describes creating test cases with boundary values for individual fields and combinations of fields in protocol data units. It also discusses challenges of boundary testing due to the large number of possible field combinations and proposes structured and unstructured approaches.
Backward thinking: design QA system for quality goals (gaoliang641)
This document discusses strategies for designing a quality assurance system to meet quality goals. It outlines various types of testing, such as user testing, integration testing, and performance testing. It also poses many questions about testing organization, processes, tools, and metrics that need to be considered when setting up a QA system. The document emphasizes establishing repetitive regression testing to stabilize code branches before release and using automation to help reduce the workload of testing.
Automation framework design and implementation (gaoliang641)
The document discusses the need for automation frameworks to increase productivity and avoid errors in repetitive testing. It covers key elements of frameworks like control libraries, common libraries, coding guidelines, execution engines and result databases. Design principles include code control, abstract library layers, independent scripts and test beds, resource allocation and parallel execution. Implementation considerations involve choosing scripting languages and optimizing performance. Popular frameworks from various companies are examined as case studies.
This document discusses testing automation from start to finish. It covers topics like realizing when automation is needed, defining development processes, script quality, frameworks, execution processes, and team management. The overarching goal of automation is to generate accurate testing reports. Key points include spending time on API design, following coding standards for scripts, determining if a framework is required, managing parallel execution and reports, and defining roles for test writers, scripters, and runners.
This document discusses agile testing practices used for a large, mission critical project for the Israeli Air Force. Key points include:
- The entire project used extreme programming (XP) and involved no separation of developers and testers. Testing was integrated into development and everyone on the team tested.
- Testing was considered equally important as development. Test size was tied directly to product size and untested work was not considered completed. Automated unit and regression testing occurred in each iteration.
- Through close collaboration and ensuring all work was fully tested, the average time to fix defects remained low even as the project increased in complexity. This allowed bugs to be fixed immediately.
3. What makes a functionality tester
• Functionality testing is to test the "new" code.
[Diagram: inputs to the tester are the Software Function Spec, the Marketing Requirement Specs, and the developer's code; outputs from the tester are the Test Plan/Test Cases, Defects, and Scripts.]
4. Test Case Is Very Important
• Development is science; testing is an ART.
• Test cases are the foundation of a QA department.
• Test cases are the Crown Jewels of testing.
• What is "Garbage in, garbage out"?
5. Exercise
• A program called "sum" is developed on Linux. It accepts two 32-bit signed integers, adds them, and displays the result on the next line.
• The result is also a 32-bit signed integer. If an error is detected, an error message is displayed to stderr and an error code is returned.
6. Solutions
• What is the range of a 32-bit signed integer?
• What are the boundary conditions? (see the test sketch below)
o -2^31 <= X + Y <= 2^31 - 1
o -2^31 <= X, Y <= 2^31 - 1
• What are the negative conditions?
o Out of range
o Invalid inputs
• External factors
o System resources are low
o Simultaneous execution
o Execution from another program (GUI as front-end), exit status code
• Other testing
o Performance (speed of execution)
o Spell check on error messages (localization?)
o Size of the program
o Versioning
o Help
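To make the boundary conditions above concrete, here is a minimal Python sketch that drives the exercise's sum program (the ./sum path, output format, and error behavior are assumptions taken from the exercise statement, not a real tool):

    import subprocess

    INT_MIN, INT_MAX = -2**31, 2**31 - 1  # 32-bit signed range

    def run_sum(x, y):
        # Invoke the hypothetical ./sum binary and capture its result.
        proc = subprocess.run(["./sum", str(x), str(y)],
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout.strip(), proc.stderr

    # Valid boundary cases: the result stays inside the 32-bit range.
    for x, y in [(INT_MAX, 0), (INT_MIN, 0), (INT_MAX - 1, 1), (INT_MIN + 1, -1)]:
        code, out, err = run_sum(x, y)
        assert code == 0 and out == str(x + y), (x, y, code, out)

    # Overflow cases: expect a non-zero exit code and a message on stderr.
    for x, y in [(INT_MAX, 1), (INT_MIN, -1)]:
        code, out, err = run_sum(x, y)
        assert code != 0 and err, (x, y, code, err)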
7. Input
• Input types:
o +<#> (acceptable?)
o 600A
o 600.60
o <#> <100 spaces> <#>
o <#> <tab> <#>
o 0x<hex>?
o "<#> <#>"
o 100-digit number
o 0<#>?
o 100 "0"s?
o 2^31 + 30 (a negative number?)
o 100 "-" followed by a valid number
o %d
o &, $, #, ^G
• Arguments:
o 0, 1, 3, 10 arguments
o <#> <alphabet>
o " " <#> <#>
o 2 arguments with 200 characters each
• Error messages:
o Ensure all error conditions generate appropriate error messages in the current locale
o Ensure the exit status code is non-zero for all errors
o Ensure error messages are printed to stderr
o Ensure there are no misspellings or grammar errors
o Ensure consistency in error messages (error format, error number)
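A sketch of how the malformed-input cases above could be table-driven in Python (again assuming the hypothetical ./sum binary; the cases mirror the list):

    import subprocess

    # Argument lists the hypothetical ./sum should reject.
    BAD_INPUTS = [
        [],                  # 0 arguments
        ["1"],               # 1 argument
        ["1", "2", "3"],     # 3 arguments
        ["600A", "1"],       # trailing letter
        ["600.60", "1"],     # float, not an integer
        ["1" * 100, "1"],    # 100-digit number
        ["%d", "1"],         # format-string style input
        ["&", "$"],          # shell metacharacters
    ]

    for args in BAD_INPUTS:
        proc = subprocess.run(["./sum", *args], capture_output=True, text=True)
        # Every invalid input should fail loudly: non-zero exit, message on stderr.
        assert proc.returncode != 0 and proc.stderr, args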
8. External Factors
• OS-related factors (see the sketch after this list):
  o Run in different varieties of Linux distros
  o Run the program in a low-memory condition (1MB of memory)
  o Develop shell scripts to invoke 20 “sum” in the background (“&”) and ensure the results are correct
  o Run “sum” from cron
  o Use “ldd” to find out the shared libraries “sum” is using; install different versions of the shared libraries to ensure the program still works
• Others:
  o Use “time” to measure the performance of the sum program. The elapsed execution time should not exceed 0.2 second; the CPU usage should be less than 20% during execution
  o Find out the size of the program; it should not take more than 2MB of disk space
  o Ensure the version is displayed in the help string
  o Ensure the help program is available
  o Develop a script to run the sum program 10,000 times overnight
  o Installation
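A sketch of the simultaneous-execution and timing checks in Python, mirroring the shell “&” and “time” ideas above (the 0.2 s threshold is taken from the slide; the ./sum binary is assumed):

```python
import subprocess, time

# 20 simultaneous invocations, mirroring the shell "&" check above.
procs = [subprocess.Popen(["./sum", "1", "2"], stdout=subprocess.PIPE, text=True)
         for _ in range(20)]
for p in procs:
    out, _ = p.communicate()
    assert out.strip() == "3"      # every concurrent run must still be correct

# Elapsed-time check, mirroring the "time" measurement.
start = time.perf_counter()
subprocess.run(["./sum", "1", "2"], capture_output=True)
assert time.perf_counter() - start < 0.2
```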
9. Boundary Testing
[Figure: boundary test points for the two operands, plotted on axes running from -2^31 to 2^31-1; the “X” marks cluster at and just inside/outside the corners and edges of the valid input square.]
10. 5 Basic Test Case Design Techniques
• Equivalence class
• Boundary values
• Decision tables
• State transition
• All pairs
11. Equivalence Class (等价类)
• Equivalence class partitioning divides all possible input data, i.e. the program's input domain, into a number of partitions (subsets), then selects a small amount of representative data from each subset as test cases.
[Diagram: equivalence class example]
12. Equivalence Class (等价类)
• Case #1: the domain of the data is an interval (see the sketch below)
  o Class(es) of valid values
  o Class of invalid values (out of boundary, inf. and sup.)
  o Class of non-members (for numerical values, characters are non-members)
• Case #2: the domain of the data is a number of discrete values
  o N classes for the valid values
  o One class: absence of values
  o One class: too many values
13. Equivalence Class (等价类)
• Case #3: the data is a set of values that are processed in different manners
  o One class for each valid value
  o One class for all other invalid values
• Case #4: the data is a constraint (form, syntax, meaning)
  o One class for a valid constraint
  o One class for a constraint violation
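A sketch applying Case #1 to a single “sum” operand, whose domain is the 32-bit interval (the class names and sample values are illustrative):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

# Case #1: the domain is an interval, so we get valid, out-of-range,
# and non-member classes.
EQUIVALENCE_CLASSES = {
    "valid":        ["0", "12345", str(INT32_MIN), str(INT32_MAX)],
    "out_of_range": [str(INT32_MIN - 1), str(INT32_MAX + 1)],  # inf./sup. violations
    "non_member":   ["abc", "12.5", ""],                        # not integers at all
}

# One representative per class is enough for a first pass.
representatives = {name: values[0] for name, values in EQUIVALENCE_CLASSES.items()}
```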
14. Boundary Value Testing
• Often combined with equivalence class testing
• Boundary value testing ensures proper functionality at the boundaries (or edges) of allowable data input. Boundary values include the maximum, the minimum, values just inside/outside the boundary, typical values, and error (malformed) values.
15. Boundary Value Testing
• Boundary value testing depends greatly on experience
• Don't go to the developer for boundary value advice too often
• We have a practical example for boundary testing
18. All Pairs Testing
Mortgage example: twelve variables and their values (table reconstructed from the slide; the header of the Cust/Bank column is unclear in the source):
• Region: NY, NJ, FL, TX, CA, DC, Other (7 values)
• Closing Cost: L, M, H, H+1, H+2, H+3 (6 values)
• Property: 1 fam, 2 fam, 3 fam, 4 fam, Coop, Condo (6 values)
• Credit tier: A+, A, A-, B, <B (5 values)
• Residence: Pri, Vac, Inv (3 values)
• LTV: 80%, 90%, 100% (3 values)
• Intro Rate: Yes, No
• Refinance: Yes, No
• Bank Emp: Yes, No
• NAV: Yes, No
• NIV: Yes, No
• (Cust/Bank column): Cust, Bank
19. All Pairs Testing
Twelve variables, with varying numbers of values, have
7 × 6 × 6 × 5 × 3 × 3 × 2 × 2 × 2 × 2 × 2 × 2 = 725,760
combinations of values.
“All Pairs does it in 50.” (Bernie Berger, STAREast 2003)
20. All Pairs Testing
• Reduces the number of tests
• Multiple test cases will find the same defects (testing all combinations is overkill)
• Tests that cover all pairwise combinations (a sketch of a greedy generator follows below):
  o For any two parameters p1 and p2 and any valid values v1 for p1 and v2 for p2, there is a test in which p1 has the value v1 and p2 has the value v2.
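A toy greedy generator that satisfies this pairwise property. This is not the AETG algorithm, just an exhaustive-candidate sketch that is only practical for small domains:

```python
from itertools import combinations, product

def all_pairs(params):
    """Greedy pairwise test generation: repeatedly pick the full combination
    that covers the most not-yet-covered value pairs."""
    names = list(params)
    # Every (param_i, value_i, param_j, value_j) pair that must appear in some test.
    uncovered = {
        (i, vi, j, vj)
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    tests = []
    while uncovered:
        best, best_gain = None, -1
        for cand in product(*(params[n] for n in names)):
            gain = sum((i, cand[i], j, cand[j]) in uncovered
                       for i, j in combinations(range(len(names)), 2))
            if gain > best_gain:
                best, best_gain = cand, gain
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(names)), 2)}
    return tests

# Four of the mortgage variables: 3 x 3 x 3 x 3 = 81 combinations in total.
tests = all_pairs({"Region":    ["NY", "NJ", "FL"],
                   "Property":  ["1 fam", "2 fam", "Coop"],
                   "Residence": ["Pri", "Vac", "Inv"],
                   "LTV":       ["80%", "90%", "100%"]})
print(len(tests))   # roughly 9-12 tests instead of 81
```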
21. In Our Example
• All pairs: C(12,2) = 12!/(2! × (12-2)!) = 66 parameter pairs to cover
• Triple-wise: C(12,3) = 12!/(3! × (12-3)!) = 220 parameter triples
• Worst case (every non-empty subset of the 12 parameters): Σ_{p=1}^{12} C(12,p) = 2^12 - 1 = 4095
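These counts can be checked mechanically (Python 3.8+ for math.comb):

```python
import math

assert math.comb(12, 2) == 66                  # parameter pairs
assert math.comb(12, 3) == 220                 # parameter triples
assert sum(math.comb(12, p) for p in range(1, 13)) == 2**12 - 1 == 4095
```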
22. Can it Work?
• 97% of failures are caused by the interaction of at most two features (NIST medical software failures study, 2000)
23. What about coverage?
• “We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc. […] The pairwise tests gave over 90 percent block coverage.” [D. M. Cohen et al., 1996]
• a set of 29 pair-wise AETG tests gave 90% block coverage for the UNIX sort
command. We also compared pair-wise testing with random input testing and found
that pair-wise testing gave better coverage.[D. M. Cohen et al., 1997]
• The block coverage obtained for [pairwise] was comparable with that achieved by
exhaustively testing all factor combinations […][I. S. Dunietz et al., 1997]
• Our initial trial of this was on a subset of Nortel's internal e-mail system, where we [were] able [to] cover 97% of branches with less than 100 valid and invalid testcases, as opposed to 27 trillion exhaustive testcases.[K. Burr and W. Young, 1998]
• Compared to a traditional company that would use the quasiexhaustive strategy, an
innovative company using the [Combinatorial] strategy would reduce its system
level test schedule by sixty-eight percent (68%) and save sixty-seven percent (67%)
in labor costs associated with the testing.[J. Huller, 2000]
• [Evaluating FDA recall class failures in medical devices we established that] [...] out
of the 109 reports that [were] detailed [enough], 98% showed that the problem could
have been detected by testing the device with all pairs of parameter settings.[D. R.
Wallace and D. R. Kuhn, 2001]
• More than 70% of bugs were detected with two or fewer conditions (75% for browser
and 70% for server) and approximately 90% of the bugs reported were detected with
three or fewer conditions (95% for browser and 89% for server). [...] It is interesting that
a small number of conditions (n<=6) are sufficient to detect all reported errors for the
browser and server software.[R. Kuhn and M. J. Reilly, 2002]
25. Test Case Quality
• Single test case quality
  o Clean
  o Exact
  o Repeatable
• Test case coverage quality
  o How do you know you are not missing critical test cases?
  o After a test, if no bugs are found, how confident are you in saying the software is of good quality?
26. Quality
• Chances are, the functionality tester might be junior.
• Use standard testing procedures whenever possible.
• Interact with the programmers.
• Rome was not built in a day, and neither are your test plan / test cases
  o Review, modify, and review, modify, and review, modify….
  o Bring developers, technical marketing engineers, and product managers into the review meeting
27. Bad Test Cases
An example of a bad test case, with the reviewer's questions in brackets:
• Create extended acl [Create what ACL? We need a detailed CLI here]
• Match group name [Match what group? What is the group name? What is the CLI?]
• Associate match group to E-ACL
• Create action group with next interface as tunnel [Next interface? Need to explain that]
• Create PBR policy
• Associate pbr policy to the action and match group
• Apply pbr policy on the ingress interface [Ingress interface? Which one is the ingress interface?]
28. Functionality Testing (2 input, 2 talk)
• Understand the features based on the software function specification and the Marketing Requirement Documentation.
• Talk to the one who proposed this new functionality (why do we need this feature?).
• Talk to the one who wrote the function specification.
[Developer Checklist]
29. 6 “Understand”s
• Try to understand the “functional points”
• Try to understand more about the use cases/scenarios
• Try to understand more about why we need this feature (revenue impact)
• Try to understand more about which modules will be affected by this feature
• Try to understand more about the performance impact
• Try to understand more about the way it is going to be implemented by the developers
[Example: OSPF Hello Protocol]
30. Test Case Attributes
• Test ID
• Description
• Pre-Setup
• Platform
• Topology
• Priority
• Complexity
• Steps/Expected results
[Sample Test Plan] (a sketch of these attributes in code follows below)
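One possible encoding of these attributes, as promised above; the field names and example values are assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_id: str
    description: str
    pre_setup: str
    platform: str
    topology: str
    priority: int                    # e.g. 1 = must run in every cycle
    complexity: str                  # e.g. "low" / "medium" / "high"
    steps: list = field(default_factory=list)   # (step, expected result) pairs

tc = TestCase(
    test_id="SUM-001",
    description="Overflow at the positive boundary",
    pre_setup="build ./sum",
    platform="Linux x86_64",
    topology="standalone",
    priority=1,
    complexity="low",
    steps=[("./sum 2147483647 1", "error on stderr, non-zero exit status")],
)
```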
31. Functionality Test Case Coverage
• Need to be sure all functionality points are covered
• Need to be sure all related-module combination tests are covered (IPv6, HA?)
• Need to be sure to have a platform coverage list
• Enough use cases (experience and communication matter); build your customer scenario library: various sample topologies, typical device configurations, typical applications
32. Code Coverage
• A quantitative way to measure testing coverage.
• A special compiler patch that, when building a software image, can place marks on the new code.
• After you execute your black-box test cases, you can see which code lines were executed.
• Very mature in Java (just Google “java code coverage”).
• C has open-source and commercial tools (hard to integrate into a build environment if the code base is huge): http://gcc.gnu.org/onlinedocs/gcc/Gcov-Intro.html
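For C, the gcov toolchain linked above follows a standard three-step workflow; a minimal sketch, assuming a sum.c source file:

```python
import subprocess

# gcc --coverage instruments the build; running the binary writes .gcda data;
# gcov then reports which lines the black-box tests actually executed.
subprocess.run(["gcc", "--coverage", "-o", "sum", "sum.c"], check=True)
subprocess.run(["./sum", "1", "2"], check=True)
subprocess.run(["gcov", "sum.c"], check=True)   # produces sum.c.gcov
```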
33. CLI Standard Testing Procedure Example - 1
• Enable the CLI; verify all options and parameters are implemented as defined in the function spec.
• Enable the CLI, do “show running-config”, and verify it is in the memory.
  Disable the CLI, do “show running-config”, and verify it is not in the memory.
  Do a “show xxx” to verify the CLI is taking effect.
• Repeat step 1, but write the configuration into flash; verify it is written in the flash memory.
  Reload the box; verify the configuration is still there.
• Enable then disable the CLI at least 20 times; see if there is any memory leak or other instability (see the sketch after this list).
  If it is a CLI that can be repeated, like firewall rules: enable many rules, then disable many rules.
• Verify there are no typos in the CLI.
  Especially when there are error prompt messages, verify those messages have no typos.
• Conflicting CLI on different features should report an error.
• CLI response time.
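A sketch of automating the enable/disable loop with pexpect (a real Python library for driving interactive CLIs); the device address, prompt, and “feature-x” commands are made-up placeholders:

```python
import pexpect

child = pexpect.spawn("telnet 192.0.2.1", timeout=10, encoding="utf-8")
child.expect("#")                                  # assumed device prompt

for _ in range(20):                                # enable/disable at least 20 times
    child.sendline("configure terminal"); child.expect("#")
    child.sendline("feature-x enable");   child.expect("#")   # hypothetical CLI
    child.sendline("no feature-x enable"); child.expect("#")
    child.sendline("end");                child.expect("#")

child.sendline("show memory")                      # compare with a pre-loop baseline
child.expect("#")
print(child.before)
```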
34. CLI Standard Testing Procedure Example - 2
• Boundary testing on the maximum and minimum values the CLI parameters can take; check against the function spec to see if it is as designed.
• Negative testing on CLI parameters (see the sketch after this list):
  o Numbers:
    • Out-of-boundary numbers. Negative numbers? Maximum number + 1 if possible.
    • Empty.
    • Put a string into a number field.
  o Strings:
    • Strange characters: %(#*$*@#&#&^&<>:”:{}.
    • Long strings (as long as the system can take, as long as you can type on a terminal).
    • Empty.
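A small illustrative generator for such negative values (the function name and value choices are assumptions, not a standard tool):

```python
def negative_number_inputs(lo: int, hi: int):
    """Hypothetical bad values for a numeric CLI parameter with documented
    range [lo, hi]."""
    return [
        str(lo - 1), str(hi + 1),   # just outside the boundaries
        str(-1),                    # negative number (if lo >= 0)
        "",                         # empty value
        "abc",                      # string in a number field
    ]

# Strange characters and very long strings for string-typed parameters.
STRANGE_STRINGS = ['%(#*$*@#&#&^&<>:"{}', "A" * 4096, ""]
```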
36. Review Process
• Internal peer review: review within the team
• External peer review: peer review from another team that has domain-expert knowledge
• Tiger Team review: the Testing Tiger Team is an elite group of team leads and managers with rich testing experience
37. Testing Coverage Quality Control
• Developer checklist: look at it first to see what the developer wants to say
• Test area coverage
  o Identify all functionality points (areas) in the software function specification
  o Can do code coverage if the image-building environment supports it
• Test type coverage (required for all new-feature testing)
  o CLI or GUI testing
  o Functionality testing
  o Negative testing/boundary testing
  o Use case testing
  o Performance testing
38. Test Case Execution Quality Control
• Define the platform coverage
• Applies to both manual execution and automated execution (running scripts)
• Detailed step-by-step execution logs may need to be saved and reviewed for junior engineers
• Regular rotation keeps engineers excited about different testing areas, for better execution quality
• Cross-testing to discover new bugs
• Stringent schedule control
39. What If You Can't Get the Spec?
• Whatever specs exist
• Software change memos that come with each new internal version of the program
• User manual draft (and the previous version's manual)
• Product literature
• Published style guides and UI standards
• Published standards (C language or RFCs)
• 3rd-party product compatibility testing suites
• Published regulations
• Internal memos (e.g., from the project manager to engineers, describing the feature definitions)
40. What If You Can't Get the Spec?
• Marketing presentations, selling the concept of the product to management
• Bug reports (and responses to them)
• Reverse-engineer the program
• Interview people
  o Developer lead
  o Tech writer
  o Customer service
  o Subject-matter experts
  o Project manager
• Look at header files, source code, database table definitions
• Specs and bug lists for all 3rd-party tools that you use
• Prototypes or lab notes on the prototypes
41. What If You Can't Get the Spec?
• Look at compatible products
• Look at customer call records