This document provides an overview of the qEstimation method for estimating the size and effort of software testing. It discusses estimating test size using Test Case Points (TCP), where one TCP represents the size of the simplest test case. The TCP count of a test case is calculated from its number of checkpoints and the complexity of its test setup and test data. Estimated testing effort is then determined from the total TCP size using either a measured test velocity or a regression analysis of previous test cycles. The method aims to introduce a standardized way to estimate testing size and effort.
Test Estimation using Test Case Point Analysis method - KMS Technology
The document introduces the qEstimation method for estimating the size and effort of software testing activities. It discusses estimating test size using Test Case Points (TCP) by analyzing checkpoints, test setup complexity, and test data complexity. Effort can then be estimated using test velocity/productivity or regression analysis of historical size and effort data. The method is implemented in a qEstimation toolkit for easily counting TCPs, calibrating estimates, and monitoring test metrics. The approach provides an agile way to estimate testing independent of test case details. More empirical validation is still needed but initial experiences have been positive.
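The regression alternative mentioned above can be sketched as a simple least-squares fit of effort against size over historical test cycles. The sample data points are invented for illustration:

```python
# Sketch of the regression alternative: fit effort = a + b * size from
# historical test cycles, then predict effort for a new TCP total.
# The history arrays below are invented sample data.

def fit_line(sizes, efforts):
    """Ordinary least squares for a single predictor."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(efforts) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, efforts))
         / sum((x - mean_x) ** 2 for x in sizes))
    a = mean_y - b * mean_x
    return a, b

history_tcp = [100, 200, 300, 400]    # past cycle sizes in TCP
history_hours = [55, 105, 155, 205]   # corresponding effort in hours

a, b = fit_line(history_tcp, history_hours)
print(a + b * 250)  # -> 130.0 predicted hours for a new 250-TCP cycle
```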
This document provides the syllabus for the International Software Testing Qualifications Board's Certified Tester Advanced Level certification. It outlines the learning objectives for test managers, test analysts, and technical test analysts. The syllabus covers topics such as testing in the software lifecycle, specific system types like systems of systems and safety critical systems, testing processes, test management, risk-based testing, and more. It is intended to guide curriculum and training for the advanced level certification. The syllabus was last updated in 2007 by the Advanced Level Working Party committee members.
The Certified Tester Foundation Level Syllabus outlines the key concepts and topics covered in foundation-level certification for software testing, including testing techniques, test management, and quality assurance. It also provides copyright information and a history of revisions to the syllabus. The International Software Testing Qualifications Board maintains and updates the syllabus.
The document discusses test execution and reporting. It provides details on general test procedures including planning, execution, and evaluation. It describes preparing the test infrastructure by setting up systems, software, and standards. Test execution involves conducting individual test cases, verifying results against expected outcomes, and analyzing any variances. Reporting includes documenting test logs, creating incident reports for problems, and providing effective defect reports using a standardized template. Defects are then resolved by referring them to defect or change management processes.
EuroSTAR Software Testing Conference 2009 presentation on Incremental Scenario Testing by Mattias Ratert. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
The document summarizes key principles of software testing including:
1. Testing is necessary because software will contain faults due to human errors, and failures can be costly.
2. Exhaustive testing of all possible test cases is impractical. Risk-based prioritization is used to test the most important cases first.
3. The test process includes planning, specification, execution, recording results and checking completion criteria. Effective test cases are prioritized to efficiently find faults.
parikshalabs.com provides advanced services in the software field, such as software testing tools, web application testing, and mobile app testing.
The document provides information on types of software testing, test strategy and planning, and test estimation techniques. It describes various types of testing including functional, system, end-to-end, load, security, and others. It also discusses test strategy, test planning, and creating test plans. Finally, it outlines several techniques for estimating testing efforts such as best guess, analogies, work breakdown structure, three-point estimation, and function point analysis.
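Three-point estimation, one of the techniques listed above, is commonly done with the PERT formula; a minimal sketch, with invented sample inputs:

```python
# Sketch of three-point (PERT) estimation: combine optimistic,
# most-likely, and pessimistic guesses into an expected value.

def three_point_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

e, sd = three_point_estimate(optimistic=20, most_likely=30, pessimistic=50)
print(e, sd)  # expected ~31.67 hours, standard deviation 5.0
```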
The document outlines software testing best practices organized into groups:
- The Basic Practices include writing functional specifications, code reviews, test criteria, and automated test execution.
- Foundational Practices involve user scenarios, usability testing, and feedback loops.
- Incremental Practices focus on close collaboration between testers and developers, code coverage, test automation, and testing for quick releases.
The document discusses the challenges of testing the Internet of Everything (IoE). It notes that the IoE will include vast numbers of static and mobile devices integrated with hundreds of services. Testing the IoE will require strategies for functional testing, testing at scale, network testing, big data testing, and the use of modeling, test environments, tools, and analytics. A new model for testing is needed that focuses on exploration and learning skills over process. Testers may need new skills like writing code and working more closely with developers to test the complex IoE.
At SQA Solution, the objectives of SAP System Testing are to verify that the installed system, which includes the SAP software, custom code, and manual procedures, performs per business requirements:
- Executes as specified and without error.
- Validates with users and management that the delivered system performs in accordance with the stated system requirements.
- Ensures that the system works with other existing systems, including but not limited to interfaces, conversions, and reports.
Measurement System Analysis is the first step of the Measure Phase of an improvement project. Before you can pass judgment on the process, you need to ensure that your measurement system is accurate, precise, capable and in control.
This document provides an overview and introduction to software testing for beginners. It discusses what software testing is, why it's important, and what testers do. Some key points covered include:
- The goal of testing is to find bugs early and ensure quality by designing and executing test cases that cover functionality, security, databases, and user interfaces.
- A good tester has skills like communication, organization, troubleshooting, and being methodical and objective in their work.
- Testing occurs at all stages of the software development life cycle from initial specifications through coding, testing, deployment and maintenance.
The document discusses various aspects of the software testing process including verification and validation strategies, test phases, metrics, configuration management, test development, and defect tracking. It provides details on unit testing, integration testing, system testing, and other test phases. Metrics covered include functional coverage, software maturity, and reliability. Configuration management and defect tracking processes are also summarized.
Testing metrics provide visibility into software quality and the testing process. Some key metrics include defect severity index, number of defects found, and test case effectiveness. It is important to analyze metrics over time and consider other factors, as metrics alone can sometimes be misleading. Looking at trends in multiple metrics together can provide valuable insights about software quality and areas for improvement.
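A defect severity index, one of the metrics named above, is typically a weighted average of defects by severity; a minimal sketch, where the severity weights are a common convention rather than a standard:

```python
# Sketch of a defect severity index (DSI): a weighted average of
# defects by severity. The weights below are illustrative assumptions.

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def defect_severity_index(defect_counts):
    total = sum(defect_counts.values())
    weighted = sum(SEVERITY_WEIGHT[s] * n for s, n in defect_counts.items())
    return weighted / total if total else 0.0

print(defect_severity_index({"critical": 2, "high": 5, "medium": 10, "low": 3}))
# -> 2.3 (closer to 4.0 means the open defects skew severe)
```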
The document discusses various topics related to measurement accuracy including definitions of accuracy and precision. It describes sources of error such as systematic errors, random errors, and quantization errors. It also discusses ways to improve accuracy such as filtering, averaging, and guardbanding. Finally, it covers calibration techniques, data analysis methods like histograms and distributions, and the relationship between noise, test time, and yield.
The document provides an introduction to software testing fundamentals and artifacts. It discusses test cases, test specifications, test planning, and test execution. Test cases are defined as a set of test inputs, execution conditions, and expected results to test a specific objective. Good test cases should be reasonable, exercise areas of interest, and make failures obvious. The document outlines steps for creating test cases such as breaking the application into testable modules, writing checklists, adding questions, and getting reviews from other testers and developers.
Describes the detail of software quality, tradeoffs, quality with testing, quality with inspection, need of standards, standards organizations & different type of software standards.
Mieke Gevers - Performance Testing in 5 Steps - A Guideline to a Successful Load Test - TEST Huddle
EuroSTAR Software Testing Conference 2008 presentation on Performance Testing in 5 Steps - A Guideline to a Successful Load Test by Mieke Gevers. See more at conferences.eurostarsoftwaretesting.com/past-presentations/
This document discusses predicting defects in the system testing phase using a model based on a six sigma approach. The research aims to establish a defect prediction model to determine the number of defects to be found before testing begins. The model would help with resource planning, test coverage, and meeting deadlines. The research applies a define-measure-analyze-design-verify process to build the model using regression analysis on data from previous projects. Factors like requirements errors, design errors, and code errors are analyzed to determine their relationship to defects found during testing. The initial results found several significant factors that could be used to reliably predict defects.
The document discusses the testing life cycle process. It involves testing activities from the beginning of a project through requirements, design, development, integration testing, system testing, and release. Key phases include test planning, case design, execution, and using various testing types and tools. An effective testing team has defined roles and responsibilities throughout the project life cycle.
This document discusses various types of software testing techniques used in the software development lifecycle (SDLC). It begins by describing different SDLC models like waterfall, prototyping, RAD, spiral and V-models. It then discusses the importance of testing at different stages of SDLC and different types of testing like static vs dynamic, black box vs white box, unit vs integration etc. The rest of the document elaborates on specific black box and white box testing techniques like equivalence partitioning, boundary value analysis, cause-effect graphing, statement coverage and basis path testing.
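Boundary value analysis, one of the black box techniques listed above, picks test values at and just around each edge of a valid range; a minimal sketch for a hypothetical input accepting 1 to 100:

```python
# Sketch of boundary value analysis for an input valid in [low, high]:
# test below, at, and above each boundary, plus a nominal mid value.

def boundary_values(low, high):
    return [low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1]

def accepts(value, low=1, high=100):
    """Hypothetical system under test: accepts values in [low, high]."""
    return low <= value <= high

for v in boundary_values(1, 100):
    print(v, accepts(v))  # 0 and 101 should be rejected, the rest accepted
```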
Regression testing is important to ensure new software changes do not break existing functionality. Automating regression testing helps manage the large number of test cases needed and speeds up release cycles. Key aspects of managing regression include establishing a baseline, comparing new results to the baseline, debugging failures efficiently, and automating testing processes to reduce human effort and testing time.
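The baseline comparison described above can be sketched as a diff between recorded and current results, with hypothetical test IDs:

```python
# Sketch of baseline comparison for regression testing: flag test cases
# that passed in the recorded baseline run but fail in the current run.

def regressions(baseline, current):
    """Return IDs of tests that passed in the baseline but fail now."""
    return sorted(tid for tid, passed in baseline.items()
                  if passed and not current.get(tid, False))

baseline = {"login": True, "search": True, "export": False}
current = {"login": True, "search": False, "export": False}
print(regressions(baseline, current))  # -> ['search']
```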
Automated testing helps identify software bugs earlier through unit testing, code coverage, code analysis, web testing, load testing, and test case management. These tools help ensure software works as intended under normal and peak usage while finding errors. Static code analysis further checks for design, naming, security, and other issues based on configurable rules.
The document outlines a test plan for a Waste Management Inspection Tracking System (WMITS) software. It includes sections on test scope and objectives, interfaces to be tested, testing strategies including unit, integration, validation and high-order testing, a test schedule, and resources and staffing. The testing aims to minimize bugs and defects by thoroughly testing all components, functions, and the integrated system prior to release.
The document provides an overview of construction project management. It discusses [1] the characteristics of construction projects, [2] the need for project management in construction due to its complex nature, [3] the typical project life cycle from conceptual planning to closeout, and [4] the major types of construction projects and participants including owners, design professionals, contractors, and project managers.
There are two types of overheads in construction estimating: direct overheads such as tools, facilities, transportation and site offices; and indirect overheads like office costs, vehicles, utilities and administration staff. The on-site charge out rate for labor is affected by factors including annual leave, statutory holidays, employer national insurance contributions, pension contributions, tools and plant costs, supervision costs, and training levies. The on-site rate is calculated by applying all these additional cost factors to the base JIB rate.
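Building the on-site charge-out rate from the base JIB rate can be sketched as applying each additional cost factor as an uplift. The base rate and all percentages below are invented for illustration; real values come from the current JIB agreement and company cost records:

```python
# Sketch of an on-site labor charge-out rate: apply each additional
# cost factor (as a fraction of base) to the base JIB rate.
# All numbers below are illustrative assumptions.

def charge_out_rate(base_rate, uplifts):
    return base_rate * (1 + sum(uplifts.values()))

uplifts = {
    "annual_leave": 0.09,
    "statutory_holidays": 0.03,
    "employer_ni": 0.13,
    "pension": 0.05,
    "tools_and_plant": 0.04,
    "supervision": 0.08,
    "training_levy": 0.01,
}
print(charge_out_rate(20.0, uplifts))  # hourly rate after all factors
```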
This chapter discusses techniques for scheduling repetitive projects, including summary diagrams and the line of balance (LOB) method. The LOB technique aims to balance resources and synchronize work across repetitive units so that crews are fully employed without interruption. It provides a useful visual representation of the schedule for large repetitive projects like highways, pipelines, and high-rise buildings. The chapter will cover LOB network representations, integrating CPM and LOB analyses, and using LOB to determine resource needs to meet a project deadline.
This document describes the main new features of Microsoft Excel 2007. These include a results-oriented user interface, greater capacity with up to 1 million rows and 16,000 columns, new themes and styles, improved conditional formatting, simplified formula writing, improved sorting and filtering, enhancements to tables, charts, and PivotTables, new file formats such as PDF and XPS, and better ways to share work. It also covers functions, …
This document outlines the process and steps for construction cost estimating. It begins by defining estimating and differentiating it from calculation. It then describes the key steps in the estimating process: planning and scheduling, project study and data collection, preparing method statements, assessing resource outputs, and calculating direct, overhead and total costs. The document provides examples of calculating labor, equipment and material rates. It also discusses different estimating methods and includes an example cost estimate calculation for a bridge project.
This document discusses construction contracts and equipment costs for an excavation project.
The document contains information about a building project that requires 2000 cubic meters of excavation work. The equipment crew consists of one excavator rented at 700 LE per day and two trucks rented at 300 LE per day each. The crew's production rate is 200 cubic meters per day.
The document then provides an example calculation to estimate the equipment cost per cubic meter for this excavation project. It calculates the contractor's fee under different total project cost scenarios for a target cost construction contract.
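The crew figures given above imply a direct unit-cost calculation, sketched here with the numbers from the example (one excavator at 700 LE/day, two trucks at 300 LE/day each, 200 m³/day production, 2000 m³ total):

```python
# Equipment cost per cubic meter for the excavation example above.

excavation_volume = 2000   # m3 of excavation required
excavator_rate = 700       # LE per day, one excavator
truck_rate = 300           # LE per day, per truck (two trucks)
production_rate = 200      # m3 per day

crew_cost_per_day = excavator_rate + 2 * truck_rate      # 1300 LE/day
cost_per_m3 = crew_cost_per_day / production_rate        # 6.5 LE/m3
duration_days = excavation_volume / production_rate      # 10 days
total_equipment_cost = cost_per_m3 * excavation_volume   # 13000 LE

print(cost_per_m3, duration_days, total_equipment_cost)
```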
This document describes how to create charts and organization charts in Excel 2007. It explains how to insert charts and select the desired chart type. It also shows how to add titles, label axes, and change the chart type. In addition, it shows how to insert shapes such as lines and arrows, and how to create organization charts using SmartArt. Finally, it explains how to modify an existing organization chart by adding more boxes.
Bug deBug Chennai 2012 Talk - V3 analysis: an approach for estimating software... - RIA RUI Society
Dr. Vu Nguyen is a Director of Software Engineering at QASymphony and a Lecturer at the University of Science, Vietnam National University. At both places, he is involved in developing software tools and performing research in software estimation, testing, maintenance, and process.
Quality assurance management is an essential component of the software development lifecycle. To ensure the quality, applicability, and usefulness of a product, development teams must spend considerable time and resources on testing, which makes estimating the software testing effort a critical activity. In this talk, we present an approach, V3 Analysis, to estimating the size of software testing work. The approach measures the size of a software test case based on its checkpoints, preconditions, and test data, as well as the type of testing. We also introduce a supporting toolkit that you can use to quickly estimate testing effort for your projects.
This document discusses test case point analysis (TCPA) for estimating software testing projects. It introduces TCPA, which estimates testing size using test cases. The complexity of test cases is measured by counting checkpoints, preconditions, and test data. Test case points are adjusted based on test type. Estimated effort is then calculated using test case points and productivity metrics. The goal is to provide accurate estimates of testing size, effort, schedule and staff specifically for testing projects.
Software testing is an essential activity of the software development lifecycle. To ensure the quality, applicability, and usefulness of a product, development teams must spend considerable time and resources on testing, which makes estimating the software testing effort a critical activity. This presentation presents a simple and useful method called qEstimation to estimate the size and effort of software testing activities. The method measures the size of a test case in terms of test case points based on its checkpoints, preconditions, and test data, as well as the type of testing. The testing effort is then computed from the size estimated in test case points. All calculations are embedded in a simple Excel tool, allowing estimators to easily estimate testing effort by providing test cases and their complexity.
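The adjustment by type of testing mentioned above can be sketched as a scaling factor applied to a raw test case point count. The factor values here are illustrative assumptions, not the method's published values:

```python
# Sketch of adjusting raw test case points by type of testing.
# The factors below are assumptions for illustration only.

TYPE_FACTOR = {"manual": 1.0, "automation": 1.2, "api": 0.9}

def adjusted_tcp(raw_tcp, test_type):
    """Scale a raw test case point count by a per-type factor."""
    return raw_tcp * TYPE_FACTOR[test_type]

print(adjusted_tcp(30, "automation"))  # automation scripting adds effort
```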
2. Agenda
• Background and Motivation
• qEstimation Analysis
– Test Size Estimation (Test Case Point Analysis)
– Test Effort Estimation
• Conclusion
3. Background
• Software estimation
– process of determining the cost, time, staff, and other related
attributes of software projects, often before work is performed
• Estimation is important for the success or failure of software
projects
• Methods and Metrics
– Source Lines of Code (SLOC)
– Function Points
– Use Case Points
– Story Points
– COCOMO
– Expert Judgment
4. Motivation
• Testing accounts for up to 50% of project effort [1]
• Current problems
– estimates are made for the whole project rather than for testing specifically
– lack of reliable methods designed for estimating the size and effort of software testing
– vague definitions of testing productivity
• due to the lack of a size measure for software testing
• Our aim
– To introduce a method for estimating the size of testing
activities
– To discuss methods to estimate testing effort using this size
measure
– To introduce a simple toolkit for this estimation process
5. Agenda (repeated)
6. qEstimation Analysis’ Principles
• Size reflects the mass and complexity of each test cycle of a testing project
• A test case's complexity is based on:
– Number of checkpoints
– Complexity of test setup or precondition
– Complexity of test data
• The Test Case Point (TCP) is used as the size unit
– representing the size of the simplest test case
• Calibration, or model refinement, is key to estimating effort
– calibration is based on feedback from different cycles within the project, or from similar projects
• The focus is on independent testing (V&V)
7. qEstimation Analysis’ Process
Estimate size and effort of different test cycles of a same project:
[Test Cycle i]
Test Cases → Count TCPs of all Test Cases → Counted Test Case Size → Estimate Testing Effort → Estimated Effort
Feedback loop: actual results from each cycle update the project's Historical Data; the Historical Data is used to calibrate the Estimation Model, which in turn updates the estimation parameters for the next cycle.
Historical Data of this Project is tracked per test cycle: Size, Actual Effort, and Effort by Activity.
8. Count Size of Test Cycle
• Size of a test cycle is the total of TCPs of all test cases to be executed in that test cycle
• Steps (per test case):
1. Count Checkpoints
2. Determine Test Setup Complexity
3. Determine Test Data Complexity
→ Unadjusted TCPs
4. Adjust based on Test Type (optional) → Adjusted TCPs
9. Count Size of Test Cycle (cont’d)
• Checkpoints
– A checkpoint is a condition in which the tester verifies whether the result produced by the target function matches the expected criterion
– One test case consists of one or more checkpoints
– One checkpoint is counted as one TCP
10. Count Size of Test Cycle (cont’d)
• Test Setup or Precondition
– Test setup specifies the conditions required to execute the test case
• Includes setup steps to prepare the environment for testing
• Mainly affects the cost of executing the test case
• May be related to data prepared for the test case
– Four levels of Test Setup complexity, each assigned a number of TCPs:

TCPs(*) | Complexity Level | Description
0 | None | The setup is not applicable or not important to executing the test case; or the setup is simply reused from the previous test case to continue the current one
1 | Low | The condition for executing the test case is available, with some simple modifications required; or some simple setup steps are needed
3 | Medium | Some explicit preparation is needed to execute the test case; or the condition for executing is available, with some additional modifications required; or some additional setup steps are needed
5 | High | Heavy hardware and/or software configurations are needed to execute the test case

(*) Based on our survey of 18 senior QA engineers. You can adjust according to your project's experience.
11. Count Size of Test Cycle (cont’d)
• Test Data
– Test data is used to execute the test case
• It can be generated at test case execution time, sourced from previous tests, or generated by test scripts
• Test data may be specific to a test case, or shared by a group of test cases
– Four levels of Test Data complexity, each assigned a number of TCPs:

TCPs(*) | Complexity Level | Description
0 | None | No test data preparation is needed
1 | Low | Simple test data is needed and can be created at test case execution time; or the test case uses a slightly modified version of existing test data and requires little or no effort to modify it
3 | Medium | Test data is deliberately prepared in advance with extra effort to ensure its completeness, comprehensiveness, and consistency
6 | High | Test data is prepared in advance with considerable effort to ensure its completeness, comprehensiveness, and consistency; this could include using support tools to generate data, a database to store and manage test data, or scripts to generate test data

(*) Based on our survey of 18 senior QA engineers. You can adjust according to your project's experience.
12. Count Size of Test Cycle (cont’d)
• Adjust TCPs based on Type of Test
– This is an OPTIONAL step
– Adjustment is based on the type of each test case
• Each type of test case is assigned a weight
• Adjusted TCP of the test case = Counted TCP x Weight(*)
(*) Based on our survey of 18 senior QA engineers. You can adjust
according to your project’s experience.
13. Estimate Effort of Test Cycle
• Overview
– Two estimation methods:
• Based on Test Velocity
• Regression analysis of Size and Effort of completed test cycles
– Effort is distributed by activity (each activity may be performed
multiple times):
• Test Planning
• Test Analysis and Design
• Test Execution
• Test Tracking and Reporting
14. Estimate Effort of Test Cycle (cont’d)
• Estimate Effort based on Test Velocity
Effort (person-hours) = Size (TCP) / Test Velocity (TCP per person-hour)
– Test Velocity is measured in TCP per person-hour
• It is project-dependent
• It is calculated from data of completed test cycles of the same project
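The velocity-based method amounts to one division once velocity has been measured from completed cycles; a minimal sketch (the function names and the history figures are illustrative assumptions, not from the slides):

```python
def measured_velocity(completed_cycles):
    """Test Velocity for a project: total TCPs executed in completed
    test cycles divided by total person-hours spent on them."""
    total_tcp = sum(size for size, _ in completed_cycles)
    total_hours = sum(hours for _, hours in completed_cycles)
    return total_tcp / total_hours

def estimate_effort(size_tcp, velocity):
    """Effort (person-hours) = Size (TCP) / Test Velocity (TCP/person-hour)."""
    return size_tcp / velocity

history = [(120, 60), (200, 90)]  # (TCPs, person-hours) per completed cycle
v = measured_velocity(history)    # 320 TCP / 150 h ~ 2.13 TCP per person-hour
print(round(estimate_effort(400, v), 1))  # effort for a new 400-TCP cycle
```

Because velocity is project-specific, the `history` list should only contain cycles from the same project.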
15. Estimate Effort of Test Cycle (cont’d)
• Estimate Effort using Linear Regression Analysis
– Find the equation relating Effort and Size using similar completed
test cycles of the project
– [Scatter plot: Effort (PM) vs. Adjusted TCP for completed test cycles,
with the fitted line y = 0.0729x + 1.6408]
– A data analysis tool like Excel can be used to find the equation
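The same kind of least-squares fit can be produced outside Excel; a minimal sketch using NumPy, where the cycle data below is hypothetical and chosen only to illustrate the technique, not taken from the chart:

```python
import numpy as np

# Hypothetical completed test cycles: (Adjusted TCP, Effort in person-months).
sizes = np.array([100.0, 250.0, 400.0, 600.0, 850.0])
efforts = np.array([9.0, 20.0, 31.0, 45.5, 63.6])

# Fit Effort = slope * Size + intercept by least squares (degree-1 polynomial).
slope, intercept = np.polyfit(sizes, efforts, 1)
print(f"Effort = {slope:.4f} * TCP + {intercept:.4f}")

# Apply the fitted line to estimate effort for a new 500-TCP test cycle.
estimate = slope * 500 + intercept
print(f"Estimated effort for 500 TCP: {estimate:.1f} PM")
```

As with velocity, the fit is only meaningful when the completed cycles are similar to the cycle being estimated.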
16. Calibrate the qEstimation Estimation Model
• Calibration: a process of adjusting a model’s parameters using
historical data or experience
• With qEstimation, you can calibrate:
(1) TCPs assigned to each complexity level of Test Setup
(2) TCPs assigned to each complexity level of Test Data
(3) Test Velocity
(4) Effort distribution
(5) Weights of test case types
• The process can be done with the help of tools
Tool Demo
17. Conclusion
• qEstimation Analysis is an agile approach to estimating the
size and effort of test cycles
– Estimate Size in TCPs
– Estimate Effort using Test Velocity or Regression
– An Excel toolkit simplifies the approach
• Advantages and experiences learned
– Easy to implement
– Reflects the real complexity of test cases
– Independent of the level of detail of test cases
– Found useful for estimating testing effort
• Limitations and future improvements
– It is a new approach
– More empirical validation is needed