Test Case Point Analysis

Vu Nguyen
QASymphony, LLC (www.qasymphony.com)
Abstract
Software testing is a crucial activity in the software application development lifecycle for ensuring the quality, applicability, and usefulness of software products. It consumes a large portion of the development team's effort to achieve the software quality goal. Thus, one critical success factor in software testing is estimating and measuring the testing work accurately as part of software quality assurance management. This white paper proposes an approach, Test Case Point Analysis, for estimating the size and effort of software testing work. The approach measures the size of a software test case based on its checkpoints, precondition, test data, and type of test. The testing effort is then computed from the Test Case Point count of the testing activities.

Keywords: test case point, software test estimation, software testing, quality assurance, test management
I. INTRODUCTION
Software testing is a crucial activity in the software application development lifecycle for ensuring the quality, applicability, and usefulness of software products. No software can be released without a reasonable amount of testing. To achieve the quality goal, software project teams have been reported to devote a substantial portion of their total effort to software testing. According to previous industry reports, software testing consumes about 10-25% of the total project effort, and on some projects this number may reach 50% [1].

Thus, one factor in ensuring the success of software testing is the ability to provide reasonably accurate size and effort estimates for testing activities. Size and effort estimates are used as key inputs for making investment decisions, planning, scheduling, project bidding, and deriving other metrics such as productivity and defect density for project monitoring and controlling. If the estimates are not realistic, these activities effectively rely on misleading information, and the consequences can be disastrous.
Unfortunately, providing reliable size and effort estimates for software testing is challenging. One reason is that software testing is a human-intensive activity affected by a large number of factors such as personnel capabilities, software requirements, processes, environments, communications, and technologies. These factors cause high variance in productivity across projects with different characteristics. Another reason is that software quality attributes such as functionality, reliability, usability, efficiency, and scalability are hard to quantify objectively. Moreover, there are many different types of software testing and testing contexts, even within a single project. Because of this diversity of software testing activities, an approach that works well for one type of test in one environment may not work when applied to another type of test in another context.
Although many approaches have been introduced and applied to estimating overall software size and effort, there is a lack of methods for estimating software testing size and effort. Often, testing effort is estimated as part of the overall software effort, and software size is used as the size measure of the software testing activity. This approach, however, cannot account for specific requirements of software testing, such as intensive testing required by clients. Another limitation is that it cannot be applied to test-only projects in which the software exists but the testing team does not have access to the source code or requirements needed to perform Function Point Analysis or count Source Lines of Code (SLOC). Finally, software size in Function Points or SLOC may not reflect well the amount of testing performed by the testing team, so measuring the productivity of software testing with these metrics is misleading.
In an attempt to address this issue, this white paper presents an approach to estimating the size and effort of software testing. The sizing method is called Test Case Point. As the name indicates, the method measures the size of test cases, the core artifacts that testers create and use when performing test execution. The size of a test case is evaluated using four elements of test case complexity: checkpoint, precondition, test data, and type of the test case. This article also describes simple approaches to determining testing effort from the Test Case Point count.

Although in principle Test Case Point Analysis can be applied to measuring the size of different kinds of software testing given test cases as input, the method focuses on estimating the size of system, integration, and acceptance testing activities, which are often performed manually by independent verification and validation (IV&V) teams. Unit testing, automation testing, and performance testing that involve test scripts are beyond the scope of this method.
II. A SUMMARY OF SOFTWARE TESTING SIZE AND EFFORT ESTIMATION METHODS
A number of methods and metrics have been proposed and applied to estimating the size and effort of software projects, and many of them are also used to estimate the effort of software testing activities. SLOC is a traditional and popular metric that measures the size of a software product by counting its source program. A number of SLOC definitions exist, but the most common are physical SLOC, which counts the number of physical lines of code, and logical SLOC, which counts the number of logical statements in the source program. SLOC is used as a size input for popular software estimation models such as COCOMO, SEER-SEM, and SLIM. In these models, the testing effort is computed from the overall effort estimate using a predefined percentage. This percentage can be obtained from industry averages, historical data, or the experience of the estimators.
Function Point Analysis (FPA) is another method for estimating the size of software that has been used in industry since the late 1970s [3]. The analysis measures software size based on five components: inputs, outputs, inquiries, files, and interfaces. The size output, whose unit is the Function Point, is then adjusted with technical factors to take into account the technical characteristics of the system. To estimate effort, the estimated size in Function Points can be combined with a productivity index, converted to SLOC using a method called backfiring, or used in a simple linear regression model. FPA has been extended and adapted in a number of sizing methods, including Mark II [11], COSMIC FFP [12], and NESMA FPA [6].

Although FPA is widely used in industry, it has been criticized for a complexity that requires considerable time and skilled estimators to perform the analysis. A similar but simplified method named Use Case Point was proposed [7]. This method counts the number of Use Case Points of a software project by measuring the complexity of its use cases. The use case complexity is based on the actors and transactions of each use case and is then adjusted with technical complexity and environment factors to obtain the final Use Case Point count. The ideas behind the FPA and UCP methods have inspired the introduction of Test Case Point Analysis [4] and other methods from the practice and research communities [2, 5, 8, 9, 10, 13]. In [8], Aranha and Borba proposed a model for estimating test execution effort by measuring the size and execution complexity of test cases. They introduced a size unit called the execution point, which is based on the characteristics of every step in the test case. The effort is then estimated by converting the execution point count to effort using a conversion factor.
III. TEST CASE POINT ANALYSIS

1. Method Overview

The Test Case Point measures software testing size, reflecting the complexity of the testing activities performed to meet the quality goal of the software. As a complexity measure, it needs to reflect the effort required to perform the testing activities, including planning, designing, and executing tests, and reporting and tracking defects.

The Test Case Point Analysis uses test cases as input and generates a Test Case Point count for the test cases being measured. The complexity of a test case is based on four elements: checkpoint, precondition, test data, and type of test case; the analysis effectively assumes that the complexity is centered on these four elements. The elements fall into two groups: the first, comprising checkpoint, precondition, and test data, reflects how large the test case is; the second normalizes the complexity of different types of test by adjusting the Test Case Point count with weights assigned to test types.
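As an illustration of these elements, the minimal Python sketch below captures the information that must be known about a test case before it can be sized. The record type and field names are ours, chosen for illustration, and are not prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """The four complexity elements a test case is sized on (illustrative field names)."""
    checkpoints: int    # number of verification points in the test case
    precondition: str   # complexity level: "none", "low", "medium", or "high"
    test_data: str      # complexity level: "none", "low", "medium", or "high"
    test_type: str      # e.g. "ui_functional", "api", "security" (see Table 5)

# Hypothetical example: a security test case with three checkpoints.
login_lockout = TestCase(checkpoints=3, precondition="medium", test_data="low", test_type="security")
```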
2. Method Detail
2.1. Measure Test Case Complexity
Each test case is assigned a number of Test Case Points based on the number of checkpoints, the complexity of its precondition, and the test data used in the test case.

A checkpoint is a condition at which the tester verifies whether the result produced by the target function matches the expected criterion. One test case consists of one or many checkpoints.

Rule 1: One checkpoint is counted as one Test Case Point.

Precondition. A test case's precondition specifies the condition required to execute the test case. Like the test data, the precondition mainly affects the cost of executing the test case. Some preconditions may be related to data prepared for the test case.
The complexity of a test case's precondition is classified into four levels: None, Low, Medium, and High (see Table 1).

Table 1. Precondition Complexity Levels

  None:   The precondition is not applicable or important for executing the test case, or the precondition is simply reused from the previous test case to continue the current test case.
  Low:    The condition for executing the test case is available, with some simple modifications required, or some simple setting-up steps are needed.
  Medium: Some explicit preparations are needed to execute the test case. The condition for executing it is available, with some additional modifications required, or some additional setting-up steps are needed.
  High:   Heavy hardware and/or software configurations are needed to execute the test case.
Test data is used to execute the test case. It can be generated at test case execution time, already prepared by previous tests, or generated by test scripts. Test data is either specific to a test case or general to a group of test cases or the whole system; in the latter cases, the data can be reused in multiple test cases.

The complexity of test data is classified into four levels: None, Low, Medium, and High, as shown in Table 2.
Table 2. Test Data Complexity Levels

  None:   No test data preparation is needed.
  Low:    Test data is needed, but it is simple enough to be created during test case execution, or the test case uses a slightly modified version of existing test data (little effort is required to modify it).
  Medium: Test data is deliberately prepared in advance, with extra effort to ensure its completeness, comprehensiveness, and consistency.
  High:   Test data is prepared in advance with considerable effort to ensure its completeness, comprehensiveness, and consistency, or by using support tools to generate it and a database to store and manage it. Scripts may be required to generate the test data.
In many cases, the test data has to be supplied by a third party. In such cases, the effort to generate test data for the test case is expected to be small, so the test data complexity should be rated Low.

Rule 2: Each complexity level of precondition and test data is assigned a number of Test Case Points. This measure is called the unadjusted Test Case Point.
Table 3. Test Case Point Allocation for Precondition

  Complexity level   Test Case Points   Standard deviation (±σ)
  None               0                  0
  Low                1                  0
  Medium             3                  0.5
  High               5                  1
Tables 3 and 4 show the number of Test Case Points allocated to each complexity level for the precondition and test data components of a test case. These constants were obtained through a survey of 18 experienced quality assurance engineers in our organization. The standard deviation values reflect the variance among the survey results. It is worth noting that estimators should adjust these constants to better reflect the characteristics of their own projects and environments.
Table 4. Test Case Point Allocation for Test Data

  Complexity level   Test Case Points   Standard deviation (±σ)
  None               0                  0
  Low                1                  0
  Medium             3                  0.6
  High               6                  1.3
2.2. Adjust Test Case Point by Type of Test
To account for the complexity differences among test types, the Test Case Point count of each test case is adjusted using a weight corresponding to its test type. The weight of the most common test type, user interface and functional testing, is established as the baseline and fixed at 1.0. The weights of other test types are relative to this baseline.

Table 5 shows the types of test, the corresponding weights, and the standard deviations of the weights. The weights were obtained via the same survey of 18 experienced quality assurance engineers in our organization. Again, estimators can review and adjust these weights when applying the method in their own organizations.
Table 5. Weights of Test Types

  Type of Test                    Weight (W)  Std. dev. (±σ)  Comments
  User interface and functional   1.00        0     User interface and functional testing is considered the baseline.
  API                             1.22        0.32  API testing verifies the accuracy of the interfaces in providing services.
  Database                        1.36        0.26  Testing the accuracy of database scripts, data integrity, and/or data migration.
  Security                        1.39        0.28  Testing how well the system withstands hacking attacks and unauthorized or unauthenticated access.
  Installation                    1.09        0.06  Testing of full, partial, or upgrade install/uninstall processes of the software.
  Networking                      1.27        0.29  Testing the communications among entities via networks.
  Algorithm and computation       1.38        0.27  Verifying algorithms and computations designed and implemented in the system.
  Usability testing               1.12        0.13  Testing the friendliness, ease of use, and other usability attributes of the system.
  Performance (manual)            1.33        0.27  Verifying whether the system meets performance requirements, assuming the test is done manually.
  Recovery testing                1.07        0.14  Verifying the accuracy of the recovery process after system crashes and other errors.
  Compatibility testing           1.01        0.03  Testing whether the software is compatible with other elements of the system it operates with, e.g. browsers, operating systems, or hardware.
The total Adjusted Test Case Point (ATCP) count is computed as

ATCP = ∑ (UTCPi * Wi) (Eq. 3.1)

where UTCPi is the number of unadjusted Test Case Points counted from the checkpoints, precondition, and test data of the ith test case, and Wi is the weight of the ith test case, taking its test type into account.

It is important to note that the list of test types is not exhaustive. Estimators can supplement it with additional test types and corresponding weights expressed relative to the weight of user interface and functional testing.
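To make the counting rules concrete, the following Python sketch applies Rule 1, Rule 2, and Eq. 3.1 to a single test case using the constants from Tables 3-5. The dictionaries and function names are illustrative only, and the constants should be recalibrated to your own organization.

```python
# Illustrative Test Case Point scoring, using the constants of Tables 3-5.
PRECONDITION_POINTS = {"none": 0, "low": 1, "medium": 3, "high": 5}   # Table 3
TEST_DATA_POINTS    = {"none": 0, "low": 1, "medium": 3, "high": 6}   # Table 4
TEST_TYPE_WEIGHTS   = {                                               # Table 5
    "ui_functional": 1.00, "api": 1.22, "database": 1.36, "security": 1.39,
    "installation": 1.09, "networking": 1.27, "algorithm": 1.38,
    "usability": 1.12, "performance_manual": 1.33, "recovery": 1.07,
    "compatibility": 1.01,
}

def unadjusted_tcp(checkpoints: int, precondition: str, test_data: str) -> int:
    """Rules 1 and 2: one point per checkpoint plus precondition and test data points."""
    return checkpoints + PRECONDITION_POINTS[precondition] + TEST_DATA_POINTS[test_data]

def adjusted_tcp(checkpoints: int, precondition: str, test_data: str, test_type: str) -> float:
    """Eq. 3.1 for a single test case: UTCP multiplied by the weight of its test type."""
    return unadjusted_tcp(checkpoints, precondition, test_data) * TEST_TYPE_WEIGHTS[test_type]

# Example: a security test case with 4 checkpoints, a Medium precondition, and Low test data.
atcp = adjusted_tcp(checkpoints=4, precondition="medium", test_data="low", test_type="security")
print(round(atcp, 2))  # 8 unadjusted points * 1.39 = 11.12
```

Summing the adjusted values over all test cases of a project yields the total ATCP of Eq. 3.1.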
IV. EFFORT ESTIMATION
Testing activities can be classified into four categories: test planning, test design, test execution, and defect reporting. Of these, test execution and defect reporting may be performed multiple times for a single test case during a project. The size measured in Test Case Points accounts for all of these activities under the assumption that each activity is performed once. The distribution of effort across these activities then allows the effort to be estimated when the test execution and defect reporting activities are performed more than once. The distribution of testing effort can be derived from historical data.

The distribution of effort across testing phases shown in Table 6 was obtained from the same survey used to collect the constants described above. Again, these values mainly reflect the experience in our organization, so estimators are encouraged to adjust them using their own experience or historical data.
Table 6. Testing Effort Distribution

  Test Planning (10% of effort, ±3%): Identifying test scope, test approaches, and associated risks. Test planning also identifies resource needs (personnel, software, hardware) and a preliminary schedule for the testing activities.
  Test Analysis and Design (25%, ±5%): Detailing test scope and objectives, evaluating and clarifying test requirements, designing the test cases, and preparing the necessary environments for the testing activities. This phase needs to ensure that combinations of tests are covered.
  Test Execution (47%, ±4%): The repeating phase of executing the test activities and objectives outlined in the test planning phase and reporting test results, defects, and risks. This phase also includes test result analysis to check the actual output against the expected results.
  Defect Tracking & Reporting (18%, ±3%): The repeating phase of tracking the progress and status of tests and defects and evaluating the current status against exit criteria and goals.
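One way to apply this distribution when execution is repeated is sketched below; this is an assumed interpretation of the adjustment rather than a formula stated in this paper. It treats the estimate as covering a single pass of each phase and scales the repeating phases by the expected number of test rounds.

```python
# Hypothetical adjustment of a single-pass effort estimate for repeated execution rounds,
# using the Table 6 phase percentages. The interpretation and numbers are assumptions;
# the percentages should come from your own historical data.
PHASE_SHARE = {"planning": 0.10, "analysis_design": 0.25, "execution": 0.47, "defect_tracking": 0.18}

def effort_with_rounds(single_pass_effort_pm: float, rounds: int) -> float:
    """Scale the repeating phases (execution, defect tracking) by the number of rounds."""
    one_off = PHASE_SHARE["planning"] + PHASE_SHARE["analysis_design"]
    repeating = PHASE_SHARE["execution"] + PHASE_SHARE["defect_tracking"]
    return single_pass_effort_pm * (one_off + rounds * repeating)

print(effort_with_rounds(10.0, rounds=2))  # 10 * (0.35 + 2 * 0.65) = 16.5 person-months
```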
Depending on the availability of information and resources, the testing effort can be estimated using one of the following simple methods:
Productivity Index
Simple Linear Regression Analysis
Estimate Effort Using Productivity Index
The Productivity Index is measured as the number of Test Case Points completed per person-month. It can be determined from historical data, and it is recommended to use the Productivity Index of past projects similar to the project being estimated. The testing effort is then computed by dividing the total Adjusted Test Case Point count (TCP) by the Productivity Index:

Effort = TCP / Productivity Index (Eq. 4.1)
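As a quick illustration with hypothetical numbers, assuming the Productivity Index is expressed as Test Case Points completed per person-month as defined above:

```python
# Hypothetical example of Eq. 4.1: effort in person-months from the total ATCP and a
# productivity index taken from similar past projects. All figures are illustrative.
atcp = 600.0                 # total Adjusted Test Case Points of the project being estimated
productivity_index = 120.0   # Test Case Points completed per person-month historically
effort_pm = atcp / productivity_index
print(effort_pm)             # 5.0 person-months
```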
Estimate Effort Using Simple Linear Regression Analysis

Regression is a more comprehensive and, when many historical data points are available, potentially more accurate method for estimating the testing effort. With this method, estimators can also compute estimation ranges and the confidence level of the estimate. The method requires data from at least three similar projects to provide meaningful estimates.

Given N completed projects in which the ith project has a Test Case Point count TCPi and an effort Ei in person-months, the simple linear regression model is

Ei = α + β * TCPi + ɛi (Eq. 4.2)

where α and β are the model coefficients and ɛi is the error term. Using least squares regression, one can easily compute estimates A and B of the coefficients α and β, respectively. The A and B values are then used to estimate the effort of new testing projects as

E = A + B * TCP (Eq. 4.3)

where TCP is the size of the test cases in Test Case Points and E is the estimated effort in person-months of the project being estimated.
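A minimal sketch of this fit, using ordinary least squares over a handful of hypothetical (TCP, effort) data points, is shown below; the historical figures are invented for illustration.

```python
# Least squares fit of Eq. 4.2 on hypothetical historical data (TCP_i, E_i in person-months).
history = [(420, 3.6), (610, 5.1), (900, 7.4), (1150, 9.8)]  # illustrative data only

n = len(history)
mean_tcp = sum(t for t, _ in history) / n
mean_e = sum(e for _, e in history) / n

# B = cov(TCP, E) / var(TCP); A = mean(E) - B * mean(TCP)
b = sum((t - mean_tcp) * (e - mean_e) for t, e in history) / sum((t - mean_tcp) ** 2 for t, _ in history)
a = mean_e - b * mean_tcp

def estimate_effort(tcp: float) -> float:
    """Eq. 4.3: estimated effort in person-months for a new project of the given TCP size."""
    return a + b * tcp

print(round(estimate_effort(800), 2))  # estimated person-months for an 800-TCP project (about 6.7 here)
```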
V. CONCLUSION
Software testing plays an important role in the success of software development and maintenance projects, and estimating testing effort accurately is a key step toward that goal. In an attempt to fill the gap in estimating software testing, this paper has proposed a method called Test Case Point Analysis for measuring and computing the size and effort of software testing activities. The input of the analysis is a set of test cases, and the output is the number of Test Case Points for the test cases being counted.

An advantageous feature of this approach is that it measures the complexity of test cases, the main work product that testers produce and use during test execution; it therefore better reflects the effort testers spend on their activities. Another advantage is that the analysis can be performed easily, by counting the number of checkpoints, rating the complexity of the precondition and test data, and determining the type of each test case. There are, however, several limitations, which suggest directions for future improvement. One limitation is that the Test Case Point measure has not yet been empirically validated; data from past testing projects need to be used to validate the effectiveness and usefulness of the size measure in estimating the effort of software testing activities. Another limitation is the concern of whether the complexity levels of a test case's precondition and test data properly reflect the actual complexity of these attributes. Future improvements to the method need to address these limitations.
VI. REFERENCES

[1] Y. Yang, Q. Li, M. Li, Q. Wang, "An empirical analysis on distribution patterns of software maintenance effort", International Conference on Software Maintenance, 2008, pp. 456-459.
[2] S. Kumar, "Understanding TESTING ESTIMATION using Use Case Metrics". Web: http://www.stickyminds.com/sitewide.asp?ObjectId=15698&Function=edetail&ObjectType=ART
[3] A.J. Albrecht, "Measuring Application Development Productivity", Proc. IBM Applications Development Symp., SHARE-Guide, pp. 83-92, 1979.
[4] TestingQA.com, "Test Case Point Analysis". Web: http://testingqa.com/test-case-point-analysis/
[5] TestingQA.com, "Test Effort Estimation". Web: http://testingqa.com/test-effort-estimation/
[6] NESMA (2008), "FPA according to NESMA and IFPUG; the present situation". Web: http://www.nesma.nl/download/artikelen/FPA%20according%20to%20NESMA%20and%20IFPUG%20-%20the%20present%20situation%20(vs%202008-07-01).pdf
[7] G. Karner, "Metrics for Objectory". Diploma thesis, University of Linköping, Sweden, No. LiTHIDA-Ex-9344:21, December 1993.
[8] E. Aranha and P. Borba, "An estimation model for test execution effort", Proceedings of the First International Symposium on Empirical Software Engineering and Measurement, 2007.
[9] E. R. C. de Almeida, B. T. de Abreu, and R. Moraes, "An Alternative Approach to Test Effort Estimation Based on Use Cases", Proceedings of the International Conference on Software Testing, Verification, and Validation, 2009.
[10] S. Nageswaran, "Test effort estimation using use case points", Proceedings of the 14th International Internet & Software Quality Week, 2001.
[11] C.R. Symons, "Software Sizing and Estimating: Mk II FPA". Chichester, England: John Wiley, 1991.
[12] A. Abran, D. St-Pierre, M. Maya, J.M. Desharnais, "Full function points for embedded and real-time software", Proceedings of the UKSMA Fall Conference, London, UK, 14, 1998.
[13] D.G. Silva, B.T. Abreu, M. Jino, "A Simple Approach for Estimation of Execution Effort of Functional Test Cases", Proceedings of the 2009 International Conference on Software Testing, Verification and Validation, 2009.