SOFTWARE TESTING
METRICS
Quantitative Measurement Of Testing
Agenda
• Considerations
• Importance of Test Metrics
• Test Metrics – Manual Testing
• Test Metrics – Automation Testing
• Test Metrics – Performance Testing
• Common Metrics For All Types Of Testing
Considerations
• For managers, test metrics are key to improving software projects as well as ensuring that progress remains
on schedule.
• To understand test metrics, a manager should have extensive experience in managing software testing projects
• The manager should have worked on SDLC projects, for example waterfall model projects
• Data for metrics should be tracked manually or through test management tools
• Data for metrics needs to be captured from the test planning phase through go-live
Importance of Test Metrics
A software metric is a standard of measure of the degree to which a software system or process possesses some
property.
Using metrics, test managers can identify risk areas, improvement opportunities, testing status, and testing effectiveness.
QA teams are beginning to rely more on historical data and reports to handle various issues as well as to show
which areas of software projects can be improved.
"The final judgment and decision-making should be driven by analysis and not only data. Metric s) assist in
decision-making, but do not substitute it.”
Famous phrase: “You can’t manage what you don’t measure” (attributed to W. Edwards Deming)
Test Metrics – Manual Testing
• Test Design Coverage
Formula: (Number of testable requirements covered by test cases created / Number of testable system
requirements planned to be covered)
• Test Execution Coverage
Formula: (Number of testable requirements covered by test cases executed / Number of testable system
requirements planned to be covered)
• Test case density
Formula: Number of test cases per unit size of the system tested
(A test case is defined as a simple test case with up to 10 steps)
• Test design completion
Formula: (Number of test cases created / Number of planned test cases) * 100
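A minimal Python sketch of the coverage and completion formulas on this slide; all counts are illustrative values, not data from any real project.

# Illustrative values; replace with data from your test management tool.
requirements_planned = 120             # testable system requirements planned to be covered
requirements_covered_by_design = 96    # requirements covered by created test cases
requirements_covered_by_execution = 84 # requirements covered by executed test cases
test_cases_created = 300
test_cases_planned = 340
system_size_units = 40                 # unit size of the system, e.g. number of use cases

test_design_coverage = requirements_covered_by_design / requirements_planned
test_execution_coverage = requirements_covered_by_execution / requirements_planned
test_case_density = test_cases_created / system_size_units
test_design_completion = (test_cases_created / test_cases_planned) * 100

print(f"Test design coverage:    {test_design_coverage:.0%}")
print(f"Test execution coverage: {test_execution_coverage:.0%}")
print(f"Test case density:       {test_case_density:.1f} test cases per unit")
print(f"Test design completion:  {test_design_completion:.1f}%")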
Test Metrics – Manual Testing
• Test execution completion
Formula: (Number of test cases executed / Number of test cases planned for execution) * 100
• Productivity - Test Design
Formula: (Number of test cases designed / Test design effort in person days)
• Productivity - Test Execution
Formula: (Number of test cases executed / Test execution effort in person days)
• Test Effectiveness
Formula: Percentage of defects found in testing done by the team with respect to the total number of defects
found by the team plus those raised in subsequent test phases and a mutually agreed period in production
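A short Python sketch of the execution, productivity and effectiveness formulas above, using assumed sample numbers.

test_cases_designed = 300
test_cases_executed = 280
test_cases_planned_for_execution = 300
test_design_effort_person_days = 25
test_execution_effort_person_days = 20
defects_found_by_team = 90
defects_found_later = 10   # subsequent test phases plus agreed period in production

test_execution_completion = (test_cases_executed / test_cases_planned_for_execution) * 100
design_productivity = test_cases_designed / test_design_effort_person_days
execution_productivity = test_cases_executed / test_execution_effort_person_days
test_effectiveness = defects_found_by_team / (defects_found_by_team + defects_found_later) * 100

print(f"Test execution completion: {test_execution_completion:.1f}%")
print(f"Design productivity:       {design_productivity:.1f} test cases per person day")
print(f"Execution productivity:    {execution_productivity:.1f} test cases per person day")
print(f"Test effectiveness:        {test_effectiveness:.1f}%")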
Test Metrics – Manual Testing
• Invalid defects
Formula: Percentage of defects rejected with respect to total number of defects raised
• Defect Density
Formula: Valid defects per unit size of the software
(The unit size of the software could be a requirement, a use case, a screen, or lines of code)
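A small sketch of the defect metrics above, assuming defect counts come from a defect tracker and size is measured in use cases; the numbers are made up for illustration.

defects_raised = 110
defects_rejected = 12   # invalid / not-a-defect
use_cases = 40          # chosen unit size of the software

valid_defects = defects_raised - defects_rejected
invalid_defect_percentage = (defects_rejected / defects_raised) * 100
defect_density = valid_defects / use_cases

print(f"Invalid defects: {invalid_defect_percentage:.1f}%")
print(f"Defect density:  {defect_density:.2f} valid defects per use case")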
Test Metrics – Automation Testing
• Return On Investment (ROI) of Automation
Formula: (Gain + Investment) / Investment
Investment = (a) + (b) + (c) + (d) + (e)
Gain = [(g) * (h) * Period of ROI] / 8
(a) Automated test script development time = (Hourly automation time per test * Number of automated
test cases) / 8
(b) Automated test script execution time = (Automated test execution time per test * Number of automated
test cases * Period of ROI) / 18
(c) Automated test analysis time = (Test Analysis time * Period of ROI) / 8
(d) Automated test maintenance time = (Maintenance time * Period of ROI) / 8
(e) Manual Execution Time = (Manual test execution time * Number of manual test cases * Period of ROI) /
8
(g) Manual test execution time or analysis time
(h) Total number of test cases (automated + manual)
Period of ROI is the number of weeks for which ROI is to be calculated. Division by 8 is applied wherever
manual effort is involved; division by 18 is applied wherever execution is automated.
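A Python sketch of the ROI calculation exactly as defined on this slide; every input value is illustrative, and the 8 and 18 divisors follow the slide's convention for converting hours to days.

# Illustrative inputs (times are in hours); replace with real project data.
automation_dev_time_per_test = 2.0   # hours to automate one test
automated_exec_time_per_test = 0.1   # hours to run one automated test
manual_exec_time_per_test = 0.5      # hours to run one test manually
analysis_time = 4.0                  # hours of result analysis per week
maintenance_time = 3.0               # hours of script maintenance per week
automated_test_cases = 200
manual_test_cases = 100
period_of_roi = 26                   # weeks over which ROI is calculated

a = (automation_dev_time_per_test * automated_test_cases) / 8                    # (a) script development
b = (automated_exec_time_per_test * automated_test_cases * period_of_roi) / 18   # (b) automated execution
c = (analysis_time * period_of_roi) / 8                                          # (c) result analysis
d = (maintenance_time * period_of_roi) / 8                                       # (d) script maintenance
e = (manual_exec_time_per_test * manual_test_cases * period_of_roi) / 8          # (e) remaining manual execution

investment = a + b + c + d + e
# Gain = (g) * (h) * Period of ROI / 8, with (g) manual execution time and (h) total test cases
gain = (manual_exec_time_per_test * (automated_test_cases + manual_test_cases) * period_of_roi) / 8

roi = (gain + investment) / investment   # ROI as defined on this slide
print(f"Investment: {investment:.1f} person days, Gain: {gain:.1f} person days, ROI: {roi:.2f}")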
Test Metrics – Automation Testing
• Percentage of reusable scripts in new enhancements
Formula: (Number of reusable scripts used / Total number of new functionalities created) * 100
• Percentage of defects in test scripts per automated test case
Formula: (Number of defects found in automated scripts / Number of automated test cases)*100
• Automatability achieved
Formula: Number of automated test cases run in a delivery cycle / Total number of test cases selected for the
delivery cycle
• Automation achieved per test case
Formula: Number of test steps automated for a test case / Total number of test steps for the test case
Test Metrics – Automation Testing
• Percent of automation achieved in Regression
Formula: Number of test cases automated in the regression test pack / Total number of regression test cases
available for automation
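A combined Python sketch of the automation ratios from the last two slides; the counts are hypothetical and only show how each formula is applied.

reusable_scripts_used = 18
new_functionalities = 30
defects_in_automated_scripts = 5
automated_test_cases = 200
automated_cases_run_in_cycle = 150
cases_selected_for_cycle = 180
steps_automated_in_case = 8
total_steps_in_case = 10
regression_cases_automated = 220
regression_cases_automatable = 260

reusable_script_percentage = (reusable_scripts_used / new_functionalities) * 100
script_defect_rate = (defects_in_automated_scripts / automated_test_cases) * 100
automatability = automated_cases_run_in_cycle / cases_selected_for_cycle
automation_per_test_case = steps_automated_in_case / total_steps_in_case
regression_automation = regression_cases_automated / regression_cases_automatable

print(f"Reusable scripts in new enhancements:       {reusable_script_percentage:.1f}%")
print(f"Defects in scripts per automated test case: {script_defect_rate:.1f}%")
print(f"Automatability achieved:                    {automatability:.0%}")
print(f"Automation achieved per test case:          {automation_per_test_case:.0%}")
print(f"Automation achieved in regression:          {regression_automation:.0%}")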
Test Metrics – Performance Testing
• Performance Execution Data - Client Side
This metric gives detailed information about client-side data collected during execution.
Following are some of the data points of this metric:
1. Running Users
2. Response Time
3. Hits per Second
4. Throughput
5. Total Transactions per second
6. Time to first byte
7. Errors per second
Test Metrics – Performance Testing
• Performance Execution Data - Server Side
This metric gives detailed information about server-side data collected during execution.
Following are some of the data points of this metric:
1. CPU Utilization
2. Memory Utilization
3. HEAP Memory Utilization
4. Database connections per second
• Product Quality
Formula: (Number of defects in performance test scripts found during expert reviews / Number of
performance test cases created) * 100
• Throughput
Formula: (Number of performance test cases designed / Performance test design effort in person days or
hours)
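A small sketch of the product quality and throughput formulas above, using hypothetical review and effort figures.

script_defects_found_in_reviews = 4
performance_test_cases_created = 25
performance_test_cases_designed = 25
design_effort_person_days = 10

product_quality = (script_defects_found_in_reviews / performance_test_cases_created) * 100
throughput = performance_test_cases_designed / design_effort_person_days

print(f"Product quality (script defects per test case): {product_quality:.1f}%")
print(f"Throughput: {throughput:.1f} performance test cases per person day")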
Test Metrics – Performance Testing
• Effectiveness of Verification
Formula: (Number of defects found in performance testing done by the team / Total number of defects found
by the team in performance testing plus defects found in subsequent test phases and a mutually agreed period
in production or another environment) * 100
Common Metrics For All Types Of Testing
• Effort Variance (EV)
Formula: [(Actual effort - Estimated effort) / Estimated effort] * 100
• Schedule Variance (SV)
Formula: [(Actual number of days - Estimated number of days) / Estimated number of days] * 100
• Scope Change
Formula: [(Total scope - Previous scope) / Previous scope] * 100
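A short sketch of these variance formulas with assumed plan-versus-actual figures; positive values indicate overrun or growth.

estimated_effort_person_days = 100
actual_effort_person_days = 115
estimated_days = 40
actual_days = 44
previous_scope = 120   # e.g. number of requirements in the previous baseline
total_scope = 132

effort_variance = ((actual_effort_person_days - estimated_effort_person_days) / estimated_effort_person_days) * 100
schedule_variance = ((actual_days - estimated_days) / estimated_days) * 100
scope_change = ((total_scope - previous_scope) / previous_scope) * 100

print(f"Effort variance:   {effort_variance:+.1f}%")
print(f"Schedule variance: {schedule_variance:+.1f}%")
print(f"Scope change:      {scope_change:+.1f}%")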
Thank You!
For any queries, contact me at jatinkochhar@hotmail.com
