  • Slide 10 shows the integration between McCabe IQ and configuration management tools. McCabe’s current partners in the configuration management space are Merant, Rational, and CA. There are two key areas of synergy between McCabe IQ and configuration management tools. First, triggers can be set up in the configuration management tool so that McCabe IQ analyzes software quality as each software change is made; this lets you monitor and manage quality as the software changes, rather than just storing the changed software itself. Second, McCabe IQ and the configuration management tool can combine to provide a test environment in the same way that you currently have production and debugging environments. For example, in addition to a nightly production build, you can set up a nightly instrumented build that monitors test execution. Then, as your testers, who may not know anything about McCabe IQ or configuration management, execute tests, you can analyze the effectiveness of their testing and refocus their efforts as appropriate.
  • Slide 11 shows the integration between McCabe IQ and test automation. McCabe’s current partner in the test automation space is Mercury Interactive. McCabe IQ analyzes source code to focus testing effort on the highest-risk software, and this information is used by TestDirector when planning and managing testing activity. McCabe IQ also produces test executables that monitor testing progress, so that you can develop a set of WinRunner scripts that effectively test your application rather than just pushing all the buttons and selecting all the menu choices. Just pushing buttons and selecting menu choices typically exercises less than 30% of the application logic. For non-GUI applications, the McCabe ReTest component automates test case execution and verification. Putting it all together, McCabe IQ allows you to manage your automated testing effort based on software risk, and make sure your automated test suites provide a thorough, effective test of your application.
  • Slide 15 shows the McCabe Compare add-on. McCabe Compare helps identify and maintain duplicate and near-duplicate software modules by comparing metrics, names, logic structure, data references, and other characteristics. Since similar software modules often contain similar errors, McCabe Compare helps propagate software changes everywhere they are needed.
  • Slide 16 shows the McCabe Change add-on. McCabe Change identifies software change at the module level, so it can be used along with quality and testing information to form a complete profile of technical risk. In the report at the right, there are several low-complexity changed modules, and several high-complexity unchanged modules, but only one high-complexity changed module, which should be the main focus of review and testing.
  • Slide 17 shows the McCabe Test product. McCabe Test analyzes source code and tracks execution paths during testing using source-level instrumentation. It analyzes testing thoroughness at many levels of detail, from specific paths through individual source code modules up to program summary statistics. It also helps increase testing thoroughness by identifying paths that remain to be tested. By using the results of detailed structural code analysis to guide testing, McCabe Test focuses test effort on high-risk software, and uses precise, objective metrics to monitor testing progress. The result is that more errors can be found and fixed during the testing phase, before the software is released. Also, the test progress metrics allow you to estimate the time and effort necessary to finish testing, and to know when you can stop testing and ship with confidence. The top diagram is a structure chart in which the colors, percentages, and progress bars indicate testing progress, showing testing thoroughness in the context of program structure. The diagram at the lower left is the detailed control flow graph structure of a single module, with an untested path highlighted in blue. The report fragment to the lower right shows the detailed sequence of decision outcomes corresponding to that untested path, with source line numbers, control flow graph node numbers, and source code decision expressions given for reference.
  • Slide 19 shows the McCabe Slice add-on. McCabe Slice traces functionality to implementation by identifying and isolating the software that was executed during specific functional transactions. This helps isolate business rules for reengineering, and also helps isolate errors for debugging. The diagram shows the section of code that was executed in response to a particular functional scenario, and was not executed by any of a collection of other, related functional scenarios. This factors out all common processing and isolates the software that is unique to the target functionality. The left side of the diagram shows the unique control flow fragment highlighted in red on the control flow graph, and the right side shows the highlighted source code.
  • Slide 21 gives a summary of the McCabe IQ component products and add-ons. McCabe QA helps improve quality with precise, objective metrics. McCabe Data analyzes data impact to focus reengineering and testing efforts. McCabe Compare helps eliminate or consistently maintain duplicate and near-duplicate code. McCabe Change helps focus on the impact of new and changed software. McCabe Test helps increase test effectiveness and focus testing efforts on high-risk software areas. McCabe TestCompress helps increase test efficiency by identifying small subsets of large production data sets that can be used as test data. McCabe Slice traces functionality to code and isolates the subset of code that implements particular functional transactions. And McCabe ReTest automates regression testing for non-GUI software.
  • Initiate Exercise I. After each concept, click on the Battlemap and illustrate the concept.
  • Initiate Exercise I. After each concept, click on the Battlemap and illustrate the concept.
  • Initiate Exercise I. After each concept, click on the Battlemap and illustrate the concept.
  • With the conclusions of the previous slide in mind, the testing challenges can be summarised as shown on this slide.
  • This slide is self-explanatory!
  • This slide is simply to show the change in mode of the Battlemap from Standard to Coverage mode.
  • This slide is self-explanatory.
  • Testing results can be examined at both the structural (Battlemap) level, and at the flowgraph level.
  • Additional functional tests can often be derived simply by examination of the Battlemap in Coverage mode. A first step is to isolate the untested calling hierarchies, and determine whether the names of untested modules provide any insight into extra functional tests. Frequently the assistance of someone who knows the software is invaluable at this stage, although it is not strictly necessary in order to gain value from the coverage Battlemap. Functional testers can often challenge developers as to why certain modules remain untested even after a full functional test run - this type of dialogue between testers and developers can be healthy and useful - although caution should be used to keep functional tests “unbiased” towards the internals of the software. Further work can be undertaken on the partially tested modules if necessary, although this will usually require knowledge of the code to derive high-level functional tests from untested paths in modules.
  • Following the derivation of the very large number of theoretical execution paths through software, the question is: How can a sensible set of theoretical tests be derived for a software component?
  • In order to examine code coverage we will use two simple code examples. The simple question is “Which of the two flowgraphs is more complex?” (Obviously the answer is B!) The next question is “Therefore which requires more testing?”. Again the answer is clearly B.
  • When examining code coverage, we can see that a minimum of 2 tests is required to cover all the code in example A. Looking at example B, we can see that 2 similar tests will also cover all the code. In fact, if there were 1,000,000 decisions in sequence in example B, then 2 tests would still cover all the code. Clearly, code coverage has nothing to do with the amount of logic in a piece of code; it is just a mathematical side effect of executing code, and is not a good mechanism for determining the testedness of code.
  • This is where McCabe found himself in 1977. He was asked by the NSA to look at large FORTRAN systems and determine how they should theoretically be tested. He took a simple flowgraph (as shown) and determined that there were (in this case) 2 tests required to cover all the code - which is a minimum level for testing. He then determined that one extra test (as shown) could be performed. This extra test will not execute any additional code (because it has all already been covered by the first 2 tests), but it will test the first decision against the second to ensure that they are not tied together (dependent). If the fourth (and final) test is performed in addition to the first three, there is no additional new result, apart from the fact that all possible execution paths have been executed. Given the previous determination that the total number of possible execution paths quickly grows very large, performing all possible tests is impractical. The number of tests needed (in this case 3) can be measured for any flowgraph, and this number is called complexity.
  • When looking at the earlier types of testing which are more focussed on the code internals, cyclomatic path derivation can be a useful tool. Recalling from previous sections, complexity is equal to the number of linearly independent test paths through a module. An important result is that a full set of cyclomatic paths represents a “meaningful subset of all possible execution paths”, i.e. that any conceivable execution path can be obtained using a combination of the paths in a cyclomatic set. Additionally, because cyclomatic paths can be derived using McCabe Test, they can be used easily to derive additional tests that could be added to an initial set.
  • McCabe Test can produce the untested cyclomatic paths - that is, the cyclomatic paths which remain to be executed. These are displayed as both a flowgraph and a decision table. The mechanism to show these paths is to right-click on a tested module and then select ‘Test Paths’. The diagrams and reports which appear when ‘Test Paths’ is selected are configurable under ‘Preferences->Testing’ - the default is all cyclomatic paths.
  • Alternative reports available under ‘Test Paths’ include a report of untested branches. This report can be more useful for less rigorous testing approaches, and more quickly displays the untested sections of code and the associated decisions. Note the ‘# Exec’ column which describes the number of times each branch has been executed.
  • Need to introduce System Level Integration Paths. Assumption is that S0 and S1 have been covered in the metrics section. This system has S1 of 6 - meaning there are 6 end-to-end paths which would cover all the iv(G) paths in each module.
  • Need to describe how the graphical Integration paths work within McCabe Test.
  • Description of textual report.
  • High level verification is implicit in coverage analysis. Examination of coverage reports is a useful starting point for verifying that code has been sufficiently tested.
  • When looking at coverage reports, there are several different mechanisms for measuring coverage. McCabe Test supports/provides four major types.
  • Coverage Reports at System Level can be used to determine when testing is complete. In particular, using incremental test coverage analysis (examining the difference in coverage between test sets) provides a powerful mechanism for assessing a good time to stop testing. Caution should be exercised, however, if using this technique as coverage increments will not decay linearly. It is important to ensure that basic (level one) functional testing is complete before using coverage increments to determine an end-point for testing.
  • The assumption that all the code needs to be tested equally is an invalid one. All systems have critical or important code, with the remainder being less important either because it is run less often, or because it is already well tested or perhaps that the delivered functionality is less important. It is important to focus limited testing resources onto the critical code.
  • The technique is to import coverage across the whole analysis, then to bind the previously created “critical” class. The Combined Coverage report (right-click on the bound class) for this class can be used to assess the testedness of code elements at a high level. Zooming into the class using the Battlemap then allows the contents of the class to be viewed. Remaining tests should be focussed on increasing coverage of the critical elements; coverage in other areas of the system will be a side effect of trying to increase the coverage of the critical elements.
  • Finally, this approach of grouping elements by criticality can be refined, by creating several groups (classes) containing modules of increasing criticality. Examining the coverage of each of the criticality groups (classes) in turn will provide a detailed breakdown of the spread of coverage across the system. This technique is more useful for providing a risk assessment of the completeness of coverage. It is difficult to derive additional tests from several different groups of critical elements - experience shows that the best method is to stick to a single small group of critical modules to derive additional tests as described in the previous slide.
  • This slide should be self-explanatory. The reason this is introduced here is that it becomes relevant later - although load/save testdata is not specific to ‘When to Stop Testing’ only! Note: if two analyses contain common code (source files), it is possible to import coverage between programs. [The way to do this is to export a repository testdata item, then change the program name of the existing testdata item (so that it can be imported into the second program), then reimport the original testdata item. The result is duplicate coverage in 2 programs. In the second program, only identical code elements will have coverage imported for them.]
  • Let’s assume that a software application has been analysed and coverage achieved. If changes are made to the code, then the parsers will detect a change in the date stamp of the source files, and will reparse the modified files. If any modules inside these changed files have been modified, then the parser will internally mark the module as being changed. If the existing coverage results are then imported into the changed project (or simply if you change to coverage mode), the modified code modules will have had all coverage removed from them. However, the unchanged code modules will retain the previous coverage. This can be used in order to focus testing on the changed code only.
  • Metrics Trending (VQT Only!) is useful to ensure improving test coverage between releases. A powerful technique is to utilise the change detection described in the previous slide, and then import previous test results. The objective at each release is to retest the application to achieve similar or better coverage results than the last version. In this manner the changes will have been tested, and an improving test schedule will be easy to maintain.
  • A description of McCabe Change. [a brazen attempt to get the client to buy more licenses!]
  • Self-explanatory.

Presentation Transcript (Copyright McCabe)

  • Management Overview 9861 Broken Land Parkway Fourth Floor Columbia, Maryland 21046 800-638-6316 www.mccabe.com [email_address] 1-800-634-0150
  • Agenda
    • McCabe IQ Overview
    • Software Measurement Issues
    • McCabe Concepts
    • Software Quality Metrics
    • Software Testing
    • Questions and Answers
  • About McCabe & Associates 20 Years of Expertise Global Presence Analyzed Over 25 Billion Lines of Code
  • McCabe IQ process flow (diagram labels: Analysis platform, Target platform, Source code, Instrumented source code, Compile and run, Execution log, McCabe IQ, Effective Testing, Quality Management)
  • McCabe IQ and Configuration Management
    • Merant PVCS
    • Rational ClearCase
    • CA Endevor
    (Diagram labels: McCabe IQ, Execution Log, Test Environment, Effective Testing, Quality Management)
    • Monitor quality as software changes
    • Manage test environment
  • McCabe IQ and Test Automation McCabe IQ
    • Mercury Interactive:
    • TestDirector
    • WinRunner
    (Diagram labels: Source code, Test executable, Execution log, Risk Management, Test Management, GUI Test Automation, Effective Testing)
    • Risk-driven test management
    • Effective, automated testing
    Non-GUI Test Automation
  • McCabe IQ Components
    • McCabe IQ Framework (metrics, data, visualization, testing, API)
    • TESTING: McCabe Test, McCabe TestCompress, McCabe Slice, McCabe ReTest
    • QUALITY ASSURANCE: McCabe QA, McCabe Data, McCabe Compare, McCabe Change
    • Source Code Parsing Technology (C, C++, Java, Visual Basic, COBOL, Fortran, Ada)
  • McCabe QA
    • McCabe QA measures software quality with industry-standard metrics
      • Manage technical risk factors as software is developed and changed
      • Improve software quality using detailed reports and visualization
      • Shorten the time between releases
      • Develop contingency plans to address unavoidable risks
  • McCabe Data
    • McCabe Data pinpoints the impact of data variable modifications
      • Identify usage of key data elements and data types
      • Relate data variable changes to impacted logic
      • Focus testing resources on the usage of selected data
  • McCabe Compare
    • McCabe Compare identifies reusable and redundant code
      • Simplify maintenance and re-engineering of applications through the consolidation of similar code modules
      • Search for software defects in similar code modules, to make sure they’re fixed consistently throughout the software
  • McCabe Change
    • McCabe Change identifies new and changed modules
      • Manage change with more precision than the file-level information from CM tools
      • Work with a complete technical risk profile
        • Complex?
        • Poorly tested?
        • New or changed?
      • Focus review and test efforts
  • McCabe Test
    • McCabe Test maximizes testing effectiveness
      • Focus testing on high-risk areas
      • Objectively measure testing effectiveness
      • Increase the failure detection rate during internal testing
      • Assess the time and resources needed to ensure a well-tested application
      • Know when to stop testing
  • McCabe Slice
    • McCabe Slice traces functionality to implementation
      • Identifies code that implements specific functional transactions
      • Isolates code that is unique to the implementation of specific functional transactions
      • Helps extract business rules for application redesign
  • McCabe IQ Components Summary
    • McCabe QA : Improve quality with metrics
    • McCabe Data : Analyze data impact
    • McCabe Compare : Eliminate duplicate code
    • McCabe Change : Focus on changed software
    • McCabe Test : Increase test effectiveness
    • McCabe TestCompress : Increase test efficiency
    • McCabe Slice : Trace functionality to code
    • McCabe ReTest : Automate regression testing
  • Software Measurement Issues
    • Risk management
    • Software metrics
    • Complexity metrics
    • Complexity metric evaluation
    • Benefits of complexity measurement
  • Software Risk Management
    • Software risk falls into two major categories
      • Non-technical risk: how important is the system?
        • Usually known early
      • Technical risk: how likely is the system to fail?
        • Often known too late
    • Complexity analysis quantifies technical risk
      • Helps quantify reliability and maintainability
        • This helps with prioritization, resource allocation, contingency planning, etc.
      • Guides testing
        • Focuses effort to mitigate greatest risks
        • Helps deploy testing resources efficiently
  • Software Metrics Overview
    • Metrics are quantitative measures
      • Operational: cost, failure rate, change effort, …
      • Intrinsic: size, complexity, …
    • Most operational metrics are known too late
      • Cost, failure rate are only known after deployment
      • So, they aren’t suitable for risk management
    • Complexity metrics are available immediately
      • Complexity is calculated from source code
    • Complexity predicts operational metrics
      • Complexity correlates with defects, maintenance costs, ...
  • Complexity Metric Evaluation
    • Good complexity metrics have three properties
      • Descriptive: objectively measure something
      • Predictive: correlate with something interesting
      • Prescriptive: guide risk reduction
    • Consider lines of code
      • Descriptive: yes, measures software size
      • Predictive, Prescriptive: no
    • Consider cyclomatic complexity
      • Descriptive: yes, measures decision logic
      • Predictive: yes, predicts errors and maintenance
      • Prescriptive: yes, guides testing and improvement
  • Benefits of Complexity Measurement
    • Complexity metrics are available from code
      • They can even be estimated from a design
    • They provide continuous feedback
      • They can identify high-risk software as soon as it is written or changed
    • They pinpoint areas of potential instability
      • They can focus resources for reviews, testing, and code improvement
    • They help predict eventual operational metrics
      • Systems with similar complexity metric profiles tend to have similar test effort, cost, error frequency, ...
  • McCabe Concepts
    • Definition: In C and C++, a module is a function or subroutine with a single entry point and a single exit point. A module is represented by a rectangular box on the Battlemap.
    (Battlemap example: main calls function a, function c, function d, and printf. Legend: difficult-to-maintain module, difficult-to-test module, well-designed/testable module, library module.)
  • Analyzing a Module
    • For each module, an annotated source listing and flowgraph is generated.
    • Flowgraph - an architectural diagram of a software module’s logic.
    Stmt Number / Code:
      1  main()
      2  {
      3      printf(“example”);
      4      if (y > 10)
      5          b();
      6      else
      7          c();
      8      printf(“end”);
      9  }
    Flowgraph: node = statement or block of sequential statements (here statements 1-3, 4, 5, 7, and 8-9 each map to a node), including condition and end-of-condition nodes; edge = flow of control between nodes. Battlemap: main calls b, c, and printf.
  • Flowgraph Notation (C) - constructs illustrated:
    • if (i) ;
    • if (i) ; else ;
    • if (i || j) ;
    • do ; while (i);
    • while (i) ;
    • switch(i) { case 0: break; ... }
    • if (i && j) ;
  • Flowgraph and Its Annotated Source Listing (screenshot: nodes 0-9, decision nodes marked with an asterisk, e.g. 1*, 4*, 6*; callouts indicate origin information, node correspondence, metric information, and a decision construct)
  • Low Complexity Software
    • Reliable
      • Simple logic
        • Low cyclomatic complexity
      • Not error-prone
      • Easy to test
    • Maintainable
      • Good structure
        • Low essential complexity
      • Easy to understand
      • Easy to modify
  • Moderately Complex Software
    • Unreliable
      • Complicated logic
        • High cyclomatic complexity
      • Error-prone
      • Hard to test
    • Maintainable
      • Can be understood
      • Can be modified
      • Can be improved
  • Highly Complex Software
    • Unreliable
      • Error prone
      • Very hard to test
    • Unmaintainable
      • Poor structure
        • High essential complexity
      • Hard to understand
      • Hard to modify
      • Hard to improve
  • Would you buy a used car from this software?
    • Problem: There are size and complexity boundaries beyond which software becomes hopeless
      • Too error-prone to use
      • Too complex to fix
      • Too large to redevelop
    • Solution: Control complexity during development and maintenance
      • Stay away from the boundary
  • Important Complexity Measures
    • Cyclomatic complexity: v(G)
      • Amount of decision logic
    • Essential complexity: ev(G)
      • Amount of poorly-structured logic
    • Module design complexity: iv(G)
      • Amount of logic involved with subroutine calls
    • Data complexity: sdv
      • Amount of logic involved with selected data references
  • Cyclomatic Complexity
    • The most famous complexity metric
    • Measures amount of decision logic
    • Identifies unreliable software, hard-to-test software
    • Related test thoroughness metric, actual complexity, measures testing progress
    • Cyclomatic complexity , v - A measure of the decision logic of a software module.
      • Applies to decision logic embedded within written code.
      • Is derived from predicates in decision logic.
      • Is calculated for each module in the Battlemap.
      • Grows from 1 to high, finite number based on the amount of decision logic.
      • Is correlated to software quality and testing quantity; units with higher v , v>10 , are less reliable and require high levels of testing.
    Cyclomatic Complexity
  • Cyclomatic Complexity - three ways to calculate v for the example flowgraph (v = 11):
    • Region method: count the enclosed regions of the flowgraph, R1-R11 (beware of crossing lines); regions = 11
    • Edge and node method: v = e - n + 2; with e = 24 edges and n = 15 nodes, v = 24 - 15 + 2 = 11
    • Predicate method: the decision predicate counts (2, 1, 1, 2, 1, 1, 1, 1) sum to 10, so v = 10 + 1 = 11
    A small worked example in C follows.
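    To make the predicate and edge/node methods concrete, here is a small, hypothetical C module (invented for illustration, not taken from the slide). It has three simple decision predicates, so the predicate method gives v = 3 + 1 = 4, and the edge and node method gives the same value for a natural drawing of its flowgraph.

      /* Hypothetical module with three decision predicates:
         one 'while', one 'if', and one 'else if'. */
      int summarize(const int *values, int count)
      {
          int i = 0, total = 0;

          while (i < count) {          /* predicate 1 */
              if (values[i] > 0)       /* predicate 2 */
                  total += values[i];
              else if (values[i] < 0)  /* predicate 3 */
                  total -= values[i];
              i++;
          }
          /* Predicate method:     v = 3 + 1 = 4
             Edge and node method: drawing the flowgraph with 8 nodes and
             10 edges gives v = e - n + 2 = 10 - 8 + 2 = 4.
             Either way, there are 4 linearly independent paths. */
          return total;
      }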
  • Vital Signs and High v’s - risks of increasing v over time (chart, v rising from 5 toward 20):
    • Higher risk of failures
    • Difficult to understand
    • Unpredictable expected results
    • Complicated test environments including more test drivers
    • Knowledge transfer constraints to new staff
  • Essential Complexity
    • Measures amount of poorly-structured logic
    • Remove all well-structured logic, take cyclomatic complexity of what’s left
    • Identifies unmaintainable software
    • Pathological complexity metric is similar
      • Identifies extremely unmaintainable software
    • Essential complexity , ev - A measure of “structuredness” of decision logic of a software module.
      • Applies to decision logic embedded within written code.
      • Is calculated for each module in the Battlemap.
      • Grows from 1 to v based on the amount of “unstructured” decision logic.
      • Is associated with the ability to modularize complex modules.
      • If ev increases , then the coder is not using structured programming constructs.
    Essential Complexity
  • Essential Complexity - Unstructured Logic: branching out of a loop, branching into a loop, branching into a decision, branching out of a decision (a sketch of the first pattern follows)
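    As a hedged C illustration of the first pattern (hypothetical code, not from the slide), the goto below branches out of the loop, so neither the loop nor the nested decision reduces to a single-entry, single-exit construct and essential complexity rises toward v. Rewriting the search with a single structured exit would bring ev back to 1.

      #include <stdio.h>

      /* Hypothetical example of unstructured logic: branching out of a loop. */
      int find_first_negative(const int *values, int count)
      {
          int i;

          for (i = 0; i < count; i++) {   /* loop decision               */
              if (values[i] < 0)          /* decision nested in the loop */
                  goto report;            /* branch out of the loop      */
          }
          return -1;                      /* no negative value found */

      report:
          printf("first negative at index %d\n", i);
          return i;
      }
      /* v = 3 (two decisions + 1); because the inner decision jumps out of
         the loop, neither construct reduces, so ev would typically also be 3. */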
  • Essential Complexity - Flowgraph Reduction
    • Essential complexity, ev, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions that conform to single-entry, single-exit constructs.
    Example: cyclomatic complexity = 4, essential complexity = 1 - every decision reduces, as in the sketch below.
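    As a hedged C sketch of the same reduction (hypothetical code, not the slide’s flowgraph): every decision below is a single-entry, single-exit construct, so flowgraph reduction removes them all, leaving cyclomatic complexity 4 but essential complexity 1.

      /* Hypothetical, fully structured module: v = 4 (three decisions), ev = 1. */
      int clamp_and_count(int *values, int count, int limit)
      {
          int i, clamped = 0;

          for (i = 0; i < count; i++) {        /* decision 1: loop        */
              if (values[i] > limit) {         /* decision 2: upper bound */
                  values[i] = limit;
                  clamped++;
              } else if (values[i] < -limit) { /* decision 3: lower bound */
                  values[i] = -limit;
                  clamped++;
              }
          }
          /* All constructs are single-entry, single-exit, so reduction
             removes every decision and the remaining graph has ev = 1. */
          return clamped;
      }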
  • Essential Complexity
    • Flowgraph and reduced flowgraph after structured constructs have been removed, revealing decisions that are unstructured.
    v = 5; after reduction, the reduced flowgraph has v = 3; therefore ev of the original flowgraph = 3 (the superimposed essential flowgraph is shown on the slide).
  • Essential Complexity
    • Essential complexity helps detect unstructured code.
    Good designs can quickly deteriorate: v = 10, ev = 1 can become v = 11, ev = 10.
  • Vital Signs and High ev’s - risks of increasing ev over time (chart, ev rising from 1 toward 10):
    • Intricate logic
    • Conflicting decisions
    • Unrealizable test paths
    • Constraints for architectural improvement
    • Difficult knowledge transfer to new staff
  • How to Manage and Reduce v and ev - decreasing and managing v and ev over time (chart, scale 1-20):
    • Emphasis on design architecture and methodology
    • Development and coding standards
    • QA procedures and reviews
    • Peer evaluations
    • Automated tools
    • Application portfolio management
    • Modularization
  • Module Design Complexity How Much Supervising Is Done?
  • Module design complexity
    • Measures amount of decision logic involved with subroutine calls
    • Identifies “managerial” modules
    • Indicates design reliability, integration testability
    • Related test thoroughness metric, tested design complexity, measures integration testing progress
    • Module design complexity , iv - A measure of the decision logic that controls calls to subroutines.
      • Applies to decision logic embedded within written code.
      • Is derived from predicates in decision logic associated with calls.
      • Is calculated for each module in the Battlemap.
      • Grows from 1 to v based on the complexity of calling subroutines.
      • Is related to the degree of "integratedness" between a calling module and its called modules.
    Module Design Complexity
  • Module Design Complexity
    • Module design complexity, iv, is calculated by reducing the module flowgraph. Reduction is completed by removing decisions and nodes that do not impact the calling control over a module’s immediate subordinates.
  • Module Design Complexity
    • Example:
      main() {
          if (a == b)
              progd();
          if (m == n)
              proge();
          switch(expression) {          /* these decisions do not impact calls */
              case value_1: statement1; break;
              case value_2: statement2; break;
              case value_3: statement3;
          }
      }
    The flowgraph of main (which calls progd() and proge()) has v = 5. The switch decisions do not impact the calling control over main’s immediate subordinates, so they are removed in the reduced flowgraph, which has v = 3. Therefore, iv of the original flowgraph = 3.
  • Data complexity
    • Actually, a family of metrics
      • Global data complexity (global and parameter), specified data complexity, date complexity
    • Measures amount of decision logic involved with selected data references
    • Indicates data impact, data testability
    • Related test thoroughness metric, tested data complexity, measures data testing progress
  • Data complexity calculation (example: module M with decisions C1-C5 and nodes 1-12, v = 6; reducing the flowgraph to the decisions and nodes that reference Data A leaves nodes 1, 2, 3, 4*, 9, 12 and decisions C1, C2, giving data complexity = 3). Paths and conditions for the reduced graph:
    • Pb: 1-2-3-4-9-3-4-9-12 (C1 = T, C2 = T, then C2 = F)
    • P2: 1-2-12 (C1 = F)
    • P3: 1-2-3-4-9-12 (C1 = T, C2 = F)
    A hedged sketch of the idea in C follows.
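    As a hedged C sketch of the idea (invented for illustration, not the slide’s example): only decisions whose logic involves references to the selected data element contribute to the reduced flowgraph, so the data complexity with respect to that element is typically much smaller than v. The variable name data_a and the reduction outcome below are assumptions for illustration only.

      /* Hypothetical module; 'data_a' is the selected data element. */
      int data_a;          /* selected data */
      int other_state;     /* not selected  */

      void update(int x, int y)
      {
          if (x > 0)               /* C1: guards a reference to data_a - retained */
              data_a = x;

          if (y > 0)               /* C2: touches only other_state - removed      */
              other_state = y;

          while (other_state > 0)  /* C3: no data_a reference - removed           */
              other_state--;
      }
      /* v = 4 for the whole module; after reduction only C1 remains, so the
         data complexity with respect to data_a would be about 2. Treat this
         purely as an illustration of the intent, not the tool's exact rules. */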
  • Module Metrics Report (annotated screenshot): v is the number of unit test paths for a module; iv is the number of integration tests for a module; the report also totals the number of test paths for all modules and the average number of test paths per module.
  • Common Testing Challenges
    • Deriving Tests
      • Creating a “Good” Set of Tests
    • Verifying Tests
      • Verifying that Enough Testing was Performed
      • Providing Evidence that Testing was Good Enough
    • When to Stop Testing
    • Prioritizing Tests
      • Ensuring that Critical or Modified Code is Tested First
    • Reducing Test Duplication
      • Identifying Similar Tests That Add Little Value & Removing Them
  • An Improved Testing Process (diagram labels: Requirements, Analysis, Implementation, Static Identification of Test Paths, Test Scenarios, Black Box, White Box, Sub-System or System)
  • What is McCabe Test? (process diagram labels: Source Code, Parsing, The McCabe Tools, Database, Instrumented Source Code, Build Executable, Execute Code, Trace Info, Import, Export, Requirements Tracing, Test Coverage, Untested Paths)
  • Coverage Mode
    • Color Scheme Represents Coverage
    No Trace File Imported
  • Coverage Results
    • Colors Show “Testedness”
    • Lines Show Execution Between Modules
    • Color Scheme:
      • Branches
      • Paths
      • Lines of Code
    Trace File Imported (screenshot legend: Tested, Partially Tested, Untested; example module My_Function shown 67% tested)
  • Coverage Results at Unit Level (Module->Slice)
  • Deriving Functional Tests
    • Examine Partially Tested Modules
    • Module Names Provide Insight into Additional Tests
    • Visualize Untested Modules
    Module Name ‘search’
  • Deriving Tests at the Unit Level (flowgraph example)
    • Too Many Theoretical Tests!
    • What is the Minimum Number of Tests?
    • What is a “Good” Number of Tests?
    Statistical Paths = 10^18. On a scale from 0 (too few tests) to 10^18 (too many tests), where is the minimum yet effective level of testing?
  • Code Coverage: Example ‘A’ vs. Example ‘B’ - which function is more complex?
  • Using Code Coverage: Example ‘A’ requires 2 tests; Example ‘B’ also requires only 2 tests. Code coverage is not proportional to complexity.
  • McCabe's Cyclomatic Complexity, v(G): the number of linearly independent paths. Beyond the tests needed for code coverage, one additional path is required to determine the independence of the 2 decisions - see the sketch below.
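    A minimal, hypothetical C sketch of this point (not the slide’s code): the module below has two sequential decisions, so v(G) = 3. Two tests already execute every line, and the one additional basis test exercises the first decision against the second to confirm they are independent.

      #include <stdio.h>

      /* Hypothetical module with two sequential decisions: v(G) = 3. */
      void route(int a, int b)
      {
          if (a > 0)
              printf("a-branch true\n");
          else
              printf("a-branch false\n");

          if (b > 0)
              printf("b-branch true\n");
          else
              printf("b-branch false\n");
      }

      int main(void)
      {
          route(1, 1);   /* test 1: (T, T)  - these two alone already   */
          route(0, 0);   /* test 2: (F, F)  - give 100% code coverage   */
          route(1, 0);   /* test 3: (T, F)  - completes a basis set of 3
                            and checks that the two decisions are not
                            tied together (dependent); the remaining
                            combination (F, T) adds no independent path. */
          return 0;
      }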
  • Deriving Tests at the Unit Level Complexity = 10
    • Minimum 10 Tests Will:
    • Ensure Code Coverage
    • Test Independence of Decisions
  • Unit Level Test Paths - Baseline Method
    • The baseline method is a technique used to locate distinct paths within a flowgraph. The size of the basis set is equal to v(G).
    Flowgraph with nodes A-G and decision conditions M=N, O=P, X=Y, S=T. Basis set of paths and path conditions (v = 5):
    • P1 (baseline, Pb): A-B-C-B-D-E-F - M=N, O=P, S=T, then O not = P
    • P2: A-G-D-E-F - M not = N, X=Y
    • P3: A-B-D-E-F - M=N, O not = P
    • P4: A-B-C-F - M=N, O=P, S not = T
    • P5: A-G-E-F - M not = N, X not = Y
  • Structured Testing Coverage (flowgraph with nodes A-P and regions R1-R5)
    • 1. Generates independent tests - basis set: P1: ACDEGHIKLMOP, P2: ABD…, P3: ACDEFH…, P4: ACDEGHIJL…, P5: ACDEGHIKLMNP
    • 2. Code coverage - frequency of execution:
      Node:  A B C D E F G H I J K L M N O P
      Count: 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
  • Other Baselines - Different Coverage (same flowgraph)
    • Previous code coverage - frequency of execution:
      Node:  A B C D E F G H I J K L M N O P
      Count: 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
    • 1. Generates independent tests - basis set: P1: ABDEFHIJLMNP, P2: ACD…, P3: ABDEGH…, P4: ABDEGHIKL…, P5: ABDEGHIKLMOP
    • 2. Code coverage - frequency of execution:
      Node:  A B C D E F G H I J K L M N O P
      Count: 5 4 1 5 5 4 1 5 5 4 1 5 5 4 1 5
    • Same number of tests; which coverage is more effective? (See the sketch below.)
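    To see how two basis sets of the same size can produce different execution profiles, here is a hedged C sketch (hypothetical code with four decisions in sequence, echoing the slide’s A-P graph). The basis set below is built from an all-true baseline, so each ‘then’ node executes four times and each ‘else’ node once; a basis built from an all-false baseline would invert those counts, which is the contrast drawn on the slide.

      #include <stdio.h>

      /* Hypothetical structure: four decisions in sequence. The counters
         record how often each branch node executes across the test set. */
      static int then_count[4], else_count[4];

      static void run(const int cond[4])
      {
          int d;
          for (d = 0; d < 4; d++) {
              if (cond[d])
                  then_count[d]++;   /* 'then' node of decision d */
              else
                  else_count[d]++;   /* 'else' node of decision d */
          }
      }

      int main(void)
      {
          /* Basis set from an all-true baseline, flipping one decision per test. */
          int basis[5][4] = {
              {1, 1, 1, 1},   /* baseline        */
              {0, 1, 1, 1},   /* flip decision 1 */
              {1, 0, 1, 1},   /* flip decision 2 */
              {1, 1, 0, 1},   /* flip decision 3 */
              {1, 1, 1, 0},   /* flip decision 4 */
          };
          int t, d;

          for (t = 0; t < 5; t++)
              run(basis[t]);

          for (d = 0; d < 4; d++)   /* prints then=4, else=1 for each decision */
              printf("decision %d: then=%d else=%d\n",
                     d + 1, then_count[d], else_count[d]);
          return 0;
      }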
  • Untested Paths at Unit Level
    • Cyclomatic Test Paths
      • Module->Test Paths
      • Complete Test Paths by Default
    • Configurable Reports
      • Preferences->Testing
      • Modify List of Graph/Test Path Flowgraphs
    Module->Test Paths (screenshot: remaining untested test paths)
  • Untested Branches at Unit Level: Preferences->Testing (add the ‘Tested Branches’ flowgraph to the list), then Module->Test Paths (screenshot: number of executions for decisions, untested branches)
  • Untested Paths at Higher Level
    • System Level Integration Paths
      • Based on S1
      • End-to-End Execution
      • Includes All iv(G) Paths
    S1 = 6
  • Untested Paths at Higher Level
    • System Level Integration Paths
      • Displayed Graphically
      • Textual Report
      • Theoretical Execution Paths
      • Show Only Untested Paths
    S1 = 6
  • Untested Paths at Higher Level
    • Textual Report of End-to-End Decisions
    (Report callouts: decision values with line/node numbers; module calling list)
  • Verifying Tests
    • Use Coverage to Verify Tests
      • Store Coverage Results in Repository
    • Use Execution Flowgraphs to Verify Tests
  • Verifying Tests Using Coverage
    • Four Major Coverage Techniques:
      • Code Coverage
      • Branch Coverage
      • Path Coverage
      • Boolean Coverage (MC/DC)
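    As a hedged C sketch of how these techniques differ (hypothetical code): a single compound decision is “fully covered” by different numbers of tests depending on which criterion is applied.

      #include <stdio.h>

      /* Hypothetical example: one decision with two conditions. */
      void grant_access(int valid_user, int valid_password)
      {
          if (valid_user && valid_password)
              printf("access granted\n");
          else
              printf("access denied\n");
      }

      int main(void)
      {
          /* Code (line) coverage:   (1,1) and (0,0) execute every line.          */
          /* Branch coverage:        the same two tests take both branches.       */
          /* Path coverage:          here it coincides with branch coverage (no   */
          /*                         loops), but grows rapidly with more logic.   */
          /* Boolean coverage (MC/DC): (1,1), (0,1) and (1,0) are needed so each  */
          /*                         condition independently changes the outcome. */
          grant_access(1, 1);
          grant_access(0, 0);
          grant_access(0, 1);
          grant_access(1, 0);
          return 0;
      }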
  • When to Stop Testing
    • Coverage to Assess Testing Completeness
      • Branch Coverage Reports
    • Coverage Increments
      • How Much New Coverage for Each New Set of Tests?
  • When to Stop Testing
    • Is All of the System Equally Important?
    • Is All Code in An Application Used Equally?
      • 10% of Code Used 90% of Time
      • Remaining 90% Only Used 10% of Time
    • Where Do We Need to Test Most?
  • When to Stop Testing / Prioritizing Tests
    • Locate “Critical” Code
      • Important Functions
      • Modified Functions
      • Problem Functions
    • Mark Modules
      • Create New “Critical” Group
    • Import Coverage
    • Assess Coverage for “Critical” Code
      • Coverage Report for “Critical” Group
      • Examine Untested Branches
    (Battlemap excerpt: e.g. Runproc, 32, 67% tested; Search, 39, 52% tested; My_Function, 56)
  • Criticality Coverage
    • Optionally Use Several “Critical” Groups
      • Increasing Levels
      • Determine Coverage for Each Group
      • Focus Testing Effort on Critical Code
    Criticality is useful as a management technique (chart: coverage percentages per criticality group, e.g. 90%, 70%, 50%, 30%, 25% - does low coverage of the most critical group indicate insufficient testing?)
  • When to Stop Testing
    • Store Coverage in Repository
      • With Name & Author
    • Load Coverage
      • Multiple Selections
      • Share Between Users
      • Import Between Analyses with Common Code
    Testing->Load/Save Testing Data
  • Testing the Changes (screenshots: Version 1.0 - coverage results; Version 1.1 - previous coverage results imported into the new analysis, with changed code highlighted)
    • Import Previous Coverage Results Into New Analysis:
      • Parser Detects Changed Code
      • Coverage Removed for Modified or New Code
  • Testing the Changes
    • Store Coverage for Versions
      • Use Metrics Trending to Show Increments
      • Objective is to Increase Coverage between Releases
    Incremental Coverage
  • McCabe Change
    • Marking Changed Code
      • Reports Showing Change Status
      • Coverage Reports for Changed Modules
    • Configurable Change Detection
      • Standard Metrics
      • “String Comparison”
    Changed Code
  • Manipulating Coverage
    • Addition/Subtraction of slices
      • The technique:
  • Slice Manipulation
    • Slice Operations
    • Manipulate Slices Using Set Theory
    • Export Slice to File
      • List of Executed Lines
    • Must be in Slice Mode
  • Review
    • McCabe IQ Products
    • Metrics
      • cyclomatic complexity, v
      • essential complexity, ev
      • module design complexity, iv
    • Testing
      • Deriving Tests
      • Verifying Tests
      • Prioritizing Tests
      • When is testing complete?
    • Managing Change