72. When to Stop Testing
- Is all of the system equally important?
- Is all code in an application used ...

73. When to Stop Testing / Prioritizing Tests
- Locate "critical" code: important functions ...

74. Criticality Coverage
- Optionally use several "critical" groups: increasing levels ...

75. When to Stop Testing
- Store coverage in the repository, with name & author ...

76. Testing the Changes
- Version 1.0: coverage results
- Version 1.1: previous coverage results imported into the new analysis; chan...

77. Testing the Changes
- Store coverage for versions: use metrics trending to show increments ...

78. McCabe Change
- Marking changed code: reports showing change status ...

79. Manipulating Coverage
- Addition/subtraction of slices: the technique ...

80. Slice Manipulation
- Slice operations
- Manipulate slices using set theory
- Export ...

81. Review
- McCabe IQ products
- Metrics: cyclomatic complexity, v ...
Copyright McCabe

1. Management Overview
McCabe & Associates, 9861 Broken Land Parkway, Fourth Floor, Columbia, Maryland 21046
800-638-6316 / 1-800-634-0150, www.mccabe.com, [email_address]

2. Agenda
- McCabe IQ Overview
- Software Measurement Issues
- McCabe Concepts
- Software Quality Metrics
- Software Testing
- Questions and Answers

3. About McCabe & Associates
- 20 years of expertise
- Global presence
- Analyzed over 25 billion lines of code

4. McCabe IQ Process Flow
[Diagram] McCabe IQ instruments source code on the analysis platform; the instrumented source code is compiled and run on the target platform, and the resulting execution log feeds back into McCabe IQ to drive effective testing and quality management.

5. McCabe IQ and Configuration Management
- Integrates with Merant PVCS, Rational ClearCase, and CA Endevor
- Monitor quality as software changes
- Manage the test environment

6. McCabe IQ and Test Automation
- Integrates with Mercury Interactive TestDirector and WinRunner (GUI and non-GUI test automation)
- Risk-driven test management
- Effective, automated testing

7. McCabe IQ Components
- McCabe IQ Framework: metrics, data, visualization, testing, API
- Testing: McCabe Test, McCabe TestCompress, McCabe Slice, McCabe ReTest
- Quality assurance: McCabe QA, McCabe Data, McCabe Compare, McCabe Change
- Source code parsing technology: C, C++, Java, Visual Basic, COBOL, Fortran, Ada

8. McCabe QA
McCabe QA measures software quality with industry-standard metrics:
- Manage technical risk factors as software is developed and changed
- Improve software quality using detailed reports and visualization
- Shorten the time between releases
- Develop contingency plans to address unavoidable risks

9. McCabe Data
McCabe Data pinpoints the impact of data variable modifications:
- Identify usage of key data elements and data types
- Relate data variable changes to impacted logic
- Focus testing resources on the usage of selected data

10. McCabe Compare
McCabe Compare identifies reusable and redundant code:
- Simplify maintenance and re-engineering of applications by consolidating similar code modules
- Search for software defects in similar code modules, so defects are fixed consistently throughout the software

11. McCabe Change
McCabe Change identifies new and changed modules:
- Manage change with more precision than the file-level information from CM tools
- Work with a complete technical risk profile: Complex? Poorly tested? New or changed?
- Focus review and test efforts

12. McCabe Test
McCabe Test maximizes testing effectiveness:
- Focus testing on high-risk areas
- Objectively measure testing effectiveness
- Increase the failure detection rate during internal testing
- Assess the time and resources needed to ensure a well-tested application
- Know when to stop testing

13. McCabe Slice
McCabe Slice traces functionality to implementation:
- Identifies code that implements specific functional transactions
- Isolates code that is unique to the implementation of specific functional transactions
- Helps extract business rules for application redesign

14. McCabe IQ Components Summary
- McCabe QA: improve quality with metrics
- McCabe Data: analyze data impact
- McCabe Compare: eliminate duplicate code
- McCabe Change: focus on changed software
- McCabe Test: increase test effectiveness
- McCabe TestCompress: increase test efficiency
- McCabe Slice: trace functionality to code
- McCabe ReTest: automate regression testing
15. Software Measurement Issues
- Risk management
- Software metrics
- Complexity metrics
- Complexity metric evaluation
- Benefits of complexity measurement

16. Software Risk Management
- Software risk falls into two major categories:
  - Non-technical risk: how important is the system? Usually known early.
  - Technical risk: how likely is the system to fail? Often known too late.
- Complexity analysis quantifies technical risk:
  - Helps quantify reliability and maintainability, which aids prioritization, resource allocation, contingency planning, etc.
  - Guides testing: focuses effort to mitigate the greatest risks and helps deploy testing resources efficiently

17. Software Metrics Overview
- Metrics are quantitative measures:
  - Operational: cost, failure rate, change effort, ...
  - Intrinsic: size, complexity, ...
- Most operational metrics are known too late: cost and failure rate are only known after deployment, so they are not suitable for risk management.
- Complexity metrics are available immediately, because complexity is calculated from source code.
- Complexity predicts operational metrics: it correlates with defects, maintenance costs, ...

18. Complexity Metric Evaluation
- Good complexity metrics have three properties:
  - Descriptive: objectively measure something
  - Predictive: correlate with something interesting
  - Prescriptive: guide risk reduction
- Consider lines of code: descriptive (measures software size), but neither predictive nor prescriptive.
- Consider cyclomatic complexity: descriptive (measures decision logic), predictive (predicts errors and maintenance), and prescriptive (guides testing and improvement).

19. Benefits of Complexity Measurement
- Complexity metrics are available from code, and can even be estimated from a design.
- They provide continuous feedback: they identify high-risk software as soon as it is written or changed.
- They pinpoint areas of potential instability, focusing resources for reviews, testing, and code improvement.
- They help predict eventual operational metrics: systems with similar complexity metric profiles tend to have similar test effort, cost, error frequency, ...
20. McCabe Concepts
Definition: In C and C++, a module is a function or subroutine with a single entry point and a single exit point. A module is represented by a rectangular box on the Battlemap.
[Diagram] Example Battlemap showing main, functions a, c, and d, and printf, with legend entries for a difficult-to-maintain module, a difficult-to-test module, a well-designed testable module, and a library module.

21. Analyzing a Module
For each module, an annotated source listing and flowgraph are generated. A flowgraph is an architectural diagram of a software module's logic.

  1  main()
  2  {
  3    printf("example");
  4    if (y > 10)
  5      b();
  6    else
  7      c();
  8    printf("end");
  9  }

In the flowgraph, a node is a statement or block of sequential statements (here statements 1-3, 4, 5, 7, and 8-9 map to nodes), an edge is the flow of control between nodes, and the condition node (statement 4) opens a decision that rejoins at the end of the condition. The Battlemap shows main calling b, c, and printf.

22. Flowgraph Notation (C)
Each decision construct has a characteristic flowgraph shape:
- if (i) ;
- if (i) ; else ;
- if (i || j) ;
- do ; while (i);
- while (i) ;
- switch (i) { case 0: break; ... }
- if (i && j) ;
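The constructs above can be collected into one illustrative function (hypothetical, written for counting practice, not taken from the deck); the comments mark how many predicates each construct contributes under the predicate method: one per condition, one extra per short-circuit operator (&& or ||), and one per non-default case label.

```c
int demo(int i, int j) {
    int k = 0;
    if (i) k++;                  /* +1 predicate */
    if (i) k++; else k--;        /* +1 */
    if (i || j) k++;             /* +2: condition plus one || */
    do { k++; } while (i < 0);   /* +1 */
    while (k > 9) k--;           /* +1 */
    switch (i) {                 /* +2: one per non-default case */
    case 0: k = 1; break;
    case 1: k = 2; break;
    default: break;
    }
    if (i && j) k++;             /* +2: condition plus one && */
    return k;
}
```

Summing the comments gives 10 predicates, so by the predicate method v = 10 + 1 = 11 for this function.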
23. Flowgraph and Its Annotated Source Listing
[Diagram] Flowgraph nodes 0 through 9 (decision nodes marked with *) shown beside the annotated source listing, which records origin information, node correspondence, metric information, and each decision construct.

24. Low Complexity Software
- Reliable: simple logic (low cyclomatic complexity); not error-prone; easy to test
- Maintainable: good structure (low essential complexity); easy to understand; easy to modify

25. Moderately Complex Software
- Unreliable: complicated logic (high cyclomatic complexity); error-prone; hard to test
- Maintainable: can be understood, modified, and improved

26. Highly Complex Software
- Unreliable: error-prone; very hard to test
- Unmaintainable: poor structure (high essential complexity); hard to understand, modify, or improve

27. Would You Buy a Used Car from This Software?
- Problem: there are size and complexity boundaries beyond which software becomes hopeless: too error-prone to use, too complex to fix, too large to redevelop
- Solution: control complexity during development and maintenance; stay away from the boundary
28. Important Complexity Measures
- Cyclomatic complexity, v(G): amount of decision logic
- Essential complexity, ev(G): amount of poorly structured logic
- Module design complexity, iv(G): amount of logic involved with subroutine calls
- Data complexity, sdv: amount of logic involved with selected data references

29. Cyclomatic Complexity
- The most famous complexity metric
- Measures the amount of decision logic
- Identifies unreliable, hard-to-test software
- The related test thoroughness metric, actual complexity, measures testing progress

30. Cyclomatic Complexity
Cyclomatic complexity, v: a measure of the decision logic of a software module.
- Applies to decision logic embedded within written code
- Is derived from the predicates in the decision logic
- Is calculated for each module in the Battlemap
- Grows from 1 to a high, finite number based on the amount of decision logic
- Is correlated to software quality and testing quantity; units with higher v (v > 10) are less reliable and require high levels of testing

31. Cyclomatic Complexity
Three equivalent ways to compute v for the example flowgraph (15 nodes, 24 edges, 11 regions):
- Region method: count the regions of the planar flowgraph; regions = 11 (beware of crossing lines)
- Edges-and-nodes method: v = e - n + 2 = 24 - 15 + 2 = 11
- Predicate method: v = (sum of predicate counts) + 1 = (2+1+1+2+1+1+1+1) + 1 = 11
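The two arithmetic methods on this slide can be written down directly. A minimal sketch; the function names are ours, and in practice the edge, node, and predicate counts would come from the tool's flowgraph analysis:

```c
/* Edges-and-nodes method: v = e - n + 2 for a connected flowgraph. */
int cyclomatic_v(int edges, int nodes) {
    return edges - nodes + 2;
}

/* Predicate method: v = (sum of predicate counts) + 1, where each
   condition counts 1 plus one per short-circuit operator. */
int cyclomatic_v_predicates(int predicate_sum) {
    return predicate_sum + 1;
}
```

For the slide's graph, cyclomatic_v(24, 15) and cyclomatic_v_predicates(10) both return 11, matching the region count.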
32. Vital Signs and High v's
Risks of increasing v over time:
- Higher risk of failures
- Difficult to understand
- Unpredictable expected results
- Complicated test environments, including more test drivers
- Knowledge transfer constraints for new staff

33. Essential Complexity
- Measures the amount of poorly structured logic
- Remove all well-structured logic, then take the cyclomatic complexity of what's left
- Identifies unmaintainable software
- The pathological complexity metric is similar; it identifies extremely unmaintainable software

34. Essential Complexity
Essential complexity, ev: a measure of the "structuredness" of the decision logic of a software module.
- Applies to decision logic embedded within written code
- Is calculated for each module in the Battlemap
- Grows from 1 to v based on the amount of unstructured decision logic
- Is associated with the ability to modularize complex modules
- If ev increases, the coder is not using structured programming constructs

35. Essential Complexity - Unstructured Logic
Four patterns of unstructured logic:
- Branching out of a loop
- Branching into a loop
- Branching into a decision
- Branching out of a decision
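As a concrete (hypothetical) illustration, here is the same linear search written twice: the first version branches out of the loop with a goto, one of the unstructured patterns above, while the second keeps single-entry, single-exit logic and therefore reduces to ev = 1.

```c
/* Unstructured: the goto branches out of the loop, so this decision
   logic does not reduce to primitive constructs. */
int find_unstructured(const int *a, int n, int key) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] == key)
            goto done;          /* branching out of a loop */
    }
    i = -1;
done:
    return i;
}

/* Structured: the early exit is folded into the loop condition, so
   every construct is single-entry, single-exit. */
int find_structured(const int *a, int n, int key) {
    int result = -1;
    for (int i = 0; i < n && result < 0; i++) {
        if (a[i] == key)
            result = i;
    }
    return result;
}
```

Both functions compute the same result; only the structure of the decision logic differs.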
36. Essential Complexity - Flowgraph Reduction
Essential complexity, ev, is calculated by reducing the module flowgraph. Reduction removes decisions that conform to single-entry, single-exit constructs. In the example, cyclomatic complexity = 4 and essential complexity = 1.

37. Essential Complexity
Flowgraph and reduced flowgraph after structured constructs have been removed, revealing the unstructured decisions: the original flowgraph has v = 5, the reduced flowgraph has v = 3, and therefore ev of the original flowgraph = 3 (shown as a superimposed essential flowgraph).

38. Essential Complexity
Essential complexity helps detect unstructured code. Good designs can quickly deteriorate: a module with v = 10, ev = 1 can become one with v = 11, ev = 10.

39. Vital Signs and High ev's
Risks of increasing ev over time:
- Intricate logic
- Conflicting decisions
- Unrealizable test paths
- Constraints on architectural improvement
- Difficult knowledge transfer to new staff

40. How to Manage and Reduce v and ev
- Emphasis on design architecture and methodology
- Development and coding standards
- QA procedures and reviews
- Peer evaluations
- Automated tools
- Application portfolio management
- Modularization
41. Module Design Complexity
How much supervising is done?

42. Module Design Complexity
- Measures the amount of decision logic involved with subroutine calls
- Identifies "managerial" modules
- Indicates design reliability and integration testability
- The related test thoroughness metric, tested design complexity, measures integration testing progress

43. Module Design Complexity
Module design complexity, iv: a measure of the decision logic that controls calls to subroutines.
- Applies to decision logic embedded within written code
- Is derived from predicates in decision logic associated with calls
- Is calculated for each module in the Battlemap
- Grows from 1 to v based on the complexity of calling subroutines
- Is related to the degree of "integratedness" between a calling module and its called modules

44. Module Design Complexity
Module design complexity, iv, is calculated by reducing the module flowgraph. Reduction removes decisions and nodes that do not impact the calling control over a module's immediate subordinates.

45. Module Design Complexity
Example (v = 5):

  main()
  {
    if (a == b)
      progd();
    if (m == n)
      proge();
    switch (expression)
    {
      case value_1: statement1; break;
      case value_2: statement2; break;
      case value_3: statement3;
    }
  }

The switch statement does not impact any calls, so it is removed during reduction. The reduced flowgraph, containing main and its calls to progd() and proge(), has v = 3; therefore iv of the original flowgraph = 3.
46. Data Complexity
- Actually a family of metrics: global data complexity (global and parameter), specified data complexity, date complexity
- Measures the amount of decision logic involved with selected data references
- Indicates data impact and data testability
- The related test thoroughness metric, tested data complexity, measures data testing progress

47. Data Complexity Calculation
For module M with respect to data A, only the paths whose conditions reference A are retained:
- Pb: 1-2-3-4-9-3-4-9-12 (C1 = T, C2 = T, C2 = F)
- P2: 1-2-12 (C1 = F)
- P3: 1-2-3-4-9-12 (C1 = T, C2 = F)
The full module has v = 6, and the data complexity of M with respect to data A is 3.

48. Module Metrics Report
The report shows, for each module, v (the number of unit test paths) and iv (the number of integration tests), along with the total number of test paths for all modules and the average number of test paths per module.
49. Common Testing Challenges
- Deriving tests: creating a "good" set of tests
- Verifying tests: verifying that enough testing was performed, and providing evidence that the testing was good enough
- Knowing when to stop testing
- Prioritizing tests: ensuring that critical or modified code is tested first
- Reducing test duplication: identifying similar tests that add little value and removing them

50. An Improved Testing Process
[Diagram] Requirements drive black-box test scenarios, while static identification of test paths from the implementation drives white-box testing, combined in sub-system or system analysis.

51. What is McCabe Test?
[Diagram] The McCabe tools parse the source code, instrument it, and build an executable. Executing the code produces trace information, which is imported into the database for requirements tracing, test coverage, and untested-path reporting; results can be exported.
52. Coverage Mode
The color scheme represents coverage (shown here before any trace file is imported).

53. Coverage Results
With a trace file imported:
- Colors show "testedness": untested, partially tested, tested
- Lines show execution between modules
- The color scheme applies to branches, paths, or lines of code
Example: a module My_Function shown 67% tested.

54. Coverage Results at Unit Level
Menu: Module > Slice

55. Deriving Functional Tests
- Examine partially tested modules
- Module names provide insight into additional tests (e.g., a module named 'search')
- Visualize untested modules

56. Deriving Tests at the Unit Level
Statistical paths for the example module: 10^18. Too many theoretical tests! What is the minimum number of tests? What is a "good" number of tests? Between too few and too many lies minimum yet effective testing.

57. Code Coverage
Examples 'A' and 'B': which function is more complex?

58. Using Code Coverage
Examples 'A' and 'B' each require only 2 tests for code coverage: code coverage is not proportional to complexity.
59. McCabe's Cyclomatic Complexity
McCabe's cyclomatic complexity, v(G), is the number of linearly independent paths. In the example, one additional path is required to determine the independence of the 2 decisions.

60. Deriving Tests at the Unit Level
Complexity = 10. A minimum of 10 tests will:
- Ensure code coverage
- Test the independence of decisions

61. Unit Level Test Paths - Baseline Method
The baseline method is a technique used to locate distinct paths within a flowgraph. The size of the basis set is equal to v(G). Example basis set (v = 5), with path conditions:
- P1: ABCBDEF (M=N, O=P, S=T, then O not = P)
- P2: AGDEF (M not = N, X=Y)
- P3: ABDEF (M=N, O not = P)
- P4: ABCF (M=N, O=P, S not = T)
- P5: AGEF (M not = N, X not = Y)

62. Structured Testing Coverage
Structured testing:
1. Generates independent tests. Basis set: P1: ACDEGHIKLMOP, P2: ABD…, P3: ACDEFH…, P4: ACDEGHIJL…, P5: ACDEGHIKLMNP.
2. Produces code coverage; frequency of execution by node:

  Node:  A B C D E F G H I J K L M N O P
  Count: 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5
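The frequency-of-execution tally in step 2 is easy to reproduce. A sketch (the helper name is ours): given test paths written as strings of node letters A..P, count how many times each node is executed.

```c
#include <string.h>

/* Tally node execution frequencies over a set of test paths.
   counts[0] holds the count for node A, counts[1] for B, and so on. */
void tally_nodes(const char *paths[], int n_paths, int counts[26]) {
    memset(counts, 0, 26 * sizeof(int));
    for (int p = 0; p < n_paths; p++)
        for (const char *c = paths[p]; *c; c++)
            counts[*c - 'A']++;
}
```

Running this over a full basis set reproduces the Node/Count row shown on the slide.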
  63. 63. Other Baselines - Different Coverage E F G H A B C D M N O P I J K L R1 R2 R3 R4 R5 Previous code coverage - frequency of execution Node A B C D E F G H I J K L M N O P Count 5 1 4 5 5 1 4 5 5 1 4 5 5 1 4 5 Same number of tests; which coverage is more effective? 1. Generates independent tests Basis set P1: ABDEFHIJLMNP P2: ACD… P3: ABDEGH… P4: ABDEGHIKL… P5: ABDEGHIKLMOP 2. Code coverage - frequency of execution Node A B C D E F G H I J K L M N O P Count 5 4 1 5 5 4 1 5 5 4 1 5 5 4 1 5
  64. Untested Paths at Unit Level <ul><li>Cyclomatic Test Paths </li></ul><ul><ul><li>Module->Test Paths </li></ul></ul><ul><ul><li>Complete Test Paths by Default </li></ul></ul><ul><li>Configurable Reports </li></ul><ul><ul><li>Preferences->Testing </li></ul></ul><ul><ul><li>Modify List of Graph/Test Path Flowgraphs </li></ul></ul>Module->Test Paths: Remaining Untested Test Paths
  65. Untested Branches at Unit Level Preferences->Testing (Add ‘Tested Branches’ Flowgraph to List) Module->Test Paths: Number of Executions for Decisions; Untested Branches
  66. Untested Paths at Higher Level <ul><li>System Level Integration Paths </li></ul><ul><ul><li>Based on S1 </li></ul></ul><ul><ul><li>End-to-End Execution </li></ul></ul><ul><ul><li>Includes All iv(G) Paths </li></ul></ul>S1 = 6
  67. Untested Paths at Higher Level <ul><li>System Level Integration Paths </li></ul><ul><ul><li>Displayed Graphically </li></ul></ul><ul><ul><li>Textual Report </li></ul></ul><ul><ul><li>Theoretical Execution Paths </li></ul></ul><ul><ul><li>Show Only Untested Paths </li></ul></ul>S1 = 6
  68. Untested Paths at Higher Level <ul><li>Textual Report of End-to-End Decisions </li></ul>Decision Values with Line/Node # Module Calling List
  69. Verifying Tests <ul><li>Use Coverage to Verify Tests </li></ul><ul><ul><li>Store Coverage Results in Repository </li></ul></ul><ul><li>Use Execution Flowgraphs to Verify Tests </li></ul>
  70. Verifying Tests Using Coverage <ul><li>Four Major Coverage Techniques: </li></ul><ul><ul><li>Code Coverage </li></ul></ul><ul><ul><li>Branch Coverage </li></ul></ul><ul><ul><li>Path Coverage </li></ul></ul><ul><ul><li>Boolean Coverage (MC/DC) </li></ul></ul>
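A minimal sketch of the branch-coverage idea from this list (an assumed recording scheme for illustration, not McCabe IQ's implementation): a decision is fully covered only when both its True and False outcomes have executed, so coverage is the fraction of required outcomes seen.

```python
class BranchCoverage:
    """Track which outcomes (True/False) of each decision have executed."""

    def __init__(self, branch_ids):
        self.seen = {bid: set() for bid in branch_ids}

    def record(self, branch_id, outcome: bool):
        self.seen[branch_id].add(outcome)

    def percent(self) -> float:
        # Each branch contributes two required outcomes (True and False).
        hit = sum(len(outcomes) for outcomes in self.seen.values())
        return 100.0 * hit / (2 * len(self.seen))

# Hypothetical run: branch b1 exercised both ways, b2 only one way.
cov = BranchCoverage(["b1", "b2"])
cov.record("b1", True)
cov.record("b1", False)
cov.record("b2", True)
print(cov.percent())  # 75.0
```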
  71. When to Stop Testing <ul><li>Coverage to Assess Testing Completeness </li></ul><ul><ul><li>Branch Coverage Reports </li></ul></ul><ul><li>Coverage Increments </li></ul><ul><ul><li>How Much New Coverage for Each New Set of Tests? </li></ul></ul>
  72. When to Stop Testing <ul><li>Is All of the System Equally Important? </li></ul><ul><li>Is All Code in an Application Used Equally? </li></ul><ul><ul><li>10% of Code Used 90% of the Time </li></ul></ul><ul><ul><li>Remaining 90% Only Used 10% of the Time </li></ul></ul><ul><li>Where Do We Need to Test Most? </li></ul>
  73. When to Stop Testing / Prioritizing Tests <ul><li>Locate “Critical” Code </li></ul><ul><ul><li>Important Functions </li></ul></ul><ul><ul><li>Modified Functions </li></ul></ul><ul><ul><li>Problem Functions </li></ul></ul><ul><li>Mark Modules </li></ul><ul><ul><li>Create New “Critical” Group </li></ul></ul><ul><li>Import Coverage </li></ul><ul><li>Assess Coverage for “Critical” Code </li></ul><ul><ul><li>Coverage Report for “Critical” Group </li></ul></ul><ul><ul><li>Examine Untested Branches </li></ul></ul>32 67% Runproc 39 52% Search 56 My_Function
  74. Criticality Coverage <ul><li>Optionally Use Several “Critical” Groups </li></ul><ul><ul><li>Increasing Levels </li></ul></ul><ul><ul><li>Determine Coverage for Each Group </li></ul></ul><ul><ul><li>Focus Testing Effort on Critical Code </li></ul></ul>Coverage: Insufficient Testing? Criticality Useful as a Management Technique 30% 25% 90% 70% 50%
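The group-level reporting described above can be sketched as follows. Module names, group names, and coverage figures are hypothetical; the point is only that per-group aggregation lets testing effort be focused on the critical group first.

```python
# Hypothetical per-module branch-coverage fractions.
module_coverage = {"runproc": 0.67, "search": 0.52, "parse": 0.90, "log": 0.25}

# Hypothetical criticality groups of increasing importance.
groups = {"critical": ["runproc", "search"], "routine": ["parse", "log"]}

def group_coverage(groups, module_coverage):
    """Average the coverage of each group's modules."""
    return {
        name: sum(module_coverage[m] for m in mods) / len(mods)
        for name, mods in groups.items()
    }

print(group_coverage(groups, module_coverage))
```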
  75. When to Stop Testing <ul><li>Store Coverage in Repository </li></ul><ul><ul><li>With Name &amp; Author </li></ul></ul><ul><li>Load Coverage </li></ul><ul><ul><li>Multiple Selections </li></ul></ul><ul><ul><li>Share Between Users </li></ul></ul><ul><ul><li>Import Between Analyses with Common Code </li></ul></ul>Testing->Load/Save Testing Data
  76. Testing the Changes Version 1.0 - Coverage Results Version 1.1 - Previous Coverage Results Imported Into New Analysis Changed Code <ul><li>Import Previous Coverage Results Into New Analysis: </li></ul><ul><ul><li>Parser Detects Changed Code </li></ul></ul><ul><ul><li>Coverage Removed for Modified or New Code </li></ul></ul>
  77. Testing the Changes <ul><li>Store Coverage for Versions </li></ul><ul><ul><li>Use Metrics Trending to Show Increments </li></ul></ul><ul><ul><li>Objective is to Increase Coverage between Releases </li></ul></ul>Incremental Coverage
  78. McCabe Change <ul><li>Marking Changed Code </li></ul><ul><ul><li>Reports Showing Change Status </li></ul></ul><ul><ul><li>Coverage Reports for Changed Modules </li></ul></ul><ul><li>Configurable Change Detection </li></ul><ul><ul><li>Standard Metrics </li></ul></ul><ul><ul><li>“String Comparison” </li></ul></ul>Changed Code
  79. Manipulating Coverage <ul><li>Addition/Subtraction of Slices </li></ul><ul><ul><li>The Technique: </li></ul></ul>
  80. Slice Manipulation <ul><li>Slice Operations </li></ul><ul><li>Manipulate Slices Using Set Theory </li></ul><ul><li>Export Slice to File </li></ul><ul><ul><li>List of Executed Lines </li></ul></ul><ul><li>Must be in Slice Mode </li></ul>
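A sketch of the set-theory manipulation this slide describes: treat each slice as the set of executed line numbers, and feature-specific code falls out of set subtraction between a run that exercises the feature and one that avoids it. The line numbers below are hypothetical.

```python
# Slice from a run that exercises the feature (set of executed lines).
slice_with_feature = {10, 11, 12, 20, 21, 30}
# Slice from a run that avoids the feature.
slice_without_feature = {10, 11, 12, 30}

# Subtraction isolates the lines that implement the feature.
feature_only = slice_with_feature - slice_without_feature
# Intersection and union give shared code and combined coverage.
common_core = slice_with_feature & slice_without_feature
combined = slice_with_feature | slice_without_feature

print(sorted(feature_only))  # [20, 21]
```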
  81. Review <ul><li>McCabe IQ Products </li></ul><ul><li>Metrics </li></ul><ul><ul><li>cyclomatic complexity, v </li></ul></ul><ul><ul><li>essential complexity, ev </li></ul></ul><ul><ul><li>module design complexity, iv </li></ul></ul><ul><li>Testing </li></ul><ul><ul><li>Deriving Tests </li></ul></ul><ul><ul><li>Verifying Tests </li></ul></ul><ul><ul><li>Prioritizing Tests </li></ul></ul><ul><ul><li>When is testing complete? </li></ul></ul><ul><li>Managing Change </li></ul>
