1. The Structured Testing Methodology for Software Quality Analyses of Networking Systems
Vladimir Riabov, Ph.D., Associate Professor, Department of Mathematics & Computer Science, Rivier College, Nashua, NH. E-mail: vriabov@rivier.edu
56th Northeast Quality Council Conference, Mansfield, Massachusetts, October 17-18, 2006
2. Agenda
- Structured Software Testing Methodology and Graph Theory: approach and tools;
- McCabe's software complexity analysis techniques;
- Results of code complexity analysis for two industrial networking projects;
- Study of networking protocol implementations;
- Predicting code errors;
- Test and code coverage;
- Conclusion: what have graph theory and the Structured Testing Methodology done for us?

Developing complex computer systems:
"If you don't know where you're going, any road will do." (Chinese proverb)
"If you don't know where you are, a map won't help." (Watts S. Humphrey)
"You can't improve what you can't measure." (Tim Lister)
3. McCabe's Structured Testing Methodology: Approach and Tools
McCabe's Structured Testing Methodology is:
- a methodology for software testing introduced in 1976 [IEEE Transactions on Software Engineering, Vol. SE-2, No. 4, 1976, pp. 308-320];
- based on the theory of graphs;
- approved as a NIST standard for structured testing (1996);
- a leading approach in the computer, IT, and aerospace industries (HP, GTE, AT&T, Alcatel, GIG, Boeing, NASA, etc.) since 1977;
- a provider of code-coverage capability.

Author's experience with McCabe IQ tools since 1998:
- led three networking-industry projects that required code analysis, code coverage, and test coverage;
- completed BCN code analysis with McCabe tools;
- completed BSN code analysis with McCabe tools;
- studied BSN-OSPF code coverage and test coverage.
4. McCabe's publication on the Structured Testing Methodology (1976) and the NIST standard on the Structured Testing Methodology (1996). [Figure: title pages of both documents.]
5. McCabe's Structured Testing Methodology
- The key requirement of structured testing is that all decision outcomes must be exercised independently during testing.
- The number of tests required for a software module equals the cyclomatic complexity of that module.
- Software complexity is measured by metrics:
  - cyclomatic complexity, v;
  - essential complexity, ev;
  - module design complexity, iv;
  - system design complexity, S0 = Σ iv;
  - system integration complexity, S1 = S0 - N + 1 for N modules;
  - Halstead metrics, and 52 more.
- The methodology makes it possible to identify unreliable and unmaintainable code, predict the number of code errors and the maintenance effort, and develop strategies for unit/module testing, integration testing, and test/code coverage.
6. Basics: Analyzing a Software Module
- For each module (a function or subroutine with a single entry point and a single exit point), an annotated source listing and a flowgraph are generated.
- A flowgraph is an architectural diagram of a software module's logic: each node is a statement or a block of sequential statements, and each edge is a flow of control between nodes.

Example (the statement numbers on the left correspond to flowgraph nodes 1-3, 4, 5, 7, and 8-9; node 4 is the condition; the battlemap shows main calling b, c, and printf):

1  main()
2  {
3      printf("example");
4      if (y > 10)
5          b();
6      else
7          c();
8      printf("end");
9  }
7. Flowgraph Notation (in C)
Each decision construct has a characteristic flowgraph shape:

if (i) ;
if (i) ; else ;
if (i || j) ;
if (i && j) ;
while (i) ;
do ; while (i);
switch (i) { case 0: break; ... }
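As a concrete illustration (a hypothetical function, not taken from the slides), the contribution of these constructs can be annotated directly in C: each binary decision adds 1 to v, a compound condition adds 1 per extra operand, and a switch adds 1 per non-default case label.

int classify(int i, int j)
{
    int hits = 0;
    if (i)                /* +1 */
        hits++;
    if (i || j)           /* +2: two predicates, i and j */
        hits++;
    while (i > 0)         /* +1 */
        i--;
    switch (hits) {       /* +2: case 0 and case 1 */
    case 0:  return -1;
    case 1:  return 0;
    default: return 1;
    }
}                         /* v = 1 + (1 + 2 + 1 + 2) = 7 */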
8. Flowgraph and Its Annotated Source Listing
[Figure: a flowgraph with nodes 0-9 (decision nodes starred) beside its annotated source listing, which records origin information, node correspondence, metric information, and each decision construct.]
9. Would You Buy a Used Car from This Software?
- Problem: there are size and complexity boundaries beyond which software becomes hopeless:
  - too error-prone to use;
  - too complex to fix;
  - too large to redevelop.
- Solution: control complexity during development and maintenance; stay away from the boundaries.
10. Important Complexity Measures
- Cyclomatic complexity: v = e - n + 2 (e = edges, n = nodes); the amount of decision logic.
- Essential complexity, ev: the amount of poorly structured logic.
- Module design complexity, iv: the amount of logic involved with subroutine calls.
- System design complexity: S0 = Σ iv; the number of independent unit (module) tests for a system.
- System integration complexity: S1 = S0 - N + 1; the number of integration tests for a system of N modules.
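A minimal sketch in C of how these formulas combine, assuming the edge, node, and iv counts have already been extracted (the function and the data are illustrative, not McCabe IQ output):

#include <stdio.h>

static int cyclomatic(int e, int n) { return e - n + 2; }  /* v = e - n + 2 */

int main(void)
{
    int iv[] = { 3, 5, 2, 4 };              /* iv of four hypothetical modules */
    int N = 4, S0 = 0;
    for (int k = 0; k < N; k++)
        S0 += iv[k];                        /* S0 = sum of iv  */
    int S1 = S0 - N + 1;                    /* S1 = S0 - N + 1 */
    printf("v = %d\n", cyclomatic(24, 15)); /* 24 - 15 + 2 = 11 */
    printf("S0 = %d, S1 = %d\n", S0, S1);   /* S0 = 14, S1 = 11 */
    return 0;
}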
11. Cyclomatic Complexity
Cyclomatic complexity, v, is a measure of the decision logic of a software module. It:
- applies to decision logic embedded within written code;
- is derived from the predicates in the decision logic;
- is calculated for each module in the battlemap;
- grows from 1 to a high, finite number with the amount of decision logic;
- is correlated with software quality and testing quantity: units with higher v (v > 10) are less reliable and require more testing.
12. Cyclomatic Complexity (a measure of the independent logical decisions in a module)
Three equivalent ways to compute v for the example flowgraph:
- Region method: count the regions of the planar flowgraph (beware of crossing lines): regions R1-R11, so v = 11.
- Edge-and-node method: e = 24 edges and n = 15 nodes, so v = e - n + 2 = 24 - 15 + 2 = 11.
- Predicate method: v = Σπ + 1, where π is the number of binary decisions at each predicate node (here 2 + 1 + 1 + 2 + 1 + 1 + 1 + 1 = 10), so v = 11.
13. Essential Complexity: Unstructured Logic
Four unstructured constructs: branching out of a loop, branching into a loop, branching into a decision, and branching out of a decision.

14. Essential Complexity, ev
- Flowgraph and reduced flowgraph after structured constructs have been removed, revealing the decisions that are unstructured.
- The original flowgraph has v = 5; the reduced flowgraph has v = 3. Therefore the ev of the original flowgraph = 3 (shown by the superimposed essential flowgraph).

15. Essential Complexity, ev
- Essential complexity helps detect unstructured code.
- Good designs can quickly deteriorate: compare a structured module with v = 10, ev = 1 against an unstructured one with v = 11, ev = 10. A contrast of that kind in C is sketched below.
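A hedged illustration (hypothetical search routines, not project code): a single branch out of a loop is enough to raise ev, while the structured rewrite keeps ev = 1.

/* Unstructured: the goto branches out of the loop, so the loop cannot
   be reduced as a structured construct and ev rises. */
int find_unstructured(const int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] == key)
            goto found;        /* branching out of a loop */
    return -1;
found:
    return i;
}

/* Structured: the exit test is folded into the loop predicate, leaving
   only well-nested constructs, so ev = 1. */
int find_structured(const int a[], int n, int key)
{
    int i = 0;
    while (i < n && a[i] != key)
        i++;
    return (i < n) ? i : -1;
}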
16. Module Design Complexity, iv
Example:

main()
{
    if (a == b)
        progd();
    if (m == n)
        proge();
    switch (expression) {          /* these decisions do not impact calls */
    case value_1: statement1; break;
    case value_2: statement2; break;
    case value_3: statement3;
    }
}

The full flowgraph of main has v = 5. Reducing it to the decisions that affect the calls to progd() and proge() yields a reduced flowgraph with v = 3. Therefore the iv of the original flowgraph = 3 (see the sketch below).
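Seen as code, the reduction keeps only the decisions that can change which subroutines are called (a sketch under the slide's assumptions; the switch falls away because none of its branches contains a call):

void progd(void);
void proge(void);

/* Reduced form of main: two call-guarding decisions survive,
   so iv = 2 + 1 = 3. */
void main_reduced(int a, int b, int m, int n)
{
    if (a == b)
        progd();    /* kept: guards a call */
    if (m == n)
        proge();    /* kept: guards a call */
}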
17. Module Metrics Report
[Figure: a metrics report listing, per module, v (the number of unit test paths) and iv (the number of integration tests), together with the total number of test paths for all modules and the average number of test paths per module.]
18. Low-Complexity Software
- Reliable: simple logic (low cyclomatic complexity, v < 10); not error-prone; easy to test.
- Maintainable: good structure (low essential complexity, ev < 4); easy to understand; easy to modify.

19. Moderately Complex Software
- Unreliable: complicated logic (high cyclomatic complexity, v >> 10); error-prone; hard to test.
- Maintainable: can be understood, modified, and improved.

20. Highly Complex Software
- Unreliable: error-prone; very hard to test.
- Unmaintainable: poor structure (high essential complexity, ev >> 10); hard to understand, modify, or improve.

21. McCabe QA
McCabe QA measures software quality with industry-standard metrics, helping teams:
- manage technical risk factors as software is developed and changed;
- improve software quality using detailed reports and visualization;
- shorten the time between releases;
- develop contingency plans to address unavoidable risks.
22. Processing with McCabe QA Tools
[Diagram: the workflow combines traditional build procedures with McCabe's procedures across three levels.]
- BUILD level (traditional): preprocess the project source files (src to *.E), then compile and link, under configuration management (ClearCase); McCabe's PARSE step reads the preprocessed sources and produces the battlemap, flowgraphs, text and graphics reports, and a test plan.
- TEST level: instrument the source (inst-src plus inst-lib.c), build inst.exe, then run and test; the run produces the normal output plus a trace file.
- ANALYSIS level: IMPORT the trace file for coverage analysis and generate the coverage report.
23. Project B: Backbone™ Concentration Node

24. Project B: Backbone Concentration Node
- This system was designed to support carrier networks. It provides both the services of conventional Layer 2 switches and the routing and control services of Layer 3 devices.
- Nine protocol-based sub-trees of the code (3,400 modules written in C for the BGP, DVMRP, Frame Relay, ISIS, IP, MOSPF, OSPF2, PIM, and PPP protocols) were analyzed.
25. Annotated source listing and flowgraph for the OSPF module. [Figures.]

26. Cyclomatic test paths for the OSPF module; flowgraph of the first test. [Figures.]

27. Module metrics and Halstead metrics for the OSPF protocol suite. [Figures.]

28. Example 1: a reliable and maintainable module. Example 2: an unreliable module that is difficult to maintain.

29. Example 3: an absolutely unreliable and unmaintainable module. Summary of the modules' reliability and maintainability.
30. Project-B Protocol-Based Code Analysis
- Unreliable modules: 38% of the code modules have cyclomatic complexity greater than 10 (including 592 functions with v > 20).
- Only two code parts (FR, ISIS) are reliable.
- BGP and PIM have the worst characteristics (49% of their code modules have v > 10).
- 1,147 modules (34%) are unreliable and unmaintainable, with v > 10 and ev > 4.
- BGP, DVMRP, and MOSPF are the most unreliable and unmaintainable (42% of their modules).
- Project B was cancelled.

31. Project-B Protocol-Based Code Analysis (continued)
- 1,066 functions (31%) have module design complexity greater than 5. The system integration complexity is 16,026, an upper estimate of the number of integration tests.
- Only the FR, ISIS, IP, and PPP modules require about 4 integration tests per module. BGP, MOSPF, and PIM have the worst characteristics (42% of their code modules require more than 7 integration tests per module).
- The B-2.0.0.0int18 release potentially contains 2,920 errors as estimated by the Halstead approach. FR, ISIS, and IP have relatively low B-error metrics (significantly below the average of 0.86 errors per module). For BGP, DVMRP, MOSPF, and PIM the error level is the highest (more than one error per module).
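For background on the error estimates quoted here: Halstead's classic "delivered bugs" formula is B = V / 3000, where the volume V is computed from operator and operand counts. A minimal sketch with illustrative counts (not Project-B data):

#include <math.h>
#include <stdio.h>

/* Halstead: vocabulary eta = n1 + n2 (distinct operators/operands),
   length N = N1 + N2 (total occurrences), volume V = N * log2(eta),
   and the classic bug estimate B = V / 3000. */
double halstead_bugs(int n1, int n2, int N1, int N2)
{
    double V = (N1 + N2) * log2((double)(n1 + n2));
    return V / 3000.0;
}

int main(void)
{
    /* hypothetical counts for one module */
    printf("B = %.2f errors\n", halstead_bugs(20, 35, 120, 90));
    return 0;
}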
32. Comparing Project-B Core Code Releases

33. Comparing Project-B Core Code Releases
- New B-1.3 release (262 modules) vs. old B-1.2 release (271 modules).
- 16 modules were deleted (7 of them with v > 10).
- 7 new modules were added (all reliable, with v < 10 and ev = 1).
- Sixty percent of the changes were made in code modules with cyclomatic complexity greater than 20.
- 63 modules are still unreliable and unmaintainable.
- 39 of the 70 modules (56%) with v > 10 were targeted for change and remained unreliable.
- 7 of 12 modules (58%) increased in complexity to v > 10.
- Significant reductions were achieved in the system design (S0) and system integration (S1) metrics: S0 fell from 1396 to 1294 and S1 from 1126 to 1033, consistent with S1 = S0 - N + 1 (1294 - 262 + 1 = 1033).
- The new release potentially contains fewer errors: 187 (vs. 206) as estimated by the Halstead approach.
- Project B was cancelled.
34. Project C: Broadband Service Node
- The Broadband Service Node (BSN) allows service providers to aggregate tens of thousands of subscribers onto one platform and apply customized IP services to those subscribers.
- It provides various networking services: IP VPNs, firewalls, Network Address Translation (NAT), IP Quality of Service (QoS), web steering, and others.

35. Project-C Code Subtree-Based Analysis
- Three branches of the Project-C code (Release 2.5int21) were analyzed: the RMC, CT3, and PSP subtrees (23,136 modules).
- 26% of the code modules have cyclomatic complexity greater than 10 (including 2,634 functions with v > 20): unreliable modules.
- All three code parts are at approximately the same level of complexity (averages per module: v = 9.9, ev = 3.89, iv = 5.53).
- 1.167 million lines of code were studied (about 50 lines per module on average).
- 3,852 modules (17%) are unreliable and unmaintainable, with v > 10 and ev > 4.
- The estimated number of possible errors is 11,460.
- 128,013 unit tests and 104,880 module integration tests would have to be developed to cover all modules of the Project-C code.

36. Project-C Protocol-Based Code Analysis
- Nine protocol-based areas of the code (2,141 modules) were analyzed: BGP, FR, IGMP, IP, ISIS, OSPF, PPP, RIP, and SNMP.
- 130,000 lines of code were studied.
- 28% of the code modules have cyclomatic complexity greater than 10 (including 272 functions with v > 20): unreliable modules.
- The FR and SNMP parts are well designed and programmed, with few possible errors.
- 39% of the BGP and PPP code areas are unreliable (v > 10).
- 416 modules (19.4%) are unreliable and unmaintainable (v > 10 and ev > 4).
- 27.4% of the BGP and IP code areas are unreliable and unmaintainable.
- The estimated number of possible errors is 1,272.
- 12,693 unit tests and 10,561 module integration tests would have to be developed to cover the nine protocol-based areas of the Project-C code.
37. Correlation between the number of error submits, the number of unreliable functions (v > 10), and the number of possible errors for six protocols. [Figure.]

38. Correlation between the number of customer reports, the number of unreliable functions (v > 10), and the number of possible errors for five protocols. [Figure.]

39. Project C: Code Coverage

40. Project C: Test Coverage
41. What the Structured Testing Methodology (Based on the Theory of Graphs) Has Done for Us
- Identified complex code areas (high v).
- Identified unreliable and unmaintainable code (v > 10 and ev > 4).
- Predicted the number of code errors and the maintenance effort (Halstead B-, E-, and T-metrics).
- Estimated the manpower needed to develop, test, and maintain the code.
- Developed strategies for unit/module testing and integration testing.
- Provided test and code coverage (paths vs. lines).
- Identified "dead" code areas.
- Improved software design and coding standards.
- Improved reengineering efforts in many other projects.
- Validated automated test effectiveness.
