CodeFest 2012. Ильин А. — Coverage Metrics: A Pragmatic Approach
Transcript

  • 1. Code coverage metrics. The pragmatic approach. Александр Ильин, Oracle
  • 2. Preface
  • 3. What is the code coverage data for? To measure to which extent the source code is covered during testing. Consequently … code coverage is a measure of how much source code is covered during testing. And finally … testing is a set of activities aimed to prove that the system under test behaves as expected.
  • 4. CC – how to get. Create a template: a template is a description of all the code there is to cover. “Instrument” the source, the compiled code or the bytecode: insert instructions for dropping data into a file, onto the network, etc. Run the testing and collect the data: you may need to change the environment. Generate a report: HTML, DB, etc.
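The four steps above can be sketched in miniature. This toy is not a real instrumenter (the names and structure are invented for illustration): the set of probe ids plays the role of the template, the hit() calls play the role of inserted instrumentation, and comparing hits against the template plays the role of the report.

```java
import java.util.BitSet;

// Toy sketch of the coverage pipeline: template, instrumentation, run, report.
public class CoverageDemo {
    static final int PROBES = 3;               // "template": 3 blocks to cover
    static final BitSet hits = new BitSet(PROBES);

    static void hit(int id) { hits.set(id); }  // probe inserted by "instrumentation"

    static int abs(int x) {
        hit(0);                                // entry block
        if (x < 0) { hit(1); return -x; }      // negative branch
        hit(2);                                // non-negative branch
        return x;
    }

    public static void main(String[] args) {
        abs(5);                                // "run testing" with a single test
        // "report": which of the 3 template blocks were exercised
        System.out.println("covered " + hits.cardinality() + "/" + PROBES);
    }
}
```

Running the single test abs(5) reports 2 of 3 blocks covered; the uncovered probe points directly at the missing negative-input test.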
  • 5. CC – kinds of. Block / primitive block. Line. Condition / branch / predicate. Entry / exit. Method. Path / sequence.
  • 6. CC – how to use for test base improvement. 1: Measure (prev. slide); good reporting is really important. Perform analysis: find what code you need to cover and what tests you need to develop. Develop more tests. Find dead code. GOTO 1.
  • 7. Mis-usages
  • 8. CC – how not to use: mis-usages. “We must get to 100%.” Maybe not. “100% means no more testing.” No, it does not. “CC does not mean a thing.” It means a fair amount if it is used properly. “There is that tool which will generate tests for us, and we are done.” Nope.
  • 9. Mis-usages: test generation
  • 10. Test generation“We present a new symbolic execution tool, ####, capable of automatically generating tests that achieve high coverage on a diverse set of complex and environmentally-intensive programs.” #### tool documentation
  • 11. Test generation, cont.

        if (b != 3) {
            double a = 1 / (b - 2);   // or should the divisor be (b - 3)? the code alone cannot tell
        } else {
            …
        }

    Reminder: testing is a set of activities aimed to prove that the system under test behaves as expected.
  • 12. Test generation – conclusion. Generated tests cannot check that the code works as expected, because they only know how the code works, not how it is expected to work. The only thing they possess is the code, which may already not be working as expected. :) Hence … generated-test code coverage should not be mixed with regular functional-test code coverage.
  • 13. Mis-usages: what does 100% coverage mean?
  • 14. 100% block/line coverage:

        number | value
        -------+------
             1 | true

  • 15. 100% branch coverage:

        number | value
        -------+------
             1 | true
            -1 | false

  • 16. 100% domain coverage:

        number | value
        -------+------------------
             0 | 0
            .1 | 0.316227766016838
            -1 | exception

  • 17. 100% sequence coverage:

         a | b  | result
        ---+----+-------
        -1 | -1 | 1
        -1 |  1 | -1
         1 | -1 | -1
         1 |  1 | 1
         0 |  1 | NaN
         1 |  0 | NaN
         0 |  0 | NaN
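The tables above can be reproduced on a tiny example. The method bodies below are assumptions, since the slides only show inputs and outcomes:

```java
// Sketch of the coverage-level tables: each stronger criterion demands
// inputs the weaker one never asks for.
public class CoverageGaps {
    // One call, positive(1), gives 100% line coverage of this method.
    // 100% branch coverage additionally needs positive(-1).
    static boolean positive(int number) {
        return number > 0;
    }

    // Neither line nor branch coverage forces the domain-coverage inputs
    // 0, 0.1 and -1 that probe the boundary behaviour
    // (0 -> 0, 0.1 -> 0.3162..., -1 -> exception).
    static double root(double number) {
        if (number < 0) throw new IllegalArgumentException("negative: " + number);
        return Math.sqrt(number);
    }
}
```

A single test of positive(1) already reports 100% line coverage, which is exactly why the slide argues that the number alone proves little.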
  • 18. 100% coverage – conclusion. 100% block/line/branch/path coverage, even if reachable, does not prove much. Hence … no need to try to get there unless ...
  • 19. Mis-usages: target value
  • 20. CC target value – cost. [Figure: “Test Dev. Effort by Code Block Coverage”; x axis: Code Block Coverage (%), y axis: Relative Test Dev. Effort, normalized to 1 at 50% coverage.] Test development effort increases exponentially with coverage:

        f(x) = k·e^(r·x),  k = e^(−50·r)  ⇒  f(50) = 1

    which follows from assuming that the effort needed to increase coverage is proportional to the total effort needed to get to the current coverage:

        df/dx = r·f(x)

    The model does not hold below 50% coverage, except maybe for very big projects.
  • 21. CC target value – effectiveness. [Figure: “Defect Coverage by Code Block Coverage”; x axis: Code Block Coverage (%), y axis: Defect Coverage (%).] Defect coverage by code block coverage is obtained by composing effort per code coverage with defect coverage by effort:

        H(x) = h(f(x)),  f(x) = k·e^(r·x),  h(y) = B·(1 − e^(−s·y/B))

    assuming the defect discovery rate is proportional to the percentage of bugs remaining and to the effort needed to get to the current coverage:

        dH/dx = s·(1 − H(x)/B)·df/dx

    Again, the model does not hold below 50% coverage, except maybe for very big projects.
  • 22. CC target value – ROI. Cost-benefit analysis:

        Benefit(c) = DC(c) · DD · COD
        Cost(c)    = F + V · RE(c)
        ROI        = Benefit(c) / Cost(c) − 1

    where DC(c) is the defect coverage, DD the defect density (example: 50 bugs/kloc), COD the cost of a defect (example: $20k/bug), RE(c) the relative effort with RE(50%) = 1, F the fixed cost of testing (example: $50k/kloc) and V the variable cost (example: $5k/kloc).
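The ROI model can be written down directly with the slide's example constants. Defect coverage dc and relative effort re are taken as inputs here, because the curve parameters (r, s, B) behind them are not given on the slide:

```java
// Sketch of the slide's cost-benefit model, per kloc of product code.
// Constants come from the slide's examples: DD = 50 bugs/kloc,
// COD = $20k/bug, F = $50k/kloc, V = $5k/kloc.
public class CoverageRoi {
    static double benefit(double dc) { return dc * 50 * 20_000; }     // $/kloc saved
    static double cost(double re)    { return 50_000 + 5_000 * re; }  // $/kloc spent
    static double roi(double dc, double re) {
        return benefit(dc) / cost(re) - 1;
    }
}
```

For instance, at the 50% coverage point (re = 1) with 60% of defects covered, the model gives a benefit of $600k/kloc against a cost of $55k/kloc, an ROI of roughly 9.9 (990%); the interesting question is where along the curve the ROI peaks.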
  • 23. 100% coverage – conclusion. 100% block/line/branch/path coverage, even if reachable, does not prove much. Hence … no need to try to get there unless 100% is the target value, which could happen if the cost of a bug is really big and/or the product is really small.
  • 24. Target value – conclusion. The true target value for block/line/branch/path coverage comes from ROI, which is really hard to calculate and justify.
  • 25. Usages
  • 26. CC – how to use. Test base improvement: right; the question is how to select which tests to develop first. Dead code: more of a by-product. As a metric: better have a good metric. Control over code development. Deep analysis.
  • 27. CC as a metric
  • 28. What makes a good metric. Simple to explain: so that you can explain to your boss why it is important to spend resources on it. Simple to work towards: so that you know what to do to improve. Has a clear goal: so that you can tell how far away you are.
  • 29. Is CC a good metric? Simple to explain: yes (+), it is a metric of the quality of testing. Simple to work towards: yes (+), it is relatively easy to map uncovered code to missed tests. Has a clear goal: no (−), the ROI is too complicated. We need to filter the CC data so that only what must be covered is left.
  • 30. Public API*. A set of program elements suggested for usage by public documentation. For example: all functions and variables described in the documentation. For a Java library: all public and protected methods and fields mentioned in the library's javadoc. For the Java SDK: … of all public classes in the java and javax packages. (*) Only applicable to a library or an SDK.
  • 31. Public API
  • 32. True Public API (c). A set of program elements which can be accessed directly by a library user: the public API, plus all extensions of the public API in non-public classes.
  • 33. True public API example: My code / ArrayList.java
  • 34. True Public API – how to get. Get the public API, including interfaces. Filter the CC data so that it only contains implementations and extensions of the public API (*). (*) This assumes that you either use a tool which allows this kind of filtering, or have the data in a parseable format and develop the filtering on your own.
  • 35. UI coverage. In a way, equivalent to public API coverage, but for a UI product. % of UI elements shown: display coverage. % of user actions performed: action coverage. Only action coverage can be obtained from CC data (*). (*) For the UI toolkits the presenter is familiar with.
  • 36. Action coverage – how to get. Collect CC. Extract all implementations of javax.swing.Action.actionPerformed(ActionEvent) or javafx.event.EventHandler.handle(Event). Inspect all the implementations, e.g. org.myorg.NodeAction.actionPerformed(ActionEvent). Add to the filter: org.myorg.NodeAction.nodeActionPerformed(Node myNode). Extract, repeat.
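The extraction step can be sketched as a filter over coverage data. The sketch below is an assumption about tooling, not the presenter's actual script: it takes the classes seen in the coverage data and keeps the ones that implement a given handler interface, whose handler methods then go into the filter.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: from coverage data (here reduced to a list of classes), keep
// the concrete classes implementing a given handler interface.
public class ActionCoverageFilter {
    static List<String> handlerImplementations(List<Class<?>> coveredClasses,
                                               Class<?> handlerInterface) {
        return coveredClasses.stream()
                .filter(c -> handlerInterface.isAssignableFrom(c) && !c.isInterface())
                .map(Class::getName)
                .collect(Collectors.toList());
    }
}
```

For a Swing product this would be called with java.awt.event.ActionListener (which javax.swing.Action extends), for JavaFX with javafx.event.EventHandler.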
  • 37. “Controller” code coverage. Model: contains the domain logic. View: implements user interaction. Controller: maps the two; it only contains code which is called as a result of view actions and model feedback. The controller has very little boilerplate code, so it is a good candidate for 100% block coverage.
  • 38. “Important” code. Development/SQE marks a class or method as important: we use an annotation, @CriticalForCoverage. A list of the methods marked as important is obtained: we do that with an annotation processor right during the main compilation. The CC data is filtered by the method list. The goal is 100%.
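A minimal sketch of such a marker annotation. The name @CriticalForCoverage comes from the slide; the retention policy, targets and the PaymentService example are assumptions (since the slide collects the list with a compile-time annotation processor, SOURCE retention would also do).

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation: methods/classes carrying it must reach 100% coverage.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface CriticalForCoverage {}

// Hypothetical example of marking a critical method.
class PaymentService {
    @CriticalForCoverage
    void charge(long accountId, long amountCents) {
        // domain logic that the team wants fully covered
    }
}
```

An annotation processor registered for the compilation then emits the list of annotated methods, which becomes the coverage filter.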
  • 39. Examples of non-generic metrics. SOA elements. JavaFX properties: a property in JavaFX is something you can set, get and bind. Insert your own.
  • 40. CC as a metric – conclusion. There are multiple ways to filter CC data down to a set of code which needs to be covered in full. There are generic metrics, and it is possible to introduce product-specific metrics. Such metrics are easy to use, although not always straightforward to obtain.
  • 41. Test prioritization
  • 42. Test prioritization. 100500 uncovered lines of code! “OMG! Where do I start?” Pick a metric, develop tests to close the metric, pick another metric. “Metrics are for managers. Me no manager! Me write code!” Then consider mapping the CC data to a few other source code characteristics.
  • 43. Age of the code. New code had better be tested before it gets to the customer (it improves the bug escape rate, BTW). Old code is more likely to have been tested by users already, or not to be used by users at all.
  • 44. What's a bug escape metric? The ratio of defects that sneaked out unnoticed. In theory: (# defects not found before release) / (# defects in the product). In practice: (# defects found after release) / (# defects found after release + # defects found before release).
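The practical formula, with illustrative numbers: 5 defects found after release and 95 found before gives an escape rate of 5%.

```java
// The practical bug escape rate: defects found after release over all
// defects found so far (the true total is unknowable).
public class BugEscape {
    static double escapeRate(int foundAfter, int foundBefore) {
        return (double) foundAfter / (foundAfter + foundBefore);
    }
}
```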
  • 45. Number of changes. The more times a piece of code was changed, the more atomic improvements/bugfixes were implemented in it. Hence … a higher risk of introducing a regression.
  • 46. Number of lines changed. The more lines changed, the more testing the code needs. Better still: the number of uncovered lines which were changed in the last release.
  • 47. Bug density. Assuming all the pieces were tested equally well … many bugs mean there are probably even more: hidden behind the known ones, or introduced as regressions while fixing the existing ones.
  • 48. Code complexity. Assuming the same engineering talent and the same technology … the more complex the code is, the more bugs are likely to be in it. Any complexity metric would work, from class size to cyclomatic complexity.
  • 49. Putting it together. A formula: (1 – cc) * (a1*x1 + a2*x2 + a3*x3 + ...), where cc is the code coverage (0–1), xi is a risk of bug discovery in a piece of code, and ai is a coefficient.
  • 50. Putting it together. (1 – cc) * (a1*x1 + a2*x2 + a3*x3 + ...): the pieces with a higher value are the first to cover. Fix the coefficients. Develop tests. Collect statistics on bug escapes. Fix the coefficients. Continue.
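The loop above can be sketched as a scoring function; the coefficients and risk factors are placeholders to be tuned from the bug-escape statistics:

```java
// The slide's prioritization score: (1 - cc) * sum(ai * xi).
// A higher score means the piece of code should be covered first.
public class TestPriority {
    static double score(double cc, double[] coefficients, double[] riskFactors) {
        double risk = 0;
        for (int i = 0; i < coefficients.length; i++) {
            risk += coefficients[i] * riskFactors[i];
        }
        return (1 - cc) * risk;
    }
}
```

For example, a half-covered piece (cc = 0.5) with risk factors {3, 4} and coefficients {1, 2} scores 0.5 * (3 + 8) = 5.5; fully covered code always scores 0, dropping out of the queue.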
  • 51. Test prioritization – conclusion. CC information alone may not give enough information. It needs to be accompanied by other characteristics of the tested code to make a decision. A few other characteristics can be used simultaneously.
  • 52. Test prioritization: execution
  • 53. Decrease test execution time. Exclude tests which do not add coverage (*). But be careful! Remember that CC is not everything: even 100% coverage does not mean a lot. While excluding tests, get some orthogonal measurement as well, such as specification coverage. (*) Requires “test scales”.
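The exclusion step can be sketched as a greedy pass over per-test coverage (the "test scales"): a test is kept only if it hits at least one block nothing kept so far has hit. This is a heuristic, order-dependent and not a minimal selection.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Greedy sketch of "exclude tests which do not add coverage".
public class TestMinimizer {
    static List<String> minimize(Map<String, Set<Integer>> coveragePerTest) {
        Set<Integer> covered = new HashSet<>();
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Set<Integer>> e : coveragePerTest.entrySet()) {
            // Keep the test only if it covers some block not yet covered.
            if (!covered.containsAll(e.getValue())) {
                kept.add(e.getKey());
                covered.addAll(e.getValue());
            }
        }
        return kept;
    }
}
```

Given tests t1 → {1, 2}, t2 → {2}, t3 → {3}, the pass keeps t1 and t3 and drops t2, whose blocks are already covered; the slide's caveat applies, since t2 may still check behavior the coverage data cannot see.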
  • 54. Deep analysis. Study the coverage report; see which test code exercises which product code (*). Recommended for developers. (*) Also requires “test scales”.
  • 55. Controlled code changes. Do not allow commits unless all the new/changed code is covered. Requires that the tests be committed together with the changes.
  • 56. Code coverage – conclusion. 100% CC does not guarantee that the code works right. 100% CC may not be needed. It is possible to build good metrics with CC. CC helps with prioritization of test development. Other source code characteristics can be used together with CC.
  • 57. Code coverage metrics. The pragmatic approach. Александр Ильин, Oracle