Sonar Metrics
Using Sonar efficiently means understanding precisely how each of its metrics is defined and calculated.


Transcript

  • 1. Sonar Metrics
    Keheliya Gallaba, WSO2 Inc.
  • 2. Why collect metrics?
    ● You cannot improve what you don’t measure
    ● What you don’t measure, you cannot prove
    ● Broken Window Theory
  • 3. What to do?
    ● Prevention is the best medicine
    ● Planning and Prioritizing
    ● Technical Debt Resolution
  • 4. What to monitor?
    ● Duplicated code
    ● Coding standards
    ● Unit tests
    ● Complex code
    ● Potential bugs
    ● Comments
    ● Design and architecture
  • 5. How to monitor?
    ● Sonar Dashboard
      – Lines of code
      – Code Complexity
      – Code Coverage
      – Rules Compliance
    ● Time Machine
    ● Clouds & Hot spots
  • 6. Demo
  • 7. Metrics – Rules
    ● Violations
      – Total number of rule violations
    ● New violations
      – Total number of new violations
    ● xxxxx violations
      – Number of violations with severity xxxxx, where xxxxx is blocker, critical, major, minor or info
    ● New xxxxx violations
      – Number of new violations with severity xxxxx, where xxxxx is blocker, critical, major, minor or info
    ● Weighted violations
      – Sum of the violations weighted by the coefficient associated with each severity: Sum(xxxxx_violations * xxxxx_weight)
      – Default weights: INFO=0; MINOR=1; MAJOR=3; CRITICAL=5; BLOCKER=10
    ● Rules compliance index (violations_density)
      – 100 - weighted_violations / lines of code * 100 (see the sketch below)
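    A minimal sketch (not Sonar’s actual implementation) of the two formulas above, using hypothetical severity counts and a hypothetical line count:

        // Sketch of weighted_violations and violations_density.
        // All counts here are hypothetical.
        public class RulesComplianceExample {
            public static void main(String[] args) {
                int info = 10, minor = 20, major = 5, critical = 2, blocker = 1;
                int linesOfCode = 1000;

                // Default weights: INFO=0; MINOR=1; MAJOR=3; CRITICAL=5; BLOCKER=10
                int weighted = info * 0 + minor * 1 + major * 3 + critical * 5 + blocker * 10;

                // Rules compliance index: 100 - weighted_violations / ncloc * 100
                double rci = 100.0 - (double) weighted / linesOfCode * 100;

                System.out.println("weighted_violations = " + weighted); // 55
                System.out.println("violations_density = " + rci + "%"); // 94.5%
            }
        }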
  • 8. Metrics – Size
    ● Physical lines
      – Number of carriage returns
    ● Comment lines
      – Number of Javadoc, multi-line comment and single-line comment lines. Empty comment lines, header file comments (mainly used to hold the license) and commented-out lines of code are not included.
    ● Commented-out lines of code
      – Number of commented-out lines of code. Javadoc blocks are not scanned.
    ● Lines of code (ncloc)
      – Number of physical lines - blank lines - comment lines - header file comments - commented-out lines of code
    ● Density of comment lines
      – Comment lines / (lines of code + comment lines) * 100
      – With this formula, 50% means there are as many comment lines as lines of code, and 100% means the file contains only comment lines and no code (see the example below)
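    To make the line categories concrete, here is a small hypothetical file; the trailing comments describe how each line would typically be classified (in a real analysis those annotations would, of course, be comments themselves):

        /* Copyright 2012 Example Inc. */   // header comment: not a comment line
        package example;                    // line of code (ncloc)

        /** Adds two integers. */           // comment line (Javadoc)
        public class Adder {                // line of code
            public int add(int a, int b) {  // line of code
                // return a - b;            // commented-out line of code
                return a + b;               // line of code
            }                               // line of code
        }                                   // line of code

    With 6 lines of code and 1 comment line, the density of comment lines would be 1 / (6 + 1) * 100, roughly 14%.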
  • 9. Metrics – Size (contd.)
    ● Packages
      – Number of packages
    ● Classes
      – Number of classes, including nested classes, interfaces, enums and annotations
    ● Files
      – Number of analyzed files
    ● Directories
      – Number of analyzed directories
    ● Accessors
      – Number of getter and setter methods used to get (read) or set (write) a class property
    ● Methods
      – Number of methods, not including accessors. A constructor is considered to be a method.
    ● Public API
      – Number of public classes, public methods (excluding accessors) and public properties (excluding public static final ones)
    ● Public undocumented API
      – Number of public API items without a Javadoc block
    ● Density of public documented API (public_documented_api_density)
      – (Public API - public undocumented API) / public API * 100 (see the example below)
    ● Statements
      – Number of statements as defined in the Java Language Specification, excluding block definitions. The statement counter is incremented by one each time an expression, if, else, while, do, for, switch, break, continue, return, throw, synchronized, catch or finally is encountered.
      – The counter is not incremented by class, method, field or annotation definitions, nor by package or import declarations.
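    A tiny hypothetical class showing how these counters would typically apply:

        /** A person with a renamable name. */    // documented public class: public API
        public class Person {
            private String name;                  // private field: not public API

            /** Creates a person. */              // documented constructor:
            public Person(String name) {          // counted as a method and as public API
                this.name = name;
            }

            public String getName() {             // accessor: not counted as a method
                return name;                      // and not part of the public API
            }

            public void rename(String newName) {  // public method without Javadoc:
                this.name = newName;              // +1 method, +1 public undocumented API
            }
        }

    Under the definitions above: methods = 2 (constructor and rename), accessors = 1, public API = 3 (class, constructor, rename), public undocumented API = 1, so public_documented_api_density = (3 - 1) / 3 * 100, roughly 67%.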
  • 10. Metrics – Complexity
    ● Complexity
      – The Cyclomatic Complexity Number (CCN) is also known as the McCabe metric. It comes down to counting if, for, while statements etc. in a method: whenever the control flow of a method splits, the cyclomatic counter is incremented by one.
      – Each method has a minimum value of 1 by default, except accessors, which are not considered methods and therefore don’t increment complexity. Each of the following Java keywords/statements increments the value by one:
        ● if
        ● for
        ● while
        ● case
        ● catch
        ● throw
        ● return (when it isn’t the last statement of a method)
        ● &&
        ● ||
        ● ?
      – Note that else, default and finally don’t increment the CCN any further.
  • 11. Metrics – Complexity (contd.)

    public void process(Car myCar) {                     // +1 (method)
        if (myCar.isNotMine()) {                         // +1 (if)
            return;                                      // +1 (return that isn’t the last statement)
        }
        myCar.paint("red");
        myCar.changeWheel();
        while (myCar.hasGazol()
                && myCar.getDriver().isNotStressed()) {  // +2 (while, &&)
            myCar.drive();
        }
        return;                                          // last statement: no increment
    }                                                    // total CCN: 5
  • 12. Metrics – Complexity (contd.)
    ● Average complexity by method (function_complexity)
      – Average cyclomatic complexity number per method
    ● Complexity distribution by method (function_complexity_distribution)
      – Number of methods for given complexities
    ● Average complexity by class (class_complexity)
      – Average cyclomatic complexity per class
    ● Complexity distribution by class (class_complexity_distribution)
      – Number of classes for given complexities
    ● Average complexity by file (file_complexity)
      – Average cyclomatic complexity per file
  • 13. Metrics – Duplication
    ● Duplicated lines (duplicated_lines)
      – Number of physical lines touched by a duplication
    ● Duplicated blocks (duplicated_blocks)
      – Number of duplicated blocks of lines
    ● Duplicated files (duplicated_files)
      – Number of files involved in a duplication of lines
    ● Density of duplicated lines (duplicated_lines_density)
      – Duplicated lines / physical lines * 100
  • 14. Metrics – Tests
    ● Unit tests (tests)
      – Number of unit tests
    ● Unit tests duration (test_execution_time)
      – Time required to execute the unit tests
    ● Unit test errors (test_errors)
      – Number of unit tests that failed
    ● Unit test failures (test_failures)
      – Number of unit tests that failed with an unexpected exception
    ● Unit test success density (test_success_density)
      – (Unit tests - (errors + failures)) / unit tests * 100 (worked example below)
    ● Skipped unit tests (skipped_tests)
      – Number of skipped unit tests
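    A quick worked example of the success-density formula, with hypothetical counts:

        // Hypothetical counts illustrating test_success_density.
        public class TestSuccessDensityExample {
            public static void main(String[] args) {
                int tests = 200, errors = 4, failures = 6;
                double density = (tests - (errors + failures)) / (double) tests * 100;
                System.out.println("test_success_density = " + density + "%"); // 95.0%
            }
        }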
  • 15. Metrics – Tests (contd.)
    ● Line coverage (line_coverage)
      – On a given line of code, line coverage simply answers the question: "Is this line of code executed during unit test execution?". At project level, this is the density of covered lines:
        Line coverage = LC / EL, where
        ● LC = lines covered (lines_to_cover - uncovered_lines)
        ● EL = total number of executable lines (lines_to_cover)
    ● New line coverage (new_line_coverage)
      – Identical to line_coverage but restricted to new / updated source code
  • 16. Metrics – Tests (contd.)
    ● Branch coverage (branch_coverage)
      – On each line of code containing boolean expressions, branch coverage simply answers the question: "Has each boolean expression evaluated both to true and to false?". At project level, this is the density of possible branches in flow-control structures that have been followed:
        Branch coverage = (CT + CF) / (2 * B), where
        ● CT = branches that evaluated to "true" at least once
        ● CF = branches that evaluated to "false" at least once
          (CT + CF = conditions_to_cover - uncovered_conditions)
        ● B = total number of branches (2 * B = conditions_to_cover)
    ● New branch coverage (new_branch_coverage)
      – Identical to branch_coverage but restricted to new / updated source code
  • 17. Metrics – Tests (contd.)
    ● Coverage (coverage)
      – The coverage metric mixes the previous line-coverage and branch-coverage metrics to give an even more accurate answer to the question "How much of the source code is being executed by your unit tests?".
      – Coverage is calculated with the following formula (worked example below):
        Coverage = (CT + CF + LC) / (2 * B + EL), where
        ● CT = branches that evaluated to "true" at least once
        ● CF = branches that evaluated to "false" at least once
        ● LC = lines covered (lines_to_cover - uncovered_lines)
        ● B = total number of branches (2 * B = conditions_to_cover)
        ● EL = total number of executable lines (lines_to_cover)
    ● New coverage (new_coverage)
      – Identical to coverage but restricted to new / updated source code
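    The three coverage formulas can be checked together on hypothetical numbers (none of these values come from the slides):

        // Hypothetical values illustrating line, branch and overall coverage.
        public class CoverageExample {
            public static void main(String[] args) {
                int el = 100;  // executable lines (lines_to_cover)
                int lc = 80;   // covered lines (lines_to_cover - uncovered_lines)
                int b = 20;    // branches, so 2*B = 40 = conditions_to_cover
                int ct = 15;   // branches evaluated to true at least once
                int cf = 13;   // branches evaluated to false at least once

                double line = lc / (double) el;                   // 0.800
                double branch = (ct + cf) / (2.0 * b);            // 0.700
                double overall = (ct + cf + lc) / (2.0 * b + el); // 108 / 140 = 0.771

                System.out.printf("line=%.1f%% branch=%.1f%% coverage=%.1f%%%n",
                        line * 100, branch * 100, overall * 100);
            }
        }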
  • 18. Metrics – Tests (contd.)
    ● Conditions to cover (conditions_to_cover)
      – Total number of conditions that could be covered by unit tests
    ● New conditions to cover (new_conditions_to_cover)
      – Identical to conditions_to_cover but restricted to new / updated source code
    ● Lines to cover (lines_to_cover)
      – Total number of lines of code that could be covered by unit tests
    ● New lines to cover (new_lines_to_cover)
      – Identical to lines_to_cover but restricted to new / updated source code
    ● Uncovered conditions (uncovered_conditions)
      – Total number of conditions that are not covered by unit tests
    ● New uncovered conditions (new_uncovered_conditions)
      – Identical to uncovered_conditions but restricted to new / updated source code
    ● Uncovered lines (uncovered_lines)
      – Total number of lines of code that are not covered by unit tests
    ● New uncovered lines (new_uncovered_lines)
      – Identical to uncovered_lines but restricted to new / updated source code
  • 19. Metrics – Design
    ● Depth of inheritance tree (dit)
      – The depth of inheritance tree (DIT) metric provides, for each class, a measure of the inheritance levels from the top of the object hierarchy. In Java, where all classes inherit from Object, the minimum value of DIT is 1.
    ● Number of children (noc)
      – A class’s number of children (NOC) metric simply measures the number of direct and indirect descendants of the class.
    ● Response for class (rfc)
      – The response set of a class is the set of methods that can potentially be executed in response to a message received by an object of that class. RFC is simply the number of methods in that set.
    ● Afferent couplings (ca)
      – A class’s afferent couplings is a measure of how many other classes use the specific class.
    ● Efferent couplings (ce)
      – A class’s efferent couplings is a measure of how many different classes are used by the specific class.
    ● Lack of cohesion of methods (lcom4)
      – LCOM4 measures the number of "connected components" in a class. A connected component is a set of related methods and fields. There should be only one such component in each class; if there are two or more, the class should be split into that many smaller classes (see the sketch below).
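    A minimal hypothetical illustration of LCOM4 = 2: the name-handling methods and the count-handling methods touch disjoint fields and never call each other, so they form two connected components and the class could be split in two:

        // Hypothetical class with LCOM4 = 2.
        public class Mixed {
            private String name;
            private int count;

            // Component 1: only uses 'name'
            public String upperName() { return name.toUpperCase(); }
            public void rename(String raw) { name = raw.trim(); }

            // Component 2: only uses 'count'
            public void increment() { count++; }
            public boolean overLimit(int limit) { return count > limit; }
        }

    Splitting Mixed into a name holder and a counter would bring each resulting class back to LCOM4 = 1.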
  • 20. Metrics – Design (contd.)
    ● Package cycles (package_cycles)
      – Minimal number of package cycles detected, to be able to identify all undesired dependencies
    ● Package dependencies to cut (package_feedback_edges)
      – Number of package dependencies to cut in order to remove all cycles between packages
    ● File dependencies to cut (package_tangles)
      – Number of file dependencies to cut in order to remove all cycles between packages
    ● Package edges weight (package_edges_weight)
      – Total number of file dependencies between packages
    ● Package tangle index (package_tangle_index)
      – Gives the tangle level of the packages: the best value, 0%, means there are no cycles; the worst value, 100%, means the packages are completely tangled. The index is calculated as: 2 * (package_tangles / package_edges_weight) * 100 (worked example below).
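    A worked example of the tangle index, with hypothetical numbers:

        // Hypothetical numbers for package_tangle_index.
        public class TangleIndexExample {
            public static void main(String[] args) {
                int packageTangles = 5;       // file dependencies to cut
                int packageEdgesWeight = 200; // file dependencies between packages
                double index = 2 * (packageTangles / (double) packageEdgesWeight) * 100;
                System.out.println("package_tangle_index = " + index + "%"); // 5.0%
            }
        }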
  • 21. Metrics – Design (contd.)
    ● File cycles (file_cycles)
      – Minimal number of file cycles detected inside a package, to be able to identify all undesired dependencies
    ● Suspect file dependencies (file_feedback_edges)
      – File dependencies to cut in order to remove cycles between files inside a package. Warning: cycles between files inside a package are not always bad.
    ● File tangle (file_tangles)
      – file_tangles = file_feedback_edges
    ● File edges weight (file_edges_weight)
      – Total number of file dependencies inside a package
    ● File tangle index (file_tangle_index)
      – 2 * (file_tangles / file_edges_weight) * 100
  • 22. Metrics – SCM
    ● Commits
      – The number of commits
    ● Last commit date
      – The latest commit date on a resource
    ● Revision
      – The latest revision of a resource
    ● Authors by line
      – The last committer on each line of code
    ● Revisions by line
      – The revision number on each line of code
  • 23. Coverage - Take-home Points:
    ● Don’t use percentage metrics for coverage
    ● Unit tests make code simpler, easier to understand
  • 24. Compliance - Take-home Point: Don’t Change the Rules During the Game
  • 25. Complexity - Take-home Point: Don’t Prohibit Complexity, Manage It.
  • 26. Comments - Take-home Point: Code a little. Comment a little.
  • 27. Architecture & Design - Take-home Point: Sonar-guided refactoring
  • 28. Live Sonar instance
    http://nemo.sonarsource.org/
  • 29. Technical Debt Plugin
  • 30. Technical Debt Calculation
    Important metrics to look for:
    ● duplicated_blocks
    ● violations – info_violations
    ● public_undocumented_api
    ● uncovered_complexity_by_tests (it is considered that 80% coverage is the objective)
    ● function_complexity_distribution >= 8, class_complexity_distribution >= 60
    ● package_edges_weight
  • 31. Technical Debt Calculation
    Debt (in man days) = cost_to_fix_duplications
                       + cost_to_fix_violations
                       + cost_to_comment_public_API
                       + cost_to_fix_uncovered_complexity
                       + cost_to_bring_complexity_below_threshold
                       + cost_to_cut_cycles_at_package_level
  • 32. Calculation of Debt Ratio
    Debt Ratio = (Current Debt / Total Possible Debt) * 100
    (A worked example of both formulas follows.)
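    A hypothetical end-to-end example of the plugin’s two formulas, with made-up per-axis costs in man days:

        // Made-up man-day costs illustrating debt and debt ratio.
        public class DebtExample {
            public static void main(String[] args) {
                double debt = 3.0   // cost_to_fix_duplications
                            + 5.5   // cost_to_fix_violations
                            + 1.0   // cost_to_comment_public_API
                            + 4.0   // cost_to_fix_uncovered_complexity
                            + 2.5   // cost_to_bring_complexity_below_threshold
                            + 2.0;  // cost_to_cut_cycles_at_package_level
                double totalPossibleDebt = 90.0; // debt if every axis were at its worst
                double ratio = debt / totalPossibleDebt * 100;
                System.out.println("Debt = " + debt + " man days, ratio = " + ratio + "%");
                // Debt = 18.0 man days, ratio = 20.0%
            }
        }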
  • 33. Sonar in 2 minutes
    ● Download Sonar from http://sonar.codehaus.org/downloads/ and unzip it
    ● sonar.sh console
    ● mvn sonar:sonar
    ● Open http://localhost:9000/
  • 34. Installing Plug-ins
    ● Download the plug-in jar
    ● Copy it to extensions/plugins
    ● Restart Sonar
  • 35. Some Useful Plugins
    ● SQALE – Quality Model
    ● Technical Debt
    ● Eclipse/IntelliJ IDEA
    ● JIRA Issues
    ● Build Breaker
  • 36. Thank You!
    twitter.com/keheliya
    keheliya@wso2.com
    April 5, 2012
    Image credits:
    http://crimespace.ning.com/profiles/blogs/psychological-impact-of-the
    http://agileandbeyond.blogspot.com/2011/05/velocity-handle-with-care.html