
# Software Measurement: Lecture 1. Measures and Metrics


Materials of the lecture on metrics and measures given by Programeter leadership during the Software Economics course at Tartu University: courses.cs.ut.ee/2010/se

Published in: Technology

### Software Measurement: Lecture 1. Measures and Metrics

**Software Measurement** – Software Economics 2010

**Anton Litvinenko**
- Co-founder and CTO at Programeter, a metrics tracking kit for software development
- Key competence: software measurement and metrics
- 9 years of software development at Programeter, Mobi, and MicroLink
- MSc in computer science at Tartu University
**Agenda**
- What is a software metric?

**Are software metrics "good" or "bad"?**

**What is a "measure"?**
- "A way of associating a number with some attribute of a physical object"
  - height → meters; temperature → degrees Celsius
- A one-to-one mapping between physical objects and formal objects (e.g. numbers)
**Same Stuff, Formally**
- A relational system is a tuple consisting of:
  - a set of objects (e.g. collections of apples)
  - relations on these objects (e.g. "more", "equal")
  - binary operations on these objects (e.g. combine, put together)
- What would be the corresponding formal relational system?
**Relationships and Operations**
- Apples:
  - Steve Jobs has 7 apples; Steve Ballmer has 4 apples
  - Jobs has more apples
  - Jobs and Ballmer can cooperate and put their apples together to make a larger pile
- We have defined a complete transition from the "real world" into the "formal world"
**Same Stuff, Formally... Again**
- Let A be a relational system of physical objects (e.g. apples), B a relational system of formal objects (e.g. numbers), and m a measure from A to B
- The tuple (A, B, m) is a scale if:
  - every relation in A has an equivalent relation in B
  - every operator in A has a corresponding operator in B
**Why Is This Important? What Can You Say?**
- Software design:
  - 10 modules with complexity in the 20–30 range
  - 20 modules with complexity in the 10–30 range
- Which one is less complex?
- We don't have intuition for such cases (the "intelligence barrier")
**Example: Temperature**
- Facts:
  - Steve: today is 40°F, yesterday was 80°F
  - Anton: today is 4°C, yesterday was 27°C
- Statements:
  - Steve: "Yesterday was warmer than today"
  - Anton: "Yesterday was warmer than today"
  - Steve: "Yesterday was twice as warm"
- Is that last one a meaningful statement about temperature?
- A statement is meaningful when it gives the same result on all similar scales
- Scales are similar when there is a transformation from one scale to the other that preserves all defined relations and operations
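A short Python sketch of the meaningfulness test, using the temperatures from the example above. Fahrenheit and Celsius are similar scales (linked by a positive linear transformation), so they agree on the ordinal statement but not on the ratio:

```python
# Sketch: why "yesterday was twice as warm" is not meaningful on an
# interval scale. Fahrenheit and Celsius are similar scales, linked by
# t(x) = ax + b, yet they disagree on the ratio of two temperatures.

def f_to_c(f):
    """Fahrenheit -> Celsius: a positive linear transformation."""
    return (f - 32) * 5 / 9

today_f, yesterday_f = 40.0, 80.0
today_c, yesterday_c = f_to_c(today_f), f_to_c(yesterday_f)

# The ordinal statement survives the change of scale...
print(yesterday_f > today_f, yesterday_c > today_c)  # True True

# ...but the ratio does not: 80/40 = 2.0, while ~26.7/~4.4 ≈ 6.0.
print(yesterday_f / today_f, yesterday_c / today_c)
```

Because the two similar scales give different answers, "twice as warm" is not a meaningful statement about temperature measured on an interval scale.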
**Nominal Scale**
- Giving "names" to objects; the only relation is equality
- Gender: any naming is similar to any other
- Numbers on the t-shirts of football players: any unique numbering is similar to any other
- View from 3,000 feet :) – nominal scales: gender, t-shirt numbering
**Ordinal Scale**
- Giving "names" in a particular order
  - "more ... than ..."; the middle element (median) is defined
- Rating of tennis players: any other rating that preserves the order is similar
- All ordinal scales are nominal (examples: top-100 ratings, grading)
**Interval Scale**
- Assigning numbers so that intervals are also meaningful
  - Both median and arithmetic mean are defined
  - Similar scales are reachable via a positive linear transformation t(x) = ax + b
- Temperature in Celsius; similar: the Fahrenheit scale
- All interval scales are ordinal
**Ratio Scale**
- The ratio of two measures is meaningful; all statistical measures apply
  - Similar scales are reachable via a positive linear transformation of the form t(x) = ax
- Length, height, ...; similar: imperial units
- All ratio scales are interval
**Absolute Scale**
- Only one way of measuring the objects!
  - The only similarity is the identity transformation t(x) = x
- Counting: "my team has 5 members", "my software is 25 lines of code"
- All absolute scales are ratio scales
**Exercise 2**
- Cost is usually a measure on a ratio scale
- Quality is only ordinal (rarely interval)
- Judgment in terms of value:
  - quality per unit of cost
  - should we pay 2× for 2× quality?
- Combining a cost measure on a ratio scale with a quality measure on an ordinal scale, what scale do you get?
**In This Course: Metric = Measure**
- A software metric is a measure of anything directly related to software or its production
**Agenda**
- What is a software metric?
- Examples of software metrics – the most famous ones :)

Can anybody name any software metric?
**Lines of Code (LOC) – Product Size**
- (The slides show three example snippets of 12, 14, and 18 lines)

**Lines of Code – Summary**
- Accurate, easy to measure
- Open to interpretation:
  - empty lines
  - several statements on one line
- Language dependent
- Doesn't respect complexity and content
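A minimal Python sketch of one possible counting convention. The choice of what to skip (blank lines, comment-only lines) is exactly the interpretation problem the summary points out, so treat this as one convention among many, not as the definition of LOC:

```python
def count_loc(source: str) -> dict:
    """Naive LOC counter: skips blank lines and Python-style
    comment-only lines when computing the 'effective' count."""
    physical = 0   # all lines
    blank = 0      # empty / whitespace-only lines
    comment = 0    # lines containing only a '#' comment
    for line in source.splitlines():
        physical += 1
        stripped = line.strip()
        if not stripped:
            blank += 1
        elif stripped.startswith("#"):
            comment += 1
    return {"physical": physical, "effective": physical - blank - comment}

snippet = """\
# add two numbers
def add(a, b):

    return a + b
"""
print(count_loc(snippet))  # {'physical': 4, 'effective': 2}
```

Two counters already disagree on this four-line snippet, which illustrates why LOC numbers are only comparable when the counting rules are fixed.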
**McCabe's Cyclomatic Complexity**
- Thomas McCabe, 1976
- Complexity of a program:
  - the number of linearly independent paths through a function
  - usually calculated from the control-flow graph
- V(G) = e - n + 2p
  - e – number of edges, n – number of nodes, p – number of connected components of the graph
- Example from the slide's flow graph: e = 7, n = 6, p = 1, so V(G) = 7 - 6 + 2 = 3
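The formula is a one-liner once the flow graph has been counted; a small Python sketch using the numbers from the slide's example graph:

```python
def cyclomatic_complexity(edges: int, nodes: int, parts: int = 1) -> int:
    """McCabe's V(G) = e - n + 2p for a control-flow graph with
    e edges, n nodes, and p connected components."""
    return edges - nodes + 2 * parts

# The slide's flow graph: 7 edges, 6 nodes, 1 connected component.
print(cyclomatic_complexity(edges=7, nodes=6, parts=1))  # 3
```

In practice tools count decision points in the source rather than building the graph explicitly, but the graph formulation above is the definition.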
**Cyclomatic Complexity – Summary**
- Can be computed automatically
- Maintainability: when V(G) > 10, the probability of defects rises
- Testability:
  - V(G) is an upper bound on the number of test cases needed for full branch coverage (every control structure evaluated both to true and to false)
  - V(G) is a lower bound on the number of test cases needed for full path coverage (all possible paths executed)
- Doesn't respect other types of complexity: data structures, data flow, interfaces
**Exercise 3**
- Calculate the LOC
- Draw a flow graph
- Calculate McCabe's cyclomatic complexity

(The code snippet to analyze is shown on the slide.)
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
**Object-Oriented Metrics**
- Shyam Chidamber and Chris Kemerer, 1994
- Metrics based on a firm theoretical basis and the experience of professional software developers
- Measure aspects unique to the object-oriented approach
**Inheritance Metrics**
- Depth of inheritance tree (DIT): depth of the class in the inheritance tree
- Number of children (NOC): number of immediate descendants
- (The slide shows two example trees: one class with NOC = 2 and DIT = 2, another with NOC = 3 and DIT = 1)
**Complexity**
- Weighted method count (WMC): sum of McCabe's cyclomatic complexities of all methods
- Response for a class (RFC): number of public methods in a class plus the methods directly called by them
- Example from the slide: RFC = 6, WMC = 1 + 2 + 1 = 4
**Coupling and Cohesion**
- Coupling between object classes (CBO): number of classes a given class is coupled to
- Lack of cohesion in methods (LCOM): the number of method pairs that share no instance variables minus the number of method pairs that share at least one instance variable (taken as 0 if negative)
- Examples from the slides: CBO = 2, LCOM = 3 - 0 = 3; and LCOM = 2 - 1 = 1
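The LCOM rule can be sketched in a few lines of Python. The method-to-instance-variable mappings below are invented for illustration; only the P - Q rule itself comes from the Chidamber–Kemerer definition:

```python
from itertools import combinations

def lcom(method_vars: dict) -> int:
    """Chidamber-Kemerer LCOM: P = method pairs sharing no instance
    variables, Q = pairs sharing at least one; LCOM = max(P - Q, 0).
    method_vars maps a method name to the set of instance variables
    it uses."""
    p = q = 0
    for (_, vars_a), (_, vars_b) in combinations(method_vars.items(), 2):
        if vars_a & vars_b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Three methods touching disjoint fields: every pair is disjoint.
print(lcom({"m1": {"a"}, "m2": {"b"}, "m3": {"c"}}))  # 3 - 0 = 3
# Two methods sharing 'a', one apart: P = 2, Q = 1.
print(lcom({"m1": {"a"}, "m2": {"a"}, "m3": {"b"}}))  # 2 - 1 = 1
```

The two calls reproduce the slide examples (LCOM = 3 and LCOM = 1); a fully cohesive class, where every pair shares a variable, bottoms out at 0.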
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
  - Object-oriented design quality metrics
**Object-Oriented Design**
- Bad design symptoms: rigidity, fragility, immobility, viscosity
- Class design principles: open-closed principle, Liskov substitution principle, ...
- Package architecture principles: stable dependencies principle, stable abstractions principle, ...

**OO Design Quality Metrics**
- Robert Martin (aka Uncle Bob), 1994
- Measure the quality of an object-oriented design
**Dependencies Between Classes**
- Can we divide dependencies into "good" and "bad"?

**Dependencies**
- Stable (good) vs unstable (bad) classes
- Stable:
  - no need to change = independent
  - hard to change = many dependents = responsible
- Unstable:
  - depends on many = dependent
  - easy to change = no dependents = irresponsible
**Class Category**
- A class category is a group of highly cohesive classes that are:
  - closed and open to changes together
  - reused together
  - serving the same goal
- In practice: packages in Java, namespaces in C#
**Dependency Metrics**
- Afferent coupling (Ca): number of classes outside the category that depend on classes inside the category (incoming dependencies)
- Efferent coupling (Ce): number of classes inside the category that depend on classes outside the category (outgoing dependencies)
- Example from the slide: Ca(Package One) = 1, Ce(Package One) = 2
**Instability (I)**
- Ratio of outgoing dependencies to the total number of dependencies: I = Ce / (Ca + Ce)
- Stable: I = 0 (Ce = 0)
- Unstable: I = 1 (Ca = 0, Ce > 0)
- Should all categories be stable?
- Why does a stable category need to be extensible? How?

**Abstractness (A)**
- Degree to which a category is abstract: the ratio of abstract classes to the total number of classes in the category
- Completely abstract: A = 1 (all classes are abstract)
- Completely concrete: A = 0 (no abstract classes in the category)
- Is there a relationship between instability and abstractness?

**Distance from the Main Sequence**
- D' = |A + I - 1|, normalized to the range [0, 1]
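The three package metrics compose naturally. A minimal Python sketch, with dependency and class counts invented for illustration:

```python
def instability(ca: int, ce: int) -> float:
    """I = Ce / (Ca + Ce): share of outgoing dependencies."""
    return ce / (ca + ce)

def abstractness(abstract_classes: int, total_classes: int) -> float:
    """A = abstract classes / all classes in the category."""
    return abstract_classes / total_classes

def distance(a: float, i: float) -> float:
    """D' = |A + I - 1|: normalized distance from the main sequence."""
    return abs(a + i - 1)

# Hypothetical package: 1 incoming and 2 outgoing dependencies,
# 1 abstract class out of 3.
i = instability(ca=1, ce=2)   # 2/3: fairly unstable
a = abstractness(1, 3)        # 1/3: mostly concrete
print(round(distance(a, i), 9))  # 0.0 -> right on the main sequence
```

A stable and concrete package (I = 0, A = 0) lands at D' = 1, the "zone of pain"; an abstract package nobody depends on (I = 1, A = 1) lands at the other extreme, the "zone of uselessness".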
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
  - Object-oriented design quality metrics
  - Developer and team metrics
**Developer and Team Metrics**
- Productivity: how active developers are, how much work is being done
- Knowledge: how well developers know the software they are working on
- Expertise: what kinds of tools and libraries developers use
- Team "healthiness": communication and knowledge sharing
**Productivity: Code Churn Metrics**
- Amount of code changed in the software during a period of time
- Churned LOC: number of added, modified, and deleted lines of code
- Churn count: number of changes made to a file
- Files churned: number of changed files

**Code Churn Metrics – Findings**
- Give an overview of activity and productivity
- An increase in relative code churn metrics is associated with an increase in defect density (the number of defects per line of code)
- Vulnerable files have higher code churn metrics (a vulnerability is an instance of a violation of the security policy)
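The three churn metrics fall out of simple aggregation over change records. A Python sketch over invented commit data (the record shape is an assumption; real tools would pull these numbers from version control diffs):

```python
# Hypothetical change records: file touched, lines added/modified/deleted.
commits = [
    {"file": "parser.py", "added": 40, "modified": 12, "deleted": 5},
    {"file": "parser.py", "added": 3,  "modified": 7,  "deleted": 1},
    {"file": "cli.py",    "added": 20, "modified": 0,  "deleted": 2},
]

# Churned LOC: every added, modified, or deleted line counts.
churned_loc = sum(c["added"] + c["modified"] + c["deleted"] for c in commits)
# Churn count: number of changes made (here, across all files).
churn_count = len(commits)
# Files churned: number of distinct files touched.
files_churned = len({c["file"] for c in commits})

print(churned_loc, churn_count, files_churned)  # 90 3 2
```

Dividing churned LOC by total LOC gives the *relative* churn that the defect-density finding above refers to.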
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
  - Object-oriented design quality metrics
  - Developer and team metrics
    - Productivity
    - Knowledge
**Knowledge Metrics**
- Which parts of the software is a developer comfortable working with? (Better planning)
- Does the developer share their knowledge with colleagues? (Risk management)
- (The slides split each developer's knowledge of the codebase into unique and shared portions, e.g. unique 2/5 → 40%, shared 1/5 → 20%)

**Example**
- If a developer decides to leave, all their unique knowledge is lost to the team (a developer with 35% unique and 10% shared knowledge is a bigger risk than one with 10% unique and 35% shared)
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
  - Object-oriented design quality metrics
  - Developer and team metrics
  - Project size metrics
**How would you measure product size?**

**Perfect Hours**
- One hour of ideal engineering
  - How many perfect hours are there in a work day?
- A relative measure of effort: "how many ideal engineering hours are required to complete the feature"
- Team specific
- Can be applied early ↔ manual and subjective

**Points**
- A generalization of the perfect hour: a relative measure of the effort required to complete a feature
- Not tied to time
- Team specific
- Can be applied early ↔ manual and subjective
**Velocity**
- How much work a team can complete per iteration: completed points / iterations
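Velocity is just the average of completed points over past iterations. A trivial Python sketch with invented point totals:

```python
def velocity(points_per_iteration: list) -> float:
    """Average completed points per iteration."""
    return sum(points_per_iteration) / len(points_per_iteration)

# A team that completed 8, 10, and 12 points over its last three
# iterations can plan on roughly 10 points per iteration.
print(velocity([8, 10, 12]))  # 10.0
```

Because points are team specific, a velocity of 10 is only comparable within the same team, never across teams.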
**Function Points**
- Will be covered in the next workshop
**Agenda**
- What is a software metric?
- Examples of software metrics
  - LOC and McCabe's cyclomatic complexity
  - Object-oriented metrics
  - Object-oriented design quality metrics
  - Developer and team metrics
  - Project size metrics
  - Quality metrics
**Quality Metrics**
- What does "high quality" mean?
- Many different "models" and "checklists":
  - McCall's, FURPS, ISO 9126
  - Functionality, reliability, usability, portability, ...
- Quality cannot be measured directly; it is derived from other metrics
**Quality – the Developer's Perspective**
- Comprehensibility:
  - style and cleanness of the source code
  - architecture and design
  - technologies and libraries used
- Testability and existing tests:
  - ease of automated testing
  - code coverage with tests
**Quality – the PM's Perspective**
- Predictability:
  - effort required for development, testing, ...
  - delivery planning
  - additional costs
- Correctness:
  - satisfies the specification
  - serves customer needs
**Quality – the Customer's Perspective**
- Value for money:
  - supports organizational goals
  - return on investment
- Transparency:
  - the partner's effort is recognizable
  - delays and troubles are not hidden
**Quality – the User's Perspective**
- Usability: ease of use, comprehensibility
- Performance: responsive; critical functionality is quick
- Functionality: the software does the right thing
**Example: Defect Detection Percentage**
- Measures the efficiency of quality assurance procedures: how many bugs were "delivered" to the customer
- DDP = E / (E + D)
  - E – errors found before delivery
  - D – defects, i.e. errors found after delivery
- What would the ideal situation be?
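A one-line Python sketch of the DDP formula; the error and defect counts below are invented for illustration:

```python
def ddp(found_before: int, found_after: int) -> float:
    """Defect Detection Percentage: E / (E + D), where E are errors
    caught before delivery and D are defects found after delivery."""
    return found_before / (found_before + found_after)

# QA caught 45 errors before release; customers later reported 5 defects.
print(ddp(45, 5))  # 0.9 -> 90% of all known defects were caught pre-delivery
```

The ideal situation is D = 0, giving DDP = 1: every defect is found before the product reaches the customer.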
**Example: Time Between Escaped Defects**
- How often new defects are found in delivered versions of the product
- How would you use this metric?
Related course: IDY0204 "Software Quality and Standards"
**Agenda**
- What is a software metric?
- Examples of software metrics
- Classification of software metrics
**Classification of Software Metrics: by Subject of Measurement**

**Subject: Development Process**
- Measuring the efficiency of process application
  - on the organizational level – strategic purposes
  - on the project level – tactical purposes
- Example metrics:
  - length of a (development) iteration
  - number of changes in requirements
  - number of finished tasks
  - defect detection percentage
**Subject: Resources**
- Measuring the usage of personnel and resources, and their properties
- Example metrics:
  - developer competency
  - developer turnover (fluctuation)
  - developer productivity and know-how in the project
  - maturity of the code written by a developer
**Subject: Product**
- Measuring product attributes: size, complexity, scalability
- Example metrics:
  - LOC, commented lines of code, function points
  - McCabe's cyclomatic complexity
  - code coverage with tests
  - code stability
Classification overview so far: what is measured?

**Classification of Software Metrics: "Lines of Code" vs "Quality"**
**Direct Metrics**
- Directly measurable
- Example metrics:
  - LOC, function points
  - McCabe's cyclomatic complexity
  - number of requirements

**Indirect Metrics**
- Not possible to measure directly; derived from other properties
- Example metrics:
  - code quality, code readability
  - developer productivity, efficiency
  - reliability
Classification overview so far: what is measured? Is it directly measurable?

**Classification of Software Metrics: (In)dependence on the Measurement Context**
**Internal Metrics**
- Measure internal attributes: the measurement context/environment is not relevant
- Example metrics:
  - LOC
  - McCabe's cyclomatic complexity
  - code coverage with tests
**External Metrics**
- Measured with respect to an environment/context
- Example metrics:
  - software reliability
  - developer productivity
  - source code comprehensibility
  - usability
Classification overview: what is measured? Is it directly measurable? Is it context dependent?

(The final slide classifies an example metric along these three dimensions.)
**References**
- G. Ford, Measurement Theory for Software Engineers – http://courses.cs.ut.ee/2010/se/uploads/Main/measurement-theory.pdf
- Wikipedia: Software metrics – http://en.wikipedia.org/wiki/Software_metrics
- C. Lange, Metrics in Software Architecting – http://www.win.tue.nl/~mchaudro/sa2007/Metrics%20Architecting%202005.pdf
- M. Gökmen, Software Process and Project Metrics – http://www3.itu.edu.tr/~gokmen/SE-lecture-2.pdf
- H. Nestra, Metrics, Software Engineering 2005 – http://courses.cs.ut.ee/2005/tvt/uploads/Main/software_engineering_21.pdf
- Lines of code – http://en.wikipedia.org/wiki/Source_lines_of_code
**References II**
- McCabe's cyclomatic complexity:
  - http://en.wikipedia.org/wiki/Cyclomatic_complexity
  - http://www.stsc.hill.af.mil/crosstalk/1994/12/xt94d12b.asp
  - http://www.answers.com/topic/cyclomatic-complexity
- S. Chidamber and C. Kemerer, A Metrics Suite for Object Oriented Design – http://bit.ly/2xY21F
- R. Martin, OO Design Quality Metrics – http://cs.allegheny.edu/wiki/cs290F2004/uploads/123/oodmetrc.pdf
- R. Pressman, Software Engineering: A Practitioner's Approach – http://bit.ly/gCWvm
- More:
  - http://www.laynetworks.com/Software%20Engineering.htm
  - http://www.parlezuml.com/metrics/OO%20Design%20Principles%20&%20Metrics.pdf
  - http://www.parlezuml.com/metrics/Metrics%20Definitions.pdf
**Home Reading**
- David Longstreet, "Function Point Manual"

Thank you for your time and attention!