EMGT 699 Thesis, Part II
December 6, 2012
Western New England University

Framework for a Software Quality Rating System & Comparison with Existing Techniques
Software Quality Engineering is a broad area concerned with various approaches to improving software quality. A quality model proves successful when it satisfies the requirements of both developers and consumers. This research focuses on establishing semantics between existing software quality engineering techniques and thereby designing a framework for rating software quality.
Karthik Murali
Student ID 131629

Under the guidance of
Dr. Julie Drzymalski
Asst. Professor, Dept. of Industrial Engineering & Engineering Management
Western New England University
Table of Contents

1. Introduction
   1.1 Background
   1.2 Scope of the Research
   1.3 Statement of the Problem
2. Analysis Approach
   2.1 Function Point Analysis
   2.2 Analytical Hierarchical Process
3. Methodology's Semantics
   3.1 Observables Matrix
   3.2 Priority Scaling Factor
4. Future Work
5. Conclusion
1. INTRODUCTION

1.1 Background
“Quality is free, but only to those who are willing to pay heavily for it.”
– T. DeMarco & T. Lister
Software Quality Engineering covers a very broad horizon of approaches to improving software quality (Cote, Suryn, & Georgiadou, 2007). A software quality model must be applicable throughout the software development life-cycle; that is, the implementation of quality in a software product is an effort that should be formally managed and monitored throughout the development process (Bourque, Dupuis, Morre, Moore, Tripp, & Wolff, 2002). A successful quality model should be designed in a manner that helps managers, developers, and end-users alike.
Research thus far on delivering, increasing, and rating software quality has been fairly generic, and the frameworks designed therefore share the same components. What software quality should be made up of is not concrete; there are different perspectives on defining quality (Kitchenham & Pfleeger, 1996). The market has become more consumer-oriented due to the ever-growing and changing demands of the end user, which has pressured the industry to deliver top-quality products. Software becomes more appealing and attractive if it is rated. Thus, a software rating system can be useful for making the product value-based while also complying with user specifications.
1.2 Scope of the Research
Software quality models have caught the eye of researchers, and over the past two decades there have been constant attempts to adapt them to the inflexible demands of end users and the developers' varying outlook on quality (Shaw & Clements, 2006). Most of the models that have been designed follow a developer-oriented approach, since the majority of quality characteristics are present by the time the software passes through the design phase, and the maturity of the development process is reflected in the emphasis on testing and other quality activities (Georgiadou, 2003).
If the end user is considered when understanding and translating specifications into an application, then a software model should account for the inclusion of quality characteristics across the whole development lifecycle from the consumer's point of view. Any software of good quality relies on metrics defined for the resources used in its design (Boehm, Brown, & Lipow, 1976). Today, the market allows a product to be dominant only if it is of superior quality and marked with the right price; the quality of a product and its pricing strategy run parallel to one another. A well-defined software quality rating system can therefore also be used as a marketing tool to attract consumers and make the product more appealing.
Several models have been defined and designed to achieve software quality, but the research has been more qualitative than quantitative, concentrating mainly on process metrics. Research indicates that software quality models provide an explicit process for building quality-carrying properties into software, which basically deals with the qualitative attributes of the measurement factors (Dromey, 1994).
One of the most recent publications describes the Quamoco Tool (Deissenboeck, Heinemann, Herrmannsdoerfer, Lochman, & Wagner, 2011), which was developed using Java/Eclipse and performs quality analysis of an application depending upon its type. It still leaves the ground of rating systems open for extensive research, since there is no concrete system for rating a software product's quality. This research, however, focuses on establishing semantics between existing techniques related to software quality and eventually builds a framework for a software quality rating system.
1.3 Statement of the Problem
Software quality is evaluated only on the basis of the metrics used in the application design (Frakes & Terry, 1996). There have been large differences across areas of software quality, since process metrics may not be uniform across all software. Whenever a measure is defined, how and why it has been formulated needs to be clearly expounded (Kearney, Sedlmeyer, Thompson, Gray, & Adler, 1986). To ensure the rating system is balanced, both the developer and the user must be considered, and the resulting software quality index should be understood by the developing organization that releases the product as well as by the consumer who wishes to use it.
As Rosenberg and Hyatt (1995) mention, there are five basic attributes of quality: complexity, efficiency, reusability, testability, and understandability. These uniform factors are given close attention when software is designed.
2. ANALYSIS APPROACH

The following is the proposed idea for the rating system:

Figure 1: The proposed rating system, built from three elements: metrics, criteria, and a scaling system.
This diagram captures the basic ideology behind the software rating system. We need a list of metrics used for constructing the software application, the influence factors (criteria) that affect those metrics, and a weighting system (the scaling system) that helps in assigning weights to, or ranking, the metrics.
There are eight important metrics that must be present in a software development process (Murali & Drzymalski, 2012). These have to be ranked according to priority, and the same needs to be done with the influencing factors, so as to reach a stage where the two techniques considered in this research, Function Point Analysis and the Analytical Hierarchy Process, can be incorporated to form a framework for a rating system model.
2.1 Function Point Analysis
Function Point Analysis is a technique, a unit of measurement, that can be used to express the functionality of a software system (Longstreet, 2005). It is a standard metric used for estimating the size and complexity of a piece of software, and it is widely used in the analysis and design phases of the software development lifecycle.
Function Point Analysis was first developed by Allan J. Albrecht in the mid-1970s (Longstreet, 2005). It was built to act as a mechanism for predicting the effort associated with software development, which helped in tracking the progress and productivity of software projects and thereby made it possible to build quality into the development procedures. The following decade witnessed refinements of the original function point method, and several versions of the function point counting practices were released by the IFPUG (International Function Point Users Group).
Since function points measure systems from a functional perspective, they are independent of technology: regardless of the language, development method, or hardware platform used, the number of function points for a system remains constant. The only variable is the amount of effort needed to deliver a given set of function points. This methodology can, however, be adapted to understand how a set of external influence factors, seen from the user's point of view, affects the quality performance of the software. Function points can also be used to track and monitor scope creep, and they help to compare the requirements, analysis, software structure, and implementation dimensions.
2.2 Analytical Hierarchy Process

The Analytical Hierarchy Process, or AHP, uses a matrix whose rows and columns carry the same parameters.

Category        Complexity   Functionality   Reusability
Complexity
Functionality
Reusability

The above table is an example of an AHP matrix: the rows begin with "Complexity", and so do the columns (the same parameters). The AHP matrix is assigned weights (scores) from 1 to 9, where a higher score implies that the row is more important than the column. The diagonal of the matrix is always allocated a score of 1, and each entry below the diagonal is the reciprocal of the corresponding entry above it.
The Analytical Hierarchy Process can be used to understand which priority factors matter when building software, simultaneously helping the development team deliver a quality product. Since the rating system model also needs the end user's perspective, AHP helps to identify the requirements of the user for building the software quality rating model.
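As an illustration of how the AHP scores can be turned into a priority ranking, the following sketch builds a hypothetical 3x3 pairwise comparison matrix over three of the metrics (the scores are invented for illustration, not taken from this research's example) and approximates the priority vector by normalizing each column and averaging across rows.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for three metrics:
# rows/columns are Complexity, Functionality, Reusability.
# Entry [i][j] > 1 means metric i is more important than metric j;
# entries below the diagonal are the reciprocals of those above it.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the AHP priority vector: normalize each column so it
# sums to 1, then average each row of the normalized matrix.
normalized = A / A.sum(axis=0)
priorities = normalized.mean(axis=1)

for name, p in zip(["Complexity", "Functionality", "Reusability"], priorities):
    print(f"{name}: {p:.3f}")
```

With these invented scores, Complexity receives the highest priority and Reusability the lowest; the priorities always sum to 1, which is what allows them to be read as a percentage ratio scale.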
3. METHODOLOGY’S SEMANTICS

3.1 Observables Matrix

The observables matrix is the first step taken to derive semantics between Function Point Analysis and the Analytical Hierarchy Process, and to reach the base on which a framework for a software quality rating system can be built.
Figure 2: The critical metrics are Complexity, Efficiency, Functionality, Maintainability, Reusability, Security, Testability, and Understandability.
The figure above lists all of the important metrics considered when constructing the observables matrix. There are also ten common influential factors that users expect to shape their software experience, shown below.
Figure 3: The influential factors are Installation Simplicity, Updates, Intuitive, Expected Efficiency, Navigation, Uninstallation, 3rd Party Support, Easy to Troubleshoot, Adherence to Standards, and Error Handling.
Installation simplicity plays an important role in determining software quality, as it shows how well the product has been packaged. The updates factor relates to the post-installation support given by the developer to make the product do more than promised. Intuitiveness concerns creativity: how well the product takes the user's creativity to the next level. Navigation refers to the Graphical User Interface (GUI) design. Software is a good quality product when it can be uninstalled from a system with ease. Third-party support consists of extras offered by companies that did not originally develop the product. The product must be easy to troubleshoot and good at handling any errors, and it should meet the standards devised for a quality software product.
Metrics & Category       Complexity  Efficiency  Functionality  Maintainability  Reusability  Security  Testability  Understandability
Installation Simplicity
Updates
Intuitive
Expected Efficiency
Navigation
Uninstallation
3rd Party Support
Easy to Troubleshoot
Adherence to Standards
Error Handling
The matrix has been drawn to understand the relationship between the metrics involved in the software engineering process and the factors that influence them; the Function Point Analysis method is then adapted to interpret the semantics. The matrix is filled in by end users on an influence scale of 0 to 5, zero being the least and five the maximum influence of a factor on the software product. The total of each column is called the "Degree of Influence", and the summation of all the degrees of influence gives us the "Unadjusted Function Points". The average of each metric column is also computed, to be used later in calculating the "Technical Complexity Factor".
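As a minimal sketch of these computations, the following assumes a reduced observables matrix with made-up user scores (three factors and three metrics instead of the full ten by eight): the column totals give the degrees of influence, their sum gives the UFP, and the column averages give the per-metric means used for the TCF.

```python
# Hypothetical end-user influence ratings (0-5) for each
# (influential factor, metric) pair; a subset for illustration.
metrics = ["Complexity", "Efficiency", "Functionality"]
factors = ["Installation Simplicity", "Updates", "Navigation"]

scores = {
    "Installation Simplicity": [2, 4, 3],
    "Updates":                 [1, 3, 5],
    "Navigation":              [3, 2, 4],
}

# Degree of Influence (DI): total of each metric column.
di = [sum(scores[f][j] for f in factors) for j in range(len(metrics))]

# Unadjusted Function Points (UFP): sum of all degrees of influence.
ufp = sum(di)

# Average degree of influence per metric (used later for the TCF).
mu_di = [d / len(factors) for d in di]

print("DI per metric:", dict(zip(metrics, di)))
print("UFP:", ufp)
print("Mean DI per metric:", dict(zip(metrics, mu_di)))
```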
NOTATIONS

Notation   Meaning
DI         Degree of Influence
UFP        Unadjusted Function Points
TCF        Technical Complexity Factor
FP         Function Points
The initial computations include ∑Di (the sum of the individual degrees of influence for each metric), UFP (the Unadjusted Function Points), and µDi (the average of the degrees of influence for each metric). Next, we compute the TCF for the software, which requires a complexity classification first. Since each metric's degree of influence ranges from 0 to 5, it is classified into three classes.
Complexity Classification

Class     Low    High
Simple    1.00   2.00
Average   2.01   3.00
Complex   3.01   5.00
The rule of thumb for calculating the TCF is given by:

TCF = 0.65 + (∑µDi / 100)   ... (i)
TCF = 1.35 + (∑µDi / 100)   ... (ii)

We use (i) when the majority of the metrics have a low or mediocre level (simple/average) of influence on the software, and (ii) when the majority of the metrics have a significant level of influence (complex) on the software.
Once we know the TCF and the UFP for our software, we can calculate the Function Points:

FP = UFP * TCF

Function Points are a dimensionless number on an arbitrary scale (Symons, 1988). The measure stands on its own and does not depend on any other system of measurement. Function points help us to calculate effort, time, and cost for a software project, and they also help to compute a very important element that indirectly aids in evaluating software quality.
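Putting the formulas together, here is a sketch of the TCF and FP computation; the µDi averages and the UFP value below are hypothetical stand-ins, not the figures from this thesis's example.

```python
# Hypothetical inputs: one average degree of influence (mu_di) per
# metric, and the Unadjusted Function Points from the observables matrix.
mu_di = [2.0, 3.0, 4.0, 1.5, 2.5, 3.5, 2.0, 1.0]  # eight metrics
ufp = 27

# Classify each metric's average influence using the table above:
# simple (1.00-2.00), average (2.01-3.00), complex (3.01-5.00).
complex_count = sum(1 for m in mu_di if m > 3.00)

# Rule of thumb: the 0.65 base applies when most metrics are
# simple/average, the 1.35 base when most are complex.
base = 1.35 if complex_count > len(mu_di) / 2 else 0.65
tcf = base + sum(mu_di) / 100

fp = ufp * tcf
print(f"TCF = {tcf:.3f}, FP = {fp:.2f}")
```

With only two of the eight metrics falling in the complex class, formula (i) applies and the FP value follows directly from FP = UFP * TCF.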
In particular, it helps us calculate the "Reusability" of the modules in the software application. Assume that the function point count for a certain software project is estimated at 250 and the total lines of code at 40,000:

Reusability = Lines of Code / Function Points = 40,000 / 250 = 160

This calculation tells us that each function point corresponds to about 160 lines of code; the number may differ according to the package used for developing the product. This can lead us to a measure of the software's efficiency in executing each module. Considering the function point adaptation alone to evaluate the quality of a software product, however, is dangerous and incomplete (Vickers, 2003).
3.2 Priority Scaling Factor

To support the function point adaptation, AHP is used to understand which metric is of higher importance from the consumer's point of view. From the values used in our example (Appendix 2), we determine the priority of each metric in the software. In Function Point Analysis, the end user is originally assumed to be a sophisticated user (Longstreet, 2005); in the adaptation here, there is no such assumption that the end user is a high-end type. Since the priority of each metric may vary from person to person, AHP is used to understand and generalize the priority ranking.
Figure 4: Percentage ratio scale for the priority ranking of the metrics (Understandability, Security, Efficiency, Functionality, Complexity, Maintainability, Reusability, Testability).
The graph above was drawn after analyzing the AHP in the example considered for this research. We find that "Understandability" is the metric that tops the chart: many users are concerned with how easy the software is to understand. It is followed by "Security", and then the rest of the metrics chalked out for the matrix. Priority scaling helps the development team concentrate on the metrics according to end-user preference, which can drastically increase the productivity and progress of the software development (performance improvement). When the initial phase of development is monitored well, quality is automatically present throughout the development cycle, which justifies the argument given by (Bourque, Dupuis, Morre, Moore, Tripp, & Wolff, 2002).
4. FUTURE WORK
A quality rating system can be designed by incorporating some features from function points and using the Analytical Hierarchy Process. A quality rating system will be successful when it succeeds in the prediction, estimation, and evaluation of the metric elements present in the software product.
Figure 5: Function Point Analysis, the Analytical Hierarchy Process, and Reverse Engineering of Metrics combine to form the Software Quality Rating System.
The introduction of reverse engineering of software metrics can help us reach a concrete framework for a software quality rating system. Reverse engineering breaks big chunks of code down into smaller groups and helps in studying the structure and behavior of the modules used in designing the software application (Tonella & Potrich, 2004). It also studies the functionality of the modules and makes the software more precise by modifying, adding, and tweaking the existing code according to the growing and changing needs of consumers (Pressman, 2000).
5. CONCLUSION

Function Point Analysis, when related to lines of code, gives us the utility per line of code. This sheds light on the "Functionality", "Efficiency", and "Reusability" metrics. The Analytical Hierarchy Process determines which of the considered metrics is most important by scaling their priority.
This research has focused on establishing the semantics between Function Point Analysis and AHP and adapting them, together with reverse engineering of software metrics, to form a framework for rating software quality. Incorporating reverse engineering will help avoid any failure to recognize the importance of each metric in the software, which would eventually elevate the quality of the application. A software quality rating system, when designed, must take into account both the developer and the end user, as both play equally important roles in a software product's success.

"Quality is not a tool – you cannot install it. You need to blend it!"
– Anonymous
12. Petrasch, R. (1999). The Definition of Software Quality: A Practical Approach. FastAbstract ISSRE, 1-2.
13. Dromey, R. (1994). A Model for Software Product Quality. 1-35.
14. Edgren, R., Emilsson, H., & Jansson, M. (n.d.). Software Quality Characteristics. thetesteye.com v1.1.
15. ESA. (1995). Guide to Software Quality Assurance. Status Report, European Space Agency, Paris.
16. Etzkorn, L. H., Hughes Jr., W. E., & Davis, C. G. (2001). Automated Reusability Quality Analysis of OO Legacy Software. Information and Software Technology(43), 295-308.
17. Fenton, N. (1996). Software Metrics for Quality Control and Assurance. Software Quality Research Laboratory, McMaster University.
18. Fitzpatrick, R. (1996). Software Quality: Definitions & Strategic Issues. School of Computing Report, Staffordshire University, Advanced Research Module.
19. Frakes, W., & Terry, C. (1996, June). Software Reuse: Metrics and Models. ACM Computing Surveys, 28(2), 1-21.
20. Galin, D. (2004). Software Quality Assurance: From Theory to Implementation. Harlow, Essex, England: Pearson Addison Wesley.
21. Galin, D., & Patton, R. (n.d.). Introduction to Software Quality Assurance. 1-33.
22. George, B., Fleurquin, R., & Sadou, S. (2006). A Methodological Approach to Choose Components in Development and Evolution Process. Electronic Notes in Theoretical Computer Science(166), 27-46.
23. Georgiadou, E. (2003, January). Software Process and Product Improvement: A Historical Perspective. Cybernetics and Systems Analysis, 39(1), 125-142.
34. Murali, K., & Drzymalski, J. (2012). A Study on the Need for a Software Quality Rating System. Thesis Research [EMGT 698], Western New England University, Industrial Engineering and Engineering Management, Springfield.
35. Naik, K., & Tripathy, P. (2008). Software Testing and Quality Assurance: Theory and Practice (First ed.). John Wiley & Sons.
36. Oligny, S., Bourque, P., Abran, A., & Fournier, B. (2000). Exploring the Relation between Effort and Duration in Software Engineering. Proceedings of the World Computer Congress, 175-178.
37. Parallab. (2004). Software Reusability and Efficiency. University of Bergen (Norway), Bergen Center for Computational Science. Enacts.
38. Pressman, R. S. (2000). Software Engineering: A Practitioner's Approach (Fifth ed.). McGraw-Hill.
39. Punter, T., & Lami, G. (1998). Factors of Software Quality Evaluation. ESCOM-ENCRESS '98, (pp. 1-11).
40. Rommel, C., & Girard, A. (2012). Embedded Software & Tools Practice. VDC Research. Parasoft.
41. Rosenberg, D. H., & Hyatt, L. E. (1997). Software Quality Metrics for Object Oriented Environment. Crosstalk Journal, 1-7.
42. Saaty, T. L. (1980). The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill.
43. Sacha, K. (2005). Evaluation of Software Quality. 1-8. Warszawa, Poland.
44. Shaw, M., & Clements, P. (2006, February). The Golden Age of Software Architecture: A Comprehensive Survey. CMU-ISRI-06-101, 1-14.
45. Sommerville, I. (2010). Software Engineering (Ninth ed.). Pearson.