
A study on quality parameters of software and the metrics



International Journal of Computer Engineering and Technology (IJCET), ISSN 0976 – 6367 (Print), ISSN 0976 – 6375 (Online), Volume 1, Number 1, May - June (2010), pp. 235-249 © IAEME

A STUDY ON QUALITY PARAMETERS OF SOFTWARE AND THE METRICS FOR EVALUATION

J. Emi Retna, Greeshma Varghese, Merlin Soosaiya, Sumy Joseph
Karunya University, Karunya Nagar, Coimbatore
E-Mail: almerah.joseph@gmail.com

ABSTRACT
Software quality is one of the elusive targets to achieve in software development for successful software projects. Software quality activities are conducted throughout the project life cycle to provide objective insight into the maturity and quality of the software processes and associated work products, and are performed during each traditional development phase. There are many parameters, or attributes, that help to ensure the quality of the software. This paper analyses each quality attribute and presents a detailed report on each parameter.

INTRODUCTION
Gone are the days when software quality was thought of as a luxury. Now software quality is one of the elusive targets to achieve in software development, and successful software projects achieve excellence in software quality. Today it is viewed as
an essential parameter of any software delivered. Software quality has several definitions and is viewed from several perspectives. Software built with all the requirements specified by the client, or software that has 0% defects (practically impossible), cannot automatically be deemed of high quality. Quality is not a single parameter but a collection of parameters; it is a multidimensional concept. According to Crosby, quality is defined as "conformance to requirements", which means the extent to which the product conforms to the intent of the design. Quality of design can be regarded as the determination of requirements and specifications, and quality of conformance is conformance to those requirements. Some of the parameters that add up to the quality of software are: capability (functionality), scalability, usability, performance, reliability, maintainability, durability, serviceability, availability, installability, structuredness and efficiency.

There are two types of parameters, namely functional and non-functional parameters. Functional parameters deal with the functionality or functional aspects of the application, while non-functional parameters deal with desirable attributes such as usability and maintainability that a developer usually does not think of at development time. Generally the non-functional parameters are considered only in the maintenance phase, or after the software has been developed, which causes rework or additional effort. Hence it is a best practice to consider quality even in the initial phases of software development and to deliver the software with high quality and on time. Some of the quality parameters are interrelated, as specified in Figure 1 [1].

Figure 1: Interrelationships of software attributes
Capability (Functionality)
  Models and metrics: 1. Function Point
  Features that improve the parameter: 1. Security of the overall system 2. Feature set and capabilities of the program

Usability
  Models and metrics: 1. Questionnaires, testing 2. Error rate 3. ISO 9241-11
  Features that improve the parameter: 1. GUI 2. Component reuse

Performance
  Models and metrics: 1. Execution time 2. Time reduction 3. Idle time reduction 4. Response time 5. Completion rate 6. Throughput 7. Service unit reduction
  Features that improve the parameter: 1. Reduced hit ratio 2. Cache 3. Improved processing power 4. Software engineering practices 5. Design 6. Understanding user requirements 7. Architecture design 8. Meeting user expectations 9. Problem resolution 10. Timeliness

Reliability
  Models and metrics: 1. Exponential distribution models 2. Weibull distribution model 3. Thompson and Chelson's model 4. Jelinski-Moranda model 5. Littlewood model 6. MTTF (Mean Time To Failure) 7. MTTR (Mean Time To Repair) 8. POFOD (Probability of failure on demand) 9. Rate of fault occurrence 10. Reliability = MTBF / (1 + MTBF)
  Features that improve the parameter: 1. Consistency 2. Data integrity 3. Accuracy

Maintainability
  Models and metrics: 1. Models like HPMAS 2. Polynomial assessment tool 3. Principal Components Analysis 4. Aggregate complexity measure 5. Factor analysis
  Features that improve the parameter: 1. Stability

Durability (also includes session durability)
  Models and metrics: 1. Reliability 2. Availability 3. Markov chain model 4. Supplier / component subsystem quality audits 5. PPM defects 6. % right first time 7. Initial quality survey 8. Customer satisfaction 9. Warranty claim rates
  Features that improve the parameter: 1. Data redundancy (replication, erasure coding) 2. Data repair (failure detection, repair)

Serviceability
  Models and metrics: 1. Mean time to repair = unscheduled downtime / number of failures 2. Mean node-hours to repair = unscheduled downtime node-hours / number of failures 3. Mean time to boot system = sum of wall-clock time booting system / number of boot events
  Features that improve the parameter: 1. Help desk notification of exceptional events 2. Network monitoring 3. Documentation 4. Event logging 5. Training 6. Maintenance

Availability
  Models and metrics: 1. System uptime: Availability = MTTF / (MTTF + MTTR) * 100%
  Features that improve the parameter: 1. Work product is operational and available for use

Installability
  Models and metrics: 1. Total installability time 2. Total count of installers 3. % of installers involved only in tuning 4. Time spent on user training 5. Count of database reports written 6. "Custom features" percentage 7. Customer's negotiation strength 8. GQM
  Features that improve the parameter: 1. Effort needed to install the software

Scalability
  Models and metrics: 1. Load scalability 2. Service scalability 3. Data scalability 4. Page/screen response time
  Features that improve the parameter: 1. Application-specific factors 2. Load generation tool related 3. Hardware configuration of the load client 4. Architecture design

Productivity
  Models and metrics: 1. Programmer productivity = LOC produced / person-months of effort 2. Productivity = amount of output / effort input
  Features that improve the parameter: 1. Reduced defect levels 2. Defect prevention and removal technologies 3. Defect removal efficiency

Complexity
  Models and metrics: 1. Cyclomatic complexity (or conditional complexity), V(G) = E - N + 2, where E is the number of flow-graph edges and N is the number of nodes 2. McClure's complexity metric = C + V, where C is the number of comparisons in a module and V is the number of control variables referenced in the module 3. Cohesion STRENGTH, where X is the reciprocal of the number of assignment statements in a module and Y is the number of unique function outputs divided by the number of unique function inputs 4. Coupling
  Features that improve the parameter: 1. Increase in RFC (Request For a Class) 2. Inheritance 3. Module strength 4. Degree of data sharing

Efficiency (of algorithms)
  Models and metrics: 1. Resource utilization 2. Level of performance 3. End-to-end error detection, e.g. performance defects that appear under heavy load
  Features that improve the parameter: 1. Usage of CPU 2. I/O capacity 3. Usage of RAM

Reusability
  Models and metrics: 1. Reuse percent, the de facto standard measure of reuse level = (Reused Software / Total Software) * 100 2. Use of objective metrics on subjective data to obtain reusability readings
  Features that improve the parameter: 1. Software system independence 2. Machine independence 3. Generality 4. Modularity

Portability
  Models and metrics: 1. Portability = 1 - (Resources needed to move the system to the target environment / Resources needed to create the system for the resident environment)
  Features that improve the parameter: 1. Design documentation 2. Code

Testability
  Models and metrics: 1. Effort needed for validating the modified software
  Features that improve the parameter: 1. Related to code 2. Effort required to test a program

Effort/Cost
  Models and metrics: 1. Boehm's COCOMO model 2. Putnam's SLIM model 3. Albrecht's Function Point model
  Features that improve the parameter: 1. Preparation and execution 2. Ease of maintenance 3. Effort on duplicates and invalids

Table 1: Quality parameters with models/metrics
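The cyclomatic complexity entry in Table 1, V(G) = E - N + 2, can be computed directly from a control-flow graph. The sketch below uses a hypothetical flow graph of a function with one if/else branch, not an example taken from the paper:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a single connected control-flow graph.

    edges: list of (src, dst) pairs; nodes are inferred from the edges.
    """
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph of a function with one if/else decision:
# entry -> cond, cond -> then, cond -> else, then -> exit, else -> exit
flow = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
        ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(flow))  # 5 edges - 5 nodes + 2 = 2
```

A straight-line function with no branches gives V(G) = 1; each additional decision point adds one, which is why the metric is also called conditional complexity.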
Generally, if there is a reliability problem, the usability of an application is rated poor. If the application becomes unavailable (the availability factor decreases), then the performance is again rated low. If the performance of the software improves, it implies that factors such as availability are high. Hence all the parameters are interrelated and proportional. Table 1 lists the quality parameters, the key models and their key metrics; the features that enhance each quality parameter are also listed.

1. CAPABILITY (FUNCTIONALITY)
Functionality captures the amount of function contained in a product; often functionality is measured rather than physical code size. The software being developed needs to satisfy all the functional requirements. Functional requirements include almost all the user or business requirements: the "what" the software is being developed for. Once all the functionality is developed, the software becomes deliverable. The software developers, testers and quality practitioners are entrusted to verify that all the functional requirements are covered in the software. If the functionality is developed correctly, defects in the software will be minimal. There are some sub-characteristics that can be derived from the quality features of software [3]:

Figure 2: Functionality

Suitability: attributes of software that bear on the presence and appropriateness of a set of functions for specified tasks.
Accurateness: attributes of software that bear on the provision of right or agreed results or effects.
Interoperability: attributes of software that bear on its ability to interact with specified systems.
Compliance: attributes of software that make it adhere to application-related standards, conventions or regulations in laws and similar prescriptions.
Security: attributes of software that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs or data.

The most widely used metric is the Function Point. Function points are a measure of software size, functionality and complexity, used as a basis for software cost estimation [3]. The procedure to calculate the function point is as follows:

Step 1: Determine the unadjusted function point count (UFP).
1. Count the number of external inputs. External inputs are those items provided by the user (e.g. file names and menu selections).
2. Count the number of external outputs. External outputs are those items provided to the user (e.g. reports and messages).
3. Count the number of external inquiries. External inquiries are interactive inputs that need a response.
4. Count the number of internal logical files.
5. Count the number of external interface files.

Table 2: Determination of UFP
Item                        Simple   Average   Complex
External inputs                3        4         6
External outputs               4        5         7
External inquiries             3        4         6
Internal logical files         7       10        15
External interface files       5        7        10

UFP = sum over all item varieties i of (number of items of variety i) * (weight of variety i)
VAF = 0.65 + (0.01 * TDI)
FP = UFP * VAF
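The function point procedure above can be sketched as follows. This is a minimal illustration assuming the Table 2 weights; the item counts and the degrees of influence are made-up values, not data from the paper:

```python
# Weights per complexity level (simple, average, complex), as in Table 2.
WEIGHTS = {
    "external_inputs": (3, 4, 6),
    "external_outputs": (4, 5, 7),
    "external_inquiries": (3, 4, 6),
    "internal_logical_files": (7, 10, 15),
    "external_interface_files": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, degrees_of_influence):
    """counts: {item_type: {complexity_level: number_of_items}};
    degrees_of_influence: the 14 GSC ratings of Table 3, each 0..5."""
    ufp = sum(WEIGHTS[item][LEVEL[lvl]] * n
              for item, by_level in counts.items()
              for lvl, n in by_level.items())
    tdi = sum(degrees_of_influence)   # Total Degree of Influence
    vaf = 0.65 + 0.01 * tdi           # Value Adjustment Factor
    return ufp * vaf                  # FP = UFP * VAF

# Illustrative (made-up) counts for a small application:
counts = {
    "external_inputs": {"simple": 10},
    "external_outputs": {"average": 5},
    "external_inquiries": {"simple": 4},
    "internal_logical_files": {"average": 2},
    "external_interface_files": {"simple": 1},
}
# UFP = 30 + 25 + 12 + 20 + 5 = 92; TDI = 42, so VAF = 1.07.
print(round(function_points(counts, [3] * 14), 2))  # 98.44
```

Note that VAF ranges from 0.65 (all influences rated 0) to 1.35 (all rated 5), so the adjustment can move the raw UFP by at most 35% in either direction.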
Table 3: The general system characteristics

1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2. Distributed data processing: How are distributed data and processing functions handled?
3. Performance: Was response time or throughput required by the user?
4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
5. Transaction rate: How frequently are transactions executed (daily, weekly, monthly, etc.)?
6. On-line data entry: What percentage of the information is entered on-line?
7. End-user efficiency: Was the application designed for end-user efficiency?
8. On-line update: How many ILFs are updated by on-line transactions?
9. Complex processing: Does the application have extensive logical or mathematical processing?
10. Reusability: Was the application developed to meet one or many users' needs?
11. Installation ease: How difficult is conversion and installation?
12. Operational ease: How effective and/or automated are start-up, back-up and recovery procedures?
13. Multiple sites: Was the application specifically designed, developed and supported to be installed at multiple sites for multiple organizations?
14. Facilitate change: Was the application specifically designed, developed and supported to facilitate change?
Step 2: Determine the value adjustment factor (VAF).
VAF is based on the Total Degree of Influence (TDI) of the 14 general system characteristics: TDI = sum of the degrees of influence of the 14 general system characteristics (GSC) listed in Table 3.

II. USABILITY
Boehm defined software usability as the extent to which the product is convenient and practical to use. Software usability is considered a combination of understandability, learnability, operability and, finally, the attractiveness of the product to the final user or user group [Pressman, 1999]. How usable the software is determines its longevity. Usability evaluation yields better results only if there is a good usability design; the evaluation can be user-based or expert-based. Usability depends on parameters such as ease of use, comfort level and simplicity. Functional software without the usability factor will be least desired. If the user interface is designed with user-friendliness in mind, it adds to usability. Figure 3 depicts the consolidated usability model by Abran; as in the figure, the usability factor increases only if an application is effective and efficient, satisfies user expectations, is easy to use and is highly secure.

Figure 3: Usability

Some of the methods adopted to measure usability are:
a. How satisfied the user is
b. Whether the user is able to perform the intended task (task success)
c. The problems encountered when using the software (failure)
d. The time taken to complete the task (task completion rates, task time, satisfaction and error counts)
e. Ease of use
f. Download delay
g. Navigability
h. Interactivity
i. Responsiveness
j. Overall quality

To improve usability it is important to improve page reduction, so that the page is informative and the information needs of the user are satisfied. Website download delay also affects usability.

Table 4: Usability design factors

III. PERFORMANCE
There are many factors that affect the performance of software, such as communication failure, poor bandwidth, and hardware or component failure. The parameters for performance evaluation are:
1. Execution time
2. Service unit reduction
3. Idle time reduction
4. Number of tasks completed

A high-performance application can be designed using various technologies, such as:
1. Caching
2. Multiple servers running on different machines
3. Thread management
4. Improved page design

The selection of tools, technology/platform, design, knowledge base, etc. plays a vital role in the performance of an application, and appropriate models need to be chosen at an early stage of software development. For example, consider website performance analysis. The performance of a website is analyzed from a customer's perspective. When users are only interested in checking their mail, they may log into any website that provides mailing services. The leading e-mail providers have their own built-in design and functionality. Some e-mail service providers direct the user to the mailbox on login (category 1), while others present an informative or somewhat junk page on login rather than going directly to the user's mailbox (category 2). Category 1 therefore provides better performance than category 2: when the user's goal is to check mail, timeliness is met by category 1 and not by category 2. The responsiveness and performance of websites can be measured using available tools such as ECperf.

Figure 4: Reliability

Maturity: attributes of software that bear on the frequency of failure by faults in the software.
Fault tolerance: attributes of software that bear on its ability to maintain a specified level of performance in case of software faults or of infringement of its specified interface.
Recoverability: attributes of software that bear on the capability to re-establish its level of performance and recover the data directly affected in case of a failure, and on the time and effort needed for it.

Probability of failure on demand (POFOD) is a measure of the likelihood that the system will fail when a service request is made; it is relevant for safety-critical or non-stop systems. Rate of fault occurrence is relevant for operating systems, transaction processing systems, etc., and considers the frequency of unexpected behavior. Mean time to failure is the time between observed failures. Reliability is the capability of the software product to maintain a specified level of performance when used under specified conditions [9]. Reliability measures include three components:
1) Measuring the number of system failures for a given number of system inputs
2) Measuring the time, or the number of transactions, between system failures
3) Measuring the time to restart after failure

IV. MATHEMATICAL CONCEPTS OF RELIABILITY
Mathematically, reliability R(t) is the probability that a system will be successful in the interval from time 0 to time t [6]:

R(t) = P(T > t), t >= 0,

where T is a random variable denoting the time to failure. Unreliability F(t), a measure of failure, is defined as the probability that the system will fail by time t:

F(t) = P(T <= t), t >= 0.

In other words, F(t) is the failure distribution function. In general, reliability is related to the failure probability by:

R(t) = 1 - F(t).

V. MAINTAINABILITY
Maintainability is the capability of the software product to be modified. Modifications may include corrections, improvements or adaptation of the software to changes in the environment, in the requirements and in the functional specifications [7].
Proper documentation needs to be maintained for maintaining software; the documentation needs to be complete and kept up to date.
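The relations in section IV can be sketched numerically. The sketch below assumes an exponential failure-time distribution (one of the model families named in Table 1) with a hypothetical failure rate; it is an illustration, not a model fitted to any data in the paper:

```python
import math

# Exponential failure-time model: F(t) = 1 - exp(-lam * t),
# hence R(t) = 1 - F(t) = exp(-lam * t) and MTTF = 1 / lam.
def unreliability(lam, t):
    """F(t) = P(T <= t): probability the system has failed by time t."""
    return 1.0 - math.exp(-lam * t)

def reliability(lam, t):
    """R(t) = 1 - F(t): probability of surviving past time t."""
    return 1.0 - unreliability(lam, t)

lam = 0.001                 # hypothetical failure rate, failures per hour
mttf = 1.0 / lam            # 1000 hours mean time to failure
print(round(reliability(lam, 100), 4))  # 0.9048
```

So a system with a 1000-hour MTTF under this model still has roughly a 90% chance of running 100 hours without failure, which is why R(t) and F(t) are reported as functions of t rather than as single numbers.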
Software maintainability assessment can be conducted at various levels of granularity. At the component level, models can be used to monitor changes to the system as they occur and to predict faults in software components. At the file level, models can be used to identify subsystems that are not well organized. Models like HPMAS [5], a hierarchical multidimensional assessment model, can be used. The important sub-attributes of maintainability are:

Analyzability: attributes of software that bear on the effort needed for diagnosis of deficiencies or causes of failures, or for identification of parts to be modified.
Changeability: attributes of software that bear on the effort needed for modification, fault removal or environmental change.
Stability: attributes of software that bear on the risk of unexpected effects of modifications.
Testability: attributes of software that bear on the effort needed for validating the modified software.

Figure 5: Maintainability

VI. DURABILITY
Software usability efforts improve software durability. Software durability can be data durability or session durability; session durability is generally of short duration. Technologies like data replication and data repair can be used to enhance durability. Reliability and availability can be considered metrics for data durability.

VII. SERVICEABILITY
Serviceability is the ability to offer the promised services of the software or application. With software it deals with the support offered in terms of user manuals, technical help, problem resolution, etc. As new versions of the software get released, the
support for the older versions vanishes. Incorporating serviceability-facilitating features typically results in more efficient product maintenance, reduced operational cost, etc. The maintainability feature can be built into the system: systems can be built with features that automatically send mails or log a service call on experiencing a fault.

VIII. AVAILABILITY
Availability is the measure of how likely the system is to be available for use. It takes the repair or restart time into account. Reliability and availability go hand in hand. Software needs to be available even as the load increases. An availability of 0.997 means the software is available 997 out of 1000 time units.

IX. INSTALLABILITY
Software needs to be easily installable. The parameters that need to be set to install the software need to be simple, and the software needs to be configurable.

X. SCALABILITY
The software needs to be highly scalable in all environments. Scalability can be in terms of load, service, data, etc. A system, in order to be highly scalable, needs the ability to handle large transaction volumes and load. As the load increases in production, a desirable system will be able to scale up with additional hardware resources, either vertically or horizontally. Vertical scaling is when, in order to support an increase in load, an individual server is given more memory or processing power. Horizontal scaling is when increased load is handled by adding servers to a distributed system.

XI. COMPLEXITY
Low coupling is often a sign of a well-structured system and good design [5]; high levels of coupling are usually associated with lower productivity. Complexity can be of two types.
a) Apparent complexity is the degree to which a system or component has a design or implementation that is difficult to understand and verify.
b) Inherent complexity is the degree of complication of a system or system component, determined by factors such as the number and intricacy of interfaces [4], the number and intricacy of conditional branches, the degree of nesting, and the types of data structures.

Metrics used for measuring the qualities of software
Some software metrics include:
 • Total lines of code
 • Number of characters
 • Number of comments
 • Number of comment characters
 • Code characters
 • Halstead's estimate of program length
 • Jensen's estimate of program length
 • Cyclomatic complexity

CONCLUSION
Good engineering methods can largely improve software reliability. Before the deployment of software products, testing, verification and validation are necessary steps. Software testing is heavily used to trigger, locate and remove software defects. Software testing is still in its infancy; testing is crafted to suit specific needs of various software development projects in an ad hoc manner. Various analysis techniques, such as trend analysis, fault-tree analysis, orthogonal defect classification and formal methods, can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability. After deployment of the software product, field data can be gathered and analyzed to study the behavior of software defects. Fault tolerance and fault/failure forecasting techniques are helpful for minimizing fault occurrence or the impact of a fault on the system.

REFERENCES:
[1] Stephen H. Kan, Metrics and Models in Software Quality Engineering, Second Edition.
[2] N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, Second Edition.
[3]
[4] Nina S. Godbole, Software Quality Assurance: Principles and Practice.
[5] John D. Musa, Software Reliability Engineering.
[6] Roger S. Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill, Inc., 1992.
[7] International Standard ISO/IEC 9126-1, Parts 1, 2, 3: Quality Model, 2001.
[8] International Standard ISO/IEC 9126-1, Parts 1, 2, 3: Quality Model, 2001.