MEASURING CODE QUALITY USING
SOFTWARE METRICS – TO IMPROVE THE
EFFICIENCY OF SPECIFICATION MINING
Guided By
Ms. P. R. Piriyankaa, M.E.
Assistant Professor
Presented By,
M. Geethanjali (M.E.),
Sri Krishna College of Engineering and Technology.
INTRODUCTION
 Incorrect and buggy software costs up to
$70 billion each year in the US.
 Formal specifications support testing,
optimization, refactoring, documentation,
debugging and repair.
 False positive – a vulnerability is reported
that is not actually present.
PROBLEM STATEMENT
 Software maintenance consumes up to 90%
of the total project cost and 60% of the
maintenance time.
 Formal specifications are necessary, but
they are difficult for programmers to write
manually.
 Existing automatic specification mining
produces high false positive rates.
EXISTING SYSTEM
 A formal specification is written for each piece
of software, and the quality of the code is checked.
 A set of software metrics is used to measure the
quality of the software:
 General Quality Metrics
 Chidamber and Kemerer Metrics.
 These software metrics are used to measure the
quality of the code.
EXISTING SYSTEM CONT...
 The quality of the code is improved using the
results obtained.
 Prediction compares the obtained results
against randomly generated learned data
items.
 An automatic specification miner balances
true and false positive specifications.
 True positive – required behaviour.
 False positive – non-required behaviour.
DISADVANTAGES
 The false positive rate is reduced only from
90% to an average of 30%.
 The accuracy of the software is only 80%.
 The computation is slow.
PROPOSED SYSTEM
 Classification is based on the Support
Vector Machine algorithm.
 The measured attributes of the software are
compared with the training dataset.
 The accuracy of the software is calculated.
 The false positive rate for the specific
software is also found.
ADVANTAGES
 Reduces the burden of manually inspecting the
code.
 Knowing the quality of the code before
deployment lets developers improve it
easily.
 The accuracy of the software is about 95%.
 Minimises the false positive rate from 90% to
5%.
BLOCK DIAGRAM
LIST OF MODULES
 General code quality metrics.
 Code quality using complexity (CK) metrics.
 Implementation of the mining algorithm – Naive
Bayes.
 Implementation of the mining algorithm – Support
Vector Machine.
 Finding the false positive rates using the learning
model.
GENERAL QUALITY METRICS
 The quality of the software is measured using
the following metrics:
 Code Churns
 Code clones
 Author Rank
 Code Readability
 Path Frequency
 Path Density
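As a rough illustration of how two of the listed metrics might be computed, here is a minimal sketch. The formulas and names are illustrative assumptions, not the exact definitions used in this work: churn is taken as the fraction of lines touched between two revisions, and path density as the share of paths exercised.

```python
# Hedged sketch: toy calculations for two of the listed metrics.
# The formulas below are simplifying assumptions for illustration.

def code_churn(added, deleted, total_lines):
    """Churn as the fraction of lines touched between two revisions."""
    return (added + deleted) / max(total_lines, 1)

def path_density(executed_paths, total_paths):
    """Share of program paths exercised, a rough path-density proxy."""
    return executed_paths / max(total_paths, 1)

print(code_churn(added=30, deleted=20, total_lines=500))   # 0.1
print(path_density(executed_paths=8, total_paths=40))      # 0.2
```

In practice the inputs would come from version-control history and path profiling rather than hand-entered numbers.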
CHIDAMBER & KEMERER METRICS
 These are also known as object-oriented
metrics:
 Weighted Methods per class (WMC)
 Depth of Inheritance (DIT)
 Number of children (NOC)
 Coupling between Objects (CBO)
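Two of these metrics can be approximated for Python source with the standard-library `ast` module, as in the sketch below. WMC is simplified here to a plain method count (each method weighted 1), which is one common convention and an assumption on my part; the sample classes are invented.

```python
import ast

# Hedged sketch: approximating WMC (as a simple method count) and
# NOC (number of direct subclasses) for Python code. The sample
# source and the weighting convention are illustrative assumptions.

SRC = """
class Base:
    def a(self): pass
    def b(self): pass

class Child(Base):
    def c(self): pass
"""

tree = ast.parse(SRC)
classes = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]

# WMC: number of methods defined directly in each class body.
wmc = {c.name: sum(isinstance(n, ast.FunctionDef) for n in c.body)
       for c in classes}

# NOC: how many other classes name this class as a base.
noc = {c.name: sum(any(isinstance(b, ast.Name) and b.id == c.name
                       for b in other.bases)
                   for other in classes)
       for c in classes}

print(wmc)  # {'Base': 2, 'Child': 1}
print(noc)  # {'Base': 1, 'Child': 0}
```

DIT and CBO would need the full inheritance and reference graphs, so they are omitted from this toy version.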
PREDICTION ANALYSIS
 The dataset contains the randomly generated
learned data items.
 The Naive Bayes algorithm is used for
classification.
 The measured result of the software is compared
against the dataset.
 The predicted result for the selected software
is displayed.
 From this result, the quality of the code can be
determined.
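The prediction step above can be sketched as a minimal Gaussian Naive Bayes classifier over metric vectors. This assumes the learned data items are feature vectors (e.g. churn and readability) labelled with a quality class; the toy data and class names below are invented for illustration.

```python
import math

# Hedged sketch: a tiny Gaussian Naive Bayes classifier.
# Features and labels are invented stand-ins for the learned dataset.

def fit(X, y):
    """Per-class feature means/variances plus class priors."""
    model = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        vars_ = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[label] = (means, vars_, len(rows) / len(X))
    return model

def predict(model, x):
    """Pick the class with the highest Gaussian log posterior."""
    best, best_lp = None, -math.inf
    for label, (means, vars_, prior) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Features: (code churn, readability); labels: quality class.
X = [(0.1, 0.9), (0.2, 0.8), (0.7, 0.3), (0.8, 0.2)]
y = ["good", "good", "bad", "bad"]
model = fit(X, y)
print(predict(model, (0.15, 0.85)))  # good
print(predict(model, (0.75, 0.25)))  # bad
```

A real implementation would train on the measured metric values of many programs rather than four hand-made rows.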
PREDICTION USING SVM
 The measured attributes are compared with
the learned dataset.
 The accuracy for the selected software is
displayed.
 The false positive rate is obtained.
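The two numbers this slide reports, accuracy and false positive rate, are computed from the classifier's confusion counts. The sketch below shows that calculation on invented labels standing in for the SVM's predictions on a test set.

```python
# Hedged sketch: accuracy and false positive rate from predicted vs.
# true labels. The label lists are invented illustrative data.

def accuracy_and_fpr(y_true, y_pred, positive="buggy"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    fpr = fp / max(fp + tn, 1)  # false positives over all actual negatives
    return acc, fpr

y_true = ["buggy", "clean", "clean", "buggy", "clean"]
y_pred = ["buggy", "clean", "buggy", "buggy", "clean"]
acc, fpr = accuracy_and_fpr(y_true, y_pred)
print(acc, fpr)  # 0.8 and one false positive out of three clean items
```

The same bookkeeping applies whatever classifier produced `y_pred`; only the predictions change between the Naive Bayes and SVM variants.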
GENERAL CODE QUALITY METRICS
CODE QUALITY OF CK METRICS
PREDICTION ANALYSIS
FALSE POSITIVES & ACCURACY USING SVM
COMPARISON OF ACCURACY
COMPARISON OF FALSE POSITIVE RATE
CONCLUSION
 Since the quality of the code is checked before
the software is deployed, the quality of the
software is assured.
 The cost spent on maintenance is also
reduced.
 Compared to other automatic miners, the false
positive rate is reduced to a negligible value.
REFERENCES
 Measuring Code Quality to Improve
Specification Mining – Claire Le Goues.
 A Study of Consistent and Inconsistent Changes to
Code Clones – Jens Krinke.
 Who Are Source Code Contributors and How
Do They Change? – Massimiliano Di Penta.
 The Road Not Taken: Estimating Path
Execution Frequency Statically – Raymond
P. L. Buse.
THANK YOU!!!
