Measuring the Code Quality Using Software Metrics

  1. MEASURING THE CODE QUALITY USING SOFTWARE METRICS – TO IMPROVE THE EFFICIENCY OF SPECIFICATION MINING. Guided by Ms. P. R. Piriyankaa, M.E., Assistant Professor. Presented by M. Geethanjali (M.E.), Sri Krishna College of Engineering and Technology.
  2. INTRODUCTION • Incorrect and buggy software costs up to $70 billion each year in the US. • Formal specifications underpin testing, optimization, refactoring, documentation, debugging, and repair. • False positives – cases where we believe a vulnerability is present when it actually is not.
  3. PROBLEM STATEMENT • Software maintenance consumes up to 90% of the total project cost and 60% of the maintenance time. • Formal specifications are necessary, but they are difficult for programmers to write manually. • Existing automatic specification miners produce high false positive rates.
  4. EXISTING SYSTEM • A formal specification is written for each piece of software, and the quality of the code is checked. • A set of software metrics is used to measure the quality of the code: • General quality metrics • Chidamber and Kemerer (CK) metrics.
  5. EXISTING SYSTEM CONT... • The quality of the code is improved using the results obtained. • Prediction compares the obtained results with randomly generated learned data items. • An automatic specification miner balances true and false positive specifications. • True positives – required behaviour. • False positives – non-required behaviour.
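The true/false positive trade-off above can be expressed with the standard confusion-matrix rates. A minimal sketch (the function names and counts below are illustrative, not from the project):

```python
def false_positive_rate(fp, tn):
    """Fraction of non-required behaviours wrongly mined as specifications."""
    return fp / (fp + tn) if (fp + tn) else 0.0

def true_positive_rate(tp, fn):
    """Fraction of required behaviours the miner actually recovers."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Illustrative counts: 9 spurious candidates out of 10 non-required
# behaviours gives the 90% false positive rate cited in the slides.
print(false_positive_rate(fp=9, tn=1))  # 0.9
```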
  6. DISADVANTAGES • The false positive rates are reduced only from 90% to an average of 30%. • The accuracy of the software is only 80%. • The computation is slow.
  7. PROPOSED SYSTEM
  8. PROPOSED SYSTEM • Classification is based on the Support Vector Machine (SVM) algorithm. • The measured attributes of the software are compared with the training dataset. • The accuracy for the software is calculated. • The false positive rate for the specific software is also found.
  9. ADVANTAGES • Reduces the burden of manual inspection of the code. • Knowing the quality of the code before deployment lets developers improve it easily. • The accuracy of the software is about 95%. • Reduces the false positive rate from 90% to 5%.
  10. BLOCK DIAGRAM
  11. LIST OF MODULES • General code quality metrics. • Code quality using complexity metrics. • Implementation of the mining algorithm – Naive Bayes. • Implementation of the mining algorithm – Support Vector Machine. • Finding the false positive rates using the learned model.
  12. GENERAL QUALITY METRICS • The quality of the software is measured using the following metrics: • Code churn • Code clones • Author rank • Code readability • Path frequency • Path density
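Two of the metrics above can be sketched in a few lines of Python. This is an illustrative approximation only: churn is taken as lines added plus lines removed between two revisions, and clones are flagged by hashing identical lines, which is far cruder than real clone detection:

```python
import hashlib

def code_churn(old_lines, new_lines):
    """Churn as lines added plus lines removed between two revisions."""
    old, new = set(old_lines), set(new_lines)
    return len(new - old) + len(old - new)

def clone_groups(lines, min_len=1):
    """Group identical (hash-equal) source lines as a crude clone signal."""
    buckets = {}
    for i, line in enumerate(lines):
        stripped = line.strip()
        if len(stripped) >= min_len:
            key = hashlib.md5(stripped.encode()).hexdigest()
            buckets.setdefault(key, []).append(i)
    return [idxs for idxs in buckets.values() if len(idxs) > 1]
```

For example, changing one line between revisions yields a churn of 2 (one line removed, one added), and any line that appears verbatim more than once is reported as a clone group.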
  13. CHIDAMBER & KEMERER METRICS • These are also known as object-oriented metrics: • Weighted Methods per Class (WMC) • Depth of Inheritance Tree (DIT) • Number of Children (NOC) • Coupling Between Objects (CBO)
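As an illustration, three of the CK metrics can be approximated for a single Python module using the standard `ast` module. Simplifying assumptions, not the project's implementation: WMC is an unweighted method count, DIT is computed only within the one file, and CBO is omitted for brevity:

```python
import ast

def ck_metrics(source):
    """Approximate WMC, DIT and NOC for classes in one Python module."""
    tree = ast.parse(source)
    classes = {n.name: n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}
    # Map each class to its first named base (single-inheritance approximation).
    parent = {}
    for name, node in classes.items():
        bases = [b.id for b in node.bases if isinstance(b, ast.Name)]
        parent[name] = bases[0] if bases else None

    def depth(name):
        d = 1
        while parent.get(name) in classes:
            name = parent[name]
            d += 1
        return d

    metrics = {}
    for name, node in classes.items():
        wmc = sum(isinstance(x, (ast.FunctionDef, ast.AsyncFunctionDef))
                  for x in node.body)                     # unweighted WMC
        noc = sum(1 for p in parent.values() if p == name)  # direct children
        metrics[name] = {"WMC": wmc, "DIT": depth(name), "NOC": noc}
    return metrics
```

A class with two methods and one subclass would report WMC = 2, NOC = 1, and the subclass DIT = 2.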
  14. PREDICTION ANALYSIS • The dataset contains randomly generated learned data items. • The Naive Bayes algorithm is used. • The measured result for the software is compared with the dataset. • The predicted result for the selected software is displayed. • From this result the quality of the code can be determined.
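The Naive Bayes step can be sketched as a small Gaussian Naive Bayes classifier in pure Python. The metric vectors and "good"/"poor" labels below are illustrative; a real system would use a library implementation:

```python
import math
from collections import defaultdict

def fit_gnb(samples, labels):
    """Per-class prior and per-feature mean/variance (Gaussian Naive Bayes)."""
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    model, n = {}, len(samples)
    for y, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        var = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
               for col, m in zip(zip(*rows), means)]   # smoothed variance
        model[y] = (math.log(len(rows) / n), means, var)
    return model

def predict_gnb(model, x):
    """Class with the highest log posterior under the Gaussian assumption."""
    best, best_lp = None, -math.inf
    for y, (prior, means, var) in model.items():
        lp = prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, var))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Training on measured metric vectors labelled by known quality lets a new software's metric vector be assigned the most probable quality class.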
  15. PREDICTION USING SVM • The measured attributes are compared with the learned dataset. • The accuracy for the selected software is displayed. • The false positive rates are obtained.
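A linear SVM can be sketched with Pegasos-style sub-gradient descent on the hinge loss. This is a simplified pure-Python illustration (no kernel, no projection step, bias folded in as a constant 1 appended to each sample); the project would presumably use a library SVM on its metric vectors:

```python
import random

def train_linear_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Primal linear SVM via Pegasos-style sub-gradient descent.
    labels must be +1 / -1; each sample should end with a constant 1
    so the last weight acts as the bias."""
    rng = random.Random(seed)
    w = [0.0] * len(samples[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(samples)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            x, y = samples[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            if margin < 1:                 # hinge loss is active
                w = [(1 - eta * lam) * wj + eta * y * xj
                     for wj, xj in zip(w, x)]
            else:                          # only the regularizer shrinks w
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def svm_predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

Comparing the predicted labels against the known labels of a held-out set then yields the accuracy and false positive rate reported on this slide.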
  16. GENERAL CODE QUALITY METRICS
  17. CODE QUALITY OF CK METRICS
  18. PREDICTION ANALYSIS
  19. FALSE POSITIVES & ACCURACY USING SVM
  20. COMPARISON OF ACCURACY
  21. COMPARISON OF FALSE POSITIVE RATE
  22. CONCLUSION • Since the quality of the code is checked before deploying the software, the quality of the software is assured. • The cost spent on maintenance is also reduced. • Compared with other automatic miners, the false positive rate is reduced to a negligible value.
  23. REFERENCES • Measuring Code Quality to Improve Specification Mining – Claire Le Goues. • A Study of Consistent and Inconsistent Changes to Code Clones – Jens Krinke. • Who Are Source Code Contributors and How Do They Change? – Massimiliano Di Penta. • The Road Not Taken: Estimating Path Execution Frequency Statically – Raymond P. L. Buse.
  24. THANK YOU!
