Improving Analogy Software Effort Estimation using Fuzzy Feature Subset Selection Algorithm


    1. IMPROVING ANALOGY SOFTWARE EFFORT ESTIMATION USING FUZZY FEATURE SUBSET SELECTION ALGORITHM
       Mohammad Azzeh [email_address], Dr. Daniel Neagu [email_address], Prof. Peter Cowling [email_address]
       Computing Department, School of Informatics, University of Bradford
       PROMISE'08
    2. Agenda
       - Motivation
       - The Problem
       - The Proposed Solution
       - Results
       - Conclusions
    3. Motivation
       - Estimation by Analogy (EA) is a common technique for software cost estimation.
         - Users are often willing to accept this kind of estimate because it mimics human problem solving.
         - It is based on actual project data and experience.
         - It can model complex relationships.
    4. The Problem
       - Data quality is an important issue in analogy-based software estimation, as it is a precondition for obtaining quality knowledge and accurate estimates.
       - Typically, data sets are not collected with a particular prediction task in mind [Kirsopp & Shepperd, 2002].
       - This estimation approach is sensitive to:
         - incomplete and noisy data,
         - irrelevant and misleading features.
    5. Feature Subset Selection (FSS)
       - Benefits of FSS:
         - reducing training and utilization time,
         - improving data understanding and visualization,
         - reducing data dimensionality.
       - Dataset D(K x M) -> FSS -> Dataset D(K x N), where N < M.
    6. FSS in software estimation
       - Existing search techniques:
         - Wrappers: use machine learning algorithms [Kirsopp & Shepperd, 2002]:
           - exhaustive search,
           - random search,
           - hill climbing,
           - forward selection,
           - backward selection.
         - Filters: use statistical approaches [Briand et al., 2000].
       - The fitness criterion of wrappers is often MMRE (a hedged sketch of a wrapper-style forward selection follows this slide).
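    For illustration, a wrapper-style forward selection with MMRE fitness might look like the sketch below. The one-nearest-neighbour analogy predictor, the leave-one-out evaluation and all helper names are assumptions introduced here, not the exact setup of the cited studies.

    # Illustrative sketch (not the cited authors' code): a wrapper-style forward
    # feature selection using the MMRE of a simple analogy model as fitness.
    import numpy as np

    def mmre(actual, predicted):
        """Mean Magnitude of Relative Error."""
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.mean(np.abs(actual - predicted) / actual)

    def analogy_predict(train_X, train_y, test_X, k=1):
        """Predict effort as the mean effort of the k closest (Euclidean) projects."""
        preds = []
        for x in test_X:
            d = np.linalg.norm(train_X - x, axis=1)
            nearest = np.argsort(d)[:k]
            preds.append(train_y[nearest].mean())
        return np.array(preds)

    def forward_selection(X, y, max_features=None):
        """Greedy wrapper: repeatedly add the feature that most reduces leave-one-out MMRE."""
        n_features = X.shape[1]
        selected, best_score = [], np.inf
        max_features = max_features or n_features
        while len(selected) < max_features:
            scores = {}
            for f in range(n_features):
                if f in selected:
                    continue
                cols = selected + [f]
                # leave-one-out evaluation on the candidate subset
                preds = [analogy_predict(np.delete(X[:, cols], i, axis=0),
                                         np.delete(y, i),
                                         X[i:i + 1, cols])[0]
                         for i in range(len(y))]
                scores[f] = mmre(y, preds)
            f_best = min(scores, key=scores.get)
            if scores[f_best] >= best_score:
                break  # no candidate improves the fitness any further
            selected.append(f_best)
            best_score = scores[f_best]
        return selected, best_score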
    7. The Proposed Solution
       - The algorithm is designed based on Fuzzy c-Means and fuzzy logic.
       - The optimal feature subset is selected based on the similarity between fuzzy clusters.
       - The feature subset that presents a smaller similarity degree has the potential to deliver accurate estimates:
         - it reflects the data structure,
         - it combines both numerical and categorical data.
    8. The Proposed Solution
       Data set with M features -> Fuzzy c-Means -> partition matrix and cluster centres -> select a feature subset with N dimensions (a minimal Fuzzy c-Means sketch follows this slide).
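    A minimal NumPy sketch of Fuzzy c-Means, assuming the usual fuzzifier m = 2 and a random initial partition; it returns the cluster centres and the partition matrix U referenced on this slide. This is an illustration, not the authors' implementation.

    # Minimal Fuzzy c-Means sketch: produces the partition matrix U and the
    # cluster centres used by the feature-subset evaluation in later slides.
    import numpy as np

    def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=300, tol=1e-5, seed=0):
        """Return (centres, U) where U[i, j] is the membership of point i in cluster j."""
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        U = rng.random((n, n_clusters))
        U /= U.sum(axis=1, keepdims=True)            # memberships of each point sum to 1
        for _ in range(max_iter):
            Um = U ** m
            centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
            # distance from every point to every centre
            d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
            d = np.fmax(d, 1e-10)                    # avoid division by zero
            inv = d ** (-2.0 / (m - 1))
            U_new = inv / inv.sum(axis=1, keepdims=True)
            if np.linalg.norm(U_new - U) < tol:
                U = U_new
                break
            U = U_new
        return centres, U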
    9. The Proposed Solution
       - Definition 1. Similarity between two clusters for given m-dimensional features, where a normalized weighting factor represents the importance of some features relative to others.
       - Definition 2. Overall similarity between all clusters in a feature subset S_i.
    10. FFSS algorithm (an illustrative Python sketch of this search follows this slide)
        Input:
          D(F1, F2, ..., FM)   // input dataset (N x M)
          Out                  // output dataset (N x 1)
        Output:
          D_best               // feature subset of highly predictive features
        begin
          do:
            Step 1: select the feature subset S_i to be searched.
            Step 2: fuzzify the feature subset S_i.
            Step 3: for feature subset S_i, assess the similarity degree between all pairs of clusters (i.e. fuzzy sets) in all features of S_i.
          until all feature subsets are searched.
          Step 4: evaluate each feature subset S_i using E_Si.
          Step 5: the best feature subset is the one with minimum E_Si.
        end;
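    The search loop can be sketched as follows, reusing fuzzy_c_means from the earlier sketch. Note that cluster_similarity below is a hedged placeholder (a simple overlap of membership columns), not the paper's Definitions 1 and 2; it only shows where the E_Si score plugs into the exhaustive subset search.

    # Illustrative FFSS search loop: score every feature subset by the similarity
    # between its fuzzy clusters and keep the subset with the minimum score.
    from itertools import combinations
    import numpy as np

    def cluster_similarity(U):
        """Average pairwise overlap between fuzzy clusters (placeholder measure, not Definition 1/2)."""
        c = U.shape[1]
        sims = [np.minimum(U[:, a], U[:, b]).sum() / np.maximum(U[:, a], U[:, b]).sum()
                for a, b in combinations(range(c), 2)]
        return float(np.mean(sims))

    def ffss(X, n_clusters=3, subset_size=3):
        """Exhaustively score feature subsets; return the one with minimum overall similarity."""
        best_subset, best_score = None, np.inf
        for subset in combinations(range(X.shape[1]), subset_size):
            _, U = fuzzy_c_means(X[:, list(subset)], n_clusters)  # from the earlier sketch
            score = cluster_similarity(U)                         # stands in for E_Si
            if score < best_score:
                best_subset, best_score = subset, score
        return best_subset, best_score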
    11. Empirical validation
        - We built an analogy estimation model for each FSS algorithm, using Euclidean distance as the similarity measure.
        - Validation strategy: 10-fold cross-validation.
        - Evaluation criteria: Mean Magnitude of Relative Error (MMRE), Median MRE (MdMRE), prediction quality indicator Pred(25%) (standard definitions are sketched below).

        Dataset              Number of features   Number of projects
        ISBSG (release 10)   14                   400
        Desharnais           10                   77
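    The three accuracy criteria have standard definitions, sketched here for reference (MRE = |actual - estimated| / actual); this is a generic sketch, not tied to the paper's exact tooling.

    # Standard definitions of the evaluation criteria used on the results slides.
    import numpy as np

    def mre(actual, predicted):
        actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
        return np.abs(actual - predicted) / actual

    def mmre(actual, predicted):
        return mre(actual, predicted).mean()              # Mean Magnitude of Relative Error

    def mdmre(actual, predicted):
        return np.median(mre(actual, predicted))          # Median MRE

    def pred(actual, predicted, level=0.25):
        return (mre(actual, predicted) <= level).mean()   # Pred(25%): share of estimates within 25%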
    12. Kirsopp & Shepperd said:
        "It is difficult to compare algorithms used for classification problems with algorithms that deal with prediction problems, so the measures of accuracy used are different."
    13. Results - ISBSG
        Algorithm used in analogy model                   One analogy   Mean of 2 analogies   Mean of 3 analogies
        All features                                      37.74%        41.0%                 29.4%
        Exhaustive search, Hill climbing, Random search   28.25%        30.3%                 30.2%
        Forward subset selection                          33.3%         30.4%                 31.2%
        Backward subset selection                         34.7%         38%                   34.4%
        FFSS                                              28.7%         30.6%                 32.2%
    14. Results - ISBSG
        Algorithm used in analogy model                   One analogy   Mean of 2 analogies   Mean of 3 analogies
        All features                                      31.6%         33%                   20.62%
        Exhaustive search, Hill climbing, Random search   21.9%         24%                   20.8%
        Forward subset selection                          21.0%         21.4%                 20.7%
        Backward subset selection                         22.6%         28.7%                 25.2%
        FFSS                                              21.8%         22.3%                 22.7%
    15. Results - Desharnais
        Algorithm used in analogy model                                          One analogy   Mean of 2 analogies   Mean of 3 analogies
        All features                                                             60.1%         51.5%                 50.0%
        Exhaustive search, Forward subset selection, Hill climbing, Random search   38.2%         39.4%                 36.4%
        Backward subset selection                                                42.4%         43.9%                 46.6%
        FFSS                                                                     40.2%         40.3%                 38.5%
    16. Results - Desharnais
        Algorithm used in analogy model                                          One analogy   Mean of 2 analogies   Mean of 3 analogies
        All features                                                             41.7%         41.0%                 36.1%
        Exhaustive search, Forward subset selection, Hill climbing, Random search   30.8%         38.0%                 30.9%
        Backward subset selection                                                38.4%         37.4%                 34.6%
        FFSS                                                                     32.4%         33.3%                 31.7%
    17. Conclusions
        - Fuzzy feature subset selection has a significant impact on the accuracy of EA.
        - Our FFSS algorithm produces results comparable with exhaustive search, hill climbing and forward selection.
        - It reduces uncertainty when categorical data is involved.
    18. Conclusions (cont.)
        Which FSS is suitable? It depends on the dataset size and the goal:
        - Small dataset, accuracy only: exhaustive search and hill climbing.
        - Large dataset, accuracy only: random search and hill climbing.
        - Large dataset, less time and quite reasonable accuracy: fuzzy feature subset selection, forward feature selection, backward feature selection.
    19. Threats to experiment validity
        - Selecting a representative subset of the ISBSG data.
        - MMRE was used as the fitness criterion for all feature selection algorithms except our FFSS.
        - Number of projects.
        - Outliers and extreme values.
    20. References
        - Briand, L., Langley, T., Wieczorek, I.: Using the European Space Agency data set: a replicated assessment and comparison of common software cost modelling techniques. Proc. 22nd IEEE Intl. Conf. on Software Engineering, Limerick, Ireland, 2000.
        - Kirsopp, C., Shepperd, M.: Case and feature subset selection in case-based software project effort prediction. Proc. 22nd SGAI Int'l Conf. on Knowledge-Based Systems and Applied Artificial Intelligence, 2002.
    21. Questions
