Data Mining Application in a Software Project Management Process
Presentation Transcript

  • Data Mining Application in a Software Project Management Process Richi Nayak School of Information Systems, QUT, Brisbane Tian Qiu EDS Credit Services, Adelaide, Australia
  • Agenda
    • Motivation
    • Data Pre-processing
    • Data Modelling
    • Analysis of Results
    • Conclusion
  • Motivation
    • A software engineering project leader manages a project with several issues involved such as project planning and scheduling, code implementation, testing and release.
    • It is difficult to precisely estimate the project duration in advance.
    • However, a project leader can utilise data mining methods to make accurate estimations by learning from the information gained in previous projects.
    • He can also eliminate several potential problems when a pattern appears in the current project similar to one that caused problems in previous projects.
    • Data mining techniques have been successfully applied in areas such as marketing, medicine, and finance.
    • However, few such applications can currently be seen in the software engineering domain.
  • Data Acquisition
    • Every year, more than 50 software projects are carried out in MASC, more than ten thousand lines of code are created, and thousands of pages of documents are released to various customers.
    • In order to control the problems appearing during a project life cycle and to improve the working efficiency, MASC has set up a series of processes such as Software Configuration Management, Software Risk Management, Software Project Metric Report and Software Problem Report Management.
    • The Software Problem Report (PR) management data is chosen as the focus of this research.
    • A software bug-tracking system, GNATS (a tracking system by GNU), is set up on the MASC Intranet to collect and maintain all PRs raised from every department and individual within MASC.
      • Currently the GNATS system stores more than 40,000 problem reports.
  • Structure of a PR
  • Data Pre-processing: Defining Goals (1)
    • Finding precise estimation figures for bug fixing at the early stage of a project
      • Few, if any, tools exist that can be used for both the estimation and the project problem reasoning stages.
      • Project team members can only give estimations based on their own experience from previous projects.
      • If the current project is not within their familiar topics, the accuracy of the estimation becomes worse.
      • Fixing a PR (problem report) also becomes more tedious when the responsible person cannot estimate the time needed to fix the problem.
      • Accurate estimation will bring great cost savings and accurate progress control to the development team and to the organisation.
  • Data Pre-processing: Defining Goals (2)
    • Improving control over PR fixing and reducing the project cycle time
      • Currently the GNATS system has no actual database management system implemented.
      • This limits the potential benefits to software engineers, who could obtain valuable information if the existing PR data were analysed.
      • This is especially useful when a programmer is struggling with a bug whose resolution may already be hidden in the knowledge that can be derived from previous similar problems.
  • Data Pre-processing: Field Selection (1)
    • Whenever a PR is raised, a project leader has to find answers:
      • How severe is the problem (customer impact)?
      • What is the impact of the problem on the project schedule (cost & team priority)?
      • What type of problem is it (a software bug or a design flaw)?
      • How long will it take to fix?
      • How many people are available?
    • Attributes Severity, Priority, Class, Arrival-Date, Closed-Date, Responsible, and Synopsis are considered for mining.
    • Several fields such as Confidential, Submitter-ID, Environment, Fix, Release Note, Audit Trail, the associated project name and PR number are not considered during mining.
    • All PRs with a closed value in their State field are chosen.
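The selection above amounts to a simple filter-and-project step. A minimal sketch, using the pipe-delimited layout and field names shown in the data examples on a later slide; the sample records here are abbreviated and only a few of the real fields appear:

```python
import csv
import io

# Hypothetical PR records in a pipe-delimited layout (field names from the slides).
RAW = """PR_ID|Severity|Priority|Class|State
17358|serious|high|sw-bug|closed
17436|serious|high|support|open
580|serious|low|doc-bug|closed"""

KEEP = ["Severity", "Priority", "Class"]  # subset of the attributes chosen for mining

reader = csv.DictReader(io.StringIO(RAW), delimiter="|")
# Keep only PRs whose State is 'closed' and project the selected attributes.
selected = [{k: row[k] for k in KEEP} for row in reader if row["State"] == "closed"]
print(selected)
```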
  • Data Pre-processing: Field Selection (2)
    • The attribute ‘Class’ is chosen as the target attribute in order to find valuable knowledge relating the type of a problem to the rest of the PR attributes.
      • Association or characteristic rules can reveal the relationship between the fix effort (in time) and the PR class.
      • A project leader can analyse the fix effort versus the human resources available, and put it in the schedule and resource plan.
    • The Synopsis field is pure text briefly describing what the problem is in the associated project.
      • Text mining is used to analyse this data.
      • PR_ID | Category | Severity | Priority | Class | Arrival-Date | Close-Date | Synopsis
      • 17358 | bambam | serious | high | sw-bug | 20:50 May 25 CST 1999 | 11:35 Mar 24 CST 1999 | STI STR register not being reset at POR
      • 17436 | bambam | serious | high | support | 18:10 Mar 30 CST 1999 | 12:00 May 24 CST 1999 | sequence_reg varable in the RDR_CHL task is not defined
      • 580 | bingarra | serious | low | doc-bug | 10:10 May 31 1996 | | In URDRT2 of design doc, the word 'last' should be 'first'
      • 6205 | gali | serious | medium | SW-bug | 14:30 Nov 5 1997 | 13:14 Dec 1 1997 | grouping of options in dialog box
    Data examples from the original PR data set
  • Data Pre-processing: Cleaning
    • Use of different terminology over time
    • Start-time > close-time
    • Missing values
    • No time zone value
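These cleaning checks can be sketched as a small validation function; the timestamp format below is a simplified stand-in for the real GNATS dates (which look like "20:50 May 25 CST 1999" and sometimes lack a time zone):

```python
from datetime import datetime

# Simplified timestamp format; real PR dates use a different layout and
# sometimes omit the time zone entirely.
FMT = "%Y-%m-%d %H:%M"

def cleaning_issues(pr):
    """Return the list of cleaning problems found in one PR record (a dict)."""
    issues = []
    arrival, closed = pr.get("arrival"), pr.get("closed")
    if not arrival or not closed:
        issues.append("missing value")
    elif datetime.strptime(arrival, FMT) > datetime.strptime(closed, FMT):
        issues.append("start-time > close-time")
    return issues

# PR 17358 in the examples above arrived after it was closed -- a record to repair or drop.
print(cleaning_issues({"arrival": "1999-05-25 20:50", "closed": "1999-03-24 11:35"}))
```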
  • Data Pre-processing: Data Transformation
    • The attributes Arrival-Date, Closed-Date and Responsible are used to calculate the time spent to fix a PR,
      • assuming that the derived attribute is the total time spent to fix a problem when only one person is involved.
    • This attribute is discretized with cut points at one day (1), half a week (3 days), one week (7), two weeks (14), one month (30), one quarter (90), half a year (180) and more than one year (360 days),
      • so that the mining results do not depend heavily on the exact human resources involved but give an approximate estimate, allowing for minor changes in human resources.
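The discretization above reduces to a sorted cut-point lookup. A minimal sketch; the cut points come from the slide, while the interval labels are only illustrative:

```python
import bisect

# Cut points (in days) from the slide: 1, 3, 7, 14, 30, 90, 180, 360.
CUTS = [1, 3, 7, 14, 30, 90, 180, 360]
# Illustrative interval labels: one more label than cut points.
LABELS = ["1 day", "half week", "1 week", "2 weeks", "1 month",
          "1 quarter", "half year", "1 year", "more than 1 year"]

def discretize(days):
    """Map a raw time-to-fix (in days) onto its interval label."""
    return LABELS[bisect.bisect_left(CUTS, days)]

print(discretize(2))   # falls in the "half week" bucket
print(discretize(45))  # falls in the "1 quarter" bucket
```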
  • Data Modelling and Mining
    • Prediction modelling on the time-consumption patterns of the PR data to make estimations.
      • C5 http://www.rulequest.com/see5-info.html
      • CBA, http://www.comp.nus.edu.sg/~dm2/
    • Link analysis to discover associations among the contents/values of the selected variables.
      • CBA
    • Text Mining to analyse Synopsis field.
      • TextAnalyst http://www.megaputer.com/company/index.html
  • Classification and Association Rule Mining
    • Initial data set: 40,000 PRs
    • Pre-processed Data set: 10,765 PRs
      • Covering more than 120 projects from 1996 to 2000
    • Data sets of different sizes and compositions are used in training
  • Data Mining result example in CBA
  • CBA Results: All PRs coming from one project
    • PR number in the training set: 1224; PR number in the testing set: 9541
    Engine | Number of rules | Prediction inaccuracy on training data (%) | Training time cost (seconds) | Prediction inaccuracy on testing data (%) | Testing time cost (seconds)
    MSAD | 11 | 47.059 | 1.04 | 47.49 | 0.09
    MSMD | 9 | 45.18 | 1.01 | 47.56 | 0.10
    SSAD | 15 | 46.16 | 1.00 | 52.94 | 0.08
    SSMD | 10 | 45.180 | 1.01 | 47.56 | 0.07
  • CBA Results: large training set
    • Training set of 5381 PRs from all software projects; PR number in the testing set: 5384
    Engine | Number of rules | Prediction inaccuracy on training data (%) | Training time cost (seconds) | Prediction inaccuracy on testing data (%) | Testing time cost (seconds)
    MSAD | 12 | 58.45 | 0.45 | 58.91 | 1.2
    MSMD | 21 | 59.10 | 0.44 | 58.25 | 1.0
    SSAD | 18 | 57.39 | 0.44 | 58.09 | 1.3
    SSMD | 41 | 57.04 | 0.41 | 51.95 | 1.1
  • CBA Results: equally distributed target value
    • 342 PRs with the change-request value of ‘Class’ from all software projects
    • 1000 PRs each with the sw-bug, doc-bug and support values
    • PR number in the training set: 3342; PR number in the testing set: 7423
    Engine | Number of rules | Prediction inaccuracy on training data (%) | Training time cost (seconds) | Prediction inaccuracy on testing data (%) | Testing time cost (seconds)
    MSAD | 15 | 46.5 | 1.6 | 46.9 | 1.6
    MSMD | 15 | 46.5 | 1.6 | 45.1 | 1.9
    SSAD | 15 | 43.6 | 1.6 | 43.8 | 2.1
    SSMD | 20 | 43.51 | 2.2 | 43.5 | 2.0
  • CBA Results: Cross-validation
    • PR number in the data set: 10,765
    Engine | Prediction inaccuracy (%) | Mining time (seconds)
    CV-MSAD | 48.87 | 25.365
    CV-MSMD | 45.02 | 28.944
    CV-SSAD | 50.5 | 25.345
    CV-SSMD | 46.05 | 25.00
  • CBA Results: Rules
    • If severity = non-critical and time-to-fix = 3 to 30 days and priority = medium, then class = doc-bug. Confidence = 76.6%, support = 2.7%
    • If severity = critical and time-to-fix = less than 3 days and priority = high, then class = sw-bug. Confidence = 75.2%, support = 2.3%
    • Conclusion: software bugs can be fixed within 3 days with above 75% confidence if they have high priority and critical severity.
    • It may take 3 months to fix a problem if the corresponding priority and severity are graded as medium and serious.
    • No rule has a confidence value larger than 80%; however, the rules do describe some characteristics of the PR fixing patterns. They are therefore useful to software project management in estimating bug-fixing time.
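A CBA-style classifier is essentially an ordered list of such rules plus a default class. A minimal sketch, with conditions transcribed from the two rules above; the default class and the attribute names are assumptions for illustration:

```python
# Ordered rule list: (conditions, predicted class, confidence).
# The two rules come from the slide; the default class is hypothetical.
RULES = [
    ({"severity": "critical", "time_to_fix": "less than 3 days", "priority": "high"},
     "sw-bug", 0.752),
    ({"severity": "non-critical", "time_to_fix": "3 to 30 days", "priority": "medium"},
     "doc-bug", 0.766),
]
DEFAULT = "support"

def classify(pr):
    """Return the prediction of the first rule whose conditions all match (CBA-style)."""
    for conditions, cls, confidence in RULES:
        if all(pr.get(attr) == value for attr, value in conditions.items()):
            return cls, confidence
    return DEFAULT, None

print(classify({"severity": "critical", "time_to_fix": "less than 3 days",
                "priority": "high"}))
# → ('sw-bug', 0.752)
```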
  • CBA Results: Summary
    • In general, around 46% prediction mismatch in the training data set
      • the lowest is 43.51%, the highest is 59.10%.
    • Above 51% prediction mismatch in the testing data set
      • the lowest is 43.51%, the highest is 58.25%.
    • Attempting to improve prediction accuracy with equally distributed target-value samples does not lead to much change.
    • The error rates from using multiple supports are higher, and the number of extracted rules is lower, than those from the single-support mining engine.
    • The error rates from the Multiple Support and Single Support mining engines on automatically discretized values are usually higher than with manual discretization.
  • C5 results: Original Data Set
    • Total number of PR records: 10,765
    Measure | NORMAL | BOOSTING | CROSS_VAL
    Number of created rules | 51 | N/A | 57.7
    Error rate of rules (%) | 41.5 | 41.3 | 43.9
    Size of created tree | 141 | N/A | 121.9
    Error rate of trees (%) | 40.3 | 39.4 | 44.1
    Process time (seconds) | 5.6 | 37.7 | 41.1
  • C5 results: equally distributed target value
    • Total number of PR records: 5681 (3342 + 2339)
    Measure | NORMAL | BOOSTING | CROSS_VAL
    Number of created rules | 11 | N/A | 12.4
    Error rate of rules (%) | 42.6 | 42.6 | 42.8
    Size of created tree | 21 | N/A | 17.4
    Error rate of trees (%) | 42.5 | 42.6 | 43.1
    Process time (seconds) | 0.2 | 0.4 | 1.1
  • C5 Results: Rules
    • When a PR has low priority and the time spent is around half a day (0.5 day), the rule classifies the bug as a document-related bug with high probability (87.5% confidence).
    • When a PR has medium priority with non-critical severity and the time spent is around 1.1 days, the rule classifies the bug as a document-related bug with 84.6% confidence.
    • When a PR has low priority and the time spent on fixing is around 1 week, the rule classifies the bug as a software bug with 83.3% confidence.
    • In general, around 42% error rate in rules from the training set
      • the lowest is 40.3%, the highest is 43.9%.
    • A similar error rate holds for the generated trees in the testing data set
      • the lowest is 39.4%, the highest is 44.1%.
    • Both of the rates are better than CBA results.
    • The time efficiency of C5 is also better than CBA.
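C5 grows its decision tree by repeatedly choosing the attribute with the best information gain. A minimal sketch of that split criterion on toy PR rows (the rows here are illustrative, not from the study):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels -- the impurity C5/C4.5 reduces."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, target="class"):
    """Information gain of splitting `rows` (a list of dicts) on `attr`."""
    base = entropy([r[target] for r in rows])
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# Toy PR rows: priority separates the classes perfectly, severity does not.
rows = [
    {"priority": "high", "severity": "serious", "class": "sw-bug"},
    {"priority": "high", "severity": "critical", "class": "sw-bug"},
    {"priority": "low", "severity": "serious", "class": "doc-bug"},
    {"priority": "low", "severity": "critical", "class": "doc-bug"},
]
print(info_gain(rows, "priority"))  # 1.0 -- a perfect split
print(info_gain(rows, "severity"))  # 0.0 -- no information
```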
  • Text Mining
    • Analysing the synopsis text together with the rules obtained from classification and association can more accurately predict the time and cost of fixing PRs.
    • The goal is to automatically summarise the pure text data and extract some valuable rules.
    • TextAnalyst automatically creates a semantic network based on the structure, vocabulary and volume of the analysed text, without any predefined rules.
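The idea behind such a semantic network can be illustrated crudely by linking terms that co-occur in the same synopsis; TextAnalyst's actual algorithm is far more elaborate, so this is only a toy sketch (synopsis lines copied from the data examples earlier, including their original typos):

```python
from collections import Counter
from itertools import combinations

# Synopsis lines from the PR data examples earlier in the slides.
synopses = [
    "STI STR register not being reset at POR",
    "sequence_reg varable in the RDR_CHL task is not defined",
    "grouping of options in dialog box",
]

# Count how often two terms appear in the same synopsis; frequent pairs
# become the links of a crude term network.
links = Counter()
for line in synopses:
    terms = sorted({w.lower() for w in line.split() if len(w) > 3})
    links.update(combinations(terms, 2))

print(links.most_common(3))
```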
  • Existing problems
    • The error rates on the testing data sets in both CBA and C5 are higher than expected.
      • non-uniform target-value distribution (57, 25, 15, 3%)
      • indicating that some amount of noise still exists in the data.
    • The relationship between PRs and the human resources within a particular project has a great impact.
    • The time needed to fix a bug differs for each project depending upon the actual human resources available.
    • Only the attribute ‘Responsible’ is used to indicate the human resources available.
    • In truth, knowledge of the human resources available in the past projects whose data was analysed is needed for time patterns to help project leaders predict time consumption more accurately.
    • The use of an additional data source, the ‘Change Request’ data set that records all customer-request process data, may rectify this problem.
  • Conclusion
    • This work demonstrates the use of data mining techniques on a set of data collected from the software engineering process in a real software business environment.
    • Useful rules are inferred on the time patterns of PR fixing and on the relationship between the content and the type of a PR, in the form of association rules, classification rules and semantic trees.
    • The scale of this data mining task is limited.
    • Future work: apply data mining to different phases of software development, such as software quality data.
  • Thank You!! Questions?