Promise08 Wrapup


Wrapup - PROMISE 2008



  1. PROMISE 2008: wrap-up
     The PROMISE effect: do more together than separately
     Gary D. Boetticher, Tim Menzies, Tom Ostrand, Guenther Ruhe
  2. The other side of PROMISE
     - PROMISE is not just an annual meeting; it is a community.
     - How much contact have you had with your fellow PROMISE-ians during the year?
     - Are you exploiting the "PROMISE effect"?
       - Do more together than separately.
  3. Case study #1
     - Cukic & Menzies: new NSF award, 2008 to 2011
       - "Overcoming ceiling effects in defect predictors"
     - Surprisingly good reviews
       - High praise for the connection to the PROMISE repository
       - On the broader impacts of the proposed activity:
         - "... The work has significant broader impacts and the PROMISE repository that the PIs are a part of provides them a platform for making further contributions to the SE community. ..."
       - Summary statement:
         - "... The PROMISE repository and the work of the PI represents a significant input to the proposal. ..."
     - So the repository is now a tool for accessing funding.
  4. Case study #2
     - For several years, Ostrand, Weyuker, and Bell at AT&T held the state of the art in defect localization (predicting the subset of the code with the most errors).
     - At PROMISE'07:
       - Koru argues that defect distribution is logarithmic (i.e. more errors in the smaller modules).
     - Sept '07:
       - Milton (WVU) tells Menzies that their new defect localizer is failing.
       - Menzies says "try crawling the suspect modules in the Koru order" (smallest modules first).
     - Jan '08:
       - Milton shows the new localizer dramatically outperforms standard methods.
     - Mar '08:
       - Milton hosted at AT&T for a week by Ostrand.
       - Tests the new localizer inside the firewalls at AT&T.
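The "Koru order" crawl in this case study can be sketched in a few lines: among the modules a defect predictor flags as suspect, inspect the smallest ones first. A minimal illustration (the module names and sizes below are hypothetical, not from the study):

```python
# Sketch of the "Koru order" heuristic: among modules flagged as suspect
# by a defect predictor, inspect the smallest first, on the argument that
# smaller modules carry proportionally more defects.
# Module names and line counts are hypothetical, for illustration only.

def koru_order(suspects, loc):
    """Sort suspect modules by ascending size (lines of code)."""
    return sorted(suspects, key=lambda m: loc[m])

loc = {"parser.c": 120, "util.c": 45, "net.c": 900, "ui.c": 300}
suspects = ["parser.c", "net.c", "util.c"]

print(koru_order(suspects, loc))  # smallest suspect modules come first
```

The heuristic changes only the inspection *order*, not which modules are flagged, which is why it could be bolted onto an existing localizer.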
  5. Case study #3
     - TSE, May 2007:
       - Kitchenham assesses within-company (WC) data vs. cross-company (CC) data for effort prediction.
     - Aug '07: Menzies (USA) realizes that the PROMISE repository could support the same study, but for defect prediction.
       - Crowd-sources the PROMISE community for collaborators.
     - Sept '07: Bener & Turhan (Turkey) run the experiments.
       - Turkey makes the data, USA runs the statistics.
       - CC results: the best PD results ever seen with this data (but very bad PF rates).
     - Oct '07: Submission to TSE by Menzies, Bener, Turhan.
       - CC deprecated for defect prediction.
     - Feb 14: TSE decision: revise & resubmit.
     - May 1: Resubmission with 3 new data sets (Turkish whitegoods).
       - Paper now much stronger.
       - PROMISE repository now has 3 more data sets.
     - Other Bener, Menzies, Turhan papers: two at Defects'08, one at ASE'08.
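The PD and PF rates in this case study are the standard detection and false-alarm measures computed from a confusion matrix. A minimal sketch (the counts below are made up, not the study's results):

```python
# PD (probability of detection, a.k.a. recall) and PF (probability of
# false alarm) from defect-prediction confusion-matrix counts.
#   tp: defective modules flagged    fn: defective modules missed
#   fp: clean modules flagged        tn: clean modules passed
# The counts used below are hypothetical, for illustration only.

def pd_pf(tp, fn, fp, tn):
    pd = tp / (tp + fn)  # fraction of truly defective modules caught
    pf = fp / (fp + tn)  # fraction of clean modules falsely flagged
    return pd, pf

pd, pf = pd_pf(tp=40, fn=10, fp=30, tn=120)
print(round(pd, 2), round(pf, 2))  # prints 0.8 0.2
```

A "best PD ever, but very bad PF" result means the CC predictor caught most real defects while also flagging many clean modules, which is exactly the trade-off these two numbers separate.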
  6. Case studies 4, 5, 6, ...
     - Over to you...
     - Any other stories to add of interactions between PROMISE-ians?
  7. The PROMISE effect
     - Do more together than separately.
     - Look around the room:
       - What ideas are in the air?
       - What is the value-added of the person sitting next to you?
       - What are your plans for collaboration over the next 7 months (in time for the next PROMISE CFP)?
     - And we can use web tools to foster this community:
       - Chat rooms, discussion groups, wikis, a code library.
  8. Proposal: a PROMISE chat room
     - Two minutes of chatting can sometimes beat two months of reading.
     - ?
       - Web-based, platform independent.
  9. Proposal: a PROMISE discussion list
     - Chatting is hardly a permanent record of a discussion.
     - It is good to be able to access what others have said in the past.
     - ?
  10. Proposal: a PROMISE wiki
     - ... is read-only (except for comments).
       - Good for workshop admin and data storage.
     - Chat rooms and discussion groups are good for discussion "volleys".
     - Wikis are better for summarizing and synthesizing ongoing discussions.
       - Write and rewrite access.
     - ?
  11. Proposal: a shared PROMISE code library
     - Reading is one thing; running code is another.
     - Have a shared, open-source code base, complete with makefiles.
     - Able to reproduce prior results with one command.
     - ?
     - ? a Debian package
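The "reproduce prior results with one command" idea above is essentially make-style rebuilding: run each experiment step only when its output is missing. A minimal sketch of that driver logic (all step names, commands, and paths below are hypothetical; a real shared library would use actual makefiles or packaging):

```python
# Sketch of a tiny make-like driver for reproducing a prior experiment:
# each step is a (target file, command) pair, and a command runs only
# when its target does not yet exist.
# Step commands and file names are hypothetical, for illustration only.
import os
import subprocess

def reproduce(steps, run=subprocess.run):
    """Run each step whose target file is missing; return rebuilt targets."""
    ran = []
    for target, cmd in steps:
        if not os.path.exists(target):
            run(cmd, check=True)  # rebuild the missing target
            ran.append(target)
    return ran

# Example: the two steps of a (hypothetical) defect-prediction experiment.
steps = [
    ("data/clean.csv", ["python", "prep.py"]),       # prepare the data
    ("results/scores.txt", ["python", "learn.py"]),  # run the learner
]
```

Idempotence is the point: running the same command twice does no extra work, so "one command" stays cheap for reviewers who only want to check a single table.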
  12. Proposal: the PROMISE'09 awards
     - Source award
       - Awarded to the individuals or groups that contribute the most new data, with baseline results, to the repository.
     - Service award
       - Awarded to the individuals or groups that offer the most support to others in the new groups / chat room / wiki.
     - Science award
       - Awarded to the 2009 paper that best repeats, extends, or interacts with prior results and/or authors of a past PROMISE paper.