Software Systems as Cities: A Controlled Experiment
Richard Wettel, Michele Lanza, Romain Robbes
REVEAL @ Faculty...
Software Systems as Cities
City Metaphor (VISSOFT 2007)
class → building, package → district
nesting level → district color
number of methods (NOM) → height
number of attributes (NOA) → base size
number of lines of code (LOC) → building color
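The mapping above can be sketched in code. This is a minimal, illustrative rendition of the metric-to-geometry mapping described on the slide; the function name, the linear scaling, and the RGB interpolation are assumptions, not CodeCity's actual implementation.

```python
# Sketch of the CodeCity metric mapping: NOM -> height, NOA -> base size,
# LOC -> color from dark gray to intense blue. All scaling choices here
# are illustrative assumptions.

def class_to_building(nom: int, noa: int, loc: int, max_loc: int = 10_000):
    """Map class-level metrics to a building's visual properties."""
    # Height grows with the number of methods.
    height = nom
    # The footprint is a square whose side grows with the number of attributes.
    base_side = max(noa, 1)
    # Color interpolates from dark gray (little code) toward blue (much code).
    intensity = min(loc / max_loc, 1.0)
    color = (0.2, 0.2, 0.2 + 0.8 * intensity)
    return {"height": height, "base_side": base_side, "color": color}

# The "parking lot" archetype: many attributes, no methods, no code.
building = class_to_building(nom=0, noa=173, loc=0)
```

With these inputs the result is a wide, flat, dark-gray slab, matching the parking-lot shape the slides describe.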
Program Comprehension (ICPC 2007)
ArgoUML, LOC: 136,325
skyscraper: FacadeMDRImpl (NOM 349, NOA 3, LOC 3,413)
office building: CPPParser (NOM 204, NOA 85, LOC 9,111)
parking lot: JavaTokenTypes (NOM 0, NOA 173, LOC 0)
house: PropPanelEvent (NOM 3, NOA 2, LOC 37)
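The four archetypes can be captured by a tiny classifier over NOM and NOA. This is a hypothetical sketch; the threshold of 20 is an arbitrary assumption for illustration, not a value from the paper.

```python
# Hypothetical classifier for the building archetypes listed above.
# The "many" threshold is an illustrative assumption.
def archetype(nom: int, noa: int, many: int = 20) -> str:
    if nom >= many and noa < many:
        return "skyscraper"       # many methods, few attributes
    if nom >= many and noa >= many:
        return "office building"  # many methods, many attributes
    if nom < many and noa >= many:
        return "parking lot"      # few methods, many attributes
    return "house"                # small class: few of both
```

Applied to the metrics on the slide, the classifier reproduces the labels: FacadeMDRImpl is a skyscraper, CPPParser an office building, JavaTokenTypes a parking lot, and PropPanelEvent a house.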
Design Quality Assessment: disharmony map
ArgoUML classes: brain class 8, god class 30, god + brain 6, da...
System Evolution Analysis (WCRE 2008)
time traveling: ArgoUML over 8 major releases ...
http://codecity.inf.usi.ch, implemented in Smalltalk (ICSE 2008 tool demo)
Is it useful?
A Controlled Experiment
Design
State of the art? (technical report, 2010)
Design desiderata
1. Avoid comparing using a technique against not using it.
2. Involve participants from the industry.
3. Pr...
Finding a baseline
1. program comprehension
2. design quality assessment
3. system evolution analysis (later excluded)
Tasks
Tasks
program comprehension: 6 (e.g., A1: Identify the convention us...)
design quality assessment: 4
quantitative: 9, qualitative: 1
Main research questions
1. Does the use of CodeCity increase the correctness of the solutions to program comp...
Variables of the experiment
dependent: correctness, completion time
independent: tool (CodeCity vs. baseline), object system size (FindBugs: medium, Azureus: large)
controlled: background, experience level
The experiment’s design: between-subjects, randomized-block
blocks: background (academia / industry) × experience (beginner / advanced)
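A randomized-block, between-subjects assignment like the one on the slide can be sketched as follows. This is an illustrative reconstruction under stated assumptions: subjects are grouped into blocks by background and experience, shuffled, and alternated between the two treatments; the function and field names are hypothetical.

```python
import random

# Sketch of randomized-block assignment: block by (background, experience),
# then balance CodeCity vs. baseline within each block. Names and the
# alternation scheme are assumptions for illustration.
def assign_treatments(subjects, seed=42):
    """subjects: list of dicts with 'name', 'background', 'experience' keys."""
    rng = random.Random(seed)
    blocks = {}
    for s in subjects:
        blocks.setdefault((s["background"], s["experience"]), []).append(s)
    assignment = {}
    for block in blocks.values():
        rng.shuffle(block)
        # Alternate treatments within the shuffled block for balance.
        for i, s in enumerate(block):
            assignment[s["name"]] = "CodeCity" if i % 2 == 0 else "baseline"
    return assignment
```

Blocking first and randomizing inside each block keeps the two treatment groups comparable on background and experience, which is exactly why those two factors were made controlled variables.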
Execution
Experimental runs
days 1 to 4, each starting with a training session (1 hour) ...
Testing the waters: pilot runs in November and December 2009
Timeline of the experiment: November 2009 to 2010 ...
Treatments and subjects, by block: academia / industry × beginner / advanced ...
Collecting raw data: the solution and the completion time
Controlling time: a common time budget; info on subjects shown as "Name (Task): Remaining time"
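The per-subject display implied by the slide ("Name (Task): Remaining time") can be sketched with a small timer class. This is a hypothetical illustration; the class, the subject name, and the label format are assumptions, not the experimenters' actual tooling.

```python
import time

# Minimal sketch of a per-subject, per-task countdown display.
# Structure and formatting are illustrative assumptions.
class TaskTimer:
    def __init__(self, name: str, task: str, minutes: float):
        self.name, self.task = name, task
        # monotonic() is immune to wall-clock adjustments during the session
        self.deadline = time.monotonic() + minutes * 60

    def remaining(self) -> float:
        """Seconds left, clamped at zero once time is up."""
        return max(0.0, self.deadline - time.monotonic())

    def label(self) -> str:
        """The 'Name (Task): Remaining time' string shown to the experimenter."""
        m, s = divmod(int(self.remaining()), 60)
        return f"{self.name} ({self.task}): {m:02d}:{s:02d}"

# Hypothetical subject "Alice" on task T1 with a 10-minute budget.
timer = TaskTimer("Alice", "T1", 10)
```

Using a monotonic clock rather than wall-clock time is the key design choice: the remaining time cannot jump if the machine's clock is adjusted mid-session.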
Assessing correctness, with blinding (e.g., "1 T2: FindBugs, analyzed with CodeCity; A3: Impact Analysis ...")
Results
Statistical test: two-way analysis of variance (ANOVA), at the 95% confidence level
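The slide names the test but not its mechanics. The following is an illustrative, pure-Python sketch of a balanced two-way ANOVA over the two factors of this experiment (tool and object system size); the data values are invented for demonstration and this is not the authors' actual analysis.

```python
# Balanced two-way ANOVA sketch: compute F statistics for two factors
# and their interaction. Data below are invented for illustration.
def two_way_anova(data):
    """data[(a, b)] = list of observations; assumes a balanced design."""
    a_levels = sorted({a for a, _ in data})
    b_levels = sorted({b for _, b in data})
    n = len(next(iter(data.values())))                  # replicates per cell
    all_obs = [x for cell in data.values() for x in cell]
    N = len(all_obs)
    grand = sum(all_obs) / N

    def mean(xs):
        return sum(xs) / len(xs)

    cell_mean = {k: mean(v) for k, v in data.items()}
    a_mean = {a: mean([x for (aa, _), v in data.items() if aa == a for x in v])
              for a in a_levels}
    b_mean = {b: mean([x for (_, bb), v in data.items() if bb == b for x in v])
              for b in b_levels}

    # Sums of squares for each factor, the interaction, and the error term.
    ss_a = n * len(b_levels) * sum((a_mean[a] - grand) ** 2 for a in a_levels)
    ss_b = n * len(a_levels) * sum((b_mean[b] - grand) ** 2 for b in b_levels)
    ss_ab = n * sum((cell_mean[(a, b)] - a_mean[a] - b_mean[b] + grand) ** 2
                    for a in a_levels for b in b_levels)
    ss_e = sum((x - cell_mean[k]) ** 2 for k, v in data.items() for x in v)

    df_a, df_b = len(a_levels) - 1, len(b_levels) - 1
    df_ab = df_a * df_b
    df_e = N - len(a_levels) * len(b_levels)

    mse = ss_e / df_e
    return {"F_tool": (ss_a / df_a) / mse,
            "F_size": (ss_b / df_b) / mse,
            "F_interaction": (ss_ab / df_ab) / mse}

# Invented scores: the tool effect dominates, size has no effect.
res = two_way_anova({
    ("CodeCity", "FindBugs"): [8, 9, 10],
    ("CodeCity", "Azureus"):  [8, 9, 10],
    ("baseline", "FindBugs"): [5, 6, 7],
    ("baseline", "Azureus"):  [5, 6, 7],
})
```

Each F statistic would then be compared against the F distribution with the matching degrees of freedom at the 95% confidence level; in practice a statistics package handles that final step.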
Correctness: CodeCity vs. Ecl+Excl ...
Completion time: CodeCity vs. Ecl+Excl ...
After the first round, CodeCity vs. Ecl+Excl: +24% correctness, -12% completion time
The slides for the presentation I gave at ICSE 2011, in Honolulu, Hawaii.

  • Hi, I’m Richard Wettel and I am here to present to you Software Systems as Cities: A Controlled Experiment. This is work I did while I was a PhD student at the University of Lugano, in collaboration with my advisor, Michele Lanza, and with Romain Robbes, from the University of Chile.
  • Before diving into the controlled experiment, I’d like to give you a brief description of the approach evaluated in this work. Our approach, which is a software visualization approach, addresses mostly object-oriented software and is based on the following metaphor...
  • The system is a city, its packages are the city’s districts, and its classes the city’s buildings. The visible properties of the city artifacts reflect a set of software metrics. One of the configurations we often use is the following: the nesting level of a package is mapped onto the color of its district (from dark to light grays), while for classes, the number of methods is mapped onto the building’s height, the number of attributes onto its base size, and the number of lines of code onto the color of the building, from dark gray to intense blue. Since software has so many facets, it was important for us to have a versatile metaphor that can be employed in different contexts. We applied it in the following three:
  • Program comprehension. This is a code city of ArgoUML, a Java system of about 140 thousand lines of code. The visualization gives us a structural overview of the system and reveals several patterns in the form of building archetypes.
  • Antenna-like skyscrapers, representing classes with many methods and few attributes,
  • Office buildings for classes with many methods and many attributes,
  • Parking lots, for classes with few methods and a lot of attributes, as in the case of this huge Java interface (the color of the parking lot shows that it does not contain any code, just a whole bunch of constants),
  • Or houses, for small classes with few attributes and methods.
  • This was the first context, program comprehension.
  • The second context is assessing the quality of software design. For this we use code smells, which are violations of the rules of good object-oriented design. For example, brain class and god class are design problems related to excessive complexity and a lack of the collaboration that is essential in an object-oriented system. A data class is the opposite: a bare container of data, without any behavior. In our approach, we assign vivid colors to classes affected by such design problems, while the unaffected ones are gray and thus appear muted. This visualization, called a disharmony map, enables us to focus on the problems in the context of the entire system.
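The coloring rule just described (vivid colors for smelly classes, muted gray for the rest) can be sketched in a few lines. The smell names come from the talk; the specific colors and the precedence order are illustrative assumptions, not CodeCity's actual palette.

```python
# Sketch of the disharmony-map coloring rule. Colors and the precedence
# order among smells are illustrative assumptions.
SMELL_COLORS = {
    "brain class": "red",
    "god class": "yellow",
    "god + brain": "orange",
    "data class": "blue",
}

def disharmony_color(smells: set) -> str:
    """Return a vivid color for an affected class, gray otherwise."""
    for smell, color in SMELL_COLORS.items():
        if smell in smells:
            return color
    return "gray"  # unaffected classes appear muted
```

A class with no detected smells fades into the gray background, so the vividly colored problem classes stand out against the whole system, which is the point of the disharmony map.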
  • The third application context of our approach is system evolution analysis. One of the techniques we developed in this context is time travel, which allows us to watch the evolution of the city and thus of the system represented by it. I won’t get into details here because, for reasons I’ll explain later, we did not evaluate our approach in this context, in spite of its potential.
  • The software systems as cities approach is implemented in a freely-available tool called CodeCity.
  • (pause) And yet, what does this all mean? We needed to take a pragmatic stance and wonder whether this approach could be useful to practitioners, too.
  • This is what our controlled experiment is aimed at finding out. From this experiment, I’d like to leave you with not only the results, but also the various decisions we took towards obtaining them.
  • A controlled experiment will be at most as good as its design.
  • We first performed an extensive study of the works related to empirical evaluation of information visualization in general and of software visualization in particular.
  • To synthesize the lessons learned from the existing body of knowledge, we built a list of design desiderata, which we used as guidelines for the design of our experiment.
  • Here are the ones where we could improve over the existing experiments...
  • The first thing we had to consider was the baseline. Ideally, we’d have a tool that supports all three contexts that our approach supports. Unfortunately, we could not find one. Therefore, we started building the baseline from several tools.
  • After we knew what we were evaluating, we started designing the set of tasks. We have 6 program comprehension tasks (pause) and 4 design quality assessment tasks.
  • Another classification of our tasks is: 9 quantitative and 1 qualitative (which is not considered in our quantitative test).
  • With this task set, we wanted to find out whether the use of our approach brings any benefits over the use of the baseline, in terms of correctness of the solutions (that is, a grade for the solution, like the ones you give your students for any assignment). And we were also interested in the potential improvements in terms of completion time.
  • Based on the questions, we have the following variables for our experiment. Dependent variables are the ones we measure: correctness and completion time. Independent variables are the ones whose effect we want to measure; here we find the tool used to solve the tasks (CodeCity vs. baseline). Moreover, we wanted to see whether the advantages of using our approach scale with the object system size, which is the second independent variable. This variable has two levels, medium and large, for which we chose two Java systems of different magnitudes and different application domains (FindBugs is a bug detection tool, while Azureus is a peer-to-peer client). Finally, we wanted to eliminate the effect of background and experience level on the outcome of the experiment, and therefore we made them controlled variables.
  • Based on the questions, we have the following variables for our experiment. Dependent variables are the ones we measure: correctness and completion time. Independent variables are the ones whose effect we want to measure. Here we find the tool used to solve the tasks (CodeCity vs baseline). Moreover, we wanted to see whether the advantages of using our approach scale with the object system size, which is the second independent variable. This variable has two levels: medium and large and for these we chose two Java systems of different magnitudes and also different application domain (FindBugs is a bug detection tool, while Azureus is a peer-to-peer client). Finally, we wanted to eliminate the effect of background and experience level on the outcome of the experiment, and therefore me made them controlled variables.\n
  • And here we have these variables at play. On the one hand, we have two controlled variables, whose combination yields four blocks. On the other hand, the combination of the two independent variables yields four treatments (two experimental and two control). Our design is between-subjects (every subject receives either a control treatment or an experimental one) and randomized-blocks, in that we assign a random combination of treatments within each of the four blocks separately.

All this preparation allowed us to conduct our experiment with confidence.
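  • The randomized-blocks assignment described above can be sketched as follows. This is only an illustrative sketch: the block labels (industry vs. academia, beginner vs. advanced) and the `assign` helper are assumptions for the example, not taken from the study materials.

```python
import itertools
import random

# Treatments: combinations of the two independent variables (4 total).
tools = ["CodeCity", "baseline"]
sizes = ["medium (FindBugs)", "large (Azureus)"]
treatments = list(itertools.product(tools, sizes))

# Blocks: combinations of the two controlled variables (4 total).
# The level names here are assumed labels for illustration.
backgrounds = ["industry", "academia"]
experience = ["beginner", "advanced"]
blocks = list(itertools.product(backgrounds, experience))

def assign(subjects_by_block, seed=0):
    """Assign a treatment to each subject, randomizing within each
    block so treatments stay balanced block by block."""
    rng = random.Random(seed)
    assignment = {}
    for block, subjects in subjects_by_block.items():
        # Build a pool of shuffled full treatment sets, so every run of
        # four subjects in a block covers all four treatments once.
        pool = []
        while len(pool) < len(subjects):
            batch = treatments[:]
            rng.shuffle(batch)
            pool.extend(batch)
        for subject, treatment in zip(subjects, pool):
            assignment[subject] = treatment
    return assignment
```

Because each subject receives exactly one treatment, the design stays between-subjects, while the per-block shuffling realizes the randomized-blocks aspect.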
  • We planned to conduct the experiment so that each group of subjects would start with a training session: a one-hour presentation of the approach, concluded with a demonstration of CodeCity, which would prepare the participants for a potential experimental treatment. After the training session, or in the following days, we would hold a number of experiment sessions. In this diagram, the number in a blue square denotes the data points obtained with an experimental treatment, while the number in a gray square denotes the data points obtained with a control treatment. The arrows show which training session was used to train the subjects who received the experimental treatments.
  • The first step was to organize a pilot study with 7 master's and 2 PhD students, which allowed us to improve our questionnaire and to resolve the most common problems that would appear.
  • After this we started the experiment, which spanned four cities in Switzerland, Italy and Belgium and took around six months. We managed to collect 41 valid data points, of which 20 came from industry practitioners.
  • Here is the final distribution of treatments to subjects across the four groups.
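The block-randomized assignment behind these four groups can be sketched as follows. The treatment labels (tool crossed with object system size) come from the experiment's design; the round-robin assignment procedure itself is our own illustration, not necessarily the exact protocol the authors used.

```python
import random

# Treatment labels (tool x object system size) as in the experiment;
# the assignment procedure below is an illustrative assumption.
TREATMENTS = [
    ("CodeCity", "large"),    # T1
    ("CodeCity", "medium"),   # T2
    ("Ecl+Excl", "large"),    # T3
    ("Ecl+Excl", "medium"),   # T4
]

def assign_block(subjects, rng=random):
    """Shuffle one block (subjects sharing background and experience
    level) and deal out treatments round-robin, so each treatment
    occurs equally often within the block."""
    subjects = list(subjects)
    rng.shuffle(subjects)
    return {s: TREATMENTS[i % len(TREATMENTS)] for i, s in enumerate(subjects)}

rng = random.Random(42)  # fixed seed: reproducible example
block = [f"subject{i}" for i in range(8)]
assignment = assign_block(block, rng)
```

Randomizing within blocks keeps background and experience balanced across treatments, which is the point of a randomized-block design.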
  • To collect the data from our subjects, we used questionnaires containing the tasks, which gave us the raw data for measuring correctness, alternating with pages dedicated to entering the time, which allowed us to compute the completion time for each task.
  • We implemented a web app to control time during the experiment. It provides a common clock and shows, for each subject, the current task and the remaining time for it. We allotted a finite maximum time of 10 minutes per task.
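The timing logic behind such a web app can be sketched minimally. The class and its interface below are our own illustration, since the talk does not describe the actual tool's internals; only the 10-minute cap per task comes from the experiment.

```python
import time

TASK_LIMIT = 10 * 60  # seconds: the 10-minute cap per task

# Minimal sketch of the timing logic such a web app needs; the class
# and its interface are illustrative, not the authors' implementation.
class TaskTimer:
    def __init__(self, clock=time.monotonic):
        self.clock = clock  # injectable clock, handy for testing
        self.start = None

    def start_task(self):
        self.start = self.clock()

    def remaining(self):
        """Seconds left on the current task, floored at zero."""
        return max(0.0, TASK_LIMIT - (self.clock() - self.start))
```

A per-subject page would then render the subject's name, the current task, and `remaining()` against the common clock.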
  • While measuring time was straightforward, correctness was not. For this we built one oracle per treatment and used it to grade the participants' solutions. Moreover, two of us graded blindly: while correcting, they did not know whether a solution was obtained with our approach or with the baseline.
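Oracle-based grading with partial credit can be sketched as below, in the spirit of the point schemes shown later in the deck for task A4.1 (1/3 pt per class named in the correct position, 1/6 pt per correct class misplaced). The helper is illustrative, not the authors' grading code; the three class names are the FindBugs oracle answers.

```python
# Oracle for task A4.1 on FindBugs: the three classes with the most
# methods, in order (from the deck's oracle slide).
ORACLE_A41 = ["AbstractFrameModelingVisitor", "MainFrame", "BugInstance"]

def grade_top3(answer, oracle=ORACLE_A41):
    """1/3 pt for each class named in the correct position,
    1/6 pt for a correct class in the wrong position."""
    score = 0.0
    for i, cls in enumerate(answer[:3]):
        if i < len(oracle) and cls == oracle[i]:
            score += 1 / 3
        elif cls in oracle:
            score += 1 / 6
    return round(score, 4)
```

Because the scheme is mechanical, two graders applying it blindly should converge on the same scores, which is what blinding is meant to protect.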
  • What did we find from our experiment?
  • For the two dependent variables (correctness and completion time), we performed a two-way analysis of variance (ANOVA) in SPSS, at a 95% confidence interval.
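To illustrate what this analysis computes, here is a plain-Python two-way ANOVA for a balanced layout (equal cell sizes). It is only a sketch: the actual analysis ran in SPSS on an unbalanced design, which requires more general sums of squares, and the final F-to-p step against the F distribution is omitted. The example scores are invented.

```python
from itertools import product

def two_way_anova(cells):
    """F statistics for a balanced two-way layout.
    cells[(a, b)]: the n observations at level a of factor A and
    level b of factor B. Returns (F_A, F_B, F_AB)."""
    A = sorted({a for a, _ in cells})
    B = sorted({b for _, b in cells})
    n = len(next(iter(cells.values())))
    ys = [y for v in cells.values() for y in v]
    grand = sum(ys) / len(ys)
    mean_a = {a: sum(y for b in B for y in cells[(a, b)]) / (n * len(B)) for a in A}
    mean_b = {b: sum(y for a in A for y in cells[(a, b)]) / (n * len(A)) for b in B}
    cell = {k: sum(v) / n for k, v in cells.items()}
    ss_a = n * len(B) * sum((mean_a[a] - grand) ** 2 for a in A)
    ss_b = n * len(A) * sum((mean_b[b] - grand) ** 2 for b in B)
    ss_ab = n * sum((cell[(a, b)] - mean_a[a] - mean_b[b] + grand) ** 2
                    for a, b in product(A, B))
    ss_err = sum((y - cell[k]) ** 2 for k, v in cells.items() for y in v)
    df_a, df_b = len(A) - 1, len(B) - 1
    df_err = len(A) * len(B) * (n - 1)
    ms_err = ss_err / df_err
    return ss_a / df_a / ms_err, ss_b / df_b / ms_err, ss_ab / (df_a * df_b) / ms_err

# Invented correctness scores: a clear tool effect, no size effect.
scores = {
    ("CodeCity", "medium"): [6, 7, 6, 7],
    ("CodeCity", "large"):  [6, 7, 7, 6],
    ("Ecl+Excl", "medium"): [4, 5, 4, 5],
    ("Ecl+Excl", "large"):  [4, 5, 5, 4],
}
f_tool, f_size, f_inter = two_way_anova(scores)
```

With these scores the tool factor dominates while the size and interaction F statistics vanish, mirroring the "regardless of the object system size" structure of the research questions.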
  • Using our approach, the subjects obtained more correct results on average than with Eclipse and Excel, regardless of the object system size: 24 percent more correct, a statistically significant improvement. According to Cohen's d, this is a large effect size.
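Cohen's d with a pooled standard deviation can be computed as below; the talk does not state which exact variant was used, so treat this as a sketch. By Cohen's conventional thresholds, d at or above 0.8 counts as a large effect, which matches the reported d = 0.89 for correctness (and 0.63, moderate, for completion time).

```python
from math import sqrt

def cohens_d(x, y):
    """Cohen's d using the pooled sample standard deviation
    (n-1 denominators), a common convention; assumed, not confirmed,
    to match the paper's computation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled
```

The sign convention here makes d positive when the first group outperforms the second.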
  • In these boxplots you can see the distribution of the data points (each box covers the middle 50% of the data, around the median). The isolated point is an outlier: an exceptionally good result in the control group on the large object system.
  • What about completion time? Again, our approach outperformed the baseline regardless of the object system size, with an average improvement of 12 percent in task completion time. This result is statistically significant, and its effect size is moderate according to Cohen's d.
  • Here are the boxplots showing the distribution of the data points for the completion time dependent variable.
  • Although it sounds like a total eclipse for those of you who use Eclipse regularly, the good news is that we know what to do to make it better...
  • Our experiment is easily replicable: our technical report provides every detail needed, so if you are interested, please feel free to replicate it.
  • Although these are only a subset of the stories around the experiment, I need to wrap up here.
  • The main points: we designed our experiment based on a list of desiderata extracted from the body of literature. We then conducted it over a period of six months in four locations spanning three countries, engaging 41 subjects, roughly half of whom were industry practitioners. The main results show that our approach improved the performance of the experimental participants over the control participants in both correctness and completion time.
  • The point I'd like you to take away from this presentation is that our approach is at least a viable alternative to the current state of the practice: non-visual approaches for software exploration.
  • An experiment of this scale is not possible without the support and participation of many people. We would like to thank them all, and I thank you for your time and attention.
  • Software Systems as Cities: a Controlled Experiment

    1. Software Systems as Cities: A Controlled Experiment. Richard Wettel, Michele Lanza (REVEAL @ Faculty of Informatics, University of Lugano, Switzerland) and Romain Robbes (PLEIAD @ DCC, University of Chile, Chile)
    2. Software Systems as Cities
    3. City Metaphor (VISSOFT 2007)
    4. City Metaphor: class = building, package = district (VISSOFT 2007)
    6. City Metaphor: class = building, package = district; nesting level = district color (VISSOFT 2007)
    7. City Metaphor: number of methods (NOM) = height, number of attributes (NOA) = base size, number of lines of code (LOC) = building color; class = building, package = district; nesting level = district color (VISSOFT 2007)
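The metric-to-geometry mapping on this slide can be sketched directly. The NOM/NOA/LOC roles follow the slide; the linear LOC-to-shade scaling and its cap are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    height: int  # number of methods (NOM)
    base: int    # number of attributes (NOA)
    shade: int   # gray level from lines of code (LOC); 255 = most code

def to_building(name, nom, noa, loc, max_loc=10_000):
    """Map class metrics to a building per the slide's scheme.
    The linear LOC-to-shade scaling and the max_loc cap are assumptions."""
    shade = min(255, round(255 * loc / max_loc))
    return Building(name=name, height=nom, base=noa, shade=shade)

# Slide 10's example: FacadeMDRImpl (NOA 3, NOM 349, LOC 3,413)
# comes out as a skyscraper: very tall, narrow base.
b = to_building("FacadeMDRImpl", nom=349, noa=3, loc=3413)
```

The same mapping explains the other archetypes on the following slides: many attributes and no methods (JavaTokenTypes) yield a flat, wide "parking lot", while small values on all metrics yield a "house".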
    8. Program Comprehension: ArgoUML, 136,325 LOC (ICPC 2007)
    9. Program Comprehension
    10. FacadeMDRImpl, a skyscraper: NOA 3, NOM 349, LOC 3,413
    11. CPPParser, an office building: NOA 85, NOM 204, LOC 9,111
    12. JavaTokenTypes, a parking lot: NOA 173, NOM 0, LOC 0
    13. PropPanelEvent, a house: NOA 2, NOM 3, LOC 37
    14. Program Comprehension (ICPC 2007)
    15. Design Quality Assessment: disharmony map of the ArgoUML classes; brain class 8, god class 30, god + brain 6, data class 17, unaffected 1,715 (SoftVis 2008)
    16. System Evolution Analysis: time traveling (WCRE 2008)
    17. System Evolution Analysis: time traveling through ArgoUML, 8 major releases over 6 years (WCRE 2008)
    19. http://codecity.inf.usi.ch, implemented in Smalltalk (ICSE 2008 tool demo)
    20. Is it useful?
    21. A Controlled Experiment
    22. Design
    23. State of the art? (technical report, 2010)
    24. Design desiderata:
        1. Avoid comparing using a technique against not using it.
        2. Involve participants from the industry.
        3. Provide a not-so-short tutorial of the experimental tool to the participants.
        4. Avoid, whenever possible, giving the tutorial right before the experiment.
        5. Use the tutorial to cover both the research behind the approach and the tool.
        6. Find a set of relevant tasks.
        7. Choose real object systems that are relevant for the tasks.
        8. Include more than one object system in the design.
        9. Provide the same data to all participants.
        10. Limit the amount of time allowed for solving each task.
        11. Provide all the details needed to make the experiment replicable.
        12. Report results on individual tasks.
        13. Include tasks on which the expected result is not always to the advantage of the tool being evaluated.
        14. Take into account the possible wide range of experience level of the participants.
    26. Finding a baseline: 1. program comprehension, 2. design quality assessment, 3. system evolution analysis
    29. Finding a baseline: 1. program comprehension, 2. design quality assessment
    30. Tasks
    34. Tasks (program comprehension: 6, design quality assessment: 4; quantitative: 9, qualitative: 1):
        A1: Identify the convention used in the system to organize unit tests.
        A2.1 & A2.2: What is the spread of term T in the names of the classes, their attributes and methods?
        A3: Evaluate the change impact of class C, in terms of intensity and dispersion.
        A4.1: Find the three classes with the highest number of methods.
        A4.2: Find the three classes with the highest average number of lines of code per method.
        B1.1: Identify the package with the highest percentage of god classes.
        B1.2: Identify the god class with the largest number of methods.
        B2.1: Identify the dominant (affecting the highest number of classes) class-level design problem.
        B2.2: Write an overview of the class-level design problems in the system.
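The two quantitative retrieval tasks A4.1 and A4.2 amount to simple queries over per-class metrics. In this sketch the three FindBugs class names and their NOM values come from the oracle shown later in the deck; the LOC values and the extra class are invented for illustration.

```python
# Per-class metrics; NOM values for the three FindBugs classes match
# the deck's oracle, LOC values (and SmallHelper) are invented here.
classes = {
    "AbstractFrameModelingVisitor": {"nom": 195, "loc": 5_000},
    "MainFrame":                    {"nom": 119, "loc": 3_100},
    "BugInstance":                  {"nom": 118, "loc": 2_900},
    "SmallHelper":                  {"nom": 4,   "loc": 800},
}

def top3_by_nom(metrics):
    """Task A4.1: the three classes with the highest number of methods."""
    return sorted(metrics, key=lambda c: metrics[c]["nom"], reverse=True)[:3]

def top3_by_avg_loc_per_method(metrics):
    """Task A4.2: the three classes with the highest average LOC per method."""
    return sorted(metrics, key=lambda c: metrics[c]["loc"] / metrics[c]["nom"],
                  reverse=True)[:3]
```

That these tasks are one-liners over a metrics table is exactly why Excel was a credible half of the baseline; the experiment measures whether the visualization answers them faster and more reliably.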
    35. Main research questions:
        1. Does the use of CodeCity increase the correctness of the solutions to program comprehension tasks, compared to non-visual exploration tools, regardless of the object system size?
        2. Does the use of CodeCity reduce the time needed to solve program comprehension tasks, compared to non-visual exploration tools, regardless of the object system size?
    38. Variables of the experiment:
        dependent: correctness, completion time
        independent: tool (CodeCity vs. Eclipse + Excel), object system size (medium vs. large)
        controlled: experience level (beginner vs. advanced), background (academia vs. industry)
    42. Object systems: FindBugs (medium), 1,320 classes, 93,310 LOC; Azureus (large), 4,656 classes, 454,387 LOC
    45. The experiment's design: between-subjects, randomized-block
    47. Treatments: T1 = CodeCity on the large system, T2 = CodeCity on the medium system, T3 = Eclipse + Excel on the large system, T4 = Eclipse + Excel on the medium system. Blocks B1-B4: background (academia vs. industry) x experience (beginner vs. advanced)
    49. Execution
    55. Experimental runs: a one-hour training session followed by two-hour experiment sessions; e.g., day 1: training, sessions e1 and c1; day 2: e2 and c2; day 3: e3; day 4: c4 (e = experimental, c = control)
    56. Testing the waters: pilot sessions in Lugano, November and December 2009
    58. Timeline of the experiment, November 2009 to April 2010: sessions in Lugano, Bologna, Antwerp, and Bern, some of them conducted remotely
    59. Treatments and subjects:
                          academia            industry
                          beginner  advanced  advanced  total
        CodeCity large        2        2         6       10
        CodeCity medium       3        2         7       12
        Ecl+Excl large        2        3         3        8
        Ecl+Excl medium       2        5         4       11
        total                 9       12        20       41
    60. Collecting raw data
    61. Collecting raw data: solution
    62. Collecting raw data: completion time
    63. Controlling time
    65. Controlling time: common time; info on subjects, "Name (Task): Remaining time"
    66. Assessing correctness: oracles, one per treatment (e.g., T2: FindBugs analyzed with CodeCity), with partial points per task. For example, for A4.1 the three classes with the highest number of methods are AbstractFrameModelingVisitor (195 methods), MainFrame (119), and BugInstance (118, tied with TypeFrameModelingVisitor), graded 1/3 pt for each correctly placed class and 1/6 pt for each misplaced one.
    67. Assessing correctness: blinding
    68. Results
    69. Statistical test: two-way analysis of variance (ANOVA), 95% confidence interval
    72. Correctness. Does the use of CodeCity increase the correctness of the solutions to program comprehension tasks, compared to non-visual exploration tools, regardless of the object system size? Boxplots (Ecl+Excl vs. CodeCity, medium and large systems): 24.26% more correct with CodeCity, large effect size (d = 0.89)
    75. Completion time. Does the use of CodeCity reduce the time needed to solve program comprehension tasks, compared to non-visual exploration tools, regardless of the object system size? Boxplots: 12.01% less time with CodeCity, moderate effect size (d = 0.63)
    76. After the first round, CodeCity vs. Ecl+Excl: +24% correctness, -12% completion time