
Improving Software Development Management through Software Project Telemetry

Philip M. Johnson, Hongbing Kou, Michael Paulding, Qin Zhang, Aaron Kagawa, and Takuya Yamashita, University of Hawaii

Software project telemetry facilitates local, in-process decision making. Hackystat, an open source reference framework for this new approach, offers easy implementation.

Conventional wisdom in the software engineering research community says that metrics can make project management more effective.1 Software metrics range from internal product attributes such as size, complexity, and modularity to external process attributes such as effort, productivity, and reliability. Software metrics proponents quote theorists and practitioners from Galileo's "What is not measurable, make measurable"2 to Tom DeMarco's "You can neither predict nor control what you cannot measure."3

Despite metrics' theoretical potential, effectively applying them appears to be far from mainstream in practice. For example, a recent case study of more than 600 software professionals revealed that only 27 percent viewed metrics as "very" or "extremely" important to their software project decision-making process.4 The study also revealed that most respondents attempted to use metrics only for cost and schedule estimation.

Practitioners face various barriers in applying metrics (see the sidebar "Explaining the Gap between Software Metrics Theory and Practice"). It's no wonder that many practitioners find it daunting to apply best practices to their own situation. Indeed, the agile community generally argues against model-based metrics applications, promoting softer metrics for decision making.5 Fortunately, creating predictive models based on historical project data isn't the only possible way to apply software metrics to project management. Our team at the Collaborative Software Development Laboratory has developed a new telemetry-based approach.

Sidebar: Explaining the Gap between Software Metrics Theory and Practice

The Software Metrics Best Practices report1 reveals a substantial gap between software metrics theory and its actual implementation in practice. Why might this be? One hypothesis is that most practitioners are simply uninformed. Perhaps if they would subscribe to journals and attend conferences on software metrics, they could immediately implement current best practices and just as quickly improve their project management decision making.

All of us, theorists and practitioners alike, can always benefit from additional education. However, an alternative explanation is that perhaps the metrics methods that theorists use yield results that developers can't easily translate into practice. To see why, consider that much metrics research involves the following basic method:

1. Collect a set of process and product measures (such as size, effort, known defects, and complexity) for a set of completed software projects.
2. Generate a model that fits this data.
3. Claim that this model can now be used to predict future project characteristics.

For example, a model might predict that a future project of size S will require E person-months of effort; another model might predict that the implementation of a module with complexity C will be prone to defects with density D. Unfortunately, practitioners face several barriers to adopting these predictive, model-based metrics approaches.

First, to use the model unchanged, practitioners must confirm that the set of projects the researcher uses to generate the model is similar to their current projects. This is the context problem: unless the context associated with the process and project data in the model is sufficiently similar to the context associated with a practitioner's projects, the model's outputs might not apply to the practitioner.

Next, practitioners must also confirm that their future projects' context will remain similar to their previous ones. If practitioners can't meet those two conditions, they must recalibrate the model at some point. This involves replicating the model-building method within the practitioner's organization, with the risk that the resulting model won't work or that the organization might change yet again, rendering future project contexts different from those used to calibrate the model.

Finally, practitioners must assess the cost of metrics collection and analysis required by the model and ensure that the metrics and their interpretation are useful enough to justify these costs.

Sidebar reference: 1. P. Kulik and M. Haas, Software Metrics Best Practices 2003, tech. report, Accelera Research, 2003.
Software project telemetry

According to the Encyclopedia Britannica, telemetry is a "highly automated communications process by which measurements are made and other data collected at remote or inaccessible points and transmitted to receiving equipment for monitoring, display, and recording." Perhaps the highest-profile telemetry user is NASA, which has been using it since 1965 to monitor operations from the early Gemini missions to the modern Mars Rover flights. At NASA's Mission Control Center, for example, dozens of specialists monitor telemetry data from sensors attached to a space vehicle and its occupants. They use this data for many purposes, including early warning of anomalies indicating problems, better insight into the mission's status, and gauging the impact of incremental course or mission adjustments.

We define software project telemetry as a style of software metrics definition, collection, and analysis with the following essential properties (a brief illustrative sketch of such event data follows the list):

- The data is collected automatically by tools that regularly measure various characteristics of the project development environment. In other words, software developers work in a location that's remote or inaccessible to manual metrics collection activities. This contrasts with software metrics data that requires human intervention or developer effort to collect, such as Personal Software Process or Team Software Process metrics.6
- The data consists of a stream of time-stamped events where the time-stamp is significant for analysis. Software project telemetry data focuses on the changes over time in measurements of processes and products during development. This contrasts, for example, with Cocomo,7 where the precise time at which calibration data is collected generally isn't relevant.
- Both developers and managers can continuously and immediately access the data. Telemetry data isn't hidden away in some obscure database that the software quality improvement group guards. All project members can examine and interpret it.
- Telemetry analyses exhibit graceful degradation. Although complete telemetry data provides the best project management support, the analyses shouldn't be brittle: they should still provide value even if complete data over the entire project's lifespan isn't available. For example, telemetry collection and analysis should provide decision-making value even if these activities start midway through a project.
- Analysis includes in-process monitoring, control, and short-term prediction. Telemetry analyses represent the current project state and how it changes at various timescales; so far, days, weeks, and months are useful scales. The simultaneous display of multiple project state values and how they change over the same time periods allows opportunistic analyses: the emergent knowledge that one state variable appears to co-vary with another in the current project context.
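To make the time-stamped event stream concrete, here is a minimal sketch of what one raw sensor event and one derived telemetry point might look like. The record names and fields are assumptions made for this illustration; they are not Hackystat's actual data model.

    import java.time.Instant;
    import java.time.LocalDate;

    // Illustrative only: one raw, time-stamped measurement emitted by a sensor.
    // The timestamp matters because telemetry analyses track change over time.
    record SensorEvent(Instant timestamp,   // when the measurement was taken
                       String developer,    // who generated it
                       String project,      // which project it belongs to
                       String type,         // e.g. "Commit", "Build", "Activity"
                       double value) {}     // the measured quantity

    // Illustrative only: one point in a telemetry stream, i.e. an aggregated
    // value for a project over a fixed period (a day, week, or month).
    record TelemetryPoint(LocalDate periodStart, double value) {}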
Software project telemetry enables an incremental, distributed, visible, and experiential approach to project decision making. For example, if you found that both complexity and defect-density telemetry values were increasing, you could take corrective action (for example, by simplifying overly complex modules) to try to decrease the defect-density telemetry values. You could also monitor other telemetry data to see if such simplification has unintended side effects (such as performance degradation). Project management using telemetry thus involves cycles of hypothesis generation ("Does module complexity correlate with defect density?"), hypothesis testing ("If I reduce module complexity, will defect density decrease?"), and impact analysis ("Do the process changes required to reduce module complexity produce unintended side effects?"). Finally, software project telemetry supports decentralized project management: because all project members can access telemetry data, it helps both developers and managers engage in these management activities.

Software project telemetry is related to in-process software metrics, such as work done on software testing management.8 However, such work tends to focus on a narrow range of measures and management actions related to testing and, as a result, is amenable to manual data collection and analysis. Telemetry's broader scope necessitates automated collection and analysis, with correspondingly broader management decision-making support.

Software project telemetry also relates in interesting ways to both the Capability Maturity Model Integration and Agile methods. The CMMI, a revision of the original CMM, incorporates lessons learned and supports more modern, iterative development processes. In an article on transitioning from CMM to CMMI, Walker Royce asserts the importance of "instrument[ing] the process for objective quality control. Lifecycle assessments of both the process and all intermediate products must be tightly integrated into the process, using well-defined measures derived directly from the evolving engineering artifacts and integrated into all activities and teams."9 Software project telemetry appears well suited to the CMMI vision for process improvement.

Agile method proponents are traditionally suspicious of conventional process and product measurements and technologies for project decision making. Jim Highsmith frames the problem as follows: "Agile approaches excel in volatile environments in which conformance to plans made months in advance is a poor measure of success. If agility is important, one characteristic we should measure is that agility. Traditional measures of success emphasize conformance to predictions (plans). Agility emphasizes responsiveness to change. So there is a conflict because managers and executives say that they want flexibility, but then they still measure success based on conformance to plans."10 Software project telemetry provides a measurement infrastructure that isn't focused on achieving a particular long-range target but on process and product measurements that support adaptation and improvement.

Support for telemetry: Project Hackystat

For several years, we've been designing, implementing, and evaluating tools and techniques to support a telemetry-based software project management approach as part of Project Hackystat. Figure 1 illustrates the system's overall architecture.

Figure 1. Hackystat's basic architecture. Sensors are tool- and data-specific and are attached to tools that developers directly invoke (such as Eclipse or Emacs) and to tools that developers implicitly manipulate (such as Concurrent Versions System or an automated build process using Ant). Raw sensor data is sent via SOAP to the Hackystat Web server and its XML database; analysis results, telemetry data, and drill-downs reach project members through a browser and email.

Developers instrument the software development environment by installing Hackystat sensors into various tools, such as their editor, build system, and configuration management system. Once installed, the Hackystat sensors unobtrusively monitor development activities and send process and product data to a centralized Web service. Project members can log in to the Web server to see the collected raw data and run analyses that integrate and abstract the raw sensor data streams into telemetry. Hackystat also lets project members configure alerts that watch for specific conditions in the telemetry stream and send an email when these conditions occur.
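The sensor side of this architecture can be pictured as a small piece of code inside each tool that packages an observation and ships it to the server. The sketch below is only an illustration of that idea: the class name, endpoint URL, and JSON payload are hypothetical, and the real Hackystat sensors are tool-specific plug-ins that communicate with the server over SOAP.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Instant;

    // Illustrative sensor sketch: it observes a tool event (here, a commit)
    // and ships a time-stamped record to a central telemetry server.
    public class CommitSensor {
      private static final URI SERVER = URI.create("http://telemetry.example.org/sensordata");
      private final HttpClient client = HttpClient.newHttpClient();

      public void onCommit(String developer, String module, int linesChanged) throws Exception {
        String payload = String.format(
            "{\"type\":\"Commit\",\"when\":\"%s\",\"developer\":\"%s\",\"module\":\"%s\",\"linesChanged\":%d}",
            Instant.now(), developer, module, linesChanged);
        HttpRequest request = HttpRequest.newBuilder(SERVER)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();
        // Fire the measurement off; the developer never has to record anything manually.
        client.send(request, HttpResponse.BodyHandlers.discarding());
      }
    }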
Hackystat supports the following general classes of software project telemetry:

- Development telemetry is data that Hackystat gathers by observing the project developers' and managers' behavior based on their tool usage. It includes information about the files they edit, the time they spend using various tools, the changes they make to project artifacts, and the sequences of tool or command invocations. Hackystat provides sensors for collecting development telemetry from editors such as Eclipse or Emacs, Office applications such as Word or Frontpage, configuration management tools such as CVS, and issue management tools such as Jira.
- Build telemetry is data gathered by observing the results of tools invoked to compile, link, and test the system. Hackystat can gather such data from build tools such as Ant, Make, or CruiseControl, testing tools such as JUnit or CppUnit, and size and complexity tools such as LOCC.
- Execution telemetry is data that Hackystat gathers by observing the system's behavior as it executes. Hackystat provides sensors for collecting execution telemetry from tools that test for load or stress, such as JMeter.
- Usage telemetry is data that Hackystat gathers by observing the users' interaction with the system. This data includes user behavior such as frequency, types, and sequences of command invocations during a given time period in a given subsystem.

For a description of specific sensors and data types that Hackystat supports, see the sidebar "Sensors and Sensor Data Types."

The path from sensors to the telemetry report involves several steps. Hackystat sensors collect raw data by observing behavior in various client tools and then send it to the Hackystat server, which persists the data in an XML-based repository. Hackystat analysis mechanisms then abstract this raw data into DailyProjectData instances, which can involve synthesizing sensor data from multiple group members or sensors into a higher-level representation of process or product characteristics for a given project and day. Sets of DailyProjectData instances are then manipulated by Reduction Functions, which emit a sequence of numerical telemetry values for a given project at a timescale of days, weeks, or months.
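A reduction function can be thought of as a fold over per-day project abstractions that yields one number per period. Here is a minimal sketch of that idea under simplifying assumptions; the DailyProjectData record and the weekly-sum reduction below are stand-ins for illustration, not Hackystat's actual interfaces.

    import java.time.DayOfWeek;
    import java.time.LocalDate;
    import java.time.temporal.TemporalAdjusters;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    // Hypothetical, simplified stand-in for the per-project, per-day abstraction.
    record DailyProjectData(LocalDate day, double value) {}

    public class Reduction {
      // Reduce daily values into a weekly telemetry stream by summing each
      // ISO week's values, keyed by the Monday that starts the week.
      public static Map<LocalDate, Double> weeklyTotals(List<DailyProjectData> days) {
        Map<LocalDate, Double> stream = new TreeMap<>();
        for (DailyProjectData d : days) {
          LocalDate weekStart = d.day().with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
          stream.merge(weekStart, d.value(), Double::sum);
        }
        return stream;
      }
    }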
The previous steps occur automatically as part of the software project telemetry implementation. An important benefit of Hackystat is its explicit support for the exploratory nature of telemetry-based decision making. We've designed a Telemetry Display Language that we use with the Hackystat Web server to interactively define telemetry streams and specify how practitioners should compose them together into charts and reports for presentation to developers and managers.

Figure 2 shows an example telemetry report. This report illustrates the relationship between aggregate code churn (the lines all project members add and delete from the CVS repository) and aggregate build results (the number of build attempts and failures on a given day via the Ant build tool). Telemetry reports are always defined without reference to a specific project or time interval. We wait to specify the project and its time interval until we generate the report. Thus, project members can run this telemetry report over differing sets of days, or change the timescale to weeks or months, to see if different trends emerge from these alternative perspectives. In addition, once a telemetry report is defined, members of other projects on this server could use it to see if it adds decision-making value to their project management activities.

Figure 2. A telemetry report that compares code churn (lines added and deleted) to build results (number of build attempts and failures).
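In code terms, a report definition behaves like a function that is bound to a project and a time interval only when the report is generated. The sketch below illustrates that idea in plain Java; it is not the actual Telemetry Display Language, and all names are invented for the example.

    import java.time.LocalDate;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.BiFunction;

    // Hypothetical sketch: a report definition is a named set of stream queries;
    // the project and time interval are supplied only at generation time.
    public class ReportDefinition {
      // Each entry maps a stream label to a query taking (project, days) and
      // returning one value per day. The queries themselves are placeholders.
      private final Map<String, BiFunction<String, List<LocalDate>, List<Double>>> streams;

      public ReportDefinition(Map<String, BiFunction<String, List<LocalDate>, List<Double>>> streams) {
        this.streams = streams;
      }

      // Bind the definition to a concrete project and interval, producing the
      // data series that a chart such as figure 2 would plot.
      public Map<String, List<Double>> generate(String project, List<LocalDate> days) {
        Map<String, List<Double>> series = new LinkedHashMap<>();
        streams.forEach((label, query) -> series.put(label, query.apply(project, days)));
        return series;
      }
    }

A churn-versus-build chart in the spirit of figure 2 would register two such queries (code churn and build outcomes) and could then be rerun unchanged over any project, set of days, or coarser timescale.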
Sidebar: Sensors and Sensor Data Types

Hackystat provides an extensible architecture with respect to both sensors, the software plug-ins associated with development tools, and sensor data types, which describe the structure of a given type of raw metric data. This mapping isn't one-to-one: for example, the Eclipse sensor can send the Activity, FileMetric, and Review sensor data types, and both integrated development environment and size measurement sensors can collect the FileMetric sensor data type. The following lists describe the range of available sensors and sensor data types; each organization can decide whether to enable these facilities or implement its own custom extensions to meet its needs.

Hackystat sensors are currently implemented for the following tools:

- interactive development environments, including Eclipse, Emacs, JBuilder, Vim, and Visual Studio;
- office productivity applications, including Excel, Word, PowerPoint, and Frontpage;
- build tools, including Ant and the Unix command line;
- size measurement tools, including CCCC and LOCC;
- testing tools, including JUnit and JBlanket;
- configuration management tools, including Concurrent Versions System and Harvest; and
- defect tracking tools, including Jira.

Hackystat sensor data types include

- Activity, which represents data concerning the active time developers spend in their IDE;
- BufferTransition, which represents the sequence of files that developers visit;
- Review, which represents data on review issues generated during code or design inspections;
- FileMetric, which represents file size information;
- Build, which represents data about software build occurrences and their outcomes;
- Perf, which represents data about the occurrence and outcome of performance analysis activities such as load testing;
- CLI, which represents data about command line invocation occurrences;
- UnitTest, which represents data about unit test invocation occurrences and outcomes;
- Coverage, which represents data about the coverage that unit testing activities obtain;
- Commit, which represents data about developers' configuration management commit events; and
- Defect, which represents information about defect reports that developers or users post to defect tracking tools.

Hackystat is an open source system that's freely available for download and use. We encourage interested theorists and practitioners to visit the developer services Web site at www.hackystat.org for access to source code, binaries, and documentation. To try out Hackystat, we also maintain a public server at http://hackystat.ics.hawaii.edu.
Telemetry in practice: Managing integration-build failure

As a concrete example of software project telemetry in action, we're currently using it to investigate and improve our own daily (integration) build process. Hackystat consists of approximately 95 KLOC, organized into approximately 30 modules, with five to 10 active developers. The CVS configuration management system stores the sources, so developers can check out the latest version of the sources associated with any given module and commit their changes when finished. Developers rarely compile, build, and test against the entire code base; instead, they select a subset of the modules relevant to their work.

An automated nightly build process compiles, builds, and tests the latest committed code for all modules and sends email if the build fails. We can also invoke this integration build manually.

At the end of 2004, we discovered that our integration-build failure rate was significant. For the 300 daily integration-build attempts during that year, the build failed on 88 days, with a total of 95 distinct build errors. This high failure rate substantially impacted our productivity. Each failure generally required one or more team members to stop concurrent development, diagnose the problem, determine who was responsible for fixing it, and often wait until the corrections were committed before checking out or committing additional code.
To reduce the integration-build failure rate in 2005, we needed to better understand how, when, and why the previous builds had failed in 2004. To do this, we embarked on a series of analyses involving several Hackystat sensor data streams: Active Time, Commit, Build, and Churn. This revealed many useful insights about our build process:

- We could partition the 95 distinct build errors into six categories: coding style errors (14), compilation errors (25), unit test errors (40), build script errors (8), platform-related errors (3), and unknown errors (5).
- We found substantial differences between experienced and new developers with respect to integration-build failures. For example, the least experienced developer had the highest integration-build failure rate: an average of one build failure per four hours of Active Time. In contrast, more experienced developers averaged one build failure per 20 to 40 hours of Active Time.
- The two modules with the most dependencies on other modules also had the two highest numbers of build failures, and together they accounted for almost 30 percent of the failures.
- We found that the 88 days with build failures had, on average, a statistically significantly greater number of distinct module commits than days without build failures.
- We found (somewhat unexpectedly) that there was no relationship between build failure and the number of lines of code committed or the amount of Active Time before the commit. In other words, whether you worked five minutes or five hours before committing, or whether you changed five or 500 lines of code, didn't change the odds of causing a build failure.

These findings yield numerous hypotheses regarding ways of reducing integration-build failure, including increased support for new developers (such as pair programming) and refactoring modules to reduce coupling and frequent multimodule commits. The most provocative hypothesis, however, is that 82 percent of the integration failures in 2004 could have been prevented if the developers had run a full system compile and test before committing their changes.
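One way to probe a hypothesis like this is to relate the Commit stream to the Build stream and flag commits that no passing local build preceded. The following is a rough sketch of how such a check might be expressed; the simplified record shapes, the two-hour lookback window, and the class name are assumptions made for illustration, not the analysis the authors actually ran.

    import java.time.Instant;
    import java.util.List;

    // Hypothetical, simplified event shapes for this sketch.
    record CommitEvent(Instant when, String developer) {}
    record LocalBuild(Instant when, String developer, boolean passed) {}

    public class CommitAudit {
      // A commit counts as "verified" here if the same developer had a passing
      // local build in the two hours before the commit; anything else would be
      // flagged as a candidate for a preventable integration-build failure.
      public static boolean verified(CommitEvent c, List<LocalBuild> builds) {
        Instant windowStart = c.when().minusSeconds(2 * 60 * 60);
        return builds.stream().anyMatch(b ->
            b.developer().equals(c.developer())
            && b.passed()
            && !b.when().isBefore(windowStart)
            && b.when().isBefore(c.when()));
      }
    }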
The simplest, and most intrusive, process improvement following from this hypothesis is to require all developers to run a full system compile and test locally before every commit. However, even though some untested commits result in an integration-build failure, many other untested commits do not. Given that a full compile and test can take over 30 minutes and that multiple developers often perform multiple commits per day, this process improvement's productivity cost could actually exceed the benefits of reducing the level of integration-build failures. Other generic changes, such as moving to continuous integration, also tend to move build-failure costs around without necessarily reducing them.

Our current approach to managing integration-build failure comes from recognizing that the decision to invest the time to perform a full build and test before any particular commit depends on many factors. The decision involves developer familiarity with the system, the actual changes made to the code, and other developers' commits. To improve our productivity, we need to give developers tools and feedback to better decide when to precede a specific commit with a full build and test. To assess whether the feedback is working, we can use software project telemetry.

Our analyses demonstrate that understanding why any given integration build succeeds or fails requires multiple forms of process and product information, including the occurrence of local builds, the integration build's success or failure, the failure type, the modules that were committed, the developers responsible, the dependency relationships among modules, and the Active Time associated with the work prior to commits. Unfortunately, we haven't discovered any analytical models that can automate the decision-making process and tell developers before they commit whether they'll need full or partial testing. However, using software project telemetry, we can track the absolute level of integration-build failures over time, as well as the number of failures due to any particular cause, and even the number of failures that we could have prevented with a full build and test.

Using Hackystat's alert mechanism, we can give developers a detailed summary of the process and product state, including the factors we've identified as relevant, whenever an integration-build failure occurs. Our hypothesis is that, given appropriate feedback, each developer will naturally learn over time to be more sophisticated in deciding when to perform a full build and test. New developers might quickly learn to do it almost always, while more experienced developers might begin to recognize more complex indicators. One of our project goals for 2005 is to test this hypothesis and measure the results using integration-build failure telemetry streams.
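A minimal sketch of what such an alert might look like follows, assuming a flat map of snapshot fields and a stand-in mail interface rather than Hackystat's actual alert API; the field names are invented for the example.

    import java.util.Map;

    // Illustrative alert: when the latest integration build fails, mail the team
    // a summary of the factors the analysis identified as relevant.
    public class BuildFailureAlert {
      public void check(Map<String, Object> latest, Mailer mailer) {
        if (Boolean.TRUE.equals(latest.get("integrationBuildFailed"))) {
          String summary = "Integration build failed.\n"
              + "Failure type: " + latest.get("failureType") + "\n"
              + "Modules committed: " + latest.get("modulesCommitted") + "\n"
              + "Developers involved: " + latest.get("developers") + "\n"
              + "Active Time before commits: " + latest.get("activeTimeHours") + " h";
          mailer.send("dev-team@example.org", "Integration build failure", summary);
        }
      }

      // Hypothetical mail interface used only for this sketch.
      public interface Mailer {
        void send(String to, String subject, String body);
      }
    }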
The Telemetry Control Center

For even a moderately large, complex project, the number of possible telemetry charts and reports quickly explodes. For example, almost a dozen different sensor data streams monitor the Hackystat development project across 30 modules and five to 10 active developers. Given that each telemetry stream can be composed from one or more sensor data streams, project modules, and developers, you can see the problem: which of the literally thousands of possible charts should we be monitoring?

Our development group decided to address this problem by creating a new interface to the telemetry data that would let us passively monitor telemetry in a way that a standard Web browser wouldn't allow. We call this interface the Telemetry Control Center (see figure 3). The TCC consists of a standard PC with a multihead video display card that's attached to nine 17-inch liquid crystal display panels mounted on the wall in our laboratory. We implemented a new client-side software system called the TelemetryViewer, which periodically requests telemetry reports from the Hackystat server, retrieves the resulting image files, and displays them on the screens. The TelemetryViewer reads in an XML configuration file at startup, which tells it which reports to retrieve, where to display them, and how long to wait before retrieving the next set of reports. The TelemetryViewer's default behavior is to automatically and repeatedly cycle through the set of telemetry scenes in the XML configuration file.

Figure 3. The Telemetry Control Center, which shows one scene consisting of nine telemetry reports. The associated TelemetryViewer software controls the TCC by automatically cycling through a set of scenes at a predefined interval. This telemetry viewer is configured to show a dozen separate scenes, each displayed for two minutes.

The TCC frees us from the "tyranny of the browser" by making a sequence of telemetry report sets continuously available without any developer action. It also lets us more easily look for relationships between telemetry streams, because the system can display nine telemetry reports simultaneously. Finally, it provides a new kind of passive awareness about the project's state to all developers; rather than having to decide to generate a report or wait for a weekly project update meeting, developers can simply glance at the TCC whenever they're passing through the lab to get a perspective on the state of development. Individuals can still get all TCC reports on their local workstations if they so desire, although without the simultaneous display.
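The viewer's core behavior amounts to a fetch-display-wait loop over the configured scenes. Here is a sketch of that loop under assumed interfaces; the class names, the scene record, and the configuration shape are invented for the illustration and are not the TelemetryViewer's real code.

    import java.util.List;

    // Illustrative scene-cycling loop: fetch each scene's report images from the
    // server, show one per display panel, wait, then move to the next scene.
    public class TelemetryViewerLoop {
      record Scene(List<String> reportUrls, long displayMillis) {}

      public void run(List<Scene> scenes, ReportClient client, DisplayWall wall)
          throws InterruptedException {
        while (true) {                           // cycle indefinitely, as the TCC does
          for (Scene scene : scenes) {
            for (int panel = 0; panel < scene.reportUrls().size(); panel++) {
              byte[] image = client.fetchReportImage(scene.reportUrls().get(panel));
              wall.show(panel, image);           // one report per display panel
            }
            Thread.sleep(scene.displayMillis()); // e.g. two minutes per scene
          }
        }
      }

      // Hypothetical stand-ins for the server client and the nine-panel display.
      public interface ReportClient { byte[] fetchReportImage(String url); }
      public interface DisplayWall { void show(int panel, byte[] image); }
    }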
Lessons learned

Our first lesson learned is that software project telemetry can help support project management decision making. In addition to our work on integration-build failure, telemetry data has also revealed to us a recent, subtle slide in testing coverage over the past six months that has co-occurred with two significant refactoring episodes (and the resulting code churn). As a result, we're allocating additional effort to software review, with a focus on assessing new modules' test quality. We hope that this project management decision will eventually reverse the declining coverage trend.

Hackystat provides an open source reference framework for software project telemetry, but it isn't the only technology available. Commercial measurement tools can also provide infrastructure support, or your organization could decide to develop technology in-house. The key issue is to preserve software project telemetry's essential properties.

We've learned that having an automated daily build mechanism adds significant value to software project telemetry. It provides not only a convenient hook into which you can add sensors to reliably obtain daily product measurement information but also a kind of heartbeat for the development project that makes all the metrics more comparable, accessible, and current.

As with any measurement approach, you must consider social issues. It's possible to misinterpret and misuse software project telemetry data. For example, telemetry data is intrinsically incomplete with respect to measuring effort. Hackystat implements a measure called Active Time, which is the time developers and managers spend editing files related to a given project in tools such as Eclipse, Word, or Excel. However, many legitimate and productive activities, including meetings, email, and hallway conversations, are outside the telemetry-based measurement's scope. Telemetry can't measure effort in its broadest sense, so a project member with little Active Time could still be contributing significantly to the project. Indeed, some organizations might decide not to collect measures such as Active Time, simply because it's susceptible to misinterpretation and abuse. For an excellent analysis of these and other forms of "measurement dysfunction," see Robert Austin's Measuring and Managing Performance in Organizations.11
Adopting a software project telemetry approach to measurement and decision making tends to exert a kind of gravitational force toward increased use of tools for managing process and products. For example, a small development team might begin by informally managing tasks and defects using email or index cards. As the team adopts telemetry-based decision making, it will inevitably want to relate development process and product characteristics to open tasks and defect severity levels. However, it won't be able to unless it moves to an issue management tool such as Bugzilla or Jira that can support sensor-based measurement.

Does software project telemetry provide a silver bullet that solves all problems associated with metrics-based software project management and decision making? Of course not. While software project telemetry does address certain problems inherent in traditional measurement and provides a new approach to more local, in-process decision making, it raises its own issues that future research and practice must address.

Telemetry data's decision-making value is only as good as the data that sensors can obtain. Clearly, some threshold exists for sensor data beneath which the telemetry's decision-making value is compromised. But what is this threshold, and how does it vary with the kinds of decision making the development group requires? What sets of sensors and sensor data types are best suited to what project development contexts?

What are telemetry-based data's intrinsic limitations? A good way to investigate this question involves qualitative, ethnographic research, in which a researcher trained in these methods observes a software development group to learn what kinds of information relevant to project management decision making occur outside the realm of telemetry data.

While manual investigation of telemetry streams and their relationships to each other is certainly an important and necessary first step, the sheer number of possible relationships and interactions means that only a small percentage of them can be inspected and monitored manually on an ongoing basis. An intriguing future direction is exploring whether data mining and clustering algorithms can reveal relationships in the telemetry data that manual exploration might not discover.

What are the costs associated with initially setting up software project telemetry using a system like Hackystat? Unfortunately, we can't use Hackystat to measure the effort involved in installing Hackystat. Furthermore, some organizations will require development of new sensors or analyses not available in the standard distribution. Better understanding of these costs will aid adoption of this technology and method.

About the Authors

Philip Johnson is a professor of information and computer sciences at the University of Hawaii and the director of its Collaborative Software Development Laboratory. His research interests include software engineering and computer-supported collaborative work. He received his PhD in computer science from the University of Massachusetts. He's a member of the IEEE and the ACM. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; johnson@hawaii.edu.

Hongbing Kou is a PhD student at the University of Hawaii and a researcher in its Collaborative Software Development Laboratory. His research interests include software engineering and software process modeling, including agile methods. He received his MS in computer science from the University of Hawaii. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; hongbing@hawaii.edu.

Michael Paulding is a PhD student at the University of Hawaii and a researcher in its Collaborative Software Development Laboratory. His research interests include software engineering practices and productivity studies of high-performance computing systems. He received his BS in computer science and engineering from Bucknell University. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; mpauldin@hawaii.edu.

Qin (Cedric) Zhang is a PhD student at the University of Hawaii and a researcher in its Collaborative Software Development Laboratory. He received his MA in economics and his MS in information and computer sciences from the University of Hawaii. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; qzhang@hawaii.edu.

Aaron Kagawa is a master's student in information and computer sciences at the University of Hawaii, a graduate research assistant, and a member of the Collaborative Software Development Laboratory. His research interests include software project management, software quality, and software inspections. He received his BS with honors in information and computer sciences from the University of Hawaii. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; kagawaa@hawaii.edu.

Takuya Yamashita is a master's student at the University of Hawaii and a researcher in its Collaborative Software Development Laboratory. His research interests include software engineering and automated support for code inspection. He received his BA in law from Hosei University and his BA in economics from the University of Hawaii at Hilo. Contact him at the Collaborative Software Development Laboratory, 1680 East-West Rd., POST 307, Univ. of Hawaii, Honolulu, HI 96822; takuyay@hawaii.edu.
Acknowledgments

Partial support for Project Hackystat and Software Project Telemetry was provided by NSF grant CCR-0234568, NASA, and Sun Microsystems.

References

1. N.E. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, Thomson Computer Press, 1997.
2. L. Finkelstein, "What Is Not Measurable, Make Measurable," Measurement and Control, vol. 15, 1982, pp. 25–32.
3. T. DeMarco, Controlling Software Projects, Yourdon Press, 1982.
4. P. Kulik and M. Haas, Software Metrics Best Practices 2003, tech. report, Accelera Research, 2003.
5. K. Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, 2000.
6. W.S. Humphrey, A Discipline for Software Engineering, Addison-Wesley, 1995.
7. B. Boehm, Software Engineering Economics, Prentice Hall, 1981.
8. S.H. Kan, J. Parrish, and D. Manlove, "In-Process Metrics for Software Testing," IBM Systems J., vol. 40, no. 1, 2001, pp. 220–241.
9. W. Royce, "CMM vs. CMMI: From Conventional to Modern Software Management," Rational Edge, Feb. 2002; www-106.ibm.com/developerworks/rational/library/content/RationalEdge/archives/feb02.html.