Machine Learning that Matters

Kiri L. Wagstaff (kiri.l.wagstaff@jpl.nasa.gov)
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 USA

Appearing in Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012. Copyright 2012 California Institute of Technology.

Abstract

Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field's energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.

1. Introduction

At one time or another, we all encounter a friend, spouse, parent, child, or concerned citizen who, upon learning that we work in machine learning, wonders "What's it good for?" The question may be phrased more subtly or elegantly, but no matter its form, it gets at the motivational underpinnings of the work that we do. Why do we invest years of our professional lives in machine learning research? What difference does it make, to ourselves and to the world at large?

Much of machine learning (ML) research is inspired by weighty problems from biology, medicine, finance, astronomy, etc. The growing area of computational sustainability (Gomes, 2009) seeks to connect ML advances to real-world challenges in the environment, economy, and society. The CALO (Cognitive Assistant that Learns and Organizes) project aimed to integrate learning and reasoning into a desktop assistant, potentially impacting everyone who uses a computer (SRI International, 2003–2009). Machine learning has effectively solved spam email detection (Zdziarski, 2005) and machine translation (Koehn et al., 2003), two problems of global import. And so on.

And yet we still observe a proliferation of published ML papers that evaluate new algorithms on a handful of isolated benchmark data sets. Their "real world" experiments may operate on data that originated in the real world, but the results are rarely communicated back to the origin. Quantitative improvements in performance are rarely accompanied by an assessment of whether those gains matter to the world outside of machine learning research.

This phenomenon occurs because there is no widespread emphasis, in the training of graduate student researchers or in the review process for submitted papers, on connecting ML advances back to the larger world. Even the rich assortment of applications-driven ML research often fails to take the final step to translate results into impact.

Many machine learning problems are phrased in terms of an objective function to be optimized. It is time for us to ask a question of larger scope: what is the field's objective function? Do we seek to maximize performance on isolated data sets? Or can we characterize progress in a more meaningful way that measures the concrete impact of machine learning innovations?

This short position paper argues for a change in how we view the relationship between machine learning and science (and the rest of society). This paper does not contain any algorithms, theorems, experiments, or results. Instead it seeks to stimulate creative thought and research into a large but relatively unaddressed issue that underlies much of the machine learning field.

The contributions of this work are 1) the clear identification and description of a fundamental problem: the frequent lack of connection between machine learning research and the larger world of scientific inquiry and humanity, 2) suggested first steps towards addressing this gap, 3) the issuance of relevant Impact Challenges to the machine learning community, and 4) the identification of several key obstacles to machine learning impact, as an aid for focusing future research efforts.
Whether or not the reader agrees with all statements in this paper, if it inspires thought and discussion, then its purpose has been achieved.

2. Machine Learning for Machine Learning's Sake

This section highlights aspects of the way ML research is conducted today that limit its impact on the larger world. Our goal is not to point fingers or critique individuals, but instead to initiate a critical self-inspection and constructive, creative changes. These problems do not trouble all ML work, but they are common enough to merit our effort in eliminating them.

The argument here is also not about "theory versus applications." Theoretical work can be as inspired by real problems as applied work can. The criticisms here focus instead on the limitations of work that lies between theory and meaningful applications: algorithmic advances accompanied by empirical studies that are divorced from true impact.

2.1. Hyper-Focus on Benchmark Data Sets

Increasingly, ML papers that describe a new algorithm follow a standard evaluation template. After presenting results on synthetic data sets to illustrate certain aspects of the algorithm's behavior, the paper reports results on a collection of standard data sets, such as those available in the UCI archive (Frank & Asuncion, 2010). A survey of the 152 non-cross-conference papers published at ICML 2011 reveals:

  148/152 (93%) include experiments of some sort
   57/148 (39%) use synthetic data
   55/148 (37%) use UCI data
   34/148 (23%) use ONLY UCI and/or synthetic data
    1/148  (1%) interpret results in domain context

The possible advantages of using familiar data sets include 1) enabling direct empirical comparisons with other methods and 2) greater ease of interpreting the results, since (presumably) the data set properties have been widely studied and understood. However, in practice direct comparisons fail because we have no standard for reproducibility. Experiments vary in methodology (train/test splits, evaluation metrics, parameter settings), implementations, or reporting. Interpretations are almost never made. Why is this?

First, meaningful interpretations are hard. Virtually none of the ML researchers who work with these data sets happen to also be experts in the relevant scientific disciplines. Second, and more insidiously, the ML field neither motivates nor requires such interpretation. Reviewers do not inquire as to which classes were well classified and which were not, what the common error types were, or even why the particular data sets were chosen. There is no expectation that the authors report whether an observed x% improvement in performance promises any real impact for the original domain. Even when the authors have forged a collaboration with qualified experts, little paper space is devoted to interpretation, because we (as a field) do not require it.

The UCI archive has had a tremendous impact on the field of machine learning. Legions of researchers have chased after the best iris or mushroom classifier. Yet this flurry of effort does not seem to have had any impact on the fields of botany or mycology. Do scientists in these disciplines even need such a classifier? Do they publish about this subject in their journals?

There is not even agreement in the community about what role the UCI data sets serve (benchmark? "real-world"?). They are of less utility than synthetic data, since we did not control the process that generated them, and yet they fail to serve as real-world data due to their disassociation from any real-world context (experts, users, operational systems, etc.). It is as if we have forgotten, or chosen to ignore, that each data set is more than just a matrix of numbers. Further, the existence of the UCI archive has tended to over-emphasize research effort on classification and regression problems, at the expense of other ML problems (Langley, 2011). Informal discussions with other researchers suggest that it has also de-emphasized the need to learn how to formulate problems and define features, leaving young researchers unprepared to tackle new problems.

This trend has been going on for at least 20 years. Jaime Carbonell, then editor of Machine Learning, wrote in 1992 that "the standard Irvine data sets are used to determine percent accuracy of concept classification, without regard to performance on a larger external task" (Carbonell, 1992). Can we change that trend for the next 20 years? Do we want to?

2.2. Hyper-Focus on Abstract Metrics

There are also problems with how we measure performance. Most often, an abstract evaluation metric (classification accuracy, root mean squared error or RMSE, F-measure (van Rijsbergen, 1979), etc.) is used. These metrics are abstract in that they explicitly ignore or remove problem-specific details, usually so that numbers can be compared across domains. Does this seemingly obvious strategy provide us with useful information?
[Figure 1. Three stages of a machine learning research program. Top row ("necessary preparation"): phrase problem as a machine learning task; collect data; select or generate features. Middle row (the "machine learning contribution"): choose or develop algorithm; choose metrics, conduct experiments. Bottom row: interpret results; publicize results to relevant user community; persuade users to adopt technique; impact. Current publishing incentives are highly biased towards the middle row only.]

It is recognized that the performance obtained by training a model M on data set X may not reflect M's performance on other data sets drawn from the same problem, i.e., training loss is an underestimate of test loss (Hastie et al., 2001). Strategies such as splitting X into training and test sets or cross-validation aim to estimate the expected performance of M, trained on all of X, when applied to future data X'.
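For concreteness, here is a minimal sketch of both estimation strategies. The toolkit (scikit-learn) and the data set are illustrative assumptions; the paper itself names no code.

```python
# A minimal sketch of holdout and cross-validation estimates of
# generalization performance (scikit-learn is assumed; any toolkit works).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)

# Holdout: train on part of X, measure accuracy on the held-out rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("holdout accuracy: %.3f" % model.score(X_te, y_te))

# Cross-validation: rotate the held-out portion to estimate how a model
# trained on all of X would fare on future data X'.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Both numbers remain estimates of an abstract metric; nothing in them says what the achieved accuracy is worth to a botanist.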
However, these metrics tell us nothing about the impact of different performance. For example, 80% accuracy on iris classification might be sufficient for the botany world, but to classify as poisonous or edible a mushroom you intend to ingest, perhaps 99% (or higher) accuracy is required. The assumption of cross-domain comparability is a mirage created by the application of metrics that have the same range, but not the same meaning. Suites of experiments are often summarized by the average accuracy across all data sets. This tells us nothing at all useful about generalization or impact, since the meaning of an x% improvement may be very different for different data sets. A related problem is the persistence of "bake-offs" or "mindless comparisons among the performance of algorithms that reveal little about the sources of power or the effects of domain characteristics" (Langley, 2011).

Receiver Operating Characteristic (ROC) curves are used to describe a system's behavior for a range of threshold settings, but they are rarely accompanied by a discussion of which performance regimes are relevant to the domain. The common practice of reporting the area under the curve (AUC) (Hanley & McNeil, 1982) has several drawbacks, including summarizing performance over all possible regimes even if they are unlikely ever to be used (e.g., extremely high false positive rates), and weighting false positives and false negatives equally, which may be inappropriate for a given problem domain (Lobo et al., 2008). As such, it is insufficiently grounded to meaningfully measure impact.

Methods from statistics such as the t-test (Student, 1908) are commonly used to support a conclusion about whether a given performance improvement is "significant" or not. Statistical significance is a function of a set of numbers; it does not compute real-world significance. Of course we all know this, but it rarely inspires the addition of a separate measure of (true) significance. How often, instead, a t-test result serves as the final punctuation to an experimental utterance!
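The gap between the two notions of significance is easy to demonstrate. A sketch follows (scipy is assumed; all scores are synthetic, invented purely for illustration):

```python
# Sketch: a performance difference can be statistically significant yet
# practically negligible. The scores below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Paired accuracies of two algorithms over 30 repeated trials;
# algorithm B beats A by about 0.2 percentage points, very consistently.
acc_a = rng.normal(loc=0.850, scale=0.005, size=30)
acc_b = acc_a + rng.normal(loc=0.002, scale=0.001, size=30)

t_stat, p_value = stats.ttest_rel(acc_b, acc_a)
print("mean gain: %.4f  p-value: %.2g" % ((acc_b - acc_a).mean(), p_value))
# The p-value is tiny, so the gain is "significant" in the statistical
# sense, yet a 0.2-point gain may mean nothing to the originating domain.
# The t-test measures the consistency of a difference, not whether the
# difference matters.
```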
2.3. Lack of Follow-Through

It is easy to sit in your office and run a Weka (Hall et al., 2009) algorithm on a data set you downloaded from the web. It is very hard to identify a problem for which machine learning may offer a solution, determine what data should be collected, select or extract relevant features, choose an appropriate learning method, select an evaluation method, interpret the results, involve domain experts, publicize the results to the relevant scientific community, persuade users to adopt the technique, and (only then) to truly have made a difference (see Figure 1). An ML researcher might well feel fatigued or daunted just contemplating this list of activities. However, each one is a necessary component of any research program that seeks to have a real impact on the world outside of machine learning.
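Just how low the barrier is for the easy part can be made concrete. The sketch below (scikit-learn standing in for Weka; the data set and calls are illustrative assumptions) performs the entire middle row of Figure 1:

```python
# The whole "machine learning contribution" row of Figure 1, in a few
# lines: load a data set, choose an algorithm, run an experiment.
# scikit-learn stands in for Weka here; the point is the same.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# "A data set you downloaded from the web" (here, one bundled with the
# library, which is even easier).
X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(random_state=0)
print("accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```

Every hard step the paragraph lists (problem formulation, data collection, expert involvement, interpretation, adoption) is absent from these few lines, which is precisely the point.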
Our field imposes an additional obstacle to impact. Generally speaking, only the activities in the middle row of Figure 1 are considered "publishable" in the ML community. The Innovative Applications of Artificial Intelligence conference and the International Conference on Machine Learning and Applications are exceptions. The International Conference on Machine Learning (ICML) experimented with an (unreviewed) "invited applications" track in 2010. Yet to be accepted as a mainstream paper at ICML or Machine Learning or the Journal of Machine Learning Research, authors must demonstrate a "machine learning contribution" that is often narrowly interpreted by reviewers as "the development of a new algorithm or the explication of a novel theoretical analysis." While these are excellent, laudable advances, unless there is an equal expectation for the bottom row of Figure 1, there is little incentive to connect these advances with the outer world.

Reconnecting active research to relevant real-world problems is part of the process of maturing as a research field. To the rest of the world, these are the only visible advances of ML, its only contributions to the larger realm of science and human endeavors.

3. Making Machine Learning Matter

Rather than following the letter of machine learning, can we reignite its spirit? This is not simply a matter of reporting on isolated applications. What is needed is a fundamental change in how we formulate, attack, and evaluate machine learning research projects.

3.1. Meaningful Evaluation Methods

The first step is to define or select evaluation methods that enable direct measurement, wherever possible, of the impact of ML innovations. In addition to traditional measures of performance, we can measure dollars saved, lives preserved, time conserved, effort reduced, quality of living increased, and so on. Focusing our metrics on impact will help motivate upstream restructuring of research efforts. They will guide how we select data sets, structure experiments, and define objective functions. At a minimum, publications can report how a given improvement in accuracy translates to impact for the originating problem domain.

The reader may wonder how this can be accomplished, if our goal is to develop general methods that apply across domains. Yet (as noted earlier) the common approach of using the same metric for all domains relies on an unstated, and usually unfounded, assumption that it is possible to equate an x% improvement in one domain with that in another. Instead, if the same method can yield profit improvements of $10,000 per year for an auto-tire business as well as the avoidance of 300 unnecessary surgical interventions per year, then it will have demonstrated a powerful, wide-ranging utility.
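One lightweight way to report such a translation is to price the confusion matrix in domain units. A sketch follows; every count and unit value is invented for illustration, and a real study would elicit them from domain experts (as Section 3.2 suggests):

```python
# Sketch: translate confusion-matrix counts into domain impact units.
# All counts and unit values below are hypothetical placeholders.
def yearly_impact(tp, fp, fn, value_hit, cost_false_alarm, cost_miss):
    """Net annual domain value (e.g., dollars) of a deployed classifier."""
    return tp * value_hit - fp * cost_false_alarm - fn * cost_miss

# The same experiment, reported twice: once before and once after a
# modest improvement in the underlying model.
before = yearly_impact(tp=500, fp=60, fn=65,
                       value_hit=200, cost_false_alarm=150, cost_miss=400)
after = yearly_impact(tp=520, fp=45, fn=45,
                      value_hit=200, cost_false_alarm=150, cost_miss=400)
print("domain value of the improvement: $%d per year" % (after - before))
```

A plot of accuracy alone would show a gain of a few percentage points; the dollar figure is the number the originating domain can act on.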
3.2. Involvement of the World Outside ML

Many ML investigations involve domain experts as collaborators who help define the ML problem and label data for classification or regression tasks. They can also provide the missing link between an ML performance plot and its significance to the problem domain. This can help reduce the number of cases where an ML system perfectly solves a sub-problem of little interest to the relevant scientific community, or where the ML system's performance appears good numerically but is insufficiently reliable to ever be adopted.

We could also solicit short "Comment" papers, to accompany the publication of a new ML advance, that are authored by researchers with relevant domain expertise but who were uninvolved with the ML research. They could provide an independent assessment of the performance, utility, and impact of the work. As an additional benefit, this informs new communities about how, and how well, ML methods work. Raising awareness, interest, and buy-in from ecologists, astronomers, legal experts, doctors, etc., can lead to greater opportunities for machine learning impact.

3.3. Eyes on the Prize

Finally, we should consider potential impact when selecting which research problems to tackle, not merely how interesting or challenging they are from the ML perspective. How many people, species, countries, or square meters would be impacted by a solution to the problem? What level of performance would constitute a meaningful improvement over the status quo?

Warrick et al. (2010) provide an example of ML work that tackles all three aspects. Working with doctors and clinicians, they developed a system to detect fetal hypoxia (oxygen deprivation) and enable emergency intervention that literally saves babies from brain injuries or death. After publishing their results, which demonstrated the ability to have detected 50% of fetal hypoxia cases early enough for intervention, with an acceptable false positive rate of 7.5%, they are currently working on clinical trials as the next step towards wide deployment. Many such examples exist. This paper seeks to inspire more.

4. Machine Learning Impact Challenges

One way to direct research efforts is to articulate ambitious and meaningful challenges. In 1992, Carbonell articulated a list of challenges for the field, not to increase its impact but instead to "put the fun back into machine learning" (Carbonell, 1992). They included:

1. Discovery of a new physical law leading to a published, refereed scientific article.
2. Improvement of 500 USCF/FIDE chess rating points over a class B level start.
3. Improvement in planning performance of 100 fold in two different domains.
4. Investment earnings of $1M in one year.
5. Outperforming a hand-built NLP system on a task such as translation.
6. Outperforming all hand-built medical diagnosis systems with an ML solution that is deployed and regularly used at at least two institutions.

Because impact was not the guiding principle, these challenges range widely along that axis. An improved chess player might arguably have the lowest real-world impact, while a medical diagnosis system in active use could impact many human lives.

We therefore propose the following six Impact Challenges as examples of machine learning that matters:

1. A law passed or legal decision made that relies on the result of an ML analysis.
2. $100M saved through improved decision making provided by an ML system.
3. A conflict between nations averted through high-quality translation provided by an ML system.
4. A 50% reduction in cybersecurity break-ins through ML defenses.
5. A human life saved through a diagnosis or intervention recommended by an ML system.
6. Improvement of 10% in one country's Human Development Index (HDI) (Anand & Sen, 1994) attributable to an ML system.

These challenges seek to capture the entire process of a successful machine learning endeavor, including performance, infusion, and impact. They differ from existing challenges such as the DARPA Grand Challenge (Buehler et al., 2007), the Netflix Prize (Bennett & Lanning, 2007), and the Yahoo! Learning to Rank Challenge (Chapelle & Chang, 2011) in that they do not focus on any single problem domain, nor a particular technical capability. The goal is to inspire the field of machine learning to take the steps needed to mature into a valuable contributor to the larger world.

No such list can claim to be comprehensive, including this one. It is hoped that readers of this paper will be inspired to formulate additional Impact Challenges that will benefit the entire field.

Much effort is often put into chasing after goals in which an ML system outperforms a human at the same task. The Impact Challenges in this paper also differ from that sort of goal in that human-level performance is not the gold standard. What matters is achieving performance sufficient to make an impact on the world. As an analogy, consider a sick child in a rural setting. A neighbor who runs two miles to fetch the doctor need not achieve Olympic-level running speed (performance), so long as the doctor arrives in time to address the sick child's needs (impact).

5. Obstacles to ML Impact

Let us imagine a machine learning researcher who is motivated to tackle problems of widespread interest and impact. What obstacles to success can we foresee? Can we set about eliminating them in advance?

Jargon. This issue is endemic to all specialized research fields. Our ML vocabulary is so familiar that it is difficult even to detect when we're using a specialized term. Consider a handful of examples: "feature extraction," "bias-variance tradeoff," "ensemble methods," "cross-validation," "low-dimensional manifold," "regularization," "mutual information," and "kernel methods." These are all basic concepts within ML that create conceptual barriers when used glibly to communicate with others. Terminology can serve as a barrier not just for domain experts and the general public but even between closely related fields such as ML and statistics (van Iterson et al., 2012). We should explore and develop ways to express the same ideas in more general terms, or even better, in terms already familiar to the audience. For example, "feature extraction" can be termed "representation;" the notion of "variance" can be "instability;" "cross-validation" is also known as "rotation estimation" outside of ML; "regularization" can be explained as "choosing simpler models;" and so on. These terms are not as precise, but they are more likely to be understood, and from there a conversation about further subtleties can ensue.

Risk. Even when an ML system is no more, or less, prone to error than a human performing the same task, relying on the machine can feel riskier because it raises new concerns. When errors are made, where do we assign culpability? What level of ongoing commitment do the ML system designers have for adjustments, upgrades, and maintenance? These concerns are especially acute for fields such as medicine, spacecraft, finance, and real-time systems: exactly those settings in which a large impact is possible. An increased sphere of impact naturally also increases the associated risk, and we must address those concerns (through technology, education, and support) if we hope to infuse ML into real systems.

Complexity. Despite the proliferation of ML toolboxes and libraries, the field has not yet matured to a point where researchers from other areas can simply apply ML to the problem of their choice (as they might do with methods from physics, math, mechanical engineering, etc.). Attempts to do so often fail due to lack of knowledge about how to phrase the problem, what features to use, how to search over parameters, etc. (i.e., the top row of Figure 1). For this reason, it has been said that ML solutions come "packaged in a Ph.D."; that is, it requires the sophistication of a graduate student or beyond to successfully deploy ML to solve real problems, and that same Ph.D. is needed to maintain and update the system after its deployment. It is evident that this strategy does not scale to the goal of widespread ML impact. Simplifying, maturing, and robustifying ML algorithms and tools, while itself an abstract activity, can help erode this obstacle and permit wider, independent uses of ML.
6. Conclusions

Machine learning offers a cornucopia of useful ways to approach problems that otherwise defy manual solution. However, much current ML research suffers from a growing detachment from those real problems. Many investigators withdraw into their private studies with a copy of the data set and work in isolation to perfect algorithmic performance. Publishing results to the ML community is the end of the process. Successes usually are not communicated back to the original problem setting, or not in a form that can be used.

Yet these opportunities for real impact are widespread. The worlds of law, finance, politics, medicine, education, and more stand to benefit from systems that can analyze, adapt, and take (or at least recommend) action. This paper identifies six examples of Impact Challenges and several real obstacles in the hope of inspiring a lively discussion of how ML can best make a difference. Aiming for real impact does not just increase our job satisfaction (though it may well do that); it is the only way to get the rest of the world to notice, recognize, value, and adopt ML solutions.

Acknowledgments

We thank Tom Dietterich, Terran Lane, Baback Moghaddam, David Thompson, and three insightful anonymous reviewers for suggestions on this paper. This work was performed while on sabbatical from the Jet Propulsion Laboratory.

References

Anand, Sudhir and Sen, Amartya K. Human development index: Methodology and measurement. Human Development Report Office, 1994.

Bennett, James and Lanning, Stan. The Netflix Prize. In Proc. of KDD Cup and Workshop, pp. 3–6, 2007.

Buehler, Martin, Iagnemma, Karl, and Singh, Sanjiv (eds.). The 2005 DARPA Grand Challenge: The Great Robot Race. Springer, 2007.

Carbonell, Jaime. Machine learning: A maturing field. Machine Learning, 9:5–7, 1992.

Chapelle, Olivier and Chang, Yi. Yahoo! Learning to Rank Challenge overview. JMLR: Workshop and Conference Proceedings, 14, 2011.

Frank, A. and Asuncion, A. UCI machine learning repository, 2010. URL http://archive.ics.uci.edu/ml.

Gomes, Carla P. Computational sustainability: Computational methods for a sustainable environment, economy, and society. The Bridge, 39(4):5–13, Winter 2009. National Academy of Engineering.

Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., and Witten, I. H. The WEKA data mining software: An update. SIGKDD Explorations, 11(1):10–18, 2009.

Hanley, J. A. and McNeil, B. J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36, 1982.

Hastie, T., Tibshirani, R., and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.

Koehn, Philipp, Och, Franz Josef, and Marcu, Daniel. Statistical phrase-based translation. In Proc. of the Conf. of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pp. 48–54, 2003.

Langley, Pat. The changing science of machine learning. Machine Learning, 82:275–279, 2011.

Lobo, Jorge M., Jiménez-Valverde, Alberto, and Real, Raimundo. AUC: a misleading measure of the performance of predictive distribution models. Global Ecology and Biogeography, 17(2):145–151, 2008.

SRI International. CALO: Cognitive assistant that learns and organizes. http://caloproject.sri.com, 2003–2009.

Student. The probable error of a mean. Biometrika, 6(1):1–25, 1908.

van Iterson, M., van Haagen, H. H. B. M., and Goeman, J. J. Resolving confusion of tongues in statistics and machine learning: A primer for biologists and bioinformaticians. Proteomics, 12:543–549, 2012.

van Rijsbergen, C. J. Information Retrieval. Butterworth, 2nd edition, 1979.

Warrick, P. A., Hamilton, E. F., Kearney, R. E., and Precup, D. A machine learning approach to the detection of fetal hypoxia during labor and delivery. In Proc. of the Twenty-Second Innovative Applications of Artificial Intelligence Conf., pp. 1865–1870, 2010.

Zdziarski, Jonathan A. Ending Spam: Bayesian Content Filtering and the Art of Statistical Language Classification. No Starch Press, San Francisco, 2005.