
Human-Centric Machine Learning


Often, what your machine learning algorithm is optimizing for is not what you really want. Worse, often you do not really know what you want. In interactive machine learning, the human is an integral part of the learning process, leading to cooperative human-machine solutions. In this talk we will look at a few such problems from the field of Natural Language Processing (NLP), their challenges, common pitfalls, and some solutions.


  1. © 2017 NAVER LABS. All rights reserved. Matthias Gallé, Naver Labs Europe (@mgalle). Human-Centric Machine Learning. Rakuten Technology Conference 2017.
  2. Advanced Chess
  3. Supervised Learning: X, Y → f(x), where f is typically such that $f = \arg\min_{f \in F} \frac{1}{N} \sum_{i=1}^{N} L(f(x_i), y_i) + \lambda R(f)$. I know what I want (and can formalize it); I have the time and money to label lots of data.
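The objective above is regularized empirical risk minimization. As an illustration (not from the talk), here is a minimal sketch instantiating it with squared loss L and an L2 regularizer R, i.e. ridge regression, solved in closed form:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """Minimize (1/N) * sum_i (w.x_i - y_i)^2 + lam * ||w||^2
    for the linear family f(x) = w.x, via the normal equations."""
    n, d = X.shape
    # Setting the gradient to zero gives (X'X/n + lam*I) w = X'y/n
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

# Toy data: noisy linear targets with a known weight vector
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)
w = ridge_fit(X, y, lam=1e-4)   # recovers approximately true_w
```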
  4. Example: Machine Translation. Given a text s and its proposed translation p, how do we measure its distance to a reference translation t? BLEU: n-gram overlap between t and p (typically 1 ≤ n ≤ 4), precision only, with a brevity penalty. METEOR: bonus points for matching stems and synonyms; uses paraphrases.
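A minimal sketch of the BLEU idea described above: a simplified sentence-level variant that takes the geometric mean of clipped n-gram precisions (1 ≤ n ≤ 4) times a brevity penalty. Real BLEU is computed at corpus level with different smoothing; this is only an illustration.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions times a brevity penalty."""
    c, r = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(c, n)), Counter(ngrams(r, n))
        overlap = sum((cand & ref).values())        # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        log_prec += math.log(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(len(c), 1))
    return bp * math.exp(log_prec / max_n)
```

Precision-only matching is why BLEU needs the brevity penalty: without it, an overly short candidate containing only "safe" n-grams would score perfectly.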
  5. Statistical Machine Translation. P. Koehn (evaluation.pdf).
  6. Consequences of not formalizing correctly: users do not use your model (Computer-Assisted Translation relied on rule-based systems for years), and ad-hoc solutions appear (Quality Prediction, Automatic Post-Edition).
  7. Unsupervised Learning: X → Z(X), where Z(X) captures some prior: compression, clustering, coverage, … I am not sure what I want; I have a (big) corpus with assumed patterns.
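As an illustration of Z(X) capturing a clustering prior, here is a minimal k-means sketch (Lloyd's algorithm with farthest-point initialization; illustrative only, not a method from the talk):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means. Z(X) here is the cluster assignment:
    the 'prior' is that the data decomposes into k compact groups."""
    centers = [X[0]]
    for _ in range(1, k):   # farthest-point initialization
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        z = d.argmin(axis=1)                                  # assignment step
        centers = np.array([X[z == j].mean(axis=0) for j in range(k)])  # update step
    return z, centers

# Toy corpus: two well-separated blobs of points
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
z, centers = kmeans(X, 2)
```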
  8. Example: Exploratory Search. Applies whenever your task is ill-defined (broad or under-specified, multi-faceted) or dynamic (the searcher's understanding is inadequate at the beginning and evolves as results are gradually retrieved). The answer to what you are searching for is "I know it when I see it."
  9. (figure)
  10. Interactive Learning
  11. Exploratory Search: examples. E-Discovery; Sensitivity Review. • Vo, Ngoc Phuoc An, et al. "DISCO: A System Leveraging Semantic Search in Document Review." COLING (Demos), 2016. • Privault, Caroline, et al. "A New Tangible User Interface for Machine Learning Document Review." Artificial Intelligence and Law 18.4 (2010): 459-479. • Ferrero, Germán, Audi Primadhanty, and Ariadna Quattoni. "InToEventS: An Interactive Toolkit for Discovering and Building Event Schemas." EACL 2017: 104.
  12. Example: Active Learning. Give initiative to the algorithm: allow actions of the type "please, label instance x". The cognitive effort of labeling a document is 3-5x higher than that of labeling a word [1]. Feature labeling: type(feedback) ≠ type(label); the information load of a word label is small; word sense disambiguation. [1] Raghavan, Hema, Omid Madani, and Rosie Jones. "Active Learning with Feedback on Features and Instances." Journal of Machine Learning Research 7 (2006): 1655-1686.
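The "please, label instance x" action is usually driven by an uncertainty criterion. A minimal sketch of least-confident sampling, one common choice (an assumption for illustration, not necessarily the criterion used in the talk):

```python
import numpy as np

def least_confident(proba):
    """Uncertainty sampling: return the index of the unlabeled instance
    whose most probable label has the lowest predicted probability,
    i.e. the instance the current model is least confident about."""
    return int(np.argmin(proba.max(axis=1)))

# Toy posterior over 3 classes for 4 unlabeled instances
proba = np.array([
    [0.90, 0.05, 0.05],   # confident
    [0.40, 0.35, 0.25],   # uncertain: should be queried
    [0.70, 0.20, 0.10],
    [0.85, 0.10, 0.05],
])
query = least_confident(proba)   # index of the instance to label next
```

The same selection loop works for feature (word) labels; only the feedback type and its cost model change.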
  13. Conclusion. If you really want to solve a problem, don't be a prisoner of your performance indicator. Ask yourself: 1. Does it really capture success? Does it align with human judgment? 2. What does the [machine | human] do best? 3. Can you remove the burden from humans with smarter algorithms?
  14. Further Reading & Acknowledgments: Marc Dymetman, Jean-Michel Renders, Ariadna Quattoni
  15. Q&A
  16. Appendix
  17. Statistical Machine Translation. P. Koehn (evaluation.pdf).