Eric Nyberg, "From Jeopardy! To Cognitive Agents: Effective Learning in the Wild", presented in the Cognitive Systems Institute Group Speaker Series, July 9, 2015.

1. From Jeopardy! To Cognitive Agents: Effective Learning in the Wild
   Eric Nyberg
   Language Technologies Institute, School of Computer Science, Carnegie Mellon University
2. History & Strengths: Architecture for Info Systems
   • Developed advanced service-oriented architectures for information systems as part of IARPA AQUAINT [1]
   • Contributed to the development of the Unstructured Information Management Architecture (w/ IBM) [2]
   • Established a framework for open advancement of Question Answering systems (w/ IBM) [3]
   • Participated in the Jeopardy! Challenge (w/ IBM) [4]
   • Established the OAQA Consortium at CMU for practical applications of Question Answering (2012-)
     – Sponsored by Boeing, Roche, Singapore DoD
   • Joined IBM's Cognitive Systems Institute in 2013 [5]
   • Piloted the Watson Challenge Course at CMU (Fall 2014)
3. CMU's Contributions to Watson & OAQA
   Read more about CMU and Watson: http://www.cs.cmu.edu/~ehn/
   • Modular architecture for QA systems
   • Tools & process for error analysis
   • Information retrieval for question answering
   • Statistical machine learning for answer scoring
   • How to find supporting evidence for answers
   Dave Ferrucci and Watson visit CMU (3/11); faculty & students receive the Allen Newell Award for Research Excellence (2/12)
4. QA Research @ CMU: The First 10 Years (Oct. 2001 - Feb. 2011)
   [Timeline slide, 2002-2011; legend distinguishes projects @ Uni Karlsruhe, @ CMU, and @ IBM]
   • IARPA AQUAINT program: JAVELIN I, II, and III (projects @ CMU)
   • Roadmap for QA R&D (LREC 2002); book chapter on advanced QA architectures; CMU adopts UIMA
   • Ephyra I, Ephyra II, and OpenEphyra (projects @ Uni Karlsruhe)
   • IBM Open Collaborative Research Awards; BlueJ / Watson research (IBM sponsor)
   • CMU joins the Watson effort (5 internships in 3 years)
   • OAQA defines a common framework, process, and metrics
   • Feb 2011: Watson wins the Jeopardy! Challenge
5. CMU QA Team: Core Collaborators (2001-2011)
   Jamie Callan, Teruko Mitamura, Jaime Carbonell, Eric Nyberg
   • Probabilistic models for answer scoring
   • Object type system / component architecture
   • Source Expansion approach used by Watson
   • Foundational work in machine learning for answer extraction and answer scoring
   • Tools for rapid development of QA apps
   • Language-independent architecture
   • Answer-scoring algorithms used by Watson
   • Important extensions to the INDRI/Lemur search engine used by Watson
6. What did we learn from Watson?
   • QA systems can be fast, accurate, and confident enough to perform in the real world
     – Scalable, parallel architecture
     – Plenty of training data available
     – Agile, open advancement process
   • Next big challenge: rapid domain adaptation
     – Automatic configuration optimization: given a labeled dataset of inputs and expected outputs, automatically find the best-performing composition of existing analytics / agents to provide a solution
     – In-task learning: cognitive agents improve performance through proactive interaction with their users and other external sources of knowledge (human/machine), before/during/after performing a task
     – Combine automatic configuration optimization with in-task learning to provide a set of personalized cognitive agents and agent brokers to interact with end users
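The automatic configuration optimization idea above can be sketched as a search over compositions of pipeline components, each candidate scored against a labeled dataset. This is an illustrative toy, not the actual CSE implementation; the component names and the accuracy figures are invented for the example.

```python
from itertools import product

# Hypothetical component options for two stages of a QA pipeline.
RETRIEVERS = ["bm25", "indri"]
SCORERS = ["logistic", "svm"]

def evaluate(config, dataset):
    """Score one configuration against labeled (input, expected output) pairs.
    Stubbed: a real framework would run the composed pipeline on the dataset
    and measure accuracy; here each component contributes a made-up score."""
    retriever, scorer = config
    return ({"bm25": 0.6, "indri": 0.7}[retriever]
            * {"logistic": 0.8, "svm": 0.9}[scorer])

def optimize(dataset):
    """Exhaustively explore the configuration space and keep the best."""
    return max(product(RETRIEVERS, SCORERS),
               key=lambda cfg: evaluate(cfg, dataset))

print(optimize([]))  # the best-scoring composition of components
```

With a real evaluation function, the same loop scales to the large configuration spaces explored by CSE; only the enumeration and the scoring plug-ins change.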
7. Automatic Optimization of QA for TREC Genomics Questions
   CSE Framework: supports automatic evaluation / optimization of information systems using UIMA; part of the OAQA project [6]
8. Results of Automatic Optimization
   The CSE Framework found a significantly better configuration of components compared to the prior published state of the art, in 24 hours of clock time using a modest 30-node cluster. [7]
9. Other Domains: QA4MRE
   • Question Answering for Machine Reading Evaluation
   • Configuration space:
     – 12 UIMA components were first developed
     – UIMA descriptors were replaced with ECD
   • CSE: 46 configurations, 1,040 combinations, 1,322 executions
   The best trace identified by CSE achieved a 59.6% performance gain over the original pipeline. [Alkesh Patel, Zi Yang, Eric Nyberg and Teruko Mitamura: "Building Optimal Question Answering System Automatically using Configuration Space Exploration (CSE) for QA4MRE 2013 Tasks"]
10. Leveraging Pre-Competitive, Open-Source Development for Proprietary R&D
    [Diagram] CMU students & advisors turn pre-competitive requirements & data into an open-source framework, modules & data; an industry sponsor turns proprietary requirements & data into proprietary modules & data, built as proprietary extensions to the open-source software. The parties are bound by an OA Consortium Agreement and by non-disclosure & employment agreements.
11. Open Source Projects
    • Repository location: https://github.com/oaqa
    • 18 public / 18 private project repositories
    • 33 members (13 active committers)
12. QUADS: Question Answering for Decision Support
    Zi Yang (1), Ying Li (2), James Cai (2), Eric Nyberg (1)
    1) Carnegie Mellon University, {ziy, ehn}@cs.cmu.edu
    2) Roche Innovation Center, {ying_l.li, james.cai}@roche.com
    Presented 07/09/2014 at SIGIR 2014
13. Decision Making: Product Recommendation from Review Text
    [Diagram] Decision decomposition into factors (design and usability, brand, functionality, carrier, operating system, weight, thickness, resolution, keyboard), evidence gathering from the Web, and synthesis into a decision table, e.g.:
      Brand  Carrier  Decision
      aaa    xxx      Good
      bbb    yyy      OK
      ccc    zzz      Bad
14. Decision Making: Target Validation
    [Diagram] Decision decomposition into factors (modulation of the activity, expression in normal/disease tissues, mutation, clinical trials, side effects, in vivo, in vitro), evidence gathering from public/proprietary documents, and synthesis into a decision table, e.g.:
      In vivo  Side effect  Decision
      Yes      No           Good
      Yes      Yes          OK
      No       Yes          Bad
15. Question Answering for Decision Support
    • Decompose an end-user decision process into weighted decision factors
    • Values of atomic decision factors determined by an automatic QA system
    • Overall decision value combines atomic decision factors according to learned weights
    • Significant improvement over baseline methods for gene targeting and product rating [8]
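The synthesis step above can be sketched as a weighted combination of atomic factor values. This is a toy illustration, not the QUADS implementation: the factor names, weights, value mappings, and decision thresholds are all invented, and a real system would obtain the factor values from the QA component.

```python
# Hypothetical decision factors (target validation) with learned weights.
WEIGHTS = {"in_vivo": 0.6, "side_effect": 0.4}
# Map each (factor, QA answer) pair to a numeric value.
VALUE = {("in_vivo", "Yes"): 1.0, ("in_vivo", "No"): 0.0,
         ("side_effect", "No"): 1.0, ("side_effect", "Yes"): 0.0}

def decision_score(answers):
    """Combine atomic factor values according to the learned weights."""
    return sum(w * VALUE[(factor, answers[factor])]
               for factor, w in WEIGHTS.items())

def decide(answers):
    """Map the overall decision value to a label (thresholds invented)."""
    score = decision_score(answers)
    if score >= 0.9:
        return "Good"
    return "OK" if score >= 0.4 else "Bad"

print(decide({"in_vivo": "Yes", "side_effect": "No"}))   # prints "Good"
print(decide({"in_vivo": "No", "side_effect": "Yes"}))   # prints "Bad"
```

The weights would be learned from labeled decisions, which is where the reported improvement over unweighted baselines comes from.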
16. 10/02/2013: IBM Announces New Collaboration with CMU
    • Focus: "How systems should be architected to support intelligent, natural interaction with all kinds of information in support of complex human tasks." [5]
17. Vision
    • Automatically learn and improve new analytics through independent interaction with humans
    • Examples:
      1. Learn to code medical records for insurance payment from a human expert
      2. Learn to detect fraudulent transactions (e.g. insurance claims) from a human expert
      3. Automatically improve intelligent information systems with proactive learning and machine reading
      4. Learn and refine decision-making processes for accident management & fault prediction that combine information written in policy and procedure documents with real-time sensor data, e.g. for mobile robot control
18. Conceptual Architecture
    [Diagram; first phase of framework mostly complete] Starting from the analyst's information need (a specification of required analytic input/output types, desired information sources, and an example dataset), the cycle is:
    1. Perform (Configure, Optimize): automatically build and execute analytic solutions
    2. Reflect (Measure): proactively evaluate task performance, analyze errors, propose learning tasks
    3. Learn (Train): automatically execute learning tasks; update models, KBs, etc., drawing on machine-learning agents (targeted machine reading, E-R extraction, set extension, clarification dialogs, type/instance knowledge, concept learning), subject matter experts (SMEs), and crowd-sourcing such as Amazon Mechanical Turk (type-instance labeling, relevance judgments) [proposed work]
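The Perform / Reflect / Learn cycle can be sketched as a minimal control loop. All class and method names here are illustrative, not the framework's actual API, and each phase is stubbed down to a single line.

```python
class CognitiveAgent:
    """Illustrative Perform -> Reflect -> Learn loop (names are invented)."""

    def __init__(self):
        self.model_version = 0  # stands in for models / KBs that Learn updates

    def perform(self, task):
        # 1. Perform (Configure, Optimize): build and execute an analytic solution
        return {"task": task, "output": f"answer-v{self.model_version}"}

    def reflect(self, result, gold):
        # 2. Reflect (Measure): evaluate performance and collect errors
        return [] if result["output"] == gold else ["wrong answer"]

    def learn(self, errors):
        # 3. Learn (Train): execute learning tasks, update models / KBs
        if errors:
            self.model_version += 1

agent = CognitiveAgent()
result = agent.perform("code this medical record")
agent.learn(agent.reflect(result, gold="answer-v1"))  # error found, model updated
```

After one pass the stub model has been "trained" once; a second Perform on the same gold answer would produce no errors, so the loop quiesces until performance drops again.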
19. Service Architecture
    [Diagram only]
20. History and Strengths: Proactive Machine Learning
    • An approach that is more effective for learning independently from multiple sources ("oracles") (Carbonell et al.)

                         Traditional Active Learning     Proactive Learning
      Number of oracles  Individual (only one)           Multiple, with different capabilities, costs and areas of expertise
      Reliability        Infallible (100% right)         Variable across oracles and queries, depending on difficulty, expertise, …
      Reluctance         Indefatigable (always answers)  Variable across oracles and queries, depending on workload, certainty, …
      Cost per query     Invariant (free or constant)    Variable across oracles and queries, depending on workload, difficulty, …
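The contrast in the table above can be made concrete with a toy oracle-selection rule: instead of a single infallible, free oracle, each oracle is scored by an expected-utility ratio over its reliability, its probability of answering, and its cost. The oracles and all the figures below are invented for illustration and do not come from the proactive-learning literature's actual models.

```python
# Each oracle varies in reliability, reluctance (probability of
# answering at all), and cost per query.
ORACLES = {
    "expert": {"reliability": 0.95, "answer_prob": 0.5, "cost": 10.0},
    "novice": {"reliability": 0.70, "answer_prob": 0.9, "cost": 2.0},
    "crowd":  {"reliability": 0.60, "answer_prob": 1.0, "cost": 0.5},
}

def expected_utility(oracle):
    """Expected number of correct labels obtained per unit cost."""
    o = ORACLES[oracle]
    return o["reliability"] * o["answer_prob"] / o["cost"]

def pick_oracle():
    """Query the oracle with the best utility/cost trade-off."""
    return max(ORACLES, key=expected_utility)

print(pick_oracle())  # prints "crowd"
```

With these numbers the cheap, unreliable crowd wins; raising the stakes per label (e.g. weighting label correctness more heavily than cost) would shift the choice toward the expert, which is exactly the trade-off proactive learning manages per query.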
21. Technical Challenges
    • Extracting domain-specific entities and relations
      – Which ones are important?
      – How to interpret the output of general NLP tools?
    • Modeling inference
      – How to represent, e.g., complex biological processes
      – How to leverage existing ontologies and inference rules to build complex representations from text
    • Incorporating direct user feedback
      – How to present system data to the user
      – What kinds of feedback to gather, and how
      – How can the system learn effectively?
22. Related Educational Programs @ CMU
    • Language Technologies (MS, PhD)
    • Master of Computational Data Science (MCDS)
    • Biotechnology Innovation & Computing (MS)
    • Intelligent Information Systems (MS)
23. References
    [1] Nyberg, E., Burger, J.D., Mardis, S., Ferrucci, D.A.: Software Architectures for Advanced QA. In: New Directions in Question Answering (2004) 19-30.
    [2] https://www.oasis-open.org/news/pr/oasis-members-approve-open-standard-for-accessing-unstructured-information
    [3] https://www.research.ibm.com/deepqa/question_answering.shtml
    [4] http://www.prnewswire.com/news-releases/ibm-announces-eight-universities-contributing-to-the-watson-computing-systems-development-115892914.html
    [5] http://www-03.ibm.com/press/us/en/pressrelease/42118.wss
    [6] http://oaqa.github.io/
    [7] Yang, Z., Garduno, E., Fang, Y., Maiberg, A., McCormack, C. and Nyberg, E.: Building Optimal Information Systems Automatically: Configuration Space Exploration for Biomedical Information Systems. In: Proceedings of the ACM Conference on Information and Knowledge Management (2013).
    [8] Yang, Z., Li, Y., Cai, J. and Nyberg, E.: QUADS: Question Answering for Decision Support. In: Proceedings of SIGIR 2014, the 37th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2014).
24. Thank You!
