"Enhancing Intelligent Agents with Episoic Memories"

Dan Tecuci - IBM Watson NLP Engineer - presentation for the Cognitive Systems Institute Speaker Series on September 8, 2016.

  1. 1. Enhancing intelligent agents with episodic memories Dan Tecuci dan.tecuci@us.ibm.com Cognitive Systems Institute Weekly Meeting Sep 8, 2016
  2. 2. 2 Outline • Motivation – Human Memory – Why is episodic memory needed in a cognitive system? • Approach – Generic Episodic Memory Module – Requirements – A proposed implementation • Evaluation • Conclusions & Discussion
  3. 3. WHY DO WE NEED MEMORY? “Those who cannot remember the past are condemned to repeat it.” George Santayana
  4. 4. 4 Why It’s Important to Remember the Past • Remembering is an essential characteristic of intelligence • Humans can – recall their past experience – use memories to solve similar problems, avoid unwanted behavior, recognize plans and infer other people’s goals, remember their own goals and track progress • Memory and intelligence go hand in hand
  5. 5. 5 The Role of Memory in a Cognitive System • Experience – important knowledge source – mostly unused in current systems • Importance of experience grows with – complexity of task – life expectancy of system • Eager approach (generalize & discard) – machine learning – assumes all value can be extracted up-front • Lazy approach (store for now, use later) – defers (part of) learning until later
  6. 6. 6 Benefits of Using Stored Memories • Memory enables a system to: – improve performance – solve problems faster by adapting previous solutions – improve competence – informed search – perform additional tasks – avoid and detect failures – monitor long-term goals – reflect on the past
  7. 7. 7 Human Memory - Episodic vs. Semantic • Differences – concrete vs. abstract – dated vs. timeless – personal vs. general • Similarities – knowledge is acquired through senses – automatic retention – retrieval triggered by stimuli, automatic
  8. 8. 8 Episodic Memory Functions • Encoding – activation (when to store an episode) – salient feature selection (what to store in an episode) – cue selection (what features to use as cues) • Storage – how to maintain an episode in memory (forgetting) • Retrieval – cue construction – matching – recall – recollective experience
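The three encoding functions above map naturally onto hooks that a host application would supply. A minimal Python sketch, with all names and signatures assumed for illustration, not taken from the actual module:

    # Illustrative encoding hooks; names and signatures are assumptions.
    from typing import Any

    def should_store(event: Any) -> bool:
        """Activation: decide when an experience becomes a stored episode."""
        ...

    def salient_features(event: Any) -> set[str]:
        """Feature selection: decide what to keep inside the episode."""
        ...

    def select_cues(features: set[str]) -> set[str]:
        """Cue selection: decide which features to index the episode under."""
        ...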
  9. 9. A GENERIC EPISODIC MEMORY MODULE
  10. 10. 10 Proposal: Generic Memory Module • Characteristics: – generic – same memory, different apps – can be used for various tasks and domains – separate from application – interface through API – store complex experience (e.g. temporal, graph-based) • Memory function: – return most relevant prior episodes • Advantages: – focus on memory organization – reduce complexity of overall system
  11. 11. 11 General Memory Requirements • Accuracy in retrieval – retrieve memories relevant to the situation at hand • Scalability – accommodate a large number of episodes without a significant decrease in performance • Efficiency – efficient storage and retrieval (in both space and time) • Content addressability – memories should be addressable by their content • Flexible matching – recognize prior situations even if they only partially match the current one
  12. 12. 12 Challenges • Conceptual representation for generic events • Domain-independent storage/retrieval algorithms • Flexible interface
  13. 13. 13 Episode Representation • Episode – unit of storage – captures a complex event with temporal extent – represented as conceptual graphs (sets of S-P-O triples) – uses an ontology for concept representation • Episodes divided along three dimensions – context = setting of episode – contents = ordered set of events – outcome = evaluation of episode’s effects • What constitutes an episode is application-specific
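Read concretely, an episode is three sets of triples. A minimal sketch of that structure (class and field names are illustrative; the original system stores conceptual graphs over an ontology, simplified here to S-P-O tuples):

    from dataclasses import dataclass

    Triple = tuple[str, str, str]  # (subject, predicate, object)

    @dataclass
    class Episode:
        context: list[Triple]         # setting, e.g. the goal being pursued
        contents: list[list[Triple]]  # ordered events making up the episode
        outcome: list[Triple]         # evaluation of the episode's effects

    # Hypothetical encoding of the planning example on the next slide:
    episode = Episode(
        context=[("goal-1", "instance-of", "Move-Files"),
                 ("goal-1", "object", "perl-scripts")],
        contents=[[("find-1", "instance-of", "Find"),
                   ("find-1", "pattern", "*.pl")],
                  [("move-1", "instance-of", "Move"),
                   ("move-1", "destination", "linux-folder")]],
        outcome=[("episode-1", "result", "Success")],
    )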
  14. 14. 14 Planning Episode Example
      Context: “move all perl scripts to the linux folder”
      Contents:
        sh> find -name linux
        ./code/linux
        sh> find -name *.pl
        ./code/accessor.pl
        ./code/constructor.pl
        ./code/gang/dwarf/aml.pl
        sh> mv ./bin/gang/set/convert.pl ./code/accessor.pl ./code/constructor.pl ./code/gang/dwarf/aml.pl ./code/linux
      Outcome: “Success”
  15. 15. 15 Using Stored Episodes • Episodes should be multifunctional – same episode can be used for different purposes • E.g. Planning Episode = [plan goal, plan steps, plan outcome] • Retrieval can be done on each dimension – on context (plan goal) → planning – on contents (plan steps) → plan recognition, outcome prediction – on outcome → root cause analysis
  16. 16. 16 Memory Implementation – Storage • Episodes stored unchanged (no generalization) • Indexing – separate on each dimension (context, contents, outcome) – shallow indexing (only feature types, no structure) • Forgetting [AISB-10]
  17. 17. 17 Memory Implementation – Retrieval • Shallow indexing then deep semantic matching – shallow indexing: compute surface-level similarity – goal: reduce pool of candidates – fast, high recall, low precision – deep semantic matching [Yeh-06] – goal: resolve structural mismatches – slow, high precision – uses taxonomic knowledge and transformation rules to resolve mismatches • Given a new stimulus and an episode, computes: – similarities and differences – quantitative and qualitative
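A sketch of that two-stage pipeline, assuming each episode carries a precomputed set of indexed feature types; the deep matcher here is a trivial overlap score standing in for the structural matcher of [Yeh-06]:

    def shallow_filter(cues, memory, k=50):
        """Stage 1: fast, high-recall, low-precision candidate selection
        by surface-level feature-type overlap (no structure considered)."""
        scored = sorted(memory, key=lambda ep: len(cues & ep["features"]),
                        reverse=True)
        return [ep for ep in scored[:k] if cues & ep["features"]]

    def deep_match(stimulus, episode):
        """Stage 2 placeholder: the real matcher uses taxonomic knowledge
        and transformation rules to resolve structural mismatches."""
        overlap = stimulus & episode["features"]
        return len(overlap) / len(stimulus | episode["features"])

    def retrieve(cues, stimulus, memory):
        candidates = shallow_filter(cues, memory)
        return max(candidates, key=lambda ep: deep_match(stimulus, ep),
                   default=None)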
  18. 18. 18 Memory API • store(episode) • retrieve(stimulus, dimension) – returns: – most similar prior episodes on that dimension – match score – how they matched the stimulus – how they differ from the stimulus – mappings from stimulus to episode – an incremental version is used for recognizing sequences of events
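As an interface, the two calls might look like the sketch below; the RetrievalResult fields simply mirror the bullets above, and nothing here is the actual signature of the module:

    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class RetrievalResult:
        episode: Any         # a most-similar prior episode on the dimension
        score: float         # match score
        similarities: list   # how the episode matched the stimulus
        differences: list    # how it differs from the stimulus
        mapping: dict        # stimulus-to-episode correspondences

    class EpisodicMemory:
        def store(self, episode: Any) -> None:
            """Index the episode on each dimension; store it unchanged."""
            ...

        def retrieve(self, stimulus: Any, dimension: str) -> list[RetrievalResult]:
            """dimension is one of 'context', 'contents', 'outcome'."""
            ...

Combined with slide 15, the dimension argument is what lets one memory serve several tasks:

    memory.retrieve(goal, dimension="context")    # planning
    memory.retrieve(steps, dimension="contents")  # plan recognition, outcome prediction
    memory.retrieve(result, dimension="outcome")  # root cause analysis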
  19. 19. 19 Incremental Recognition • Goal: – make predictions after each observation • Applicable to: – plan recognition, dialog understanding • Idea: – segment episodes based on temporal links – recognize individual pieces – then aggregate into episodes • Confidence of a recognized episode = a combination of: – confidence in recognition of individual pieces – the order in which they were observed
  20. 20. 20 Incremental Recognition Algorithm
      initialize candidates
      loop:
        observe next action
        new-candidates ← retrieve(current-action)
        forall episode in new-candidates do
          if episode not in candidates then
            synchronize-candidate(episode, prior-actions)
        forall candidate in candidates do
          candidate-match ← match(current-action, candidate)
          candidate ← update-candidate(candidate-match)
        result ← sort(candidates)
        make-available first-n(N, result)
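A runnable toy version of the loop above; retrieval, synchronization, and matching are reduced to set membership over action names, and confidence is just the fraction of an episode's steps seen so far (the actual system also weighs the order of observation):

    def incremental_recognition(actions, library, n=3):
        candidates = {}              # episode name -> count of matched steps
        prior = []
        for action in actions:
            # retrieve: episodes whose contents include the new observation
            for name, steps in library.items():
                if action in steps and name not in candidates:
                    # synchronize the new candidate against prior actions
                    candidates[name] = sum(a in steps for a in prior)
            # match and update every live candidate with the new action
            for name in candidates:
                if action in library[name]:
                    candidates[name] += 1
            prior.append(action)
            # rank by confidence; make the top-N predictions available
            ranked = sorted(candidates,
                            key=lambda nm: candidates[nm] / len(library[nm]),
                            reverse=True)
            yield ranked[:n]

    library = {"make-coffee": ["boil-water", "grind-beans", "brew"],
               "make-tea": ["boil-water", "steep", "pour"]}
    for top in incremental_recognition(["boil-water", "grind-beans"], library, n=2):
        print(top)  # predictions sharpen as more actions arrive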
  21. 21. EXPERIMENTAL EVALUATION “In God we trust; all others bring data.” W. Edwards Deming
  22. 22. 22 Experimental Evaluation • Evaluated on three tasks [Tecuci-Diss] – memory-based planning: initial state + goal → plan – episodic-based goal recognition: plan → goal schema – memory-based question answering: question → answer • Measured – task performance (precision, recall) – memory performance (retrieval time, memory overhead) • Same representation across tasks
  23. 23. 23 Memory-Based Planning • Problem – Given: initial state, goal state, operators – Find: sequence of operators that changes initial state into goal state • Solution: – search (restricted: hierarchical, skeletal) • Memory-based planning – reuse and adapt past experience
  24. 24. 24 Episodic-based Plan Recognition • Plan Recognition problem: – predict goals, intentions, future actions from observed actions – keyhole, intended • Desired characteristics – incremental, early predictions – extensible plan library • Approaches: – deductive, abductive, probabilistic, case-based
  25. 25. 25 Memory-Based Problem Solving • Problem – Given: KB, complex question – Find: correct model in KB that answers the question, and explain the answer • Ex: A car starts from rest and reaches 28 m/s in 2 s. What distance does it cover? • Questions = scenario + query • Classical solution: search – KB size and complexity of models make it infeasible or incomplete • Memory provides fast access to relevant models
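For reference, the target model here is uniform acceleration: a = Δv / t = (28 m/s) / (2 s) = 14 m/s², so d = ½ a t² = ½ · 14 · 2² = 28 m. The memory's job is to retrieve that model quickly rather than search the whole KB for it.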
  26. 26. 26 Summary of Evaluation Results • Accuracy – same as exhaustive search (planning, problem solving) – same as statistical approaches (plan recognition) • Scalability – retrieval time not proportional to memory size • Sped up problem solving • Multifunctional memory structure
  27. 27. 27 Watson + Memory • Application domains: complex tasks with a temporal aspect – Dialog – remembering what was said – goal detection – prediction – Robotics – complex behaviors – Virtual agents – prediction • Memory as a Service?
  28. 28. 28 Summary • The need for memory in cognitive systems • Separation of memory from system – Generic, reusable memory module – Adds episodic memory functionality to system • Requirements • Implementation satisfying requirements • Evaluation – planning, plan recognition, problem solving
  29. 29. 29 References
      • [Yeh-06] Yeh, P. “Flexible Semantic Matching of Rich Knowledge Structures.” PhD dissertation, UT Austin, 2006.
      • [Tecuci-Flairs-09] Tecuci, D.; Porter, B. “Memory-Based Goal Schema Recognition.” FLAIRS 2009.
      • [Tecuci-AAAI-06] “Using an Episodic Memory Module for Pattern Capture and Recognition.”
      • [Tecuci-Diss] Tecuci, D. “A Generic Memory Module for Events.” PhD dissertation, University of Texas, 2007.
      • [ICBO] Palla et al. “A Metadata Approach to Querying Multiple Biomedical Ontologies.” ICBO 2011.
      • [KCap-11] Palla et al. “Using Answer Set Programming for Representing and Reasoning with Preferences and Uncertainty in Dynamic Domains.”
      • [AI-04] Friedland et al. “Project Halo: Towards a Digital Aristotle.” AI Magazine 25(4), 2004.
      • [AI-10] Gunning et al. “Project Halo Update – Progress Towards Digital Aristotle.” AI Magazine, 2010.
      • [KR-04a] Barker et al. “A Question-Answering System for AP Chemistry: Assessing KR&R Technologies.” KR 2004.
      • [KR-04b] Friedland et al. “Towards a Quantitative, Platform-Independent Analysis of Knowledge Systems.” KR 2004.
      • [AAAI-07] Barker et al. “Learning by Reading: A Prototype System, Performance Baseline and Lessons Learned.” AAAI 2007.
      • [KCAP-07] Chaw et al. “Capturing a Taxonomy of Failures During Automatic Interpretation of Questions Posed in Natural Language.” K-CAP 2007.
      • [AISB-10] Nuxoll et al. “Comparing Forgetting Algorithms for Artificial Episodic Memory Systems.” RWWA at AISB 2010.
      • [KCAP-01] Clark, P. et al. “Knowledge Entry as the Graphical Assembly of Components.” K-CAP 2001.
      • [HALO] Vulcan Inc. Project Halo website: http://projecthalo.com/halotempl.asp?cid=21
      • [KM] The Knowledge Machine: http://www.cs.utexas.edu/users/mfkb/RKF/km.html
  30. 30. • Thank you!
