Cognitive Work Assistants - Vision and Open Challenges

A talk in the Cognitive Systems Institute call on 03/19/2015.


  1. Cognitive Work Assistants: Vision and Open Challenges. Hamid R. Motahari Nezhad, IBM Almaden Research Center, San Jose, CA, USA. Cognitive Systems Institute.
  2. The Work Practices of Human Administrative Assistants
     • Human assistant activities
       – Calendaring: scheduling, information formatting and preparation
       – Task management
       – Email management: filtering and classifying email
     • Interruption management: mediating and prioritizing interruptions
     • Taking care of routine tasks: tracking, following up, travel arrangement and preparation, reminding and organizing
       – Managing the work of the human: pre-processing, filtering, prioritizing, compiling information and reports
     An assistant “will remove much of the burden of administrative chores from its human user and provide guidance, advice, and assistance in problem solving and decision making.” (Gutierrez and Hidalgo, 1988)
  3. Human Administrative Assistants: Conceptual Framework
     T. Erickson et al.: Assistance: The Work Practices of Human Administrative Assistants and Their Implications for IT and Organizations, CSCW'08.
     Blocking, doing, and redirecting are key to the performance of assistants.
  4. Cognitive Assistance for Knowledge Workers
     • Cognitive case management is about providing cognitive support to knowledge workers in handling customer cases in domains such as social care, legal, government services, citizen services, etc.
     • Handling and managing cases involves understanding policies, laws, rules, regulations, processes, and plans, as well as customers, the surrounding world, news, social networks, etc.
     • A cognitive agent would assist both employees and customers, from each perspective:
       – Assisting employees/workers by providing decision support based on understanding the case, its context, the surrounding world, and applicable laws/rules/processes, helping them be more productive and effective
       – Assisting citizens by empowering them to know their rights and responsibilities, and helping them expedite the progress of their case
     Diagram: users (employees/agents and customers), the cognitive agent, and the organization's unstructured, linked information: plans, workflows, rules, policies, regulations, templates, instructions/procedures, applications, schedules, and communications such as email, chat, and social media.
  5. Cognitive Assistance: Application Domains
     • Cognitive assistance for different occupations: finance, education, retail, healthcare (physician assistant), government (case management, civilian services, intelligence, defense, etc.)
     • Health education assistance: assistance to patients, people's well-being, public health education
     • Retail: assistance to buyers
  6. Cognitive Assistant: What Is It?
     • A software agent that
       – “augments human intelligence” (Engelbart's definition¹, 1962)
       – complements humans by offering capabilities beyond the ordinary power and reach of human intelligence (intelligence amplification)
       – performs tasks and offers assistance to humans in making decisions and taking actions
     • A more technical definition: a cognitive assistant offers computational intelligence capabilities, typically based on natural language processing (NLP), machine learning (ML), and reasoning, and provides cognitive powers that augment and scale human intelligence (Jim Spohrer).
     • Getting us closer to the vision painted for human–machine partnership in 1960: “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information handling machines we know today.” (Man-Computer Symbiosis, J. C. R. Licklider, IRE Transactions on Human Factors in Electronics, vol. HFE-1, pp. 4–11, March 1960)
     ¹ Augmenting Human Intellect: A Conceptual Framework, by Douglas C. Engelbart, October 1962
  7. History of Cognitive Assistants Through the Lens of AI
     – 1945: Memex (Bush)
     – 1950: Turing Test, thinking machines
     – 1955/56: Logic Theorist (Newell and Simon, 1955); checkers player (Samuel, 1956)
     – 1962: NLS/Augment (Engelbart)
     – 1966: Eliza (Weizenbaum)
     – 1965–1987: expert systems era: DENDRAL, MYCIN (1974–1984)
     – 1987: Cognitive Tutors (Anderson); Apple's Knowledge Navigator concept
     – 1992–1998: voice-controlled virtual telephone assistants (Portico, Wildfire, Webley); speech recognition
     – 2002–2008: DARPA PAL program: CALO, IRIS
  8. Modern Cognitive Assistants: State of the Art (2008–present)
     Commercial
     • Personal assistants: Siri, Google Now, Microsoft Cortana, Amazon Echo, Braina, Samsung's S Voice, LG's Voice Mate, SILVIA, HTC's Hidi, Nuance's Vlingo, AIVC, Skyvi, IRIS, Everfriend, Evi (Q&A), Alme (patient assistant), Viv (Global Brain as a Service)
     • Cognitive systems and platforms: IBM Watson, Wolfram Alpha (a computational engine with an NLP interface), Saffron, Vicarious (CAPTCHA)
     Open source / research
     • OAQA, DeepDive, OpenCog, YodaQA, OpenSherlock, OpenIRIS, iCub EU projects, Cougaar, Inquire* (intelligent textbook)
     * Curated knowledge base
  9. Cognitive Agents' Abilities: what capabilities do cognitive agents need to have?
  10. Human Intelligence in Terms of Cognitive Abilities
      Ability to ... / Achievable by machines today?
      – draw abstractions from particulars → partially (semantic graphs*)
      – maintain hierarchies of abstraction → partially (semantic graphs*)
      – concatenate assertions and arrive at a new conclusion → partially (relationships present)
      – reason outside the current context → no
      – compare and contrast two representations for consistency/inconsistency → limited
      – reason analogically → not automated; requires domain adaptation
      – learn and use external symbols to represent numerical, spatial, or conceptual information → better than humans at symbolic representation and processing
      – learn and use symbols whose meanings are defined in terms of other learned symbols → uses and processes them, limited learning
      – invent and learn terms for abstractions as well as for concrete entities → no language-development capability
      – invent and learn terms for relations as well as things → partially, using symbols, not cognitively
      Gentner, D. (2003). In D. Gentner & S. Goldin-Meadow (eds.), Language in Mind: Advances in the Study of Language and Thought. MIT Press, pp. 195–235.
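As an illustration of the “partially, via semantic graphs” rows, the sketch below shows a tiny is-a graph that holds a hierarchy of abstraction and chains assertions into a conclusion that was never asserted directly. The toy facts and helper names are assumptions for illustration, not from the talk.

```python
# A minimal sketch of a semantic graph holding hierarchies of abstraction
# and concatenating assertions to arrive at a new conclusion.
from collections import defaultdict

is_a = defaultdict(set)              # edges of a tiny "is-a" hierarchy

def assert_is_a(child, parent):
    is_a[child].add(parent)

assert_is_a("penguin", "bird")       # particular -> abstraction
assert_is_a("bird", "animal")        # abstraction -> higher abstraction

def entails(child, ancestor):
    """Chain is-a assertions (transitive closure) to reach a new conclusion."""
    frontier, seen = [child], set()
    while frontier:
        node = frontier.pop()
        if node == ancestor:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(is_a[node])
    return False

print(entails("penguin", "animal"))  # True: derived, never asserted directly
```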
  11. Human Intelligence vs. Machine Intelligence: Analytical Cognition vs. Synthetic Cognition
      Analytical skills
      • Cognitive skills that machines excel at but that take intellectual effort from humans: mathematical calculation, making logical decisions in complex situations, chess
      • Computational intelligence: manipulation of symbols through algorithmic information processing; the processing device does not know or care about the “meaning” of a symbol; cognition by “information processing”, or cognition as computation
      Synthetic skills
      • Cognitive skills that humans perform effortlessly but that are hard for machines with current AI: interpreting subtle facial expressions, engaging in creative conversations, etc.
      • Conscious intelligence: symbol manipulation also happens at the lowest level of the hierarchical structure of brain function; the higher levels involve emergent concepts, where higher-level concepts/ideas combine and form complex wholes (by analogy with a ‘cloud’: a whole in relation to its component air and water molecules); it is at this level of cognition that “understanding of meaning” arises
      Ref.: Eric Lord, Science, Mind and Paranormal Experience, 2009
  12. Cognitive Assistant Vision: Augmenting Human Intelligence
      Cognitive capabilities (analytical abilities needed by a cognitive agent for higher-order tasks):
      • Discovery: create new insights and new value
      • Decision: provide bias-free advice semi-autonomously; learns and is proactive
      • Understanding: build and reason about models of the world, of the user, and of the system itself
      • Question answering: leverage encyclopedic domain knowledge in context, and interact in natural language
      Supporting skills: natural language processing and interaction skills, emotional intelligence skills, social interaction skills (Turing Test)
  13. Not Only a Personal Work Assistant, but a Society of Cogs
      • Two main types of cognitive assistants, personal work assistants and expert cogs, collaborate to support human activities.
      • Interaction types that need to be supported: cog-to-cog interactions, human-cog interactions, and cog-backed human-to-human interactions.
      • Cogs need degrees of emotional intelligence and social interaction skills to support cog-human and cog-backed human-to-human interactions.
      Diagram: Sara's personal cog/avatar and Debra interact with expert cogs (Financial Cog, Enterprise Process Cog, PR Cog, Deep Thunder, Crew Scheduler) and artifacts such as an outage model, consequence table, objective identification, sensitivity analysis, feeds, and mobile analytics and response.
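An illustrative sketch of two of the interaction types named here, human-cog and cog-to-cog: a personal cog delegates a query to a registered expert cog. The class names and the weather example are hypothetical, not from the talk.

```python
# Hypothetical message passing between a personal work assistant and an expert cog.
class ExpertCog:
    """A narrow, domain-expert cog that answers queries from other cogs."""
    def __init__(self, domain, answer_fn):
        self.domain, self.answer_fn = domain, answer_fn

    def handle(self, query):                  # cog-to-cog interface
        return self.answer_fn(query)

class PersonalCog:
    """A personal work assistant that delegates to expert cogs."""
    def __init__(self):
        self.experts = {}

    def register(self, cog):
        self.experts[cog.domain] = cog

    def ask(self, domain, query):             # human-cog interface
        expert = self.experts.get(domain)
        return expert.handle(query) if expert else "No expert cog for this domain."

weather_cog = ExpertCog("weather", lambda q: "Rain expected; reschedule the outdoor event?")
saras_cog = PersonalCog()
saras_cog.register(weather_cog)
print(saras_cog.ask("weather", "Forecast for Friday's event?"))
```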
  14. A Major Challenge in Passing the Turing Test and Building Cogs: Building Domain Knowledge Bases
      • “For an artifact, a computational intelligence, to be able to behave with high levels of performance on complex intellectual tasks, perhaps surpassing human level, it must have extensive knowledge of the domain.”
      • The challenge for AI in making progress toward building human-like artifacts: knowledge representation and, especially, knowledge acquisition.
      • Approaches: build a large knowledge base by reading text; distill a huge knowledge base from the WWW.
      • Semantic Web and Linked Data methods over the last decade have extensively explored building the models, ontologies, and rule sets that contribute to WWW knowledge representation.
        – Manual and semi-automated, focused on curated ontologies
        – Community participation in building ontologies has resulted in the creation of large knowledge bases: DBpedia, YAGO, Wikidata, Freebase, MediaWiki, etc.
        – Ontologies are expensive to build and scale, and are generic in nature
      Edward A. Feigenbaum, Some Challenges and Grand Challenges for Computational Intelligence, Journal of the ACM, Vol. 50, No. 1, January 2003, pp. 32–40
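As a concrete illustration of tapping one of these community-built Linked Data knowledge bases rather than hand-curating an ontology, the sketch below queries DBpedia's public SPARQL endpoint. It assumes network access and that the endpoint is reachable; it is not part of the talk.

```python
# Minimal Linked Data lookup against DBpedia's public SPARQL endpoint.
import requests

ENDPOINT = "https://dbpedia.org/sparql"
QUERY = """
SELECT ?type WHERE {
  <http://dbpedia.org/resource/IBM> a ?type .
} LIMIT 5
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "application/sparql-results+json"},
    timeout=30,
)
for binding in resp.json()["results"]["bindings"]:
    print(binding["type"]["value"])   # RDF types asserted for the IBM resource
```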
  15. Lessons Learned from Jeopardy! in Watson (1)
      • “The Watson program is already a breakthrough technology in AI. For many years it had been largely assumed that for a computer to go beyond search and really be able to perform complex human language tasks it needed to do one of two things: either it would ‘understand’ the texts using some kind of deep ‘knowledge representation,’ or it would have a complex statistical model based on millions of texts.” (James Hendler, Watson goes to college: How the world's smartest PC will revolutionize AI, GigaOm, 3/2/2013)
      • Breakthrough 1: developing a systematic approach to scalable knowledge building over large, less reliable data sources.
        – Building and curating a robust, comprehensive knowledge base and rule set has been a key challenge in expert systems.
        – Watson's approach of building on massive, mixed (curated and non-curated), less reliable information sources with uncertainty has proved effective.
      Source: Inquire intelligent textbook
  16. Lessons Learned from Jeopardy! in Watson (2)
      • Breakthrough 2: leveraging a large number of individually imperfect techniques while delivering higher overall accuracy by understanding and employing confidence levels.
      Figure: comparison of two QA systems, one with a perfect confidence estimator and one without; both have an accuracy of 40%.
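A back-of-the-envelope sketch of the point: two systems with the same raw accuracy behave very differently if one can estimate confidence and abstain below a threshold. The toy answer list and confidence values below are made up for illustration.

```python
# Trading coverage for precision by abstaining on low-confidence answers.
answers = [  # (is_correct, confidence reported by the system)
    (True, 0.92), (True, 0.88), (False, 0.35), (False, 0.30), (True, 0.81),
    (False, 0.22), (True, 0.79), (False, 0.41), (False, 0.28), (False, 0.15),
]

def coverage_and_precision(answers, threshold):
    attempted = [ok for ok, conf in answers if conf >= threshold]
    coverage = len(attempted) / len(answers)
    precision = sum(attempted) / len(attempted) if attempted else 0.0
    return coverage, precision

print(coverage_and_precision(answers, 0.0))  # answer everything: 100% coverage, 40% precision
print(coverage_and_precision(answers, 0.7))  # abstain when unsure: 40% coverage, 100% precision
```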
  17. Opportunity and Challenge (1): An Explosive Amount of Data
      • 80% of the world's data today is unstructured
      • 90% of the world's data was created in the last two years
      • 1 trillion connected devices generate 2.5 quintillion bytes of data per day
      • 3M+ apps on the leading app stores
  18. Opportunity and Challenge (2): Cognitive Methods and Tools. Cognitive Computing as a Service: Watson in IBM Bluemix
      Available today:
      • Visualization Rendering: graphical representations of data analysis for easier understanding
      • User Modeling: personality profiling to help engage users on their own terms
      • Language Identification: identifies the language in which a text is written
      • Machine Translation: translates text from one language to another
      • Concept Expansion: maps euphemisms to more commonly understood phrases
      • Message Resonance: communicate with people in a style and with words that suit them
      • Question and Answer: direct responses to user inquiries fueled by primary document sources
      • Relationship Extraction: intelligently finds relationships between sentence components
      Coming: Concept Analytics, Question Generation, Speech Recognition, Text to Speech, Tradeoff Analytics, Medical Information Extraction, Semantic Expansion, Policy Knowledge, Ontology Creation, Q&A in other languages, Policy Evaluation, Inference Detection, Social Resonance, Answer Assembler, Relationship Identification, Dialog, Machine Translation (French), Smart Metadata, Visual Recommendation, industry accelerators
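To show how an application might consume one such cognitive method as a service, here is a generic REST sketch. The endpoint URL, credentials, and response field below are placeholders, not the actual Bluemix/Watson API contract.

```python
# Generic shape of a "cognitive method as a service" call; all specifics are placeholders.
import requests

SERVICE_URL = "https://example-host/language-identification/api/v1/identify"  # placeholder
CREDENTIALS = ("service-username", "service-password")                        # placeholder

def identify_language(text):
    resp = requests.post(SERVICE_URL, auth=CREDENTIALS, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("language")   # assumed response field

if __name__ == "__main__":
    print(identify_language("Les assistants cognitifs augmentent l'intelligence humaine."))
```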
  19. Open Challenges (1)
      • Building the knowledge base and training cognitive agents
        – How does the user train the cog? How does the user delegate to the cog?
      • Adaptation and training of cogs for a new domain
        – How can a cog quickly be trained for a new domain? Current approaches are laborious and tedious.
      • Performance dimensions and an evaluation framework (one possible shape is sketched below)
        – Metrics for testing and validating the functionality of a cog
        – Are controlled experiments possible? Do we need to test in a real environment with real users?
      • User adoption/trust, and privacy
        – Can I trust that the cog did what I told/taught/think it did? Is the cog working for me?
        – Issues of privacy, and privacy-preserving interaction of cogs
      • Team vs. personal cogs
        – Training based on best practices vs. personalized instruction
        – Imagine teams of cogs working with teams of human analysts
      • Symbiosis issues: what is best for the human to do? What is best for the cog?
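One possible concrete shape for such performance dimensions, as a hedged sketch: score a cog against a labelled log of delegated tasks. The dimensions (completion, correctness, autonomy) and the log format are assumptions for illustration, not an established IBM framework.

```python
# Scoring a cog against a labelled log of delegated tasks.
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    completed: bool             # did the cog finish the delegated task?
    correct: bool               # was the result what the user wanted?
    needed_intervention: bool   # did a human have to step in?

def evaluate(log):
    n = len(log)
    return {
        "completion_rate": sum(t.completed for t in log) / n,
        "correctness_rate": sum(t.correct for t in log) / n,
        "autonomy_rate": sum(not t.needed_intervention for t in log) / n,
    }

log = [
    TaskOutcome(True, True, False),
    TaskOutcome(True, False, True),
    TaskOutcome(False, False, True),
]
print(evaluate(log))
```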
  20. Open Challenges (2)
      • Teaching the cog what to do
        – Learning from demonstration, learning from documentation
        – Telling the cog what to do using natural language
        – Interactive learning, where the cog may ask questions of the trainer (sketched below)
        – How does the cog learn what to do, reliably?
        – Active learning, where the cog improves over time: moving up the learning curve (how does the cog understand the goal/desired end state?) and adapting as the environment changes (e.g., data sources and formats)
        – Under what conditions should the cog report back to the human?
        – Task composition (from subtasks) and reuse
        – Adaptation of past learning to new situations
      • Proactive action taking
        – Initiating actions based on learning and incoming requests, e.g., deciding which information sources to search for a request, issuing queries, and evaluating responses
        – Deciding on next steps based on results, or whether further guidance from the human is needed
      • Personal knowledge representation and reasoning
        – Capturing user behavior and interaction in the form of personal knowledge
        – Ability to build knowledge from various structured and unstructured information
        – AI principle: an expert knows 70,000 ± 20,000 pieces of information, and human tasks involve on the order of 10^10 rules (foundations of AI, 1988)
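A minimal sketch of the "interactive learning where the cog may ask questions of the trainer" idea: the cog applies rules it has been shown, and when none applies it asks, then remembers the answer. The rule store, keyword matching, and example wording are illustrative assumptions.

```python
# Demonstration plus ask-the-trainer fallback for a teachable cog.
class TeachableCog:
    def __init__(self):
        self.rules = {}                        # condition keyword -> action

    def demonstrate(self, keyword, action):    # learning from demonstration
        self.rules[keyword] = action

    def handle(self, request, ask_trainer):
        for keyword, action in self.rules.items():
            if keyword in request.lower():
                return action
        # No rule applies: interactive learning, ask and remember.
        action = ask_trainer(f"What should I do for: '{request}'?")
        self.demonstrate(request.lower(), action)
        return action

cog = TeachableCog()
cog.demonstrate("expense report", "route to finance approval")

def trainer(question):                         # stands in for a real dialog with the user
    return "schedule a follow-up reminder"

print(cog.handle("Please file my expense report", trainer))     # matched demonstrated rule
print(cog.handle("Book a room for the team offsite", trainer))  # asks the trainer, then learns
```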
  21. Open Challenges (3)
      • Context understanding and context-aware interaction
        – Modeling the world of the person being served, including all context around the work/task, and using that contextual and environmental awareness to act proactively and reactively on behalf of the user
      • Learning to understand a task and plan how to do it
        – Understanding the meaning of tasks and coming up with a response (e.g., how many people replied to an email invite accepting the offer, without the user asking the cog to do so; see the sketch below), or with suggestions on how to achieve the task (based on any new information discovered by the cog)
      • Cognitive speech recognition, and other human-computer interfaces for communicating with cogs
        – Improving speech-to-text techniques, and personalized, semantically enriched speech understanding
        – Non-speech-based approaches for communicating with humans
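A toy rendering of the invite example above: without being asked, the cog scans reply emails and tallies acceptances. The email structure and the keyword heuristic are assumptions for illustration, not the talk's method.

```python
# Tallying acceptances from invite replies with a simple keyword heuristic.
from collections import Counter

ACCEPT_CUES = ("accept", "i'll be there", "count me in", "yes")
DECLINE_CUES = ("decline", "can't make it", "won't be able", "no")

def classify_reply(body):
    text = body.lower()
    if any(cue in text for cue in ACCEPT_CUES):
        return "accepted"
    if any(cue in text for cue in DECLINE_CUES):
        return "declined"
    return "unclear"

replies = [
    {"from": "alice@example.com", "body": "Count me in!"},
    {"from": "bob@example.com", "body": "Sorry, I can't make it that week."},
    {"from": "carol@example.com", "body": "Will confirm next Monday."},
]

tally = Counter(classify_reply(r["body"]) for r in replies)
print(dict(tally))   # {'accepted': 1, 'declined': 1, 'unclear': 1}
```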
  22. Learning from the Jeopardy! Challenge
      • Back in 2006, DeepQA (question answering) involved addressing key challenges.
      • On Feb 27-28, 2008, a group of researchers and practitioners from industry, academia, and government met to discuss the state of the question answering (QA) field.
      • The result was a document (published in 2009) that included:
        – A vision for QA systems and DeepQA
        – Challenge problems with measurable dimensions
        – An approach to open collaboration, and an open collaboration model
      • Defining performance dimensions; challenge problem set comparison
  23. Call for Enabling an Open Collaboration Model on Cognitive Assistants
      • The open collaboration model enables sharing knowledge, expertise, datasets, and progress, and devising interoperable solution components:
        – Challenge problem set comparison
        – Defining and developing performance dimensions
        – An open platform for sharing data, testbeds, and comparative analysis
      • Building on the Watson Ecosystem for partners and the Watson University Program for academic partners, toward a Cognitive Assistant Open Collaboration Platform or similar open platforms for collaboration, building on open-source cog projects
      • Building open platforms similar to the Watson Content Marketplace, Watson Ecosystem, and Watson University Programs
  24. Thank you! Questions?
  25. Backup
  26. Example: Automatic Task Extraction and Management over Unstructured Communications
      • Email, chat, and calendaring apps are the most-used channels for doing work in the enterprise.
      • The goal of this project was to monitor communication channels (email, chat) to capture and organize the work (tasks) of an employee.
      • Lifecycle of a typical task: Create, Active, Complete, Cancel (sketched below).
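A minimal sketch of that task lifecycle as a small state machine. The transition table below is an illustrative assumption, not the project's actual model.

```python
# Create/Active/Complete/Cancel lifecycle as a simple state machine.
ALLOWED = {
    "created":   {"active", "cancelled"},
    "active":    {"completed", "cancelled"},
    "completed": set(),
    "cancelled": set(),
}

class Task:
    def __init__(self, description):
        self.description = description
        self.state = "created"

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"Cannot go from {self.state} to {new_state}")
        self.state = new_state

task = Task("Send Q3 report to the team")
task.transition("active")
task.transition("completed")
print(task.state)   # completed
```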
  27. Cognitive Assistant for Task Management
      • Processing the text of conversations (email, chat, etc.) with deep parsing to extract tasks and manage their lifecycle.
      Anup K. Kalia, Hamid R. Motahari Nezhad, Claudio Bartolini, Munindar P. Singh: Monitoring Commitments in People-Driven Service Engagements. IEEE SCC 2013: 160-167
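The cited work relies on deep parsing of conversation text; as a lightweight stand-in, the sketch below uses a simple request-verb pattern just to show the shape of the extraction step. The pattern and example message are assumptions, not the paper's method.

```python
# Extracting candidate task descriptions from an email with a request-verb pattern.
import re

REQUEST_PATTERN = re.compile(
    r"\b(?:can|could|would|will) you\s+(?P<task>[^?.!]+)", re.IGNORECASE)

def extract_tasks(message):
    """Return candidate task descriptions found in one email/chat message."""
    return [m.group("task").strip() for m in REQUEST_PATTERN.finditer(message)]

email = ("Hi Sara, could you send the updated slides by Friday? "
         "Also, can you book a room for the review meeting.")
print(extract_tasks(email))
# ['send the updated slides by Friday', 'book a room for the review meeting']
```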
  28. Where did Watson acquire knowledge?
      • Wikipedia: 17 GB
      • Time, Inc.: 2.0 GB
      • New York Times: 7.4 GB
      • Encarta: 0.3 GB
      • Oxford University: 0.11 GB
      • Internet Movie Database: 0.1 GB
      • IBM Dictionary: 0.01 GB
      • ...: XXX
      • J! Archive / YAGO / DBpedia ...
      • Total raw content: 70 GB; preprocessed content: 500 GB
      Three types of knowledge: domain data (articles, books, documents); training and test question sets with answer keys; NLP resources (vocabularies, taxonomies, ontologies)
  29. Use Case: A Work Assistant for Knowledge Workers
      Assume an executive admin is managing an event-organization process for their department.
      • Step 1: sending an event invite to employees in the department through email, with a request to RSVP
        – Cog (1), Q&A for the admin: how many have confirmed, how many are pending, how many have not answered?
        – Cog (2), predictive analytics: how many will eventually RSVP?
        – Cog (3), diagnostic analytics: why have some not accepted (customers, in a marketing scenario)?
      • Step 2: ordering the venue, food, transportation, etc.
        – Cog (1): tracking the process steps: which vendors have replied, which are pending, which have questions, etc.
        – Cog (2): keeping dates, amounts, numbers, etc. synchronized and consistent across the different steps (sketched below)
      • Step 3: pre-event steps (self-discipline and organization)
        – Reminding people who have RSVPed
        – Compiling and sending logistics information (from the different steps)
      • Learning changes to the process
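A sketch of the consistency-tracking part of Step 2: compare the key fields each vendor confirmed against the event plan and flag mismatches. The record layout and example values are assumptions for illustration.

```python
# Flagging inconsistencies between the event plan and vendor confirmations.
EVENT_PLAN = {"date": "2015-04-10", "headcount": 120}

vendor_confirmations = [
    {"vendor": "catering", "date": "2015-04-10", "headcount": 120},
    {"vendor": "venue",    "date": "2015-04-11", "headcount": 120},   # wrong date
    {"vendor": "shuttle",  "date": "2015-04-10", "headcount": 100},   # wrong headcount
]

def find_inconsistencies(plan, confirmations):
    issues = []
    for conf in confirmations:
        for field, expected in plan.items():
            if conf.get(field) != expected:
                issues.append(f"{conf['vendor']}: {field} is {conf.get(field)}, expected {expected}")
    return issues

for issue in find_inconsistencies(EVENT_PLAN, vendor_confirmations):
    print(issue)
```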
