
Cognitive Assistants - Opportunities and Challenges - slides

Slides presented in a working session of the Cognitive Assistant special interest community, covering work, enterprise, and government areas.

  1. Cognitive Assistants: Opportunities and Challenges
     Hamid R. Motahari Nezhad, IBM Almaden Research Center, San Jose, CA, USA
     With inputs and contributions from: Jim Spohrer, IBM Research; Frank Stein, IBM Analytics CTO
     © 2014 IBM Corporation
  2. Cognitive Assistant: what is it?
     • A software agent that
       – "augments human intelligence" (Engelbart's definition in Augmenting Human Intellect: A Conceptual Framework, October 1962)
       – performs tasks and offers services (assists humans in decision making and in taking actions)
       – complements humans by offering capabilities beyond their ordinary power and reach (intelligence amplification)
     • A more technical definition
       – A Cognitive Assistant offers computational capabilities, typically based on Natural Language Processing (NLP), Machine Learning (ML), and reasoning chains over large amounts of data, that provide cognitive powers which augment and scale human intelligence
     • Getting us closer to the vision painted for human-machine partnership in 1960:
       – "The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today" (Man-Computer Symbiosis, J. C. R. Licklider, IRE Transactions on Human Factors in Electronics, volume HFE-1, pages 4-11, March 1960)
  3. Human Intelligence in terms of Cognitive Abilities
     Ability to ... / Achievable by machines today?
     – draw abstractions from particulars: Partially (semantic graphs*)
     – maintain hierarchies of abstraction: Partially (semantic graphs*)
     – concatenate assertions and arrive at a new conclusion: Partially (relationships present)
     – reason outside the current context: Not proactively
     – compare and contrast two representations for consistency/inconsistency: Limited
     – reason analogically: Not automated; requires domain adaptation
     – learn and use external symbols to represent numerical, spatial, or conceptual information: Better than humans in symbolic representation and processing
     – learn and use symbols whose meanings are defined in terms of other learned symbols: Uses and processes them; limited learning
     – invent and learn terms for abstractions as well as for concrete entities: No language development capability
     – invent and learn terms for relations as well as things: Partially, using symbols, not cognitive
     (A toy sketch of an abstraction hierarchy in a semantic graph follows below.)
     Gentner, D. (2003). In D. Gentner & S. Goldin-Meadow (eds.), Language in Mind: Advances in the Study of Language and Thought. MIT Press, 195-235.
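To make the "semantic graphs maintain hierarchies of abstraction" row concrete, here is a toy Python sketch of a semantic graph whose is-a edges let a program walk from a particular up to its abstractions. The graph contents and names are illustrative assumptions, not anything from the deck.

```python
# Toy illustration: a semantic graph whose "is-a" edges maintain a hierarchy
# of abstractions, so particulars can be generalized. Names are made up.
from collections import defaultdict

class SemanticGraph:
    def __init__(self):
        self.is_a = defaultdict(set)  # node -> set of more abstract parents

    def add_is_a(self, child, parent):
        self.is_a[child].add(parent)

    def abstractions_of(self, node):
        """Walk is-a edges upward to collect every abstraction of a particular."""
        seen, stack = set(), [node]
        while stack:
            for parent in self.is_a[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

g = SemanticGraph()
g.add_is_a("Watson", "question-answering system")
g.add_is_a("question-answering system", "cognitive assistant")
g.add_is_a("cognitive assistant", "software agent")

print(g.abstractions_of("Watson"))
# e.g. {'question-answering system', 'cognitive assistant', 'software agent'}
```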
  4. History of Cognitive Assistants through the lens of AI
     – 1945: Memex (Bush)
     – 1950: Turing Test; thinking machines
     – 1955/56: Logic Theorist (Newell & Simon, 1955); Checkers Player (Samuel, 1956)
     – 1962: NLS/Augment (Engelbart)
     – 1966: Eliza (Weizenbaum)
     – 1965-1987: Expert systems: DENDRAL (1965-1987), MYCIN (1974-1984)
     – 1987: Cognitive Tutors (Anderson); Apple's Knowledge Navigator
     – 1992-1998: Virtual telephone assistants: Portico, Wildfire, Webley; speech recognition, voice control
     – 2002-08: DARPA PAL Program: CALO, IRIS
  5. Modern Cognitive Assistants: State of the Art (2008-present)
     • Commercial
       – Personal assistants: Siri, Google Now, Microsoft Cortana, Amazon Echo; Braina, Samsung's S Voice, LG's Voice Mate, SILVIA, HTC's Hidi, Nuance's Vlingo; AIVC, Skyvi, IRIS, Everfriend, Evi (Q&A), Alme (patient assistant); Viv (Global Brain as a Service)
       – Cognitive systems and platforms: IBM Watson, Wolfram Alpha, Saffron 10, Vicarious (CAPTCHA)
     • Open source / research
       – OAQA, DeepDive, OpenCog, YodaQA, OpenSherlock, OpenIRIS, iCub, EU projects, Cougaar, Inquire* (intelligent textbook)
     * Curated knowledge base
  6. Cognitive Assistant Vision: Augmenting Human Intelligence
     Cognitive capability:
     • Discovery: create new insights and new value
     • Decision: provide bias-free advice semi-autonomously, learn, and be proactive
     • Understanding: build and reason about models of the world, of the user, and of the system itself
     • Question Answering: leverage encyclopedic domain knowledge in context, and interact in natural language
  7. Building a Society of Cognitive Agents
     • Systems of specialized cognitive agents that collaborate effectively with one another
     • Cognitive agents that collaborate effectively with people through natural user interfaces
     • A nucleus from which an internet-scale cognitive computing cloud can be built
     [Diagram: example agents and interactions, including Outage Model, Consequence Table, Smart Swaps, Lighting Objective Identification, Sensitivity Analysis, Sentiment Analysis, Personal Avatar, Deep Thunder, Crew Scheduler, News, Watson Mobile, Analytics and Response; human-to-human, cognitive-agent-to-agent, and cognitive-agent-to-human interactions]
  8. Cognitive Assistance for Knowledge Workers
     • Cognitive case management is about providing cognitive support to knowledge workers handling customer cases in domains such as social care, legal, government services, citizen services, etc.
     • Handling and managing cases involves understanding policies, laws, rules, regulations, processes, and plans, as well as customers, the surrounding world, news, social networks, etc.
     • A cognitive agent would assist both employees and customers (from each perspective); a toy decision-support sketch follows this slide
       – Assists employees/workers by providing decision support based on understanding the case, its context, the surrounding world, and the applicable laws/rules/processes
       – Helps employees/workers be more productive (taking care of routine tasks) and more effective
       – Assists citizens by empowering them to know their rights and responsibilities, and helping them expedite the progress of their case
     [Diagram: users (employees/agents and customers), the organization's cognitive agent, and its inputs: plans, workflows, rules, policies, regulations, templates, instructions/procedures, applications, schedules, communications such as email, chat, and social media, and unstructured linked information]
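As a deliberately simplified illustration of the decision-support idea above, the following Python sketch matches a hypothetical case record against a handful of made-up policy rules and surfaces the applicable suggestions to the case worker. The field names, thresholds, and rules are invented for the example and are not from the deck.

```python
# Toy decision support for cognitive case management: evaluate a case against
# a small rule set and return the advice of every rule that applies.
case = {
    "type": "social_care",
    "household_income": 14_000,
    "dependents": 2,
    "documents_missing": ["proof_of_address"],
}

rules = [
    ("income_support_eligible",
     lambda c: c["type"] == "social_care" and c["household_income"] < 20_000,
     "Case likely qualifies for income support; prepare eligibility form."),
    ("dependent_allowance",
     lambda c: c.get("dependents", 0) >= 2,
     "Two or more dependents: check dependent allowance."),
    ("blocked_on_documents",
     lambda c: bool(c.get("documents_missing")),
     "Request missing documents before the case can progress."),
]

def suggestions(case, rules):
    """Return the advice text of every rule whose condition holds for the case."""
    return [advice for name, condition, advice in rules if condition(case)]

for s in suggestions(case, rules):
    print("-", s)
```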
  9. Learning from an Experience: the Jeopardy! Challenge
     • Back in 2006, DeepQA (Question Answering) involved addressing key challenges
     • On Feb 27-28, 2008, a group of researchers and practitioners from industry, academia, and government met to discuss the state of the Question Answering (QA) field
     • The result was a document (published in 2009) that included:
       – A vision for QA systems and DeepQA
       – Development of challenge problems with measurable dimensions
       – An approach to open collaboration
       – An open collaboration model
     [Figures: Defining Performance Dimensions; Challenge Problem Set Comparison]
  10. Lesson Learned from Jeopardy in Watson (1)
     • "The Watson program is already a breakthrough technology in AI. For many years it had been largely assumed that for a computer to go beyond search and really be able to perform complex human language tasks it needed to do one of two things: either it would 'understand' the texts using some kind of deep 'knowledge representation,' or it would have a complex statistical model based on millions of texts." (James Hendler, "Watson goes to college: How the world's smartest PC will revolutionize AI", GigaOm, 3/2/2013)
     • Breakthrough:
       – Developing a systematic approach for scalable knowledge building over large, less reliable data sources, and deploying a large array of individually imperfect techniques to find the right answers
       – Building and curating a robust and comprehensive knowledge base and ruleset has been a key challenge in expert systems
       – Watson's approach of building on massive, mixed (curated and non-curated), less reliable information sources, with uncertainty, has proved effective
     [Image source: Inquire intelligent textbook]
  11. Lesson Learned from Jeopardy in Watson (2)
     • Leveraging a large number of not-always-accurate techniques, but delivering higher overall accuracy by understanding and employing confidence levels (simulated below)
     [Figure: comparison of two QA systems, with and without confidence estimation; both have an accuracy of 40%]
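The following small Python simulation, written for this transcript rather than taken from Watson, illustrates the point of this slide: two systems with the same 40% raw accuracy, where the one whose confidence scores actually track correctness can decline low-confidence questions and achieve much higher precision on the questions it chooses to answer. The score model and threshold are assumptions.

```python
# Simulate two QA systems with identical 40% accuracy, differing only in
# whether their confidence estimate correlates with correctness.
import random
random.seed(0)

N = 10_000
questions = [random.random() < 0.40 for _ in range(N)]  # True = answered correctly
good_conf = [0.7 * c + 0.3 * random.random() for c in questions]  # tracks correctness
bad_conf  = [random.random() for _ in questions]                  # uninformative

def precision_when_answering(correct, conf, threshold):
    """Precision on attempted questions, plus the fraction attempted."""
    answered = [c for c, s in zip(correct, conf) if s >= threshold]
    return sum(answered) / len(answered), len(answered) / len(correct)

for name, conf in [("with confidence estimator", good_conf),
                   ("without confidence estimator", bad_conf)]:
    prec, frac = precision_when_answering(questions, conf, threshold=0.6)
    print(f"{name}: answers {frac:.0%} of questions, precision {prec:.0%}")
```

With the informative estimator the system answers roughly 40% of the questions at near-perfect precision; with the uninformative one it answers a similar fraction at only about 40% precision.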
  12. Opportunity Assessment (1): Building Knowledge from Data
     • 80% of the world's data today is unstructured
     • 90% of the world's data was created in the last two years
     • 1 trillion connected devices generate 2.5 quintillion bytes of data per day
     • 3M+ apps on leading app stores
  13. Opportunity Assessment (2): Cognitive Techniques and Tools
     Cognitive Computing as a Service: Watson in IBM BlueMix
     • Available today:
       – Question and Answer: direct responses to user inquiries fueled by primary document sources
       – Relationship Extraction: intelligently finds relationships between sentence components
       – Concept Expansion: maps euphemisms to more commonly understood phrases
       – Message Resonance: communicate with people in a style and with words that suit them
       – User Modeling: personality profiling to help engage users on their own terms
       – Language Identification: identifies the language in which text is written
       – Machine Translation: translates text from one language to another
       – Visualization Rendering: graphical representations of data analysis for easier understanding
     • Coming: Concept Analytics, Question Generation, Speech Recognition, Text to Speech, Tradeoff Analytics, Medical Information Extraction, Semantic Expansion, Policy Knowledge, Ontology Creation, Q&A in other languages, Policy Evaluation, Inference Detection, Social Resonance, Answer Assembler, Relationship Identification, Dialog, Machine Translation (French), Smart Metadata, Visual Recommendation, industry accelerators
     (A hypothetical REST call to one of these services is sketched below.)
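Below is a hypothetical sketch of how a client might call one of these hosted services over REST from Python. The endpoint URL, credentials, and JSON fields are placeholders invented for illustration; the actual Watson/BlueMix services have their own documented endpoints and payloads.

```python
# Hypothetical REST call to a hosted question-answering service.
# SERVICE_URL, CREDENTIALS, and the JSON payload are placeholders,
# not the real Watson/BlueMix API.
import requests

SERVICE_URL = "https://example-cognitive-service/api/v1/question"  # placeholder
CREDENTIALS = ("service-username", "service-password")             # placeholder

def ask(question_text):
    """Send a natural-language question and return the service's JSON response."""
    response = requests.post(
        SERVICE_URL,
        json={"question": question_text, "items": 3},  # assumed payload shape
        auth=CREDENTIALS,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # assumed to contain candidate answers with confidences

if __name__ == "__main__":
    print(ask("What services are available in the cognitive catalog?"))
```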
  14. Open Challenges (1)
     • Building the knowledge base and training cognitive agents
       – How does the user train the Cog?
       – How does the user delegate to the Cog?
     • Adaptation and training of Cogs for a new domain
       – How do we quickly train a Cog for a new domain? Current approaches are laborious and tedious.
     • Performance dimensions and evaluation framework
       – Metrics, testing, and validating the functionality of a Cog
       – Are controlled experiments possible?
       – Do we need to test in a real environment with real users?
     • User adoption/trust, and privacy
       – Can I trust that the Cog did what I told/taught/think it did?
       – Is the Cog working for me?
       – Issues of privacy and privacy-preserving interaction of Cogs
     • Team vs. personal Cogs
       – Training based on best practices vs. personalized instruction
       – Imagine teams of Cogs working with teams of human analysts
     • Symbiosis issues
       – What is best for the human to do? What is best for the Cog?
  15. Open Challenges (2)
     • Teaching the Cog what to do
       – Learning from demonstration; learning from documentation
       – Telling the Cog what to do using natural language
       – Interactive learning, where the Cog may ask questions of the trainer (a minimal sketch follows this slide)
       – How does the Cog learn what to do, reliably?
       – Active learning, where the Cog improves over time
         • Moving up the learning curve (how does the Cog understand the goal/desired end state?)
         • Adapting as the environment changes (e.g., data sources and formats change)
       – Under what conditions should the Cog report back to the human?
       – Task composition (of subtasks) and reuse
       – Adaptation of past learning to new situations
     • Proactive action taking
       – Initiating actions based on learning and incoming requests
         • E.g., deciding which information sources to search for the request, issuing queries, evaluating responses
       – Deciding on next steps based on results, or whether further guidance is needed from the human
     • Personal knowledge representation and reasoning
       – Capturing user behavior and interaction in the form of personal knowledge
       – Ability to build knowledge from various structured and unstructured information
       – AI principle: an expert knows 70,000 ± 20,000 information pieces, and human tasks involve on the order of 10^10 rules (foundations of AI, 1988)
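As a minimal sketch of the "Cog asks questions of the trainer" idea, the following Python snippet does uncertainty sampling: the Cog keeps a crude model, scores an unlabeled pool, and asks the (simulated) human trainer to label the example it is least sure about. The data, model, and labeling rule are all invented for the example.

```python
# Interactive learning sketch: the Cog queries the trainer about the example
# whose predicted class probability is closest to 0.5 (most uncertain).
import random
random.seed(1)

# Unlabeled pool of 1-D feature values; the hidden true rule is "label 1 if x > 0.5".
pool = [random.random() for _ in range(50)]
labeled = [(0.05, 0), (0.95, 1)]  # two seed examples provided by the trainer

def predict_prob(x, labeled):
    """Crude model: probability of class 1 from distance to the labeled class means."""
    mean0 = sum(v for v, y in labeled if y == 0) / sum(1 for _, y in labeled if y == 0)
    mean1 = sum(v for v, y in labeled if y == 1) / sum(1 for _, y in labeled if y == 1)
    d0, d1 = abs(x - mean0), abs(x - mean1)
    return d0 / (d0 + d1 + 1e-9)

def ask_trainer(x):
    """Stand-in for the human trainer answering the Cog's question."""
    return int(x > 0.5)

for round_ in range(5):
    query = min(pool, key=lambda x: abs(predict_prob(x, labeled) - 0.5))
    pool.remove(query)
    labeled.append((query, ask_trainer(query)))
    print(f"round {round_}: asked trainer about x={query:.2f}")
```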
  16. Open Challenges (3)
     • Context understanding and context-aware interaction
       – Modeling the world of the person being served, including all context around the work/task, and being able to use contextual and environmental awareness to act proactively and reactively on behalf of the user
     • Learning to understand a task and plan how to do it
       – Understanding the meaning of tasks, and coming up with a response (e.g., counting how many people replied to an email invite accepting the offer, without being asked to do so), or suggestions on how to achieve the task (based on any new information discovered by the Cog)
     • Cognitive speech recognition, and other human-computer interfaces for communicating with Cogs
       – Improving speech-to-text techniques, and personalized, semantically enriched speech understanding
       – Non-speech-based approaches for communicating with humans
  17. THANK YOU! Questions?
