From TREC to Watson: is open domain question answering a solved problem?

Invited talk at KEPT 2011, Cluj-Napoca, Romania discussing the current state-of-the-art in question answering.


1. Constantin Orasan
   Research Group in Computational Linguistics, University of Wolverhampton, UK
   http://www.wlv.ac.uk/~in6093/
   From TREC to Watson: is open domain question answering a solved problem?

2. Structure of the talk
   - Brief introduction to QA
   - Video 1: Where are we now – IBM Watson
   - The structure of a QA system
   - Video 2: Watson vs. humans
   - Overview of Watson
   - QA from the point of view of users/companies
   - Conclusions

3. Information overload
   "Getting information off the Internet is like taking a drink from a fire hydrant" – Mitchell Kapor

4. What is question answering?
   - A way to address the problem of information overload
   - Question answering aims to identify the answer to a question posed in natural language in a large collection of documents
   - The information provided by QA is more focused than that provided by information retrieval
   - The output can be the exact answer or a text snippet which contains the answer
   - The domain took off as a result of the introduction of the QA track at TREC, and cross-lingual QA as a result of CLEF

5. Types of QA systems
   - Open-domain QA systems: can answer any question from any collection
     + can potentially answer any question
     - very low accuracy (especially in cross-lingual settings)
   - Canned QA systems: rely on a very large repository of questions for which the answer is known
     + very little language processing necessary
     - limited to the answers in the database
   - Closed-domain QA systems: built for very specific domains, exploiting expert knowledge in them
     + very high accuracy
     - can require extensive language processing and are limited to one domain

6. Evolution of the QA domain
   - Early QA systems
     - date back to the 1960s and were mainly front ends to databases
     - had limited usability
   - Open-domain QA
     - emerged as a result of the increasing amount of data available
     - to answer a question, the system needs to find and extract the answer
     - developed in the late 1990s as a result of the QA track at the Text REtrieval Conference (TREC)
     - emphasis on factoid questions, but other types of questions were also explored
     - CLEF competitions have encouraged the development of cross-lingual systems

7. Where are we now?
   - IBM and the Jeopardy! Challenge
   - Jeopardy! is an American quiz show where participants are given clues and need to guess the question (e.g. for the clue "The Father of Our Country; he didn't really chop down a cherry tree" the contestant would respond "Who is George Washington?")
   - Watson is a QA system developed by IBM
   - http://www.youtube.com/watch?v=FC3IryWr4c8

8. Structure of an open-domain QA system
   A typical open-domain QA system consists of:
   - a question processor
   - a document processor
   - an answer extractor (and validation)
   It can also have components for cross-lingual processing and has access to several external resources. A minimal sketch of this pipeline follows.

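A minimal sketch of the three-stage pipeline in Python. All names and the toy logic are illustrative, not from the talk; a real system replaces each stage with far richer components (parsers, NER, retrieval engines, answer validation).

```python
# Sketch of the classic question processor -> document processor ->
# answer extractor pipeline. Component names are hypothetical.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    expected_answer_type: str = ""   # EAT, filled in by the question processor
    query: list = None               # keywords for retrieval

def process_question(text: str) -> Question:
    """Question processor: determine the EAT and build a retrieval query."""
    q = Question(text)
    q.expected_answer_type = "PERSON" if text.lower().startswith("who") else "OTHER"
    q.query = [w for w in text.rstrip("?").split()
               if w.lower() not in {"who", "is", "the", "of"}]
    return q

def retrieve_paragraphs(query, collection):
    """Document processor: return paragraphs sharing words with the query."""
    return [p for p in collection if any(w.lower() in p.lower() for w in query)]

def extract_answer(question, paragraphs):
    """Answer extractor: trivially return the best-matching snippet."""
    return paragraphs[0] if paragraphs else None

docs = ["Traian Basescu is the president of Romania.", "Paris is in France."]
q = process_question("Who is the president of Romania?")
print(extract_answer(q, retrieve_paragraphs(q.query, docs)))
```

As the slide on QA output notes, the answer here is a snippet containing the answer rather than the exact answer string.
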
9. Question processor
   - Produces an interpretation of the question
   - Determines the question type (e.g. factoid, definition, procedure, etc.)
   - Determines the Expected Answer Type (EAT)
   - Produces a query on the basis of the question
   - Determines syntactic and semantic relations between the words of the question
   - Expands the query with synonyms
   - May translate the keywords of the query in the case of cross-lingual QA

10. Expected answer type calculation
   - Relies on the existence of an answer type taxonomy
   - The taxonomy can be made open-domain by linking it to general ontologies such as WordNet
   - The EAT can be determined using rule-based as well as machine learning approaches, e.g.:
     - Who is the president of Romania? (EAT: person)
     - Where is Paris? (EAT: location)
   - Knowledge of the domain can greatly improve the identification of the EAT and help deal with ambiguities
   A toy rule-based classifier is sketched below.

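A toy rule-based EAT classifier covering the two example questions. The taxonomy labels are illustrative; real taxonomies are much larger, often linked to WordNet, and ML classifiers are a common alternative.

```python
import re

# Toy mapping from question patterns to an answer type taxonomy.
EAT_RULES = [
    (re.compile(r"^who\b", re.I), "PERSON"),
    (re.compile(r"^where\b", re.I), "LOCATION"),
    (re.compile(r"^when\b", re.I), "DATE"),
    (re.compile(r"^how (many|much)\b", re.I), "QUANTITY"),
]

def expected_answer_type(question: str) -> str:
    for pattern, eat in EAT_RULES:
        if pattern.search(question):
            return eat
    return "UNKNOWN"   # ambiguous "what"/"which" questions need more context

print(expected_answer_type("Who is the president of Romania?"))  # PERSON
print(expected_answer_type("Where is Paris?"))                   # LOCATION
```
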
11. Query formulation
   - Produces a query from the question
     - as a list of keywords
     - as a list of phrases
   - Identifies entities present in the question
   - Produces variants of the query by introducing morphological, lexical and semantic variations
   - Domain knowledge is
     - very important for the identification of entities and the generation of valid variations, and
     - vital in cross-lingual scenarios
   A sketch of keyword extraction with synonym expansion follows.

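A sketch of query formulation with synonym expansion using NLTK's WordNet interface (assumes `pip install nltk` and the WordNet data downloaded). A real system would also add morphological variants and, for cross-lingual QA, translations of the keywords.

```python
# Sketch: keyword extraction plus WordNet synonym expansion.
# Requires: pip install nltk; then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

STOPWORDS = {"the", "is", "of", "a", "in", "who", "where", "what"}

def formulate_query(question: str) -> set:
    keywords = {w.lower().strip("?.,") for w in question.split()} - STOPWORDS
    expanded = set(keywords)
    for word in keywords:
        for synset in wn.synsets(word)[:2]:          # only the top senses
            for lemma in synset.lemma_names():
                expanded.add(lemma.replace("_", " ").lower())
    return expanded

print(formulate_query("Who invented the telephone?"))
# e.g. {'invented', 'invent', 'telephone', 'phone', 'telephone set', ...}
```
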
12. Document processing
   - Uses the query produced in the previous step to retrieve paragraphs which may contain the answer
   - Largely domain independent, as it relies on text retrieval engines
   - Ranks the results, but this is largely independent of the QA task
   - For limited collections of texts it is possible to enrich the index with various linguistic information which can help further processing
   - When the domain is known, characteristics of the input files can improve the retrieval (e.g. presence of metadata)
   A minimal retrieval step is sketched below.

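A minimal retrieval step using TF-IDF and cosine similarity via scikit-learn, standing in for a real text retrieval engine; the paragraphs and query are invented for illustration.

```python
# Sketch: rank paragraphs against the query with TF-IDF cosine similarity.
# A real QA system would delegate this to a retrieval engine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraphs = [
    "The telephone was invented in 1876 by Alexander Graham Bell.",
    "Paris is the capital of France.",
    "Romania joined the European Union in 2007.",
]

def retrieve(query: str, top_k: int = 2):
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(paragraphs)
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return sorted(zip(scores, paragraphs), reverse=True)[:top_k]

for score, para in retrieve("when telephone invented"):
    print(f"{score:.2f}  {para}")
```
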
13. Answer extraction
   - Uses a variety of techniques to identify the answer to a question
   - The answer should be of the expected answer type (EAT)
   - Very often relies on previously created patterns (e.g. When was the telephone invented? can be answered if there is a sentence that matches the pattern The telephone was invented in <date>)
   - Many patterns can express the same answer (e.g. the telephone, invented in <date>)
   - Relations identified in the question between the expected answer and the entities of the question can be exploited by patterns
   The telephone example is sketched as code below.

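The telephone example from the slide, rendered as regular-expression patterns; note how several pattern variants capture the same answer slot. A real system generalises the entity ("the telephone") rather than hard-coding it, and each EAT needs its own family of patterns.

```python
import re

# Surface patterns for "When was the telephone invented?" with EAT = DATE.
DATE = r"(?P<answer>\d{4})"
PATTERNS = [
    r"the telephone was invented in " + DATE,
    r"the telephone, invented in " + DATE,
    r"invented the telephone in " + DATE,
]

def extract_date_answer(sentence: str):
    for pattern in PATTERNS:
        match = re.search(pattern, sentence, re.I)
        if match:
            return match.group("answer")
    return None

print(extract_date_answer("The telephone was invented in 1876 by Bell."))  # 1876
```
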
14. Answer extraction (II)
   - Potential answers are ranked according to functions which are usually learned from the data
   - The ranking and validation of answers can be done using external sources such as the Internet
   - QA for well-defined domains can rely on better patterns
   - The learned functions usually work well only on the type of data used for training
   A sketch of feature-based ranking follows.

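A sketch of answer ranking as a weighted combination of features. The feature names and weights are invented; in practice the weights are learned from training data, which is exactly why they transfer poorly to other kinds of data.

```python
# Sketch: score candidate answers with a linear combination of features.
WEIGHTS = {"eat_match": 2.0, "pattern_score": 1.5, "web_redundancy": 1.0}

def rank_candidates(candidates):
    """candidates: list of (answer, feature_dict) pairs."""
    def score(item):
        _, feats = item
        return sum(WEIGHTS[name] * feats.get(name, 0.0) for name in WEIGHTS)
    return sorted(candidates, key=score, reverse=True)

candidates = [
    ("1876", {"eat_match": 1, "pattern_score": 0.9, "web_redundancy": 0.8}),
    ("1976", {"eat_match": 1, "pattern_score": 0.2, "web_redundancy": 0.1}),
]
print(rank_candidates(candidates)[0][0])  # 1876
```
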
15. Open-domain QA – evaluation
   - Great coverage, but low accuracy. For example:
     - The Ephyra QA system at TREC 2007 reported an accuracy of 0.20 for factoid questions (Schlaefer et al. 2007)
     - OpenEphyra was used for a cross-lingual Romanian–English QA system and we obtained 0.11 accuracy for factoid questions (Dornescu et al. 2008) – the best performing system across all cross-lingual QA tasks at CLEF 2008
   - The results are not directly comparable (different QA engines, tuned differently, different collections, different tasks)
   - But does it make sense to do open-domain question answering?

16. How did Watson perform?
   http://www.youtube.com/watch?v=Puhs2LuO3Zc

17. How was this achieved?
   - The starting point was the Practical Intelligent Question Answering Technology (PIQUANT), developed by IBM to participate in TREC
   - It had been under development at IBM for more than 6 years by a team of 4 full-time researchers
   - It was among the top three to five systems in many TREC editions
   - PIQUANT achieved an accuracy of around 0.33 on the TREC data
   - PIQUANT used a standard architecture for QA

18. How was this achieved? (II)
   - A lot of extra work was put into the system: a core team of 20 researchers working for almost 4 years
   - The PIQUANT system was enriched with a large number of modules for language processing
   - The processing was heavily parallelised
   - Many components were developed to deal with specific problems (lots of experts)
   - Watson tries to combine deep and shallow knowledge
   - It had access to large data sets and very good hardware

19. Overview of Watson's structure
   [architecture diagram]

20. Hardware used
   "Watson is a workload optimized system designed for complex analytics, made possible by integrating massively parallel POWER7 processors and the IBM DeepQA software to answer Jeopardy! questions in under three seconds. Watson is made up of a cluster of ninety IBM Power 750 servers (plus additional I/O, network and cluster controller nodes in 10 racks) with a total of 2880 POWER7 processor cores and 16 terabytes of RAM. Each Power 750 server uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. The POWER7 processor's massively parallel processing capability is an ideal match for Watson's IBM DeepQA software, which is embarrassingly parallel (that is, a workload that is easily split up into multiple parallel tasks).
   According to John Rennie, Watson can process 500 gigabytes, the equivalent of a million books, per second. IBM's master inventor and senior consultant Tony Pearson estimated Watson's hardware cost at about $3 million, and with 80 teraFLOPs it would be placed 94th on the Top 500 Supercomputers list."
   From: http://en.wikipedia.org/wiki/Watson_(computer)

21. Speed of answer
   - In Jeopardy! an answer needs to be provided in 3-5 seconds
   - In initial experiments, running Watson on a single processor produced an answer in about 2 hours
   - The system was implemented using Apache UIMA Asynchronous Scaleout
   - Massively parallel architecture
   - The indexes used to answer the questions had to be pre-processed using Hadoop
   A toy illustration of this kind of fan-out appears below.

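A toy illustration of the embarrassingly parallel workload described above: candidate answers scored in parallel across local processes. This is a generic Python sketch with a dummy scoring function, not Watson's actual UIMA setup, which fanned the work out over thousands of cores.

```python
# Toy illustration of embarrassingly parallel candidate scoring.
from concurrent.futures import ProcessPoolExecutor

def score_candidate(candidate: str) -> tuple:
    # Stand-in for expensive evidence gathering and scoring.
    return (sum(ord(c) for c in candidate) % 100 / 100.0, candidate)

def best_answer(candidates):
    with ProcessPoolExecutor() as pool:
        return max(pool.map(score_candidate, candidates))

if __name__ == "__main__":
    print(best_answer(["George Washington", "Abraham Lincoln", "Ben Franklin"]))
```
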
22. Watson was not only NLP
   Betting strategy: http://www.youtube.com/watch?v=vA9aqAd2iso

23. To sum up, Watson is:
   - an amazing engineering project
   - a massive investment
   - research in many domains of NLP
   - a big PR stunt
   - a way to improve IBM's position in text analytics
   But it is not really a technology ready to be deployed.
   And was it real progress in open-domain QA?

24. So is open-domain QA a solved problem?
   - Can we really solve open-domain QA?
   - Do we really need open-domain QA?
   - Do we care?

25. QA from the user perspective
   - Real user questions
     - are rarely open domain
     - can rarely be formulated in one go
     - do not always have answers contained in only one source
   - Companies
     - have very well defined needs
     - have access to previously asked questions
     - need very high accuracy
     - most of them cannot afford to invest millions of dollars

26. The QALL-ME project
   - Question Answering Learning technologies in a multiLingual and Multimodal Environment (QALL-ME) – an FP6-funded project on multilingual and multimodal question answering
   - Consortium:
     - FBK, Trento, Italy – coordinator
     - University of Wolverhampton, UK
     - DFKI, Germany
     - University of Alicante, Spain
     - Comdata, Italy
     - Ubiest, Italy
     - Waycom, Italy
   - http://qallme.fbk.eu
   - Has established an infrastructure for multilingual and multimodal question answering

27. The QALL-ME project
   - Demonstrators in the domain of tourism: can answer questions about cinema/movies and accommodation, e.g.:
     - What movies can I see in Wolverhampton this week?
     - How can I get to Novotel Hotel, Wolverhampton?
   - The questions can be asked in any of the four languages of the consortium
   - A small-scale demonstrator was built for Romanian

28. QALL-ME framework
   [framework diagram]

29. The QALL-ME ontology
   - All the reasoning and processing is done using a domain ontology
   - The ontology also provides the means of achieving cross-lingual QA
   - It determines the way data is stored in the database
   - Ontologies need to be developed for each domain
   A toy illustration of ontology-driven answering follows.

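A sketch of ontology-driven answering: a question is mapped, via its EAT and recognized entities, to a structured query over a tiny mock tourism ontology. All class and property names here are hypothetical, not the actual QALL-ME ontology.

```python
# Sketch: structured query over a tiny mock tourism "ontology"
# ((subject, property, value) triples). Names are invented.
TRIPLES = [
    ("Novotel Wolverhampton", "isA", "Hotel"),
    ("Novotel Wolverhampton", "locatedIn", "Wolverhampton"),
    ("Cineworld", "isA", "Cinema"),
    ("Cineworld", "locatedIn", "Wolverhampton"),
]

def query(prop: str, value: str, wanted_class: str):
    """Return instances of wanted_class that have the given property value."""
    instances = {s for s, p, v in TRIPLES if p == prop and v == value}
    return [s for s in instances if (s, "isA", wanted_class) in TRIPLES]

# "Which cinemas are in Wolverhampton?" -> class Cinema, locatedIn Wolverhampton
print(query("locatedIn", "Wolverhampton", "Cinema"))  # ['Cineworld']
```
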
30. Part of the tourism ontology
   [excerpt of the tourism ontology]

31. Evaluation of the QALL-ME prototype
   - For the cinema domain the accuracy ranged between 60% and 85%, depending on the language
   - The system was tested on real questions posed by users, completely independent from the ones used to develop the system
   - The errors were mainly caused by wrongly identified named entities, missing patterns and mistakes of the entailment engine
   - In a commercial environment such a system can be revised every day in order to obtain much higher performance

32. Closed-domain QA for commercial companies
   - Closed-domain QA has a certain appeal for companies
   - These companies normally have large databases of questions and answers from customers
   - The domain can be very clearly defined
   - In some cases the systems needed are actually canned QA systems

33. Interactive QA
   - It is easy to ask:
     - Where can I eat paella tonight?
   - but what about:
     - What mobile phones are smart phones with a camera, have GPS, have a touch screen, are produced by HTC or Nokia, but definitely not Samsung, have 1 GB of internal memory ...
   - Interactivity is needed to deal with ambiguities, wrong answers, user feedback, etc.

34. Interactive QA
   - "a process where the user is a continual part of the information loop"
   - At the intersection of dialogue systems and question answering
   - Initiates a dialogue with the user in cases where there are too many or too few answers, or there is some ambiguity in the request
   - Can suggest follow-up questions
   - Provides a more natural way of locating the needed information
   - Companies are interested in developing IQA systems as a way of providing customer services

35. Hypothetical IQA session
   [1] SYS: Hi!
   [2] USER: Can you find me a smart phone with a camera?
   [3] SYS: There are [NUMBER][big number] of smart phones featuring a camera, would you like to search for some [BRAND]?
   [4] USER: No, which have GPS?
   [5] SYS: [STILL BIG NUMBER] have GPS, are you interested in ones having a TOUCH SCREEN? [suggesting new constraints]
   [6] USER: Yes, it would be great.
   [7] SYS: [NUMBER] of Nokia phones, [NUMBER] HTC phones, [NUMBER] Samsung phones, [NUMBER] of other brands.
   [8] USER: Ok, what about HTC phones?
   [9] SYS: [NUMBER] of them have [feature], [NUMBER] of them have [feature].
   [10] USER: What are the ones with [feature]?
   [11] SYS: Here you are: [RESULTS].
   The constraint-narrowing loop behind such a session is sketched below.

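A minimal sketch of the constraint-narrowing loop that drives the session above. The phone records and attribute names are invented for illustration; a real system would query the product database and generate the system turns from the remaining result set.

```python
# Sketch: accumulate user constraints and report what is left,
# so the system can suggest the next constraint or list the results.
PHONES = [
    {"brand": "Nokia", "camera": True, "gps": True, "touchscreen": False},
    {"brand": "HTC", "camera": True, "gps": True, "touchscreen": True},
    {"brand": "Samsung", "camera": True, "gps": False, "touchscreen": True},
]

def narrow(phones, **constraints):
    """Apply the accumulated constraints; return matches and their brands."""
    matches = [p for p in phones
               if all(p.get(k) == v for k, v in constraints.items())]
    brands = sorted({p["brand"] for p in matches})
    return matches, brands

matches, brands = narrow(PHONES, camera=True)            # after turn [2]
matches, brands = narrow(PHONES, camera=True, gps=True)  # after turn [4]
print(len(matches), brands)  # 2 ['HTC', 'Nokia'] -> suggest touchscreen next
```
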
36. Answers from more than one source
   - Many complex questions require composing the answer from several sources:
     - List questions: List all the cantons in Switzerland which border Germany
     - Sentiment questions: What features do people like in Vista?
   - This is part of the new trend in "deep QA"
   - Even though users probably really need such answers, the technology is still at the stage of research projects

37. To sum up ...
   - Some researchers believe that search is dead and "deep QA" is the future
   - This was largely fuelled by IBM Watson's win at Jeopardy!
   - Watson is a fantastic QA system, but it does not solve the problem of open-domain QA
   - For real applications we still want to focus on very well defined domains
   - We still want to have the user in the loop to facilitate asking questions
   - Watson may have revived the interest in QA

38. Watson is not always right
   ... but it kind of knows this: http://www.youtube.com/watch?v=7h4baBEi0iA

39. Thank you for your attention
