IBM’s Watson: Is the World’s Trivia Champion the Future of Search? (Josh Dreller)
  • In order to know we are making progress on scientific problems like open-domain QA, well-defined challenges help demonstrate we can solve concrete and difficult tasks. As you might know, Jeopardy! is a long-standing, well-regarded and highly challenging television quiz show in the US that demands human contestants quickly understand and answer richly expressed natural language questions over a staggering array of topics. The Jeopardy! challenge uniquely provides a palpable, compelling and notable way to drive the technology of Question Answering along key dimensions. If you are familiar with the quiz show, it asks an incredibly broad range of questions over a huge variety of topics. In a single round there is a grid of 6 categories, and for each category 5 rows with increasing $ values. Once a cell is chosen by 1 of three players, a question, or what is often called a clue, is revealed. Here you see some example questions. <read some of the questions> Jeopardy! uses complex and often subtle language to describe what is being asked. To win you have to be extraordinarily precise: you must deliver the exact answer, no more and no less. It is not good enough for it to be somewhere in the top 2, 10 or 20 documents; you must know it exactly and get it in first place, otherwise no credit; in fact, you lose points. You must demonstrate accurate confidences. That is, you must know what you know: if you “buzz in” and then get it wrong, you lose the $ value of the question. And you have to do all this very quickly: deeply analyze huge volumes of content, consider many possible answers, compute your confidence and buzz in, all in just seconds. As we shall see, competing with human champions at this game represents a Grand Challenge in automatic open-domain Question Answering. <STOP><NEXT SLIDE>
  • Computer programs are natively explicit and exacting in their calculations over numbers and symbols. But natural language, the words and phrases we humans use to communicate with one another, is implicit: the exact meaning is not completely and exactly indicated, but instead is highly dependent on the context, what has been said before, the topic, how it is being discussed (factually, figuratively, fictionally, etc.). Moreover, natural language is often imprecise; it does not have to treat a subject with numerical precision. Humans naturally interact and operate all the time with different degrees of uncertainty and fuzzy associations between words and concepts. We use huge amounts of background knowledge to reconcile and interpret what we read. Consider these examples. It is one thing to build a database table to exactly answer the question “Where was someone born?” The computer looks up the name in one column and is programmed to know that the other column contains the birthplace. STRUCTURED information, like this database table, is designed for computers to make simple comparisons and to be exactly as accurate as the data entered into the database. Natural language is created and used by humans, for humans. A reason we call natural language “unstructured” is that it lacks the exact structure and meaning that computer programs typically use to answer questions. Understanding what is being represented is a whole other challenge for computer programs. Consider this sentence: <read> It implies that Albert Einstein was born in Ulm, but there is a whole lot the computer has to do to figure that out with any degree of certainty: it has to understand sentence structure, parts of speech, the possible meanings of words and phrases, and how they relate to the words and phrases in the question. What do a remembrance, a watercolor and an Otto have to do with where someone was born? Consider another question in the Jeopardy! style: X ran this? And this potentially answer-bearing sentence. <read the sentence> Does this sentence answer the question for Jack Welch? What does “ran” have to do with leadership or painting? How would a computer confidently infer from this sentence that Jack Welch ran GE? It might be easier to deduce that he was at least a painter there.
  • Human performance is one of the things that makes the Jeopardy! challenge so compelling. The best humans are very, very good at this task. In this chart, each dot corresponds to an actual historical Jeopardy! game and represents the performance of the winner of that game. We refer to this cluster of dots as the “Winners Cloud”. For each dot, the X-axis, along the bottom of the graph, represents the percentage of questions in a game that the winning player got a chance to ANSWER: these were the questions he or she was confident enough and fast enough to ring in, or buzz in, for FIRST. The Y-axis, going up along the left of the graph, represents the winning player’s PRECISION, that is, the percentage of the questions answered that the player got RIGHT. Remember, if a player gets a question wrong then they lose the $ value of the clue and their competitors still get a chance to answer, or rebound. But what we humans tend to do really, really well is confidently know what we know; computing an accurate confidence turns out to be a key ability for winning at Jeopardy! Looking at the center of the green cloud, what you see is that, on average, WINNERS are confident enough and fast enough to answer nearly 50% of the questions in a game and achieve somewhere between 85% and 95% precision on those questions. That is, they get 85-95% of the ones they answer RIGHT. The red dots represent Ken Jennings’s performance. Ken won 74 consecutive games against qualified players. He was confident and fast enough to acquire 60% and even up to 80% of a game’s questions from his competitors and still achieve 85% to 95% precision on average. Good Jeopardy! players are remarkable in their breadth, precision, confidence and speed.
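The two metrics on the chart's axes reduce to simple ratios; the sketch below computes them for one hypothetical game (the numbers are illustrative, not taken from any actual match):

```python
def game_metrics(questions_in_game, answered, correct):
    """Return (% answered, precision) as defined for the Winners Cloud.

    % answered: share of the game's questions the player buzzed in for first.
    precision:  share of those attempted questions answered correctly.
    """
    pct_answered = answered / questions_in_game
    precision = correct / answered
    return pct_answered, precision

# A typical winner: attempts about half the board, ~90% right on attempts.
pct, prec = game_metrics(questions_in_game=60, answered=30, correct=27)
print(f"answered {pct:.0%} of the game at {prec:.0%} precision")
```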

    1. IBM’s Watson: Is the World’s Trivia Champion the Future of Search?
       Josh Dreller, VP Media Technology & Analytics, Fuor Digital (@fuordigital)
    2. (image slide)
    3. (image slide)
    4. Two Types of Innovation
       • Incremental – improving current projects and advancing them in a linear fashion
       • Grand Challenges – pushing the limits of science
         – Must be difficult (has to be a challenge)
         – Must be inspiring
         – Irresistible vision: “just has to be done”
    5. Previous IBM Grand Challenges
    6. Open Question Answering
       • The way normal humans communicate
       • Natural language – very ambiguous, but at the heart of human intelligence
       • “Last night I shot an elephant in my pajamas. How it got into my pajamas I’ll never know.”
    7. The Ultimate Advancement: Computers that can communicate with humans in Natural Language
    8. What Can Search Learn From Watson?
       • The only way to push forward is to take huge leaps and look for self-imposed challenges, even if we can’t prove out the business case right now
       • What kind of Grand Challenges could Search create?
         – A non-spammable search engine?
         – No need for Search Engine Optimization?
         – ??
    9. (image slide)
    10. The Jeopardy! Challenge: A compelling and notable way to drive and measure the technology of automatic Question Answering along 5 Key Dimensions
        • Broad/Open Domain – $200: “If you're standing, it's the direction you should look to check out the wainscoting.”
        • Complex Language – $1000: “Of the 4 countries in the world that the U.S. does not have diplomatic relations with, the one that’s farthest north”
        • High Precision – $800: “In cell division, mitosis splits the nucleus & cytokinesis splits this liquid cushioning the nucleus”
        • Accurate Confidence
        • High Speed
    11. Jeopardy! Reaction
        • Not too inspired at first
        • Weren’t interested in a circus sideshow
        • In 2009, IBM set up a mock studio at its New York research facility
        • Sparring matches with ex-Jeopardy! winners
        • Eventually saw the potential and thought it was something special
    12. A Very Simple Question for a Computer
        • Is (( 12,546,798 * P ) ^ 2) / 34,567.46 greater than or less than 1?
        • For a human, a 50/50 shot
        • = .00885
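The slide leaves P (and hence its printed answer) unspecified, so the sketch below treats P as a hypothetical input; the point is only that the greater-or-less-than comparison is trivial for a machine:

```python
def evaluate(p):
    # (( 12,546,798 * P ) ^ 2) / 34,567.46, the expression from the slide
    return ((12_546_798 * p) ** 2) / 34_567.46

# A computer answers "greater or less than 1?" instantly, for any P.
for p in (1e-6, 1.0):
    print(p, "->", "greater than 1" if evaluate(p) > 1 else "less than 1")
```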
    13. Real Language is Real Hard
        • Chess
          – A finite, mathematically well-defined search space
          – Limited number of moves and states
          – Grounded in explicit, unambiguous mathematical rules
        • Human Language
          – Ambiguous, contextual and implicit
          – Grounded only in human cognition
          – Seemingly infinite number of ways to express the same meaning
    14. The Opposite of Current Computer Language
        • Questions not designed for a computer to answer
        • Slang, crafty questions, shorthand, rhyme, regionalisms, anagrams
        • Complex language!
    15. Structured vs. Unstructured Data
        • Where was Einstein born?
        • Structured: a table that maps a name directly to a birthplace
        • Unstructured: “One day, from among his city views of Ulm, Otto chose a watercolor to send to Albert Einstein as a remembrance of Einstein´s birthplace.”
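The contrast can be made concrete with a toy structured lookup; the table and rows below are hypothetical, built only to mirror the slide's example:

```python
import sqlite3

# Structured: an exact table a program can query directly.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE births (name TEXT, birthplace TEXT)")
db.execute("INSERT INTO births VALUES ('Albert Einstein', 'Ulm')")

(place,) = db.execute(
    "SELECT birthplace FROM births WHERE name = ?", ("Albert Einstein",)
).fetchone()
print(place)  # the lookup is trivial once the data is structured

# Unstructured: the same fact buried in prose, which a program must parse,
# disambiguate and interpret before it can answer anything.
sentence = ("One day, from among his city views of Ulm, Otto chose a "
            "watercolor to send to Albert Einstein as a remembrance of "
            "Einstein's birthplace.")
```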
    16. Common Sense Knowledge Base
        • An ontology of classes and individuals
        • Parts and materials of objects
        • Properties of objects (such as color and size)
        • Functions and uses of objects
        • Locations of objects and layouts of locations
        • Locations of actions and events
        • Durations of actions and events
        • Preconditions of actions and events
        • Effects (post-conditions) of actions and events
        • Subjects and objects of actions
        • Behaviors of devices
        • Stereotypical situations or scripts
        • Human goals and needs
        • Emotions
        • Plans and strategies
        • Story themes
        • Contexts
        “Can a can CanCan?”
    17. What Can Search Learn From Watson?
        • We need to focus on what computers aren’t good at, not what they are good at
        • Keywords and links are not savvy enough; natural language is key to a next-generation search engine
        • Most of human knowledge is kept in unstructured data sources or based on common-sense context
    18. (image slide)
    19. The DeepQA Project
        • Led by Dr. David Ferrucci
        • 25-30 full-time researchers from many disciplines
        • 2007-2011
        • Millions of dollars
        • Post-Jeopardy! implications
    20. Speed Results
        • Deployed Watson on 2,880 IBM POWER 750 computer cores
        • Went from 2 hours per question on a single CPU to an average of just 3 seconds – fast enough to compete with the best
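The quoted speedup is consistent with near-linear scaling, as a back-of-envelope check shows (assuming, purely for illustration, that the work parallelizes perfectly):

```python
single_core_seconds = 2 * 60 * 60   # ~2 hours per question on one CPU
cores = 2880                        # IBM POWER 750 cores used

ideal_seconds = single_core_seconds / cores
print(f"ideal linear speedup: {ideal_seconds:.1f} s per question")  # 2.5 s
```

That 2.5 s ideal sits just under the reported ~3 s average, which is what imperfect-but-good scaling would look like.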
    21. Example Question
        “IN 1698, THIS COMET DISCOVERER TOOK A SHIP CALLED THE PARAMOUR PINK ON THE FIRST PURELY SCIENTIFIC SEA VOYAGE”
        • Question Analysis – keywords: 1698, comet, paramour, pink, …; AnswerType(comet discoverer); Date(1698); Took(discoverer, ship); Called(ship, Paramour Pink); …
        • Primary Search over related content (structured & unstructured)
        • Candidate Answer Generation – e.g. Isaac Newton, Wilhelm Tempel, HMS Paramour, Christiaan Huygens, Halley’s Comet, Edmond Halley, Pink Panther, Peter Sellers, …
        • Evidence Retrieval & Evidence Scoring – temporal, taxonomic, lexical and spatial evidence yields a feature vector per candidate
        • Merging & Ranking – final confidences, e.g. Edmond Halley (0.85), Christiaan Huygens (0.20), Peter Sellers (0.05)
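The stage layout above can be sketched as a toy pipeline. Every function body here is a stand-in with made-up candidates and scores; it shows only how analysis, search, candidate generation, evidence scoring and merging chain together, not how DeepQA actually implements any stage:

```python
def analyze(question):
    # Question Analysis: extract keywords and the expected answer type.
    return {"keywords": ["1698", "comet", "paramour", "pink"],
            "answer_type": "comet discoverer"}

def generate_candidates(analysis):
    # Primary Search + Candidate Answer Generation over related content.
    return ["Isaac Newton", "Edmond Halley", "Christiaan Huygens",
            "Halley's Comet", "Peter Sellers"]

def score_evidence(candidate, analysis):
    # Evidence Retrieval & Scoring: DeepQA builds a feature vector per
    # candidate (temporal, taxonomic, lexical, spatial, ...); here, a
    # hard-coded toy score stands in for all of that.
    toy_scores = {"Edmond Halley": 0.85, "Christiaan Huygens": 0.20,
                  "Peter Sellers": 0.05}
    return toy_scores.get(candidate, 0.01)

def answer(question):
    # Merging & Ranking: keep the most confident candidate.
    analysis = analyze(question)
    ranked = sorted(generate_candidates(analysis),
                    key=lambda c: score_evidence(c, analysis), reverse=True)
    return ranked[0]

best = answer("In 1698, this comet discoverer took a ship called the "
              "Paramour Pink on the first purely scientific sea voyage")
print(best)
```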
    22. Confidence is Key
        • Watson only rings in if it can reach a statistically significant confidence in time
        • Some questions take longer than others
        • On some questions Watson is less confident than on others
        • Watson manages risk in betting based on confidence
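The ring-in rule reduces to a threshold check; the threshold and time limit below are hypothetical placeholders, since the slide gives no numbers:

```python
def should_buzz(confidence, seconds_elapsed,
                threshold=0.5, time_limit=3.0):
    """Ring in only if a high-enough confidence is reached in time."""
    return confidence >= threshold and seconds_elapsed <= time_limit

print(should_buzz(0.85, 2.1))   # confident and fast: buzz
print(should_buzz(0.30, 1.0))   # fast but unsure: stay silent
print(should_buzz(0.90, 4.5))   # confident but too slow: stay silent
```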
    23. Embarrassingly Parallel Computing
        • Definition: “Little or no effort is required to separate the problem into a number of parallel tasks”
        • Watson works on many algorithms at once, then comes back together with confidence scores
        • Different from distributed-computing problems (such as Google’s MapReduce) that require communication between tasks, especially communication of intermediate results
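Because each scorer needs no communication with the others, the work fans out with nothing more than an executor; the scoring function here is a hypothetical stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def score(candidate):
    # Stand-in for one independent evidence-scoring algorithm.
    return len(candidate) / 100.0

candidates = ["Isaac Newton", "Edmond Halley", "Christiaan Huygens"]

# Embarrassingly parallel: each task runs alone; results merge only at the end.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(score, candidates))

print(dict(zip(candidates, scores)))
```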
    24. What Can Search Learn From Watson?
        • We can’t be bound by the constraints of current technology
        • A two-fold process of first coming up with answers, then vetting them with more evidence
        • Ideas like parallel processing will allow us to jump ahead
    25. (image slide)
    26. Ken Jennings & Brad Rutter
    27. The Best Human Performance: Our Analysis Reveals the Winner’s Cloud
        • Each dot represents an actual historical human Jeopardy! game
        • The chart compares winning human performance and grand-champion human performance with a 2007 QA computer system, ranging from less confident to more confident
        • Top human players are remarkably good. Computers?
    28. DeepQA: Incremental Progress in Precision and Confidence, 6/2007-11/2010
        • Successive system snapshots (Baseline, 12/2007, 5/2008, 8/2008, 12/2008, 5/2009, 10/2009, 4/2010, 11/2010) climb steadily in precision
        • Now playing in the Winners Cloud
    29. Confidence Bar
    30. What Can Search Learn From Watson?
        • A confidence bar would be a great addition to SERPs
        • We must benchmark what is “good” and then aim higher
        • These things take time (and money)
    31. (image slide)
    32. TJ Watson Research Center, Yorktown, NY. Two games, aired February 14-16, 2011
    33. The End – Humans Win
    34. (image slide)
    35. Financial Industry
        • Generates large amounts of data, growing 70% per year
        • Not just numbers, but all info that would influence the business landscape (news, articles, blogs, etc.)
        • The recent financial crisis shows the failures that come from a lack of understanding of interdependencies
    36. DeepQA in Continuous Evidence-Based Diagnostic Analysis
        • Considers and synthesizes a broad range of evidence, improving quality and reducing cost
        • Inputs: symptoms, family history, patient history, medications, tests/findings and notes/hypotheses, drawn from huge volumes of texts, journals, references, databases, etc.
        • Diagnosis models rank candidate conditions (e.g. renal failure, UTI, diabetes, influenza, hypokalemia, esophagitis) by confidence, and the most confident diagnosis is revised as evidence accumulates
    37. What Can Search Learn From Watson?
        • Even the most daunting task can be overcome
        • It’s not company versus company, it’s stretching human knowledge
        • How can search engines help other industries?
    38. Sources
        • “What is Watson?” presentation by Adam Lally, IBM Research
        • Jeopardy! website with videos: http://www.jeopardy.com/minisites/watson/
        • NYTimes article: “What Is I.B.M.’s Watson?” http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?_r=2&ref=opinion
        • Wired magazine: “IBM’s Watson Supercomputer Wins Practice Jeopardy Round” http://www.wired.com/epicenter/2011/01/ibm-watson-jeopardy/#
        • More technical: AI Magazine, “Building Watson: An Overview of the DeepQA Project” http://www.stanford.edu/class/cs124/AIMagzine-DeepQA.pdf
    40. Thank You
