
Natural Language Search with Knowledge Graphs

To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within the search. Knowledge graphs serve as useful instantiations of ontologies which can help represent this kind of knowledge within a domain.

In this talk, we'll walk through techniques to build knowledge graphs automatically from your own domain-specific content, how you can update and edit the nodes and relationships, and how you can seamlessly integrate them into your search solution for enhanced query interpretation and semantic search. We'll have some fun with some of the more search-centric use cases of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "bbq near haystack" into

{
  "filter": ["doc_type:restaurant"],
  "query": {
    "boost": {
      "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
      "query": "bbq OR barbeque OR barbecue"
    }
  }
}

We'll also specifically cover use of the Semantic Knowledge Graph, a particularly interesting knowledge graph implementation available within Apache Solr that can be auto-generated from your own domain-specific content and which provides highly nuanced, contextual interpretation of all of the terms, phrases, and entities within your domain. We'll see a live demo with real-world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding within your search engine.



  1. 2019.04.25 Natural Language Search with Knowledge Graphs. Trey Grainger, Chief Algorithms Officer, Lucidworks
  2. About Me: Trey Grainger, Chief Algorithms Officer • Previously: SVP of Engineering @ Lucidworks; Director of Engineering @ CareerBuilder • Georgia Tech – MBA, Management of Technology • Furman University – BA, Computer Science, Business, & Philosophy • Stanford University – Information Retrieval & Web Search. Other fun projects: • Co-author of Solr in Action, plus numerous research publications • Advisor to Presearch, the decentralized search engine • Lucene / Solr contributor
  3. Agenda • About Lucidworks • What is a Knowledge Graph (and related Terminology)? • What is Natural Language Search? • Philosophy of Language (enough to get the approach…) • Solr’s Semantic Knowledge Graph • Knowledge Graph Goals for Natural Language Search • Semantic Query Parsing • Solr Text Tagger • Solr Statistical Phrase Identifier • Full Knowledge Graph Capabilities with Solr • Automated Graph Generation • Demos!
  4. Context for this Talk. Relevance engineering sophistication increases from Basic Keyword Search (inverted index, tf-idf, bm25, multilingual text analysis, query formulation, etc.) to Taxonomies / Entity Extraction (entity recognition, basic ontologies, synonyms, etc.) to Query Intent (query classification, semantic query parsing, knowledge graphs, concept expansion, rules, clustering, classification) to Relevancy Tuning (signals, AB testing/genetic algorithms, Learning to Rank, Neural Networks) to Self-learning.
  5. Who are we? The company behind the Search & AI Conference. 230 customers across the Fortune 1000. 400+ employees. Offices in San Francisco, CA (HQ); Raleigh-Durham, NC; Cambridge, UK; Bangalore, India; Hong Kong. We employ about 40% of the active committers on the Solr project and contribute over 70% of Solr's open source codebase. We develop & support Apache Solr.
  6. Industry’s most powerful Intelligent Search & Discovery Platform.
  7. Let the most respected analysts in the world speak on our behalf. [Gartner Magic Quadrant chart plotting vendors by Completeness of Vision and Ability to Execute across the Challengers, Leaders, Niche Players, and Visionaries quadrants; vendors shown include Microsoft, IBM, Coveo, Sinequa, Attivio, Dassault Systèmes, Mindbreeze, Expert System, Smartlogic, IHS Markit, Funnelback, and Micro Focus.] Source: June 2018 Gartner Magic Quadrant report on Insight Engines. © Gartner, Inc.
  8. Call for Speakers: Open until May 8th, 2019!
  9. What is a Knowledge Graph? (vs. Ontology vs. Taxonomy vs. Synonyms, etc.)
  10. Simplistic Definitions. Ontology: Defines relationships between types of things [ animal eats food; human is animal ]. Knowledge Graph: Instantiation of an Ontology (contains the things that are related) [ john is human; john eats food ]. Taxonomy: Classifies things into Categories [ john is Human; Human is Mammal; Mammal is Animal ]. Synonyms List: Provides substitute words that can be used to represent the same or very similar things [ human => homo sapiens, mankind; food => sustenance, meal ]. Yes, there is overlap…
  11. For Solr, I strongly disagree… back to that later with demos
  12. What is Natural Language Search?
  13. What kind of Knowledge Graph can help us with the kinds of problems we encounter in Search use cases?
  14. Challenges of building a traditional knowledge graph. Because current knowledge bases / ontology learning systems typically require explicitly modeling nodes and edges into a graph ahead of time, this unfortunately presents several limitations to the use of such a knowledge graph: • Entities not modeled explicitly as nodes have no known relationships to any other entities. • Edges exist between nodes, but not between arbitrary combinations of nodes, and therefore such a graph is not ideal for representing nuanced meanings of an entity when appearing within different contexts, as is common within natural language. • Substantial meaning is encoded in the linguistic representation of the domain that is lost when the underlying textual representation is not preserved: phrases, interaction of concepts through actions (i.e. verbs), positional ordering of entities and the phrases containing those entities, variations in spelling and other representations of entities, the use of adjectives to modify entities to represent more complex concepts, and aggregate frequencies of occurrence for different representations of entities relative to other representations. • It can be an arduous process to create robust ontologies, map a domain into a graph representing those ontologies, and ensure the generated graph is compact, accurate, comprehensive, and kept up to date. Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  15. most often used in reference to “free text”
  16. My Three Philosophical Assertions 1) Unstructured data is actually “hyper-structured” data. It is a graph that contains much more structure than typical “structured data.” 2) That graph is very rich, but is a compression of meaning into a lossy format (text). Much of data science is essentially the decompression from this lossy format into a reconstituted form. 3) Most Important: Every instance of a word or phrase you ever encounter has a unique meaning.
  17. Assertion 1: Unstructured data is actually “hyper-structured” data. It is a graph that contains much more structure than typical “structured data.”
  18. Structured Data
      Employees Table:
        id    | name          | company | start_date
        lw100 | Trey Grainger | 1234    | 2016-02-01
        dis2  | Mickey Mouse  | 9123    | 1928-11-28
        tsla1 | Elon Musk     | 5678    | 2003-07-01
      Companies Table:
        id   | name       | start_date
        1234 | Lucidworks | 2016-02-01
        5678 | Tesla      | 1928-11-28
        9123 | Disney     | 2003-07-01
      (Discrete values, continuous values, and foreign keys)
  19. Unstructured Data. Trey Grainger works at Lucidworks. He is speaking at Haystack 2019. #HaystackConf (Haystack) is being held in Charlottesville April 22-25, 2019. Trey got his masters from Georgia Tech.
  20. Trey’s Voicemail (Unstructured Data): “Trey Grainger works for Lucidworks. He is speaking at the Haystack 2019. #HaystackConf (Haystack) is being held in Charlottesville April 22-25, 2019. Trey got his masters degree from Georgia Tech.”
  21. Foreign Key? (same voicemail text)
  22. Fuzzy Foreign Key? (Entity Resolution) (same voicemail text)
  23. Fuzzier Foreign Key? (metadata, latent features) (same voicemail text)
  24. Fuzzier Foreign Key? (metadata, latent features) Not so fast! (same voicemail text)
  25. Giant Graph of Relationships... (same voicemail text)
  26. Assertion 1: Unstructured data is actually “hyper-structured” data. It is a graph that contains much more structure than typical “structured data.”
  27. Assertion 2: That graph is very rich, but is a compression of meaning into a lossy format (text). Much of data science is essentially the decompression from this lossy format into a reconstituted form.
  28. Semantic Data Encoded into Free Text Content
  29. How do we easily harness this “semantic graph” of relationships within unstructured information?
  30. Search Engines are really good at querying across character sequences, term sequences, and documents. Example Queries: c?o → CTO, CEO, CFO, … | "VP Engineering"~2 → "VP of Engineering", "VP Engineering", "Engineering VP", "VP of Infrastructure Engineering" | (Microsoft OR MS) AND Word → "MS Word", "Microsoft Word"
  31. Matching queries to documents: /solr/collection/select/?q=apache solr
      Term   → Documents
      apache → doc1, doc3, doc4, doc5
      hadoop → doc2, doc4, doc6
      solr   → doc1, doc3, doc4, doc7, doc8
      Matches: doc1, doc3, doc4 (both terms); doc5 (apache only); doc7, doc8 (solr only)
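
For intuition, here is a minimal Python sketch of the inverted-index lookup above; it is illustrative only (nothing like Lucene's real on-disk structures), with document contents reconstructed from the postings lists on the slide:

    # Toy corpus matching the slide's postings lists.
    docs = {
        "doc1": "apache solr", "doc2": "hadoop", "doc3": "apache solr",
        "doc4": "apache hadoop solr", "doc5": "apache", "doc6": "hadoop",
        "doc7": "solr", "doc8": "solr",
    }

    index = {}  # term -> set of doc ids (the postings list)
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    def search_and(*terms):
        """Return documents containing ALL of the terms (boolean AND)."""
        postings = [index.get(t, set()) for t in terms]
        return set.intersection(*postings) if postings else set()

    print(sorted(search_and("apache", "solr")))  # ['doc1', 'doc3', 'doc4']
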
  32. Documents vs. Forward Index vs. Inverted Index
      Documents:
        id: 1 | job_title: Software Engineer | desc: software engineer at a great company | skills: .Net, C#, java
        id: 2 | job_title: Registered Nurse  | desc: a registered nurse at hospital doing hard work | skills: oncology, phlebotomy
        id: 3 | job_title: Java Developer    | desc: a software engineer or a java engineer doing work | skills: java, scala, hibernate
      Docs-Terms Forward Index (desc field):
        doc 1: a, at, company, engineer, great, software
        doc 2: a, at, doing, hard, hospital, nurse, registered, work
        doc 3: a, doing, engineer, java, or, software, work
      Terms-Docs Inverted Index (desc field, term → doc [positions]):
        a → 1 [4]; 2 [1]; 3 [1, 5]
        at → 1 [3]; 2 [4]
        company → 1 [6]
        doing → 2 [6]; 3 [8]
        engineer → 1 [2]; 3 [3, 7]
        great → 1 [5]
        hard → 2 [7]
        hospital → 2 [5]
        java → 3 [6]
        nurse → 2 [3]
        or → 3 [4]
        registered → 2 [2]
        software → 1 [1]; 3 [2]
        work → 2 [10]; 3 [9]
        (job_title field: "java developer" → 3 [1]; …)
      Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  33. Semantic Knowledge Graph
  34. Semantic Knowledge Graph API: Serves as a “data science toolkit” API that allows dynamically navigating and pivoting through multiple levels of relationships between items in a domain. • Core similarity engine, exposed via API: Any product can leverage the core relationship scoring engine to score any list of entities against any other list. • Full domain support: Keywords, categories, tags, based upon any field on your documents. The graph is built automatically from the content representing your domain. • Intersections, overlaps, & relationship scoring, many levels deep: Users can either provide a list of items to score, or else have the system dynamically discover the most related items (or both).
  35. Graph Traversal: Data Structure View vs. Graph View. Traversal alternates inverted index and forward index lookups: from a term node (e.g. skill: Java) to the documents containing it, then to the other terms in those documents (e.g. skill: Scala, skill: Hibernate, job_title: Java Developer, job_title: Software Engineer, job_title: Data Scientist), yielding edges such as has_related_skill and has_related_job_title. DOI: 10.1109/DSAA.2016.51. Conference: 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA). Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  36. How the Graph Traversal Works (Set-theory View): skill: Java appears in docs 1, 2, 6; skill: Scala and skill: Hibernate appear in docs 3, 4; skill: Oncology appears in doc 5. Overlap between these document sets produces the has_related_skill edges in the graph view. Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  37. Scoring of Node Relationships (Edge Weights): Foreground vs. Background Analysis. Every term is scored against its context. The more commonly the term appears within its foreground context versus its background context, the more relevant it is to the specified foreground context.
      z = (countFG(x) - totalDocsFG * probBG(x)) / sqrt(totalDocsFG * probBG(x) * (1 - probBG(x)))
      We are essentially boosting terms which are more related to some known feature (and ignoring terms which are equally likely to appear in the background corpus). Foreground Query: "Hadoop"
      { "type": "keywords",
        "values": [
          { "value": "hive", "relatedness": 0.9773, "popularity": 369 },
          { "value": "java", "relatedness": 0.9236, "popularity": 15653 },
          { "value": ".net", "relatedness": 0.5294, "popularity": 17683 },
          { "value": "bee", "relatedness": 0.0, "popularity": 0 },
          { "value": "teacher", "relatedness": -0.2380, "popularity": 9923 },
          { "value": "registered nurse", "relatedness": -0.3802, "popularity": 27089 } ] }
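
As a minimal sketch, the z-score above can be computed directly from corpus counts. Note that Solr's relatedness() implementation additionally scales scores into roughly the -1 to 1 range shown in the JSON response; that normalization is omitted here, and the example counts are hypothetical:

    import math

    def relatedness_z(count_fg, total_docs_fg, count_bg, total_docs_bg):
        # The slide's formula: how far the term's foreground count exceeds
        # what its background probability alone would predict.
        prob_bg = count_bg / total_docs_bg
        expected = total_docs_fg * prob_bg
        return (count_fg - expected) / math.sqrt(
            total_docs_fg * prob_bg * (1 - prob_bg))

    # Hypothetical counts: "hive" in 300 of 400 foreground ("Hadoop") docs,
    # but only 369 of 1,000,000 docs in the background corpus.
    print(relatedness_z(300, 400, 369, 1_000_000))  # large positive z => strongly related
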
  38. Multi-level Graph Traversal with Scores: starting from a materialized node (software engineer*), traverse has_related_skill to skill nodes (Java, C#, .NET, Scala, Hibernate, VB.NET, …), then has_related_skill again, then has_related_job_title to job title nodes (Java Developer, .NET Developer, Software Engineer, Data Scientist, …), with an edge weight scored for every hop (e.g. 0.90, 0.88, 0.93, …). Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  39. Knowledge Graph
  40. Knowledge Graph
  41. Related term vector (for query concept expansion) http://localhost:8983/solr/stack-exchange-health/skg
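
Since Solr 7.4, this relatedness scoring is exposed through the JSON Facet API's relatedness() aggregation. A hedged sketch using Python's requests library; the collection name and skills field are illustrative assumptions, not necessarily what the demo used:

    import requests

    resp = requests.post(
        "http://localhost:8983/solr/job-postings/select",
        json={
            "query": "skills:hadoop",  # the result set we want terms related to
            "limit": 0,
            "params": {"fore": "skills:hadoop", "back": "*:*"},
            "facet": {
                "related_skills": {
                    "type": "terms",
                    "field": "skills",
                    "limit": 10,
                    "sort": "r desc",  # rank terms by relatedness, not raw count
                    "facet": {"r": "relatedness($fore,$back)"},
                }
            },
        },
    )
    print(resp.json()["facets"]["related_skills"]["buckets"])
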
  42. Content-based Recommendations (More Like This on Steroids) http://localhost:8983/solr/job-postings/skg
  43. Who’s in Love with Jean Grey?
  44. Assertion 2 (Summary): That graph is very rich, but is a compression of meaning into a lossy format. Much of data science is essentially the decompression from this lossy format into a reconstituted form.
  45. Assertion 3: Every instance of a word or phrase you ever encounter has a unique meaning.
  46. Differentiating related terms. Misspellings: managr => manager. Synonyms: cpa => certified public accountant; rn => registered nurse; r.n. => registered nurse. Ambiguous Terms*: driver => driver (trucking) ~80% likelihood; driver => driver (software) ~20% likelihood. Related Terms: r.n. => nursing, bsn; hadoop => mapreduce, hive, pig. *differentiated based upon user and query context
  47. Thought Exercise: What do you think of when I say the word “driver”? What about “architect”?
  48. Use Case: Query Disambiguation. Example related keywords (representing multiple meanings): driver → truck driver, linux, windows, courier, embedded, cdl, delivery; architect → autocad drafter, designer, enterprise architect, java architect, designer, architectural designer, data architect, oracle, java, architectural drafter, autocad, drafter, cad, engineer. Source: M. Korayem, C. Ortiz, K. AlJadda, T. Grainger. "Query Sense Disambiguation Leveraging Large Scale User Behavioral Data". IEEE Big Data 2015.
  49. (same as previous slide)
  50. A few methodologies: 1) Query Log Mining 2) Semantic Knowledge Graph
  51. Query Log Mining: Discovering ambiguous phrases. 1) Classify users who ran each search in the search logs (i.e. by the job title classifications of the jobs to which they applied). 2) Create a probabilistic graphical model of those classifications mapped to each keyword phrase. 3) Segment the search term => related search terms list by classification, to return a separate related terms list per classification. Source: M. Korayem, C. Ortiz, K. AlJadda, T. Grainger. "Query Sense Disambiguation Leveraging Large Scale User Behavioral Data". IEEE Big Data 2015.
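
An illustrative sketch of steps 1 and 2, assuming hypothetical (query phrase, applied-job classification) pairs mined from the search and application logs; normalizing the counts yields P(classification | phrase):

    from collections import Counter, defaultdict

    # Hypothetical log of (query phrase, classification of the job applied to).
    log = [("driver", "trucking"), ("driver", "trucking"), ("driver", "software"),
           ("architect", "construction"), ("architect", "software")]

    model = defaultdict(Counter)
    for phrase, classification in log:
        model[phrase][classification] += 1

    def p_classification(phrase):
        """P(classification | phrase), estimated from the log counts."""
        counts = model[phrase]
        total = sum(counts.values())
        return {c: n / total for c, n in counts.items()}

    print(p_classification("driver"))  # {'trucking': ~0.67, 'software': ~0.33}
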
  52. Semantic Knowledge Graph: Discovering ambiguous phrases. 1) Exact same concept, but use a document classification field (i.e. category) as the first level of your graph, and the related terms as the second level to which you traverse. 2) Has the benefit that you don’t need query logs to mine, but it will be representative of your data, as opposed to your users’ intent, so the quality depends on how clean and representative your documents are. Additional Benefit: Multi-dimensional disambiguation and dynamic materialization of categories. Effectively a dynamically-materialized probabilistic graphical model.
  53. Disambiguated meanings (represented as term vectors). architect: 1) enterprise architect, java architect, data architect, oracle, java, .net; 2) architectural designer, architectural drafter, autocad, autocad drafter, designer, drafter, cad, engineer. driver: 1) linux, windows, embedded; 2) truck driver, cdl driver, delivery driver, class b driver, cdl, courier. designer: 1) design, print, animation, artist, illustrator, creative, graphic artist, graphic, photoshop, video; 2) graphic, web designer, design, web design, graphic design, graphic designer; 3) design, drafter, cad designer, draftsman, autocad, mechanical designer, proe, structural designer, revit. Source: M. Korayem, C. Ortiz, K. AlJadda, T. Grainger. "Query Sense Disambiguation Leveraging Large Scale User Behavioral Data". IEEE Big Data 2015.
  54. Using the disambiguated meanings. In a situation where a user searches for an ambiguous phrase, what information can we use to pick the correct underlying meaning? 1. Any pre-existing knowledge about the user: • User is a software engineer • User has previously run searches for “c++” and “linux”. 2. Context within the query: User searched for windows AND driver vs. courier OR driver. 3. If all else fails (and there is no context), use the most commonly occurring meaning. driver: 1) linux, windows, embedded; 2) truck driver, cdl driver, delivery driver, class b driver, cdl, courier. Source: M. Korayem, C. Ortiz, K. AlJadda, T. Grainger. "Query Sense Disambiguation Leveraging Large Scale User Behavioral Data". IEEE Big Data 2015.
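
A hypothetical sketch of option 2 (context within the query): score each candidate sense by its term overlap with the rest of the query, and fall back to the first listed (most common) sense when there is no disambiguating context. The sense lists come from the slide; the function itself is purely illustrative:

    SENSES = {
        "driver": [
            {"linux", "windows", "embedded"},         # software sense (listed first)
            {"truck", "cdl", "delivery", "courier"},  # trucking sense
        ],
    }

    def disambiguate(term, query):
        """Pick the sense overlapping most with the other query terms."""
        context = set(query.lower().split()) - {term}
        senses = SENSES.get(term)
        if not senses:
            return None
        # max() keeps the first (most common) sense on ties, i.e. when
        # the query carries no disambiguating context at all.
        return max(senses, key=lambda sense: len(sense & context))

    print(disambiguate("driver", "windows AND driver"))  # software sense
    print(disambiguate("driver", "courier OR driver"))   # trucking sense
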
  55. Thought Exercise: What do you think of when I say the word “Facebook”?
  56. Every term or phrase is a context-dependent cluster of meaning with an ambiguous label
  57. What does “love” mean? http://localhost:8983/solr/thesaurus/skg
  58. What does “love” mean in the context of “hug”? http://localhost:8983/solr/thesaurus/skg
  59. What does “love” mean in the context of “child”? http://localhost:8983/solr/thesaurus/skg
  60. My Three Assertions (Recap) 1) Unstructured data is actually “hyper-structured” data. It is a graph that contains much more structure than typical “structured data.” 2) That graph is very rich, but is a compression of meaning into a lossy format (text). Much of data science is essentially the decompression from this lossy format into a reconstituted form. 3) Most Important: Every instance of a word or phrase you ever encounter has a unique meaning.
  61. So why all the philosophy? Because it’s much more important to intuitively understand the kinds of problems we’re trying to solve in Natural Language Search than to jump head-first into the solution. Because building the wrong thing can often be worse than not doing anything. And once you have an intuitive sense of the problems you need to solve, you can confidently use the tools I’m about to describe to build the right solution for your specific domain.
  62. So what’s the end goal here? User’s Query: machine learning research and development Portland, OR software engineer AND hadoop, java
      Traditional Query Parsing: (machine AND learning AND research AND development AND portland) OR (software AND engineer AND hadoop AND java)
      Semantic Query Parsing: "machine learning" AND "research and development" AND "Portland, OR" AND "software engineer" AND hadoop AND java
      Semantically Expanded Query: ("machine learning"^10 OR "data scientist" OR "data mining" OR "artificial intelligence") AND ("research and development"^10 OR "r&d") AND ("Portland, OR"^10 OR "Portland, Oregon" OR {!geofilt pt=45.512,-122.676 d=50 sfield=geo}) AND ("software engineer"^10 OR "software developer") AND (hadoop^10 OR "big data" OR hbase OR hive) AND (java^10 OR j2ee)
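
A sketch of how that final expansion might be assembled, assuming phrases have already been identified and each carries a relatedness-ranked expansion list (e.g. from the Semantic Knowledge Graph); the boost of 10 matches the slide, and the expansion map is an illustrative subset:

    EXPANSIONS = {  # phrase -> related terms (illustrative subset)
        '"machine learning"': ['"data scientist"', '"data mining"', '"artificial intelligence"'],
        "hadoop": ['"big data"', "hbase", "hive"],
        "java": ["j2ee"],
    }

    def expand_query(parsed_terms, boost=10):
        """Boost each original phrase, OR in its expansions, AND the clauses."""
        clauses = []
        for term in parsed_terms:
            alternatives = EXPANSIONS.get(term, [])
            clauses.append("(" + " OR ".join([f"{term}^{boost}"] + alternatives) + ")")
        return " AND ".join(clauses)

    print(expand_query(['"machine learning"', "hadoop", "java"]))
    # ("machine learning"^10 OR "data scientist" OR ...) AND (hadoop^10 OR ...) AND (java^10 OR j2ee)
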
  63. Semantic Search Components: • Apache Solr • Solr Text Tagger • Semantic Knowledge Graph • Statistical Phrase Identifier • Fusion Semantic Query Pipelines • Fusion AI Synonyms Job • Fusion AI Token & Phrase Spell Correction Job • Fusion AI Head/Tail Analysis Job • Fusion AI Phrase Identification Job • Fusion Query Rules Engine
  64. In the past year, Lucidworks added the following capabilities to Solr: • Solr Text Tagger • Semantic Knowledge Graph • Statistical Phrase Identifier
  65. So I’m going to talk about those here : ) See my Activate 2018 talk on “How to Build a Semantic Search System” for details on extended Lucidworks Fusion capabilities.
  66. Semantic Query Parsing: Identification of phrases in queries using three steps: 1) Check a dictionary of known terms that is continuously built, cleaned, and refined based upon common inputs from interactions with real users of the system. We use the Solr Text Tagger for this at query time.* 2) Also invoke a probabilistic query parser (“statistical phrase identifier”) to dynamically identify unknown phrases using statistics from a corpus of data (language model). 3) Run a final algorithm to choose the best merge when the two approaches disagree. *K. Aljadda, M. Korayem, T. Grainger, C. Russell. "Crowdsourced Query Augmentation through Semantic Discovery of Domain-specific Jargon," in IEEE Big Data 2014.
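
For step 1, a hedged sketch of calling the Solr Text Tagger, which takes raw text in the request body and returns the offsets of dictionary phrases it recognizes. The collection name and the /tag handler path are assumptions about how the TaggerRequestHandler is configured:

    import requests

    resp = requests.post(
        "http://localhost:8983/solr/entities/tag",  # assumed tagger collection/handler
        params={"overlaps": "NO_SUB", "tagsLimit": 100, "fl": "id,name", "wt": "json"},
        data="senior java developer hadoop",
        headers={"Content-Type": "text/plain"},
    )
    print(resp.json()["tags"])  # start/end offsets plus matching dictionary entries
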
  67. Statistical Phrase Identifier. Goal: given a query, predict which combinations of keywords should be combined together as phrases. Example: senior java developer hadoop. Possible Parsings: senior, java, developer, hadoop | "senior java", developer, hadoop | "senior java developer", hadoop | "senior java developer hadoop" | "senior java", "developer hadoop" | senior, "java developer", hadoop | senior, java, "developer hadoop". Source: Trey Grainger, “Searching on Intent: Knowledge Graphs, Personalization, and Contextual Disambiguation”, Bay Area Search Meetup, November 2015.
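
An illustrative brute-force view of the search space: enumerate all 2^(n-1) contiguous segmentations of an n-token query, then score each candidate against a corpus-derived language model and keep the most probable (the scoring model itself is omitted here):

    def segmentations(tokens):
        """Yield every way to split the tokens into contiguous phrases."""
        if not tokens:
            yield []
            return
        for i in range(1, len(tokens) + 1):
            head = " ".join(tokens[:i])
            for rest in segmentations(tokens[i:]):
                yield [head] + rest

    for parse in segmentations("senior java developer hadoop".split()):
        print(parse)  # e.g. ['senior java developer', 'hadoop']
    # A real phrase identifier would score each parse with corpus statistics
    # and return the highest-probability segmentation.
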
  68. Back to this!
  69. …based on this presentation thus far, that’s a fair conclusion to make. All of the examples you’ve seen to this point were from the stand-alone plugin. But when we committed to Solr, the Semantic Knowledge Graph gained full graph capabilities…
  70. More verbose, but way more powerful…
  71. Graph Query Parser • Query-time, cycle-aware graph traversal is able to rank documents based on relationships • Provides controls for depth, filtering of results, and inclusion of root and/or leaves • Limitations: distributed queries only traverse intra-shard docs. Examples: • http://localhost:8983/solr/graph/query?fl=id,score&q={!graph from=in_edge to=out_edge}id:A • http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=in_edge to=out_edge traversalFilter='foo:[* TO 15]'}id:A • http://localhost:8983/solr/my_graph/query?fl=id&q={!graph from=in_edge to=out_edge maxDepth=1}foo:[* TO 10]
  72. Example Query:
  73. Find Location (Graph Query) http://localhost:8983/solr/POI/select
  74. Graph Traversal converted to Facet http://localhost:8983/solr/POI/select
  75. For Remaining keywords, find doc type + related terms http://localhost:8983/solr/POI/select
  76. Disambiguation by Category. Meaning 1: Restaurant => bbq, brisket, ribs, pork, … Meaning 2: Outdoor Equipment => bbq, grill, charcoal, propane, …
  77. Full Knowledge Graph Traversal in Single Request!
  78. Tricks for Automated Graph Generation
  79. Named Entity Recognition (NER). NER translates… “Barack Obama was the president of the United States of America. Before that, Obama was a senator.” into… <person id="barack_obama">Barack Obama</person> was the <role>president</role> of the <country id="usa">United States of America</country>. Before that, <person id="barack_obama">Obama</person> was a <role>senator</role>. In Solr, this would become: text: Barack Obama was the president of the United States of America. Before that, Obama was a senator. | person: Barack Obama | country: United States of America | role: [ president, senator ]
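
As a hedged example of producing such annotations automatically, here is one off-the-shelf option (spaCy; the talk does not prescribe any specific NER library):

    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes this pretrained model is installed
    text = ("Barack Obama was the president of the United States of America. "
            "Before that, Obama was a senator.")
    for ent in nlp(text).ents:
        print(ent.text, ent.label_)
    # e.g. "Barack Obama PERSON", "the United States of America GPE";
    # role terms like "president"/"senator" would require a custom label set.
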
  80. Open Information Extraction (automatic RDF triple extraction / explicit knowledge graph learning)
  81. Demo!
  82. popular barbeque near Haystack (popular same as "good", "top", "best") | movie theaters near haystack | hotels near popular BBQ in Charlottesville | BBQ near airports near haystack | hotels near movie theaters in Charlottesville … And that’s really just the beginning!
  83. But it’s unfortunately also the end of our time today : (
  84. We operationalize AI for the largest businesses on the planet.
  85. Trey Grainger trey@lucidworks.com @treygrainger http://solrinaction.com Other presentations: http://www.treygrainger.com Discount code: 39grainger Thank you!
