
Intent Algorithms: The Data Science of Smart Information Retrieval Systems


Search engines, recommendation systems, advertising networks, and even data analytics tools all share the same end goal: delivering the most relevant information possible to meet a given information need, usually in real time. Perfecting these systems requires algorithms that build a deep understanding of the domains represented by the underlying data, understand the nuanced ways in which words and phrases should be parsed and interpreted within different contexts, score the relationships between arbitrary phrases and concepts, continually learn from users' context and interactions to make the system smarter, and generate custom models of personalized tastes for each user of the system.

In this talk, we'll dive into the philosophical questions associated with such systems ("How do you accurately represent and interpret the meaning of words?", "How do you prevent filter bubbles?", etc.), as well as practical examples of how these systems have been successfully implemented in production by combining a variety of available commercial and open source components (inverted indexes, entity extraction, similarity scoring and machine-learned ranking, auto-generated knowledge graphs, phrase interpretation and concept expansion, etc.).



  1. Intent Algorithms: The data science of smart information retrieval systems. Trey Grainger, SVP of Engineering, Lucidworks. Southern Data Science Conference, 2017.04.07
  2. About Me: Trey Grainger, SVP of Engineering • Previously Director of Engineering @ CareerBuilder • MBA, Management of Technology, Georgia Tech • BA, Computer Science, Business, & Philosophy, Furman University • Information Retrieval & Web Search, Stanford University. Other fun projects: • Co-author of Solr in Action, plus numerous research papers • Frequent conference speaker • Founder of Celiaccess.com, the gluten-free search engine • Lucene/Solr contributor
  3. Agenda: • Introduction (Apache Solr, Lucidworks / Fusion) • Search Engine Fundamentals (Keyword Search, Relevancy Ranking, Domain-specific Relevancy, Crafting Relevancy Functions) • Reflected Intelligence (Signals demo, Recommendations demo, Learning to Rank demo) • Semantic Search (RDF / SPARQL, Entity Extraction, Query Parsing, Semantic Knowledge Graph)
  4. Dimensions of User Intent (diagram with User Intent at the center): Traditional Keyword Search, Recommendations, Semantic Search, Personalized Search, Augmented Search, Domain-aware Matching
  5. what do you do?
  6. Search-Driven Everything: Customer Service, Customer Insights, Fraud Surveillance, Research Portal, Online Retail, Digital Content
  7. Apache Solr
  8. “Solr is the popular, blazing-fast, open source enterprise search platform built on Apache Lucene™.”
  9. Key Solr Features: ● Multilingual Keyword search ● Relevancy Ranking of results ● Faceting & Analytics (nested / relational) ● Highlighting ● Spelling Correction ● Autocomplete/Type-ahead Prediction ● Sorting, Grouping, Deduplication ● Distributed, Fault-tolerant, Scalable ● Geospatial search ● Complex Function queries ● Recommendations (More Like This) ● Graph Queries and Traversals ● SQL Query Support ● Streaming Aggregations ● Batch and Streaming processing ● Highly Configurable / Plugins ● Learning to Rank ● Building machine-learning models ● … many more. *source: Solr in Action, chapter 2
  10. The standard for enterprise search: 90% of the Fortune 500 uses Solr.
  11. Lucidworks Fusion
  12. Typical Search Architecture Evolution (architecture diagram): Solr shards plus Zookeeper (shared config management, leader election, load balancing), an optional Spark/Hadoop worker cluster over HDFS, and a collection of pieces you assemble yourself: Nutch/Heritrix crawling, log processing, Mahout (recommender), ManifoldCF (connectors)*, SiLK, security (roll your own), scheduling (cron?), admin UI, deployment (roll your own), monitoring (roll your own), relevance tools (roll your own), and NLP tools. Tika ships w/ Solr, but can't be scaled independently. *only 12 connectors available, compared w/ 60+ in Fusion
  13. Fusion Simplifies the Deployment (architecture diagram): Apache Solr shards and Apache Zookeeper (leader election, load balancing, shared config management), Apache Spark workers (cluster manager), optional HDFS, and Fusion core services with security built-in: ETL and query pipelines, recommenders/signals, NLP, machine learning, alerting and messaging, security, connectors (logs, file, web, database, cloud), scheduling, a REST API, and the Lucidworks View admin UI.
  14. Lucidworks Fusion
  15. Fusion powers search for the brightest companies in the world.
  16. search & relevancy
  17. Self-learning Intent Algorithm Spectrum: Basic Keyword Search (inverted index, tf-idf, bm25, multilingual text analysis, query formulation, etc.) → Taxonomies / Entity Extraction (entity recognition, ontologies, synonyms, etc.) → Query Intent (query classification, semantic query parsing, concept expansion, rules, clustering, classification) → Relevancy Tuning (signals, A/B testing/genetic algorithms, Learning to Rank, neural networks)
  18. Basic Keyword Search: the beginning of a typical search journey
  19. The inverted index. What you SEND to Lucene/Solr (Document / Content Field):
    doc1: once upon a time, in a land far, far away
    doc2: the cow jumped over the moon.
    doc3: the quick brown fox jumped over the lazy dog.
    doc4: the cat in the hat
    doc5: The brown cow said “moo” once.
    How the content is INDEXED into Lucene/Solr (conceptually) (Term → Documents):
    a → doc1 [2x]
    brown → doc3 [1x], doc5 [1x]
    cat → doc4 [1x]
    cow → doc2 [1x], doc5 [1x]
    once → doc1 [1x], doc5 [1x]
    over → doc2 [1x], doc3 [1x]
    the → doc2 [2x], doc3 [2x], doc4 [2x], doc5 [1x]
    …
  20. Matching queries to documents: /solr/select/?q=apache solr
    Term → Documents:
    apache → doc1, doc3, doc4, doc5
    hadoop → doc2, doc4, doc6
    solr → doc1, doc3, doc4, doc7, doc8
    “apache” alone matches doc5; “solr” alone matches doc7 and doc8; both terms match doc1, doc3, and doc4.
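
To make the inverted-index mechanics concrete, here is a minimal sketch in plain Python (illustrative only, not Lucene's actual data structures) that builds a term → documents index and answers a multi-term query by intersecting postings lists:

    from collections import defaultdict

    docs = {
        "doc2": "the cow jumped over the moon",
        "doc3": "the quick brown fox jumped over the lazy dog",
    }

    index = defaultdict(set)              # term -> set of doc ids (postings list)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)

    def match_all(query):
        """AND semantics: return docs containing every query term."""
        postings = [index[t] for t in query.lower().split()]
        return set.intersection(*postings) if postings else set()

    print(match_all("jumped over"))       # {'doc2', 'doc3'}
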
  21. Text Analysis: generating terms to index from raw text
  22. Text Analysis in Solr. A text field in Lucene/Solr has an Analyzer containing:
    ① Zero or more CharFilters: take incoming text and “clean it up” before it is tokenized
    ② One Tokenizer: splits incoming text into a Token Stream containing zero or more Tokens
    ③ Zero or more TokenFilters: examine and optionally modify each Token in the Token Stream
    *From Solr in Action, Chapter 6
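
The three analysis stages can be illustrated with a small plain-Python stand-in (not Solr's actual classes; a real Solr analyzer is configured per field type in the schema):

    import re

    # CharFilters: clean up raw text before tokenization (here: strip HTML tags)
    char_filters = [lambda text: re.sub(r"<[^>]+>", " ", text)]

    # Tokenizer: split the cleaned text into a stream of tokens
    def tokenizer(text):
        return re.findall(r"\w+", text)

    # TokenFilters: examine and optionally modify each token
    token_filters = [
        lambda tokens: [t.lower() for t in tokens],                     # lowercase
        lambda tokens: [t for t in tokens if t not in {"a", "the"}],    # stop words
    ]

    def analyze(text):
        for cf in char_filters:
            text = cf(text)
        tokens = tokenizer(text)
        for tf in token_filters:
            tokens = tf(tokens)
        return tokens

    print(analyze("<b>The</b> quick brown fox"))   # ['quick', 'brown', 'fox']
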
  26. Per-language Analysis Chains. *Some of the 32 different language configurations in Appendix B of Solr in Action
  29. Relevancy Ranking: scoring the results, returning the best matches
  30. Classic Lucene/Solr Relevancy Algorithm:
    Score(q, d) = ∑ t in q ( tf(t in d) · idf(t)² · t.getBoost() · norm(t, d) ) · coord(q, d) · queryNorm(q)
    Where: t = term; d = document; q = query; f = field
    tf(t in d) = numTermOccurrencesInDocument^½
    idf(t) = 1 + log(numDocs / (docFreq + 1))
    coord(q, d) = numTermsInDocumentFromQuery / numTermsInQuery
    queryNorm(q) = 1 / sumOfSquaredWeights^½
    sumOfSquaredWeights = q.getBoost()² · ∑ t in q ( idf(t) · t.getBoost() )²
    norm(t, d) = d.getBoost() · lengthNorm(f) · f.getBoost()
    *Source: Solr in Action, chapter 3
  32. TF * IDF. • Term Frequency: “How well does a term describe a document?” (measure: how often the term occurs per document) • Inverse Document Frequency: “How important is the term overall?” (measure: how rare the term is across all documents). *Source: Solr in Action, chapter 3
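
The two components transcribe directly from the definitions above (square-root-dampened term frequency, smoothed logarithmic idf); a small sketch:

    import math

    def tf(occurrences_in_doc):                 # tf(t in d) = count^0.5
        return math.sqrt(occurrences_in_doc)

    def idf(num_docs, doc_freq):                # idf(t) = 1 + log(numDocs / (docFreq + 1))
        return 1 + math.log(num_docs / (doc_freq + 1))

    # A term occurring 4 times in a doc, appearing in 9 of 1000 docs;
    # note the classic Lucene formula squares the idf term:
    print(tf(4) * idf(1000, 9) ** 2)
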
  33. BM25 (Okapi “Best Match”, 25th iteration):
    Score(q, d) = ∑ t in q idf(t) · ( tf(t in d) · (k + 1) ) / ( tf(t in d) + k · (1 – b + b · |d| / avgdl) )
    Where: t = term; d = document; q = query; i = index
    tf(t in d) = numTermOccurrencesInDocument^½
    idf(t) = 1 + log(numDocs / (docFreq + 1))
    |d| = ∑ t in d 1
    avgdl = ( ∑ d in i |d| ) / ( ∑ d in i 1 )
    k = free parameter. Usually ~1.2 to 2.0. Increases term frequency saturation point.
    b = free parameter. Usually ~0.75. Increases impact of document normalization.
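
As an illustrative transcription of the formula above (using the slide's dampened tf and smoothed idf, with k and b at their usual defaults):

    import math

    def bm25(query_terms, doc, corpus, k=1.2, b=0.75):
        avgdl = sum(len(d) for d in corpus) / len(corpus)   # average doc length
        score = 0.0
        for t in query_terms:
            tf = math.sqrt(doc.count(t))                    # dampened term frequency
            df = sum(1 for d in corpus if t in d)           # document frequency
            idf = 1 + math.log(len(corpus) / (df + 1))
            score += idf * (tf * (k + 1)) / (tf + k * (1 - b + b * len(doc) / avgdl))
        return score

    corpus = [["apache", "solr", "search"], ["apache", "hadoop"], ["cooking", "tips"]]
    print(bm25(["apache", "solr"], corpus[0], corpus))
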
  34. That's great, but what about domain-specific knowledge? News search: popularity and freshness drive relevance. Restaurant search: geographical proximity and price range are critical. Ecommerce: likelihood of a purchase is key. Movie search: more popular titles are generally more relevant. Job search: category of job, salary range, and geographical proximity matter. TF * IDF of keywords can't hold its own against good domain-specific relevance factors!
  35. Domain-specific relevancy calculation (news website example), weighting four factors at 25% each:
    /select?
      fq=$myQuery&
      q=_query_:"{!func}scale(query($myQuery),0,100)" AND
        _query_:"{!func}div(100,map(geodist(),0,1,1))" AND
        _query_:"{!func}recip(rord(publicationDate),0,100,100)" AND
        _query_:"{!func}scale(popularity,0,100)"&
      myQuery="street festival"&
      sfield=location&
      pt=33.748,-84.391
    *Example from chapter 16 of Solr in Action
  36. Fancy boosting functions (restaurant search example): distance (50%) + keywords (30%) + category (20%)
    q=_val_:"scale(mul(query($keywords),1),0,30)" AND
      _val_:"scale(sum($radiusInKm,mul(query($distance),-1)),0,50)" AND
      _val_:"scale(mul(query($category),1),0,20)"
    &keywords=filet mignon
    &radiusInKm=48.28
    &distance=_val_:"geodist(latitudelongitude.latlon_is,33.77402,-84.29659)"
    &category="fine dining"
    &fq={!cache=false v=$keywords}
  37. This is powerful, but feels like a lot of work to get right…
  38. what is “reflected intelligence”?
  39. Reflected Intelligence: “Leveraging previous data and interactions to improve how new data and interactions should be interpreted.” The Three C's: • Content: keywords and other features in your documents • Collaboration: how others have chosen to interact with your system • Context: available information about your users and their intent
  40. Examples of Reflected Intelligence: ● Recommendation algorithms ● Building user profiles from past searches, clicks, and other actions ● Identifying correlations between keywords/phrases ● Building out automatically-generated ontologies from content and queries ● Determining relevancy judgements (precision, recall, nDCG, etc.) from click logs ● Learning to Rank: using relevancy judgements and machine learning to train a relevance model ● Discovering misspellings, synonyms, acronyms, and related keywords ● Disambiguation of keyword phrases with multiple meanings ● Learning what's important in your content
  41. Consider what you know about users: John lives in Boston but wants to move to New York or possibly another big city. He is currently a sales manager but wants to move toward business development. Irene is a bartender in Dublin and is only interested in food-service-industry jobs within 10 km of her location. Irfan is a software engineer in Atlanta interested in software engineering jobs at a Big Data company; he is happy to move across the U.S. for the right job. Jane is a nurse educator in Boston seeking between $40K and $60K. *Example from chapter 16 of Solr in Action
  42. Query for Jane (a nurse educator in Boston seeking between $40K and $60K):
    http://localhost:8983/solr/jobs/select/?
      fl=jobtitle,city,state,salary&
      q=(
        jobtitle:"nurse educator"^25 OR jobtitle:(nurse educator)^10
      ) AND (
        (city:"Boston" AND state:"MA")^15 OR state:"MA"
      ) AND _val_:"map(salary, 40000, 60000, 10, 0)"
    *Example from chapter 16 of Solr in Action
  43. Search Results for Jane:
    { ...
      "response":{"numFound":22,"start":0,"docs":[
        {"jobtitle":"Nurse Educator", "city":"Braintree", "state":"MA", "salary":56183},
        {"jobtitle":"Nurse Educator", "city":"Brighton", "state":"MA", "salary":71359},
        {"jobtitle":"Clinical Educator (New England/Boston)", "city":"Boston", "state":"MA", "salary":41503},
        …]}}
    *Example documents available @ http://github.com/treygrainger/solr-in-action
  44. You just built a recommendation engine!
  45. Collaborative Filtering. What you SEND to Lucene/Solr (Document / “Users who bought this product” field):
    doc1: user1, user4, user5
    doc2: user2, user3
    doc3: user4
    doc4: user4, user5
    doc5: user4, user1
    How the content is INDEXED into Lucene/Solr (conceptually) (Term → Documents):
    user1 → doc1, doc5
    user2 → doc2
    user3 → doc2
    user4 → doc1, doc3, doc4, doc5
    user5 → doc1, doc4
    …
  46. Step 1: Find similar users who like the same documents. q=documentid:("doc1" OR "doc4")
    Document / “Users who bought this product” field:
    doc1: user1, user4, user5
    doc2: user2, user3
    doc3: user4
    doc4: user4, user5
    doc5: user4, user1
    Top-scoring results (most similar users):
    1) user4 (2 shared likes)
    2) user5 (2 shared likes)
    3) user1 (1 shared like)
    *Source: Solr in Action, chapter 16
  47. Step 2: Search for docs “liked” by those similar users.
    /solr/select/?q=userlikes:("user4"^2 OR "user5"^2 OR "user1"^1)
    Most similar users: 1) user4 (2 shared likes) 2) user5 (2 shared likes) 3) user1 (1 shared like)
    Term → Documents:
    user1 → doc1, doc5
    user2 → doc2
    user3 → doc2
    user4 → doc1, doc3, doc4, doc5
    user5 → doc1, doc4
    Top recommended documents:
    1) doc1 (matches user4, user5, user1)
    2) doc4 (matches user4, user5)
    3) doc5 (matches user4, user1)
    4) doc3 (matches user4) // doc2 does not match
    *Source: Solr in Action, chapter 16
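
Both steps can be sketched in a few lines of plain Python (illustrative only; in Solr this is simply the two boosted queries shown above):

    from collections import Counter

    likes = {                                   # user -> docs "liked" (from the slides)
        "user1": {"doc1", "doc5"},
        "user2": {"doc2"},
        "user3": {"doc2"},
        "user4": {"doc1", "doc3", "doc4", "doc5"},
        "user5": {"doc1", "doc4"},
    }

    def recommend(target_docs):
        # Step 1: score users by how many of the target docs they share.
        similar = {u: len(d & target_docs) for u, d in likes.items() if d & target_docs}
        # Step 2: score docs liked by those users, weighted by user similarity.
        scores = Counter()
        for user, weight in similar.items():
            for doc in likes[user]:
                scores[doc] += weight
        return scores.most_common()             # a real system would filter already-liked docs

    print(recommend({"doc1", "doc4"}))          # doc1, doc4, doc5, doc3 -- matching the slide
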
  48. Using matrix factorization is typically more efficient (ships with Fusion 3.1).
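
The slides don't show the factorization itself, so here is only a generic, loudly-hypothetical sketch of the idea (Fusion's recommender runs on Spark; this is bare-bones SGD): learn low-dimensional user and item vectors whose dot products approximate the observed likes, then read predictions for unobserved pairs off the reconstructed matrix.

    import random
    random.seed(0)

    n_users, n_items, k = 5, 5, 2               # the users/docs from the earlier slides
    likes = {(0, 0): 1.0, (0, 4): 1.0, (1, 1): 1.0, (2, 1): 1.0,
             (3, 0): 1.0, (3, 2): 1.0, (3, 3): 1.0, (3, 4): 1.0,
             (4, 0): 1.0, (4, 3): 1.0}          # (user, doc) -> observed preference

    U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]

    lr, reg = 0.05, 0.01
    for _ in range(500):
        for (u, i), r in likes.items():
            err = r - sum(U[u][f] * V[i][f] for f in range(k))
            for f in range(k):
                u_f = U[u][f]                   # keep pre-update value for V's gradient
                U[u][f] += lr * (err * V[i][f] - reg * u_f)
                V[i][f] += lr * (err * u_f - reg * V[i][f])

    # Predicted preference of user5 (index 4) for doc5 (index 4), never observed;
    # it tends to come out high because user5's likes overlap user4's:
    print(sum(U[4][f] * V[4][f] for f in range(k)))
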
  49. Feedback Loops: user searches → user sees results → user takes an action. Users' actions inform system improvements.
  50. Demo: Signals & Recommendations
  51. • 200%+ increase in click-through rates • 91% lower TCO • 50,000 fewer support tickets • Increased customer satisfaction
  52. Learning to Rank
  53. Learning to Rank (LTR): ● Applies machine learning techniques to discover the combination of features that provides the best ranking ● Requires a labeled set of documents with relevancy scores for a given set of queries ● Features used for ranking are usually more computationally expensive than the ones used for matching ● Typically re-ranks a subset of the matched documents (e.g. the top 1,000)
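
A minimal sketch of the pointwise flavor of LTR, with entirely made-up feature names and values: fit a simple linear model to labeled judgements, then apply it only to re-rank the top candidates from the cheaper matching phase.

    # (bm25_score, click_rate, freshness) -> human relevance judgement (0-3)
    training = [
        ((9.2, 0.30, 0.9), 3), ((7.1, 0.02, 0.1), 1),
        ((5.5, 0.25, 0.8), 2), ((2.0, 0.01, 0.2), 0),
    ]

    # Least-squares fit via simple stochastic gradient descent (no libraries).
    w = [0.0, 0.0, 0.0]
    for _ in range(2000):
        for x, y in training:
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            w = [wi - 0.01 * err * xi for wi, xi in zip(w, x)]

    def rerank(candidates):
        """Re-rank only the top matched docs by the learned model's score."""
        return sorted(candidates, key=lambda d: -sum(wi * xi for wi, xi in zip(w, d[1])))

    candidates = [("docA", (6.0, 0.28, 0.7)), ("docB", (8.0, 0.01, 0.1))]
    print(rerank(candidates))   # docA should outrank docB: the model learned to value clicks
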
  55. Common LTR Algorithms: • RankNet* (neural networks, boosted trees) • LambdaMART* (regression trees) • SVMrank** (SVM classifier). *http://research.microsoft.com/pubs/132652/MSR-TR-2010-82.pdf **http://research.microsoft.com/en-us/people/hangli/cao-et-al-sigir2006.pdf
  56. Demo: Learning to Rank
  57. #1: Pull, Build, Start Solr
    git clone https://github.com/apache/lucene-solr.git && cd lucene-solr/solr
    ant server
    bin/solr -e techproducts -Dsolr.ltr.enabled=true
    #2: Run Searches
    http://localhost:8983/solr/techproducts/browse?q=ipod
    #3: Supply User Relevancy Judgements
    cd contrib/ltr/example/
    nano user_queries.txt
    #4: Install Training Library
    curl -L https://github.com/cjlin1/liblinear/archive/v210.zip > liblinear-2.1.tar.gz
    tar -xf liblinear-2.1.tar.gz && mv liblinear-210 liblinear
    cd liblinear && make && cd ../
    #5: Train and Upload Model
    ./train_and_upload_demo_model.py -c config.json
    #6: Re-run Searches using Machine-learned Ranking Model
    http://localhost:8983/solr/techproducts/browse?q=ipod&rq={!ltr model=exampleModel reRankDocs=25 efi.user_query=$q}
  58. # Run Searches
    http://localhost:8983/solr/techproducts/select?q=ipod
  59. # Supply User Relevancy Judgements
    nano contrib/ltr/example/user_queries.txt
    # Format: query | doc id | relevancy judgement | source
    # Train and Upload Model
    ./train_and_upload_demo_model.py -c config.json
  60. # Re-run Searches using Machine-learned Ranking Model
    http://localhost:8984/solr/techproducts/browse?q=ipod&rq={!ltr model=exampleModel reRankDocs=100 efi.user_query=$q}
  61. semantic search
  62. Building a Taxonomy of Entities. Many ways to generate this: • Statistical analysis of interesting phrases (Word2Vec / GloVe / Dice Conceptual Search) • Topic modelling • Clustering of documents / phrases • Buying a dictionary (often doesn't work for domain-specific search problems) • Generating a model of domain-specific phrases by mining query logs for commonly searched phrases within the domain* (see the sketch below). *K. Aljadda, M. Korayem, T. Grainger, C. Russell. "Crowdsourced Query Augmentation through Semantic Discovery of Domain-specific Jargon," in IEEE Big Data 2014.
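
The query-log approach in the last bullet can be sketched very simply (the log format and cutoff below are hypothetical): count how often multi-word queries recur, and promote the frequent ones to a dictionary of known domain phrases.

    from collections import Counter

    query_log = [
        "senior java developer", "java developer", "registered nurse",
        "java developer", "registered nurse", "senior java developer",
    ]

    phrase_counts = Counter(q.strip().lower() for q in query_log if " " in q)
    MIN_COUNT = 2                   # keep phrases searched at least this often
    dictionary = {p for p, n in phrase_counts.items() if n >= MIN_COUNT}
    print(dictionary)               # all three phrases survive in this toy log
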
  65. entity extraction
  67. semantic query parsing
  69. Probabilistic Query Parser. Goal: given a query, predict which combinations of keywords should be combined together as phrases. Example: senior java developer hadoop. Possible parsings:
    senior, java, developer, hadoop
    "senior java", developer, hadoop
    "senior java developer", hadoop
    "senior java developer hadoop"
    "senior java", "developer hadoop"
    senior, "java developer", hadoop
    senior, java, "developer hadoop"
    Source: Trey Grainger, "Searching on Intent: Knowledge Graphs, Personalization, and Contextual Disambiguation", Bay Area Search Meetup, November 2015.
  70. Semantic Query Parsing: identification of phrases in queries using two steps: 1) Check a dictionary of known terms that is continuously built, cleaned, and refined based upon common inputs from interactions with real users of the system (the SolrTextTagger works well for this*). 2) Also invoke a probabilistic query parser to dynamically identify unknown phrases using statistics from a corpus of data (a language model), as sketched below. *K. Aljadda, M. Korayem, T. Grainger, C. Russell. "Crowdsourced Query Augmentation through Semantic Discovery of Domain-specific Jargon," in IEEE Big Data 2014.
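
Step 2 can be sketched as a small dynamic program over all contiguous segmentations; the phrase probabilities below are made-up stand-ins for statistics mined from a real corpus:

    phrase_prob = {                 # P(phrase) from a corpus / query logs (hypothetical)
        "senior": 0.01, "java": 0.04, "developer": 0.03, "hadoop": 0.02,
        "java developer": 0.015, "senior java developer": 0.004,
    }

    def best_parse(tokens):
        """Return (probability, segmentation) of the most likely parse."""
        if not tokens:
            return 1.0, []
        best = (0.0, None)
        for i in range(1, len(tokens) + 1):
            phrase = " ".join(tokens[:i])
            p = phrase_prob.get(phrase, 0.0)
            if p > 0:
                rest_p, rest = best_parse(tokens[i:])
                if p * rest_p > best[0]:
                    best = (p * rest_p, [phrase] + rest)
        return best

    # Picks "senior java developer", "hadoop" for the earlier example query:
    print(best_parse("senior java developer hadoop".split()))
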
  71. query augmentation
  74. Knowledge Graph. Example documents:
    id: 1; job_title: Software Engineer; desc: software engineer at a great company; skills: .Net, C#, java
    id: 2; job_title: Registered Nurse; desc: a registered nurse at hospital doing hard work; skills: oncology, phlebotemy
    id: 3; job_title: Java Developer; desc: a software engineer or a java engineer doing work; skills: java, scala, hibernate
    The same content is stored both as a docs-terms forward index (doc → terms per field) and as a terms-docs inverted index (term → postings with positions), e.g. desc:engineer → doc 1 (pos 2), doc 3 (pos 3, 7).
    Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain". DSAA 2016.
  75. Knowledge Graph: How the Graph Traversal Works. The same relationships can be seen three ways: a set-theory view (each skill node is the set of documents containing it), a graph view (skill nodes linked through shared documents), and the underlying data-structure view, e.g. skill:Java → docs 1, 2, 6; skill:Scala and skill:Hibernate → docs 3, 4; skill:Oncology → doc 5.
    Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph". DSAA 2016.
  76. Knowledge Graph: Scoring nodes in the graph. Foreground vs. background analysis: every term is scored against its context. The more commonly a term appears within its foreground context versus its background context, the more relevant it is to the specified foreground context.
    z = ( countFG(x) - totalDocsFG · probBG(x) ) / sqrt( totalDocsFG · probBG(x) · (1 - probBG(x)) )
    Foreground query: "Hadoop" →
    { "type":"keywords",
      "values":[
        { "value":"hive", "relatedness":0.9765, "popularity":369 },
        { "value":"spark", "relatedness":0.9634, "popularity":15653 },
        { "value":".net", "relatedness":0.5417, "popularity":17683 },
        { "value":"bogus_word", "relatedness":0.0, "popularity":0 },
        { "value":"teaching", "relatedness":-0.1510, "popularity":9923 },
        { "value":"CPR", "relatedness":-0.4012, "popularity":27089 } ] }
    Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph". DSAA 2016.
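
The foreground-vs-background z-score transcribes directly into code; the counts below are illustrative, loosely patterned on the "hive" row of the response above:

    import math

    def relatedness(count_fg, total_docs_fg, count_bg, total_docs_bg):
        prob_bg = count_bg / total_docs_bg
        expected_fg = total_docs_fg * prob_bg
        return (count_fg - expected_fg) / math.sqrt(expected_fg * (1 - prob_bg))

    # "hive" in 350 of 400 foreground docs (docs matching "Hadoop"), but in only
    # 369 of 50,000 docs overall -> strongly positive raw z-score (the slide's
    # "relatedness" values are a normalized form of this):
    print(relatedness(350, 400, 369, 50000))
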
  77. Knowledge Graph: Multi-level Graph Traversal with Scores (diagram: starting from a materialized "software engineer" node, traverse has_related_skill edges to skill nodes such as Java, C#, .NET, Scala, Hibernate, and VB.NET, then has_related_job_title edges to job title nodes such as .NET Developer, Java Developer, Software Engineer, and Data Scientist, with a relatedness score on each edge).
    Source: Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith. "The Semantic Knowledge Graph". DSAA 2016.
  78. Knowledge Graph
  79. Knowledge Graph
  81. Knowledge Graph Use Case: Summarizing Document Intent. Experiment: pass in raw text (extracting phrases as needed) and rank the phrases' similarity to the document using the SKG. Additionally, the graph can be traversed to "related" entities/keyword phrases NOT found in the original document. Applications: content-based and multi-modal recommendations (no cold-start problem), data cleansing prior to clustering or other ML methods, semantic search / similarity scoring.
  82. Self-learning Intent Algorithm Spectrum (recap): Basic Keyword Search (inverted index, tf-idf, bm25, multilingual text analysis, query formulation, etc.) → Taxonomies / Entity Extraction (entity recognition, ontologies, synonyms, etc.) → Query Intent (query classification, semantic query parsing, concept expansion, rules, clustering, classification) → Relevancy Tuning (signals, A/B testing/genetic algorithms, Learning to Rank, neural networks)
  83. Contact Info: Trey Grainger, trey.grainger@lucidworks.com, @treygrainger. http://solrinaction.com (Meetup discount, 39% off: 39grainger). Other presentations: http://www.treygrainger.com
  84. Additional References
