This document covers semantic search and entity summarization techniques. It discusses generating query-dependent entity summaries from a knowledge graph: facts about an entity are ranked by their importance and their relevance to the query, and the top facts are grouped by predicate and rendered into a summary that respects length and width constraints. User studies found that utility-based summaries outperform relevance-only summaries. These techniques can power applications such as knowledge-enabled search engines and personalized summaries.
13. RESULT PRESENTATION
Summarizing Entities for Entity Cards
F. Hasibi, K. Balog, and S. E. Bratsberg. "Dynamic Factual Summaries for Entity Cards". In Proceedings of SIGIR '17.
17. OTHER APPLICATIONS
‣ News search
• hovering over an entity in entity-annotated documents
‣ Job search
• company descriptions for a given topic
18. ENTITY SUMMARIES
Question: How to generate query-dependent entity summaries that can directly address users' information needs?
19. METHOD
Fact ranking: ranking a set of entity facts (given a search query) with respect to some criterion.
Summary generation: constructing an entity summary of a given size from the ranked facts.
20. RANKING CRITERIA
Importance: the general importance of a fact in describing the entity, irrespective of any particular information need.
Relevance: the relevance of a fact to the query, i.e., how well the fact supports the information need underlying the query.
21. RANKING CRITERIA (cont.)
Utility: the utility of a fact combines its general importance and its relevance to the query into a single number.
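The slide does not spell out how the two scores are combined; as a minimal sketch, assuming a simple convex combination (the actual DynES system learns fact utility with a supervised ranker rather than a fixed formula, and alpha is an assumed mixing weight):

def utility(importance: float, relevance: float, alpha: float = 0.5) -> float:
    """Hypothetical combination of the two criteria into a single score.

    alpha is an assumed parameter, not taken from the paper; DynES learns
    the combination with a supervised ranking model.
    """
    return alpha * importance + (1 - alpha) * relevance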
23. FACT RANKING
‣ Knowledge base statistics as ingredients for the importance features
• in the absence of query logs (see the sketch below)
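As an illustration of what such query-log-free importance signals might look like, here is a small sketch that derives two features purely from knowledge base statistics; the feature names and the exact choice of statistics are assumptions, not the paper's feature set:

from collections import Counter, defaultdict

def kb_importance_features(triples):
    """Illustrative importance signals computed from KB statistics alone.

    triples: iterable of (subject, predicate, object) strings.
    Returns, per predicate, its total usage count and the number of
    distinct subjects it appears with -- both hypothetical stand-ins
    for the paper's importance features.
    """
    usage = Counter()
    subjects = defaultdict(set)
    for s, p, o in triples:
        usage[p] += 1
        subjects[p].add(s)
    return {p: {"predicate_freq": usage[p],
                "distinct_subjects": len(subjects[p])}
            for p in usage}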
27. SUMMARY GENERATION
Algorithm 1: Summary generation algorithm
Input: ranked facts Fe, max height h, max width w
Output: entity summary lines
M ← PredicateNameMapping(Fe)
headings ← []                        ▷ Determine line headings
for f in Fe do
  pname ← M[fp]
  if pname ∉ headings and size(headings) < h then
    headings.add((fp, pname))
  end if
end for
values ← []                          ▷ Determine line values
for f in Fe do
  if fp ∈ headings then
    values[fp].add(fo)
  end if
end for
lines ← []                           ▷ Construct lines
for (fp, pname) in headings do
  line ← pname + ':'
  for v in values[fp] do
    if len(line) + len(v) ≤ w then
      line ← line + v                ▷ Add comma if needed
    end if
  end for
  lines.add(line)
end for
‣ Creates a summary of a given size (length and width); see the runnable sketch below
‣ Resolves identical facts (RF feature)
‣ Groups multi-valued predicates (GF feature)
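To make Algorithm 1 concrete, here is a minimal runnable Python sketch of the same logic, assuming facts arrive as (predicate, object) pairs together with a predicate-to-display-name mapping; it mirrors the pseudocode (with duplicate skipping and grouping as simplified stand-ins for RF and GF) and is not the authors' implementation:

def generate_summary(ranked_facts, pred_names, max_height, max_width):
    """Build entity-card summary lines from ranked (predicate, object) facts.

    ranked_facts: list of (predicate, object) pairs, best first.
    pred_names: dict mapping predicates to human-readable headings.
    max_height: maximum number of summary lines (h in Algorithm 1).
    max_width: maximum characters per line (w in Algorithm 1).
    """
    # Determine line headings: first max_height distinct predicates.
    headings = []                     # list of (predicate, display name)
    for pred, _ in ranked_facts:
        name = pred_names.get(pred, pred)
        if all(pred != p for p, _ in headings) and len(headings) < max_height:
            headings.append((pred, name))

    # Determine line values: group objects under their predicate (GF).
    values = {pred: [] for pred, _ in headings}
    for pred, obj in ranked_facts:
        if pred in values and obj not in values[pred]:  # skip duplicates (RF)
            values[pred].append(obj)

    # Construct lines, respecting the width constraint.
    lines = []
    for pred, name in headings:
        line = name + ":"
        for val in values[pred]:
            sep = " " if line.endswith(":") else ", "
            if len(line) + len(sep) + len(val) <= max_width:
                line += sep + val
        lines.append(line)
    return lines

For example, with hypothetical DBpedia-style facts:

facts = [("dbo:spouse", "Michelle Obama"), ("dbo:birthPlace", "Honolulu"),
         ("dbo:spouse", "Michelle Obama")]
print(generate_summary(facts, {"dbo:spouse": "Spouse",
                               "dbo:birthPlace": "Born in"}, 2, 40))
# ['Spouse: Michelle Obama', 'Born in: Honolulu']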
29. QUERIES
Query types, taken from the DBpedia-entity collection:
‣ Named entity: "madrid", "brooklyn bridge"
‣ Keyword: "vietnam war facts", "eiffel"
‣ List search: "states that border oklahoma"
‣ Natural language: "What is the second highest mountain?"
K. Balog and R. Neumayer. "A Test Collection for Entity Search in DBpedia". In Proceedings of SIGIR '13.
30. EVALUATION (FACT RANKING)
Benchmark construction via crowdsourcing experiments:
‣ rate the importance of the fact w.r.t. the entity
‣ rate the relevance of the fact to the query for the given entity
Rating interface: "How important is this fact for the given entity?" with options Very important / Important / Not important.
31. EVALUATION (FACT RANKING)
Benchmark construction via crowdsourcing experiments:
‣ Collected judgments for ~4K facts
‣ 5 judgments per record
‣ Fleiss' kappa of 0.52 for importance and 0.41 for relevance (moderate agreement)
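For readers unfamiliar with the agreement statistic, Fleiss' kappa can be computed from a (records x categories) table of rating counts; a small sketch using statsmodels, with made-up ratings rather than the paper's data:

import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Each row is one fact; columns count how many of the 5 judges chose
# "Very important", "Important", "Not important" (toy data only).
ratings = np.array([
    [4, 1, 0],
    [2, 2, 1],
    [0, 1, 4],
    [3, 2, 0],
])
# Prints the chance-corrected agreement for this toy table; by convention,
# kappa in [0.41, 0.60] is read as "moderate agreement".
print(fleiss_kappa(ratings))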
32. RESULTS (FACT RANKING)
DynES uses all features; DynES/imp uses the importance features only. Statistical significance is tested at p = 0.05.
Table 2: Comparison of fact ranking against state-of-the-art approaches with URI-only objects. Significance for lines i > 3 is tested against lines 1-3, and for lines 2-3 against lines 1-2.

Model     | Importance NDCG@5 | Importance NDCG@10 | Utility NDCG@5 | Utility NDCG@10
RELIN     | 0.6368            | 0.7130             | 0.6300         | 0.7066
LinkSum   | 0.7018△           | 0.7031             | 0.6504         | 0.6648
SUMMARUM  | 0.7181▲           | 0.7412△            | 0.6719         | 0.7111
DynES/imp | 0.8354▲▲▲         | 0.8604▲▲▲          | 0.7645▲▲▲      | 0.8117▲▲▲
DynES     | 0.8291▲▲▲         | 0.8652▲▲▲          | 0.8164▲▲▲      | 0.8569▲▲▲
Table 4: Fact ranking performance when removing features; features are sorted by the relative difference they make.

Group | Removed feature | NDCG@10 | Δ%    | p
DynES | (all features)  | 0.7873  | -     | -
Imp.  | NEFp            | 0.7757  | -1.16 | 0.08
Imp.  | TypeImp         | 0.7760  | -1.13 | 0.14
16% improvement over the best baseline.
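The NDCG@k numbers above are standard rank-quality scores; as a quick reference, a minimal implementation from graded judgments (not tied to the paper's evaluation code) is:

import math

def ndcg_at_k(gains, k):
    """NDCG@k for a ranked list of graded judgments (higher grade = better).

    gains: relevance grades in the order the system ranked the facts.
    """
    def dcg(grades):
        return sum(g / math.log2(rank + 2) for rank, g in enumerate(grades[:k]))
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal > 0 else 0.0

# Example with graded judgments 0-2, listed in the system's ranked order.
print(round(ndcg_at_k([1, 2, 2, 0, 1], k=5), 4))  # ~0.87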
33. EVALUATION (SUMMARY GENERATION)
‣ Users consume all facts displayed in the summary
‣ The quality of the whole summary should be assessed
‣ Side-by-side evaluation of factual summaries by humans
34. RESULTS (SUMMARY GENERATION)
Figure 4: Boxplot for the distribution of user preferences for each query type: (a) DynES vs. DynES/imp, (b) DynES vs. DynES/rel.
Table 5: Side-by-side evaluation of summaries for different fact ranking methods.

Model                  | Win | Loss | Tie | RI
DynES vs. DynES/imp    | 46  | 23   | 31  | 0.23
DynES vs. DynES/rel    | 75  | 12   | 13  | 0.63
DynES vs. RELIN        | 95  | 5    | 0   | 0.90
Utility vs. Importance | 47  | 16   | 37  | 0.31
Table 6: Side-by-side evaluation of summaries for different summary generation algorithms.

Model                     | Win | Loss | Tie | RI
DynES vs. DynES(-GF)(-RF) | 84  | 1    | 15  | 0.83
DynES vs. DynES(-GF)      | 74  | 0    | 26  | 0.74
DynES vs. DynES(-RF)      | 46  | 2    | 52  | 0.44
The boxplots show how strongly users preferred DynES summaries over DynES/imp (or DynES/rel) summaries; ties are ignored. Considering all queries (the black boxes in Figure 4), we observe that the utility-based summaries (DynES) are generally preferred over the other two, and especially over the relevance-based ones.
• Users preferred utility-based summaries over the others
• Grouping of multi-valued predicates (GF) is perceived as more important by users than the resolution of identical facts (RF)
37. SEMANTIC SEARCH TOOLKIT
Functionalities:
‣ Entity retrieval: returns a ranked list of entities in response to a query
‣ Entity linking: identifies entities in a query and links them to the corresponding entries in the knowledge base
‣ Target type identification: detects the target types (or categories) of a query
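A hypothetical facade showing how these three functionalities might be exposed programmatically; the class and method names below are illustrative assumptions, not the toolkit's actual API:

from dataclasses import dataclass

@dataclass
class LinkedEntity:
    mention: str        # surface form found in the query
    kb_entry: str       # e.g. a knowledge base URI
    confidence: float

class SemanticSearchToolkit:
    """Illustrative interface; all names here are assumptions."""

    def retrieve_entities(self, query: str, k: int = 10) -> list[str]:
        """Entity retrieval: ranked list of entities for the query."""
        raise NotImplementedError

    def link_entities(self, query: str) -> list[LinkedEntity]:
        """Entity linking: map query mentions to knowledge base entries."""
        raise NotImplementedError

    def identify_target_types(self, query: str) -> list[str]:
        """Target type identification: the query's target types/categories."""
        raise NotImplementedError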
38. SEMANTIC SEARCH TOOLKIT
Highlights:
• Web interface, API, and command-line usage
• 3-tier architecture
• Online source code and documentation