Dynamic Factual Summaries for Entity Cards

DYNAMIC FACTUAL SUMMARIES FOR
ENTITY CARDS
Faegheh Hasibi, Krisztian Balog, Svein E. Bratsberg

SIGIR 2017
IAI group
ENTITY CARDS
[Example: an entity card on a result page, with the entity and its summary highlighted]
ENTITY SUMMARIES
[Example: the summary shown on the card differs for the queries "einstein awards" and "einstein family"]
Other application areas
‣ News search
• hovering over an entity in entity-annotated documents
‣ Product and job search
‣ Mobile search
ENTITY SUMMARIES
Question: How to generate query-dependent entity summaries that can directly address users' information needs?
ENTITY SUMMARIZATION
Albert Einstein
… and ~700 more facts
dbo:almaMater dbr:ETH_Zurich
dbo:almaMater dbr:University_of_Zurich
dbo:award dbr:Max_Planck_Medal
dbo:award dbr:Nobel_Prize_in_Physics
dbo:birthDate 1879-03-14
dbo:birthPlace dbr:Ulm
dbo:birthPlace dbr:German_Empire
dbo:citizenship dbr:Austria-Hungary
dbo:children dbr:Eduard_Einstein
dbo:children dbr:Hans_Albert_Einstein
dbo:deathDate 1955-04-18
dbo:deathPlace dbr:Princeton,_New_Jersey
dbo:spouse dbr:Elsa_Einstein
dbo:spouse dbr:Mileva_Marić
dbp:influenced dbr:Leo_Szilard
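The facts above are DBpedia triples for dbr:Albert_Einstein. As an aside, a fact set like this can be pulled from the public DBpedia SPARQL endpoint; the snippet below is a minimal sketch (the endpoint URL, the LIMIT, and the flat (predicate, object) representation are illustrative choices, not part of the original slides).

```python
# Minimal sketch: fetch an entity's facts from the public DBpedia endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?p ?o
    WHERE { <http://dbpedia.org/resource/Albert_Einstein> ?p ?o }
    LIMIT 1000
""")
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()["results"]["bindings"]

# Keep each fact as a (predicate, object) pair, mirroring the list above.
facts = [(b["p"]["value"], b["o"]["value"]) for b in bindings]
print(len(facts), facts[:3])
```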
ENTITY SUMMARIZATION
einstein awards
[The same fact list as above (plus a few more facts, e.g. dbp:influenced dbr:Nathan_Rosen and dbo:knownFor dbr:Brownian_motion), now shown together with the query "einstein awards"]
Fact ranking: ranking a set of entity facts (and a search query) with respect to some criterion
1. dbo:birthDate 1879-03-14
2. dbp:placeOfBirth Ulm
3. dbo:birthPlace dbr:Ulm
4. dbo:deathDate 1955-04-18
5. dbo:award dbr:Nobel_Prize_in_Physics
6. dbo:deathPlace dbr:Princeton,_New_Jersey
7. dbo:birthPlace dbr:German_Empire
8. dbo:almaMater dbr:ETH_Zurich
9. dbo:award dbr:Max_Planck_Medal
10. dbp:influenced dbr:Nathan_Rosen
11. dbo:almaMater dbr:University_of_Zurich
…
ENTITY SUMMARIZATION
Summary generation: constructing an entity summary from ranked entity facts, for a given size
METHOD
Fact ranking: ranking a set of entity facts (and a search query) with respect to some criterion
Summary generation: constructing an entity summary from ranked entity facts, for a given size
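As a rough sketch of how these two stages fit together (the function names and the toy ranking criterion below are illustrative, not the paper's implementation):

```python
from typing import List, Tuple

Fact = Tuple[str, str]  # (predicate, object)

def rank_facts(facts: List[Fact], query: str) -> List[Fact]:
    # Toy stand-in criterion: facts mentioning a (crudely stemmed) query term come first.
    terms = [t.rstrip("s") for t in query.lower().split()]
    score = lambda f: sum(t in (f[0] + " " + f[1]).lower() for t in terms)
    return sorted(facts, key=score, reverse=True)

def generate_summary(ranked: List[Fact], height: int) -> List[str]:
    # Toy stand-in: one "predicate: object" line per fact, up to `height` lines.
    return [f"{p}: {o}" for p, o in ranked[:height]]

facts = [("dbo:birthPlace", "Ulm"), ("dbo:award", "Nobel_Prize_in_Physics")]
print(generate_summary(rank_facts(facts, "einstein awards"), height=1))
# ['dbo:award: Nobel_Prize_in_Physics']
```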
RANKING CRITERIA
Importance: The general importance of the fact in describing the entity, irrespective of any particular information need.
Relevance: The relevance of a fact to the query reflects how well the fact supports the information need underlying the query.
RANKING CRITERIA
Utility: The utility of a fact combines the general importance and the relevance of the fact into a single number.
[Diagram: utility as a combination of importance and relevance]
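The slides do not spell out the combination; one natural reading, given here as an assumption rather than the paper's exact definition, is a weighted sum of the two scores:

$$\mathrm{utility}(f, q) \;=\; \alpha \cdot \mathrm{importance}(f) \;+\; (1 - \alpha) \cdot \mathrm{relevance}(f, q), \qquad \alpha \in [0, 1]$$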
FACT RANKING
‣ Supervised ranking, optimized on utility
• more bias towards importance or relevance
‣ Fact-query pairs as learning instances
Importance features: type-based importance, predicate specificity, object specificity, normalized fact frequency, …
Relevance features: semantic similarity, lexical specificity, inverse rank, context length, …
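A hedged sketch of this supervised setup: each fact-query pair becomes a feature vector with a utility label, and a pointwise regressor scores unseen facts. The hand-rolled features and the choice of learner below are crude stand-ins for the feature groups listed above, not the paper's exact features or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def feature_vector(fact, query):
    # Crude stand-ins for the importance and relevance feature groups above.
    pred, obj = fact
    terms = set(query.lower().split())
    return np.array([
        float(pred.startswith("dbo:")),               # "predicate specificity"
        float(len(obj)),                              # "object specificity"
        float(sum(t in obj.lower() for t in terms)),  # "lexical" relevance
        float(sum(t in pred.lower() for t in terms)),
    ])

# Toy training data: (fact, query, crowdsourced utility label).
train = [
    (("dbo:award", "Nobel_Prize_in_Physics"), "einstein awards", 4.0),
    (("dbo:birthPlace", "Ulm"), "einstein awards", 1.0),
    (("dbp:influenced", "Nathan_Rosen"), "einstein awards", 0.0),
]
X = np.array([feature_vector(f, q) for f, q, _ in train])
y = np.array([u for *_, u in train])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Rank new facts for a query by predicted utility.
facts = [("dbo:spouse", "Mileva_Maric"), ("dbo:award", "Max_Planck_Medal")]
scores = model.predict(np.array([feature_vector(f, "einstein awards") for f in facts]))
print([f for _, f in sorted(zip(scores, facts), reverse=True)])
```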
FACT RANKING
Ingredients from the knowledge base:
METHOD
Fact ranking: ranking a set of entity facts (and a search query) with respect to some criterion
Summary generation: constructing an entity summary from ranked entity facts, for a given size
SUMMARY GENERATION
1. dbo:birthDate 1879-03-14
2. dbp:placeOfBirth Ulm
3. dbo:birthPlace dbr:Ulm
4. dbo:deathDate 1955-04-18
5. dbo:award dbr:Nobel_Prize_in_Physics
6. dbo:deathPlace dbr:Princeton,_New_Jersey
7. dbo:birthPlace dbr:German_Empire
8. dbo:almaMater dbr:ETH_Zurich
9. dbo:award dbr:Max_Planck_Medal
10. dbp:influenced dbr:Nathan_Rosen
11. dbo:almaMater dbr:University_of_Zurich
…
SUMMARY GENERATION
[The ranked facts are mapped onto card lines: each line_i consists of a heading_i and its value_i, and the card is bounded by a maximum height (τh) and width (τw)]
SUMMARY GENERATION
multi-valued predicates
[Same ranked list as above, with facts that share a predicate (e.g., dbo:award, dbo:birthPlace) highlighted]
SUMMARY GENERATION
identical facts
[Same ranked list as above, with identical facts expressed by different predicates (e.g., dbp:placeOfBirth Ulm and dbo:birthPlace dbr:Ulm) highlighted]
SUMMARY GENERATION
Algorithm 1: Summary generation algorithm
Input: Ranked facts Fe, max height h, max width w
Output: Entity summary lines
 1: M ← Predicate-Name-Mapping(Fe)
 2: headings ← []                        ▷ Determine line headings
 3: for f in Fe do
 4:   pname ← M[fp]
 5:   if (pname ∉ headings) and (size(headings) ≤ h) then
 6:     headings.add((fp, pname))
 7:   end if
 8: end for
 9: values ← []                          ▷ Determine line values
10: for f in Fe do
11:   if fp ∈ headings then
12:     values[fp].add(fo)
13:   end if
14: end for
15: lines ← []                           ▷ Construct lines
16: for (fp, pname) in headings do
17:   line ← pname + ':'
18:   for v in values[fp] do
19:     if len(line) + len(v) ≤ w then
20:       line ← line + v                ▷ Add comma if needed
21:     end if
22:   end for
23:   lines.add(line)
24: end for
‣ Creates a summary of a given size (length and width)
‣ Resolves identical facts (RF feature)
‣ Groups multi-valued predicates (GF feature)
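A minimal Python rendering of the reconstructed Algorithm 1 above, assuming facts arrive as (predicate, object) pairs and a predicate-to-name mapping is given (identical facts are resolved because synonymous predicates map to the same heading name, and multi-valued predicates are grouped into one line). The strict-< capacity check and the separator handling are small assumptions where the pseudocode is terse.

```python
def generate_summary(ranked_facts, predicate_names, max_height, max_width):
    """ranked_facts: list of (predicate, object) pairs, best first."""
    # Determine line headings: one per distinct predicate name, up to max_height.
    headings, seen_names = [], set()
    for pred, _ in ranked_facts:
        name = predicate_names.get(pred, pred)
        if name not in seen_names and len(headings) < max_height:
            headings.append((pred, name))
            seen_names.add(name)

    # Determine line values: collect the objects of every selected predicate.
    values = {pred: [] for pred, _ in headings}
    for pred, obj in ranked_facts:
        if pred in values:
            values[pred].append(obj)

    # Construct lines, grouping multi-valued predicates and respecting max_width.
    lines = []
    for pred, name in headings:
        line = name + ":"
        for v in values[pred]:
            piece = (" " if line.endswith(":") else ", ") + v
            if len(line) + len(piece) <= max_width:
                line += piece
        lines.append(line)
    return lines

names = {"dbo:birthDate": "Born", "dbp:placeOfBirth": "Place of birth",
         "dbo:birthPlace": "Place of birth", "dbo:award": "Awards"}
facts = [("dbo:birthDate", "1879-03-14"), ("dbp:placeOfBirth", "Ulm"),
         ("dbo:birthPlace", "Ulm"), ("dbo:award", "Nobel_Prize_in_Physics"),
         ("dbo:award", "Max_Planck_Medal")]
print(generate_summary(facts, names, max_height=3, max_width=60))
# ['Born: 1879-03-14', 'Place of birth: Ulm',
#  'Awards: Nobel_Prize_in_Physics, Max_Planck_Medal']
```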
EVALUATION
QUERIES
Taken from the DBpedia-entity collection
• K. Balog and R. Neumayer. "A Test Collection for Entity Search in DBpedia". In: SIGIR '13.
Query types: named entity, keyword, list search, natural language
Example queries: "madrid", "brooklyn bridge", "vietnam war facts", "eiffel", "states that border oklahoma", "What is the second highest mountain?"
• F. Hasibi, F. Nikolaev, C. Xiong, K. Balog, S. E. Bratsberg, A. Kotov, and J. Callan. "DBpedia-Entity v2: A Test Collection for Entity Search". In: SIGIR '17.
EVALUATION (FACT RANKING)
From the paper: "…do not decide whether the entity card should be displayed or not; we assume that our information access system generates a card for a retrievable and presumably relevant entity (cf. §1). We also note that our focus of attention in this paper is on generating a summary for a given (assumed to be relevant) entity and not on the entity retrieval task itself. We therefore treat entity retrieval as a black box and combine several approaches to ensure that the findings are not specific to any particular entity retrieval method.
Formally, for a query q, we define LE_q as the set of relevant entities according to the ground truth, and E_{q,m} as the set of entities retrieved by method m ∈ M, where M denotes the collection of retrieval methods. A single entity e is selected for q such that:
$$e = \arg\max_{e_q \in E_q} \phi(e_q), \qquad \phi(e_q) = \frac{1}{|M|} \sum_{m \in M} \frac{1}{\mathrm{rank}(e_q, E_{q,m})}, \qquad E_q = \{ e \mid e \in LE_q,\ \exists m \in M : e \in E_{q,m} \}$$
Basically, we select the entity that is retrieved at the highest rank by all methods, on average. (If the entity is not retrieved by method …)"
Selecting entity-query pairs
‣ Select the entity that is retrieved at the highest rank by all methods, on average
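A sketch of the selection rule reconstructed above: among relevant entities retrieved by at least one method, pick the one with the highest average reciprocal rank across methods. Treating "not retrieved" as contributing 0 is an assumption here, since the excerpt is truncated at that point.

```python
def select_entity(relevant, runs):
    """relevant: set of relevant entity ids; runs: list of ranked lists, one per method."""
    candidates = {e for run in runs for e in run if e in relevant}
    def avg_reciprocal_rank(e):
        rrs = [1.0 / (run.index(e) + 1) if e in run else 0.0 for run in runs]
        return sum(rrs) / len(runs)
    return max(candidates, key=avg_reciprocal_rank) if candidates else None

runs = [["A", "B", "C"], ["B", "A"], ["C", "A", "B"]]
print(select_entity({"A", "B"}, runs))  # "A": average RR = (1 + 1/2 + 1/2) / 3
```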
EVALUATION (FACT RANKING)
Collecting judgments via crowdsourcing
‣ Rate the importance of the fact w.r.t. the entity
‣ Rate the relevance of the fact to the query for the given entity
[Crowdsourcing UI: "How important is this fact for the given entity?" with options Very important / Important / Not important]
EVALUATION (FACT RANKING)
‣ ~4K facts
‣ 5 judgments per record
‣ Fleiss' Kappa of 0.52 and 0.41 for importance and relevance, respectively (moderate agreement)
Collecting judgments via crowdsourcing
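For orientation, agreement figures like these can be computed with Fleiss' kappa over an items-by-categories count table (5 raters per record, a 3-level importance scale). The toy counts below are made up for illustration only; only the computation is shown.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Each row: one fact; columns: how many of the 5 raters chose
# [not important, important, very important].
counts = np.array([
    [0, 1, 4],
    [3, 2, 0],
    [1, 3, 1],
    [0, 0, 5],
])
print(round(fleiss_kappa(counts), 2))
```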
[Paper excerpt, truncated: experimental setup (cross-validation, statistical significance testing at α = 0.05) and the fact ranking variants: DynES uses all features, DynES/imp importance features, DynES/rel relevance features only]
Table 2: Comparison of fact ranking against the state-of-the-art approaches with URI-only objects. Significance for lines i > 3 is tested against lines 1, 2, 3; for lines 2, 3 against lines 1, 2.

Model        Importance               Utility
             NDCG@5     NDCG@10       NDCG@5     NDCG@10
RELIN        0.6368     0.7130        0.6300     0.7066
LinkSum      0.7018△    0.7031        0.6504     0.6648
SUMMARUM     0.7181▲    0.7412△       0.6719     0.7111
DynES/imp    0.8354▲▲▲  0.8604▲▲▲     0.7645▲▲▲  0.8117▲▲▲
DynES        0.8291▲▲▲  0.8652▲▲▲     0.8164▲▲▲  0.8569▲▲▲

Table 4: Fact ranking performance by removing features; features are sorted by the relative difference they make.

Group   Removed feature    NDCG@10   Δ%      p
DynES   - all features     0.7873    -       -
Imp.    - NEFp             0.7757    -1.16   0.08
Imp.    - TypeImp          0.7760    -1.13   0.14
RESULTS (FACT RANKING)
16% improvement over the best baseline
Tailored setting (only relational facts)
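For reference, the NDCG@k figures in Tables 2-4 follow the standard discounted-cumulative-gain formulation; the snippet below is the textbook definition with linear graded gains, shown for orientation rather than taken from the paper's evaluation code.

```python
import math

def dcg(gains, k):
    # gains: graded labels of the facts in ranked order (e.g., utility ratings).
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg(gains, k):
    ideal = dcg(sorted(gains, reverse=True), k)
    return dcg(gains, k) / ideal if ideal > 0 else 0.0

print(round(ndcg([4, 2, 3, 0, 1, 0], k=5), 4))
```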
RESULTS (FACT RANKING)
Fact ranking results w.r.t. importance, relevance, and utility. Significance for line i > 1 is tested against lines 1 … i-1.

Model       Importance              Relevance               Utility
            NDCG@5     NDCG@10      NDCG@5     NDCG@10      NDCG@5     NDCG@10
RELIN       0.4733     0.5261       0.3514     0.4255       0.4680     0.5322
DynES/imp   0.7851▲    0.7959▲      0.4671▲    0.5305▲      0.7146▲    0.7506▲
DynES/rel   0.5756▲▽   0.6151▲▽     0.5269▲    0.5775▲      0.6138▲▽   0.6536▲▽
DynES       0.7672▲ ▲  0.7792▲ ▲    0.5771▲▲△  0.6423▲▲▲    0.7547▲△▲  0.7873▲▲▲
[Paper excerpt, truncated: the feature ablation study (importance and relevance features are evenly distributed among the most influential ones; the top-2 features, NEFp and TypeImp, are based on fact predicates, the remaining importance features on fact objects) and the setup of the side-by-side comparisons: DynES vs. DynES/imp, DynES vs. DynES/rel, DynES vs. the top-5 facts from RELIN, and an oracle Utility vs. Importance comparison using perfect (crowdsourced) fact rankings; for RQ4, DynES is compared with variants of Algorithm 1 that drop the grouping of facts with the same predicate (GF) and/or the resolution of identical facts (RF)]
• 47% improvement over the most comparable baseline
• Capturing the relevance aspect is more challenging than capturing importance
EVALUATION (SUMMARY GENERATION)
How to evaluate?
‣ Users consume all facts displayed in the summary
‣ The quality of the whole summary should be assessed
‣ Side-by-side evaluation of factual summaries by humans
RESULTS (SUMMARY GENERATION)
[Figure 4: Boxplots of user preferences for each query subset, (a) DynES vs. DynES/imp and (b) DynES vs. DynES/rel; positive values show that DynES is preferred over DynES/imp or DynES/rel]
Table 5: Side-by-side evaluation of summaries for different fact ranking methods.
Model Win Loss Tie RI
DynES vs. DynES/imp 46 23 31 0.23
DynES vs. DynES/rel 75 12 13 0.63
DynES vs. RELIN 95 5 0 0.90
Utility vs. Importance 47 16 37 0.31
Table 6: Side-by-side evaluation of summaries for different summary generation algorithms.
Model Win Loss Tie RI
DynES vs. DynES(-GF)(-RF) 84 1 15 0.83
DynES vs. DynES(-GF) 74 0 26 0.74
DynES vs. DynES(-RF) 46 2 52 0.44
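As a reading aid (not stated on the slide, but consistent with every row of Tables 5 and 6, where Win + Loss + Tie = 100), the RI column matches the win/loss margin over all comparisons:

$$\mathrm{RI} \;=\; \frac{\#\mathrm{Win} - \#\mathrm{Loss}}{\#\mathrm{Win} + \#\mathrm{Loss} + \#\mathrm{Tie}}, \qquad \text{e.g. } \frac{46 - 23}{100} = 0.23 \text{ for DynES vs. DynES/imp.}$$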
Utility-based summaries (DynES) vs. other variations
• Users preferred utility-based summaries over the others
• The same conclusion holds in the oracle setting (perfect fact ranking)
RESULTS (SUMMARY GENERATION)
From the paper: "…preferred DynES summaries over DynES/imp (or DynES/rel) summaries; ties are ignored. Considering all queries (the black boxes), we observe that the utility-based summaries (DynES) are generally preferred over the other two, and especially over the relevance-based summaries (DynES/rel). These summaries are highly biased towards the query and cannot offer a concise summary; the utility-based summaries, on the other hand, can strike a balance between diversity and bias. Considering the query type breakdowns in Figure 4(a), we observe that the ListSearch and QALD queries, which …"
Different summary generation algorithms
‣ Applied to the same ranked list of facts
Grouping of multi-valued predicates (GF) is perceived as more important by the users than the resolution of identical facts (RF)
RESULTS (SUMMARY GENERATION)
[Figure 4: Boxplots of the distribution of user preferences for each query subset (SemSearch, INEX-LD, List Search, QALD, All Queries); panels (a) DynES vs. DynES/imp and (b) DynES vs. DynES/rel; positive values show that DynES is preferred over DynES/imp or DynES/rel]
Generating dynamic summaries without hurting named entity and keyword queries
[Query-type callouts on the figure: Named entity, Keyword, List search, NL queries, All queries; panels: Utility vs. importance and Utility vs. relevance]
SUMMARY
‣ Generating and evaluating query-dependent entity summaries for entity cards
• Combining two notions: relevance and importance
Future:
‣ Weighing importance and relevance for different query types, and even for individual queries
‣ Go deep! Go neural!
Dynamic Factual Summaries for Entity Cards
THANK YOU
tiny.cc/sigir2017-dynes
RESOURCES