Presentation, 16 May morning, case study 1: Maarten de Rijke
Presentation Transcript

  • Inside the social video live player: semantic linking based on subtitles
    Daan Odijk, Edgar Meij & Maarten de Rijke
  • Example: live subtitles from a Dutch talk show (translated from Dutch):
    "It would be the only salvation for the terminally ill club."
    "...he wished the Ajax board and management sent home immediately."
    "In his weekly Telegraaf column..."
    "Last Monday Johan Cruijff, from his home town Barcelona..."
    "...once again threw a grenade into the Amsterdam Arena."
  • Link candidates identified in these subtitles, with confidence scores: Johan Cruijff 46.5%, Barcelona 25.7%, grenade ("granaat") 10.7%, Monday ("maandag") 3.9%, FC Barcelona 0.5%
  • Semantic linking: 2.5 links / minute
  • Semantic linking is real-time and stream-based; precision matters more than recall
  • Semantic linking
    • Step 1 – Ensuring recall: a ranked list of link candidates via lexical matching
    • Step 2 – Boosting precision: learning to rerank with supervised machine learning (Random Forests)
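The two steps above can be sketched as follows. The toy anchor index, the feature set, and the fixed linear scorer are illustrative assumptions, not the authors' code; in the paper the reranking step uses trained Random Forests.

```python
# Sketch of the two-step pipeline on this slide. The anchor index and
# the hand-set scorer are illustrative; the paper trains Random Forests.

# Toy Wikipedia anchor index: anchor text -> {target: COMMONNESS(a, w)}
ANCHOR_INDEX = {
    "johan cruijff": {"Johan_Cruijff": 0.98},
    "barcelona": {"Barcelona": 0.80, "FC_Barcelona": 0.15},
}

def ngrams(tokens, max_n=3):
    """Every n-gram of the chunk, up to length max_n."""
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield " ".join(tokens[i:i + n])

def generate_candidates(chunk):
    """Step 1 - ensuring recall: lexical matching of chunk n-grams
    against known Wikipedia anchor texts yields link candidates."""
    candidates = []
    for a in ngrams(chunk.lower().split()):
        for w, commonness in ANCHOR_INDEX.get(a, {}).items():
            candidates.append(
                {"anchor": a, "target": w,
                 "COMMONNESS": commonness, "LEN": len(a.split())})
    return candidates

def rerank(candidates):
    """Step 2 - boosting precision: rescore candidates with a learned
    model; a hand-set linear scorer stands in for the trained forest."""
    def score(c):
        return 0.8 * c["COMMONNESS"] + 0.2 * (c["LEN"] > 1)
    return sorted(candidates, key=score, reverse=True)

ranked = rerank(generate_candidates("Johan Cruijff lives in Barcelona"))
print(ranked[0]["target"])  # Johan_Cruijff
```

Because step 1 is recall-oriented it happily over-generates (both Barcelona targets appear); step 2 is what pushes the correct targets to the top.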
  • Context as a graph: vertices are subtitle chunks (chunk t1, chunk t2, chunk t3, ...), anchors such as (t2, a) and (t3, a'), and target articles w; consecutive chunks are linked, and anchors connect chunks to their targets.
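A minimal sketch of such a context graph, using plain adjacency sets in place of a graph library. The node naming and the anchor-to-target edges are assumptions based on the slide's figure.

```python
# Context graph sketch: vertices are subtitle chunks t_i, anchors
# (t_i, a) and target articles w; edges link each chunk to the previous
# one, and each anchor connects its chunk to its target.
from collections import defaultdict

class ContextGraph:
    def __init__(self):
        self.adj = defaultdict(set)

    def add_edge(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)

    def add_chunk(self, i):
        self.adj[("chunk", i)]  # ensure the node exists
        if i > 1:
            self.add_edge(("chunk", i - 1), ("chunk", i))

    def add_link(self, i, anchor, target):
        self.add_edge(("chunk", i), ("anchor", i, anchor))
        self.add_edge(("anchor", i, anchor), ("target", target))

    def degree(self, node):
        """DEGREE(w, G): number of edges connected to the node."""
        return len(self.adj[node])

    def degree_centrality(self, node):
        """DEGREECENTRALITY(w, G): degree as a ratio of possible edges."""
        return self.degree(node) / max(1, len(self.adj) - 1)

G = ContextGraph()
for i in (1, 2, 3):
    G.add_chunk(i)
G.add_link(2, "cruijff", "Johan_Cruijff")
G.add_link(3, "barcelona", "Barcelona")
print(G.degree(("chunk", 2)))  # 3: chunk t1, chunk t3 and one anchor
```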
  • Table 1: Features used for the learning to rerank approach.

    Anchor features
      LEN(a) = |a|: number of terms in the n-gram a
      IDF_f(a): inverse document frequency of a in representation f, where f ∈ {title, anchor, content}
      KEYPHRASE(a): probability that a is used as an anchor text in Wikipedia (documents)
      LINKPROB(a): probability that a is used as an anchor text in Wikipedia (occurrences)
      SNIL(a): number of articles whose title equals a sub-n-gram of a
      SNCL(a): number of articles whose title matches a sub-n-gram of a

    Target features
      LINKS_f(w): number of Wikipedia articles linking to or from w, where f ∈ {in, out} respectively
      GEN(w): depth of w in the Wikipedia category hierarchy
      REDIRECT(w): number of redirect pages linking to w
      WIKISTATS_n(w): number of times w was visited in the last n ∈ {7, 28, 365} days
      WIKISTATSTREND_{n,m}(w): number of times w was visited in the last n days divided by the number of times w was visited in the last m days, where (n, m) ∈ {(1, 7), (7, 28), (28, 365)}

    Anchor + target features
      TF_f(a, w) = n_f(a, w) / |f|: relative phrase frequency of a in representation f of w, normalized by the length of f, where f ∈ {title, first sentence, first paragraph}
      POS1(a, w) = pos1(a) / |w|: position of the first occurrence of a in w, normalized by the length of w
      NCT(a, w): does a contain the title of w?
      TCN(a, w): does the title of w contain a?
      TEN(a, w): does the title of w equal a?
      COMMONNESS(a, w): probability of w being the target of a link with anchor text a

    The context graph has a set E of edges; vertices are either a chunk t_i, a target w, or an anchor (t_i, a). Edges link each chunk t_i to t_{i−1}.

  • Table 2: Context features used for learning to rerank on top of the features listed in Table 1.

    Context features
      DEGREE(w, G): number of edges connected to the node representing Wikipedia article w in context graph G
      DEGREECENTRALITY(w, G): centrality of Wikipedia article w in context graph G, computed as the ratio of edges connected to the node representing w in G
      PAGERANK(w, G): importance of the node representing w in context graph G, measured using PageRank

    Computing features from context graphs: to feed the learning to rerank approach with information from the context graph, these features are computed for each link candidate.
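Three of the anchor statistics in Table 1 are simple ratios over Wikipedia link counts. A toy computation, with all counts invented for illustration (not real Wikipedia statistics):

```python
# Toy computation of COMMONNESS, LINKPROB and KEYPHRASE from Table 1;
# every count below is invented for illustration.

link_counts = {"barcelona": {"Barcelona": 800, "FC_Barcelona": 150}}
occurrences = {"barcelona": 4000}   # total textual occurrences of a
docs_with = {"barcelona": 1000}     # documents containing a
docs_linking = {"barcelona": 300}   # documents where a is anchor text

def commonness(a, w):
    """COMMONNESS(a, w): of all links with anchor text a, the
    fraction pointing to target w."""
    total = sum(link_counts[a].values())
    return link_counts[a].get(w, 0) / total

def linkprob(a):
    """LINKPROB(a): fraction of occurrences of a used as anchor text."""
    return sum(link_counts[a].values()) / occurrences[a]

def keyphrase(a):
    """KEYPHRASE(a): the same probability, counted over documents."""
    return docs_linking[a] / docs_with[a]

print(round(commonness("barcelona", "Barcelona"), 3))  # 0.842
```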
  • Evaluation
    • 50 segments of DWDD
    • 6h 3m 41s of video
    • 5,173 lines of subtitles
    • 6.97 terms on average per line
    • 1,596 links
    • 1,446 with known target Wikipedia article
    • 897 unique Wikipedia articles
    • 2.47 unique Wikipedia articles per minute
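The reported rates are internally consistent, as a quick arithmetic check shows:

```python
# Sanity check of the evaluation statistics: 897 unique Wikipedia
# articles over 6h 3m 41s of video should give roughly the reported
# 2.47 unique articles per minute.
video_seconds = 6 * 3600 + 3 * 60 + 41
minutes = video_seconds / 60
unique_articles = 897
rate = unique_articles / minutes
print(round(rate, 2))  # 2.47
```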
  • Results, measured in terms of R-precision and MAP.

    Table 4: Semantic linking results for the graph-based context model and an oracle run (indicating a ceiling). Significant differences, tested using a two-tailed paired t-test, are indicated for lines 3–6 with (none), △ (p < 0.05) and ▲ (p < 0.01); the position of the symbol indicates whether the comparison is against line 1 (leftmost) or line 2 (rightmost).

                                                  Avg. classification time
                                                  per chunk (ms)   R-Prec     MAP
    1. Baseline retrieval model                          54        0.5753     0.6235
    2. Learning to rerank approach                       99        0.7177     0.7884
    Learning to rerank (L2R) + one context graph feature
    3. L2R+DEGREE                                       104        0.7375▲△   0.8252▲▲
    4. L2R+DEGREECENTRALITY                             108        0.7454▲▲   0.8219▲▲
    5. L2R+PAGERANK                                     119        0.7380▲△   0.8187▲▲
    Learning to rerank (L2R) + three context graph features
    6. L2R+DEGREE+PAGERANK+DEGREECENTRALITY             120        0.7341▲    0.8204▲▲
    7. Oracle picking the best out of lines 3–5
       for each video segment                                      0.7636     0.8400

    • Lines 3–6: significant improvement over the baseline.
    • Lines 3–5: significant improvement over both the baseline and plain L2R.
    • Line 6: combining the three context features yields no further improvement in effectiveness over a single context graph feature.
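The two effectiveness metrics in Table 4 are standard IR measures and can be computed from scratch; the toy ranked list below (1 = correct link) is illustrative only.

```python
# R-precision and (mean) average precision, the metrics of Table 4,
# computed on a toy ranked candidate list with 0/1 judgments.

def r_precision(relevances):
    """Precision at rank R, where R is the total number of relevant items."""
    r = sum(relevances)
    return sum(relevances[:r]) / r if r else 0.0

def average_precision(relevances):
    """Mean of precision@k over the ranks k of the relevant items."""
    hits, total = 0, 0.0
    for k, rel in enumerate(relevances, start=1):
        if rel:
            hits += 1
            total += hits / k
    return total / hits if hits else 0.0

def mean_average_precision(runs):
    """MAP: average precision, averaged over a set of ranked lists."""
    return sum(average_precision(r) for r in runs) / len(runs)

run = [1, 0, 1, 1, 0]
print(r_precision(run))        # 2 of the top R=3 are relevant
print(average_precision(run))  # (1/1 + 2/3 + 3/4) / 3
```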
  • Wrapping up
    • Semantic linking based on retrieval and machine learning
    • Real-time, highly effective and efficient
    • Available as a web service and open source software (xTAS, Elastic Search)
    • Maarten de Rijke, derijke@uva.nl
  • Next steps: technical work
    • Extend to the output of a speech recognizer instead of subtitles
    • Online learning to rank for automatic optimization
    • Linking to other types of content, including social …