The Search and Anchoring in Video Archives (SAVA) task at MediaEval 2015 consists of two sub-tasks: (i) search for multimedia content within a video archive using multimodal queries referring to information contained in the audio and visual streams, and (ii) automatic selection of video segments within a list of videos that can serve as anchors for further hyperlinking within the archive. The task used a collection of roughly 2700 hours of BBC broadcast TV material for the former sub-task, and about 70 files drawn from this collection for the latter. The search sub-task follows an ad-hoc retrieval scenario and is evaluated with a pooling procedure across participants' submissions, with crowdsourced relevance assessment on Amazon Mechanical Turk (MTurk). The evaluation uses metrics that are variations of mean average precision (MAP) adjusted for this task. For the anchor selection sub-task, overlapping regions of interest across participants' submissions were assessed by MTurk workers, and mean reciprocal rank (MRR), precision, and recall were calculated for evaluation.
http://ceur-ws.org/Vol-1436/
http://www.multimediaeval.org
3. Terminology
• Video (e.g., 2 hours)
• Search result (e.g., 10 min)
• Anchor: segment for which a user requests a link (e.g., 1 min): "I want to know more about this"
• Hyperlink
• Target: relevant segment for a given anchor (e.g., 5 min)
7/2/13 DGA workshop - July 2013, Paris
4. Use Case
Text query:
Speech cue: "hunger around the globe"
Visual cue: "hungry people slim bodies"

Search results:

Video    Start  End    Jump-In
Video1   13:30  15:00  13:30
Video10  15:10  17:00  15:10
Video12  29:50  31:00  29:50

(Slide diagram: Result 1 and anchor segments marked within Videos 1–3, with hyperlinks pointing to target segments.)
5. Search Task Definition

User input (text query):
Speech cue: "hunger around the globe"
Visual cue: "hungry people slim bodies"

Participant submission (search results):

Video    Start  End    Jump-In
Video1   13:30  15:00  13:30
Video10  15:10  17:00  15:10
Video12  29:50  31:00  29:50

(Slide diagram: Result 1 highlighted within Video 1.)
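The tabular submission format above (Video, Start, End, Jump-In with mm:ss timecodes) can be sketched as a small parser. This is a minimal illustration only: the whitespace-separated layout and field names are assumptions, not the official MediaEval run syntax.

```python
# Hypothetical parser for search-result rows like "Video1 13:30 15:00 13:30";
# the column order (video, start, end, jump-in) follows the table above.
from dataclasses import dataclass


def to_seconds(timecode: str) -> int:
    """Convert an 'mm:ss' timecode to seconds."""
    minutes, seconds = timecode.split(":")
    return int(minutes) * 60 + int(seconds)


@dataclass
class SearchResult:
    video: str
    start: int    # segment start, in seconds
    end: int      # segment end, in seconds
    jump_in: int  # point where playback should begin, in seconds


def parse_result(line: str) -> SearchResult:
    video, start, end, jump_in = line.split()
    return SearchResult(video, to_seconds(start), to_seconds(end), to_seconds(jump_in))


results = [parse_result(row) for row in [
    "Video1 13:30 15:00 13:30",
    "Video10 15:10 17:00 15:10",
    "Video12 29:50 31:00 29:50",
]]
```

Keeping all times in seconds makes overlap and ranking computations on the segments straightforward.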
6. Anchoring Task Definition

Input: a video (Video 1), with candidate anchor segments along its timeline.

Participant submission: anchor segments, each given as Video, Start, End.
7. Task History
• ME 2011: Rich Speech Retrieval (predecessor)
• ME 2012: S&HL "brave new" task: Search & Linking (blip.tv)
• ME 2013: S&HL "regular" task; Search: known-item, Linking: BBC collection
• ME 2014: S&HL "regular" task; Search: multi relevant, Linking: multi relevant
• ME 2015: Search & Anchoring + Linking@TRECVid; Search: multi relevant, Anchoring: "brave new task"
8. Dataset: Video Collection
• Search test collection:
– copyright-cleared broadcasts from the period 12.05.2008 – 31.07.2008
– 2686 hours
– ~200 videos excluded (rebroadcast, or audio-visual signal was out of sync)
• Anchoring test collection:
– 33 videos for anchoring, for anchors of the 2013 and 2014 editions
5/13/13 LIME workshop - WWW2013
9. Dataset: Query Generation
• Users:
– BBC employees
– British Film Institute
– journalists + prospective students
• Instructions given in person or in a teleconference session
• Subjective impression: task difficult but doable.
10. Generation of Information Need
Formulate information need → text search / visual search
24. Evaluation: Anchoring Task
• Measures: P@10, Recall
• Segments overlapping with relevant segments are considered relevant
• Recall: how many of the known-relevant segments were found
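The overlap-based relevance rule and measures above (plus the MRR used for anchoring, per the abstract) can be sketched as follows; segments are (start, end) pairs in seconds, and the exact overlap and tie-breaking rules of the official scoring scripts are assumptions here.

```python
# Sketch of overlap-based scoring: a submitted segment counts as relevant
# if it overlaps any known-relevant segment. Minimum-overlap thresholds,
# if any, in the official evaluation are not modeled.

def overlaps(a, b):
    """True if half-open segments a=(start, end) and b=(start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]


def precision_at_10(submitted, relevant):
    """Fraction of the top-10 submitted segments overlapping a relevant one."""
    top10 = submitted[:10]
    hits = sum(1 for s in top10 if any(overlaps(s, r) for r in relevant))
    return hits / len(top10) if top10 else 0.0


def recall(submitted, relevant):
    """Fraction of known-relevant segments matched by some submission."""
    found = sum(1 for r in relevant if any(overlaps(s, r) for s in submitted))
    return found / len(relevant) if relevant else 0.0


def reciprocal_rank(submitted, relevant):
    """1/rank of the first relevant submission (MRR averages this over queries)."""
    for rank, s in enumerate(submitted, start=1):
        if any(overlaps(s, r) for r in relevant):
            return 1.0 / rank
    return 0.0


# Toy example (illustrative data, not from the task):
relevant = [(0, 60), (120, 180)]    # ground-truth anchor segments
submitted = [(30, 90), (200, 260)]  # ranked participant segments
```

With this toy data the first submission overlaps (0, 60) and the second matches nothing, so P@10, recall, and the reciprocal rank all come out to 0.5.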
28. Conclusions
• Task defined by users
• Search task: MAiSP measure
• First steps for the anchoring task
• Few runs prevent strong conclusions
29. The Search and Hyperlinking task was funded by [funders shown as logos on the original slide]. We are grateful to Jana Eggink and Andy O'Dwyer from the BBC for preparing the collection and hosting the user trials.