Collaborative Video Annotation for Multimedia Sharing between Experts and Amateurs

Presentation for the 11th International Workshop of the Multimedia Metadata Community, May 19, 2010, Barcelona, Spain

Notes for the two "Requirements Analysis – Paper Prototype" slides:
  • DIN A4 paper prototype, based on the book by Carolyn Snyder (Paper Prototyping)
  • Post-its as interactive and changeable items
  • At all 4 screens
  • Tested with different groups: experts and non-experts
  • Useful to find usability flaws

Transcript: Collaborative Video Annotation for Multimedia Sharing between Experts and Amateurs

1. The 11th Workshop on Interoperable Social Multimedia Applications (WISMA'10)
   Collaborative Video Annotation for Multimedia Sharing between Experts and Amateurs
   Dominik Renzel, Yiwei Cao, Michael Lottko, Ralf Klamma
   Chair of Computer Science 5 – Information Systems & Databases
   RWTH Aachen University
   19 May 2010, Barcelona, Spain

2. Agenda
   • Motivation & Problem
   • State of the Art
   • The SeViAnno Development Process
   • Requirements Analysis
   • Prototype Implementation
   • Evaluation & Analysis Results
   • Outlook
   • Demo

3. Motivation
   • Annotation activities and experiences of domain experts differ from those of amateurs
   • Amateurs can often contribute knowledge to research
   • Needed: tools with high usability that hide the complexity of metadata standards
   • Find a trade-off between usability and complexity
   • RIAs (Rich Internet Applications) are a good way to provide interactive and usable applications
   • MPEG-7 for reusable multimedia content description
   • Evaluation: cultural heritage communities
4. State of the Art
   • Accurate systems: M-OntoMat-Annotizer (www.acemedia.org), VIA (www.boemie.org)
     – Detailed content description
     – Low user motivation
     – Low usability
     – No help for place information
   • Intuitive systems: VideoAnt (ant.umn.edu), YouTube
     – High usability
     – Keyword tagging only
     – Annotation misuse
5. Requirements Analysis – Paper Prototype

6. Requirements Analysis – Paper Prototype
   • Semantic annotations
   • Video tags
   • Place annotations

7. SeViAnno Implementation
8. MPEG-7 Support
   • Powered by LAS MPEG-7 Services
   • Multimedia Content Descriptions
   • Creation Information
   • Media Information
   • Text Annotations (Keyword, FreeTextAnnotation)
   • Temporal Decomposition
   • Semantics (Semantic References)
   • Semantic Descriptions
   • Semantic Base types
   (see the sketch after this slide)
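The bullet list above names the MPEG-7 description tools SeViAnno builds on. As a minimal, illustrative sketch (not code from the presentation), the following Python snippet assembles such a description with a free-text annotation, a keyword, and one temporally decomposed segment. The element names (Mpeg7, TextAnnotation, TemporalDecomposition, MediaTime, …) follow the MPEG-7 schema; the sample video content, keyword, and time values are invented.

```python
import xml.etree.ElementTree as ET

MPEG7_NS = "urn:mpeg:mpeg7:schema:2001"
XSI_NS = "http://www.w3.org/2001/XMLSchema-instance"
ET.register_namespace("", MPEG7_NS)
ET.register_namespace("xsi", XSI_NS)

def m7(tag: str) -> str:
    """Qualify a tag name with the MPEG-7 namespace."""
    return f"{{{MPEG7_NS}}}{tag}"

# Root element and a content-entity description for a single video.
root = ET.Element(m7("Mpeg7"))
desc = ET.SubElement(root, m7("Description"),
                     {f"{{{XSI_NS}}}type": "ContentEntityType"})
content = ET.SubElement(desc, m7("MultimediaContent"),
                        {f"{{{XSI_NS}}}type": "VideoType"})
video = ET.SubElement(content, m7("Video"))

# Text annotation on the whole video: free text plus a keyword.
ann = ET.SubElement(video, m7("TextAnnotation"))
ET.SubElement(ann, m7("FreeTextAnnotation")).text = "Overview of the excavation site"
kw = ET.SubElement(ann, m7("KeywordAnnotation"))
ET.SubElement(kw, m7("Keyword")).text = "excavation"

# Temporal decomposition: one annotated segment starting at 01:15, 10 s long.
decomp = ET.SubElement(video, m7("TemporalDecomposition"))
seg = ET.SubElement(decomp, m7("VideoSegment"))
seg_ann = ET.SubElement(seg, m7("TextAnnotation"))
ET.SubElement(seg_ann, m7("FreeTextAnnotation")).text = "Close-up of a wall painting"
time = ET.SubElement(seg, m7("MediaTime"))
ET.SubElement(time, m7("MediaTimePoint")).text = "T00:01:15"
ET.SubElement(time, m7("MediaDuration")).text = "PT10S"

print(ET.tostring(root, encoding="unicode"))
```

Building the document through a namespace-aware API, rather than hand-writing XML, keeps the output consistently qualified against urn:mpeg:mpeg7:schema:2001.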
9. The SeViAnno Interface
   http://tosini.informatik.rwth-aachen.de:8134/media/SeViAnno.html

10. Evaluation Results – Quantitative Analysis
   • Professionals spent a lot of effort to provide detailed annotations
11. Evaluation Results – Qualitative Analysis
   • Amateur annotations
     – Fast, but imprecise
     – Represent knowledge otherwise not accessible
     – Many annotators available
   • Expert annotations
     – Slow, but very precise
     – Few annotators available
   ⇒ Amateur and expert annotations complement each other, but amateur annotations introduce data uncertainty.
12. Outlook
   • Semantic networks (challenging to create a simple UI)
   • Spatiotemporal decompositions
   • Automatic integration of new videos from RSS podcasts
   • Transcoding facilities using cloud computing infrastructure (using Xuggle, FFmpeg; see the sketch below)
   • Enhanced community awareness
   • Integration into YouTell (non-linear multimedia storytelling)
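The transcoding item in the outlook names Xuggle and FFmpeg as candidate tools. As a rough sketch of what such a facility might wrap (an assumption for illustration, not a detail from the presentation), the following Python snippet invokes the FFmpeg CLI to produce an H.264/AAC MP4; it assumes ffmpeg is installed and on the PATH, and the file names are hypothetical.

```python
import subprocess

def transcode_to_mp4(src: str, dst: str) -> None:
    """Transcode a source video to an H.264/AAC MP4 via the ffmpeg CLI."""
    subprocess.run(
        ["ffmpeg",
         "-y",               # overwrite the output file if it exists
         "-i", src,          # input video
         "-c:v", "libx264",  # H.264 video codec
         "-c:a", "aac",      # AAC audio codec
         dst],
        check=True,          # raise CalledProcessError on failure
    )

if __name__ == "__main__":
    transcode_to_mp4("upload.avi", "upload.mp4")  # hypothetical file names
```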
