A Semantic Multimedia Web (Part 1)
A Semantic Multimedia Web: Part 1 of a series of lectures given at the Universidad Politécnica de Madrid (UPM), May 2009

  • Transcript

    • 1. A Semantic Multimedia Web: Create, Annotate, Present and Share your Media. Raphaël Troncy, <[email_address]>, CWI, Interactive Information Access
    • 2. #me
      • Academic
        • PhD (2000-2004)
        • PostDoc (2004-2005)
        • Researcher (2005-2009)
        • Lecturer (2009 - )
      • Projects / Standardizations
        • K-Space
        • W3C
        • IPTC
      • (Semantic) Web identity
      Media Fragments Media Annotations
    • 3. Learning Objectives
      • Understand multimedia applications workflow
        • Take the canonical processes of media production model
      • Explore various multimedia metadata formats
        • Be aware of the advantages and limitations of various models
        • Know the interoperability issues and understand COMM, a Core Ontology for Multimedia
      • Learn about upcoming W3C recommendations that will enable fine-grained access to video on the web
    • 4.  
    • 5. Motivation
      • Growing amount of digital multimedia content available on the Web
      • Content difficult to search and reuse
        • Nearly invisible to search engines
    • 6. Image search (Yahoo!)
    • 7. Image search (Google fr)
    • 8. Image search (Google en)
    • 9. Video search
    • 10. Image/Video indexing
      • Techniques used by current search engines
        • search term occurs in the filename or in the caption
        • no semantics
      • Image indexing: main problem
        • an image is not alphabetic: there are no countable discrete units that, in combination, provide the meaning of the image
        • image descriptors are not given with the image: one needs to extract or interpret them
      • Video indexing: additional problem
        • a video has additionally a temporal dimension to take into account
        • a video also has no a priori discrete units (i.e. frames, shots and sequences cannot be absolutely defined)
    • 11. Why is it so difficult to find appropriate multimedia content, to reuse and repurpose content previously published and to adapt interfaces to these content according to different user needs?
    • 12. Agenda
      • Understanding Multimedia Applications Workflow
        • CeWe Color Photo Book creation application
        • Vox Populi argumentation-based video sequences generation
        • Canonical Processes of Media Production
      • Semantic Annotation of Multimedia Content
        • MPEG-7 based ontologies
        • COMM: A Core Ontology for MultiMedia
      • Media Fragments URIs
        • Support the addressing and retrieval of subparts of time-based Web resources
        • Share clips, retrieve excerpts, annotate parts of media
    • 13. Understanding Multimedia Applications Workflow
      • Identify and define a number of canonical processes of media production
      • Community effort
        • 2005: Dagstuhl seminar
        • 2005: ACM MM Workshop on Multimedia for Human Communication
        • 2008: Multimedia Systems Journal Special Issue (core model and companion system papers) editors: Frank Nack, Zeljko Obrenovic and Lynda Hardman
    • 14. Overview of Canonical Processes
    • 15. Example 1: CeWe Color PhotoBook
      • Application for authoring digital photo books
      • Automatic selection, sorting and ordering of photos
        • Context analysis methods: timestamp, annotation, etc.
        • Content analysis methods: color histograms, edge detection, etc.
      • Customized layout and background
      • Printed by the leading European photo-finishing company
      http://www.cewe-photobook.com
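The context-analysis step above (sorting photos by timestamp) can be sketched in a few lines. This is a minimal illustration, not CeWe's actual algorithm; the three-hour gap threshold and the photo dictionaries are assumptions made for the example:

```python
from datetime import datetime, timedelta

def group_by_time(photos, gap=timedelta(hours=3)):
    """Sort photos by timestamp and split them into events at large time gaps."""
    ordered = sorted(photos, key=lambda p: p["taken"])
    groups = []
    for photo in ordered:
        if groups and photo["taken"] - groups[-1][-1]["taken"] <= gap:
            groups[-1].append(photo)  # same event as the previous photo
        else:
            groups.append([photo])    # gap too large: start a new event
    return groups

photos = [
    {"name": "slope.jpg",  "taken": datetime(2009, 2, 14, 10, 5)},
    {"name": "lunch.jpg",  "taken": datetime(2009, 2, 14, 12, 30)},
    {"name": "dinner.jpg", "taken": datetime(2009, 2, 14, 19, 45)},
]

events = group_by_time(photos)
# slope + lunch fall within three hours of each other; dinner starts a new event
print([[p["name"] for p in g] for g in events])
```

A real photo-book tool would combine this with the content-analysis signals mentioned above (color histograms, edge detection) to pick representative photos per event.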
    • 16. CeWe Color PhotoBook Processes
      • My winter ski holidays with my friends
    • 17. CeWe Color PhotoBook Processes
    • 18. CeWe Color PhotoBook Processes
    • 19. CeWe Color PhotoBook Processes
    • 20. CeWe Color PhotoBook Processes
    • 21. CeWe Color PhotoBook Processes
    • 22. Example 2: Vox Populi Video Sequences Generation
      • Stefano Bocconi, Frank Nack
      • “Interview with America”: video footage with interviews and background material about the opinions of American people after 9/11 http://www.interviewwithamerica.com
      • Example question: What do you think of the war in Afghanistan?
      “I am never a fan of military action, in the big picture I don’t think it is ever a good thing, but I think there are circumstances in which I certainly can’t think of a more effective way to counter this sort of thing…”
    • 23. Vox Populi Premeditate Process
      • Analogous to the pre-production process in the film industry
        • Static versus dynamic video artifact
      • Output
        • Script, planning of the videos to be captured
        • Questions to the interviewee prepared
        • Profiles of the people interviewed: education, age, gender, race
        • Locations where the interviews take place
    • 24. Vox Populi Annotations
      • Contextual
        • Interviewee (social), locations
      • Descriptive
        • Question asked and transcription of the answers
        • Filmic continuity, examples:
          • gaze direction of speaker (left, centre, right)
          • framing (close-up, medium shot, long shot)
      • Rhetorical
        • Rhetorical Statement
        • Argumentation model: Toulmin model
    • 25. Vox Populi Statement Annotations
      • Statement formally annotated:
        • <subject> <modifier> <predicate>
        • E.g. “war best solution”
      • A thesaurus containing:
        • Terms on the topics discussed (155)
        • Relations between terms: similar (72), opposite (108), generalization (10), specialization (10)
        • E.g. war opposite diplomacy
    • 26. Toulmin Model
      • Annotated corpus: 57 Claims, 16 Data, 4 Concessions, 3 Warrants, 1 Condition
      • Model elements: Claim, Data, Qualifier, Warrant, Backing, Condition, Concession
    • 27. Vox Populi Query Interface
    • 28. Vox Populi Organize Process
      • Using the thesaurus, create a graph of related statements
        • nodes are the statements (corresponding to video segments): “war best solution”, “diplomacy best solution”, “war not solution”
        • edges are either support or contradict, e.g. “war best solution” contradicts “war not solution”
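The edge-labelling idea can be sketched as follows. This is an illustrative reconstruction, not Vox Populi's actual rules: it treats two statements as contradicting when exactly one source of opposition (an opposite subject from the thesaurus, or a negating modifier) separates them, and as supporting when the oppositions cancel:

```python
# Thesaurus 'opposite' relation from the slides, e.g. war opposite diplomacy
OPPOSITE = {("war", "diplomacy"), ("diplomacy", "war")}

def relation(s1, s2):
    """Classify the edge between two <subject, modifier, predicate> statements."""
    subj1, mod1, _pred1 = s1
    subj2, mod2, _pred2 = s2
    negated = (mod1 == "not") != (mod2 == "not")   # one statement negates the claim
    opposed = (subj1, subj2) in OPPOSITE           # subjects are thesaurus opposites
    # exactly one source of opposition means the statements clash;
    # two oppositions (opposed subject AND negation) cancel out
    if negated != opposed:
        return "contradict"
    return "support"

a = ("war", "best", "solution")
b = ("diplomacy", "best", "solution")
c = ("war", "not", "solution")

print(relation(a, b))  # contradict (opposite subjects)
print(relation(a, c))  # contradict (negated modifier)
print(relation(b, c))  # support (opposition and negation cancel)
```

Building the graph is then a pairwise pass over all annotated statements, with each node pointing back to its video segment.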
    • 29. Result of Vox Populi Query: “I am not a fan of military actions”; “War has never solved anything”; “I cannot think of a more effective solution”; “Two billion dollar bombs on tents”
    • 30. Vox Populi Processes
    • 31. Canonical Processes 101
      • Canonical: reduced to the simplest and most significant form possible without loss of generality
      • Each process
        • short description
        • illustrated with use cases
        • input(s), actor(s) & output(s)
      • Formalization of processes in UML diagrams in paper (see literature list)
    • 32. Premeditate
      • Establish initial ideas about media production
        • Design a photo book of my last holidays for my family
        • Create argument-based sequences of videos of interviews after September 11
      • Inputs: ideas, inspirations from human experience
      • Actors:
        • camera owner
        • group of friends
      • Outputs:
        • decision to take camera onto ski-slope
        • structured set of questions and locations for interviews
    • 33. Create Media Asset
      • Media assets are captured, generated or transformed
        • Photos taken at unspecified moments at holiday locations
        • Synchronized audio video of interviewees responding to fixed questions at many locations
      • Inputs:
        • decision to take camera onto ski-slope;
        • structured set of questions and locations for interviews
      • Actors:
        • (video) camera, editing suite
      • Outputs:
        • images, videos
    • 34. Annotate
      • Annotation is associated with asset
      • Inputs:
        • photo, video, existing annotation
        • optional thesaurus of terms
      • Actors:
        • human, feature analysis program
      • Outputs:
        • Complex structure associating annotations with images, videos
      Q: “What do you think of the Afghanistan war?” Speaker: Female, Caucasian…
    • 35. Semantic Annotate
      • Annotation uses existing controlled vocabularies
        • Subject matter annotations of your photos (COMM, XMP)
        • Rhetorical annotations in Vox Populi
      Example annotation: subject “war”, modifier “best”, predicate “solution” (each term drawn from its own thesaurus)
    • 36. Package
      • Process artifacts are packed logically or physically
      • Useful for storing collections of media after capturing…
      • … before selecting subset for further stages
    • 37. Query
      • User retrieves a set of process artifacts based on a user-specified query
      • Inputs:
        • user query, in terms of annotations or by example
        • collection(s) of assets
      • Actors:
        • human
      • Output:
        • subset of assets plus annotations (in no order)
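The query process described above, retrieving an unordered subset of assets by their annotations, can be sketched minimally. The asset structure and tag-based matching are illustrative assumptions; real systems would query richer annotation structures:

```python
def query(assets, required_tags):
    """Return the (unordered) subset of assets whose annotations
    contain all of the required tags."""
    required = set(required_tags)
    return [a for a in assets if required <= set(a["tags"])]

assets = [
    {"id": "img1", "tags": {"ski", "friends", "sunny"}},
    {"id": "img2", "tags": {"ski", "fog"}},
    {"id": "img3", "tags": {"friends", "sunny"}},
]

hits = query(assets, {"ski", "sunny"})
print([a["id"] for a in hits])  # ['img1']
```

Query-by-example, the other input mentioned on the slide, would replace the tag test with a similarity measure over content features.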
    • 38. Construct Message
      • Author specifies the message they wish to convey
        • Our holiday was sporty, great weather and fun
        • Create clash about whether war is a good thing
      • Inputs: ideas, decisions, available assets
      • Actors:
        • author
      • Outputs:
        • the message that should be conveyed by the assets
    • 39. Organize
      • Process where process artifacts are organized according to the message
        • Organize a number of 2-page layouts in photobook
        • Use semantic graph to select related video clips to form linear presentation of parts of argument structure
      • Inputs: set of assets and annotations (e.g. output from query process)
      • Actors: human or machine
      • Outputs: document structure with recommended groupings and orderings for assets
    • 40. Publish
      • Presentation is created
        • associated annotations may be removed
        • create proprietary format of photobook for upload
        • create SMIL file containing videos and timing information
      • Inputs: set of assets and annotations (e.g. output from organize process)
      • Actors: human or machine
      • Outputs:
        • final presentation in a specific document format, such as HTML, SMIL or PDF
    • 41. Distribute
      • Presentation is transported to end user, end-user can view and interact with it
        • photobook uploaded to printer, printed then posted to user
        • SMIL file is downloaded to client and played
      • Inputs: published document (output from publish process)
      • Actors: distribution hardware and software
      • Outputs:
        • media assets presented on user’s device
    • 42. Canonical Processes Possible Flow
    • 43. Sum Up
      • Community agreement, not “yet another model”
      • Large proportion of the functionality provided by multimedia applications can be described in terms of this model
      • Initial step towards the definition of open web-based data structures for describing and sharing semantically annotated media assets
    • 44. Discussion
      • Frequently asked questions
        • Complex processes
        • Interaction
        • Complex artifacts and annotations can be annotated
      • Towards a more rigorous formalization of model
        • Relationship to foundational ontologies
        • Semantics of Annotations
    • 45. Literature
      • Lynda Hardman: Canonical Processes of Media Production . In Proceedings of the ACM Workshop on Multimedia for Human Communication - From Capture to Convey (MHC 05) , November 2005.
      • Special Issue on Canonical Processes of Media Production. http://www.springerlink.com/content/j0l4g337581652t1/ and http://www.cwi.nl/~media/projects/canonical/
      • Lynda Hardman, Zeljko Obrenovic, Frank Nack, Brigitte Kerhervé and Kurt Piersol: Canonical Processes of Semantically Annotated Media Production . In Multimedia Systems Journal , 14 (6), p327-340, 2008
      • Philipp Sandhaus, Sabine Thieme and Susanne Boll: Canonical Processes in Photo Book Production . In Multimedia Systems Journal , 14 (6), p351-357, 2008
      • Stefano Bocconi, Frank Nack and Lynda Hardman: Automatic generation of matter-of-opinion video documentaries . In Journal of Web Semantics , 6 (2), p139-150, 2008.
    • 46. Agenda
      • Understanding Multimedia Applications Workflow
        • CeWe Color Photo Book creation application
        • Vox Populi argumentative video sequences generation system
        • The Canonical Processes of Media Production
      • Semantic Annotation of Multimedia Content
        • MPEG-7 based ontologies
        • COMM: A Core Ontology for MultiMedia
      • Media Fragment URIs
        • Support the addressing and retrieval of subparts of time-based Web resources
        • Share clips, retrieve excerpts, annotate parts of media
    • 47. Wake Up Video
    • 48. The Importance of the Annotations
    • 49. Multimedia Description Methods: MPEG-1, MPEG-2, MPEG-4, MPEG-7, MPEG-21 (ISO); W3C
    • 50. MPEG-7: a multimedia description language?
      • ISO standard since December 2001
      • Main components:
        • Descriptors (Ds) and Description Schemes (DSs)
        • DDL (XML Schema + extensions)
      • Concern all types of media
      Part 5 – MDS Multimedia Description Schemes
    • 51. MPEG-7 and the Semantic Web
      • MDS Upper Layer represented in RDFS
        • 2001: Hunter
        • Later on: link to the ABC upper ontology
      • MDS fully represented in OWL-DL
        • 2004: Tsinaraki et al., DS-MIRF model
      • MPEG-7 fully represented in OWL-DL
        • 2005: Garcia and Celma, Rhizomik model
        • Fully automatic translation of the whole standard
      • MDS and Visual parts represented in OWL-DL
        • 2007: Arndt et al., COMM model
        • Re-engineering MPEG-7 using DOLCE design patterns
    • 52. Requirements [aceMedia, MMSEM XG]
      • MPEG-7 compliance
        • Support most descriptors (decomposition, visual, audio)
      • Syntactic and Semantic interoperability
        • Shared and formal semantics represented in a Web language (OWL, RDF/XML, RDFa, etc.)
      • Separation of concerns
        • Domain knowledge versus multimedia specific information
      • Modularity
        • Enable customization of multimedia ontology
      • Extensibility
        • Enable inclusion of further descriptors (non MPEG-7)
    • 53. MPEG-7 Based Ontologies
      • Hunter — Applications: Digital Libraries; Coverage: MDS+Visual; Complexity: OWL-Full; Foundational Ontology: ABC
      • DS-MIRF — Applications: Digital Libraries; Coverage: MDS+CS; Complexity: OWL-DL; Foundational Ontology: None
      • Rhizomik — Applications: Digital Rights; Coverage: All; Complexity: OWL-DL; Foundational Ontology: None
      • COMM — Applications: MM Analysis; Coverage: MDS+Visual; Complexity: OWL-DL; Foundational Ontology: DOLCE
    • 54. Common Scenario: The “Big Three” at the Yalta Conference (Wikipedia)
    • 55. Common Scenario, Tagging Approach: The “Big Three” at the Yalta Conference (Wikipedia)
      • Localize a region
        • Draw a bounding box, a circle around a shape
      • Annotate the content
        • Interpret the content
        • Tag: Winston Churchill, UK Prime Minister, Allied Forces, WWII
      Reg1
    • 56. Common Scenario, SW Approach: The “Big Three” at the Yalta Conference (Wikipedia)
      • Localize a region
        • Draw a bounding box, a circle around a shape
      • Annotate the content
        • Interpret the content
        • Link to knowledge on the Web
        • :Reg1 foaf:depicts dbpedia:Winston_Churchill
        • dbpedia:Winston_Churchill skos:altLabel "Sir Winston Leonard Spencer-Churchill"
        • dbpedia:Winston_Churchill rdf:type foaf:Person
      Reg1
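The Semantic Web approach above boils down to emitting triples whose terms are dereferenceable IRIs. A small sketch of expanding the prefixed names from the slide into N-Triples lines; the `http://example.org/` base for the local `:Reg1` name is an assumption for the example:

```python
# Well-known namespaces used on the slide
PREFIXES = {
    "dbpedia": "http://dbpedia.org/resource/",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
}

def expand(term, base="http://example.org/"):
    """Expand a prefixed name (or a ':local' default-namespace name) to a full IRI."""
    prefix, _, local = term.partition(":")
    if prefix == "":          # ':Reg1' style: default namespace
        return base + local
    return PREFIXES[prefix] + local

def n_triple(s, p, o):
    """Serialize one triple of IRIs in N-Triples syntax."""
    return "<%s> <%s> <%s> ." % (expand(s), expand(p), expand(o))

print(n_triple(":Reg1", "foaf:depicts", "dbpedia:Winston_Churchill"))
```

Because the objects are DBpedia resources rather than plain tags, an agent can follow them to the altLabel and rdf:type knowledge shown above.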
    • 57. Hunter's MPEG-7 Ontology
    • 58. DS-MIRF MPEG-7 Ontology
    • 59. Rhizomik MPEG-7 Ontology
    • 60. COMM: Fragment Identification
    • 61. Comparison
      • Link with domain semantics
        • Hunter: ABC model + mpeg7:depicts relationship
        • DS-MIRF: Domain ontologies need to subclass the general MPEG-7 categories
        • Rhizomik: Use the mpeg7:semantic relationship
        • COMM: Semantic Annotation pattern
      • MPEG-7 coverage
        • Hunter: extension of the MPEG-7 visual descriptors
        • COMM:
          • Formalization of the context of the annotation
          • Representation of the method (algorithm) that provides the annotation
    • 62. Comparison
      • Modeling Decisions:
        • DS-MIRF and Rhizomik: 1-to-1 translation from MPEG-7 to OWL/RDF
        • Hunter: Simplification and link to the ABC upper model
        • COMM: NO 1-to-1 translation
          • Need for patterns: use DOLCE, a well designed foundational ontology as a modeling basis
      • Scalability:
      Triples needed per annotation: Hunter 11, DS-MIRF 27, Rhizomik 20, COMM 19
    • 63.  
    • 64. Scenario: Image. The “Big Three” at the Yalta Conference (Wikipedia)
      • Localize a region (bounding box)
      • Annotate the content (interpretation)
        • Tag: Winston Churchill, UK Prime Minister, Allied Forces, WWII
        • Link to knowledge on the Web
      Reg1
        • :Reg1 foaf:depicts dbpedia:Winston_Churchill
        • dbpedia:Winston_Churchill skos:altLabel "Sir Winston Leonard Spencer-Churchill"
        • dbpedia:Winston_Churchill rdf:type foaf:Person
    • 65. Scenario: Video A history of G8 violence ( video ) (© Reuters)
      • Localize a region
      • Annotate the content
        • Tag: G8 Summit, Heiligendamm, 2007
        • Link to knowledge on the Web
      EU Summit, Gothenburg, 2001 Seq1 Seq4
        • :Seq1 foaf:depicts dbpedia:34th_G8_Summit
        • :Seq4 foaf:depicts dbpedia:EU_Summit
        • geo:Heiligendamm skos:broader geo:Germany
    • 66. Research Problem The &quot; Big Three &quot; at the Yalta Conference (Wikipedia) A history of G8 violence ( video ) (© Reuters)
      • Multimedia objects are complex
        • Compound information objects, fragment identification
      • Semantic annotation
        • Subjective interpretation, context dependent
      • Linked data principle
        • Open to reuse existing knowledge
      Approach: MPEG-7 → RDF → D&S | OIO
    • 67. COMM: Design Rationale
      • Approach:
        • NO 1-to-1 translation from MPEG-7 to OWL/RDF
        • Need for patterns: use DOLCE, a well designed foundational ontology as a modeling basis
      • Design patterns:
        • Ontology of Information Objects (OIO)
          • Formalization of information exchange
          • Multimedia = complex compound information objects
        • Descriptions and Situations (D&S)
          • Formalization of context
          • Multimedia = contextual interpretation (situation)
      • Define multimedia patterns that translate MPEG-7 in the DOLCE vocabulary
    • 68. COMM: Core Functionalities
      • Most important MPEG-7 functionalities:
        • Decomposition of multimedia content into segments
        • Annotation of segments with metadata
          • Administrative metadata: creation & production
          • Content-based metadata: audio/visual descriptors
          • Semantic metadata: interface with domain specific ontologies
       Note that all are subjective and context dependent situations
    • 69. COMM: D&S / OIO Patterns
      • Definition of design patterns for decomposition and annotation based on D&S and OIO
        • MPEG-7 describes digital data ( multimedia information objects ) with digital data ( annotation )
        • Digital data entities are information objects
        • Decompositions and annotations are situations that satisfy the rules of a method or algorithm
    • 70. COMM: Decomposition Pattern
    • 71. COMM: Annotation Pattern
    • 72. COMM: Semantic Pattern (linking to domain ontologies)
    • 73. COMM: Modules
    • 74. Example 1: Fragment Identification
    • 75. Example 1: Region Annotation
    • 76. Example 2: Fragment Identification
    • 77. Example 2: Sequence Annotation
    • 78. Implementation
      • COMM fully formalized in OWL DL
        • Rich axiomatization, consistency checked (FaCT++ v1.1.5)
        • OWL 2: qualified cardinality restrictions for number restrictions of MPEG-7 low-level descriptors
      • Java API available
        • MPEG-7 class interface for the construction of metadata at runtime
    • 79. KAT Annotation Tool
    • 80. Literature
      • Richard Arndt, Raphaël Troncy, Steffen Staab, Lynda Hardman and Miroslav Vacura: COMM: Designing a Well-Founded Multimedia Ontology for the Web . In 6th International Semantic Web Conference (ISWC'2007) , Busan, Korea, November 11-15, 2007.
      • Raphaël Troncy, Oscar Celma, Suzanne Little, Roberto Garcia, Chrisa Tsinaraki: MPEG-7 based Multimedia Ontologies: Interoperability Support or Interoperability Issue? In 1st Workshop on Multimedia Annotation and Retrieval enabled by Shared Ontologies (MAReSO'2007) , Genoa, Italy, December 2007.
    • 81. Agenda
      • Understanding Multimedia Applications Workflow
        • CeWe Color Photo Book creation application
        • Vox Populi argumentative video sequences generation system
        • The Canonical Processes of Media Production
      • Semantic Annotation of Multimedia Content
        • MPEG-7 based ontologies
        • COMM: A Core Ontology for MultiMedia
      • Media Fragment URIs
        • Support the addressing and retrieval of subparts of time-based Web resources
        • Share clips, retrieve excerpts, annotate parts of media
    • 82. Where is the Arc de Triomphe? http://www.flickr.com/photos/40001315@N00/2705120551/
    • 83. Who is Saphira? http://www.flickr.com/photos/mhausenblas/2883727293/
      • Region-based annotation in Flickr… but it cannot be taken out of Flickr
        • the Flickr note provides a URI for the region, but one needs to go through the Flickr API to access it [CaMiCatzee]
    • 84. Sarkozy Laughing with Putin? http://www.youtube.com/watch?v=7fMCTo-GQ2A#t=34s
    • 85. Clinton Laughing with Yeltsin? http://www.youtube.com/watch?v=sxoh1z6s_Cw#t=15s
      • Temporal annotation in YouTube…
        • but the UA downloads the complete resource and seeks
        • and the YouTube syntax is different from that of Google Video, Vimeo, DailyMotion, etc.
    • 86. Problems
      • How to address and retrieve parts of multimedia content?
        • region of an image / sequence of a video
      • How to describe parts of multimedia content in the Web of Data?
      • How can I make a better use of the general world knowledge?
    • 87. 1. Media Fragments
      • Every popular web site does it ...
        • region-based annotation in Flickr
        • temporal sequence annotation in YouTube
      • ... BUT:
        • region-based annotations cannot be exported
        • the YouTube syntax is different from that of DailyMotion, Vimeo, etc.
      #t=34s #t=15s
    • 88. W3C Media Fragments WG W3C Media Fragments WG http://www.w3.org/2008/WebVideo/Fragments/
    • 89. W3C Media Fragments WG
      • Provide URI-based mechanisms for uniquely identifying fragments for media objects on the Web, such as video, audio, and images.
    • 90. W3C Media Fragments WG
      • Research question:
      • How to access media fragments without retrieving the whole resource?
      • Use Case and Requirements
        • Link to, Display, Bookmark, Annotate, Search
      • Four dimensions considered
        • time, space, track, id
      • Protocols covered: HTTP(S), RTSP
      http://www.w3.org/TR/media-frags-reqs
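A conforming user agent along the lines sketched on the following slides would strip the temporal fragment from the URI and translate it into a byte-saving range request. A minimal Python sketch, assuming the `t=start,end` fragment and `seconds` Range unit shown on the next slide (the working group's final syntax may differ):

```python
def parse_temporal_fragment(uri):
    """Extract a (start, end) pair in seconds from a '#t=start,end'
    media fragment; either bound may be omitted."""
    _, _, frag = uri.partition("#")
    for part in frag.split("&"):
        if part.startswith("t="):
            start, _, end = part[2:].partition(",")
            return (float(start) if start else 0.0,
                    float(end) if end else None)
    return None  # no temporal fragment present

def to_range_header(start, end):
    """Map the fragment to the seconds-based Range header from the slide."""
    return "Range: seconds=%g-%s" % (start, "%g" % end if end is not None else "")

uri = "http://www.youtube.com/watch?v=sxoh1z6s_Cw#t=12,21"
start, end = parse_temporal_fragment(uri)
print(to_range_header(start, end))  # Range: seconds=12-21
```

The key point of the design is that the fragment stays client-side (after `#`), so the UA, not the server, decides how to turn it into a protocol-level request.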
    • 91. Media Fragment Processing (1)
      • User Agent (media fragment conforming) : http://www.youtube.com/watch?v=sxoh1z6s_Cw#t=12,21
        • GET /watch?v=sxoh1z6s_Cw HTTP/1.1
          Host: www.youtube.com
          Accept: video/*
          Range: seconds=12-21
          [HTTPBis clarification]
      • Server:
    • 92. Media Fragment Processing (2)
      • User Agent (media fragment conforming) : http://www.youtube.com/watch?v=sxoh1z6s_Cw#t=12,21
      • Server:
    • 93. 2. Media Annotations
      • Using structured knowledge on the Web:
        • :Clip foaf:depicts dbpedia:Laughter
        • :Clip foaf:depicts dbpedia:Boris_Yeltsin
        • :Clip foaf:depicts dbpedia:Bill_Clinton
        • :Clip foaf:depicts dbpedia:Hyde_Park,New_York
        • dbpedia:Hyde_Park,New_York owl:sameAs fbase:hyde_park
        • fbase:hyde_park skos:broader fbase:new_york_state
      • Annotate the content (interpretation) Boris Yeltsin, Bill Clinton, laugh, Bosnia, Hyde Park
    • 94. Media Annotations
      • Using structured knowledge on the Web:
        • :Reg wsmo:partOf flickr:mhausenblas/2883727293/
        • :Reg foaf:depicts http://saphira.blogr.com/#me
    • 95. 3. What For? [More details in Parts 2 & 3]
      • Answer abstract queries
        • Find all sequences with a Russian President laughing with another President
      • Find (non-obvious) connections between pieces of media
      • Present Information
    • 96. Answer abstract queries
      • Research Problems
        • Data modeling, vocabulary alignment, disambiguation
      PREFIX foaf: <http://xmlns.com/foaf/0.1/>
      SELECT ?Clip WHERE {
        ?Clip foaf:depicts dbpedia:Laughter ,
                           yago:PresidentsOfTheRussianFederation ,
                           yago:President110468559 .
      }
    • 97. Find connection between media
      • Unexpected relationships: enable further discovery, exploration
      • Research problems
        • Where should we stop in the exploration?
        • When does it start to be intrusive for the end-user?
      • :Clip foaf:depicts dbpedia:Boris_Yeltsin
      • :Clip foaf:depicts dbpedia:Bill_Clinton
      • :Clip foaf:depicts fbase:Laughter
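One simple way to surface such unexpected relationships is to intersect the sets of entities each media item depicts. A hypothetical sketch (the item names and depicted entities are illustrative):

```python
# Media items mapped to the entities their annotations say they depict
depicts = {
    ":Clip1": {"dbpedia:Boris_Yeltsin", "dbpedia:Bill_Clinton", "fbase:Laughter"},
    ":Clip2": {"dbpedia:Bill_Clinton", "dbpedia:Hyde_Park,New_York"},
    ":Photo1": {"dbpedia:Winston_Churchill"},
}

def connections(media):
    """Return pairs of media items together with the entities they share."""
    items = sorted(media)
    links = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            shared = media[a] & media[b]
            if shared:
                links.append((a, b, shared))
    return links

print(connections(depicts))  # :Clip1 and :Clip2 both depict Bill Clinton
```

Following skos:broader or owl:sameAs links before intersecting would surface the less obvious connections, which is exactly where the "where to stop" research question above bites.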
    • 98. Presenting Information
      • Dimensions used for searching news items
        • When (time): 10/07/2006
        • Where (location): Paris
        • What (depicted): J. Chirac, Z. Zidane
        • Why (event): WC 2006
        • Who (photographer): Bertrand Guay, AFP
    • 99.  
    • 100.  
    • 101.  
    • 102.  
    • 103. Literature
      • Michael Hausenblas, Raphaël Troncy, Yves Raimond and Tobias Bürger. Interlinking Multimedia: How to Apply Linked Data Principles to Multimedia Fragments . In 2nd Workshop on Linked Data on the Web (LDOW'09) , Madrid, Spain, April 20, 2009.
      • Raphaël Troncy, Jack Jansen, Yves Lafon, Erik Mannens, Silvia Pfeiffer, Davy van Deursen. Use cases and requirements for Media Fragments . W3C Working Draft, Media Fragments Working Group, 30 April 2009.
    • 104. Thanks for your attention http://www.cwi.nl/~troncy
