
Multimedia Content Based Retrieval



Information retrieval for text and multimedia content has become an important research area. Content-based retrieval in multimedia is a challenging problem, since multimedia data needs detailed interpretation from pixel values. This presentation gives an overview of content-based retrieval, along with the different strategies for syntactic and semantic indexing, and analyzes the matching techniques and learning methods employed.

Published in: Technology


  1. Multimedia Content Based Retrieval
     Govindaraju Hujigal
  2. Content-based retrieval in multimedia
     - An important research area.
     - A challenging problem, since multimedia data needs detailed interpretation from pixel values.
     - Uses different strategies in terms of syntactic and semantic indexing for retrieval.
  3. Why do we need MCBR?
     - How do I find what I'm looking for?
  4. Multimedia content retrieval
     - Advances in multimedia and storage technology have led to large repositories of digital image, video, and audio data.
     - Compared to text search, assigning text labels to multimedia is a massively labor-intensive effort.
     - The focus is on calculating statistics that can be approximately correlated with content features, without costly human interaction.
  5. Multimedia content retrieval
     - Search based on syntactic features:
       - Shape, texture, color histogram.
       - Relatively undemanding.
     - Search based on semantic features:
       - Closer to human perception.
       - "List all dogs that look like cats".
       - "City", "Landscape", "Cricket".
  6. Syntactic indexing
     - Uses syntactic features as the basis for matching, and employs either a query-through-dialog or a query-by-example box to interface with the user.
     - Query-through-dialog: enter words describing the image.
     - Query-through-dialog is not convenient, as the user needs to know the exact details of attributes like shape, color, and texture.
  7. Image descriptors: Color
     - Apples are red ...
     - ... but tomatoes are too!
  8. Image descriptors: Texture
     - Texture differentiates between a lawn and a forest.
  9. Syntactic indexing
     - Query by example: the system shows example images and the user chooses the closest.
     - Various features of the chosen image, such as color, shape, texture, and spatial distribution, are evaluated and matched against the images in the database using a similarity or distance metric.
     - For video, the key frames of video clips that are close to the user query are shown.
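As a concrete sketch of this kind of syntactic matching, the snippet below builds a coarsely quantized color histogram and compares images with histogram-intersection similarity, a metric commonly used in query-by-example systems. The bucket counts, pixel data, and function names here are illustrative assumptions, not taken from the presentation:

```python
def color_histogram(pixels, bins=4):
    """Quantize RGB pixels into a joint histogram with bins**3 buckets,
    normalized so the counts sum to 1."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    return [h / len(pixels) for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical color distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy "images" as pixel lists: two mostly-red scenes and one green one.
apple  = [(200, 30, 30)] * 90 + [(40, 160, 40)] * 10
tomato = [(210, 25, 35)] * 95 + [(30, 30, 30)] * 5
lawn   = [(50, 180, 60)] * 100

sim_red  = histogram_intersection(color_histogram(apple), color_histogram(tomato))
sim_lawn = histogram_intersection(color_histogram(apple), color_histogram(lawn))
```

Note how the apple/tomato pair from slide 7 scores high on color alone, which is exactly why color descriptors need texture or shape to disambiguate.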
  10. (image slide)
  11. Syntactic indexing
      - Query-by-example limitations:
      - An image can be annotated and interpreted in many ways. For example, one user may be interested in a waterfall, another in a mountain, and yet another in the sky, although all of them may be present in the same image.
      - A user may wonder "why do these two images look similar?" or "what specific parts of these images are contributing to the similarity?". The user is required to know the search structure and other details to search the database efficiently.
      - It requires many comparisons, and the results may be too many, depending on the threshold.
  12. Semantic indexing
      - Matches human perception and cognition.
      - Semantic content contains high-level concepts such as objects and events.
      - Humans think in terms of events and remember different events and objects after watching a video, so these high-level concepts are the most important cues in content-based retrieval. Taking a soccer game as an example, humans usually remember goals, interesting actions, red cards, etc.
  14. Semantic indexing
      - There is a relationship between the degree of action and the structure of the visual patterns that constitute a movie.
      - Movies can be classified into four broad categories: comedies, action films, dramas, or horror films. Inspired by cinematic principles, four computable video features (average shot length, color variance, motion content, and lighting key) are combined in a framework that maps to these four high-level semantic classes.
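One simple way to realize such a mapping is a nearest-centroid classifier over the four computable features. The centroid values below are invented for illustration and are not taken from the cited framework:

```python
import math

# Hypothetical reference profiles for the four genres, as
# (avg shot length, color variance, motion content, lighting key),
# each already scaled to [0, 1]. The numbers are illustrative only.
GENRE_CENTROIDS = {
    "Comedy": (0.6, 0.7, 0.3, 0.8),
    "Action": (0.2, 0.5, 0.9, 0.6),   # short shots, high motion
    "Drama":  (0.8, 0.4, 0.2, 0.5),
    "Horror": (0.5, 0.2, 0.6, 0.1),   # low lighting key = dark scenes
}

def classify_genre(features):
    """Map a 4-D video feature vector to the nearest genre centroid."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(GENRE_CENTROIDS, key=lambda g: dist(features, GENRE_CENTROIDS[g]))

# A fast-cut, high-motion clip lands closest to the Action profile.
print(classify_genre((0.25, 0.55, 0.85, 0.55)))
```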
  15. Motion feature as an indexing cue
      - Spatial scene analysis of video can be fully transferred from CBIR, but temporal analysis is what makes video unique.
      - Temporal information induces the concept of motion for the objects present in the document.
  16. Motion feature as an indexing cue
      - Frame level: each frame is treated separately. There is no temporal analysis at this level.
      - Shot level: a shot is a set of contiguous frames, all acquired through a continuous camera recording. Only the temporal information is used.
      - Scene level: a scene is a set of contiguous shots having a common semantic significance.
      - Video level: the complete video object is treated as a whole.
  17. Motion feature as an indexing cue
      - The three types of shot boundaries are as follows:
      - Cut: a sharp boundary between shots. This generally implies a peak in the difference between the color or motion histograms of the two frames surrounding the cut.
      - Dissolve: the content of the last images of the first shot is continuously mixed with that of the first images of the second shot.
      - Wipe: the images of the second shot continuously cover, or push out of the display, those of the first shot.
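A hard cut, as described above, shows up as a spike in the frame-to-frame histogram difference. A minimal sketch of this idea follows; the threshold value and the toy frame histograms are assumptions, and dissolves or wipes would need windowed tests rather than a single-frame difference:

```python
def hist_diff(h1, h2):
    """L1 distance between two normalized frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(frame_hists, threshold=0.5):
    """Return frame indices i where the histogram difference between
    frame i-1 and frame i spikes above the threshold - the signature
    of a hard cut between two shots."""
    return [i for i in range(1, len(frame_hists))
            if hist_diff(frame_hists[i - 1], frame_hists[i]) > threshold]

# Toy sequence: three near-identical frames, then an abrupt content change.
shot_a = [0.7, 0.2, 0.1, 0.0]
shot_b = [0.0, 0.1, 0.2, 0.7]
frames = [shot_a, shot_a, shot_a, shot_b, shot_b]
print(detect_cuts(frames))  # cut detected at frame index 3
```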
  18. Motion feature as an indexing cue
      - It is often through motion that the content of a video is expressed and the attention of viewers captivated.
      - Query techniques:
        - A set of motion-vector trajectories is mapped to a set of objects; a visual query can then be 'player'. [Dimitrova]
        - Use an animated sketch to formulate queries. Motion and temporal duration are the key attributes assigned to each object in the sketch, in addition to the usual attributes such as shape, color, and texture. [VideoQ]
  19. Matching techniques
      - A method of finding the similarity between two sets of multimedia data, which can be either images or videos.
      - Search is based on features like location, colors, and concepts; examples are 'mostly red', 'sunset', 'yellow flowers', etc.
      - Users specify relative weights for the features, or assign equal weighting.
      - Automatically identifying the relevance of features is under active research.
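The weighted-feature matching described here can be sketched as below; the feature names, values, and weights are hypothetical:

```python
def weighted_distance(q_feats, db_feats, weights):
    """Combine per-feature distances using user-specified weights;
    equal weights reduce this to a plain average."""
    total_w = sum(weights.values())
    return sum(w * abs(q_feats[name] - db_feats[name])
               for name, w in weights.items()) / total_w

# Hypothetical query and database, each image described by two features.
query = {"redness": 0.9, "brightness": 0.4}
images = {
    "sunset.jpg": {"redness": 0.8, "brightness": 0.3},
    "lawn.jpg":   {"redness": 0.1, "brightness": 0.6},
}
weights = {"redness": 2.0, "brightness": 1.0}  # user cares more about color

# Rank database images by weighted distance to the query (closest first).
ranked = sorted(images, key=lambda k: weighted_distance(query, images[k], weights))
```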
  20. Learning methods in retrieval
      - The user provides both positive and negative retrieval examples (relevance feedback).
      - Each image can represent multiple concepts. To resolve this ambiguity, each image is modeled as a bag of instances (sub-blocks in the image).
      - A bag is labeled as a positive example of a concept, which could be a car or a waterfall scene, if some instance represents the concept. If no instance does, the bag is labeled as a negative example.
      - The concept is learned from a small collection of positive and negative examples, and this is used to retrieve images containing a similar concept from the database.
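The bag-labeling rule of this multiple-instance formulation can be sketched as follows; the 'waterfall' detector is a hypothetical stand-in for a real instance classifier:

```python
def bag_label(instances, concept_match):
    """Multiple-instance rule: a bag (image split into sub-blocks) is a
    positive example iff at least one instance matches the concept."""
    return any(concept_match(inst) for inst in instances)

# Hypothetical concept detector: a sub-block "contains a waterfall" when
# its dominant color code is 'blue-white'. In a real system this would
# be a learned instance classifier over sub-block features.
is_waterfall = lambda block: block == "blue-white"

image_with_fall    = ["green", "blue-white", "grey"]
image_without_fall = ["green", "brown", "grey"]
print(bag_label(image_with_fall, is_waterfall))     # positive bag
print(bag_label(image_without_fall, is_waterfall))  # negative bag
```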
  21. Learning methods in retrieval
      - The ability to infer high-level understanding from multimedia content has proven a difficult goal to achieve.
      - Consider, for example, the category "John eating ice cream".
      - Such categories may require sophisticated scene-understanding algorithms, along with an understanding of the spatio-temporal relationships between entities (for instance, the behavior 'eating' can be characterized as repeatedly putting something edible in the mouth).
  22. Structure in multimedia content
      - To achieve efficiency in content production, and because only a limited number of resources are available, standard techniques are employed.
      - The intention of video making is to represent an action or to evoke emotions using various storytelling methods. Figure 1 gives an analysis of the basic shot-transition techniques used to convey particular intentions.
  23. (image slide)
  24. Structure in multimedia content
      - News has a special structure ('begin shot', 'newscaster shot', 'interview', 'weather forecast', etc.) from which a video model of news can be built.
      - Car-race video has distinctive zoom-ins and zoom-outs; basketball has left and right panning that lasts for a certain maximum duration.
      - The motion activity in interesting shots in sports is higher than in the surrounding shots, and so on.
  25. Future of CBR systems
      - There is ambiguity in drawing such conclusions; for example, a dissolve can be due either to a 'flashback' or to a 'time lapse' (with two dissolves, a 'flashback' is most probable).
      - MPEG-7, the "Multimedia Content Description Interface", specifies a standard set of descriptors that can be used to describe various types of multimedia information.
      - Make a collaborative effort to tag multimedia content.
  26. Commercial systems (image slides)
  30. Conclusions
      - A systematic exploration of the construction of high-level indexes is lacking.
      - None of the existing work has considered exploring features close to human perception.
      - In summary, there is a great need to extract semantic indices to make CBR systems serviceable to the user. Though extracting all such indices may not be possible, there is great scope for furnishing semantic indices within a well-established structure.
  31. Conclusions
      - Content-based video indexing and retrieval is an active area of research, with continuing contributions from several domains including image processing, computer vision, database systems, and artificial intelligence.