Advances in Image Search and Retrieval

Presentation Transcript

  • Advances in Image Search and Retrieval. Oge Marques, Florida Atlantic University, Boca Raton, FL, USA
  • Take-home message •  Visual Information Retrieval (VIR) is a fascinating research field with many open challenges and opportunities that have the potential to impact the way we organize, annotate, and retrieve visual data (images and videos). •  In this tutorial we present some of the latest and most representative advances in image search and retrieval.
  • Disclaimer #1 •  Visual Information Retrieval (VIR) is a highly interdisciplinary field, but … [Diagram: VIR at the intersection of image and video processing, (multimedia) database systems, information retrieval, machine learning, computer vision, data mining, human visual perception, and visual data modeling and representation]
  • Disclaimer #2 •  There are many things that I believe… •  … but cannot prove
  • Background and Motivation What is it that we’re trying to do and why is it so difficult? –  Taking pictures and storing, sharing, and publishing them has never been so easy and inexpensive. –  If only we could say the same about finding the images we want and retrieving them…
  • Background and Motivation The “big mismatch”
  • Background and Motivation •  Q: What do you do when you need to find an image (on the Web)? •  A1: Google (image search), of course!
  • Background and Motivation Google image search results for “sydney opera house”
  • Background and Motivation Google image search results for “opera”
  • Background and Motivation •  Q: What do you do when you need to find an image (on the Web)? •  A2: Other (so-called specialized) image search engines •  http://images.search.yahoo.com/ •  http://pictures.ask.com •  http://www.bing.com/images •  http://pixsy.com/
  • Yahoo!
  • Ask
  • Bing
  • Pixsy – several years ago
  • Pixsy – several hours ago
  • Background and Motivation •  Q: What do you do when you need to find an image (on the Web)? •  A3: Search directly on large photo repositories: –  Flickr –  Webshots –  Shutterstock
  • Background and Motivation Flickr image search results for “opera”
  • Background and Motivation Webshots image search results for “opera”
  • Background and Motivation Shutterstock image search results for “opera”
  • Background and Motivation •  Are you happy with the results so far?
  • Background and Motivation •  Back to our original (two-part) question: –  What is it that we’re trying to do? –  We are trying to create automated solutions to the problem of finding and retrieving visual information, from (large, unstructured) repositories, in a way that satisfies search criteria specified by users, relying (primarily) on the visual contents of the media.
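The definition above can be made concrete with a toy sketch: represent each image in the repository by a feature vector (here, a hypothetical normalized color histogram) and rank the repository by distance to the query's vector. The image names and 4-bin histograms below are illustrative assumptions, not data from any real system.

```python
import math

def l2(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vec, repository, k=3):
    """Rank repository images by visual similarity to the query.

    repository: dict mapping image id -> feature vector
    (e.g., a normalized color histogram). Returns the k closest ids.
    """
    ranked = sorted(repository, key=lambda img: l2(query_vec, repository[img]))
    return ranked[:k]

# Hypothetical 4-bin color histograms for five images
repo = {
    "sunset1": [0.7, 0.2, 0.1, 0.0],
    "sunset2": [0.6, 0.3, 0.1, 0.0],
    "forest":  [0.1, 0.1, 0.7, 0.1],
    "ocean":   [0.1, 0.2, 0.2, 0.5],
    "desert":  [0.5, 0.4, 0.1, 0.0],
}
print(retrieve([0.65, 0.25, 0.1, 0.0], repo, k=2))  # the two sunsets rank first
```

Real systems differ mainly in the descriptor (far richer than a histogram) and the index (approximate nearest-neighbor structures rather than a linear scan), but the query-by-vector contract is the same.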
  • Background and Motivation •  Why is it so difficult? •  There are many challenges, among them: –  The elusive notion of similarity –  The semantic gap –  Large datasets and broad domains –  Combination of visual and textual information –  The users (and how to make them happy)
  • Outline •  Part I – Core concepts, techniques, and tools –  Design, implementation, and evaluation aspects •  Part II – Medical image retrieval –  Challenges, resources, and opportunities •  Part III – Applications and related areas –  Mobile visual search, social networks, and more •  Part IV – Where is image search headed? –  Advice for young researchers
  • Part I Core concepts, techniques, and tools
  • Core concepts, techniques, and tools •  Design –  Challenges –  Principles –  Concepts •  Implementation –  Languages and tools •  Evaluation –  Datasets –  Benchmarks
  • Design challenges •  Capturing and measuring similarity •  Semantic gap (and other gaps) •  Large datasets and broad domains •  Users’ needs and intentions •  Growing up (as a field)
  • The elusive notion of similarity •  Are these two images similar?
  • The elusive notion of similarity •  Are these two images similar?
  • The elusive notion of similarity •  Is the second or the third image more similar to the first?
  • The elusive notion of similarity •  Which image fits better to the first two: the third or the fourth?
  • The semantic gap •  The semantic gap is the lack of coincidence between the information that one can extract from the visual data and the interpretation that the same data have for a user in a given situation. •  The pivotal point in content-based retrieval is that the user seeks semantic similarity, but the database can only provide similarity by data processing. This is what we called the semantic gap. [Smeulders et al., 2000]
  • Alipr
  • Alipr
  • Alipr
  • Alipr
  • Google similarity search
  • Google similarity search
  • Google sort by subject http://www.google.com/landing/imagesorting/
  • Google image swirl http://image-swirl.googlelabs.com/
  • How I see it… •  The semantic gap problem has not been solved (and maybe will never be…) •  What are the alternatives? –  Treat visual similarity and semantic relatedness differently •  Examples: Alipr, Google (or Bing) similarity search, etc. –  Improve both (text-based and visual) search methods independently –  Combine visual and textual information in a meaningful way –  Trust the user •  Collaborative filtering, crowdsourcing, games.
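One of the alternatives listed above, combining visual and textual information in a meaningful way, is often realized as late fusion: each channel scores the images independently and the scores are blended. The sketch below is a minimal illustration with made-up scores; the image ids and the 50/50 weighting are assumptions for the example only.

```python
def fuse_scores(visual, textual, alpha=0.5):
    """Late fusion: weighted combination of per-image visual and
    textual relevance scores (both assumed normalized to [0, 1]).
    alpha weights the visual channel; (1 - alpha) the textual one.
    Returns image ids ranked by fused score, best first.
    """
    ids = set(visual) | set(textual)
    fused = {i: alpha * visual.get(i, 0.0) + (1 - alpha) * textual.get(i, 0.0)
             for i in ids}
    return sorted(fused, key=fused.get, reverse=True)

# Hypothetical per-channel scores for a query such as "opera"
visual  = {"img1": 0.9, "img2": 0.4, "img3": 0.2}
textual = {"img1": 0.1, "img2": 0.8, "img4": 0.6}
print(fuse_scores(visual, textual, alpha=0.5))
```

Note how img2, mediocre in each channel alone, can outrank img1, which is strong visually but weak textually; that reranking effect is the point of combining the channels.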
  • But, wait… There are other gaps! –  Just when you thought the semantic gap was your only problem… Source: [Deserno, Antani, and Long, 2009]
  • Large datasets and broad domains •  Large datasets bring additional challenges in all aspects of the system: –  Storage requirements: images, metadata, and “visual signatures” –  Computational cost of indexing, searching, retrieving, and displaying images –  Network and latency issues
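To give the storage-requirements point above some scale, here is a back-of-envelope helper. The collection size and descriptor dimensionality are illustrative assumptions, not figures from the tutorial.

```python
def signature_storage_gb(num_images, dims, bytes_per_value=4):
    """Storage needed for one fixed-length visual signature per image
    (e.g., a dims-dimensional vector of 4-byte floats), in gigabytes.
    Ignores metadata, the images themselves, and index overhead."""
    return num_images * dims * bytes_per_value / 1e9

# Illustrative: 100 million images, 512-dimensional float descriptors
print(f"{signature_storage_gb(100_000_000, 512):.1f} GB")
```

Even the signatures alone reach hundreds of gigabytes at Web scale, before counting the images, metadata, or index structures, which is why compact descriptors and approximate indexing matter.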
  • Large datasets and broad domains
  • Challenge: users’ needs and intentions •  Users and developers have quite different views •  Cultural and contextual information should be taken into account •  User intentions are hard to infer –  Privacy issues –  Users themselves don’t always know what they want –  Who misses the MS Office paper clip?
  • Challenge: users’ needs and intentions •  The user’s perspective –  What do they want? –  Where do they want to search? –  In what form do they express their query?
  • Challenge: users’ needs and intentions •  The image retrieval system should be mindful of: –  How users wish the results to be presented –  Where users desire to search –  The nature of user input/interaction.
  • Challenge: users’ needs and intentions •  Each application has different users (with different intent, needs, background, cultural bias, etc.) and different visual assets.
  • Challenge: growing up (as a field) •  It’s been 10 years since the “end of the early years” –  Are the challenges from 2000 still relevant? –  Are the directions and guidelines from 2000 still appropriate? –  Have we grown up (at all)? –  Let’s revisit the ‘Concluding Remarks’ from that paper…
  • Revisiting [Smeulders et al. 2000]: Driving forces
    What they said: “[…] content-based image retrieval (CBIR) will continue to grow in every direction: new audiences, new purposes, new styles of use, new modes of interaction, larger data sets, and new methods to solve the problems.”
    How I see it: Yes, we have seen many new audiences, new purposes, new styles of use, and new modes of interaction emerge. Each of these usually requires new methods to solve the problems that they bring. However, not too many researchers see them as a driving force (as they should).
  • Revisiting [Smeulders et al. 2000]: Heritage of computer vision
    What they said: “An important obstacle to overcome […] is to realize that image retrieval does not entail solving the general image understanding problem.”
    How I see it: I’m afraid I have bad news… Computer vision hasn’t made much progress during the past 10 years. Some classical problems (including image understanding) remain unresolved. Similarly, CBIR from a pure computer vision perspective didn’t work too well either.
  • Revisiting [Smeulders et al. 2000]: Influence on computer vision
    What they said: “[…] CBIR offers a different look at traditional computer vision problems: large data sets, no reliance on strong segmentation, and revitalized interest in color image processing and invariance.”
    How I see it: The adoption of large data sets became standard practice in computer vision. No reliance on strong segmentation (still unresolved) led to new areas of research, e.g., automatic ROI extraction and RBIR. Color image processing and color descriptors became incredibly popular, useful, and (to some degree) effective. Invariance is still a huge problem, but it’s cheaper than ever to have multiple views.
  • Revisiting [Smeulders et al. 2000]: Similarity and learning
    What they said: “We make a pledge for the importance of human-based similarity rather than general similarity. Also, the connection between image semantics, image data, and query context will have to be made clearer in the future.” “[…] in order to bring semantics to the user, learning is inevitable.”
    How I see it: The authors were pointing in the right direction (human in the loop, role of context, benefits from learning, …). However: similarity is a tough problem to crack and model; even our understanding of how humans judge image similarity is very limited. Machine learning is almost inevitable… but sometimes it can be abused.
  • Revisiting [Smeulders et al. 2000]: Interaction
    What they said: Better visualization options, more control to the user, ability to provide feedback […]
    How I see it: Significant progress on visualization interfaces and devices. Relevance feedback remains a very tricky tradeoff (effort vs. perceived benefit), but it is more popular than ever (rating, thumbs up/down, etc.).
  • Revisiting [Smeulders et al. 2000]: Need for databases
    What they said: “The connection between CBIR and database research is likely to increase in the future. […] problems like the definition of suitable query languages, efficient search in high-dimensional feature space, search in the presence of changing similarity measures are largely unsolved […]”
    How I see it: Very little progress. Image search and retrieval has benefited much more from document information retrieval than from database research.
  • Revisiting [Smeulders et al. 2000]: The problem of evaluation
    What they said: CBIR could use a reference standard against which new algorithms could be evaluated (similar to TREC in the field of text retrieval). “A comprehensive and publicly available collection of images, sorted by class and retrieval purposes, together with a protocol to standardize experimental practices, will be instrumental in the next phase of CBIR.”
    How I see it: Significant progress on benchmarks, standardized datasets, etc.: ImageCLEF, Pascal VOC Challenge, MSRA dataset, Simplicity dataset, UCID dataset and ground truth (GT), Accio / SIVAL dataset and GT, Caltech 101, Caltech 256, LabelMe.
  • Revisiting [Smeulders et al. 2000]: Semantic gap and other sources
    What they said: “A critical point in the advancement of CBIR is the semantic gap, where the meaning of an image is rarely self-evident. […] One way to resolve the semantic gap comes from sources outside the image by integrating other sources of information about the image in the query.”
    How I see it: The semantic gap problem has not been solved (and maybe never will be…). But the idea of using other sources was spot on: geographical context, social networks, tags.
  • Visual Information Retrieval (VIR) [Diagram: system architecture. The user interacts through a user interface (querying, browsing, viewing) backed by a query/search engine, which consults indexes and visual summaries built over a digital image and video archive; incoming images and videos undergo digitization and compression, cataloguing, and feature extraction.]
  • Designing a VIR system: a mind map
  • Tools and resources •  Visual descriptors and machine learning algorithms have become commodities. •  Examples of publicly available implementation and tools: –  Visual descriptors: •  img(Rummager) by Savvas Chatzichristofis •  Caliph & Emir and Lire by Mathias Lux –  Machine Learning: •  Weka
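To illustrate what "visual descriptors as commodities" means in practice, here is a deliberately toy descriptor: a normalized grayscale-intensity histogram. The real tools named above (Lire, img(Rummager)) compute far richer descriptors, but the contract is the same: image in, fixed-length feature vector out. The flat pixel list is an assumption standing in for real decoded image data.

```python
def gray_histogram(pixels, bins=8):
    """Toy global visual descriptor: a normalized histogram of
    grayscale intensities in [0, 255]. Returns a fixed-length
    vector that can be stored, indexed, and compared."""
    hist = [0] * bins
    for p in pixels:
        # Map intensity to a bin; clamp 255 into the last bin.
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [count / total for count in hist]

# Four sample pixel intensities, quantized into 4 bins
print(gray_histogram([0, 10, 128, 255], bins=4))
```

Because every image maps to a vector of the same length, descriptors from any of these tools can be dropped into the same index or fed to an off-the-shelf learner such as those in Weka.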
  • Part II Medical Image Retrieval
  • Medical image retrieval •  Challenges –  We’re entering a new country… •  How much can we bring? •  Do we speak the language? •  Do we know their culture? •  Do they understand us and where we come from? •  Opportunities –  They use images (extensively) –  They have expert knowledge –  Domains are narrow (almost by definition) –  Fewer clients, but potentially more $$
  • Medical image retrieval •  Selected challenges: –  Different terminology –  Standards –  Modality dependencies •  Other challenges: –  Equipment dependencies –  Privacy issues –  Proprietary data
  • Different terminology •  Be prepared for: –  New acronyms •  CBMIR (Content-Based Medical Image Retrieval) •  PACS (Picture Archiving and Communication System) •  DICOM (Digital Imaging and Communications in Medicine) •  Hospital Information Systems (HIS) •  Radiological Information Systems (RIS) –  New phrases •  Imaging informatics –  Lots of technical medical terms
  • Standards •  DICOM (http://medical.nema.org/) –  Global IT standard, created in 1993, used in virtually all hospitals worldwide. –  Designed to ensure the interoperability of different systems and manage related workflow. –  Will be required by all EHR systems that include imaging information as an integral part of the patient record. –  750+ technical and medical experts participate in 20+ active DICOM working groups. –  Standard is updated 4-5 times per year. –  Many available tools! (see http://www.idoimaging.com/)
  • Medical image modalities •  The IRMA code [Lehmann et al., 2003] –  4 axes with 3 to 4 positions, each in {0, …, 9, a, …, z}, where ‘0’ denotes ‘unspecified’ and marks the end of a path along an axis. •  Technical code (T) describes the imaging modality •  Directional code (D) models body orientations •  Anatomical code (A) refers to the body region examined •  Biological code (B) describes the biological system examined.
  • Medical image modalities •  The IRMA code [Lehmann et al., 2003] –  The entire code results in a character string of <14 characters (IRMA: TTTT – DDD – AAA – BBB). Example: “x-ray, projection radiography, analog, high energy – sagittal, left lateral decubitus, inspiration – chest, lung – respiratory system, lung” Source: [Lehmann et al., 2003]
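The TTTT–DDD–AAA–BBB shape described above lends itself to a small parser. This is a sketch based only on the format stated on the slides (four hyphen-separated axes over {0–9, a–z}); the sample code string is a made-up value illustrating the shape, not a real IRMA classification.

```python
import re

# Four axes: Technical (4 chars), Directional, Anatomical, Biological (3 each)
IRMA_RE = re.compile(r"^([0-9a-z]{4})-([0-9a-z]{3})-([0-9a-z]{3})-([0-9a-z]{3})$")

def parse_irma(code):
    """Split an IRMA-style code (TTTT-DDD-AAA-BBB) into its four axes.
    Positions set to '0' mean 'unspecified' and end the path along
    that axis. Raises ValueError if the string does not match."""
    m = IRMA_RE.match(code.lower())
    if not m:
        raise ValueError(f"not a valid IRMA code: {code!r}")
    t, d, a, b = m.groups()
    return {"technical": t, "directional": d, "anatomical": a, "biological": b}

print(parse_irma("1121-127-700-500"))  # hypothetical example string
```

A fixed, hierarchical code like this is what makes modality-aware filtering and evaluation tractable: axes can be compared prefix-wise, with '0' terminating the comparison.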
  • Medical image modalities •  The IRMA code [Lehmann et al., 2003] –  The companion tool… Source: [Lehmann et al., 2004]
  • CBMIR vs. text-based MIR •  Most current retrieval systems in clinical use rely on text keywords such as DICOM header information to perform retrieval. •  CBIR has been widely researched in a variety of domains and provides an intuitive and expressive method for querying visual data using features, e.g. color, shape, and texture. •  However, current CBIR systems: –  are not easily integrated into the healthcare environment; –  have not been widely evaluated using a large dataset; and –  lack the ability to perform relevance feedback to refine retrieval results. Source: [Hsu et al., 2009]
  • Who are the main players? •  USA –  NIH (National Institutes of Health) •  NIBIB - National Institute of Biomedical Imaging and Bioengineering •  NCI - National Cancer Institute •  NLM – National Libraries of Medicine –  Several universities and hospitals •  Europe –  Aachen University (Germany) –  Geneva University (Switzerland) •  Big companies (Siemens, GE, etc.)
  • Medical image retrieval systems: examples •  IRMA (Image Retrieval in Medical Applications) –  Aachen University (Germany) •  http://ganymed.imib.rwth-aachen.de/irma/ –  3 online demos: •  IRMA Query demo: allows the evaluation of CBIR on several databases. •  IRMA Extended Query Refinement demo: CBIR from the IRMA database (a subset of 10,000 images). •  Spine Pathology and Image Retrieval System (SPIRS), designed by the NLM/NIH (USA): holds information on ~17,000 spine x-rays.
  • Medical image retrieval systems: examples •  MedGIFT (GNU Image Finding Tool) –  Geneva University (Switzerland) •  http://www.sim.hcuge.ch/medgift/ –  Large effort, including projects such as: •  Talisman (lung image retrieval) •  Case-based fracture image retrieval system •  Onco-Media: medical image retrieval + grid computing •  ImageCLEF: evaluation and validation •  medSearch
  • Medical image retrieval systems: examples •  WebMIRS –  NIH / NLM (USA) •  http://archive.nlm.nih.gov/proj/webmirs/index.php –  Query by text + navigation by categories –  Uses datasets and related x-ray images from the National Health and Nutrition Examination Survey (NHANES)
  • Medical image retrieval systems: examples •  SPIRS (Spine Pathology & Image Retrieval System): Web-based image retrieval system for large biomedical databases –  NIH / UCLA (USA) –  Representative case study on highly specialized CBMIR Source: [Hsu et al., 2009]
  • Medical image retrieval systems: examples •  National Biomedical Imaging Archive (NBIA) –  NCI / NIH (USA) •  https://imaging.nci.nih.gov/ –  Search based on metadata (DICOM fields) –  3 search options: •  Simple •  Advanced •  Dynamic
  • Medical image retrieval systems: examples •  ARSS Goldminer –  American Roentgen Ray Society (USA) •  http://goldminer.arrs.org/ –  Query by text –  Results can be filtered by: •  Modality •  Age •  Sex
  • Medical image retrieval systems: examples •  Yottalook Images –  iVirtuoso (USA) •  http://www.yottalook.com/ –  Developed and maintained by four radiologists –  Query by text –  Claims to use 4 “core technologies”: •  “natural query analysis” •  “semantic ontology” •  “relevance algorithm” •  a specialized content delivery system that provides high-yield content based on the search term.
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval •  http://www.imageclef.org/2011/medical –  Dataset: 77,000+ images from articles published in medical journals, including the caption text and a link to the HTML of the full-text articles. –  3 types of tasks: •  Modality classification: given an image, return its modality •  Ad-hoc retrieval: classic medical retrieval task, with 3 “flavors”: textual, mixed, and semantic queries •  Case-based retrieval: retrieve cases, including images, that might best suit the provided case description.
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval 2011 –  Modality Classification
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval 2011 –  Modality Classification – FAU Team •  Personnel: 4 grad students + 2 undergrads + advisor •  Strategy: –  Textual classification using Lucene and associated tools and libraries –  Visual classification using 8 contemporary descriptors and 3 different families of classifiers, implemented using Weka and associated tools and libraries •  Supporting tools: –  Manual annotation tool (http://imageclef.mlab.ceecs.fau.edu/) –  Training set visualization tool  (http://imageclef.mlab.ceecs.fau.edu/classification/)
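The visual-classification strategy above (descriptors plus off-the-shelf classifiers) can be illustrated with the simplest classifier family, k-nearest neighbors. The 2-D descriptors, modality labels, and training examples below are invented for the sketch; the actual FAU submission used 8 contemporary descriptors and Weka-based classifiers, as stated above.

```python
import math
from collections import Counter

def knn_modality(query, training, k=3):
    """k-NN modality classifier over visual descriptors.
    training: list of (feature_vector, modality_label) pairs.
    Predicts the majority label among the k nearest examples."""
    nearest = sorted(training, key=lambda ex: math.dist(query, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 2-D descriptors labeled by modality
train = [([0.9, 0.1], "x-ray"), ([0.8, 0.2], "x-ray"),
         ([0.2, 0.9], "microscopy"), ([0.1, 0.8], "microscopy")]
print(knn_modality([0.85, 0.15], train, k=3))
```

In a real pipeline the textual and visual predictions would then be combined, mirroring the textual/visual split in the team's strategy.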
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval 2011 –  Modality Classification Results – FAU Team (textual)
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval 2011 –  Modality Classification Results – FAU Team (visual)
  • Evaluation: ImageCLEF Medical Image Retrieval •  ImageCLEF Medical Image Retrieval 2011 •  Modality Classification Results – FAU Team (visual)
  • Medical Image Retrieval: promising directions •  Better user interfaces (responsive, highly interactive, and capable of supporting relevance feedback) •  New applications of CBMIR, including: –  Teaching –  Research –  Diagnosis –  PACS and Electronic Patient Records •  CBMIR evaluation using medical experts •  Integration of local and global features •  New visual descriptors
  • Medical Image Retrieval: promising directions •  New devices
  • Part III Applications and related areas
  • Applications and related areas •  New devices and services •  Mobile visual search •  Image search and retrieval in the age of social networks •  Games! •  Other related areas •  Our recent work (highlights)
  • New devices and services •  Flickr (b. 2004) •  YouTube (b. 2005) •  Flip video cameras (b. 2006) •  iPhone (b. 2007) •  iPad (b. 2010)
  • Mobile visual search •  Driving factors –  Capable devices: 1 GHz ARM Cortex-A8 processor, PowerVR SGX535 GPU, Apple A4 chipset Source: http://www.apple.com/iphone/specs.html
  • Mobile visual search •  Driving factors –  Motivated users: image taking and image sharing are huge! –  Source: http://www.onlinemarketing-trends.com/2011/03/facebook-photo-statistics-and-insights.html
  • Mobile visual search •  Facebook for iPhone –  Source: http://statistics.allfacebook.com/applications/single/facebook-for-iphone/6628568379/
  • Mobile visual search •  Instagram: 2 million registered (although not necessarily active) users, who upload ~300,000 photos per day •  Several apps based on it! –  http://iphone.appstorm.net/roundups/photography/5-cool-apps-for-getting-the-most-out-of-instagram/
  • Mobile visual search •  Food photo sharing!
  • Mobile visual search •  Driving factors –  Legitimate (or not quite…) needs and use cases –  Source: http://www.slideshare.net/dtunkelang/search-by-sight-google-goggles
  • Mobile visual search •  Driving factors –  Smart phone market
  • Mobile visual search •  Smart phone market Source: http://www.cellular-news.com/story/48647.php?s=h
  • Mobile visual search •  Examples of applications –  Google Goggles –  oMoby (and the IQ Engines API) –  Others (kooaba, Fetch!, Gazopa, etc.)
  • Mobile visual search •  Google Goggles –  Android and iPhone –  Narrow-domain search and retrieval
  • Mobile visual search •  oMoby (and the IQ Engines API) –  iPhone
  • Mobile visual search •  oMoby (and the IQ Engines API)
  • Image search and retrieval & social networks •  The [so-called] Web 2.0 has brought about: –  New data sources –  New usage patterns –  New understanding about the users, their needs, habits, preferences –  New opportunities –  Lots of metadata! –  A chance to experience a true paradigm shift •  Before: image annotation is tedious, labor-intensive, expensive •  After: image annotation is fun!
  • Games! –  Google Image Labeler –  Games with a purpose (GWAP): The ESP Game, Squigl, Matchin
  • Other related areas •  Semi-automatic image annotation •  Tag recommendation systems •  Story annotation engines •  Content-based image filtering •  Copyright detection •  Watermark detection –  and many more
  • Our recent work (highlights) •  PRISM –  Image Genius •  Unsupervised ROI extraction from an image –  Crazy Collage •  MEDIX and associated tools •  Callisto: a content-based tag recommendation tool
  • Research Team
  • PRISM With Liam Mayron, Harris Corp., USA
  • Image Genius With Asif Rahman, FAU, USA
  • Unsupervised ROI extraction With Gustavo B. Borba and Humberto R. Gamba, UTFPR, Brazil
  • Crazy Collage Gustavo B. Borba et al., UTFPR, Brazil
  • MEDIX •  Medical image retrieval system with DICOM capabilities With Asif Rahman, FAU, USA
  • Callisto With Mathias Lux and Arthur Pitman, Klagenfurt University, Austria
  • Part IV Where is image search headed?
  • Where is image search headed? •  Advice for [young] researchers –  In this last part, I’ve compiled bits and pieces of advice that I believe might help researchers who are entering the field. –  They focus on the research avenues that I personally consider most promising.
  • Advice for [young] researchers • LOOK • THINK • UNDERSTAND • CREATE
  • Advice for [young] researchers • LOOK… –  at yourself (how do you search for images and videos?) –  around (related areas and how they have grown) –  at Google (and other major players)
  • Advice for [young] researchers • THINK… –  mobile devices –  new devices and services –  social networks –  games
  • Advice for [young] researchers • UNDERSTAND… –  human intentions and emotions –  the context of the search –  user’s preferences and needs
  • Advice for [young] researchers • CREATE… –  better interfaces –  better user experience –  new business opportunities (added value)
  • Concluding thoughts –  I believe (but cannot prove…) that successful VIR solutions will: •  combine content-based image retrieval (CBIR) with metadata (high-level semantic-based image retrieval) •  only be truly successful in narrow domains •  include the user in the loop –  Relevance Feedback (RF) –  Collaborative efforts (tagging, rating, annotating) •  provide friendly, intuitive interfaces •  incorporate results and insights from cognitive science, particularly human visual attention, perception, and memory
  • Concluding thoughts •  “Image search and retrieval” is not a problem, but rather a collection of related problems that look like one. •  There is a great need for good solutions to specific problems. •  10 years after “the end of the early years”, research in visual information retrieval still has many open problems, challenges, and opportunities.
  • Thanks! •  Questions? •  For additional information: omarques@fau.edu