This document discusses computational techniques for semantically retrieving unannotated images, enabling textual search over imagery that has no metadata. It describes:
1) Using exemplar image/metadata pairs to learn the relationship between visual features and metadata, then projecting unannotated images through that learned relationship so they can be retrieved (see the second sketch after this list).
2) Representing images as "visual terms", analogous to words in a text document (see the first sketch after this list).
3) Constructing a multidimensional "semantic space" in which related images, visual terms, and keywords are placed close together during training. Unannotated images can then be retrieved because they lie near the keywords that describe them (see the second sketch after this list).
4) Experimental retrieval results on a Corel dataset, showing that the approach works better for keywords associated with colors than for other keywords. The approach represents progress, but significant challenges remain.
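
The document does not spell out the feature pipeline behind item 2, but the "visual terms" idea can be sketched as a bag-of-visual-words representation: local descriptors are quantized against a learned vocabulary, and each image becomes a histogram of vocabulary entries. Everything below (the function names, the choice of plain k-means, the parameter values) is illustrative and not the authors' implementation.

```python
import numpy as np

def learn_vocabulary(descriptors, vocab_size=500, iters=20, seed=0):
    """Quantize local feature descriptors into a 'visual vocabulary'
    using plain k-means (illustrative; any vector quantizer would do)."""
    rng = np.random.default_rng(seed)
    centroids = descriptors[
        rng.choice(len(descriptors), vocab_size, replace=False)
    ].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest centroid.
        dists = np.linalg.norm(
            descriptors[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned descriptors.
        for k in range(vocab_size):
            members = descriptors[labels == k]
            if len(members) > 0:
                centroids[k] = members.mean(axis=0)
    return centroids

def visual_term_histogram(image_descriptors, centroids):
    """Represent one image as a histogram of visual terms,
    analogous to a term-frequency vector for a text document."""
    dists = np.linalg.norm(
        image_descriptors[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    return np.bincount(labels, minlength=len(centroids)).astype(float)

# Toy usage: a vocabulary from 5000 random 32-D descriptors,
# then a histogram for one "image" with 200 descriptors.
rng = np.random.default_rng(1)
vocab = learn_vocabulary(rng.standard_normal((5000, 32)), vocab_size=100)
hist = visual_term_histogram(rng.standard_normal((200, 32)), vocab)
```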
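
The document likewise gives no formulas for item 3's semantic space. One standard way to realize it, and a plausible reading of the description, is a latent-semantic-indexing-style construction: stack each training image's visual-term histogram and keyword annotations into a column of an occurrence matrix, take a truncated SVD to define the space, fold unannotated images in via the same projection, and rank them by cosine similarity to a keyword's position. The sketch below assumes exactly that; all names and the rank parameter are hypothetical.

```python
import numpy as np

def build_semantic_space(train_visual, train_keywords, rank=50):
    """train_visual: (n_images, n_visual_terms) histograms;
    train_keywords: (n_images, n_keywords) 0/1 annotations.
    Rows of the occurrence matrix are terms (visual terms first,
    then keywords); columns are training images."""
    occurrence = np.vstack([train_visual.T, train_keywords.T])
    U, s, _ = np.linalg.svd(occurrence, full_matrices=False)
    return U[:, :rank], s[:rank]

def fold_in(term_vector, U_k, s_k):
    """Project a term-space vector (an image's visual-term histogram
    padded with zeros for the keyword rows, or a one-hot keyword
    query) into the rank-k semantic space."""
    return (U_k.T @ term_vector) / s_k

def retrieve(keyword_index, n_visual_terms, n_keywords,
             unannotated_visual, U_k, s_k):
    """Rank unannotated images by cosine similarity to a keyword's
    position in the semantic space; best matches come first."""
    query = np.zeros(n_visual_terms + n_keywords)
    query[n_visual_terms + keyword_index] = 1.0  # one-hot keyword query
    q = fold_in(query, U_k, s_k)
    scores = []
    for hist in unannotated_visual:
        # An unannotated image contributes no keyword rows.
        doc = np.concatenate([hist, np.zeros(n_keywords)])
        d = fold_in(doc, U_k, s_k)
        scores.append(d @ q / (np.linalg.norm(d) * np.linalg.norm(q) + 1e-12))
    return np.argsort(scores)[::-1]
```

Under this reading, the "projecting" in item 1 is the fold_in step: an unannotated image has no keyword rows, but its visual terms place it near training images, and hence keywords, with similar appearance.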