Verification & Validation of a Semantic Image Tagging Framework via Generation of Geospatial Imagery Ground Truth
Transcript

  • 1. Verification & Validation of a Semantic Image Tagging Framework via Generation of Geospatial Imagery Ground Truth
    Shaun S. Gleason¹, Mesfin Dema², Hamed Sari-Sarraf², Anil Cheriyadat¹, Raju Vatsavai¹, Regina Ferrell¹
    ¹Oak Ridge National Laboratory, Oak Ridge, TN
    ²Texas Tech University, Lubbock, TX
  • 2. Contents
    Motivation
    Existing Approaches
    Proposed Approach
    Generative Model Formulation
    General Framework
    Preliminary Results
    Conclusions
  • 3. Motivation
    Automated identification of complex facilities in aerial imagery is an important and challenging problem.
    For our application, nuclear proliferation, facilities of interest can be complex.
    Such facilities are characterized by:
    - the presence of known structures,
    - their spatial arrangement,
    - their geographic location,
    - and their location relative to natural resources.
    Development, verification, and validation of semantic classification algorithms for such facilities are hampered by the lack of available sample imagery with ground truth.
  • 4. Motivation (cont.)
    Semantics: a set of objects (switchyard, containment building, turbine generator, cooling towers) together with their spatial arrangement may imply a semantic label such as "nuclear power plant".
    [Figure: aerial image of a plant with the switchyard, turbine building, cooling towers, and containment building labeled.]
  • 5. Motivation (cont.)
    Many algorithms are being developed to extract and classify regions of interest from images, such as in [1].
    V&V of these algorithms has not kept pace with their development due to the lack of image datasets with high-accuracy ground truth annotations.
    The community needs techniques that can provide images with accurate ground truth annotation at low cost.
    [1] Gleason SS, et al., "Semantic Information Extraction from Multispectral Geospatial Imagery via a Flexible Framework," IGARSS, 2010.
  • 6. Existing Approaches
    Manual ground truth annotation of images:
    - Very tedious for large volumes of images.
    - Highly subjective.
    Using synthetic images with corresponding ground truth data:
    - Digital Imaging and Remote Sensing Image Generation (DIRSIG) [2]:
      - Capable of generating hyperspectral images in the 0.4–20 micron range.
      - Capable of generating accurate ground truth data.
      - Requires a very tedious 3D scene construction stage.
      - Incapable of producing training images in sufficient quantities.
    [2] Digital Imaging and Remote Sensing Image Generation (DIRSIG): http://www.dirsig.org/.
  • 7. Existing Approaches
    In [3,4], researchers attempted to partially automate the cumbersome 3D scene construction of the DIRSIG model:
    - A LIDAR sensor is used to extract 3D objects from a given location.
    - Other modalities are used to correctly identify object types.
    - 3D CAD models of the objects and the object locations are extracted.
    - The extracted CAD models are placed at their respective positions to reconstruct the 3D scene and, finally, to generate a synthetic image with corresponding ground truth.
    The availability of 3D model databases, such as Google SketchUp [5], reduces the need for approaches like [3,4].
    [3] S.R. Lach, et al., "Semi-automated DIRSIG Scene Modeling from 3D LIDAR and Passive Imaging Sources," in Proc. SPIE Laser Radar Technology and Applications XI, vol. 6214, 2006.
    [4] P. Gurram, et al., "3D Scene Reconstruction through a Fusion of Passive Video and Lidar Imagery," in Proc. 36th AIPR Workshop, pp. 133–138, 2007.
    [5] Google SketchUp: http://sketchup.google.com/.
  • 8. Proposed Approach
    To generate synthetic images with ground truth annotation at low cost, we need a system that can learn from a few training examples.
    The system must be generative, so that one can sample a plausible scene from the model.
    It must also be capable of producing synthetic images with corresponding ground truth data in sufficient quantity.
    Our contribution to the problem is two-fold:
    - We incorporated expert knowledge into the problem with minimal effort.
    - We adapted a generative model to the synthetic image generation process.
  • 9. 9<br />Knowledge Representation: And-Or Graph<br />Nuclear Power Plant<br />Reactor<br />Turbine Building<br />Building<br />Switchyard<br />Cooling Tower<br />CT Type1<br />CT Type 2<br />[6] S.C. Zhu. and D. Mumford,” A Stochastic Grammar of Images”. Foundation and Trends in Computer Graphics <br />and Vision, 2(4): pp .259–362, 2006<br />
  • 10. Generative Model Formulation: Maximum Entropy Principle (MEP)
    Given observed constraints (i.e., the hierarchical and contextual information) on an unobserved distribution f, the probability distribution p that best approximates f is the one with maximum entropy [7,8].
    [7] J. Porway, et al., "Learning Compositional Models for Object Categories from Small Sample Sets," 2009.
    [8] J. Porway, et al., "A Hierarchical and Contextual Model for Aerial Image Parsing," 2010.
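    The constrained program the slide refers to has a standard form in [7,8]; it is reconstructed here from that literature rather than copied from the slide graphics:

        p^* = \arg\max_p \Big( -\sum_x p(x) \log p(x) \Big)
        \text{subject to } \mathbb{E}_p[\phi_i(x)] = \mathbb{E}_f[\phi_i(x)], \quad i = 1, \dots, K, \qquad \sum_x p(x) = 1,

    where the statistics \phi_i encode the hierarchical and contextual constraints, and their expectations under f are estimated from the annotated training images.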
  • 11. Generative Model Formulation: Optimization of MEP
    [Equations shown as slide graphics: the Gibbs distribution and the parameter-learning update.]
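    The two graphics almost certainly show the standard solution of the maximum entropy program and its learning rule; a reconstruction under that assumption: the constrained maximization yields a Gibbs distribution

        p(x; \lambda) = \frac{1}{Z(\lambda)} \exp\Big( -\sum_{i=1}^{K} \lambda_i \phi_i(x) \Big),

    whose parameters \lambda_i are learned by gradient ascent on the log-likelihood,

        \frac{d\lambda_i}{dt} \propto \mathbb{E}_{p(x;\lambda)}[\phi_i(x)] - \mathbb{E}_f[\phi_i(x)],

    where the model expectation is typically approximated with MCMC samples from the current model; learning stops when the sampled statistics match the observed ones.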
  • 12. General Framework
    [Figure: block diagram of the overall framework.]
  • 13. General Framework
    [Figure: the framework diagram, annotated with Google SketchUp [5] as the 3D model source and POV-Ray [9] as the renderer.]
    [5] Google SketchUp: http://sketchup.google.com/.
    [9] Persistence of Vision Raytracer (POV-Ray): http://www.povray.org/.
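    As a hedged illustration of the rendering stage (invented helper name and object layout, not the authors' code), a sampled scene can be written out as a POV-Ray [9] scene file and rendered to a synthetic image:

        def povray_scene(objects):
            """Emit a minimal POV-Ray scene: one box per sampled object.
            `objects` is a list of (x, z, width, depth, height) tuples,
            with x and z ground-plane coordinates (POV-Ray's y axis is up)."""
            parts = [
                'camera { location <0, 60, -60> look_at <0, 0, 0> }',
                'light_source { <40, 80, -40> color rgb <1, 1, 1> }',
                'plane { y, 0 pigment { color rgb <0.3, 0.5, 0.3> } }',
            ]
            for (x, z, w, d, h) in objects:
                parts.append(
                    f'box {{ <{x}, 0, {z}>, <{x + w}, {h}, {z + d}> '
                    'pigment { color rgb <0.7, 0.7, 0.7> } }'
                )
            return "\n".join(parts)

        # Hypothetical sampled layout: reactor, turbine building, two cooling towers.
        scene = povray_scene([(0, 0, 10, 10, 12), (15, 0, 20, 8, 6),
                              (-15, 5, 6, 6, 15), (-15, 15, 6, 6, 15)])
        with open("plant.pov", "w") as fh:
            fh.write(scene)   # render with: povray +Iplant.pov +W800 +H600

    Because every object is placed programmatically, the same layout can be re-rendered with flat per-object colors to produce a pixel-exact ground truth image for free.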
  • 14. Preliminary Results
    We are currently working with experts to annotate training images of nuclear power plant sites.
    To demonstrate the proposed approach, we have used a simple example as a proof of principle.
    Using this example, we illustrate how the generative framework can sample plausible scenes and generate synthetic images with corresponding ground truth annotation.
  • 15. Proof-of-Principle: Training Images
    [Figure: the training images used for the proof of principle.]
  • 16. 16<br />Proof-of-Principle:Manually Annotated Training Images<br />
  • 17. 17<br />Proof-of-Principle:Manually Annotated Training Images<br />Orientation Corrected Images<br />
  • 18. 18<br />Proof-of-Principle:Manually Annotated Training Images<br />Orientation Corrected Images Followed by Ellipse Fitting<br />
  • 19. 19<br />Relationships<br />
  • 20. Synthesized Images
    [Figures: samples synthesized before learning and after learning.]
  • 21. Synthesized Images
    [Figures: additional samples synthesized before learning and after learning.]
  • 22. 22<br />Synthesized Images<br />After Learning<br />Synthesized Image<br />
  • 23. 23<br />Synthesized Images<br />Part level ground truth image<br />Object level ground truth image<br />
  • 24. Manually Created Example
    3D Google SketchUp model of a nuclear plant: Pickering Nuclear Plant, Canada (left), and the model manually overlaid on an image (right).
  • 25. Conclusions
    The maximum entropy model has proven to be an elegant framework for learning patterns from training data and generating synthetic samples with similar patterns.
    Using the proposed framework, synthetic images with accurate ground truth annotation can be generated at relatively low cost.
    The proposed approach is very promising for algorithm verification and validation.
  • 26. Challenges Ahead
    The current model generates some results that do not represent a well-learned configuration of objects.
    We believe the histogram-based constraint representation contributes to these invalid results, since some values are averaged out when the histograms are generated.
    To avoid invalid results, we are studying a divide-and-conquer strategy that introduces on-the-fly clustering to separate the bad samples from the good ones, which helps tune the parameters during the learning phase.
  • 27. Acknowledgements
    Funding for this work is provided by the Simulations, Algorithms, and Modeling program within the NA-22 office of the National Nuclear Security Administration, U.S. Department of Energy.
  • 28. Thank You!
