Design of Benchmark Imagery for Validating
                    Facility Annotation Algorithms
    Randy Roberts1, Paul Pope2, Raju Vatsavai3, Ming Jiang1, Lloyd Arrowood4, Tim Trucano5,
            Shaun Gleason3, Anil Cheriyadat3, Alex Sorokine3, Aggelos Katsaggelos7,
          Thrasyvoulos Pappas7, Lucinda Gaines2, Lawrence Chilton6, and Ian Burns2



        IEEE International Geoscience and Remote Sensing Symposium
                                Vancouver, BC
                               24-29 July 2011
  1LLNL, 2LANL, 3ORNL, 4Y-12, 5SNL, 6PNNL, 7Northwestern University




Lawrence Livermore National Laboratory, PO Box 808, Livermore CA 94551-0808
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
LLNL-PRES-490191
Automated annotation of facilities is a non-trivial problem




   Lawrence Livermore National Laboratory                     2
Previous benchmarks for image annotation are
not adequate for our purposes




           [Example images from existing benchmarks: Caltech 101, PASCAL, OIRDS]



 Good benchmark datasets drive algorithm research and development


Lots of factors in facility benchmark imagery
         Factor                     Levels (3 each)                    Intrinsic/Extrinsic
         Facility Location          Urban, Suburban, Rural             Intrinsic
         Facility Size              Small, Medium, Large               Intrinsic
         Compactness                Sparse, Moderate, Dense            Intrinsic
         Roof type                  Flat, Sloped, Multi-faceted        Intrinsic
         Building Size              Small, Medium, Large               Intrinsic
         Time-of-Day                Morning, Noon, Evening             Extrinsic
         Sensor View Angle          Nadir, Low-Oblique, High-Oblique   Extrinsic
         Spatial Scale              Small, Medium, Large               Extrinsic
         Visibility                 5km, 10km, 20km                    Extrinsic

         Cloud Cover                Clear, Broken, Overcast            Extrinsic
         Season                     Summer, Fall, Winter               Extrinsic
         Climate Zone               Tropical, Temperate, Arid          Extrinsic


Number of images for a full factorial-design experiment (three images per combination):
N_images × (Levels)^Factors = 3 × 3^12 = 1,594,323 images
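The full-factorial count above can be checked in a few lines. A sketch: the factor and level names follow the table, but the dictionary itself is illustrative, not code from the original.

```python
from itertools import product

# Twelve factors, three levels each, as listed in the table above.
levels_per_factor = {
    "facility_location": ["urban", "suburban", "rural"],
    "facility_size":     ["small", "medium", "large"],
    "compactness":       ["sparse", "moderate", "dense"],
    "roof_type":         ["flat", "sloped", "multi-faceted"],
    "building_size":     ["small", "medium", "large"],
    "time_of_day":       ["morning", "noon", "evening"],
    "sensor_view_angle": ["nadir", "low-oblique", "high-oblique"],
    "spatial_scale":     ["small", "medium", "large"],
    "visibility_km":     [5, 10, 20],
    "cloud_cover":       ["clear", "broken", "overcast"],
    "season":            ["summer", "fall", "winter"],
    "climate_zone":      ["tropical", "temperate", "arid"],
}

# Every combination of levels is one cell of the full-factorial design.
n_combinations = sum(1 for _ in product(*levels_per_factor.values()))  # 3**12

images_per_combination = 3
total_images = images_per_combination * n_combinations
print(total_images)  # 1594323
```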
What objects and their spatial arrangements
             constitute a facility?

                                         The upper relationships indicate the
                                         types of industry

                                         The lower relationships indicate parts
                                         (objects) that compose an industrial
                                         facility. They were derived in part by
                                         analysis of nouns in the paper:

                                         “Industrial Components: A Photo
                                         Interpretation Key on Industry,”
                                         T. Chisnell and G. Cole, Photogrammetric
                                         Engineering, vol. 24, March 1958




Three sources of benchmark imagery

                                    Real imagery



                                                   Composite
                                                   imagery




                                                               Synthetic
                                                               imagery

Real imagery, annotated by experts




       Controlled vocabulary for annotations developed from Chisnell and Cole
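A controlled vocabulary can be enforced mechanically at annotation time. A minimal sketch; the label set below is a hypothetical subset, not the actual vocabulary derived from Chisnell and Cole.

```python
# Hypothetical subset of a controlled vocabulary for facility annotation.
VOCABULARY = {"building", "storage_tank", "stack", "railroad_line", "road", "pile"}

def annotate(label, polygon):
    """Return an annotation record, rejecting labels outside the vocabulary."""
    if label not in VOCABULARY:
        raise ValueError(f"{label!r} is not in the controlled vocabulary")
    return {"label": label, "polygon": polygon}

record = annotate("storage_tank", [(0, 0), (12, 0), (12, 9), (0, 9)])
```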

Variation in annotation between six experts




  Examples shown: buildings and railroad lines
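The slides do not name a metric for this variation; one common way to quantify it is pairwise intersection-over-union between experts' delineations. A minimal sketch on axis-aligned boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two experts outline the same building slightly differently:
print(round(iou((0, 0, 10, 10), (1, 1, 11, 11)), 3))  # 0.681
```

Averaging this over all expert pairs and all objects gives one scalar summary of annotation disagreement.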


Composite Imagery




      Pipeline: USGS image → 3D facility model + shadow → blending model into scene
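A minimal form of the blending step is alpha compositing the rendered model (with its shadow matte) over the background chip. A NumPy sketch under that assumption; the actual blending model behind the slides may be more sophisticated.

```python
import numpy as np

def composite(background, render, alpha):
    """Alpha-composite a rendered model over a background image.

    background, render: (H, W, 3) float arrays in [0, 1]
    alpha: (H, W) matte from the renderer (1 = model/shadow, 0 = background)
    """
    a = alpha[..., None]                      # broadcast matte over channels
    return a * render + (1.0 - a) * background

bg = np.zeros((4, 4, 3))                      # toy background chip
fg = np.ones((4, 4, 3))                       # toy rendered model
matte = np.zeros((4, 4))
matte[1:3, 1:3] = 1.0                         # model occupies the center
out = composite(bg, fg, matte)
```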




Synthetic Facilities
          “Synthesize a facility consisting of three buildings and a tank farm”




      Several rendering engines available, so we’re focused on
      how to arrange objects into a realistic facility
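As an illustration of the arrangement problem (the slides do not prescribe an algorithm), one naive baseline is rejection sampling of object footprint centers under a minimum-separation constraint:

```python
import math
import random

def place_objects(n, extent, min_sep, max_tries=10_000, seed=0):
    """Place n object centers in an extent x extent site, at least min_sep apart."""
    rng = random.Random(seed)
    placed = []
    for _ in range(max_tries):
        if len(placed) == n:
            break
        p = (rng.uniform(0, extent), rng.uniform(0, extent))
        # Keep the candidate only if it clears every object placed so far.
        if all(math.dist(p, q) >= min_sep for q in placed):
            placed.append(p)
    return placed

# "Three buildings and a tank farm" -> four object footprints on a 100 m site.
sites = place_objects(4, extent=100.0, min_sep=20.0)
```

Realistic facilities need far more than separation constraints (roads, process flow, co-location of related parts), which is why the arrangement problem is the research focus here.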

What is the cost of creating these benchmarks?

Real, annotated imagery:
   (7 experts) × (0.5 hr/image)
   + cost to reconcile variations in expert annotations
   + cost to acquire imagery
   + cost to license imagery

Composite imagery:
   Cost to build model
   + cost to composite into background
   + cost to acquire background imagery
   + cost to license background imagery

Synthetic imagery:
   Cost to build model
   + cost to acquire/generate supporting models (reflectance, illumination, atmosphere, etc.)
   + cost to render
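The real-imagery column implies a quick back-of-envelope labor estimate. A sketch using the slide's own figures together with the full-factorial image count computed earlier; it illustrates why fully annotating a complete factorial with real imagery is impractical.

```python
# Back-of-envelope: expert-annotation labor alone for a full factorial.
experts = 7
hours_per_image_per_expert = 0.5
images = 1_594_323                 # full-factorial count from the factors slide

expert_hours = experts * hours_per_image_per_expert * images
print(expert_hours)                # expert-hours, before reconciliation costs
```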




Future Research and Development
• Automated generation of synthetic facilities
• Expressive, usable knowledge representation
  for encoding relevant aspects of facilities
• V&V methodology: How to perform robust,
  comprehensive V&V using these benchmarks
• What are the proper roles of real, composite
  and synthetic benchmarks?
• How good is good enough?

Three things to remember:
• Design of benchmark imagery for geospatial algorithm V&V is
  a difficult problem
     – Lots of factors ⇒ lots of benchmark imagery
     – Complexity of scene, and objects in scene
     – Geospatial extent of the imagery
• Knowledge representation (ontology) to codify the objects
  (and their geospatial relationships) in the facility/scene that
  are important to us
• Real, composite and synthetic imagery offer the potential to
  span the space of factors for comprehensive V&V. Each has
  its own cost/benefit for particular V&V tasks

  The authors would like to acknowledge the support of the Simulations, Algorithms, and
  Modeling program at the Office of Nonproliferation and Verification Research &
  Development, National Nuclear Security Administration.


