Retraining maximum likelihood classifiers using low-rank model.ppt
Transcript

  • 1. Retraining maximum likelihood classifiers using a low-rank model. Arnt-Børre Salberg, Norwegian Computing Center, Oslo, Norway. IGARSS, July 25, 2011
  • 2. Introduction
    • Challenge: Dataset shift problem:
      • Training data match the test data poorly due to atmospheric, geographic, botanical and phenological variations in the image data
      • → reduced classification performance
      • Class-dependent data distribution varies
        • between training images
        • between test and training images
    • Goal: Develop a method that re-estimates the parameters such that the classifier fits the test data well
  • 3. Introduction
    • Many surface-reflectance algorithms require data from external sources
      • LEDAPS (Landsat):
        • ozone and water vapor measurements
    • Phenological, botanical and geographic variation, in addition to atmospheric variation, make the calibration problem even harder
  • 4. An existing method…
    • Models the test image as a mixture distribution and estimates all parameters using the EM-algorithm, with estimated parameters from training data as initial values
    • Too many degrees of freedom: the statistical fit is excellent, but class labels get mixed.
  • 5. Low-rank parameter modeling
    • Training image k :
      • Class mean vector and covariance matrix (class i )
    • Class mean vector and covariance matrix model for the test image
      • The unknown parameter vectors of the model are to be estimated from the data
  • 6. Low-rank data modeling
    • The proposed method for modeling the test data is a low-rank approach, since the number of unknown parameters L is smaller than the data dimension D.
      • This is much less than estimating all C·D class-mean parameters, i = 1, …, C
    • By using a low-rank estimate of the class mean vectors of the test data, the spectral differences between the classes are maintained to a larger degree
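As a concrete (and purely hypothetical) count of the savings, take C = 6 classes, D = 8 bands and L = 2 low-rank parameters:

```python
# Illustrative parameter count: re-estimating every class mean costs
# C*D free parameters, whereas the low-rank model uses only L of them.
C, D, L = 6, 8, 2       # classes, bands, low-rank parameters (hypothetical)
full = C * D            # free mean parameters if all class means are re-estimated
low_rank = L            # parameters in the low-rank mean model
assert low_rank < D < full
print(full, low_rank)   # 48 vs 2
```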
  • 7. Parameter estimation
    • Procedure for estimating the unknown parameter vectors
      • Select N random samples { y 1, y 2,… y N } from the test image
  • 8. Parameter estimation
    • Procedure for estimating the unknown parameter vectors
      • Select N random samples { y 1, y 2,… y N } from the test image
      • Model them using a Gaussian mixture distribution
    • Estimate the parameters by maximizing the likelihood
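The procedure above can be sketched as a minimal numpy EM loop. This is an illustration under assumptions, not the paper's exact parameterization: here the test-image class means are modeled as mu_i + U_i·theta with a single low-rank parameter vector theta shared across classes, the class covariances are kept fixed at their training values, and the M-step for theta is a weighted generalized-least-squares solve; the names U and theta are my notation.

```python
import numpy as np

def retrain_lowrank_em(y, mu, Sigma, U, n_iter=50):
    """EM re-estimation of a shared low-rank mean shift (sketch).

    y: (N, D) test samples; mu: (C, D) training class means;
    Sigma: (C, D, D) training class covariances (held fixed);
    U: (C, D, L) assumed low-rank basis per class.
    """
    C, D = mu.shape
    L = U.shape[2]
    theta = np.zeros(L)                  # low-rank shift parameters
    pi = np.full(C, 1.0 / C)             # mixing proportions
    Sinv = np.linalg.inv(Sigma)          # batched inverses, (C, D, D)
    for _ in range(n_iter):
        # E-step: responsibilities under the current shifted means
        means = mu + np.einsum('cdl,l->cd', U, theta)
        logp = np.empty((len(y), C))
        for i in range(C):
            r = y - means[i]
            mah = np.einsum('nd,de,ne->n', r, Sinv[i], r)
            _, logdet = np.linalg.slogdet(Sigma[i])
            logp[:, i] = np.log(pi[i]) - 0.5 * (mah + logdet + D * np.log(2 * np.pi))
        logp -= logp.max(axis=1, keepdims=True)
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)          # (N, C) responsibilities
        # M-step: closed-form weighted GLS for theta
        A = np.zeros((L, L))
        b = np.zeros(L)
        for i in range(C):
            UtS = U[i].T @ Sinv[i]                  # (L, D)
            A += w[:, i].sum() * (UtS @ U[i])
            b += UtS @ (w[:, i] @ (y - mu[i]))
        theta = np.linalg.solve(A, b)
        pi = w.mean(axis=0)
    return theta, pi
```

On synthetic two-class data whose true means are the training means shifted along the first axis, the loop recovers the shift while keeping the class labels tied to their training distributions.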
  • 9. Experiment 1: Cloud detection in optical images
    • 15 different QuickBird and WorldView-2 images covering 7 different scenes in Norway
    • Features
      • Band 2 (green)
      • Band 3 (red)
    • Classes
      • clouds, cloud shadows, vegetation, concrete/asphalt/etc., haze and water
    • Resolution down-sampled to 19.2 m (16.0 m)
    • 4 different training (sub)images
  • 10. Experiment 1: Cloud detection in optical images
    • Model
      • The model vector for class i is the eigenvector corresponding to the largest eigenvalue of the matrix
    (Figure: average training eigenvector vs. test eigenvector)
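Extracting the leading eigenvector of a symmetric matrix, as the model calls for, can be done with numpy's `eigh`; the matrix M below is a toy stand-in for the matrix defined on the slide:

```python
import numpy as np

# Toy anisotropic samples: the first band dominates the variance,
# so the leading eigenvector should align with the first axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.2])
M = X.T @ X / len(X)                  # symmetric scatter matrix (stand-in)
eigvals, eigvecs = np.linalg.eigh(M)  # eigh returns eigenvalues in ascending order
v = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
```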
  • 11. Experiment 1: Cloud detection in optical images
    • Parameter estimation: iterative update of the parameter estimate at iteration l + 1
  • 12. Results: Cloud detection in optical images (without retraining vs. with retraining)
  • 13. Results: Cloud detection in optical images
  • 14. Results: Cloud detection in optical images
  • 15. Experiment 2: Tree cover mapping of tropical forest
    • 13 different Landsat TM images covering an area near Amani, Tanzania (path/row 166/063)
    • Features
      • Bands 1–5 and 7
    • Classes
      • Forest, sparse forest, grass and soil
    • Two training images (1986-10-06 and 2010-02-10)
  • 16. Experiment 2: Tree cover mapping of tropical forest
    • Model
      • The parameter vector is constrained to contain only positive elements
    • Solution found using non-negative least-squares in combination with iterative maximum-likelihood estimation
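A minimal sketch of the non-negative least-squares step using scipy; the matrix A and vector b are toy stand-ins, since the actual system relating training-image means to the test image is given only in the slide graphics:

```python
import numpy as np
from scipy.optimize import nnls

# Solve min ||A x - b|| subject to x >= 0 elementwise.
# A and b are illustrative placeholders, not the paper's matrices.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([0.8, 1.9, 1.1])
x, residual = nnls(A, b)   # x is the non-negative solution
```

Here the unconstrained least-squares solution happens to be non-negative already, so NNLS returns it unchanged with zero residual; in the paper's setting the constraint is what keeps the estimated mixing of training-image statistics physically meaningful.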
  • 17. Experiment 2: Tree cover mapping of tropical forest
    • Parameter estimation: iterative update of the parameter estimate at iteration l + 1
  • 18. Results: Tree cover mapping of tropical forest
    (Figure: without retraining vs. with retraining, February 2010 and July 2009)
  • 19. Summary and conclusion
    • Proposed a simple method for handling the dataset shift between training and test data
    • Cloud detection: Evaluated successfully on many different QuickBird and WorldView-2 images.
      • Haze versus clouds remains challenging
      • Snow and clouds are confused
    • Guidelines on how to select the low-rank modeling functions are needed
    • The EM algorithm may converge to local minima
    • More testing and validation of the method are necessary
