Digital image processing techniques can be used to extract information from remote sensing images and to infer landscape characteristics. Remote sensing work falls into two main stages: image acquisition and image inference. Image processing lets users enhance images, perform geometric corrections, and extract patterns and features from remote sensing data. Key techniques include intensity transformations, spatial filtering, histogram processing, and geometric corrections that map image data to real-world coordinates.
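The intensity transformations mentioned above can be illustrated with a minimal min-max contrast stretch. This is a generic NumPy sketch, not code from the original slides; the function name and the 8-bit output range are assumptions:

```python
import numpy as np

def contrast_stretch(band: np.ndarray) -> np.ndarray:
    """Linearly rescale a single band to the full 8-bit range [0, 255]."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    if hi == lo:                       # flat band: nothing to stretch
        return np.zeros(band.shape, dtype=np.uint8)
    stretched = (band - lo) / (hi - lo) * 255.0
    return np.round(stretched).astype(np.uint8)

dim = np.array([[50, 60], [70, 80]], dtype=np.uint8)   # narrow-range input
print(contrast_stretch(dim))   # output spans the full 0..255 range
```

A stretch like this changes only the display of the data; radiometric analysis should be done on the original values.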
Lecture 4 Digital Image Processing (1).pptx (sivan96)
This document discusses digital image processing in remote sensing. It covers topics such as digital data formats, radiometric and geometric characteristics of image data, and various image processing techniques. The key techniques discussed are preprocessing, which involves radiometric and geometric corrections; image enhancement to improve visual interpretation; image transformations using arithmetic operations to combine bands; and supervised and unsupervised image classification to categorize pixels.
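The unsupervised classification mentioned above is typically done by clustering pixel spectra; a minimal k-means sketch in NumPy follows. The initialization, iteration count, and toy data are illustrative assumptions, not the document's specific method (remote sensing packages often use variants such as ISODATA):

```python
import numpy as np

def kmeans_classify(pixels: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Cluster pixel spectra (n_pixels x n_bands) into k spectral classes."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest class center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean spectrum of its class
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Two well-separated spectral groups in two bands: "dark" and "bright" pixels
pixels = np.array([[10, 12], [11, 13], [200, 210], [205, 208]], dtype=float)
labels = kmeans_classify(pixels, k=2)
```

Supervised classification differs only in that class statistics come from analyst-selected training pixels rather than from iterative clustering.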
Digital image processing & computer graphics (Ankit Garg)
This document on digital image processing and computer graphics discusses several related topics:
1. Digital image processing involves manipulating digital images using computer programs. It includes operations like geometric transformations, image refinement to remove noise, color adjustments, and combining multiple images.
2. Computer graphics is focused on constructing images, while digital image processing is focused on manipulating existing images.
3. Common digital image processing techniques discussed include image enhancement to improve image quality, image restoration to remove degradation, image segmentation to separate objects, image resizing, compression, and feature extraction.
4. Image filtering is used to reduce noise in images using convolution with filters that target different spatial-frequency ranges, such as low-pass and high-pass filters.
This document provides an overview of various image enhancement techniques. It begins with an introduction to image enhancement and its objectives. It then outlines and describes several categories of enhancement methods, including spatial-frequency domain methods, point operations, histogram operations, spatial operations, and transform operations. Specific techniques discussed in detail include contrast stretching, clipping, thresholding, median filtering, unsharp masking, and principal component analysis for multispectral images. The document also covers color image enhancement and techniques for pseudocoloring.
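Of the techniques listed above, unsharp masking is easy to sketch: subtract a blurred copy from the image and add the difference back to exaggerate edges. A minimal NumPy version, assuming a 3x3 box blur and 8-bit intensities (real implementations usually use a Gaussian blur):

```python
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 mean blur with edge replication."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Sharpen by adding back the difference between the image and its blur."""
    sharpened = img + amount * (img - box_blur(img))
    return np.clip(sharpened, 0, 255)

img = np.zeros((5, 5)); img[:, 2:] = 100.0   # a vertical step edge
out = unsharp_mask(img)                       # contrast across the edge is boosted
```

The `amount` parameter controls how strongly the edge detail is amplified.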
Digital image processing involves the manipulation of digital images through various algorithms and techniques. The key steps involve image acquisition through sensors, preprocessing such as sampling and quantization, processing such as enhancement and analysis, and output. Digital image processing has applications in fields such as medicine, astronomy, security, and more. It allows analysis and manipulation of images to improve quality or extract useful information.
Image enhancement in spatial domain.ppt (pravin patil)
This document discusses various methods for image enhancement in the spatial domain. It describes spatial domain enhancement methods as techniques that directly modify pixels in an image, including point operations and spatial operations. Some specific spatial domain enhancement methods covered are contrast stretching, thresholding, digital negatives, gray level slicing, log transformations, and histogram equalization. Neighborhood operations are also introduced as operating on a larger neighborhood of pixels than point operations.
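Histogram equalization, one of the spatial-domain methods listed above, can be sketched in a few lines of NumPy; this is the textbook CDF-remapping form for 8-bit images, not code from the slides:

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Classic histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF value at the first occupied gray level
    if cdf[-1] == cdf_min:             # constant image: nothing to equalize
        return np.zeros_like(img)
    # Remap gray levels so the cumulative distribution becomes ~uniform
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * 255
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]

low = np.array([[100, 100], [101, 101]], dtype=np.uint8)  # low-contrast input
eq = equalize_histogram(low)
print(eq)  # the two occupied gray levels are spread to 0 and 255
```

Note how a low-contrast input occupying two adjacent gray levels is stretched to the extremes of the range.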
This document provides an introduction to digital image processing. It defines what an image and digital image are, and discusses the first ever digital photograph. It describes digital image processing as processing digital images using computers, with sources including the electromagnetic spectrum from gamma rays to radio waves. Key concepts covered include digital images, image enhancement through spatial and frequency domain methods, image restoration to remove noise and blurring, and image compression to reduce file size through removing different types of data redundancy.
This document discusses various techniques for image enhancement, which aims to improve the visual interpretability of images. It describes point operations and local operations that modify pixel brightness values. Common enhancement techniques mentioned include contrast manipulation through thresholding, stretching, and slicing. Spatial feature manipulation techniques like filtering and edge enhancement are also summarized. The document provides examples and explanations of contrast stretching, spatial filtering, and ratio images. It concludes with a brief overview of topographic correction to account for slope and aspect effects.
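The ratio images mentioned above divide one band by another pixel by pixel (for example near-infrared by red), which helps suppress topographic shading. A minimal NumPy sketch with a zero-division guard; the band names and values are hypothetical:

```python
import numpy as np

def ratio_image(band_a: np.ndarray, band_b: np.ndarray) -> np.ndarray:
    """Per-pixel band ratio, guarding against division by zero."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    # Where the denominator is zero, leave the ratio at 0 instead of dividing
    return np.divide(a, b, out=np.zeros_like(a), where=(b != 0))

nir = np.array([[80.0, 40.0], [60.0, 0.0]])   # hypothetical near-infrared band
red = np.array([[20.0, 40.0], [0.0, 10.0]])   # hypothetical red band
ratio = ratio_image(nir, red)                 # pixel ratios: 4, 1, 0 (guarded), 0
```

Because illumination scales both bands similarly on slopes, the ratio depends mostly on surface material rather than on topography.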
Image Segmentation
Types of Image Segmentation
Semantic Segmentation
Instance Segmentation
Types of Image Segmentation Techniques based on the image properties:
Threshold Method.
Edge Based Segmentation.
Region-Based Segmentation.
Clustering Based Segmentation.
Watershed Based Method.
Artificial Neural Network Based Segmentation.
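The threshold method listed above can be sketched with Otsu's algorithm, which picks the gray level that maximizes between-class variance; a NumPy version assuming an 8-bit single-band image (the toy image is illustrative):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Pick the gray level that maximizes between-class variance (Otsu's method)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_count = np.cumsum(hist)
    cum_mean = np.cumsum(hist * np.arange(256))
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum_count[t] / total          # weight of the dark class
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = cum_mean[t] / cum_count[t]
        mu1 = (cum_mean[-1] - cum_mean[t]) / (total - cum_count[t])
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

img = np.array([[10, 12, 200], [11, 201, 199]], dtype=np.uint8)  # bimodal toy image
t = otsu_threshold(img)
binary = img > t        # the three bright pixels end up in the foreground
```

Thresholding is the simplest of the listed techniques and works best when the histogram is clearly bimodal, as in this toy example.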
Here are the key steps to convert a color image to a binary image in LabVIEW:
1. Read in the color image using the Read PNG or Read JPEG VI. This will return an image structure.
2. Use the Color To Gray VI to convert the color image to grayscale. This removes the color information and leaves only the luminance.
3. Apply a threshold to convert the grayscale image to binary. Use the Threshold VI and choose an appropriate threshold value (usually 128 for 8-bit images).
4. The output of the Threshold VI will be a binary image, where pixels above the threshold are white (255) and pixels below are black (0).
5. You can now process the binary image further.
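LabVIEW VIs are graphical and cannot be shown as text, but steps 2-4 above can be sketched as a rough Python/NumPy equivalent. The BT.601 luminance weights are an assumption about how a color-to-gray conversion computes luminance, not a statement about the Color To Gray VI's internals:

```python
import numpy as np

def color_to_binary(rgb: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Color -> grayscale (luminance) -> binary, mirroring steps 2-4 above."""
    # Luminance-weighted grayscale (ITU-R BT.601 weights -- an assumption)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Threshold to a 0/255 binary image
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 255, 255)          # one white pixel on a black background
binary = color_to_binary(rgb)        # only the white pixel maps to 255
```

As in step 4, pixels at or above the threshold become white (255) and the rest black (0).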
Image processing, Noise, Noise Removal filters (Kuppusamy P)
Basics of images, Digital Images, Noise, Noise Removal filters
Reference:
Richard Szeliski, Computer Vision: Algorithms and Applications, Springer 2010
Fingerprints vary from person to person; no two people share the same ridge structure, so fingerprints play a major role in the unique identification of human beings. Because fingerprints are both unique and persistent, Automatic Fingerprint Identification Systems (AFIS) are emerging as a popular technology in biometric identification applications. The overall performance of such a system depends on the quality of the fingerprint images. With this in mind, this paper focuses on fingerprint image enhancement to improve fingerprint quality and make feature (minutiae) extraction reliable.
This document provides an introduction to image processing. It discusses key concepts such as signals, signal processing, and how images can be represented as signals and matrices. The document covers how images are converted to digital form and stored on computers. It also describes different levels of image processing from low-level operations like enhancement to higher-level tasks like recognition and interpretation. Overall, the document gives an overview of the fundamentals of digital image processing.
This document discusses various intensity transformation and spatial filtering techniques for digital image enhancement. It covers single pixel operations like negative image and contrast stretching. It also discusses neighborhood operations such as averaging and median filters. Finally, it discusses geometric spatial transformations like scaling, rotation and translation. The document provides details on basic intensity transformation functions including log, power law, and piecewise linear transformations. It also covers histogram processing techniques like histogram equalization, matching and local histogram processing. Spatial filtering and its mechanics are explained.
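The power-law transformation mentioned above follows s = c * r^gamma; a minimal NumPy sketch on intensities normalized to [0, 1] (the demo values are illustrative, and c = 1 is assumed):

```python
import numpy as np

def gamma_transform(img: np.ndarray, gamma: float, c: float = 1.0) -> np.ndarray:
    """Power-law transform s = c * r**gamma on intensities normalized to [0, 1]."""
    r = img.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = np.array([[0, 64, 128, 255]], dtype=np.uint8)
out = gamma_transform(img, gamma=0.5)   # gamma < 1 brightens midtones
```

With gamma below 1 the curve lifts dark and mid values while leaving the endpoints 0 and 255 fixed; gamma above 1 has the opposite effect.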
The document discusses processing of satellite images using digital image processing. It describes importing satellite images, manipulating them through various techniques like color composition, image rectification, enhancement, and information extraction. The key steps are pre-processing images through rectification and restoration, enhancing images, and extracting information through classification techniques to generate thematic maps.
This document discusses methods for enhancing spatial features in images. It describes spatial filtering and convolution, which involve applying kernels or filters to images to emphasize different spatial frequencies. Low-pass filters smooth images by reducing high spatial frequencies related to fine detail, while high-pass filters have the opposite effect of sharpening images. Edge enhancement filters aim to highlight linear features by increasing contrast around edges. Fourier analysis represents images as combinations of sine and cosine waves of different frequencies and can reveal features aligned in various directions.
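The spatial filtering described above can be sketched as a naive 2-D convolution with a low-pass (box) and a high-pass (Laplacian) kernel; zero padding and the toy step-edge image are simplifying assumptions:

```python
import numpy as np

def convolve2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1].astype(float)  # true convolution flips the kernel
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + kh, x:x + kw] * flipped).sum()
    return out

low_pass = np.full((3, 3), 1 / 9)                             # smooths fine detail
high_pass = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])   # Laplacian edge kernel

img = np.zeros((5, 5)); img[:, 2:] = 9.0        # a vertical step edge
smooth = convolve2d(img, low_pass)              # the edge is blurred
edges = convolve2d(img, high_pass)              # response concentrated at the edge
```

The high-pass response is zero in flat regions and peaks on either side of the step, which is exactly the behavior edge-enhancement filters exploit.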
This document discusses image enhancement techniques in digital image processing. It defines image enhancement as modifying image attributes to make an image more suitable for a given task. The main techniques discussed are spatial domain enhancement methods like noise removal, contrast adjustment, and histogram equalization. Examples are provided to demonstrate the effects of these enhancement methods on images.
10-Image rectification and restoration.ppt (AJAYMALIK97)
This document discusses digital image processing. It defines digital image processing as the computer-based manipulation and interpretation of digital images. It outlines seven broad types of computer-assisted operations, including image rectification and restoration, image enhancement, image classification, data merging and GIS integration, hyperspectral image analysis, biophysical modeling, and image transmission and compression. It provides details on image rectification and restoration, which involves preprocessing to correct distorted or degraded image data through techniques like geometric distortions, radiometric calibration, and noise elimination.
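Geometric rectification is commonly implemented by fitting a polynomial (here, first-order affine) mapping from image coordinates to map coordinates using ground control points. A minimal least-squares sketch; the control points, 30 m pixel size, and map origin are made up for illustration:

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares affine mapping from image (col, row) to map (E, N).
    src, dst are (n, 2) arrays of ground control point coordinates, n >= 3."""
    A = np.hstack([src, np.ones((len(src), 1))])   # design matrix [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                   # shape (3, 2); apply as A @ coeffs

# Hypothetical ground control points: 30 m pixels, map origin (500000, 4200000)
src = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
dst = np.array([[500000, 4200000], [503000, 4200000],
                [500000, 4197000], [503000, 4197000]], dtype=float)
M = fit_affine(src, dst)
mapped = np.hstack([src, np.ones((len(src), 1))]) @ M   # reproject the GCPs
```

In practice the residuals at the control points are checked (as RMS error), and pixel values are then resampled at the transformed locations.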
Computer vision.pptx for pg students study about computer vision (shesnasuneer)
This document discusses low-level and high-level image processing techniques in computer vision. It explains that low-level methods use little knowledge about image content and involve steps like preprocessing, segmentation, and object description. High-level processing applies knowledge, goals, and plans to perform tasks like pattern recognition and make decisions based on image understanding. The document also covers basic concepts in computer vision like a priori knowledge, heuristics, and syntactic and semantic analysis, and describes how images can be modeled as signals and functions.
Digital image processing techniques can be used to enhance images in the spatial domain. Spatial averaging reduces noise by replacing each pixel value with the average value of neighboring pixels. As more images are averaged, the output image approaches the true noise-free image. Median filtering removes salt and pepper noise by replacing each pixel with the median value in the local neighborhood. Homomorphic filtering can remove multiplicative noise by separating illumination and reflectance components in the frequency domain and applying a high-pass filter. Histogram equalization can improve contrast in color images by applying the technique to one color band and using original ratios to determine values for other bands.
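The median filtering described above can be sketched directly in NumPy; edge replication and the 3x3 window are assumptions, and the single-impulse image is a toy example of salt-and-pepper noise:

```python
import numpy as np

def median_filter(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel with the median of its size x size neighborhood."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0                  # a single "salt" impulse
clean = median_filter(img)
print(clean[2, 2])                 # 100.0 -- the impulse is removed
```

Unlike spatial averaging, the median rejects the outlier entirely instead of smearing it into its neighbors, which is why it suits impulse noise.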
This presentation gives a detailed description of image enhancement techniques, including:
Basic gray-level transformations
Histogram processing
Enhancement using arithmetic/logic operations
Image averaging methods
Piecewise-linear transformation functions
Digital images represent real-world scenes using a grid of pixels, each with a value representing color or intensity. Common file formats like JPEG and PNG use lossy and lossless compression respectively. Images can be manipulated by changing individual pixel values, while graphics are defined by editable primitives. Digital image processing techniques are used to enhance, analyze and understand image content.
This document discusses image processing and its various applications and techniques. It defines image processing as processing images in a desired manner and explains it has two aspects: improving visual appearance for humans and preparing images for feature measurement. It describes why image processing is needed such as preparing digital images for viewing and optimizing images for applications. It also outlines different types of image processing like image-to-image, image-to-information, and information-to-image transformations.
Signatures of wave erosion in Titan’s coasts (Sérgio Sacani)
The shorelines of Titan’s hydrocarbon seas trace flooded erosional landforms such as river valleys; however, it is unclear whether coastal erosion has subsequently altered these shorelines. Spacecraft observations and theoretical models suggest that wind may cause waves to form on Titan’s seas, potentially driving coastal erosion, but the observational evidence of waves is indirect, and the processes affecting shoreline evolution on Titan remain unknown. No widely accepted framework exists for using shoreline morphology to quantitatively discern coastal erosion mechanisms, even on Earth, where the dominant mechanisms are known. We combine landscape evolution models with measurements of shoreline shape on Earth to characterize how different coastal erosion mechanisms affect shoreline morphology. Applying this framework to Titan, we find that the shorelines of Titan’s seas are most consistent with flooded landscapes that subsequently have been eroded by waves, rather than a uniform erosional process or no coastal erosion, particularly if wave growth saturates at fetch lengths of tens of kilometers.
Mechanisms and Applications of Antiviral Neutralizing Antibodies (Creative-Biolabs)
Neutralizing antibodies, pivotal in immune defense, specifically bind and inhibit viral pathogens, thereby playing a crucial role in protecting against and mitigating infectious diseases. In this slide, we will introduce what antibodies and neutralizing antibodies are, the production and regulation of neutralizing antibodies, their mechanisms of action, classification and applications, as well as the challenges they face.
BIRDS DIVERSITY OF SOOTEA BISWANATH ASSAM.ppt.pptx (goluk9330)
Ahota Beel, nestled in Sootea Biswanath Assam, is celebrated for its extraordinary diversity of bird species. This wetland sanctuary supports a myriad of avian residents and migrants alike. Visitors can admire the elegant flights of migratory species such as the Northern Pintail and Eurasian Wigeon, alongside resident birds including the Asian Openbill and Pheasant-tailed Jacana. With its tranquil scenery and varied habitats, Ahota Beel offers a perfect haven for birdwatchers to appreciate and study the vibrant birdlife that thrives in this natural refuge.
The cost of acquiring information by natural selection (Carl Bergstrom)
This is a short talk that I gave at the Banff International Research Station workshop on Modeling and Theory in Population Biology. The idea is to try to understand how the burden of natural selection relates to the amount of information that selection puts into the genome.
It's based on the first part of this research paper:
The cost of information acquisition by natural selection
Ryan Seamus McGee, Olivia Kosterlitz, Artem Kaznatcheev, Benjamin Kerr, Carl T. Bergstrom
bioRxiv 2022.07.02.498577; doi: https://doi.org/10.1101/2022.07.02.498577
ESA/ACT Science Coffee: Diego Blas - Gravitational wave detection with orbita... (Advanced-Concepts-Team)
Presentation in the Science Coffee of the Advanced Concepts Team of the European Space Agency on 07.06.2024.
Speaker: Diego Blas (IFAE/ICREA)
Title: Gravitational wave detection with orbital motion of the Moon and artificial satellites
Abstract:
In this talk I will describe some recent ideas to find gravitational waves from supermassive black holes or of primordial origin by studying their secular effect on the orbital motion of the Moon or satellites that are laser ranged.
Evidence of Jet Activity from the Secondary Black Hole in the OJ 287 Binary S... (Sérgio Sacani)
We report the study of a huge optical intraday flare on 2021 November 12 at 2 a.m. UT in the blazar OJ287. In the binary black hole model, it is associated with an impact of the secondary black hole on the accretion disk of the primary. Our multifrequency observing campaign was set up to search for such a signature of the impact based on a prediction made 8 yr earlier. The first I-band results of the flare have already been reported by Kishore et al. (2024). Here we combine these data with our monitoring in the R-band. There is a big change in the R–I spectral index by 1.0 ±0.1 between the normal background and the flare, suggesting a new component of radiation. The polarization variation during the rise of the flare suggests the same. The limits on the source size place it most reasonably in the jet of the secondary BH. We then ask why we have not seen this phenomenon before. We show that OJ287 was never before observed with sufficient sensitivity on the night when the flare should have happened according to the binary model. We also study the probability that this flare is just an oversized example of intraday variability using the Krakow data set of intense monitoring between 2015 and 2023. We find that the occurrence of a flare of this size and rapidity is unlikely. In machine-readable Tables 1 and 2, we give the full orbit-linked historical light curve of OJ287 as well as the dense monitoring sample of Krakow.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
Candidate young stellar objects in the S-cluster: Kinematic analysis of a sub...Sérgio Sacani
Context. The observation of several L-band emission sources in the S cluster has led to a rich discussion of their nature. However, a definitive answer to the classification of the dusty objects requires an explanation for the detection of compact Doppler-shifted Brγ emission. The ionized hydrogen in combination with the observation of mid-infrared L-band continuum emission suggests that most of these sources are embedded in a dusty envelope. These embedded sources are part of the S-cluster, and their relationship to the S-stars is still under debate. To date, the question of the origin of these two populations has been vague, although all explanations favor migration processes for the individual cluster members. Aims. This work revisits the S-cluster and its dusty members orbiting the supermassive black hole SgrA* on bound Keplerian orbits from a kinematic perspective. The aim is to explore the Keplerian parameters for patterns that might imply a nonrandom distribution of the sample. Additionally, various analytical aspects are considered to address the nature of the dusty sources. Methods. Based on the photometric analysis, we estimated the individual H−K and K−L colors for the source sample and compared the results to known cluster members. The classification revealed a noticeable contrast between the S-stars and the dusty sources. To fit the flux-density distribution, we utilized the radiative transfer code HYPERION and implemented a young stellar object Class I model. We obtained the position angle from the Keplerian fit results; additionally, we analyzed the distribution of the inclinations and the longitudes of the ascending node. Results. The colors of the dusty sources suggest a stellar nature consistent with the spectral energy distribution in the near and midinfrared domains. Furthermore, the evaporation timescales of dusty and gaseous clumps in the vicinity of SgrA* are much shorter ( 2yr) than the epochs covered by the observations (≈15yr). 
In addition to the strong evidence for the stellar classification of the D-sources, we also find a clear disk-like pattern following the arrangements of S-stars proposed in the literature. Furthermore, we find a global intrinsic inclination for all dusty sources of 60 ± 20◦, implying a common formation process. Conclusions. The pattern of the dusty sources manifested in the distribution of the position angles, inclinations, and longitudes of the ascending node strongly suggests two different scenarios: the main-sequence stars and the dusty stellar S-cluster sources share a common formation history or migrated with a similar formation channel in the vicinity of SgrA*. Alternatively, the gravitational influence of SgrA* in combination with a massive perturber, such as a putative intermediate mass black hole in the IRS 13 cluster, forces the dusty objects and S-stars to follow a particular orbital arrangement. Key words. stars: black holes– stars: formation– Galaxy: center– galaxies: star formation
3. Dichotomy in Remote Sensing
• image acquisition
• image inference
Fundamental Question in Remote Sensing: how to infer information about the landscape from the set of measurements that constitute an image?
physical landscape → image of landscape
5. There are spatial and temporal resolution considerations that must be
made for certain remote sensing applications.
6. Spatial Resolution
Imagery of residential housing in Mechanicsville, New York, obtained on June 1, 1998, at a nominal spatial resolution of 0.3 x 0.3 m (approximately 1 x 1 ft.) using a digital camera. (Jensen, 2007)
7. Question of the day (1)
• What is a good spatial resolution?
• What is a good scale?
8. Resolution tradeoff
• High spatial resolution is associated with low spectral resolution
• High spectral resolution is associated with low spatial resolution
• So…
• Emphasize the most important resolution - the one directly related to the application - and accept the other resolutions as given, or;
• Don't emphasize any particular resolution and accept medium spectral, spatial, and temporal resolution
9. Types of images (with respect to spatial resolution)
• H resolution image: pixels are smaller than the objects in the image
• L resolution image: pixels are larger than the objects in the image
Strahler, 1980
10. H image: 1 meter spatial resolution
L image: 500 meter spatial resolution
11. Why digital image processing?
• To extract information
• To evaluate a sensor
• To evaluate images statistically
• To assess/improve quality
• To make base data sets (e.g. for GIS)
• To make art http://www.remotesensingart.com
Digital image processing: the use of computer algorithms to perform signal processing on digital images
13. What is an image? (What's in an image?)
• An image is a two-dimensional picture that represents the appearance of some subject
• A digital image is a discrete representation of an image using ones and zeros (binary) as used in computing
• A raster image is a digital image made up of a finite number of pixels
• A pixel is the smallest individual element in a digital image, holding quantized values of brightness
• A pixel has both a position and a value consisting of a quantile (sample)
15. Statistical description of images
• image histogram
• individual pixel values
• univariate descriptive statistics - statistics derived from a single variable
• multivariate statistics - statistics derived from multiple variables
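As a minimal sketch of the univariate/multivariate distinction (the two-band pixel values below are invented for illustration):

```python
import numpy as np

# Hypothetical 2-band image: two 4x4 arrays of brightness values.
band1 = np.array([[10, 12, 11, 13],
                  [14, 15, 13, 12],
                  [11, 10, 12, 14],
                  [13, 12, 11, 10]], dtype=float)
band2 = band1 * 2 + 1  # a perfectly correlated second band

# Univariate statistics: derived from a single band.
mean1 = band1.mean()
std1 = band1.std()

# Multivariate statistics: derived from multiple bands jointly,
# e.g. the between-band correlation coefficient.
r = np.corrcoef(band1.ravel(), band2.ravel())[0, 1]

print(mean1, round(std1, 3), round(r, 3))
```

Here the correlation comes out at 1.0 because `band2` was constructed as a linear function of `band1`; real band pairs are only partially correlated.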
16. Image histogram
• The frequency of occurrence of individual
or binned brightness values in an image
17. Why do we care about histograms?
• Histograms provide a lot of information about images even without looking at the images themselves, such as the presence or absence of features, the distribution of brightness values, etc.
• Histograms help evaluate images statistically, e.g. normal, skewed, bimodal distribution
• Histograms are used in individual image enhancements
• Histograms are used in image classification
• Histograms are used in image segmentation
• Histograms help match images across time or space
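Computing a histogram is a one-liner; a minimal sketch with an invented 8-bit image:

```python
import numpy as np

# Hypothetical single-band 8-bit image.
img = np.array([[0, 1, 1, 2],
                [2, 2, 3, 3],
                [3, 3, 4, 4],
                [4, 4, 4, 5]], dtype=np.uint8)

# Frequency of occurrence of each brightness value 0..5
# (one bin per integer value).
counts, _ = np.histogram(img, bins=np.arange(7))
print(counts.tolist())  # [1, 2, 3, 4, 5, 1]
```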
23. A note on pixel position
• In addition to the values of pixels that make up an
image, the spatial location and orientation of these
pixels are also important
• For example, a stream of binary data has to be
assembled into a two-dimensional array (matrix) in
correct order for an image to display correctly.
• This is especially important for remotely sensed
data as pixels are inherently spatial (geographic)
• However, the location of the pixels does not
influence image statistics (unless they are spatial
statistics)
25. Image enhancements
• Enhancement is the process of manipulating an image
so that the result is more suitable than the original
for a specific application
• The word specific is highlighted here because it
establishes at the outset that enhancement
techniques are problem specific
• For example, an enhancement method applied to X-Ray images may not be suitable for enhancing satellite images
• There is no specific theory of image enhancement
• The end user is the ultimate judge of how well a
particular method works
26. Fundamentals
• The purpose of image enhancements:
• visually appealing images
• information extraction
• noise removal
• smoothing/sharpening
• Enhancements are generally of the form B = f(A)
• All of the image processing techniques we will discuss today are in the spatial domain (i.e. in the image plane itself) as opposed to the transformed domain
27. Fundamentals
• Two major types of enhancements in the
spatial domain:
• intensity transformation
• operates on a single point (pixel)
• spatial filtering
• operates on a group of points (pixels)
• Both are transformations (T) of the original input data into a new output data set, i.e. g(x,y) = T[f(x,y)]
28. An intensity transformation example
[Figure: a transformation curve s = T(r) mapping input intensity r (dark → light) to output intensity s = T(r) (dark → light), with a threshold point k and a sample mapping s0 = T(r0)]
The effect of applying T is to produce higher contrast than the original by darkening the intensity levels below k and brightening the levels above k.
29. Some basic intensity transformation functions
• log transformation - the general form of the log transformation is s = c log(1 + r); it is used for expanding the values of dark pixels while compressing the higher values. The opposite is the inverse log transformation.
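A sketch of the log transformation, with the scaling constant c chosen so the output fills the 0–255 range (the input values are invented):

```python
import numpy as np

def log_transform(img, max_out=255.0):
    """Log transformation s = c*log(1 + r): expands dark values,
    compresses bright ones. c scales the output to 0..max_out."""
    img = img.astype(float)
    c = max_out / np.log(1.0 + img.max())
    return c * np.log1p(img)  # log1p(r) = log(1 + r)

dark = np.array([1, 10, 100, 255], dtype=float)
print(np.round(log_transform(dark)).astype(int))
```

Note how the gap between 1 and 10 is stretched far more than the gap between 100 and 255.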
31. Histogram processing
• histograms are the basis for numerous intensity domain processing techniques
• in addition to providing useful image statistics, histogram manipulation can be used for image enhancement as well as image compression and segmentation
• histogram processing transforms image values through manipulation of the image histogram
32. Histogram processing
• basic stretch based on histogram shape
  e.g. enhancing a low contrast image to have higher contrast
• matching the image histogram to a pre-defined shape, statistical distribution, or to another histogram
  e.g. histogram equalization, normal distribution, gamma distribution
• using histogram statistics
  e.g. min/max, 2% saturation, 1 sigma
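Histogram equalization, one of the matching methods above, can be sketched by mapping values through the normalized cumulative histogram (an 8-bit image and invented low-contrast values are assumed):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: build a lookup table from the
    normalized cumulative distribution function (CDF)."""
    hist, _ = np.histogram(img, bins=np.arange(levels + 1))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                       # normalize CDF to 0..1
    lut = np.round((levels - 1) * cdf)   # lookup table, 0..levels-1
    return lut[img].astype(np.uint8)

low_contrast = np.array([[50, 51, 52, 52],
                         [53, 53, 53, 54]], dtype=np.uint8)
out = equalize(low_contrast)
print(out.min(), out.max())  # values now spread toward the full 0..255 range
```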
36. Spatial enhancement (filtering)
• selectively emphasizing or suppressing information across different spatial scales is the subject of spatial enhancement
• enhancement of this type uses filters (or kernels) - a matrix of numbers - applied to the image over small spatial regions at a time
• these techniques operate selectively on the image data, which contain different information at different spatial scales
38. Spatial enhancement (filtering)
• noise removal/addition
• representation of spatial variability of a feature by region
• extraction of a particular spatial-scale component from an image
• smoothing
• edge detection
• frequency domain
39. What is a filter?
• by analogy with the procedure used in chemistry to separate components of a suspension, a digital filter is used to extract a particular feature (spatial-scale) component from a digital image
Example 3x3 kernels:
average:
 1/9 1/9 1/9
 1/9 1/9 1/9
 1/9 1/9 1/9
vertical:
 -1 0 1
 -1 0 1
 -1 0 1
horizontal:
 -1 -1 -1
  0  0  0
  1  1  1
laplacian:
 -1 -1 -1
 -1  8 -1
 -1 -1 -1
laplacian:
  0 -1  0
 -1  4 -1
  0 -1  0
diagonal:
 -1 -1  0
 -1  0  1
  0  1  1
40. What is a filter?
• low-pass (smooths data)
  • moving-average
  • median
  • mode
  • adaptive filters
• high-pass (sharpening)
  • image subtraction method
  • derivative method (laplacian)
  • edge detection
41. median filter with noise example
original image 3x3 filter 7x7 filter 9x9 filter
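The contrast between a moving-average and a median filter can be sketched on a synthetic image with a single shot-noise pixel (the image and values below are invented for illustration; border pixels are left untouched for brevity):

```python
import numpy as np

def filter3x3(img, func):
    """Apply func (e.g. np.mean or np.median) over each interior
    3x3 neighborhood; borders are copied unchanged."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = func(img[i-1:i+2, j-1:j+2])
    return out

img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # one shot-noise (impulse) pixel

mean_f = filter3x3(img, np.mean)
med_f = filter3x3(img, np.median)
print(mean_f[2, 2], med_f[2, 2])
```

The median filter restores the noisy pixel to the background value of 10, while the moving average smears the impulse into its neighborhood; this is why the median is preferred for shot-noise removal.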
44. What is geometric correction?
• RS images are not maps
• Information extracted from RS data is often
integrated with GIS for further analysis or to
present to end-users
• Transformation of RS image data so that it has
appropriate spatial scale and appropriate projection
is called geometric correction
• Registration is the process of fitting the coordinate
system of one image to another image of the same
area
45. Why geometric correction?
• to transform an image to match a map projection
• to locate points of interest on a map or image
• to bring adjacent (or overlapping) images into a common registration
• to overlay temporal sequences of images of the same area (different times, different sensors)
• to overlay images and information products with other maps and GIS
Mather, 1999
46. Sources of geometric distortions in digital satellite data
• instrument error
  • distortion in the optical system
• panoramic distortion
  • sensor field of view
• Earth rotation
  • Earth rotation velocity changes with latitude
• platform instability
  • variation in platform altitude and attitude
Mather, 1999
47. Two forms of geometric correction
1. To match a distorted image to a map reference - ground position is important; the correction information can come from a map, ground-collected points, or another image with geographic coordinates
2. To match a distorted image to a "correct" image - ground position is not important (unless you want it to be), but the image-to-image match is the key
48. Common questions in image registration
• How many points?
  • Depends on how difficult/easy the image pair is; 30-60 is often sufficient
• How should they be distributed?
  • evenly across the entire image
  • Moving target problem! Be careful with lakes, reservoirs, agricultural fields
• What is the desired accuracy?
  • minimum 0.5 pixel or better (you'll never get 100%)
50. Resampling
• Nearest Neighbor - select the nearest neighbor of the target
pixel whose location is computed from transformation
• Bilinear Interpolation - do two linear interpolations, first in
one direction (X), and then again in the other direction (Y)
• Cubic Convolution - the brightness value of a pixel in a
corrected image is interpolated from the brightness values of
the 16 nearest pixels around the location of the corrected
pixel
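The bilinear interpolation step can be sketched as follows (a minimal illustration on an invented 2x2 image; real resampling applies this at every output pixel location computed from the transformation):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear resampling: two linear interpolations, first along
    x, then along y, using the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x0 + 1]
    bottom = (1 - dx) * img[y0 + 1, x0] + dx * img[y0 + 1, x0 + 1]
    return (1 - dy) * top + dy * bottom

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0, the average of the four neighbors
```

Note that, unlike Nearest Neighbor, the output value (15.0) appears nowhere in the input, which is exactly why interpolation-based resampling should be avoided on categorical data.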
52. Which resampling method?
• For analysis of remotely sensed data, prefer Nearest Neighbor resampling to preserve the original reflectance values
• Never perform interpolation-based resampling on categorical values
• You may be able to use interpolation-based algorithms to smooth the data in places where this is needed
• Interpolation-based algorithms (especially Cubic Convolution) are slow!
53. What order?
• During registration and resampling, you will be asked to choose the order of the polynomial equation that will be used for the transformation
• Often, 1st order polynomials (linear equations) are sufficient for satellite data because satellite platforms are inherently stable and only rotational distortion occurs
• For images acquired with other devices (e.g. from aircraft), use 2nd or 3rd order polynomials because of the multiple distortions in the image
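A first-order (affine) transformation can be fitted to ground control points by least squares; the sketch below uses invented image/map GCP coordinates, and the fit residuals give the registration accuracy (RMSE):

```python
import numpy as np

# Hypothetical ground control points: image (col, row) -> map (E, N).
img_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
map_pts = np.array([[500000, 4000000],
                    [503000, 4000000],
                    [500000, 3997000],
                    [503000, 3997000],
                    [501500, 3998500]], float)

# First-order polynomial: E = a0 + a1*col + a2*row (same form for N).
A = np.column_stack([np.ones(len(img_pts)), img_pts])
coef, *_ = np.linalg.lstsq(A, map_pts, rcond=None)

# Registration accuracy: RMSE of the GCP residuals.
pred = A @ coef
rmse = np.sqrt(((pred - map_pts) ** 2).sum(axis=1).mean())
print(round(float(rmse), 6))  # near zero here, since the GCPs are exactly affine
```

With real GCPs the residuals are nonzero, and the slide's "minimum 0.5 pixel" criterion is checked against this RMSE.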
56. What is radiometric correction?
• The radiance value recorded by an imaging sensor is not
a true record of ground-leaving radiance (or reflectance)
because the original signal is distorted by atmospheric
absorption, scattering, as well as instrument errors
• The purpose of radiometric correction is to remove
these effects so that an image of true surface properties
can be obtained or compared
• The need for this correction and the chosen method
depends on the remote sensing problem, available
atmospheric information, satellite data, and detail and
expertise available
57. Sources of radiometric errors
• Internal errors are introduced by the remote sensing system. They are generally systematic (predictable) and may be identified and then corrected based on prelaunch or in-flight calibration measurements. For example, in many instances, radiometric correction can adjust for detector mis-calibration
• External errors are introduced by phenomena that vary in nature through space and time. They include the atmosphere, terrain, slope, and aspect. Some external errors may be corrected by relating empirical ground observations (i.e., radiometric and geometric ground control points) to sensor measurements
58. Internal radiometric errors
• Ideally, the radiance recorded by a remote sensing system in various bands is an accurate representation of the radiance actually leaving the feature of interest (e.g., soil, vegetation, water, or urban land cover) on the Earth's surface. Unfortunately, noise (error) can enter the data-collection system at several points. Several of the more common remote sensing system–induced radiometric errors are:
  • random bad pixels (shot noise),
  • line-start/stop problems,
  • line or column drop-outs,
  • partial line or column drop-outs, and
  • line or column striping
59. Random bad pixels (shot noise)
a) Landsat Thematic
Mapper band 7 (2.08 –
2.35 µm) image of the
Santee Delta in South
Carolina. One of the 16
detectors exhibits serious
striping and the absence of
brightness values at pixel
locations along a scan line.
b) An enlarged view of the
bad pixels with the
brightness values of the
eight surrounding pixels
annotated.
c) The brightness values of
the bad pixels after shot
noise removal. This image
was not destriped.
Jensen, 2004
61. N-line striping
a) Original band 10 radiance (W m-2 sr-1) data from a GER DAIS 3715 hyperspectral dataset of the Mixed Waste Management Facility on the Savannah River Site near Aiken, SC. The subset is focused on a clay-capped hazardous waste site covered with Bahia grass and Centipede grass. The 35-band dataset was obtained at 2 x 2 m spatial resolution. The radiance values along the horizontal (X) and vertical (Y) profiles are summarized in the next figure.
b) Enlargement of band 10 data.
c) Band 10 data after destriping.
d) An enlargement of the destriped data.
Jensen, 2004
63. External radiometric errors
• Even if the remote sensor is functioning properly, external radiometric errors can be introduced by phenomena that vary in nature through space and time. They are external to the remote sensing process but heavily influence the resulting image data.
• External errors are often non-systematic and include the atmosphere, terrain, slope, and aspect.
• Some external errors may be corrected by relating empirical ground observations (i.e., radiometric and geometric ground control points) to sensor measurements.
• Correcting for these external effects is often called "atmospheric correction" even though the correction may apply to both the atmosphere and the terrain
64. Atmospheric Correction
• Relative Correction (Radiometric normalization): correction is performed relative to other (reference) images so that the corrected image is normalized as if it was acquired under the same atmospheric, sensor, and topographic conditions.
• Absolute Correction (image-based methods and RT-based methods): requires (hard-to-obtain) information on atmospheric conditions at the time of image acquisition. The results are in absolute surface reflectance units.
65. Unnecessary atmospheric correction
• Sometimes, it is possible to ignore atmospheric and terrain effects in remotely sensed data completely. For example, atmospheric correction is not necessary for certain types of classification and change detection.
• Research shows that atmospheric correction is necessary only when training data from one place and one time must be extended through space and time.
• For example, atmospheric correction is not necessary for single-date image data that will be classified, as long as the learning (training) data come from the same image/date. Jensen, 2004
66. • The general principle is that atmospheric
correction is not necessary as long as the training
data are extracted from the image (or image
composite) under investigation and are not
imported from another image acquired at another
time or place
Unnecessary atmospheric correction
Jensen, 2004
67. Atmospheric Correction: Relative Correction (Radiometric normalization) vs. Absolute Correction (Image-based and RT-based methods)
68. Relative radiometric correction methods (three among many others)
• single image normalization using histogram adjustment
• multi-date image normalization using regression
• the ridge method
71. Atmospheric scattering in spectral regions
[Figure: TM blue band and TM MIR band imagery]
72. Single image normalization using histogram adjustment
• The simplest method of relative atmospheric correction is based primarily on the fact that infrared data (> 0.7 microns or 700 nm) are largely free of atmospheric scattering effects, whereas the visible range (0.4 - 0.7 microns) is strongly influenced by atmospheric scattering
• The method involves evaluating the histograms of various bands of remotely sensed data
• Normally, the data collected in the visible region have a higher minimum value because of increased atmospheric scattering
74. Single image normalization using histogram adjustment
• But in the NIR region, atmospheric scattering is nearly zero, so the histogram minimums are close to zero (i.e. have a true reflectance of zero)
• Of course, this only applies to situations where a "dark object" exists with respect to NIR radiation (such as water)
• If the visible band histograms are shifted to the left so that values near zero appear in the data, the effects of atmospheric scattering will be somewhat minimized
75. single image normalization using histogram adjustment
NOTE: Does not account for atmospheric absorption!!
76. Radiometric correction using regression
• involves selecting a base (reference) image and then transforming the spectral characteristics of all other (subject) images obtained at different dates to have approximately the same radiometric scale as the base (reference) image
• need to select pseudo-invariant features (PIFs) or radiometric ground control points
• obtain a regression equation using values from the PIFs and then apply it to the subject images
77. Characteristics of PIFs
• the PIFs should change very little across time, although some change is inevitable - deep non-turbid reservoirs (not coastal areas), bare soil, large rooftops, and other homogeneous areas are good candidates
• the PIFs should be at the same elevation as other objects in the scene (image)
• the PIFs should contain minimal amounts of vegetation - vegetation spectral response changes over time - although extremely stable homogeneous canopies are acceptable
78. Radiometric correction using regression
[Figure: scatterplot of date 1 (red) vs. date 2 (red) values]
83. Atmospheric Correction: Relative Correction (Radiometric normalization) vs. Absolute Correction (Image-based and RT-based methods)
84. Atmospheric correction
• Absolute atmospheric correction requires information on atmospheric properties and a method to remove those interactions, using either a simple (image-based) approach or a complex radiative transfer code
• The choice of method depends on the availability of data, resources, tools, and expertise
• The desired end result is application dependent
• More involved methods require a detailed sensor spectral profile and information on atmospheric properties at the time of image acquisition, but this information is rarely available
85. Absolute Atmospheric Correction
• Image based methods
• Use the information available in the image itself to
extract atmospheric parameters and then use this
information to correct the image
• Radiative transfer based methods
• Use the information about the atmosphere within a
radiative transfer model which simulates solar radiation
interaction with atmospheric particles
• Hybrid approaches
• Use a combination of the two approaches
86. Absolute atmospheric correction
• Image-based correction
  • Empirical line calibration
  • DOS (Dark Object Subtraction)
• Hybrid methods
  • DDV (Dark Dense Vegetation)
• Radiative Transfer Model based correction
  • 6S (Second Simulation of the Satellite Signal in the Solar Spectrum)
  • LOWTRAN (LOW resolution atmospheric TRANsmission)
  • MODTRAN (MODerate resolution atmospheric TRANsmission)
  • Streamer (UW-Madison!)
87. Image-based methods
• easy to implement - the image (and some external data) is all you need
• the idea is that some or all of the atmospheric parameters can be obtained from the image itself
• simplified assumptions about atmospheric absorption and transmittance of radiation
• often based on dark targets for haze correction - only scattering is corrected, in a bulk approach
• successfully applied to many environments
88. Empirical line calibration
• a simple method based on regression (i.e. the empirical line) which forces the image data to match in situ (or other atmospherically corrected, image-based) spectral reflectance measurements
• the analyst selects two or more areas in the image with different albedos (bright sand, dark water)
• then, in-situ (or other image) spectral reflectance measurements are made over the same targets and a regression line is computed from the two datasets
• if in-situ spectra are not available, it is possible to use established libraries of reflectance properties of different objects, but these materials must exist in the image
89. a) Field crew taking a spectroradiometer measurement from a calibrated reflectance standard on the tripod. b) 8 x 8 m black and white calibration targets at the Savannah River Site.
Jensen, 2004
90. DOS - Dark Object Subtraction
• perhaps the simplest yet most widely used image-based absolute atmospheric correction approach (Chavez, 1989)
• based on the assumption that some pixels in the scene (image) are in complete shadow and the radiances received at the satellite from them are due to atmospheric scattering
• also assumes that very few pixels (targets) are completely dark, so 1% [0.01] reflectance is assumed instead of 0% [completely dark]
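A minimal sketch of DOS for one band, following the 1% reflectance assumption above (the DN values and the simple 8-bit DN-to-reflectance scaling are invented for illustration; operational DOS works in calibrated radiance units):

```python
import numpy as np

def dark_object_subtraction(band, reflect_floor=0.01, max_dn=255.0):
    """Dark Object Subtraction sketch: treat the darkest pixel as a
    1%-reflectance target and subtract the excess DN, which is
    attributed to atmospheric scattering (path radiance)."""
    dark_dn = band.min()
    haze = dark_dn - reflect_floor * max_dn  # DN attributed to scattering
    return np.clip(band - haze, 0, max_dn)

band = np.array([[30.0, 45.0],
                 [60.0, 120.0]])  # min DN is 30, elevated by haze
corrected = dark_object_subtraction(band)
print(corrected)  # the darkest pixel now sits at the 1% floor (DN 2.55)
```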
91. Radiative Transfer Models
• models that calculate the radiative transfer of electromagnetic radiation through the Earth's atmosphere and the radiation's interaction with atmospheric particles
• they are often used in numerical weather prediction and climate models
• they require information about atmospheric particles at the time of image acquisition; in the absence of this information, they use standard profiles
• in remote sensing, they are used to estimate the contribution of atmospheric radiance to the radiance observed by the sensors (using inversion) and subtract this amount from the satellite signal
92. Hybrid methods
• DDV - Dark Dense Vegetation - postulates a linear relationship between shortwave IR (2.2 microns) surface reflectance (unaffected by the atmosphere) and surface reflectance in the blue (0.4 microns) and red (0.6 microns) bands
• using this relation, surface reflectances for the visible bands are calculated and then compared to TOA reflectance to estimate Aerosol Optical Depth (AOD) using a radiative transfer code
• assumes dark vegetation exists in the scene and is identified by NDVI > 0.1 and SWIR(ref) < 0.05
96. What is classification?
• thematic information extraction
• pattern recognition
• grouping of like-value measurements (pixels)
• image categorization
97. What is classification?
• The intent of the classification process is to categorize all pixels in a digital image into one of several land cover classes, or "themes".
• In remote sensing, multispectral (or multi-anything) data are used to perform the classification, and it is this multivariate pattern present within the data for each pixel that is used as the numerical basis for categorization
• The objective of image classification is to identify and portray, as a unique gray level (or color), the features occurring in an image in terms of the object or type of land cover these features actually represent on the ground.
103. Spatial scales and inputs
• Across these spatial scales, the methods are similar, exploiting the most useful information in an image (or a series of images)
• What is different, however, is the type of inputs that go into the classification algorithm
• At local scales, it is the spatial information that matters most (and we have less spectral/temporal information)
• At global scales, it is the temporal information that is most useful for accurate land-cover classifications
104. Categorization of classification methods
• Classification may be performed using the following methods:
• algorithms based on supervised or unsupervised classification logic;
• algorithms based on parametric or non-parametric statistics and non-metric methods;
• the use of hard or soft (fuzzy) set classification logic to create hard or soft labels;
• the use of per-pixel or object-oriented classification logic, and
• hybrid approaches
105. unsupervised vs. supervised
• In a supervised classification, the identity and location of the land-cover types (e.g., urban, agriculture, or wetland) are known a priori, and the analyst attempts to locate specific sites in the remotely sensed data that represent homogeneous examples of these known land-cover types. These sites are known as training sites because they are used to train the classification algorithm for eventual land-cover mapping of the remainder of the image.
• In an unsupervised classification, the identities of the land-cover types to be specified as classes within a scene are generally not known a priori, and the computer is required to group pixels with similar spectral characteristics into unique clusters according to some statistically determined criteria. The analyst then re-labels and combines the spectral clusters into information classes.
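The unsupervised case can be sketched with a minimal k-means clusterer (the two-band pixel values are invented; operational classifiers such as ISODATA add cluster split/merge logic on top of this):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means: group pixels with similar spectral values
    into k clusters; the analyst labels the clusters afterwards."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Hypothetical 2-band pixels: a dark (water-like) and a bright group.
pixels = np.array([[10, 12], [11, 13], [12, 11],
                   [200, 180], [205, 185], [198, 182]], dtype=float)
labels, centers = kmeans(pixels, k=2)
print(labels)  # one spectral cluster per group
```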
106. parametric vs. non-parametric
• Parametric methods assume certain distributional properties in remote sensor data and knowledge about the forms of the underlying class density functions
• Non-parametric methods may be applied to remote sensor data with any histogram distribution, without the assumption that the forms of the underlying densities are known.
• Non-metric methods, such as rule-based decision tree classifiers, can operate on both real-valued data (e.g., reflectance values from 0 to 100%) and nominal scaled data (e.g., class 1 = forest; class 2 = agriculture).
107. • Supervised and unsupervised classification algorithms
typically use hard classification logic to produce a
classified map that consists of hard, discrete categories
(e.g., forest, agriculture,water).
• Conversely, it is also possible to use fuzzy set
classification logic, which takes into account the
heterogeneous and imprecise nature of the real world.
Here, the classes are more imprecise and can have a
gradient of labels (e.g., 60% forest, 40% ag)
hard vs. soft (fuzzy)
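The hard vs. soft distinction above can be sketched in a few lines of Python (the deck contains no code, so this is an illustrative sketch; the class names and membership values are hypothetical):

```python
# Hard vs. soft (fuzzy) classification output for a single pixel.
# Soft logic keeps a membership grade per class; a hard label can be
# derived afterwards by taking the class with the largest grade.

def harden(memberships):
    """Collapse a soft class-membership dict into a single hard label."""
    return max(memberships, key=memberships.get)

# Soft output for one mixed pixel: 60% forest, 40% agriculture.
pixel = {"forest": 0.60, "agriculture": 0.40, "water": 0.00}

print(pixel)          # soft labels preserve the mixture
print(harden(pixel))  # hard label discards it -> forest
```

Note that hardening throws away exactly the gradient information (60%/40%) that fuzzy classification was designed to keep.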
108. • In the past, most digital image classification was based on
processing the entire scene pixel by pixel. This is commonly
referred to as per-pixel classification.
• Object-oriented classification techniques allow the analyst to
decompose the scene into many relatively homogeneous
image objects (referred to as patches or segments) that are
bigger than individual pixels using an image segmentation
process. The various statistical characteristics of these
homogeneous image objects in the scene are then
subjected to traditional statistical or fuzzy logic
classification. Object-oriented classification based on image
segmentation is often used for the analysis of
high-spatial-resolution imagery.
per-pixel vs. object oriented
109. define the problem
acquire data
process data to extract thematic information
perform accuracy assessment
distribute results
Image classification process
select class labels
110. • What is thematic information you wish to
extract?
• What are the categories you need?
• What is the desired extent and temporal
frequency?
• What is the desired accuracy?
• What are your resources?
defining the problem
111. • What are the categories you need?
• How detailed should they be?
• Do you have the information in satellite data to
extract your desired labels?
• What scheme to use?
• What are your resources?
selecting class labels
112. • All class labels of interest must be selected and defined
carefully to classify remotely sensed data successfully
into land-cover information.
• This requires the use of a classification scheme containing
taxonomically correct definitions of classes of information
that are organized according to logical criteria. If a hard
classification is to be performed, then the classes in the
classification system should normally be:
• mutually exclusive
• exhaustive, and
• hierarchical.
Classification schemes (i.e. class labels)
Jensen, 2005
113. • Certain hard classification schemes can readily incorporate land-use
and/or land-cover data obtained by interpreting remotely sensed data,
including the:
American Planning Association Land-Based Classification System which is oriented toward
detailed land-use classification;
United States Geological Survey Land-Use/Land-Cover Classification System for Use with
Remote Sensor Data and its adaptation for the U.S. National Land Cover
Dataset and the NOAA Coastal Change Analysis Program (C-CAP);
U.S. Department of the Interior Fish & Wildlife Service Classification of Wetlands and
Deepwater Habitats of the United States;
U.S. National Vegetation Classification System;
International Geosphere-Biosphere Program IGBP Land Cover Classification System
modified for the creation of MODIS land-cover products
LCCS - Land Cover Classification System (UN FAO)
Existing classification schemes
114. • Is the data available? i.e. is it collected? Who
collected it? Where is it?
• How do data sets relate to your problem - e.g. do
you have the spatial/spectral/temporal/angular data
requirements to extract the labels you require?
• What are the spatial/spectral/temporal/angular
characteristics?
• How much does it cost?
• Do you have the tools to read/view/edit this data?
acquire data
115. • Which methods to use?
• How much do you know about your study site - a
lot (supervised) - not so much (unsupervised - or
exploratory analysis)
• If supervised, where does your training data come
from? Do you generate it yourself, or obtain it from
an existing source?
• If unsupervised, how many spectral classes? How
much variability in the landscape?
• Any ancillary information?
process data to extract thematic information
116. • Which methods to use?
• Where will the “truth” data come from? From you
on the ground, from an ancillary data, from other
people, from another map?
• What is the expected accuracy?
• What is the desired accuracy?
• What are your resources?
accuracy assessment
117. • Map vs. area estimates? (are they the same?)
• Who is the audience? Technical person - include
most information; End-user - provide the most
important information
• What format (data type, data size?)
• Is the map registered to Earth coordinates to be
used within a GIS?
• What is the accuracy of the map (or the product)
you are delivering? If your audience doesn't ask,
you should tell them anyway!
distribute results
118. unsupervised vs. supervised (recap of slide 105)
119. • Unsupervised classification (commonly referred to as
clustering) is an effective method of partitioning remote
sensor image data in multispectral feature space and
extracting land-cover information. Compared to
supervised classification, unsupervised classification
normally requires only a minimal amount of initial input
from the analyst. This is because clustering does not
normally require training data.
Unsupervised classification
Jensen, 2005
120. • How many spectral classes should I use? The rule of thumb is to
have 10 spectral classes per land-cover class of interest. This
often captures enough variability in any given image
• Which method should I use? Tests show that many of the
clustering algorithms function more or less the same way and
the results are more dependent on available information in
the image to separate land cover classes than the choice of
the algorithm.
• What would I do if spectral classes don’t capture my land-cover
classes? Isolate the classes that worked, mask them out, then
re-cluster, this time paying attention to the more difficult classes.
• Can I merge supervised classification results with unsupervised
approaches? Of course you can, through masks
Common questions
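The clustering idea behind unsupervised classification can be sketched with a toy one-dimensional k-means (hypothetical single-band reflectance values; real workflows cluster multispectral pixels, typically with many more spectral classes, then re-label clusters into information classes):

```python
# Toy unsupervised clustering: 1-D k-means on synthetic reflectances.
import random

def kmeans_1d(values, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # initial cluster centers
    for _ in range(iters):
        # assign each pixel value to its nearest center
        groups = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            groups[i].append(v)
        # recompute each center as the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two synthetic spectral classes: dark (water-like) and bright (soil-like).
pixels = [0.05, 0.06, 0.04, 0.07, 0.41, 0.43, 0.40, 0.44]
print(kmeans_1d(pixels, k=2))  # two centers, one near each group
```

Consistent with the slide's point, the separability of the two groups in the data matters far more here than the particular clustering algorithm chosen.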
121. unsupervised vs. supervised (recap of slide 105)
122. • There are a number of ways to collect the training site
data, including:
collection of in situ information such as tree type, height,
percent canopy closure, and diameter-at-breast-height
(dbh) measurements,
on-screen selection of polygonal training data, and/or
on-screen seeding of training data.
Training site selection
Jensen, 2004
123. • All training data must be mutually exclusive, i.e. each must
represent a separate class, although a number of training
sites may represent a single class.
• It is better to define more classes of interest than the
required number of land-cover classes for the final map, as
merging classes is much easier than disaggregating them
• Often, homogeneous locations (easy examples) are chosen
to define training sites, but there is value in randomization as
well
• The goal is to capture as much variability of a class within a
training data set as possible without overburdening the
classification algorithm
Jensen, 2004
Characteristics of training data
125. • There is a relationship between the level of detail
in a classification scheme and the spatial resolution
of remote sensor systems used to provide
information.
• This suggests that the level of detail in the desired
classification system dictates the spatial resolution
of the remote sensor data that should be used.
• Of course, the spectral resolution of the remote
sensing system is also an important consideration
Classification and spatial scale
126. Resolution effects on image classification
• In H-type images, classification accuracy would
be low due to within-class variability
• In L-type images, classification accuracy would
be low due to between-class variability
• There is always an optimum resolution for the
map of interest
• But, you may not have data at that optimum
resolution
127. • In contrast, there is an inverse relationship between
the level of categorical detail and the accuracy of the
classification
• higher categorical detail (i.e. a large number of
classes) often leads to lower accuracy, while
lower categorical detail (i.e. a smaller number of
classes) is generally easier to map accurately
• e.g. a map with only deciduous vs. evergreen forest
labels is generally more accurate (and is easier to
make) than a map that attempts to distinguish
evergreen tree species
Classification and categorical scale
128. Nominal spatial resolution
requirements as a function of the
mapping requirements for Levels I
to IV land-cover classes in the
United States (based on Anderson
et al., 1976). Note the dramatic
increase in spatial resolution
required to map Level II classes.
Jensen, 2005
129. • Land cover refers to the type of material present on
the landscape (e.g., water, sand, crops, forest, wetland,
human-made materials such as asphalt). Remote
sensing image classification always produces a land-cover
map
• Land use refers to what people do on the land surface
(e.g., agriculture, commerce, settlement). It is the use
of the land cover and must be interpreted from the
land-cover map made from remote sensing
land-cover vs. land-use
131. • Why?
- We want to assess the accuracy and test our
hypothesis
- Create an objective means of map comparison
- Correct area estimates
• How?
- Extract known samples and test against
predictions using a confusion matrix
Accuracy assessment
132. • When a map is made from remotely sensed data
using a classification algorithm, that map is
considered to be only a hypothesis
• As with other hypothesis-based problems, the
hypothesis has to be tested with data
• Testing is done by extracting samples from your
map, comparing these samples to a known reference,
and simply keeping track of how many are correct
• Accuracy can then be reported using a variety of
metrics, and a degree of confidence can be
attached to the classification results
Accuracy assessment
133. How do you collect reference data?
• Back-classification of training data
• Cross-validation
• Independent non-random samples
• Independent random samples
• Independent stratified random samples
Which sampling design?
134. • What is the sample?
• Which method to use to assess accuracy?
• What is my metric to measure accuracy?
• What is spatial autocorrelation?
• What is a good accuracy?
Common questions in accuracy assessment
135. What is a sample?
• A sample is a subset of a population
• In general, the population is very large, making a
census of all the values in the population impractical or
impossible
• The sample represents a subset of manageable size.
Samples are collected and statistics are calculated from
the samples so that one can make inferences or
extrapolations from the sample to the population
• This process of collecting information from a sample
is referred to as sampling
Wikipedia
137. Systematic sampling is a statistical method involving
the selection of elements from an ordered sampling
frame. The most common form of systematic
sampling is an equal-probability method, in which
every kth element in the frame is selected
Systematic Sample
138. Cluster sampling is a sampling technique used when
"natural" groupings are evident in the population. The
total population is divided into these groups and a
sample of the groups is selected. The method works
best when most of the variation in the population is
within the groups, not between them
Cluster Sampling
139. Each sample is chosen randomly and entirely by
chance, such that each individual in the sample has
the same probability of being chosen at any stage
during the sampling process.
Simple Random Sampling
140. When sub-populations vary considerably, it is
advantageous to sample each subpopulation (stratum)
independently. Stratification is the process of grouping
members of the population into relatively
homogeneous subgroups before sampling. Then
random or systematic sampling is applied within each
stratum.
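Stratified random sampling, as described above, can be sketched in a few lines of Python (the strata here are hypothetical pixel IDs grouped by a map stratum; sampling is simple random within each stratum):

```python
# Stratified random sampling: simple random sampling within each stratum.
import random

def stratified_sample(strata, n_per_stratum, seed=0):
    """strata: dict mapping stratum name -> list of sampling-unit IDs."""
    rng = random.Random(seed)
    return {name: rng.sample(ids, min(n_per_stratum, len(ids)))
            for name, ids in strata.items()}

# Hypothetical pixel IDs per land-cover stratum.
strata = {"forest": list(range(0, 100)),
          "water":  list(range(100, 130))}
picks = stratified_sample(strata, 5)
print({k: len(v) for k, v in picks.items()})  # 5 samples from each stratum
```

Because each stratum is sampled independently, even the small "water" stratum is guaranteed representation, which simple random sampling over the whole population would not ensure.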
Stratified Random Sampling
141. Sample size ultimately depends on a number of
factors, including:
• Expected accuracy
• Desired accuracy
• Desired confidence level
• Ultimately, the resources available
Sample Size
142. • Proportional Allocation: the sample size of each
stratum is proportionate to the population size
of the stratum. Strata sample sizes are
determined by the following equation:
• ns = (Ns/Np) * np
• ns is the stratum sample size
• Ns is the stratum population size
• Np is the total population size
• np is the total sample size
Proportional Allocation
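The allocation equation ns = (Ns/Np)*np from the slide can be applied directly in code (the stratum pixel counts and total sample size below are hypothetical):

```python
# Proportional allocation: each stratum's sample size is proportional
# to its share of the total population, ns = (Ns/Np) * np.

def proportional_allocation(strata_sizes, total_samples):
    Np = sum(strata_sizes.values())               # total population size
    return {name: round(Ns / Np * total_samples)  # ns = (Ns/Np) * np
            for name, Ns in strata_sizes.items()}

# Hypothetical pixel counts per land-cover stratum, 200 total samples.
strata = {"forest": 60000, "agriculture": 30000, "water": 10000}
print(proportional_allocation(strata, 200))
# {'forest': 120, 'agriculture': 60, 'water': 20}
```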
143. • a good rule of thumb:
• 50 samples per class
• if number of classes is high (>12), 75-100
samples per class
• homogeneous classes - fewer samples
Sample Size
144. • Once a sample is selected from the population, and
checked against a reference data set, the percentage
of pixels from each class in the image labeled
correctly by the classifier can be estimated
• Along with these percentages, the proportions of
pixels from each class erroneously labeled into
every other class can also be estimated
• The tool that expresses these results in a tabular
form is called the confusion or error matrix
Confusion Matrix
(Error matrix)
147. Which is better?
high user’s or
high producer’s accuracy?
• depends on your ultimate use
• example:
aerial tree
spraying program
option #1
• need to spray
only deciduous trees
• cannot damage coniferous!
• higher user’s accuracy is good –
if it says deciduous on the map,
it is likely deciduous
148. Which is better?
high user’s or
high producer’s accuracy?
• depends on your ultimate use
• example:
aerial tree
spraying program
option #2
• need to spray deciduous trees and
must get all of them
• no problem if other trees are
sprayed as well
• a higher producer’s accuracy is
good –
we must correctly classify a lot
of deciduous trees
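Both user's and producer's accuracy come straight from the confusion matrix. A minimal Python sketch with hypothetical deciduous ("dec") and coniferous ("con") labels: user's accuracy is computed along the map (row) direction, producer's along the reference (column) direction.

```python
# Confusion matrix with user's and producer's accuracy per class.
from collections import Counter

def confusion(reference, predicted):
    # counts of (map label, reference label) pairs
    return Counter(zip(predicted, reference))

def users_accuracy(m, cls):
    """Of pixels MAPPED as cls, the fraction that truly are cls."""
    row = sum(v for (p, r), v in m.items() if p == cls)
    return m[(cls, cls)] / row if row else 0.0

def producers_accuracy(m, cls):
    """Of pixels that TRULY are cls, the fraction mapped as cls."""
    col = sum(v for (p, r), v in m.items() if r == cls)
    return m[(cls, cls)] / col if col else 0.0

ref  = ["dec", "dec", "dec", "con", "con", "con", "con", "con"]
pred = ["dec", "dec", "con", "con", "con", "con", "con", "dec"]
m = confusion(ref, pred)
print(users_accuracy(m, "dec"))      # 2 of 3 map-"dec" pixels truly dec
print(producers_accuracy(m, "dec"))  # 2 of 3 true dec pixels mapped dec
```

In the spraying example, option #1 cares about `users_accuracy` (trust the map label) while option #2 cares about `producers_accuracy` (catch all the true deciduous pixels).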
150. What is change detection?
• Extraction of temporal change information from
remotely sensed imagery
• Remote sensing only provides the changes in
measurements (reflectances) over time
• The purpose of digital change detection is to relate
these changes in measurements to changes in the
nature and character of biophysical and environmental
variables
• The main challenge of digital change detection is then
to separate changes of interest from changes of non-
interest
• What are the examples of change detection?
151. • The purpose of this list is to illustrate the kinds of
changes (think variety) and then determine:
• which instrument (data) to use?
• which method to use?
• which scale of analysis?
• coarse scale (1km and up)
• medium scale (30-100 meters)
• high resolution scale (10 m or less)
What is change detection?
152. • spatial scales (spatial resolution):
• H - high (1-10 m)
• M - Medium (20-100 m)
• C - Coarse (250 m and up)
• time periods over which change is monitored:
• e - event driven (days to months)
• a - one to several years
• d - on the order of decades
• frequency of observations required:
• d - daily to weekly
• s - seasonal
• e - end points
Ways to attack change detection problems
153. • geographic extent of area to be monitored:
• l - local
• r - regional
• c - continental
• g - global
• nature of desired information on environmental
change:
• space matters (map needed)? [yes] [no]
• area estimates of change required? [yes] [no]
• magnitude of change is important? [yes] [no]
• categorical vs. continuous outcome? [yes] [no]
• eventual use of data:
• l - local resource management/planning
• n - national scale planning
• g - monitoring global change
• w - early warning
154. • Forest clearing I: studying forest change at local
to regional scales (e.g. N. Wisconsin). Map and
area extent are both important
• Forest clearing II: studying global tropical
deforestation for global change questions. No
interest in a map of the location of change. Areal
extent of change is important, although the
areal estimates may come from map making
Examples
155. • sun angle
• clouds/shadows
• snow cover
• phenology
• inter-annual variability (climate)
• atmospheric effects
• sensor calibration
• sensor view angle differences
• the problem of agriculture!
The challenge in change detection is separating:
- the effects of changes unrelated to the surface
- from the effects of changes related to the surface
The factors listed above cause unrelated (undesired)
changes across time
156. Remote Sensing system considerations
Successful remote sensing change detection
requires careful attention to:
• remote sensor system considerations, and
• environmental characteristics.
Failure to understand the impact of the various
parameters on the change detection process can
lead to inaccurate results. Ideally, the remotely
sensed data used to perform change detection is
acquired by a remote sensor system that holds the
following resolutions constant: temporal, spatial
(and look angle), spectral, and radiometric.
157. • Two temporal resolutions should be held constant
during change detection:
• First, use a sensor system that acquires data at
approximately the same time of day. This eliminates
diurnal Sun angle effects that can cause anomalous
differences in the reflectance properties of the
remote sensor data.
• Second, acquire remote sensor data on anniversary
dates, e.g., Feb 1, 2004, and Feb 1, 2006. Anniversary
date imagery minimizes the influence of seasonal
Sun-angle and plant phenological differences that
can negatively impact a change detection project.
Temporal resolution
158. • Accurate spatial registration of at least two images
is essential for digital change detection.
• Ideally, the remotely sensed data are acquired by a
sensor system that collects data with the same
instantaneous field of view on each date.
• For example, Landsat TM data collected at 30 x 30
m spatial resolution on two dates are relatively easy
to register to one another.
Spatial resolution
159. • Geometric rectification algorithms are used to
register the images to a standard map projection
• Rectification should result in the two images having
a root mean square error (RMSE) of < 0.5 pixel
• Mis-registration of the two images may result in the
identification of spurious areas of change between
the datasets.
• For example, just one pixel mis-registration may
cause a stable road on the two dates to show up as
a new road in the change image.
Spatial resolution
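The < 0.5 pixel RMSE registration rule above is easy to check in code. A small sketch with hypothetical ground-control-point residuals (in pixel units) between the two dates:

```python
# Registration quality check: RMSE of control-point residuals
# between two image dates, tested against the < 0.5 pixel rule.
import math

def rmse(residuals):
    """residuals: list of (dx, dy) offsets in pixels at control points."""
    sq = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical residuals at four ground control points.
gcp_residuals = [(0.2, -0.1), (-0.3, 0.2), (0.1, 0.1), (0.0, -0.2)]
err = rmse(gcp_residuals)
print(round(err, 3), "pixels; OK for change detection:", err < 0.5)
```

If `err` exceeded 0.5, the mis-registration could produce the spurious "new road" artifacts described above.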
160. • Ideally, the same sensor system is used to acquire
imagery on multiple dates.
• When this is not possible, the analyst should select
bands that approximate one another.
• For example, Landsat MSS bands 4 (green), 5 (red),
and 7 (near-infrared) and SPOT bands 1 (green), 2
(red), and 3 (near-infrared), can be used successfully
with Landsat ETM+ bands 2 (green), 3 (red), and 4
(near-infrared).
• Many change detection algorithms do not function
well when the bands in one image do not match
those of the other image
Spectral resolution
161. • Ideally, soil moisture conditions should be identical
for the N dates of imagery used in a change
detection project.
• Extremely wet or dry conditions on one of the
dates can cause change detection problems.
• When soil moisture differences between dates are
significant for only certain parts of the study area
(perhaps due to a local thunderstorm), it may be
necessary to stratify (cut out) those affected areas
and perform a separate change detection analysis,
which can be added back in the final stages of the
project.
Soil moisture
162. • Natural ecosystems go through repeatable, predictable
cycles of development. Humans also modify the
landscape in predictable stages.
• Analysts use this information to identify when remotely
sensed data should be collected. Therefore, analysts
must be familiar with the biophysical characteristics of
the vegetation, soils, and water constituents of
ecosystems and their phenological cycles.
• Likewise, they must understand human-made
development phenological cycles such as those
associated with residential expansion at the urban/rural
fringe.
Phenological cycle of vegetation
163. • Vegetation grows according to relatively predictable
diurnal, seasonal, and annual phenological cycles.
• Obtaining near-anniversary images greatly
minimizes the effects of seasonal phenological
differences that may cause spurious change to be
detected in the imagery.
• When attempting to identify change in agricultural
crops, the analyst must be aware of when the crops
were planted.
• A month lag in planting date between fields of the
same crop can cause serious change detection
error.
Phenological cycle of vegetation
164. • Define the problem
• Sensor and environmental considerations
• Perform digital change analysis
• Perform accuracy assessment
• Distribute results
Steps required in digital change detection
165. • The selection of an appropriate change detection
algorithm is very important.
• First, it will have a direct impact on the type of image
classification to be performed (if any).
• Second, it will dictate whether important “from–to”
change information can be extracted from the
imagery.
• Many change detection projects require that “from–
to” information be readily available in the form of
maps and tabular summaries.
Selection of change detection algorithm
166. Change Detection Process (flowchart):
RAW MULTIDATE IMAGERY
→ RADIOMETRIC PREPROCESSING: scene matching, DOS, RT
→ TRANSFORMATION METHOD: image differencing, NDVI diff,
multidate ratio, multidate regression, multidate K-T,
change vector, GS, PCA
→ METHOD OF RELATING IMAGE CHANGE TO CHANGE OF INTEREST:
density slicing, classification, regression, mixture analysis,
neural networks, regression trees, decision trees
→ RESULTS: Map (categorical or continuous), Area estimates
DOS: Dark Object Subtraction; RT: Radiative Transfer; K-T: Kauth-Thomas transform;
GS: Gram-Schmidt; PCA: Principal Components Analysis; SMA: Spectral Mixture Analysis.
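One of the simplest transformation methods in the process above, NDVI differencing, can be sketched per pixel in Python (the two-date red/NIR reflectance pairs below are hypothetical):

```python
# Toy NDVI-difference change detection on two-date (red, NIR) pixels.

def ndvi(red, nir):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndvi_diff(date1, date2):
    """Per-pixel NDVI(date2) - NDVI(date1); strong negatives suggest
    loss of green vegetation (subject to the confounders above)."""
    return [ndvi(r2, n2) - ndvi(r1, n1)
            for (r1, n1), (r2, n2) in zip(date1, date2)]

# (red, NIR) reflectance pairs for three pixels on two dates.
t1 = [(0.05, 0.40), (0.10, 0.30), (0.20, 0.22)]
t2 = [(0.05, 0.41), (0.25, 0.26), (0.20, 0.22)]
print([round(d, 2) for d in ndvi_diff(t1, t2)])  # [0.0, -0.48, 0.0]
```

Relating such a difference image to change of interest would then use one of the methods in the flowchart, e.g. density slicing the difference values into change / no-change categories.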
167. Change Detection Process (same flowchart as slide 166), highlighting
the path used by Muchoney and Haack (1993):
Muchoney, D.M. and Haack, B.N., 1993, Change detection for monitoring forest defoliation,
Photogrammetric Engineering and Remote Sensing, 60(10):1243-1251
168. Change Detection Process (same flowchart as slide 166), highlighting
the path used by Vogelmann and Rock (1989):
Vogelmann, J.E. and Rock, B.N., 1989, Use of Thematic Mapper data for the detection of
forest damage caused by the pear thrips, Remote Sensing of Environment, 30:217-225
169. Change Detection Process (same flowchart as slide 166), highlighting
the path used by Radeloff et al. (1999):
Radeloff, V.C., Mladenoff, D.J., and Boyce, M.S., 1999, Detecting jack pine budworm
defoliation using spectral mixture analysis: separating effects from determinants, Remote
Sensing of Environment, 69:156-169