This document provides an overview of classification methods in ENVI, including both unsupervised and supervised techniques. It examines Landsat TM data of Cañon City, Colorado using various methods. Unsupervised techniques explored are K-Means and ISODATA clustering, which group pixels without user-defined training samples. Supervised methods covered include parallelepiped, maximum likelihood, minimum distance, and Mahalanobis distance classification. Post-classification processing steps like clumping, sieving, combining classes, and accuracy assessment are also discussed. The tutorial uses the Cañon City data to demonstrate each classification approach and analyze the results.
In the context of remote sensing, change detection refers to the process of identifying differences in the state of land features by observing them at different times. This process can be accomplished either manually (i.e., by hand) or with the aid of remote sensing software. Manual interpretation of change from satellite images or aerial photos involves an observer or analyst defining areas of interest and comparing them between images from two dates. This may be accomplished either on-screen (such as in a GIS) or on paper. When analyzing aerial photographs, a stereoscope, which allows two spatially overlapping photos to be viewed in 3D, can aid photo interpretation. Manual image interpretation works well when assessing change between discrete classes (forest openings, land use and land cover maps) or when changes are large (e.g., heavy mechanized maneuver damage, engineering training impacts). Manual image interpretation is also an option when trying to determine change using images or photos from different sources (e.g., comparing historic aerial photographs to current satellite imagery).
Automated methods of remote sensing change detection usually take one of two forms: post-classification change detection and image differencing using band ratios. In post-classification change detection, the image from each time period is classified using the same classification scheme into a number of discrete categories, such as land cover types. The two (or more) classifications are then compared, and the area classified the same or differently is tallied. With image differencing, a band ratio such as NDVI is computed from each input image, and the difference is taken between the ratios from the two dates. When differencing NDVI images, positive output values may indicate an increase in vegetation, negative values a decrease in vegetation, and values near zero no change. With either post-classification or image differencing change detection, it is necessary to specify a threshold below which differences between the two images are considered non-significant. The choice of threshold is critical to the results of a change detection analysis and usually must be found through an iterative process.
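The NDVI-differencing workflow described above can be sketched in a few lines of NumPy. This is a minimal illustration with made-up 2×2 band arrays and an arbitrary ±0.1 threshold, not a production workflow; real imagery would be read with a raster library, and the threshold would be found iteratively, as noted above:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def ndvi_change(nir_t1, red_t1, nir_t2, red_t2, threshold=0.1):
    """Difference two NDVI images; label |diff| below threshold as 'no change'.

    Returns an integer map: +1 vegetation gain, -1 vegetation loss, 0 no change.
    """
    diff = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
    change = np.zeros(diff.shape, dtype=int)
    change[diff > threshold] = 1
    change[diff < -threshold] = -1
    return change

# Toy example: one pixel greens up, one browns down, two stay stable.
nir1 = np.array([[50, 80], [60, 60]])
red1 = np.array([[40, 20], [30, 30]])
nir2 = np.array([[80, 40], [60, 61]])
red2 = np.array([[20, 50], [30, 30]])
print(ndvi_change(nir1, red1, nir2, red2, threshold=0.1))  # → [[ 1 -1] [ 0  0]]
```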
This presentation is intended to help you perform the task step by step.
This document provides an overview of the fundamentals of remote sensing. It explains that remote sensing involves acquiring information about the Earth's surface without direct contact, by sensing and recording reflected or emitted energy. It describes how remote sensing can be done via satellites, which capture satellite images, and aircraft, which capture aerial photographs. The document outlines different types of satellite images, such as true color composite and false color composite images. It also discusses key concepts in remote sensing like spatial resolution, radiometric resolution, spectral resolution, and the components of a remote sensing system. Finally, it provides an overview of the electromagnetic spectrum and how the atmosphere can affect the quantity and quality of electromagnetic radiation captured by satellites.
An overlay operation is much more than a simple merging of linework; all the attributes of the features taking part in the overlay are carried through. In general, there are two methods for performing overlay analysis—feature overlay (overlaying points, lines, or polygons) and raster overlay. Some types of overlay analysis lend themselves to one or the other of these methods. Overlay analysis to find locations meeting certain criteria is often best done using raster overlay (although you can do it with feature data). Of course, this also depends on whether your data is already stored as features or raster. It may be worthwhile to convert the data from one format to the other to perform the analysis.
Weighted Overlay
Overlays several raster files using a common measurement scale and weights each according to its importance.
The weighted overlay table allows a multi-criteria analysis to be calculated across several raster files.
Raster- The raster of the criteria being weighted.
Influence- The influence of the raster compared to the other criteria as a percentage of 100.
Field- The field of the criteria raster to use for weighting.
Remap- The scaled weights for the criterion.
In addition to numerical values for the scaled weights in Remap, the following options are available:
Restricted- Assigns the restricted value (the minimum value of the evaluation scale set, minus one) to cells in the output, regardless of whether other input raster files have a different scale value set for that cell.
No data - Assigns No Data to cells in the output, regardless of whether other input raster files have a different scale value set for that cell.
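Once each criterion has been remapped to the common scale, the weighted overlay calculation itself reduces to a cell-by-cell weighted sum. A minimal NumPy sketch, using two hypothetical criteria already remapped to a 1–9 suitability scale (the ArcGIS tool additionally rounds the result to an integer output, which is omitted here):

```python
import numpy as np

# Hypothetical criteria rasters, already remapped to a common 1-9 suitability scale.
slope_suit = np.array([[9, 5], [1, 3]])
landuse_suit = np.array([[3, 7], [5, 9]])

# Each raster's influence, as a percentage; influences must sum to 100.
layers = [(slope_suit, 60), (landuse_suit, 40)]
assert sum(w for _, w in layers) == 100

# Weighted overlay: sum of (remapped value * influence fraction), cell by cell.
suitability = sum(r * (w / 100.0) for r, w in layers)
print(suitability)  # → [[6.6 5.8] [2.6 5.4]]
```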
This document summarizes different image enhancement techniques, including contrast enhancement methods like linear and non-linear contrast stretching. It defines image enhancement as manipulating pixel gray levels to highlight specific image features for a given application. Contrast enhancement works to increase the difference between features by stretching the gray levels. Linear contrast stretching uniformly distributes pixel values across the full display range from 0-255. Non-linear techniques like histogram equalization assign a wider range to more frequently occurring pixel values to better highlight information. The goal of enhancement is to use the full sensor radiometric scale but techniques must be chosen based on the specific application and user preferences.
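The linear stretch described above maps the band's minimum value to 0 and its maximum to 255. A minimal sketch with a made-up band, assuming an 8-bit display range:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Min-max linear contrast stretch to the full 0-255 display range."""
    band = band.astype(float)
    lo, hi = band.min(), band.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.full(band.shape, out_min, dtype=np.uint8)
    stretched = (band - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.round().astype(np.uint8)

raw = np.array([[60, 80], [100, 120]])   # narrow 60-120 gray-level range
print(linear_stretch(raw))  # → [[  0  85] [170 255]]
```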
Digitizing in GIS is the process of converting geographic data, either from a hardcopy or a scanned image, into vector data by tracing the features. During the digitizing process, features from the traced map or image are captured as coordinates in point, line, or polygon format.
In this study, various techniques for exploratory spatial data analysis are reviewed: spatial autocorrelation, Moran's I statistic, hot spot analysis, and spatial lag and spatial error models.
This document helps you prepare a Triangulated Irregular Network (TIN), hillshade map, slope map, interpolated surface, and Digital Elevation Model (DEM) for an area, and shows how to interpret them.
This document discusses different types of remote sensing systems used in civil engineering, including optical, photogrammetric, thermal, multispectral, hyperspectral, and panchromatic systems. It provides examples and specifications of various sensors, such as MODIS, AVIRIS, IKONOS, and WorldView. The document also covers digital image formats, photogrammetry, image distortions and displacements, reference ellipsoids, relief displacement, and methods of measuring heights from aerial photographs.
This document discusses spatial analysis and modeling in a geographical information system. It defines spatial analysis as gaining an understanding of patterns and processes underlying geographic features in order to make better decisions and understand phenomena. The document outlines four types of spatial analysis: spatial data manipulation, spatial data analysis, spatial statistical analysis, and spatial modeling. It also describes different vector and raster spatial analysis techniques, such as clipping, overlaying, buffering, and slope/aspect calculations. Spatial modeling is defined as using models to predict spatial outcomes and enable "what if" analyses.
The document provides an introduction to ArcGIS. It outlines that it will discuss what GIS is, how geographic data is represented in GIS, how data is stored in ArcGIS, GIS maps, GIS analysis processes, what ArcGIS is, and planning a GIS project. It then proceeds to define GIS, explain how geographic data is modeled in vector and raster formats, describe how data is organized and stored in an ArcGIS geodatabase, discuss GIS mapping and visualization, and overview spatial analysis tools in ArcGIS.
Distortions and displacement on aerial photographs
This document discusses various types of distortions that can occur in aerial photographs. It defines distortion as a shift in the position of landscape features that alters the perspective of the image. Displacement is defined as any shift that does not change the perspective. The document outlines different types of distortions including lens distortion, relief displacement caused by elevation differences, and tilt distortions from aircraft motion like roll, crab, and pitch. It also discusses parallax, orthorectification to remove distortions, and parameters that influence relief displacement.
Raster data is represented by a grid of cells, where each cell contains numeric or qualitative values. Raster data comes from sources like images, maps, and satellite imagery. Common analyses of raster data include buffering, reclassification, hillshades, interpolation, and surface calculation. Buffering assigns "in" and "out" values to cells based on their distance from a feature. Reclassification reassigns cell values. Hillshades create shaded relief maps from elevation data. Interpolation estimates values between known data points. Surface calculation performs cell-by-cell mathematical functions on rasters.
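Reclassification, for example, amounts to mapping value ranges onto new cell values. A minimal NumPy sketch with a hypothetical elevation raster and made-up class breaks (400 m and 800 m):

```python
import numpy as np

# Hypothetical elevation raster (metres).
elev = np.array([[120, 480], [950, 300]])

# Reclassify: 1 = low (<400), 2 = mid (400-800), 3 = high (>=800).
reclass = np.zeros(elev.shape, dtype=int)
reclass[elev < 400] = 1
reclass[(elev >= 400) & (elev < 800)] = 2
reclass[elev >= 800] = 3
print(reclass)  # → [[1 2] [3 1]]
```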
This document discusses GIS data analysis techniques including raster to vector conversion and spatial analysis through vector overlay. It provides information on various data types and models in GIS. Key analysis techniques covered are raster and vector data overlays, terrain mapping and analysis, and spatial interpolation methods. Specific vector and raster overlay methods like point-in-polygon, line-in-polygon and polygon-on-polygon are described. Spatial data editing techniques involving digitization errors and topological/non-topological editing are also summarized.
Spatial analysis & interpolation in ArcGIS
In ArcGIS, a data model describes the thematic layers used in the applications (for example, hamburger stands, roads, and counties); their spatial representation (for example, point, line, or polygon); their attributes; their integrity rules and relationships (for example, counties must nest within states).
Nelson's dominant and distinctiveness function
This is an explanation of urban functional classification based on the arithmetic mean and standard deviation, as proposed by Howard Nelson, illustrated with practical examples.
Digital image classification involves sorting pixels into discrete classes based on their spectral values. It can be performed using supervised or unsupervised approaches. Supervised classification involves using training data to define classes, while unsupervised classification uses algorithms to automatically group similar pixels. Accuracy assessment involves comparing the classification to reference data to determine accuracy through an error matrix.
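As an illustration of the supervised case, a minimum distance classifier assigns each pixel to the class whose training-data mean is spectrally nearest. A minimal NumPy sketch with made-up two-band class means (the class names are hypothetical labels, not from the source):

```python
import numpy as np

# Hypothetical training means for three classes in two spectral bands.
class_means = np.array([
    [30.0, 90.0],   # class 0: "water"
    [80.0, 40.0],   # class 1: "soil"
    [50.0, 120.0],  # class 2: "vegetation"
])

def minimum_distance_classify(pixels, means):
    """Assign each pixel to the class with the nearest mean (Euclidean distance)."""
    # pixels: (n, bands); means: (classes, bands) -> distances: (n, classes)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return dists.argmin(axis=1)

pixels = np.array([[32.0, 88.0], [78.0, 45.0], [55.0, 115.0]])
print(minimum_distance_classify(pixels, class_means))  # → [0 1 2]
```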
This document provides an overview of GIS, raster data formats, and spatial analysis tools. It defines GIS as an automated system for capturing, storing, analyzing and displaying spatial data linked to tabular data on maps. Raster data represents real-world phenomena through a matrix of cells, with each cell storing a value like elevation or satellite imagery. Spatial analysis tools in GIS like Euclidean distance and point density perform cell-based raster analyses, calculating distances from raster cells to points or densities of points within neighborhoods.
This document discusses and compares different interpolation techniques in GIS, including kriging, inverse distance weighting (IDW), natural neighbor, spline, and trend interpolation. It provides details on how each technique works, when each should be used, and an example comparing kriging and IDW interpolation on rainfall data in an area. The key techniques covered are kriging, IDW, natural neighbor, spline, and trend interpolation.
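Of these techniques, IDW is the simplest to state: each estimate is a weighted average of the known samples, with weights falling off as 1/d^p. A minimal sketch with hypothetical rain-gauge positions and values, using the common power p = 2:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse distance weighting: nearer samples get more weight (1/d^p)."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                  # query coincides with a sample point
        return float(values[d.argmin()])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

# Hypothetical rain gauges at (x, y) with rainfall values (mm).
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain = np.array([100.0, 60.0, 80.0])

# Estimate rainfall near the 100 mm gauge; it dominates the weighted average.
est = idw(gauges, rain, np.array([2.0, 2.0]))
print(round(est, 1))  # → 94.3
```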
The document discusses raster data models in GIS. Raster data models represent geographic space as a grid of cells or pixels, with each cell storing numeric values representing attributes like elevation. Key points:
- Raster models use a grid-based structure of rows and columns to store imagery and represent continuous surfaces.
- Each cell holds a value like elevation and has a defined spatial resolution (size).
- Raster data is used for things like satellite imagery, elevation maps, and representing variables that vary continuously over space.
The document provides an overview of thermal remote sensing. It discusses key concepts like the thermal infrared spectrum, atmospheric windows and absorption bands, fundamental radiation laws, thermal data acquisition using sensors, and applications in mapping forest fires, urban heat islands, volcanoes, and military purposes. Thermal remote sensing allows measuring the true temperature of objects and detecting features not visible in optical remote sensing. It has advantages like temperature measurement but maintaining sensors at low temperatures can be challenging.
The vector data model represents geographic objects as points, lines, and polygons defined by x-y coordinates, allowing for precise representation of features and spatial relationships. While useful for network analysis, vector data cannot represent continuous gradations and is complex. The raster data model divides space into a grid of cells, facilitating representation of thematic data and compatibility with remote sensing imagery, but with less precision and larger data sizes.
This document discusses raster data in GIS. Raster data represents geographic information as a grid of cells or pixels organized into a matrix of rows and columns. Each cell contains a value representing information such as elevation, vegetation index, or land use class. Raster data can represent data as integers, floats, or other data types and comes in various resolutions, with higher resolution providing more detail. Rasters can be single band (one value per cell) or multiband (multiple values per cell representing different wavelengths). Examples of raster data sources include remote sensing imagery, digital elevation models, and land cover maps.
This document discusses key concepts related to data in GIS systems. It describes the different types of spatial and attribute data as well as vector and raster data formats. It explains how data is organized into layers and how those layers can be queried and overlaid to integrate information from different sources and analyze spatial patterns in the data.
DGPS improves upon standard GPS accuracy by using a fixed reference station to calculate and broadcast differential corrections for errors caused by atmospheric delays of GPS signals. Receivers equipped with DGPS can then apply these corrections to achieve sub-meter accuracy, as low as 10 cm in some cases. It works by having a stationary receiver at a known location calculate differential errors compared to GPS satellites and broadcasting correction signals to enable mobile DGPS receivers to determine their position with much greater precision.
Spatial autocorrelation refers to the similarity of objects near each other in space. It is measured using global and local statistics. Global measures like Moran's I provide a single value for the whole data set, indicating if nearby things tend to be more similar. Local measures like LISA (Local Indicators of Spatial Association) calculate a statistic for each observation to identify local clusters, or "hot spots" and "cold spots". Moran's I is a correlation between a value and the average of its neighbors, while Geary's C uses the actual values. LISA extends these global concepts to detect significant spatial autocorrelation for individual locations.
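Global Moran's I, as described above, can be computed directly from a value vector and a spatial weights matrix. A minimal sketch with four locations on a line and binary adjacency weights; in practice a library such as PySAL would be used, and significance would be assessed by permutation:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I: n * sum of weighted cross-products of deviations,
    divided by (sum of weights * sum of squared deviations)."""
    z = values - values.mean()
    n = len(values)
    num = n * np.sum(weights * np.outer(z, z))
    den = weights.sum() * np.sum(z**2)
    return num / den

# Four locations on a line; binary adjacency (neighbours share an edge).
vals = np.array([1.0, 2.0, 3.0, 4.0])
W = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# A smoothly increasing sequence: positive spatial autocorrelation.
print(round(morans_i(vals, W), 2))  # → 0.33
```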
This document provides an overview of remote sensing and image interpretation. It discusses key topics such as the use of maps as models to represent features on Earth, different types of map scales and spatial referencing systems, and how computers are used in map production. It also outlines the process of image interpretation, including levels of interpretation keys and basic elements to examine like size, shape, shadow, tone, color, and texture. Software programs used in map production like ArcGIS and types of data products from remote sensing are also reviewed.
GIS (Geographical Information System) Fundamentals
This presentation covers GIS fundamentals in detail, including GIS software. It will help students of GIS and environmental science.
The ENVI Pocket Guide is a quick-reference booklet not intended to be read from cover to cover, although it can be. The intent is to provide Soldiers and civilians working in the Defense and Intelligence community with succinct steps on how to accomplish common tasks in ENVI. The RPF export workflow and other pertinent information, such as how to contact ENVI technical support and access online help, can also be found in this guide.
The ENVI Pocket Guide Volume 2 | INTERMEDIATE goes a step further, covering intermediate procedures using ENVI, IDL, and ENVI LiDAR, and assumes you have already mastered the basics.
Volume 2 builds on the basics by providing succinct steps on how to perform the following tasks using ENVI and IDL:
1. Add Grid reference & count features
2. Perform Band Math
3. Layer Stack images
4. Exploit Shortwave Infrared (SWIR) bands
5. Perform Spectral Analysis in general
6. Perform Image Calibration/Atmospheric Correction
7. Extract features from LiDAR
8. Batch Processing using IDL
Gis Geographical Information System FundamentalsUroosa Samman
Gis, Geographical Information System Fundamentals. This presentation includes a complete detail of GIS and GIS Softwares. It will help students of GIS and Environmental Science.
The ENVI Pocket Guide is a quick reference booklet NOT intended to be read from cover to cover although it can be. The intent is to provide Soldiers and civilians, working in the Defense and Intelligence community, succinct steps on how to accomplish common tasks in ENVI. The RPF export workflow and other pertinent information such as how to contact ENVI technical support and online help can also be found in this guide.
The ENVI Pocket Guide Volume 2 | INTERMEDIATE expounds a step further on intermediate procedures using ENVI, IDL and ENVI LiDAR, assuming you have already mastered the basics.
Volume 2 builds on the basics by providing succinct steps on how to perform the following tasks using ENVI and IDL:
1. Add Grid reference & count features
2. Perform Band Math
3. Layer Stack images
4. Exploit Shortwave Infrared (SWIR) bands
5. Perform Spectral Analysis in general
6. Perform Image Calibration/ Atmospheric Correction
7. Extract features from LiDAR
8. Batch Processing using IDL
Automatic Image Processing for Agriculture through specific ENVI Modules (add...Esri
Presentation by L. García Torres, J.J. Caballero-Novella, D. Gómez-Candón and F. López-Granados from Institue for Sustainable Agriculture (CSIC) on Esri European User Conference 2011.
This document provides an overview of key European legal issues for US businesses. It discusses the complicated structure of European Union institutions and lawmaking. It also covers areas such as unfair contract terms, business to consumer contracting and consumer protection regulations, privacy and data protection laws, and the proposed new EU data protection framework. The document is intended to help US enterprises understand some of the legal complexities of doing business in Europe.
z/OS Small Enhancements - Episode 2015AMarna Walle
This presentation covers small enhancements from older z/OS releases. You might have missed little functions that are helpful, but you never knew existed! The content of each of these z/OS Small Enhancements changes every half year (Episode A and Episode B each year).
Dokumen tersebut berisi soal-soal ujian tengah semester mata kuliah Media Pembelajaran yang mencakup definisi, kriteria pemilihan, tujuan penggunaan, manfaat, fungsi, dan manfaat media audio visual.
Luis Alirio Pulga Arévalo works with metals and follows a daily routine. He wakes up at 5:30 AM, takes a shower, gets dressed, eats breakfast, and drops off his brother at school before arriving at work at 7:30 AM. At work, he inspects welding equipment and tools. He leaves for lunch at noon and returns at 2 PM, inspecting his workplace before having a snack at 5 PM. He leaves work at 6 PM. In the evenings, he returns home, showers, helps his brother with homework, eats dinner, and goes to bed at 8 PM.
Teks tersebut membahas tiga jenis koleksi utama di perpustakaan yaitu primer, sekunder, dan tersier. Koleksi primer meliputi ensiklopedia, kamus, dan almanak/buku tahunan yang memberikan informasi langsung. Koleksi sekunder adalah bahan rujukan umum seperti bibliografi, sedangkan koleksi tersier berisi sarana bibliografi seperti indeks dan abstrak.
The document provides information about different foods and food groups. It includes a family food maze game identifying who eats what food. It then discusses the history and evolution of food pyramids/plates used to represent recommended daily intake of major food groups. Specific food groups like vegetables, fruits, oils, dairy, meat and beans are defined. The document also discusses calories and provides examples of calorie content in various foods. Physical activities that can be incorporated into daily routines are listed, along with relaxation techniques like yoga, meditation, and types of meditation.
Blink is a creative agency that creates engaging media for cultural brands, destinations, entertainment, attractions, and video games. They provide various creative services including branding, website design, advertising, marketing campaigns, illustrations, and more. They guarantee customer satisfaction and will work with clients to modify designs until the client is satisfied. The document provides examples of work Blink has done for clients including designs for theme parks, attractions, events, and destinations.
The document discusses how the author has improved their camerawork and editing skills since their preliminary task, noting they now use more interesting shots like cut-ins and close-ups that better suit the atmosphere, as well as continuity techniques like match-on-action. While match-on-action was not used much in their opening sequence due to cross-cutting between time periods, they found it useful for providing good continuity.
We differ from one another in many ways such as how we speak, dress, eat, view personal space, practice religion, and see ourselves. These differences can range from minor aspects like accents to major ones like language. Our backgrounds, environments we grew up in, and underlying cultural aspects shape our behaviors and make us diverse.
The document describes how 3D solid modeling was used to help a church furniture manufacturer cut pews to intersect at a 155 degree angle. The model allowed the engineer to virtually "slice" the pew and generate drawings showing the specific cutting angles needed for each part. This helped reduce installation time and rework by providing accurate instructions for cutting matching parts that could be easily assembled on site.
This document provides an overview of a tutorial on multispectral classification using Landsat TM data. It describes examining Landsat TM color images to identify training areas, performing unsupervised classification with K-Means and ISODATA algorithms, and reviewing pre-classified results. The document lists file names and paths for Landsat TM data and various classification results used in the tutorial.
ENVI Tutorial: Classification Methods
Classification Methods
Files Used in this Tutorial
Examining a Landsat TM Color Image
Reviewing Image Colors
Using the Cursor Location/Value
Examining Spectral Plots
Exploring Unsupervised Classification Methods
Applying K-Means Classification
Applying ISODATA Classification
Exploring Supervised Classification Methods
Selecting Training Sets Using Regions of Interest (ROI)
Applying Parallelepiped Classification
Applying Maximum Likelihood Classification
Applying Minimum Distance Classification
Applying Mahalanobis Distance Classification
Collecting Endmember Spectra
Applying Binary Encoding Classification
Exploring Spectral Classification Methods
Exploring Rule Images
Post Classification Processing
Extracting Class Statistics
Generating a Confusion Matrix
Clumping and Sieving
Combining Classes
Overlaying Classes
Editing Class Colors
Working with Interactive Classification Overlays
Overlaying Vector Layers
Converting a Classification to a Vector
Adding Classification Keys Using Annotation
Classification Methods
This tutorial provides an introduction to classification procedures using Landsat TM data from Cañon
City, Colorado. Results of both unsupervised and supervised classifications are examined and post
classification processing including clump, sieve, combine classes, and accuracy assessment are
discussed.
Files Used in this Tutorial
ENVI Resource DVD: Data\can_tm
File Description
can_tmr.img Cañon City, Colorado TM reflectance image
can_tmr.hdr ENVI header for above
can_km.img K-means classification
can_km.hdr ENVI header for above
can_iso.img ISODATA classification
can_iso.hdr ENVI header for above
classes.roi Regions of interest (ROI) for supervised classification
can_pcls.img Parallelepiped classification
can_pcls.hdr ENVI header for above
can_bin.img Binary encoding result
can_bin.hdr ENVI header for above
can_sam.img SAM classification result
can_sam.hdr ENVI header for above
can_rul.img Rule image for SAM classification
can_rul.hdr ENVI header for above
can_sv.img Sieved image
can_sv.hdr ENVI header for above
can_clmp.img Clump of sieved image
can_clmp.hdr ENVI header for above
can_comb.img Combined classes image
can_comb.hdr ENVI header for above
can_ovr.img Classes overlain on gray scale image
can_ovr.hdr ENVI header for above
can_v1.evf Vector layer generated from class #1
can_v2.evf Vector layer generated from class #2
Examining a Landsat TM Color Image
This portion of the exercise will familiarize you with the spectral characteristics of the Landsat TM data
of Cañon City, Colorado, USA. Color composite images will be used as the first step in locating and
identifying unique areas for use as training sets in classification.
Before attempting to start the program, ensure that ENVI is properly installed as described in the
Installation Guide that shipped with your software.
1. From the ENVI main menu bar, select File > Open Image File.
2. Navigate to the Data\can_tm directory, select the file can_tmr.img from the list, and click
Open. The Available Bands List appears on your screen.
3. Click on the RGB Color radio button in the Available Bands List. Red, Green, and Blue fields
appear in the middle of the dialog.
4. Select Band 4, Band 3, and Band 2 sequentially from the list of bands at the top of the dialog by
clicking on the band names. The band names are automatically entered in the Red, Green, and
Blue fields.
5. Click Load RGB to load the image into ENVI.
6. Examine the image in the display group.
Reviewing Image Colors
The color image you just displayed can be used as a guide to classification. This image is the equivalent of
a false color infrared photograph. Even in a simple three-band image, it’s easy to see that there are areas
that have similar spectral characteristics. Bright red areas on the image represent high infrared
reflectance, usually corresponding to healthy vegetation, either under cultivation, or along rivers. Slightly
darker red areas typically represent native vegetation, in this case in slightly more rugged terrain,
primarily corresponding to coniferous trees. Several distinct geologic and urbanization classes are also
readily apparent.
Using the Cursor Location/Value
Use ENVI’s Cursor Location/Value option to preview image values in the displayed spectral bands.
1. From the Display group menu bar, select Tools > Cursor Location/Value. Alternatively, double-click the left mouse button in the Image window to toggle the Cursor Location/Value dialog on and off.
2. Move the cursor around the image and examine the data values in the dialog for specific
locations. Also note the relation between image color and data value.
3. From the Cursor Location/Value dialog, select File > Cancel.
Examining Spectral Plots
Use ENVI’s integrated spectral profiling capabilities to examine the spectral characteristics of the data.
1. From the Display group menu bar, select Tools > Profiles > Z Profile (Spectrum) to begin
extracting spectral profiles.
2. Examine the spectra for areas that you previewed above using color images and the
Cursor/Location Value dialog by clicking the left mouse button in any of the display group
windows. Note the relations between image color and spectral shape. Pay attention to the location
of the image bands in the spectral profile, marked by the red, green, and blue bars in the plot.
3. From the Spectral Profile dialog menu bar, select File > Cancel.
Exploring Unsupervised Classification Methods
Unsupervised classification can be used to cluster pixels in a dataset based on statistics only, without
any user-defined training classes. The available unsupervised classification techniques are K-Means and
ISODATA.
Applying K-Means Classification
K-Means unsupervised classification calculates initial class means evenly distributed in the data space,
then iteratively clusters the pixels into the nearest class using a minimum-distance technique. Each
iteration recalculates class means and reclassifies pixels with respect to the new means. All pixels are
classified to the nearest class unless a standard deviation or distance threshold is specified, in which
case some pixels may be unclassified if they do not meet the selected criteria. This process continues
until the number of pixels in each class changes by less than the selected pixel change threshold or the
maximum number of iterations is reached.
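The loop described above can be sketched in a few lines of NumPy. This is a minimal illustration of the K-Means procedure, not ENVI's implementation; the function name, the placement of the initial means along the data-space diagonal, and the default thresholds are all assumptions.

```python
import numpy as np

def kmeans_classify(pixels, n_classes=5, max_iter=10, change_thresh=0.02):
    """Cluster pixel spectra (n_pixels x n_bands) by iterative
    minimum-distance assignment to evolving class means."""
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    # Initial class means spread evenly through the data space
    means = np.linspace(lo, hi, n_classes)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # Assign every pixel to the nearest class mean
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        changed = np.mean(new_labels != labels)
        labels = new_labels
        # Recalculate each class mean from its current members
        for k in range(n_classes):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        # Stop when fewer pixels change class than the threshold
        if changed < change_thresh:
            break
    return labels, means
```

A standard-deviation or maximum-distance threshold, as offered in the K-Means Parameters dialog, would simply leave distant pixels unclassified instead of assigning them to the nearest mean.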
1. From the ENVI main menu bar, select Classification > Unsupervised > K-Means or review the
pre-calculated results of classifying the image by opening the can_km.img file in the can_tm
directory.
2. Select the can_tmr.img file and click OK. The K-Means Parameters dialog appears.
3. Accept the default values, select the Memory radio button, and click OK. The new band is
loaded into the Available Bands List.
4. From the Available Bands List, click the Display #1 button and select New Display.
5. From the Available Bands List, select the K-Means band and click Load Band.
6. From the Display group menu bar, select Tools > Link > Link Displays then click OK to link the
images.
7. Compare the K-Means classification result to the color-composite image using the dynamic
overlay feature in ENVI (click using the left mouse button in the Image window).
8. From the Display group menu bar, select Tools > Link > Unlink Display to remove the link and
turn off the dynamic overlay feature.
9. If desired, experiment with different numbers of classes, change thresholds, standard deviations,
and maximum distance error values to determine their effect on the classification.
Applying ISODATA Classification
ISODATA unsupervised classification calculates class means evenly distributed in the data space then
iteratively clusters the remaining pixels using minimum distance techniques. Each iteration recalculates
means and reclassifies pixels with respect to the new means. This process continues until the number of
pixels in each class changes by less than the selected pixel change threshold or the maximum number of
iterations is reached.
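What distinguishes ISODATA from K-Means is that classes can also be split, merged, or deleted between iterations. The sketch below shows one such iteration in a deliberately simplified form; the split/merge rules and thresholds are made-up illustrations, not ENVI's actual parameters.

```python
import numpy as np

def isodata_step(pixels, means, merge_dist=0.1, split_std=0.3):
    """One ISODATA-style iteration: assign pixels, update means,
    then split over-dispersed classes and merge near-identical ones."""
    # Minimum-distance assignment, exactly as in K-Means
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_means = []
    for k in range(len(means)):
        members = pixels[labels == k]
        if len(members) == 0:
            continue                          # drop empty classes
        m, s = members.mean(axis=0), members.std(axis=0)
        if s.max() > split_std:               # split an over-dispersed class
            offset = np.zeros_like(m)
            offset[s.argmax()] = s.max()
            new_means.extend([m - offset, m + offset])
        else:
            new_means.append(m)
    # Merge class means that lie closer together than merge_dist
    merged = []
    for m in new_means:
        if all(np.linalg.norm(m - q) >= merge_dist for q in merged):
            merged.append(m)
    return labels, np.array(merged)
```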
1. From the ENVI main menu bar, select Classification > Unsupervised > IsoData, or review the
pre-calculated results of classifying the image by opening the can_iso.img file in the can_tm directory.
2. Select the can_tmr.img file and click OK. The ISODATA Parameters dialog appears.
3. Accept the default values, select the Memory radio button, and click OK. The new band is
loaded into the Available Bands List.
4. From the Available Bands List, click the Display #2 button and select New Display.
5. Select the ISODATA band and click Load Band.
6. From the Display group menu bar, select Tools > Link > Link Displays. The Link Displays
dialog appears.
7. Click the Display #2 toggle button to select No, and click the Display #3 toggle button to select
Yes. Click OK to link the images.
8. Compare the ISODATA classification result to the color-composite image using the dynamic
overlay feature in ENVI (click using the left mouse button in the Image window).
9. From the Display group menu bar, select Tools > Unlink Displays.
10. From the Display group menu bar, select Tools > Link > Link Displays. The Link Displays
dialog appears.
11. Click the Display #1 toggle button to select No, and ensure that the Display #2 and Display #3
toggle buttons say Yes. Click OK to link and compare the K-means and ISODATA images.
12. If desired, experiment with different numbers of classes, change thresholds, standard deviations,
maximum distance error, and class pixel characteristic values to determine their effect on the
classification.
13. From the Display group menu bar on the K-Means Image window, select File > Cancel to close
the display group. Close the ISODATA display group using the same technique.
Exploring Supervised Classification Methods
Supervised classification can be used to cluster pixels in a dataset into classes corresponding to user-defined training classes. This classification type requires that you select training areas for use as the
basis for classification. Various comparison methods are then used to determine if a specific pixel
qualifies as a class member. ENVI provides a broad range of different classification methods, including
Parallelepiped, Minimum Distance, Mahalanobis Distance, Maximum Likelihood, Spectral Angle
Mapper, Binary Encoding, and Neural Net. In this tutorial, you will experiment with two methods for
selecting training areas, also known as regions of interest (ROIs).
Selecting Training Sets Using Regions of Interest (ROI)
As described in the tutorial, An Introduction to ENVI and summarized here, ENVI lets you define
regions of interest (ROIs) typically used to extract statistics for classification, masking, and other
operations. For the purposes of this exercise, you can either use predefined ROIs, or create your own. In
this exercise, you will restore predefined ROIs.
1. From the #1 Display group menu bar, select Tools > Region of Interest > ROI Tool. The ROI
Tool dialog appears.
2. From the ROI Tool dialog menu bar, select File > Restore ROIs. The Enter ROI Filenames
dialog appears.
3. Select the classes.roi file and click Open. Click OK. The ROIs appear in the Image
window.
Applying Parallelepiped Classification
Parallelepiped classification uses a simple decision rule to classify multispectral data. The decision
boundaries form an n-dimensional parallelepiped in the image data space. The dimensions of the
parallelepiped are defined based upon a standard deviation threshold from the mean of each selected
class. If a pixel value lies above the low threshold and below the high threshold for all n bands being
classified, it is assigned to that class. If the pixel value falls in multiple classes, ENVI assigns the pixel
to the last class matched. Areas that do not fall within any of the parallelepipeds are designated as
unclassified.
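The decision rule is simple enough to express directly. In this sketch (illustrative; the function name and the standard-deviation default are assumptions), each class box spans mean ± k standard deviations per band, and later classes overwrite earlier ones, matching the last-class-matched behavior described above.

```python
import numpy as np

def parallelepiped_classify(pixels, class_means, class_stds, k=2.0):
    """Assign each pixel to the LAST class whose parallelepiped it
    falls inside; label 0 means unclassified."""
    labels = np.zeros(len(pixels), dtype=int)     # 0 = unclassified
    for c, (mu, sd) in enumerate(zip(class_means, class_stds), start=1):
        lo, hi = mu - k * sd, mu + k * sd
        # Inside the box only if within thresholds in EVERY band
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside] = c                        # later classes overwrite
    return labels
```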
1. From the ENVI main menu bar, select Classification > Supervised > Parallelepiped, or review
the pre-calculated results of classifying the image by opening the can_pcls.img file in the
can_tm directory.
2. Select the can_tmr.img file and click OK. The Parallelepiped Parameters dialog appears.
3. Click the Select All Items button to select the ROIs.
4. Select to output the result to Memory using the radio button provided.
5. Click the Output Rule Images toggle button to select No, then click OK. The new band is loaded
into the Available Bands List.
6. From the Available Bands List, click the Display #1 button and select New Display.
7. Select the Parallel band and click Load Band.
8. From the Display group menu bar, select Tools > Link > Link Displays and click OK in the
dialog to link the images.
9. Use image linking and dynamic overlay to compare this classification to the color composite
image.
Applying Maximum Likelihood Classification
Maximum likelihood classification assumes that the statistics for each class in each band are normally
distributed and calculates the probability that a given pixel belongs to a specific class. Unless a
probability threshold is selected, all pixels are classified. Each pixel is assigned to the class that has the
highest probability (i.e., the maximum likelihood).
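Conceptually, each class contributes a multivariate Gaussian fitted to its training statistics, and every pixel goes to the class with the highest likelihood. A hedged NumPy sketch (the function name and the exact thresholding behavior are assumptions):

```python
import numpy as np

def max_likelihood_classify(pixels, class_stats, prob_thresh=None):
    """Pick the class with the highest Gaussian log-likelihood; below
    prob_thresh (if given) leave the pixel unclassified (label 0).
    class_stats is a list of (mean_vector, covariance_matrix) pairs."""
    n_bands = pixels.shape[1]
    scores = []
    for mu, cov in class_stats:
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = pixels - mu
        # Log of the multivariate normal density for every pixel
        ll = -0.5 * (n_bands * np.log(2 * np.pi) + logdet
                     + np.einsum('ij,jk,ik->i', diff, inv, diff))
        scores.append(ll)
    scores = np.stack(scores, axis=1)
    labels = scores.argmax(axis=1) + 1            # classes numbered from 1
    if prob_thresh is not None:
        labels[np.exp(scores.max(axis=1)) < prob_thresh] = 0
    return labels
```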
1. Using the steps above as a guide, perform a Maximum Likelihood classification.
2. Try using the default parameters and various probability thresholds.
3. Use image linking and dynamic overlay to compare this classification to the color composite
image and previous unsupervised and supervised classifications.
Applying Minimum Distance Classification
The minimum distance classification uses the mean vectors of each ROI and calculates the Euclidean
distance from each unknown pixel to the mean vector for each class. All pixels are classified to the
closest ROI class unless the user specifies standard deviation or distance thresholds, in which case some
pixels may be unclassified if they do not meet the selected criteria.
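Because only the class means are involved, minimum distance reduces to a nearest-centroid rule. A short sketch under that assumption (function name and defaults are illustrative):

```python
import numpy as np

def min_distance_classify(pixels, class_means, max_dist=None):
    """Euclidean distance from every pixel to each ROI mean vector;
    the nearest class wins. With max_dist set, pixels farther than
    that from every mean stay unclassified (label 0)."""
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = d.argmin(axis=1) + 1                 # classes numbered from 1
    if max_dist is not None:
        labels[d.min(axis=1) > max_dist] = 0
    return labels
```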
1. Using the steps above as a guide, perform a Minimum Distance classification.
2. Try using the default parameters and various standard deviations and maximum distance errors.
3. Use image linking and dynamic overlay to compare this classification to the color composite
image and previous unsupervised and supervised classifications.
Applying Mahalanobis Distance Classification
The Mahalanobis Distance classification is a direction sensitive distance classifier that uses statistics
for each class. It is similar to the Maximum Likelihood classification but assumes all class covariances
are equal and therefore is a faster method. All pixels are classified to the closest ROI class unless you
specify a distance threshold, in which case some pixels may be unclassified if they do not meet the
threshold.
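The equal-covariance assumption means a single (pooled) covariance matrix can be shared by all classes, which is what makes this faster than maximum likelihood. A sketch (the pooling of covariance and the function name are assumptions):

```python
import numpy as np

def mahalanobis_classify(pixels, class_means, pooled_cov, max_dist=None):
    """Nearest class by Mahalanobis distance, using one covariance
    matrix shared by all classes (the equal-covariance assumption)."""
    inv = np.linalg.inv(pooled_cov)
    dists = []
    for mu in class_means:
        diff = pixels - mu
        # Direction-sensitive distance: accounts for band correlations
        dists.append(np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff)))
    dists = np.stack(dists, axis=1)
    labels = dists.argmin(axis=1) + 1             # classes numbered from 1
    if max_dist is not None:
        labels[dists.min(axis=1) > max_dist] = 0
    return labels
```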
1. Using the steps above as a guide, perform a Mahalanobis Distance classification.
2. Try using the default parameters and various maximum distance errors.
3. Use image linking and dynamic overlay to compare this classification to the color composite
image and previous unsupervised and supervised classifications.
4. When you are finished, close all classification display groups.
Collecting Endmember Spectra
The Endmember Collection:Parallel dialog is a standardized means of collecting spectra for supervised
classification from ASCII files, ROIs, spectral libraries, and statistics files.
1. From the ENVI main menu bar, select Classification > Endmember Collection. The
Classification Input File dialog appears.
2. Select the can_tmr.img file and click OK.
3. The Endmember Collection dialog appears with the Parallelepiped classification method selected
by default. The available classification and mapping methods are listed under the Algorithm
menu. You will use this dialog in the following exercises.
Applying Binary Encoding Classification
The binary encoding classification technique encodes the data and endmember spectra into zeros and
ones, based on whether a band falls below or above the spectrum mean. An exclusive OR function
compares each encoded reference spectrum with the encoded data spectra, and ENVI produces a
classification image. All pixels are classified to the endmember with the greatest number of bands that
match unless the user specifies a minimum match threshold, in which case some pixels may be
unclassified if they do not meet the criteria.
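The encode-and-compare logic described above can be sketched directly; the 0.75 default mirrors the 75% minimum encoding threshold used for the pre-calculated results, and the function name is an assumption.

```python
import numpy as np

def binary_encode_classify(pixels, endmembers, min_match=0.75):
    """Encode each spectrum as 1 where a band exceeds that spectrum's
    own mean (else 0), then assign each pixel to the endmember with
    the most agreeing bands; below min_match -> unclassified (0)."""
    def encode(s):
        return (s > s.mean(axis=-1, keepdims=True)).astype(np.uint8)
    px, em = encode(pixels), encode(endmembers)
    # XOR flags differing bands; 1 - mean gives the fraction matching
    match = 1.0 - np.mean(px[:, None, :] ^ em[None, :, :], axis=2)
    labels = match.argmax(axis=1) + 1             # classes numbered from 1
    labels[match.max(axis=1) < min_match] = 0
    return labels
```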
1. From the Endmember Collection:Parallel dialog menu bar, select Algorithm > Binary Encoding
or review the pre-calculated results of classifying the image by opening the can_bin.img file in the
can_tm directory. These results were created using a minimum encoding threshold of 75%.
2. For this exercise, you will use the predefined ROIs in the classes.roi file that you used
earlier. From the Endmember Collection:Parallel dialog menu bar, select Import > from
ROI/EVF from input file. The Select Regions for Stats dialog appears.
3. Click the Select All Items button, and click OK.
4. In the Endmember Collection:Parallel dialog, click Select All then click Plot to view the
endmember spectral plots for the ROIs collected in the Endmember Collections dialog.
5. In the Endmember Collections dialog, click Apply. The Binary Encoding Parameters dialog
appears.
6. In the Binary Encoding Parameters dialog, select to output the result to Memory using the radio
button provided.
7. Toggle the Output Rule Images to No, then click OK to start the classification. The new band is
loaded into the Available Bands List.
8. From the Available Bands List, select the Bin Encode band, and click Load Band.
9. Use image linking and dynamic overlay to compare this classification to the color composite
image and previous unsupervised and supervised classifications.
Exploring Spectral Classification Methods
The following methods are described in the ENVI User’s Guide. These were developed specifically for
use on hyperspectral data, but they provide an alternative method for classifying multispectral data, often
with improved results that can easily be compared to spectral properties of materials. They typically are
used from the Endmember Collection dialog using image or library spectra; however, they can also be
started from the Classification > Supervised menu option.
See the ENVI Tutorial Spectral Angle Mapper (SAM) and Spectral Information Divergence (SID)
Classification for details of the SAM and SID classification methods.
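For reference, the quantity at the heart of SAM is simply the angle between a pixel's spectrum and a reference spectrum, treated as vectors in band space. A minimal sketch:

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Spectral angle in radians between each pixel spectrum and a
    reference spectrum; smaller angles indicate a closer match."""
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    # Clip guards against floating-point values just outside [-1, 1]
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))
```

Because the angle depends only on spectral shape, not magnitude, SAM is relatively insensitive to illumination differences.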
Exploring Rule Images
ENVI creates images that show the pixel values used to create the classified image. These optional
images allow users to evaluate classification results and to reclassify if desired based on thresholds.
These are gray scale images: one for each ROI or endmember spectrum used in the classification.
The rule image pixel values represent different things for different types of classifications, for example:

Parallelepiped: Number of bands satisfying the parallelepiped criteria
Minimum Distance: Sum of the distances from the class means
Maximum Likelihood: Probability of pixel belonging to class
Mahalanobis Distance: Distances from the class means
Binary Encoding: Binary match in percent
Spectral Angle Mapper: Spectral angle in radians (smaller angles indicate closer match to the reference spectrum)
1. From the ENVI main menu bar, select File > Open Image File.
2. Navigate to the Data\can_tm directory, select the file can_rul.img from the list, and click
Open. The Available Bands List appears on your screen.
3. Click on the Gray Scale radio button in the Available Bands List and open each Rule band into its
own image window (use the Display > New Display button).
4. Use image linking and dynamic overlay to compare the color composite image to the rule images.
5. From the Display group menu bar, select Tools > Color Mapping > ENVI Color Tables and
drag the Stretch Bottom and Stretch Top sliders to opposite ends of the dialog. Areas with low
spectral angles (more similar spectra) appear bright.
Post Classification Processing
Classified images require post-processing to evaluate classification accuracy and to generalize classes
for export to image-maps and vector GIS. Post Classification can be used to classify rule images; to
calculate class statistics and confusion matrices; to apply majority or minority analysis to classification
images; to clump, sieve, and combine classes; to overlay classes on an image; to calculate buffer zone
images; to calculate segmentation images; and to output classes to vector layers. ENVI provides a series
of tools to satisfy these requirements.
Extracting Class Statistics
This function allows you to extract statistics from the image used to produce the classification. Separate
statistics consisting of basic statistics, histograms, and average spectra are calculated for each class
selected.
1. From the ENVI main menu bar, select Classification > Post Classification > Class Statistics.
The Classification Input File dialog appears.
2. Click the Open drop-down button and select New File.
3. Navigate to the Data\can_tm directory, select the file can_pcls.img from the list, and
click Open. The Statistics Input File appears.
4. Select the can_tmr.img file and click OK. The Class Selection dialog appears.
5. Click the Select All Items button and click OK. The Compute Statistics Parameters dialog
appears.
6. Click the Basic Stats, Histograms, Covariance, and Covariance Image check boxes in the
Compute Statistics Parameters dialog to calculate all the possible statistics.
7. Click OK to compute the statistics. The Class Statistics Results dialog appears.
Generating a Confusion Matrix
ENVI’s confusion matrix function allows comparison of two classified images (the classification and the
“truth” image), or a classified image and ROIs. The truth image can be another classified image, or an
image created from actual ground truth measurements. In this exercise, you will compare the
Parallelepiped and SAM classification images using the Parallelepiped classification image as the
ground truth.
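The tally underlying a confusion matrix is straightforward. A sketch assuming both classifications are integer rasters with matching class codes (the function name and the 0-based encoding are assumptions):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Count how the 'truth' classes map onto the predicted classes;
    overall accuracy is the diagonal sum over the total count."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    # Increment the (truth, predicted) cell for every pixel pair
    np.add.at(cm, (truth.ravel(), predicted.ravel()), 1)
    accuracy = np.trace(cm) / cm.sum()
    return cm, accuracy
```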
1. From the ENVI main menu bar, select Classification > Post Classification > Confusion Matrix
> Using Ground Truth Image. The Classification Input File dialog appears.
2. Select the can_pcls.img file and click OK. The Ground Truth Input File appears.
3. Click the Open drop-down button and select New File.
4. Navigate to the Datacan_tm directory, select the file can_sam.img from the list, and click
Open.
5. Select the can_sam.img file in the Ground Truth Input File dialog and click OK. The Match
Classes Parameters dialog appears.
6. Select Region #1 from both fields and click Add Combination. Continue to pair corresponding
classes from the two images in this way, then click OK. The Confusion Matrix Parameters dialog
appears.
7. Click the Output Result to Memory radio button then click OK.
8. Examine the confusion matrix and confusion images (in the Available Bands List). Determine
sources of error by comparing the classified image to the original reflectance image using
dynamic overlays, spectral profiles, and Cursor Location/Value.
Clumping and Sieving
Clump and Sieve are used to generalize classification images. Sieve is usually run first to remove the
isolated pixels based on a size (number of pixels) threshold, then clump is run to add spatial coherency to
existing classes by combining adjacent similar classified areas.
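The sieve step amounts to deleting connected classified regions smaller than a pixel-count threshold. A sketch using SciPy's connected-component labeling (the connectivity and the default threshold are illustrative, not ENVI's exact parameters):

```python
import numpy as np
from scipy import ndimage

def sieve(class_image, min_size=4):
    """Set classified regions smaller than min_size pixels to 0
    (unclassified), leaving larger regions untouched."""
    out = class_image.copy()
    for c in np.unique(class_image):
        if c == 0:
            continue                          # skip unclassified pixels
        mask = class_image == c
        regions, n = ndimage.label(mask)      # connected components
        sizes = ndimage.sum(mask, regions, list(range(1, n + 1)))
        for i, size in enumerate(sizes, start=1):
            if size < min_size:
                out[regions == i] = 0         # sieve out small regions
    return out
```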
1. From the ENVI main menu bar, select Classification > Post Classification > Sieve Classes. The
Classification Input File dialog appears.
2. Select the can_sam.img file within the Select Input File section of this dialog and click OK.
The Sieve Parameters dialog appears.
3. Click the Output Result to Memory radio button, then click OK. The image is loaded into the
Available Bands List.
4. You will now use the output of the sieve operation as the input for clumping. From the ENVI main
menu bar, select Classification > Post Classification > Clump Classes. The Classification Input
File dialog appears.
5. Select the previously created image file from memory, and click OK. The Sieve Parameters
dialog appears.
6. Click the Output Result to Memory radio button, then click OK. The image is loaded into the
Available Bands List.
7. Compare the three images (can_sam.img, Clump, and Sieve) and reiterate if necessary to
produce a generalized classification image.
8. Optional: compare the pre-calculated results in the files can_tm\can_sv.img (sieve) and
can_clmp.img (clump of the sieve result) to the classified image can_pcls.img
(parallelepiped classification) or calculate your own images and compare to one of the
classifications.
Combining Classes
The Combine Classes function provides an alternative method for classification generalization. Similar
classes can be combined to form one or more generalized classes.
1. From the ENVI main menu bar, select Classification > Post Classification > Combine Classes
or review the pre-calculated results of classifying the image by opening the can_comb.img file
in the can_tm directory. The Classification Input File dialog appears.
2. Select the can_sam.img file and click OK. The Combine Classes Parameters dialog appears.
3. Select Region #3 from the Select Input Class field, click Unclassified from the Select Output
Class field, click Add Combination, then click OK. The Combine Classes Output dialog
appears.
4. Click the Output Result to Memory radio button then click OK. The image is loaded into the
Available Bands List.
5. Using image linking and dynamic overlays, compare the combined class image to the classified
images and the generalized classification image.
Overlaying Classes
Overlay classes allow you to place the key elements of a classified image as a color overlay on a gray
scale or RGB image.
You can examine the pre-calculated image can_tm\can_ovr.img or create your own overlay(s)
from the can_tmr.img reflectance image and one of the classified images.
1. From the ENVI main menu bar, select Classification > Post Classification > Overlay Classes or
review the pre-calculated results of classifying the image by opening the can_comb.img file in
the can_tm directory. The Input Overlay RGB Image Input Bands dialog appears.
2. Under can_tmr.img in the Available Bands List, select Band 3 for each RGB band (Band 3
for the R band, Band 3 for the G band, and Band 3 for the B band) and click OK. The
Classification Input File dialog appears.
3. Click Open, and select New File. A file selection dialog appears.
4. Open can_tm\can_comb.img, and click Open.
5. Click OK in the Classification Input File dialog.
6. Using the Shift key on your keyboard, select Region #1 and Region #2 in the Class Overlay to
RGB Parameters dialog.
7. Click the Output Result to Memory radio button, then click OK. The image is loaded into the
Available Bands List.
8. Load the overlay image to a new display group.
9. Using image linking and dynamic overlays, compare this image to the classified image and the
reflectance image.
Editing Class Colors
When a classification image is displayed, you can change the color associated with a specific class by
editing the class colors.
1. From the Display group menu bar, select Tools > Color Mapping > Class Color Mapping. The
Class Color Mapping dialog appears.
2. Click on one of the class names in the Class Color Mapping dialog and change the color by
dragging the appropriate color sliders or entering the desired data values. Changes are applied to
the classified image immediately.
3. To make the changes permanent, select Options > Save Changes from the menu bar in this dialog.
Working with Interactive Classification Overlays
In addition to the methods above for working with classified data, ENVI also provides an interactive
classification overlay tool. This tool allows you to interactively toggle classes on and off as overlays on
a displayed image, to edit classes, get class statistics, merge classes, and edit class colors.
1. From the Available Bands List, load Band 4 of can_tmr.img as a gray scale image.
2. From the Display group menu bar, select Overlay > Classification. The Interactive Class Tool
Input File dialog appears.
3. Select the can_sam.img file and click OK. The Interactive Class Tool appears with each class
listed along with its corresponding colors.
4. Click each On check box to change the display of each class as an overlay on the gray scale
image.
5. Explore the various options for assessing the classification using the Interactive Class Tool
Options menu.
6. Interactively change the contents of specific classes using the Interactive Class Tool Edit menu.
7. From the Display group menu bar, select File > Save Image As > Image File to burn in the
classes and output to a new file.
8. From the Interactive Class Tool menu bar, select File > Cancel to exit the interactive tool.
Overlaying Vector Layers
You can load pre-calculated vector layers onto a gray scale reflectance image for comparison to raster
classified images, or convert one of the classification images to vector layers.
1. Load the can_clmp.img into a display group.
2. From the Display group menu bar, select Overlay > Vectors. The Vector Parameters: Cursor
Query dialog appears.
3. From the Vector Parameters: Cursor Query dialog menu bar, select File > Open Vector File.
4. Navigate to the Data\can_tm directory, and use the Shift key on your keyboard to select the
files can_v1.evf and can_v2.evf. Click Open. The vectors derived from the classification
polygons will outline the raster classified pixels.
Converting a Classification to a Vector
1. From the ENVI main menu bar, select Classification > Post Classification > Classification to
Vector. The Raster to Vector Input Band dialog appears.
2. In the Select Input File section of this dialog, select the Clump result (can_clmp.img) and
click OK. The Raster to Vector Parameters dialog appears.
3. Using the Shift key on your keyboard, select Region #1 and Region #2 from the Select Input
Class field.
4. In the Enter Output Filename field, type canrty and click OK to begin the conversion. The
layers are loaded into the Available Vectors List.
5. Select Region #1 and Region #2 in the Available Vectors List dialog then click Load Selected.
6. Select a display number from the Load Vector dialog and click OK.
7. From the Vector Parameters dialog menu bar, select Edit > Edit Layer Properties to change the
colors and fill of the vector layers to make them more visible.
8. Using image linking and dynamic overlays, compare the combined class image to the classified
images.
Adding Classification Keys Using Annotation
ENVI provides annotation tools to put classification keys on images and in map layouts. The
classification keys are automatically generated.
1. From the Display group menu bar, select Overlay > Annotation for either one of the classified
images, or for the image with the vector overlay.
2. From the Annotation menu bar, select Object > Map Key to start annotating the image. You can
edit the key characteristics by clicking the Edit Map Key Items button in the dialog and changing
the desired characteristics.
3. Click once with the left mouse button in the Image window to place the map key in the image
window.
4. Click and drag the map key with the left mouse button to reposition it within the display.
5. Click in the display with the right mouse button to finalize the position of the key. For more
information about image annotation, please see ENVI Help.