Satellite image processing is a technique for enhancing raw images received from cameras or sensors placed on satellites, space probes and aircraft, as well as pictures taken in normal day-to-day life, in various applications.
Digital image processing focuses on two major tasks:
- Improvement of pictorial information for human interpretation
- Processing of image data for storage, transmission and representation for autonomous machine perception
This presentation briefly describes digital image processing and its various procedures and techniques, including image correction or rectification of remote sensing data/images. It also covers various image classification techniques.
The advantage of digital imagery is that it allows us to manipulate the digital pixel values in the image. Even after radiometric corrections, an image may still not be optimized for visual interpretation. An image 'enhancement' is basically anything that makes the image easier or better to interpret visually. An enhancement is also performed for a specific application: an enhancement suited to one purpose may be inappropriate for another, which would demand a different type of enhancement.
Filtering is used to enhance the appearance of an image. Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency. 'Rough' textured areas of an image, where the changes in tone are abrupt, have high spatial frequencies, while 'smooth' areas with little variation have low spatial frequencies. A common filtering procedure involves moving a 'matrix' of a few pixels in dimension (e.g., 3x3, 5x5, etc.) over each pixel in the image, applying a mathematical calculation and replacing the central pixel with the new value.
A low-pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. Thus, low-pass filters generally serve to smooth the appearance of an image. In some cases, such as low-pass filtering, the enhanced image can actually look worse than the original, but such an enhancement is typically performed to help the interpreter see low-spatial-frequency features among the usual high-frequency clutter found in an image. High-pass filters do the opposite and serve to sharpen the appearance of fine detail in an image. Directional, or edge detection, filters are designed to highlight linear features, such as roads or field boundaries. These filters can also be designed to enhance features oriented in specific directions.
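As a sketch of the moving-window procedure described above, the following Python fragment applies a 3x3 low-pass (mean) kernel and a 3x3 high-pass (Laplacian-style) kernel to a toy image containing one abrupt edge. The image values and kernel weights are illustrative, not taken from the presentation.

```python
import numpy as np

def apply_filter(image, kernel):
    """Move a 3x3 kernel over each interior pixel, replacing the central
    pixel with the weighted sum of its 3x3 neighbourhood."""
    out = image.copy()
    h, w = image.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r, c] = np.sum(image[r - 1:r + 2, c - 1:c + 2] * kernel)
    return out

# Toy 6x6 image: dark on the left, bright on the right (one abrupt edge,
# i.e., one high-spatial-frequency feature).
image = np.zeros((6, 6))
image[:, 3:] = 100.0

low_pass = np.ones((3, 3)) / 9.0            # mean kernel: smooths tone
high_pass = np.array([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]])     # emphasizes abrupt changes

smoothed = apply_filter(image, low_pass)    # the edge is blurred
edges = apply_filter(image, high_pass)      # nonzero only near the edge
```

In `smoothed`, pixels next to the edge take intermediate values (the detail is reduced); in `edges`, homogeneous areas map to zero and only the boundary survives.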
THIS PRESENTATION IS TO HELP YOU PERFORM THE TASK STEP BY STEP.
This presentation is highly useful for geography students in the field of remote sensing; it is kept simple and explanatory, with relevant images, for the purpose of simplification.
In the context of remote sensing, change detection refers to the process of identifying differences in the state of land features by observing them at different times. This process can be accomplished either manually (i.e., by hand) or with the aid of remote sensing software. Manual interpretation of change from satellite images or aerial photos involves an observer or analyst defining areas of interest and comparing them between images from two dates. This may be accomplished either on-screen (such as in a GIS) or on paper. When analyzing aerial photographs, a stereoscope, which allows two spatially overlapping photos to be displayed in 3D, can aid photo interpretation. Manual image interpretation works well when assessing change between discrete classes (forest openings, land use and land cover maps) or when changes are large (e.g., heavy mechanized maneuver damage, engineering training impacts). Manual image interpretation is also an option when trying to determine change using images or photos from different sources (comparing historic aerial photographs to current satellite imagery).
Automated methods of remote sensing change detection usually take one of two forms: post-classification change detection and image differencing using band ratios. In post-classification change detection, the images from each time period are classified using the same classification scheme into a number of discrete categories, such as land cover types. The two (or more) classifications are compared and the area that is classified the same or differently is tallied. With image differencing, a band ratio such as NDVI is constructed from each input image, and the difference is taken between the band ratios of different times. In the case of differencing NDVI images, positive output values may indicate an increase in vegetation, negative values a decrease in vegetation, and values near zero no change. With either post-classification or image differencing change detection, it is necessary to specify a threshold below which differences between the two images are considered non-significant. The specification of thresholds is critical to the results of change detection analysis and usually must be found through an iterative process.
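As a minimal sketch of the image-differencing approach, assuming tiny hypothetical red and near-infrared reflectance arrays (not real scene data), NDVI is computed for each date, differenced, and thresholded into gain, loss, or no change:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

# Hypothetical 2x2 reflectance bands for two acquisition dates.
red_t1 = np.array([[0.10, 0.10], [0.30, 0.10]])
nir_t1 = np.array([[0.50, 0.50], [0.35, 0.50]])
red_t2 = np.array([[0.10, 0.30], [0.30, 0.10]])
nir_t2 = np.array([[0.50, 0.32], [0.35, 0.50]])

diff = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)

# Differences with magnitude below the threshold are non-significant;
# in practice the threshold is tuned iteratively, as noted above.
threshold = 0.1
change = np.where(diff > threshold, "gain",
                  np.where(diff < -threshold, "loss", "no change"))
```

Here the upper-right pixel drops from a strongly vegetated NDVI to one near zero, so it is flagged as a loss; the other pixels keep the same band values and fall inside the no-change band.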
3. WHAT IS SATELLITE IMAGE PROCESSING?
It is a technique to enhance raw images received from cameras or sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life in various applications.
5. SATELLITE IMAGERY
The resolution of the sensor defines the pixel size, the detail and the accuracy.
1. Spatial resolution: the area on the ground represented by each pixel.
2. Temporal resolution: how often a satellite obtains imagery of a particular area.
3. Spectral resolution: the specific wavelength interval in the electromagnetic spectrum.
4. Radiometric resolution: how finely the sensor records the brightness of an object. Its range is expressed as a power of 2, i.e., 2^n.
5. View angle resolution: the number of angles at which ground objects are recorded.
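To make the 2^n relationship for radiometric resolution concrete, a short sketch (the bit depths below are illustrative examples, not taken from any particular sensor):

```python
def radiometric_levels(bits):
    """Number of distinct brightness (digital number) values an
    n-bit sensor can record: 2**n."""
    return 2 ** bits

# A 1-bit sensor only separates dark from bright; an 8-bit sensor
# records 256 levels (DN 0..255); an 11-bit sensor records 2048.
for bits in (1, 8, 11):
    levels = radiometric_levels(bits)
    print(f"{bits}-bit data: {levels} levels (DN 0..{levels - 1})")
```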
6. REMOTE SENSING
Remote sensors are devices that sense energy from a remote location.
• Remote sensing is the science of acquiring, processing and interpreting information or data collected by remote sensors.
• This technology is very useful for capturing the condition of the ground surface with high resolution and for direct visual assessment of affected regions.
• The 2004 tsunami was assessed using IKONOS and QuickBird information resources.
7. METHODOLOGY FOR CHANGE DETECTION OF REMOTE SENSING
IMAGE ALGEBRA:
It identifies the amount of change between pre- and post-event images.
D(i,j,k) = BV(i,j,k)[1] - BV(i,j,k)[2] + c
where
D(i,j,k): change pixel value
BV(i,j,k)[1]: brightness value at time 1
BV(i,j,k)[2]: brightness value at time 2
c: constant
i: row index
j: column index
k: single band
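The image-algebra formula above can be sketched in a few lines of NumPy; the brightness values and the constant c below are made-up illustrations:

```python
import numpy as np

# Hypothetical brightness values of one band (k) at time 1 and time 2.
bv_t1 = np.array([[120, 120, 118],
                  [121, 119, 120],
                  [120,  60,  58]])
bv_t2 = np.array([[121, 119, 118],
                  [120, 120, 121],
                  [119, 140, 141]])

c = 127  # constant offset keeping D inside a displayable 0..255 range

# D(i,j,k) = BV(i,j,k)[1] - BV(i,j,k)[2] + c, applied per pixel
d = bv_t1 - bv_t2 + c

# Pixels with D near c (127) are unchanged; the bottom-right pixels,
# which brightened sharply between the dates, land far from 127.
```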
8. IMAGE SEGMENTATION
Image segmentation is the appropriate strategy to acquire image objects. It divides an image into spatially continuous, disjoint and homogeneous regions on the basis of homogeneity, and these regions are referred to as IOs (image objects).
9. SATELLITE IMAGE OPERATORS
1. Arithmetic operators: addition, subtraction, multiplication, division, exponent, complement and negation.
2. Spatial transformation: blurring, sharpening, convolution and filtering.
3. Edge, line and spot detection: performs gradient transformation.
4. Colour conversion operator: transforms the image from one colour model to another.
5. Geometric transformation: rotation, scale and warp.
6. Forward and inverse Fourier transformation.
10. GOOGLE MAPS
A web mapping service application and technology provided by Google that powers many map-based services.
Provides high-resolution aerial or satellite images for most urban areas.
Download Map Area: enables the user to download the basic road map. It can download up to 26 sq. km around the spot.
11. Various governments have complained that terrorist attacks are planned using satellite images, so Google has blurred some areas for security, such as the U.S. Naval Base and the White House.
According to a 2012 survey, it provides voice guidance and live traffic information in the cities of Bengaluru, Mumbai, New Delhi, Chennai, Pune and Hyderabad.
12. FEATURES OF GOOGLE MAPS
• Navigation
• Search in plain English
• Search by voice
• Traffic view
• Satellite view
• Street view
13. APPLICATION
• The real-time processing of satellite images on grid architectures could reveal geographic and environmental information, e.g., soil, vegetation, water depth and air.
• Satellite imaging is prevalent in many consumer apps today, e.g., Google Maps, Google Earth, GPS cars.
• EMAN: a bio-imaging workflow application.
• Pegasus: a mapping engine of dataflow.
• GeoEye-1, a satellite launched in 2008, has the highest resolution.
• EROS satellites are lightweight, with high resolution and high performance.
• Meteosat-2 is a geostationary weather satellite.