Weather forecasting, which predicts the future state of the atmosphere, relies heavily on cloud cover identification but generally requires the experience of a well-trained meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation over the Indian territory. Speeded Up Robust Features (SURF) is applied to the satellite images to obtain affine-corrected images. The cloud regions extracted from the affine-corrected images by Otsu thresholding are superimposed on artificial grids representing latitude and longitude over India. The segmented cloud and grid composition drive a look-up table mechanism to identify the cloud cover regions. Owing to its simplicity, the proposed method processes test images quickly and provides accurate segmentation of cloud cover regions.
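The Otsu thresholding step used above to separate cloud from background can be sketched as follows. This is a minimal illustrative implementation of Otsu's method on an invented toy image, not the paper's code; the intensity values and cloud-patch location are assumptions for the example.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image array.

    Exhaustively searches for the threshold that maximizes the
    between-class variance of the background/foreground split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy "satellite" image: dark ground (~30) with a bright cloud patch (~220).
img = np.full((64, 64), 30, dtype=np.uint8)
img[16:32, 16:32] = 220
t = otsu_threshold(img)
cloud_mask = img >= t          # pixels at or above the threshold are "cloud"
```

The resulting binary mask is what would then be overlaid on the latitude/longitude grid to drive the look-up table.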
A new approach of edge detection in sar images using region based active cont...eSAT Journals
Abstract: This paper presents a new methodology for edge detection in complex radar images. The approach consists of an edge improvisation algorithm followed by edge detection. Because complex radar data is highly heterogeneous, edge enhancement is required before edge detection, which justifies the use of the discrete wavelet transform in the edge improvisation algorithm. A region-based active contour model is then used as the edge detection algorithm. The paper proposes a distribution-fitting energy with a level set function, using neighborhood means and variances as variables. The performance is tested on different images and the results are analyzed. Keywords: Edge detection, Edge improvisation, Synthetic Aperture Radar (SAR), wavelet transforms.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...CSCJournals
The document discusses an unsupervised method for extracting buildings from high resolution satellite images regardless of rooftop structures. The method first calculates NDVI and chromaticity ratios to segment vegetation and shadows. Rooftops and roads are then detected and eliminated. Principal component analysis and area analysis are performed to accurately extract buildings. The algorithm aims to eliminate inhomogeneities caused by varying building hierarchies by focusing on eliminating non-building regions rather than detecting building regions of interest. The methodology is tested on Quickbird satellite imagery and results indicate it can extract buildings in complex environments irrespective of rooftop shape.
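The NDVI step in the pipeline above is a standard band ratio. Below is a minimal sketch; the band reflectance values and the 0.3 vegetation cutoff are invented for illustration, not taken from the paper.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical per-pixel band reflectances: vegetation is bright in NIR.
nir = np.array([0.50, 0.40, 0.10])   # vegetated, vegetated, bare/shadow
red = np.array([0.08, 0.10, 0.09])
v = ndvi(nir, red)
veg_mask = v > 0.3                    # assumed vegetation threshold
```

Pixels flagged by `veg_mask` would be removed before the building-extraction stages.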
Visibility Enhancement of Hazy Images using Depth Estimation ConceptIRJET Journal
This document presents a methodology to improve the visibility of hazy images using depth estimation. The proposed method first converts the input hazy image into white balance and contrast enhanced images. Depth estimation is then performed on these images to estimate the unknown depth from the camera to objects in the scene. Weight maps are generated from the white balance and contrast images and applied to Gaussian and Laplacian pyramids to estimate depth. A gamma correction is applied to the depth estimated image to further improve visibility. Experimental results show that the gamma corrected image has better visibility and a higher PSNR than the depth estimated image alone.
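The PSNR comparison mentioned at the end is a standard metric; a minimal sketch (the image values here are invented for the example):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0                   # constant error of 10 -> MSE = 100
print(round(psnr(ref, noisy), 2))    # 28.13
```

Higher PSNR between the gamma-corrected output and a reference indicates better restoration, which is the paper's reported result.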
Single Image Fog Removal Based on Fusion Strategy csandit
Images of outdoor scenes are degraded by absorption and scattering from suspended particles and water droplets in the atmosphere. The light travelling from a scene towards the camera is attenuated by fog and blended with the airlight, which adds whiteness to the scene. Fog removal is highly desirable in computer vision applications, since removing fog can significantly increase the visibility of the scene and make it more visually pleasing. In this paper, we propose a method that handles both homogeneous and heterogeneous fog and has been tested on several types of synthetic and real images. We formulate the restoration problem as a fusion strategy that combines two images derived from a single foggy image: one derived using a contrast-based method and the other using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using mean square error and peak signal-to-noise ratio as performance metrics to compare our technique with state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than state-of-the-art single-image defogging algorithms.
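The weighted fusion of two derived images can be sketched as a per-pixel normalized blend. The tiny arrays and the weight maps below are invented for illustration; the paper's actual weight-map construction is not specified here.

```python
import numpy as np

# Two images derived from one foggy input (contrast-based and
# statistics-based, per the abstract). Toy 2x2 values:
contrast_img = np.array([[0.9, 0.2], [0.4, 0.8]])
stat_img     = np.array([[0.5, 0.6], [0.7, 0.3]])

# Assumed per-pixel weight maps favouring whichever derivation is
# more reliable at each pixel.
w1 = np.array([[0.8, 0.2], [0.5, 0.9]])
w2 = np.array([[0.2, 0.8], [0.5, 0.1]])

# Normalize the weights to sum to 1 at every pixel, then blend.
wsum = w1 + w2
fused = (w1 * contrast_img + w2 * stat_img) / wsum
```

Normalizing the weights pixel-wise keeps the fused image in the same intensity range as its inputs.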
An Automatic Neural Networks System for Classifying Dust, Clouds, Water, and ...Waqas Tariq
This paper presents an automatic remote sensing system designed to classify dust, clouds, water and vegetation features in the Red Sea area, allowing testing and classification to proceed without retraining. The system can rebuild the architecture of the neural network (NN) according to a linear combination of the number of epochs, the number of neurons, the training functions, the activation functions, and the number of hidden layers. The proposed system is trained on features of the provided images using 13 training functions, and is designed to find the networks that classify best on data not included in the training set. The system shows excellent classification of test data drawn from the training data; the performances of the best three training functions are 99.82%, 99.64% and 99.28% on test data not included in the training data. Although the proposed system was trained on data selected from only one image, it correctly classifies the features in all images, and it can be applied to remotely sensed images to classify other features. The system was applied to several sub-images to classify the specified features; classification performance was calculated by applying the proposed system to small sections selected from contiguous areas containing the features.
The single image dehazing based on efficient transmission estimationAVVENIRE TECHNOLOGIES
We propose a novel haze imaging model for single image haze removal. The haze imaging model is formulated using the dark channel prior (DCP), scene radiance, intensity, atmospheric light and the transmission medium. The dark channel prior is based on statistics of outdoor haze-free images: in most local regions that do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is mainly contributed by the airlight, so these dark pixels can directly provide an accurate estimation of the haze transmission. Combining the haze imaging model with an interpolation method, we can recover a high-quality haze-free image and produce a good depth map.
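The dark channel itself is simple to compute: a per-pixel minimum over the color channels followed by a minimum filter over a local patch. This is a minimal sketch with an invented toy image; a real implementation would use an efficient min filter rather than explicit loops.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an HxWx3 image: min over RGB per pixel, then a
    min filter over a patch x patch neighborhood (edge-padded)."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Haze-free toy patch: at least one channel is near zero everywhere,
# so the dark channel is ~0, matching the prior's statistics.
hazefree = np.zeros((5, 5, 3))
hazefree[..., 0] = 0.9          # red is high, but green/blue stay at zero
dc = dark_channel(hazefree)
```

In a hazy image the airlight lifts these values, which is why the dark channel serves as a transmission estimate.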
Understanding climate model evaluation and validationPuneet Sharma
The document discusses model evaluation and validation. It introduces key concepts like evaluation, validation, and the apple-orange problem when directly comparing models and observations. It describes using a satellite simulator like COSP to facilitate apple-to-apple comparisons by simulating what satellites would observe from the model. The document also notes issues with observations like errors and uncertainties that must be considered during evaluation.
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...Pioneer Natural Resources
This document proposes techniques for detecting dust storms and determining their direction of transport using NOAA-AVHRR satellite imagery. It introduces a two-part approach using image processing algorithms: 1) A visualization technique uses filters, edge detectors and classification to locate dust sources. 2) An automation technique performs power spectrum analysis to detect dust storm direction by analyzing texture orientation in image blocks. The goal is to automatically detect and track dust storms for applications like hazard monitoring.
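The power-spectrum orientation analysis described in part 2 can be illustrated with a small sketch: the dominant orientation of a texture block shows up as an energy peak in its 2-D power spectrum. The synthetic stripe pattern below is invented for the example and the peak-picking rule is an assumption, not the paper's exact algorithm.

```python
import numpy as np

def dominant_orientation(block):
    """Estimate the dominant texture orientation (degrees, in [0, 180))
    of an image block from the peak of its centered 2-D power spectrum."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    spec[cy, cx] = 0.0                     # suppress the DC component
    py, px = np.unravel_index(np.argmax(spec), spec.shape)
    # Spectral energy lies perpendicular to the spatial stripes.
    return np.degrees(np.arctan2(py - cy, px - cx)) % 180.0

# Horizontal stripes (intensity varies along y) -> spectral peak on the
# vertical frequency axis, i.e. an orientation of 90 degrees.
y = np.arange(64)[:, None]
stripes = np.sin(2 * np.pi * y / 8.0) * np.ones((64, 64))
angle = dominant_orientation(stripes)
```

Applied block-by-block, such estimates give the local transport direction of the dust plume texture.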
The document discusses image reconstruction techniques in nuclear medicine. It begins with an introduction to image reconstruction and definitions of key terms. It then describes analytical reconstruction methods like back projection and filtered back projection, as well as iterative reconstruction methods including algebraic reconstruction, statistical reconstruction, and maximum likelihood reconstruction. Both analytical and iterative methods are discussed and their properties and steps are outlined. The document provides an overview of various image reconstruction algorithms used in nuclear medicine.
Investigations on real time RSSI based outdoor target tracking using kalman f...IJECEIAES
Target tracking is essential for localization and many other applications in Wireless Sensor Networks (WSNs). A Kalman filter is used to reduce measurement noise in target tracking. In this research, TelosB motes are used to measure the Received Signal Strength Indication (RSSI). RSSI measurement does not require any external hardware, in contrast to other distance estimation methods such as Time of Arrival (ToA), Time Difference of Arrival (TDoA) and Angle of Arrival (AoA). Distances between beacon and non-anchor nodes are estimated from the measured RSSI values, and the position of the non-anchor node is estimated from those distances. A new algorithm, called the MoteTrack InOut system, is proposed with a Kalman filter for location estimation and target tracking in order to improve localization accuracy. The system is implemented in real time for indoor and outdoor tracking; the localization error reduction obtained in an outdoor environment is 75%.
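Kalman filtering of noisy RSSI readings can be sketched with a scalar filter for a nearly static signal level. The noise variances, the -60 dBm level and the random data below are assumptions for illustration, not values from the paper.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=4.0):
    """Scalar Kalman filter for a (nearly) static RSSI level.

    q: assumed process noise variance; r: assumed measurement noise
    variance. Returns the filtered estimate after each measurement."""
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                    # predict: uncertainty grows slightly
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_rssi = -60.0                    # hypothetical beacon level in dBm
noisy = true_rssi + rng.normal(0, 2.0, 200)
smoothed = kalman_1d(noisy)
```

The smoothed series has far lower variance than the raw RSSI, which is what makes the subsequent distance estimates usable.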
Spectroscopy, or hyperspectral imaging, consists of the acquisition, analysis, and extraction of the spectral information measured on a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering; however, due to the high dimensionality of the data and the small distance between the different material signatures, clustering such data is a challenging task. In this paper, we empirically compared five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Center, Pavia, and Pavia University. Besides accuracy, we adopted four more similarity measures: the Rand statistic, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. According to accuracy, fuzzy C-means clustering does better on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Center data set, and hierarchical clustering is better for Pavia University.
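Of the methods compared, K-means is the simplest to sketch. Below is a plain Lloyd's-algorithm implementation run on two invented, well-separated "material signatures" in a 4-band space; real hyperspectral pixels have hundreds of bands and much closer signatures, which is exactly what makes the task hard.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means (Lloyd's algorithm) on the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two synthetic "signatures" in a 4-band space, clearly separated.
rng = np.random.default_rng(1)
a = rng.normal([0.1, 0.2, 0.1, 0.3], 0.02, (50, 4))
b = rng.normal([0.7, 0.6, 0.8, 0.5], 0.02, (50, 4))
X = np.vstack([a, b])
labels, _ = kmeans(X, 2)
```

External measures such as the Rand statistic or Jaccard coefficient would then compare `labels` against ground-truth material classes.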
Photoacoustic tomography based on the application of virtual detectorsIAEME Publication
This document discusses using virtual detectors to improve photoacoustic tomography (PAT) image reconstruction when full scanning data is unavailable. It proposes interpolation and compressed sensing methods to generate virtual detector data and increase the number of measurements. Simulation results show applying these methods to preprocessed photoacoustic data significantly improves the peak signal-to-noise ratio of reconstructed images compared to direct reconstruction with limited detectors. Dictionary-based compressed sensing provides the best performance by learning an over-complete dictionary to sparsify signals. The methods allow better quality PAT imaging when hardware and spatial constraints limit actual detector positions and sampling angles.
Estimation of global solar radiation by using machine learning methodsmehmet şahin
In this study, global solar radiation (GSR) was estimated for 53 locations using the ELM, SVR, KNN, LR and NU-SVR methods. The methods were trained with a two-year data set and their accuracy was tested with a one-year data set; each year's data set consisted of 12 months. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as inputs for developing the models, with GSR obtained as output. Values of vapour pressure deficit and land surface temperature were taken from the radiometry of the NOAA-AVHRR satellite. The estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR, with RMSE and MBE values of 1.4972 MJ/m2 and 0.2652 MJ/m2, respectively, and an R value of 0.9728. The worst prediction method was LR. For the other methods, RMSE values ranged between 1.7746 MJ/m2 and 2.4546 MJ/m2. The statistical results show that the ELM, SVR, k-NN and NU-SVR methods can be used for estimation of GSR.
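The RMSE and MBE metrics quoted above are standard; a minimal sketch follows. The sample GSR values are invented, and note that the sign convention for MBE (actual minus predicted here) varies between papers.

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error."""
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

def mbe(actual, predicted):
    """Mean bias error, here as actual minus predicted (one common
    convention; some papers use the opposite sign)."""
    return np.mean(np.asarray(actual) - np.asarray(predicted))

# Hypothetical daily GSR values (MJ/m^2) vs. model estimates.
actual    = np.array([18.0, 20.0, 15.0, 22.0])
predicted = np.array([17.0, 21.0, 14.5, 22.5])
err_rmse = rmse(actual, predicted)
err_mbe = mbe(actual, predicted)
```

RMSE penalizes large errors quadratically, while MBE reveals systematic over- or under-prediction that RMSE alone hides.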
This document summarizes a study that used artificial neural networks to estimate soil moisture levels from cosmic ray sensor neutron count data. Specifically:
- Five neural networks (FFBPN, MLPN, RBFN, Elman, PNN) were tested on cosmic ray sensor data from two Australian sites to estimate soil moisture levels from the Australian Water Availability Project database.
- The Elman neural network achieved the best performance, estimating soil moisture levels with 94% accuracy for one site and 91% accuracy for the other.
- This study demonstrated that neural networks can effectively estimate continuous soil moisture levels remotely using cosmic ray sensor neutron count time series data as input.
Supervised machine learning based dynamic estimation of bulk soil moisture us...eSAT Journals
Abstract: In this paper, an artificial neural network based sensor informatics architecture is investigated, including the proposed continuous daily estimation of area-wise surface soil moisture from a cosmic ray sensor's neutron count time series. The study was conducted on cosmic ray data available from two Australian locations. Its main focus was to develop a data-driven approach to convert neutron counts into area-wise ground surface soil moisture estimates. Independent surface soil moisture data from the Australian Water Availability Project (AWAP) was used as ground truth. A comparative study of five different types of neural networks, namely Feed Forward Back Propagation (FFBPN), Multi-Layer Perceptron (MLPN), Radial Basis Function (RBFN), Elman (EN), and Probabilistic (PNN) networks, was conducted to evaluate overall soil moisture estimation accuracy. The Elman network outperformed all the other neural networks, with 94% accuracy, 92% sensitivity and 97% specificity on the Tullochgorum data. This high overall accuracy demonstrates the effectiveness of the Elman neural network for continuously estimating surface soil moisture using cosmic ray sensors. Index Terms: Artificial Neural Network, Surface Soil Moisture, Cosmic Ray Sensors, Neutron Counts.
IRJET- Tool: Segregration of Bands in Sentinel Data and Calculation of NDVIIRJET Journal
This document discusses using satellite imagery and vegetation indices to analyze and map vegetation. It summarizes several research papers on using the Normalized Difference Vegetation Index (NDVI) with different sensors and techniques. Specifically, it examines calculating NDVI from mountain terrain satellite data, using selected bands from Sentinel-2 satellite data for agriculture applications, extracting buildings from satellite images using shadow detection, and applying NDVI to unmanned aerial system multispectral remote sensing for post-disaster assessment. The document also discusses preprocessing techniques and algorithms like random forests and support vector machines for satellite image classification.
AN EFFICIENT IMPLEMENTATION OF TRACKING USING KALMAN FILTER FOR UNDERWATER RO...IJCSEIT Journal
The exploration of oceans and sea beds is being made increasingly possible through the development of Autonomous Underwater Vehicles (AUVs). This activity concerns the marine community and must confront notable challenges. An automatic detection and tracking system is the first and foremost element of an AUV or an aqueous surveillance network. In this paper, a Kalman filter method is presented to solve the problem of object tracking in sonar images. The region of the object is extracted by threshold segmentation and morphological processing, and features based on invariant moments and area are analysed. Results show that the presented method has the advantages of good robustness, high accuracy and real-time performance; it is efficient for underwater target tracking based on sonar images and also suited to obstacle avoidance for AUVs operating in constrained underwater environments.
Haze removal for a single remote sensing image based on deformed haze imaging...LogicMindtech Nologies
This document proposes using cloud computing to rapidly generate earthquake shake maps through stochastic simulations. It describes implementing a map-reduce model on the Microsoft Azure cloud to parallelize simulations for many receivers after an earthquake. Results like peak ground acceleration maps could be produced within minutes to aid scientists and emergency responders. The method was tested in Greece and provides a cost-effective way to process simulations as needed without dedicated computing resources.
RAIN STREAKS ELIMINATION USING IMAGE PROCESSING ALGORITHMSsipij
The paper addresses the problem of rain streak removal from videos. Although rain streak removal is important and much research exists in this area, robust real-time algorithms are not yet available in the market. Difficulties arise from low visibility, low illumination, and moving cameras and objects. The challenge that plagues rain streak recovery algorithms is detecting the rain streaks and replacing them with original values to recover the scene. In this paper, we discuss the use of photometric and chromatic properties for rain detection, while an updated Gaussian Mixture Model (updated GMM) detects moving objects. The rain streak removal algorithm detects rain streaks in videos and replaces them with estimated values close to the original ones, exploiting spatial and temporal properties.
The document discusses particle image velocimetry (PIV), which is a non-intrusive method for measuring fluid flow velocities. PIV works by seeding the fluid with particles and using a laser sheet and camera to capture particle images. Software then tracks the particle movements between images to calculate velocity vectors across the flow field. PIV has applications in analyzing flows around objects like fish, helicopters, and prosthetic heart valves. Advanced PIV systems are being developed that can perform 3D motion tracking.
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
FR2.L09 - PROCESSING OF MEMPHIS MILLIMETER WAVE MULTI-BASELINE INSAR DATAgrssieee
The document summarizes the processing of millimeter-wave (mmW) multi-baseline interferometric synthetic aperture radar (InSAR) data collected by the MEMPHIS system. It describes the MEMPHIS SAR system, data focusing methods, multi-baseline processing algorithms, conversion of phase data to a digital surface model (DSM), correction of azimuth phase undulations, experimental results over a highway roundabout, and discusses challenges with producing accurate products from the movable MEMPHIS hardware.
Image reconstruction in CT is mostly a mathematical process; however, this presentation tries to explain the complicated process of image reconstruction in a visual way, mainly focusing on filtered back projection, iterative reconstruction and AI-based image reconstruction.
Robust foreground modelling to segment and detect multiple moving objects in ...IJECEIAES
This document summarizes a research paper that proposes a robust foreground modeling method to segment and detect multiple moving objects in videos. The proposed method uses a running average technique to model the background and subtract it from video frames to detect foreground objects. Morphological operations like dilation and erosion are applied to reduce noise and merge connected regions. Convex hull processing is also used to define object boundaries more clearly. The method was tested on standard video datasets and achieved better performance than other techniques in segmenting objects under various challenging conditions like illumination changes and occlusion. Experimental results demonstrated high precision, recall and specificity based on comparisons with ground truth data.
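The running-average background model at the core of the method can be sketched in a few lines. The frame values, learning rate and difference threshold below are invented for illustration; the paper's full pipeline also applies morphological operations and convex hull processing, which are omitted here.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update: bg <- (1-alpha)*bg + alpha*frame."""
    return (1.0 - alpha) * bg + alpha * frame.astype(float)

def foreground_mask(bg, frame, thresh=30.0):
    """Pixels that differ from the background by more than thresh
    are labelled foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

# Static scene at intensity 50; a bright 2x2 "object" enters the frame.
bg = np.full((4, 4), 50.0)
frame = np.full((4, 4), 50.0)
frame[1:3, 1:3] = 200.0
mask = foreground_mask(bg, frame)     # detects the 2x2 object
bg = update_background(bg, frame)     # background slowly absorbs the scene
```

The small `alpha` makes the model robust to gradual illumination change while still flagging fast-moving objects.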
The document discusses integrating support vector machines (SVMs) and Markov random fields (MRFs) for remote sensing image classification. SVMs are good at identifying optimal discriminant hypersurfaces but do not consider context between samples. The paper aims to integrate SVMs and MRFs to allow for contextual classification. A novel classifier is proposed that reformulates the MRF minimum-energy decision rule as an SVM discriminant function with a "contextual kernel." Experimental results on real remote sensing datasets show the proposed method provides significantly more accurate classifications compared to a standard noncontextual SVM.
Development and Hardware Implementation of an Efficient Algorithm for Cloud D...sipij
This document discusses the development and hardware implementation of an efficient algorithm for cloud detection from satellite images. The algorithm uses an adaptive thresholding approach to segment clouds from background pixels in satellite imagery. It then determines the position of the segmented clouds to calculate cloud coverage percentages. The algorithm was tested on satellite images from Spot4 and Landsat archives. It was implemented on a TMS320C6713 DSK processor using Code Composer Studio and achieved accurate cloud detection and coverage calculation on images with resolutions up to 3600x3000 pixels.
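The segmentation-then-coverage computation can be sketched as below. The "scene mean plus offset" rule is an assumed stand-in for the paper's adaptive thresholding step, and the image values are invented for the example.

```python
import numpy as np

def cloud_coverage(gray, offset=40.0):
    """Segment bright cloud pixels with an adaptive threshold
    (scene mean + offset, an assumed rule) and return the binary
    mask plus the cloud-cover percentage."""
    thresh = gray.mean() + offset
    mask = gray > thresh
    return mask, 100.0 * mask.mean()

# Toy scene: dark ground at 60 with a bright cloud band covering 25%.
img = np.full((100, 100), 60.0)
img[:25, :] = 230.0
mask, pct = cloud_coverage(img)
```

Because the threshold adapts to the scene mean, the same rule works across images with different overall brightness, which matters for processing archives from different sensors.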
Review on Various Algorithm for Cloud Detection and Removal for ImagesIJERA Editor
Clouds are one of the significant obstacles to extracting information from tea lands using remote sensing imagery. Different approaches have been attempted to solve this problem with varying levels of success, and in the past decade a number of cloud removal approaches have been proposed. In this paper we review and discuss cloud detection and removal: the need for it, its principles, the cloud removal process, and various cloud removal algorithms. The paper attempts to give a recipe for selecting one of the popular cloud removal algorithms, such as the information cloning algorithm, the cloud distortion model and filtering procedure, semi-automated cloud/shadow and haze identification and removal, etc. A cloud removal approach based on information cloning is introduced: using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. The choice of cloud detection algorithm is then decided by the specific requirements of the project.
Understanding climate model evaluation and validation (Puneet Sharma)
The document discusses model evaluation and validation. It introduces key concepts like evaluation, validation, and the apple-orange problem when directly comparing models and observations. It describes using a satellite simulator like COSP to facilitate apple-to-apple comparisons by simulating what satellites would observe from the model. The document also notes issues with observations like errors and uncertainties that must be considered during evaluation.
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima... (Pioneer Natural Resources)
This document proposes techniques for detecting dust storms and determining their direction of transport using NOAA-AVHRR satellite imagery. It introduces a two-part approach using image processing algorithms: 1) A visualization technique uses filters, edge detectors and classification to locate dust sources. 2) An automation technique performs power spectrum analysis to detect dust storm direction by analyzing texture orientation in image blocks. The goal is to automatically detect and track dust storms for applications like hazard monitoring.
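The power-spectrum step of the automation technique can be illustrated with a small sketch: take the 2-D FFT of an image block and read the orientation of the strongest frequency component. The exact block size, windowing, and peak-selection details of the paper are unknown here, so everything below is an assumed minimal version:

```python
import numpy as np

def dominant_orientation(block):
    """Estimate the dominant texture orientation (degrees, mod 180) of
    an image block from its power spectrum. Texture stripes run
    perpendicular to the returned spectral direction."""
    f = np.fft.fftshift(np.fft.fft2(block - block.mean()))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    power[cy, cx] = 0.0                     # suppress any residual DC term
    iy, ix = np.unravel_index(np.argmax(power), power.shape)
    return np.degrees(np.arctan2(iy - cy, ix - cx)) % 180.0

# Vertical stripes vary along x, so spectral energy lies on the x-axis
# and the estimated spectral direction is ~0 degrees (mod 180).
x = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * x / 8.0), (64, 1))
angle = dominant_orientation(stripes)
```

Applied block-by-block over an AVHRR scene, the per-block orientations would indicate the dust transport direction.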
The document discusses image reconstruction techniques in nuclear medicine. It begins with an introduction to image reconstruction and definitions of key terms. It then describes analytical reconstruction methods like back projection and filtered back projection, as well as iterative reconstruction methods including algebraic reconstruction, statistical reconstruction, and maximum likelihood reconstruction. Both analytical and iterative methods are discussed and their properties and steps are outlined. The document provides an overview of various image reconstruction algorithms used in nuclear medicine.
Investigations on real time RSSI based outdoor target tracking using kalman f... (IJECEIAES)
Target tracking is essential for localization and many other applications in Wireless Sensor Networks (WSNs), and a Kalman filter is used to reduce measurement noise during tracking. In this research, TelosB motes are used to measure Received Signal Strength Indication (RSSI). RSSI measurement does not require any external hardware, unlike other distance estimation methods such as Time of Arrival (ToA), Time Difference of Arrival (TDoA) and Angle of Arrival (AoA). Distances between beacon and non-anchor nodes are estimated from the measured RSSI values, and the position of a non-anchor node is then estimated from those distances. A new algorithm with a Kalman filter, called the MoteTrack InOut system, is proposed for location estimation and target tracking to improve localization accuracy. The system is implemented in real time for indoor and outdoor tracking; the localization error reduction obtained in an outdoor environment is 75%.
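The two building blocks named in the abstract, RSSI smoothing with a Kalman filter and RSSI-to-distance conversion, can be sketched as follows. The log-distance path-loss constants and the noise parameters are illustrative assumptions, not the paper's calibrated TelosB values:

```python
def rssi_to_distance(rssi, rssi_at_1m=-45.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10^((P1m - RSSI)/(10*n)).
    Parameter values here are illustrative assumptions."""
    return 10.0 ** ((rssi_at_1m - rssi) / (10.0 * path_loss_exp))

class ScalarKalman:
    """1-D Kalman filter to smooth noisy RSSI readings before ranging."""
    def __init__(self, q=0.01, r=4.0, x0=0.0, p0=1.0):
        self.q, self.r, self.x, self.p = q, r, x0, p0

    def update(self, z):
        self.p += self.q                     # predict (static-state model)
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (z - self.x)           # correct with measurement z
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman(x0=-65.0)
readings = [-66.0, -64.0, -67.0, -63.0, -65.0]   # noisy RSSI in dBm
smoothed = [kf.update(z) for z in readings]
d = rssi_to_distance(smoothed[-1])
# d comes out close to 10 m for an RSSI near -65 dBm with these constants
```

Smoothing first and ranging second matters because the exponential in the path-loss model amplifies raw RSSI noise into large distance errors.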
Spectroscopy or hyperspectral imaging consists of the acquisition, analysis, and extraction of spectral information measured on a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering; however, due to the high dimensionality of the data and the small distance between the different material signatures, clustering such data is a challenging task. In this paper, we empirically compared five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Centre, Pavia, and Pavia University. Besides accuracy, we adopted four more similarity measures: the Rand statistic, Jaccard coefficient, Fowlkes-Mallows index, and Hubert index. According to accuracy, fuzzy C-means clustering performs better on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Centre data set, and hierarchical clustering is better for Pavia University.
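Two of the similarity measures used in that comparison, the Rand statistic and the Jaccard coefficient, are both defined over pairs of samples: pairs grouped the same way in both labelings versus pairs grouped differently. A minimal sketch (the toy labelings are invented for illustration):

```python
from itertools import combinations

def pair_counts(labels_true, labels_pred):
    """Count pairs by agreement: (same,same), (same,diff), (diff,same),
    (diff,diff) between the two labelings."""
    ss = sd = ds = dd = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_t = labels_true[i] == labels_true[j]
        same_p = labels_pred[i] == labels_pred[j]
        if same_t and same_p:
            ss += 1
        elif same_t:
            sd += 1
        elif same_p:
            ds += 1
        else:
            dd += 1
    return ss, sd, ds, dd

def rand_index(labels_true, labels_pred):
    ss, sd, ds, dd = pair_counts(labels_true, labels_pred)
    return (ss + dd) / (ss + sd + ds + dd)   # agreeing pairs / all pairs

def jaccard_coefficient(labels_true, labels_pred):
    ss, sd, ds, dd = pair_counts(labels_true, labels_pred)
    return ss / (ss + sd + ds)               # ignores (diff,diff) pairs

truth = [0, 0, 0, 1, 1, 1]
pred  = [0, 0, 1, 1, 1, 1]
# rand_index → 10/15, jaccard_coefficient → 4/9 for these labelings
```

The Jaccard coefficient drops the (diff, diff) pairs, so it penalises fragmentation more harshly than the Rand statistic on data with many small clusters.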
Photoacoustic tomography based on the application of virtual detectors (IAEME Publication)
This document discusses using virtual detectors to improve photoacoustic tomography (PAT) image reconstruction when full scanning data is unavailable. It proposes interpolation and compressed sensing methods to generate virtual detector data and increase the number of measurements. Simulation results show applying these methods to preprocessed photoacoustic data significantly improves the peak signal-to-noise ratio of reconstructed images compared to direct reconstruction with limited detectors. Dictionary-based compressed sensing provides the best performance by learning an over-complete dictionary to sparsify signals. The methods allow better quality PAT imaging when hardware and spatial constraints limit actual detector positions and sampling angles.
Estimation of global solar radiation by using machine learning methods (mehmet şahin)
In this study, global solar radiation (GSR) was estimated for 53 locations using the ELM, SVR, KNN, LR and NU-SVR methods. The methods were trained with a two-year data set and their accuracy was tested with a one-year data set; the data set of each year consisted of 12 months. The values of month, altitude, latitude, longitude, vapour pressure deficit and land surface temperature were used as inputs for developing the models, and GSR was obtained as the output. Values of vapour pressure deficit and land surface temperature were taken from NOAA-AVHRR satellite radiometry. Estimated solar radiation data were compared with actual data obtained from meteorological stations. According to the statistical results, the most successful method was NU-SVR: its RMSE and MBE values were found to be 1.4972 MJ/m2 and 0.2652 MJ/m2, respectively, with an R value of 0.9728. The worst-performing method was LR; for the other methods, RMSE values ranged between 1.7746 MJ/m2 and 2.4546 MJ/m2. The statistical results show that the ELM, SVR, k-NN and NU-SVR methods can be used for estimation of GSR.
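The RMSE and MBE statistics used to rank those models are simple to state in code. The sample values below are invented for illustration, not taken from the study:

```python
import math

def rmse(actual, predicted):
    """Root mean square error: overall magnitude of prediction errors."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mbe(actual, predicted):
    """Mean bias error: positive means systematic over-prediction."""
    return sum(p - a for a, p in zip(actual, predicted)) / len(actual)

actual    = [18.2, 21.5, 24.1, 19.8]   # illustrative GSR values, MJ/m^2
predicted = [19.0, 21.0, 25.0, 20.0]
# rmse → sqrt((0.64 + 0.25 + 0.81 + 0.04) / 4) ≈ 0.660 MJ/m^2
# mbe  → (0.8 - 0.5 + 0.9 + 0.2) / 4 = 0.35 MJ/m^2
```

Reporting both is informative: RMSE alone cannot distinguish random scatter from a consistent bias, which MBE exposes directly.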
This document summarizes a study that used artificial neural networks to estimate soil moisture levels from cosmic ray sensor neutron count data. Specifically:
- Five neural networks (FFBPN, MLPN, RBFN, Elman, PNN) were tested on cosmic ray sensor data from two Australian sites to estimate soil moisture levels from the Australian Water Availability Project database.
- The Elman neural network achieved the best performance, estimating soil moisture levels with 94% accuracy for one site and 91% accuracy for the other.
- This study demonstrated that neural networks can effectively estimate continuous soil moisture levels remotely using cosmic ray sensor neutron count time series data as input.
Supervised machine learning based dynamic estimation of bulk soil moisture us... (eSAT Journals)
Abstract In this paper an artificial neural network based sensor informatics architecture is investigated, including a proposed continuous daily estimation of area-wise surface soil moisture from a cosmic ray sensor's neutron count time series. The study was conducted on cosmic ray data available from two Australian locations. Its main focus was to develop a data-driven approach to convert neutron counts into area-wise ground surface soil moisture estimates. Independent surface soil moisture data from the Australian Water Availability Project (AWAP) was used as ground truth. A comparative study of five different types of neural networks, namely Feed Forward Back Propagation (FFBPN), Multi-Layer Perceptron (MLPN), Radial Basis Function (RBFN), Elman (EN), and Probabilistic (PNN) networks, was conducted to evaluate the overall soil moisture estimation accuracy. The Elman network outperformed all other neural networks, achieving 94% accuracy with 92% sensitivity and 97% specificity on the Tullochgorum data. The overall high accuracy proved the effectiveness of the Elman neural network for estimating surface soil moisture continuously using cosmic ray sensors. Index Terms: Artificial Neural Network, Surface Soil Moisture, Cosmic Ray Sensors, Neutron Counts.
IRJET- Tool: Segregration of Bands in Sentinel Data and Calculation of NDVI (IRJET Journal)
This document discusses using satellite imagery and vegetation indices to analyze and map vegetation. It summarizes several research papers on using the Normalized Difference Vegetation Index (NDVI) with different sensors and techniques. Specifically, it examines calculating NDVI from mountain terrain satellite data, using selected bands from Sentinel-2 satellite data for agriculture applications, extracting buildings from satellite images using shadow detection, and applying NDVI to unmanned aerial system multispectral remote sensing for post-disaster assessment. The document also discusses preprocessing techniques and algorithms like random forests and support vector machines for satellite image classification.
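The NDVI computation at the core of that work is a per-pixel band ratio. A minimal sketch (the reflectance values are invented; for Sentinel-2, red is band B4 and NIR is band B8, both at 10 m resolution):

```python
import numpy as np

def ndvi(red, nir, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red)/(NIR + Red).
    eps guards against division by zero on dark pixels."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + eps)

red = np.array([[0.10, 0.30]])   # illustrative surface reflectances
nir = np.array([[0.50, 0.30]])
v = ndvi(red, nir)
# v → [[0.666..., 0.0]] : dense vegetation vs. a bare/neutral surface
```

Healthy vegetation absorbs red and reflects NIR strongly, so NDVI values near 1 indicate dense canopy while values near 0 or below indicate bare soil, built surfaces, or water.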
AN EFFICIENT IMPLEMENTATION OF TRACKING USING KALMAN FILTER FOR UNDERWATER RO... (IJCSEIT Journal)
The exploration of oceans and sea beds is being made increasingly possible through the development of Autonomous Underwater Vehicles (AUVs). This activity concerns the marine community and must confront notable challenges; an automatic detection and tracking system is the first and foremost element of an AUV or an aqueous surveillance network. In this paper a Kalman filter based method is presented to solve the problem of object tracking in sonar images. The object region is extracted by threshold segmentation and morphological processing, and invariant moment and area features are analysed. Results show that the presented method has the advantages of good robustness, high accuracy and real-time operation; it is efficient for underwater target tracking based on sonar images and is also suited to obstacle avoidance for an AUV operating in a constrained underwater environment.
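The tracking stage described above, feeding a segmented object's centroid into a Kalman filter frame by frame, can be sketched with a constant-velocity model. The matrices, noise levels, and the simulated centroid measurements are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # only the centroid (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)             # process noise
R = 1.0 * np.eye(2)              # measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle given a centroid measurement z."""
    x = F @ x                                    # predict state
    P = F @ P @ F.T + Q                          # predict covariance
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.asarray(z, dtype=float) - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
# Noisy centroids of a target moving diagonally at ~1 px per frame.
for z in [(1.1, 0.9), (2.0, 2.1), (3.1, 2.9), (4.0, 4.1)]:
    x, P = kalman_step(x, P, z)
# x[:2] closes in on the true position while x[2:] learns the velocity
```

The predicted position for the next frame also gives a search window, which is what makes threshold segmentation in noisy sonar imagery tractable in real time.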
Haze removal for a single remote sensing image based on deformed haze imaging... (LogicMindtech Nologies)
This document proposes using cloud computing to rapidly generate earthquake shake maps through stochastic simulations. It describes implementing a map-reduce model on the Microsoft Azure cloud to parallelize simulations for many receivers after an earthquake. Results like peak ground acceleration maps could be produced within minutes to aid scientists and emergency responders. The method was tested in Greece and provides a cost-effective way to process simulations as needed without dedicated computing resources.
RAIN STREAKS ELIMINATION USING IMAGE PROCESSING ALGORITHMS (sipij)
The paper addresses the problem of rain streak removal from videos. Although rain streak removal is important and there is a lot of research in this area, robust real-time algorithms are unavailable on the market. Difficulties arise from low visibility, low illumination, and the presence of moving cameras and objects. The challenge that plagues rain streak recovery algorithms is detecting rain streaks and replacing them with original values to recover the scene. In this paper, we discuss the use of photometric and chromatic properties for rain detection, while an updated Gaussian Mixture Model (updated GMM) detects moving objects. The rain streak removal algorithm detects rain streaks in videos and replaces them with estimated values equivalent to the original values; spatial and temporal properties are used for this replacement.
The document discusses particle image velocimetry (PIV), which is a non-intrusive method for measuring fluid flow velocities. PIV works by seeding the fluid with particles and using a laser sheet and camera to capture particle images. Software then tracks the particle movements between images to calculate velocity vectors across the flow field. PIV has applications in analyzing flows around objects like fish, helicopters, and prosthetic heart valves. Advanced PIV systems are being developed that can perform 3D motion tracking.
This document describes a testbed for image synthesis developed at Cornell University. The testbed was designed to facilitate research on new light reflection models, global illumination algorithms, and rendering of complex scenes. It uses a modular structure with hierarchical levels of functionality. The lowest level contains utility modules, the middle level contains object modules that work across primitive types, and the highest level contains image synthesis modules. The testbed uses a modeler-independent description format to represent environments independently of modeling programs. Renderers can then generate images from this common description.
FR2.L09 - PROCESSING OF MEMPHIS MILLIMETER WAVE MULTI-BASELINE INSAR DATA (grssieee)
The document summarizes the processing of millimeter-wave (mmW) multi-baseline interferometric synthetic aperture radar (InSAR) data collected by the MEMPHIS system. It describes the MEMPHIS SAR system, data focusing methods, multi-baseline processing algorithms, conversion of phase data to a digital surface model (DSM), correction of azimuth phase undulations, experimental results over a highway roundabout, and discusses challenges with producing accurate products from the movable MEMPHIS hardware.
Image reconstruction in CT is mostly a mathematical process; however, this presentation tries to explain the complicated process of image reconstruction in a visual way, mainly focusing on filtered back projection, iterative reconstruction and AI-based image reconstruction.
Robust foreground modelling to segment and detect multiple moving objects in ... (IJECEIAES)
This document summarizes a research paper that proposes a robust foreground modeling method to segment and detect multiple moving objects in videos. The proposed method uses a running average technique to model the background and subtract it from video frames to detect foreground objects. Morphological operations like dilation and erosion are applied to reduce noise and merge connected regions. Convex hull processing is also used to define object boundaries more clearly. The method was tested on standard video datasets and achieved better performance than other techniques in segmenting objects under various challenging conditions like illumination changes and occlusion. Experimental results demonstrated high precision, recall and specificity based on comparisons with ground truth data.
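The running-average background model at the heart of that method can be sketched in a few lines. The learning rate, threshold, and toy frames are illustrative assumptions, and the morphological and convex-hull post-processing steps are omitted:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background update: B <- (1 - alpha)*B + alpha*F."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels differing from the background model by more than the
    threshold are marked as foreground."""
    return np.abs(frame - background) > threshold

# Static scene with a bright object entering the frame.
bg = np.full((4, 4), 50.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0          # the moving object (2x2 pixels)
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
# mask flags exactly the 2x2 object region; bg drifts slowly toward the
# new frame, so briefly stationary objects are absorbed only gradually
```

The small alpha is the robustness knob: it lets the model follow gradual illumination changes without immediately absorbing moving objects into the background.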
The document discusses integrating support vector machines (SVMs) and Markov random fields (MRFs) for remote sensing image classification. SVMs are good at identifying optimal discriminant hypersurfaces but do not consider context between samples. The paper aims to integrate SVMs and MRFs to allow for contextual classification. A novel classifier is proposed that reformulates the MRF minimum-energy decision rule as an SVM discriminant function with a "contextual kernel." Experimental results on real remote sensing datasets show the proposed method provides significantly more accurate classifications compared to a standard noncontextual SVM.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM (csandit)
Computer vision approaches are increasingly used in mobile robotic systems, since they make it possible to obtain a very good representation of the environment using low-power and cheap sensors. In particular, it has been shown that they can compete with standard solutions based on laser range scanners for the problem of simultaneous localization and mapping (SLAM), where the robot has to explore an unknown environment while building a map of it and localizing itself in that same map. We present a package for simultaneous localization and mapping in ROS (Robot Operating System) using only a monocular camera sensor. Experimental results in real scenarios as well as on standard datasets show that the algorithm is able to track the trajectory of the robot and build a consistent map of small environments, while running in near real-time on a standard PC.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses applications of data mining techniques to predict mesoscale weather events like tornadoes and cloudbursts. It summarizes previous research that applied data mining methods like neural networks, support vector machines, and clustering to weather prediction. For tornado prediction, studies developed spatiotemporal models to identify relationships between storm variables. Other research used mesocyclone detection algorithms and neural networks to predict tornadoes. For cloudburst prediction, clustering relative humidity and divergence from numerical models provided early formation indications. The document also briefly explains ensemble forecasting, which runs multiple forecasts from slightly different initial conditions to sample forecast uncertainty.
Detection of Bridges using Different Types of High Resolution Satellite Images (idescitation)
Automatic detection of geographical objects such as roads, buildings and bridges from remote sensing imagery is very meaningful but difficult work. Bridges over water are typical geographical objects whose automatic detection is of great significance for many applications. Finding the Region Of Interest (ROI) containing water areas is the most crucial task in bridge detection. This can be done with image processing / soft computing methods using images in the spatial domain, or with the Normalized Difference Water Index (NDWI) using images in the spectral domain. We have developed an efficient algorithm for bridge detection in which the ROI segmentation is done using both methods. Exact locations of bridges are obtained using knowledge models and the spatial resolution of the image. These knowledge models are applied in the algorithm in such a way that the thresholds are fixed automatically depending on the quality of the image. Using the algorithm, any type of bridge is extracted irrespective of its inclination and shape.
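The spectral route to the water ROI can be sketched with NDWI. The reflectance values are invented, and the fixed zero threshold below is a common default standing in for the paper's image-quality-adaptive threshold selection:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """Normalized Difference Water Index: water reflects strongly in
    green and absorbs NIR, so NDWI is positive over water."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

def water_roi(green, nir, threshold=0.0):
    """Binary water region-of-interest mask. The zero threshold is an
    illustrative default; the paper fixes its thresholds adaptively."""
    return ndwi(green, nir) > threshold

green = np.array([[0.30, 0.10]])   # illustrative surface reflectances
nir   = np.array([[0.05, 0.40]])
roi = water_roi(green, nir)
# roi → [[True, False]] : first pixel is water, second is land
```

Once the water mask is in hand, bridge candidates appear as elongated non-water structures crossing it, which is where the knowledge models take over.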
Surface generation from point cloud.pdf (ssuserd3982b1)
This document summarizes research on generating 3D surface models from point clouds acquired using a vision-based laser scanning sensor. It discusses using adaptive filters to reduce noise in the point clouds and generating triangular meshes from the point clouds to create an initial surface model. It then covers using NURBS (Non-Uniform Rational B-Splines) to optimize the surface model for accuracy by fitting parametric surfaces to the triangular mesh. The goal is to develop accurate 3D surface models of turbine blades for robotic welding applications to repair flaws.
This document presents a method for tracking moving objects in video sequences using affine flow parameters combined with illumination insensitive template matching. The method extracts affine flow parameters from frames to model local object motion using affine transformations. It then applies template matching with illumination compensation to track objects across frames while being robust to illumination changes. The method is evaluated on various indoor and outdoor database videos and is shown to effectively track objects without false detections, handling issues like illumination variations, camera motion and dynamic backgrounds better than other methods.
A Review on Deformation Measurement from Speckle Patterns using Digital Image... (IRJET Journal)
This document reviews digital image correlation (DIC) for deformation measurement using speckle patterns. DIC is a non-contact optical method that uses digital images of a speckle pattern on a surface before and after deformation. By comparing the speckle patterns in the images, DIC can determine displacement and strain fields with high accuracy. The document discusses speckle pattern types, the DIC process, related works that have improved DIC methods, and applications of DIC such as for high-temperature testing. DIC provides full-field measurements and greater accuracy compared to conventional contact methods.
Tropical Cyclone Determination using Infrared Satellite Image (ijtsrd)
Many subcontinents in the world contain regions that are affected by cyclones every year. Cyclone prediction plays a major role in preventing the loss of life and assets, since it relates directly to human lives and households. Satellite images provide an excellent view of clouds that can be used in weather forecasting, and infrared (IR) satellite images in particular serve many environmental applications. To find the tropical cyclone (TC) center, the basic stage is to extract the main cloud of the cyclone. In manual segmentation, selecting the storm region is a complicated, time-consuming task that requires human experts for every processing run, while semi- and fully automatic storm detection is a sophisticated and difficult process because of the overlap between cloud boundaries. Fuzzy C-means (FCM) clustering and morphological image processing are applied to segment each infrared satellite image. The effectiveness is tested on infrared cyclone images from the Kalpana satellite, obtained from the INSAT satellite of India. 45 tropical cyclones occurred during the period 1989-2014 over the Bay of Bengal; Cyclone Nargis, which mainly affected Myanmar on 2 May 2008, is taken as a case study. Experimental results show that high location accuracy can be obtained. Thu Zar Hsan | Myint Myint Sein, "Tropical Cyclone Determination using Infrared Satellite Image", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27934.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/27934/tropical-cyclone-determination-using-infrared-satellite-image/thu-zar-hsan
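The FCM clustering step used for that segmentation can be sketched in plain NumPy. This is a bare-bones fuzzy C-means (randomly initialised memberships, Euclidean distances, fuzzifier m = 2), not the paper's full pipeline, and the 1-D "brightness" samples are invented:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means: returns (cluster centres, membership
    matrix U of shape (n_samples, c)). No morphological
    post-processing, unlike the paper's segmentation pipeline."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1
    for _ in range(iters):
        w = U ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)            # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Two well-separated 1-D "brightness" groups, e.g. cloud vs. background.
X = np.array([[0.1], [0.2], [0.15], [0.9], [0.95], [0.85]])
centres, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)    # hard labels from the soft memberships
```

Unlike K-means, each pixel gets a graded membership in every cluster, which is useful precisely where cyclone cloud boundaries overlap.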
This document describes an open-source 3D solar radiation model called SURFSUN3D that has been integrated with a 3D Geographic Information System (GIS) to allow for interactive assessment of photovoltaic potential in urban areas. The model transforms 3D building surfaces into 2D raster maps to allow for conventional GIS solar radiation calculations on a cell-by-cell basis. It has been validated against a commercial solar modeling software and tested on a 3D model of Boston, demonstrating its ability to calculate solar radiation for selected buildings and surfaces and visualize the results.
RADAR images are strongly preferred for analysing geospatial information about the earth's surface and assessing environmental conditions. Radar images are captured by different remote sensors, and these images are combined to obtain complementary information. To collect radar images, SAR (Synthetic Aperture Radar) sensors are used; these are active sensors that can gather information during day and night, unaffected by weather conditions. We discuss DCT and DWT image fusion methods, which give a more informative fused image, and we compare performance parameters of the two methods to determine the superior technique.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform Domain (CSCJournals)
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However remotely sensed images are often degraded due to atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences that are added to the pure signal of target and to extract more accurate information. The atmospheric correction is often considered critical pre-processing step to achieve full spectral information from every pixel especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in spatial domain and transform domain. We propose atmospheric correction using linear regression model based on the wavelet transform and Fourier transform. They are tested on Landsat image consisting of 7 multispectral bands and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods for vegetation analyses using Normalized Difference Vegetation Index is also presented in this paper.
Localization based range map stitching in wireless sensor network under non l... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Chronological Calibration Methods for Landsat Satellite Images (iosrjce)
This document describes methods for chronologically calibrating Landsat satellite images to account for differences when images are taken days apart. It discusses correcting ETM+ images for scan line failures and converting digital numbers to reflectance. Two methods are proposed to remove phenological effects between Landsat 7 and 8 images taken 8 days apart: linear regression and cross-correlation. Image classification using the visible red and near infrared bands is used to validate the correction methods by comparing land cover detection in study area images.
Review paper on segmentation methods for multiobject feature extraction (eSAT Journals)
Abstract Feature extraction and representation plays a vital role in multimedia processing. It is still a challenge for computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is one that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods to extract the locations and features of multiple objects, and describe a system that does so by implementing an algorithm as hardware logic on a field-programmable gate array based platform. There are many multiobject extraction methods that can be used for image segmentation based on motion, colour intensity and texture. By calculating the zeroth and first order moments of objects, it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
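The moment computation mentioned in that abstract is compact: for a binary mask, the zeroth raw moment m00 is the object's area and the first moments m10, m01 divided by m00 give its centroid. A minimal single-object sketch (the FPGA version streams these sums per labelled object; the toy mask is invented):

```python
import numpy as np

def object_centroid_and_area(mask):
    """Zeroth and first raw image moments of a binary mask: area (m00)
    and centroid (m10/m00, m01/m00)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                 # area in pixels
    cx = xs.sum() / m00           # m10 / m00
    cy = ys.sum() / m00           # m01 / m00
    return m00, (cx, cy)

# A 10x20 object with its top-left corner at (row 5, col 10).
mask = np.zeros((50, 50), dtype=bool)
mask[5:15, 10:30] = True
area, (cx, cy) = object_centroid_and_area(mask)
# area → 200, centroid → (19.5, 9.5)
```

These sums need only one accumulator per object per pass, which is why moments map so naturally onto hardware logic.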
The document discusses methods for characterizing the global environment using satellite data to help overcome challenges posed by weather effects on missile defense sensors. It describes adjusting infrared imagery thresholds to approximate radar observations, extracting weather event boundaries, projecting 3D shapes onto a model Earth, and using an existing satellite constellation to provide continuous coverage. The goal is to determine visibility and sensor performance to optimize sensor selection and placement for missile defense.
This paper presents an approach for image restoration in the presence of blur and noise. The image is divided into independent regions modeled with a Gaussian prior. Wavelet-based methods are used for image denoising, while classical Wiener filtering is used for deblurring. The algorithm finds the maximum a posteriori estimate at the intersection of convex sets generated by Wiener filtering. It provides efficient image restoration without sacrificing the simplicity of filtering, and generates a better restored image compared to previous methods.
Similar to AUTOMATIC IDENTIFICATION OF CLOUD COVER REGIONS USING SURF
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
This presentation was provided by Racquel Jemison, Ph.D., Christina MacLaughlin, Ph.D., and Paulomi Majumder. Ph.D., all of the American Chemical Society, for the second session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session Two: 'Expanding Pathways to Publishing Careers,' was held June 13, 2024.
Temple of Asclepius in Thrace. Excavation resultsKrassimira Luka
The temple and the sanctuary around were dedicated to Asklepios Zmidrenus. This name has been known since 1875 when an inscription dedicated to him was discovered in Rome. The inscription is dated in 227 AD and was left by soldiers originating from the city of Philippopolis (modern Plovdiv).
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: Regeneration and Repair
There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc.
Complications of wound healing like infection, hyperpigmentation of scar, contractures, and keloid formation.
Pengantar Penggunaan Flutter - Dart programming language1.pptx
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.2, No.2, April 2012
DOI : 10.5121/ijcseit.2012.2214
AUTOMATIC IDENTIFICATION OF CLOUD COVER
REGIONS USING SURF
Dr. Mohamed Mansoor Roomi #1, R. Bhargavi #2, T. M. Hajira Rahima Banu #3
Department of Electronics and Communication Engineering, Thiagarajar College of Engineering, Madurai
1 smmroomi@tce.edu, 2 rbhargavi22@gmail.com, 3 tmhrbanu@gmail.com
ABSTRACT
Weather forecasting has become an indispensable application to predict the state of the atmosphere for a
future time based on cloud cover identification, but it generally needs the experience of a well-trained
meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation, typical to
the Indian Territory. Speeded Up Robust Feature Transform (SURF) is applied on the satellite images to obtain
the affine corrected images. The cloud regions extracted from the affine corrected images based on the Otsu
threshold are superimposed on the artistic grids representing latitude and longitude over India. The
segmented cloud and grid composition drive a look up table mechanism to identify the cloud cover regions.
Owing to its simplicity, the proposed method processes the test images faster and provides accurate
segmentation of cloud cover regions.
KEYWORDS
Weather Forecast, SURF, Affine, Threshold.
1. INTRODUCTION
Weather is a continuous, data-intensive, multidimensional, dynamic and chaotic process, and
these properties make weather forecasting a formidable challenge. There are various techniques
involved in weather casting, from relatively simple observation of the sky to highly complex
computerized mathematical models. Cloud, an element of weather, plays a vital role in weather
forecasting. Thickening of cloud cover or the invasion of a higher cloud deck is indicative of rain
in the near future. Generally, empirical and dynamic approaches are used to forecast weather. The
first approach is based upon the occurrence of analogs and is often referred to by meteorologists
as analog forecasting. This approach is useful for predicting local-scale weather if recorded cases
are plenty. The second approach is based upon forward simulations of the atmosphere, and is
often referred to as computer modelling. Because of the grid coarseness, the dynamical approach
is only useful for modelling large-scale weather phenomena and may not predict short-term
weather efficiently. Most weather prediction systems use a combination of empirical and
dynamical techniques. Identification and observation of clouds can provide clues to the severity
of approaching bad weather. Current observation methods depend on changes in barometric
pressure, weather conditions and sky condition. Human input is still required to pick the best
possible forecast model and identify the cloud cover region, which involves pattern recognition,
teleconnections and knowledge of model performance. The solution for cloud cover identification
without human interpretation can be achieved using cloud segmentation as initial process. In [6],
a review of cloud classification techniques is provided. Lim et al. proposed cloud field
segmentation via multiscale convexity analysis [5]. A multispectral classification of
meteorological satellite images using a pixel-clustering algorithm is described in [8]. A number of
cloud segmentation approaches have used pixel-wise neural-network-based classifiers that exploit
textural and spectral features [7][10-13]. Wenlong et al. proposed satellite cloud image
segmentation based on the improved normalized cuts model [17]. In contrast, Dipti et al. [14]
have developed an area-based, shape-preserving morphological segmentation scheme for cloud
extraction. It is a major concern to identify any tendency of weather parameters to deviate from
their periodicity, as in natural calamities, which disrupt the economy of the country, causing
infrastructure damage, injury and loss of life. In this situation, it makes sense to provide
an intelligent computing system to perform automated interpretation of satellite images,
especially identifying cloud cover region. The organization of the paper is as follows: Section 2
explains the Speeded Up Robust Features to obtain the affine corrected images. Cloud regions are
segmented from the affine corrected test image by Otsu thresholding followed by latitude and
longitude identification. Section 3 depicts results and discussion in which the proposed method is
compared with an existing Adaptive Average Brightness Thresholding technique for cloud
segmentation and finally Section 4 provides conclusion and future directions.
2. METHODOLOGY
Cloud plays a vital role in weather forecasting. The use of cloud cover in weather prediction has
led to various weather lore over the centuries. The overview of an automated image processing
method based on cloud segmentation is shown in Fig.1. Satellites for meteorological purposes are
used to acquire a sequence of images of earth. At the time of every acquisition, the way the
satellite focuses the earth differs in terms of scale, translation and rotation and these images are
taken as test image for further analysis. A less cloud covered satellite image is taken as a
reference image. The transformation between the reference and test image is calculated using the
descriptors obtained from speeded up robust features. The parameters of the transformation are
applied to the test image to obtain the affine corrected test image, which is similar in scale space
to the reference image. Cloud regions are segmented from the affine corrected images based on
Otsu thresholding. Grids are overlaid on the segmented image to estimate the latitude and
longitude of the cloud covered region.
Fig 1: Overview of proposed Algorithm (Reference Image / Test Image → Determination of SURF features → Affine Correction → Cloud Segmentation → Lat-Long Overlay → Weather Casting using lat-long of the cloud covered states)
2.1. Feature Extraction
The feature extraction method used in the proposed paper is Speeded Up Robust Features
(SURF). The features so obtained are robust in the sense that they are invariant to translation,
rotation, scale and affine transforms. The SURF algorithm consists of the following modules:
2.1.1. Integral Images
Much of the performance increase in SURF can be attributed to the use of an intermediate image
representation known as the Integral Image. The integral image is computed rapidly from an input
image and is used to speed up the calculation of any upright rectangular area. Given an input
image I and a point (x, y), the integral image IP is calculated as the sum of the values between the
point and the origin. Formally this can be defined by the formula
IP(x, y) = Σ_{i=0}^{i≤x} Σ_{j=0}^{j≤y} I(i, j)    (1)
Since computation time is invariant to change in size this approach is particularly useful when
large areas are required. SURF makes good use of this property to perform fast convolutions of
varying size box filters at near constant time.
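The integral-image mechanism described above can be sketched in a few lines (an illustrative Python sketch, not the authors' code; the cumulative-sum formulation and the four-lookup box sum follow equation (1)):

```python
import numpy as np

def integral_image(img):
    """Summed-area table of equation (1): IP(x, y) holds the sum of all
    pixels between the origin and (x, y) inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of the original image over rows r0..r1 and columns c0..c1
    (inclusive) using at most four lookups in the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
patch_sum = box_sum(ii, 1, 1, 2, 2)   # sum of the central 2x2 block
```

Because `box_sum` costs the same four lookups regardless of the rectangle's size, the box filters of the Fast-Hessian detector below can be evaluated in near constant time.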
1) Fast Hessian Detector
The first step in the Fast-Hessian process is to construct the blob-response map for the desired
number of octaves and intervals. The default settings of three octaves with four intervals each are
constructed as the scale space. The values for each octave-interval pair dictate the size of the filter
which is convolved at that layer in the scale-space. The filter size is given by the formula,
FilterSize = 3 × (2^o × i + 1)    (2)
where ‘o’ is the octave and ‘i’ is the interval. Bay proposed an approximation to the Laplacian of
Gaussians by using box filter representations of the respective kernels, which is shown in Fig.2.
For the Dxy filter the black regions are weighted with a value of -1, the white regions with a
value of 1 and the remaining areas are not weighted at all. The Dxx and Dyy filters are weighted
similarly, but with the white regions having a value of 1 and the black a value of -2.
Considerable performance increase is found when these filters are used in conjunction with the
integral image.
a) Dxx b) Dyy c) Dxy
Fig 2: Weighted Box filters Approximation of LOG
In order to detect interest points using the determinant of Hessian it is first necessary to introduce
the notion of a scale-space. A scale-space is a continuous function which can be used to find
extrema across all possible scales. In computer vision the scale-space is typically implemented as
an image pyramid where the input image is iteratively convolved with Gaussian kernel and
repeatedly sub-sampled. As the processing time of the kernels used in SURF is size invariant, the
scale-space can be created by applying kernels of increasing size to the original image. This
allows multiple layers of the scale-space pyramid to be processed simultaneously and negates
the need to sub-sample the image, hence providing a performance increase.
Once the responses are calculated for each layer of the scale-space, it is important that they are
scale-normalized. Scale-normalization is an important step which ensures there is no reduction in
the filter responses as the underlying Gaussian scale increases. For the approximated filters, scale-
normalization is achieved by dividing the response by the area. This yields near perfect scale
invariance. The search for extrema in the response map is conducted by a non-maximal
suppression as shown in Fig. 4. Each pixel is compared to its 26 neighbors of adjacent scale and
only those which are greater than all those surrounding it are classified as interest points.
Fig 4: Extrema Detection
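The 26-neighbour non-maximal suppression described above can be sketched as follows (an illustrative Python sketch over an assumed (scale, y, x) response stack, not the authors' implementation):

```python
import numpy as np

def extrema_26(responses, threshold):
    """Non-maximal suppression over a (scale, y, x) stack of blob
    responses: keep voxels above `threshold` that are strictly greater
    than all 26 neighbours in scale and space."""
    points = []
    s, h, w = responses.shape
    for k in range(1, s - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = responses[k, y, x]
                if v <= threshold:
                    continue
                cube = responses[k-1:k+2, y-1:y+2, x-1:x+2].ravel()
                # the second-largest value in the 27-voxel cube is the
                # largest of the 26 neighbours when v is the maximum
                if v > np.partition(cube, -2)[-2]:
                    points.append((k, y, x))
    return points

responses = np.zeros((3, 5, 5))
responses[1, 2, 2] = 5.0          # a single isolated blob response
interest_points = extrema_26(responses, threshold=0.0)
```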
2) Interest Point Descriptor
The SURF descriptor describes how the pixel intensities are distributed within a scale dependent
neighborhood of each interest point detected by the Fast-Hessian. Haar wavelets are used in order
to increase robustness and decrease computation time. Haar wavelets are simple filters which can
be used to find gradients in the x and y directions. Extraction of the descriptor can be divided into
two distinct tasks. First each interest point is assigned a reproducible orientation before a scale
dependent window is constructed in which a 64-dimensional vector is extracted. In order to
achieve invariance to image rotation each detected interest point is assigned a reproducible
orientation. Extraction of the descriptor components is performed relative to this direction so it is
important that this direction is found to be repeatable under varying conditions.
To determine the orientation, Haar wavelet responses of size 4σ are calculated for a set of pixels
within a radius of 6σ of the detected point, where σ refers to the scale at which the point was
detected. The specific set of pixels is determined by sampling those from within the circle using a
step size of σ. The responses are weighted with a Gaussian centered at the interest point; the
Gaussian is dependent on the scale of the point and chosen to have standard deviation 2.5σ. Once
weighted, the responses are represented as points in vector space, with the x-responses along the
abscissa and the y-responses along the ordinate. The dominant orientation is selected by rotating a
circle segment covering an angle of π/3 around the origin. At each position, the x and y-responses
within the segment are summed and used to form a new vector. The longest vector lends its
orientation to the interest point.
The SURF descriptor is obtained by constructing a square window around the interest point. This
window contains the pixels which will form entries in the descriptor vector and is of size 20σ,
again where σ refers to the detected scale. Furthermore, the window is oriented along its dominant
direction as mentioned previously. The descriptor window is divided into 4 × 4 regular sub-regions.
Within each of these sub-regions, Haar wavelets of size 2σ are calculated for 25 regularly
distributed sample points. Then, the wavelet responses dx and dy are summed up over each sub-region
and form a first set of entries to the feature vector.
In order to bring in information about the polarity of the intensity changes, the sums of the
absolute values of the responses, |dx| and |dy|, are also extracted. Hence, each sub-region has a
four-dimensional descriptor vector v for its underlying intensity structure as expressed in equation (3).
This results in a descriptor vector of length 64 over all 4 × 4 sub-regions. These 64 entries for
each key point of the query and reference image are used for feature matching.
V = ( Σdx, Σdy, Σ|dx|, Σ|dy| )    (3)
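The per-sub-region vector of equation (3) can be illustrated with a small sketch (Python; plain finite differences stand in for the Haar wavelet responses here, and the Gaussian weighting and orientation steps of the full SURF descriptor are omitted):

```python
import numpy as np

def subregion_vector(patch):
    """Four-dimensional entry of equation (3) for one sub-region:
    (sum dx, sum dy, sum |dx|, sum |dy|)."""
    dx = patch[:, 1:] - patch[:, :-1]   # horizontal gradient responses
    dy = patch[1:, :] - patch[:-1, :]   # vertical gradient responses
    return np.array([dx.sum(), dy.sum(),
                     np.abs(dx).sum(), np.abs(dy).sum()])

def surf_descriptor(window):
    """Concatenate the 4-vectors of a 4 x 4 grid of sub-regions into a
    64-dimensional descriptor."""
    h, w = window.shape
    sh, sw = h // 4, w // 4
    vecs = [subregion_vector(window[r*sh:(r+1)*sh, c*sw:(c+1)*sw])
            for r in range(4) for c in range(4)]
    return np.concatenate(vecs)

window = np.arange(400.0).reshape(20, 20)   # stand-in 20x20 oriented window
desc = surf_descriptor(window)              # 64-dimensional vector
```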
3) Correlation over scale space
Due to the change in satellite view point on earth, the scale of the test image varies from that of
the reference image. Hence, there arises a need for correlation between the correct scales of the
test and reference image.
The scale space generated in SURF for locating feature points is used here. The scale in scale
space increases such that the topmost layer has the highest scale and the bottom layer the least
scale. Consider that there are ‘m’ matched features obtained through the descriptor ratio method.
With the available matched points, the correlation method is applied only between the
corresponding matches in the query image. The error is computed by considering the
neighbourhood pixels of the feature point and the neighbourhood pixels of its corresponding
match in a single scale of the test image. This process continues until the errors for the same
feature point in the highest scale of the image are computed. Then the pair of scales that produces
the minimum error for the considered feature point is detected. The entire process is repeated for
all the ‘m’ pairs of matched points. The error ‘e’ is calculated for all the points in all the scale
pairs ‘k’ and ‘l’ as given in the expression.
e(i(k,l)) = d(k,l) / (2r+1)²    (4)
where
d(k,l) = Σ_{α=−r}^{r} Σ_{β=−r}^{r} | Q_k(x_i+α, y_i+β) − R_l(u_i+α, v_i+β) |    (5)
for i = 1,2,…,M and k, l = 1,2,…,N, where ‘N’ is the number of scales in the scale space of the
query image and reference image and ‘M’ is the number of matched feature points obtained
through the descriptor ratio method. For each matched feature point, find ‘k’ and ‘l’ such that
e(i(k,l)) is minimum. Let ‘CsQ’ and ‘CsR’ be the closest scales of the query image and reference
image. After finding the closest pair of scales, the mean of errors (merr) is computed over all the
feature point pairs in the closest scale pair as per equation (6)
merr = (1/M) Σ_{i=1}^{M} e( i(k = CsQ, l = CsR) )    (6)
Only the true feature points that are located in the common field of view of both the test and
reference images get matched. Those points that are located outside the overlapping regions are
classified as mismatches and are rejected from further processing.
2.2. Affine Transformation
The transformation parameters are calculated from the final matched points in the reference and
test image using the affine transformation matrix given below.
[ s·cosθ   −s·sinθ   tx ]
[ s·sinθ    s·cosθ   ty ]
where ‘s’ denotes scaling, ‘θ’ refers to orientation and 'tx', 'ty' are the translations along the x and
y directions respectively. The affine parameters calculated from the matched feature points are
used to correct the test image to make the scaling and orientation the same as the reference image.
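A minimal sketch of recovering these parameters from matched points, assuming a plain least-squares fit (illustrative Python; the paper does not state its estimation procedure):

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares estimate of the similarity transform above
    (scale s, rotation theta, translations tx, ty) from matched points.
    With a = s*cos(theta) and b = s*sin(theta), each match gives
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty.
    src and dst are (n, 2) arrays of matched (x, y) coordinates."""
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    rhs = np.concatenate([dst[:, 0], dst[:, 1]])
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), tx, ty

# synthetic check: recover known parameters from four matches
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
a0, b0 = 2.0 * np.cos(0.3), 2.0 * np.sin(0.3)
dst = np.column_stack([a0 * src[:, 0] - b0 * src[:, 1] + 5.0,
                       b0 * src[:, 0] + a0 * src[:, 1] - 1.0])
s, theta, tx, ty = estimate_similarity(src, dst)
```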
Fig 4: Reference image    Fig 5: Test Image 1    Fig 6: Test Image 2
Fig 7: Test Image 3    Fig 8: Descriptors of Reference    Fig 9: Descriptors of Test Image 1
2.3. Thresholding
The Otsu thresholding technique [9] has been used to segment cloud regions from the input
image. While many thresholding techniques depend on the shape of the histogram, Otsu
thresholding works stably without any a priori knowledge. Otsu's method of image segmentation
selects an optimum threshold by maximizing the between-class variance in a gray image. An
image is a 2D grayscale intensity function containing N pixels with gray levels from 1 to L. The
number of pixels with gray level i is denoted fi, giving a probability of gray level i in the image of
pi = fi / N. In the case of bi-level thresholding of an image, the pixels are divided into two classes,
'C1' with gray levels [1… t] and 'C2' with gray levels [t+1… L]. Then, the gray level probability
distributions for the two classes are provided in equation (7)
C1: p_1/ω_1(t), …, p_t/ω_1(t)  and  C2: p_{t+1}/ω_2(t), p_{t+2}/ω_2(t), …, p_L/ω_2(t)    (7)
where
ω_1(t) = Σ_{i=1}^{t} p_i
also, the means for classes C1 and C2 are given in equation (8)
μ_1 = Σ_{i=1}^{t} i·p_i / ω_1(t)  and  μ_2 = Σ_{i=t+1}^{L} i·p_i / ω_2(t)    (8)
Let 'µT' be the mean intensity for the whole image. It is easy to show that,
ω_1·μ_1 + ω_2·μ_2 = μ_T    (9)
ω_1 + ω_2 = 1    (10)
Fig 10: Affine Corrected Test Image1
Using discriminant analysis, Otsu defined the between-class variance of the thresholded image as
given in (11)
σ_B² = ω_1(μ_1 − μ_T)² + ω_2(μ_2 − μ_T)²    (11)
For bi-level thresholding, Otsu verified that the optimal threshold 't*' is chosen so that the
between-class variance σ_B² is maximized; that is, as given in equation (12),
t* = Arg Max_{1 ≤ t < L} { σ_B²(t) }    (12)
Let 'Xt' be the adaptive threshold of the input image 'X' based on Otsu's method and assume that
Xt ∈ {X0, X1, …, X(L−1)}. Designate the first subimage 'Xs1' as consisting of gray levels
{X0, X1, …, Xt} and the subimage 'Xs2' as consisting of gray levels {X(t+1), …, X(L−1)}. The
adaptive threshold obtained from equation (12) can be used to extract the foreground and
background image pixels. Because of the global equalization, the distortion of visual quality in
the background is retrieved with the original input pixels. The enhanced object features are
retained as such in the enhanced image. The threshold value is selected to segment the cloud
region by maximizing the criterion function, and the segmented regions are labeled based on their
connectivity.
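Otsu's criterion of equations (11)-(12) can be implemented directly on the gray-level histogram (an illustrative Python sketch, not the authors' code):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustive search for the threshold t* of equation (12):
    maximize the between-class variance of equation (11)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities p_i
    omega1 = np.cumsum(p)                    # class probability w1(t)
    mu = np.cumsum(p * np.arange(levels))    # cumulative mean
    mu_T = mu[-1]                            # global mean intensity
    omega2 = 1.0 - omega1
    valid = (omega1 > 0) & (omega2 > 0)
    mu1 = np.where(valid, mu / np.maximum(omega1, 1e-12), 0)
    mu2 = np.where(valid, (mu_T - mu) / np.maximum(omega2, 1e-12), 0)
    sigma_b = np.zeros(levels)
    sigma_b[valid] = (omega1 * (mu1 - mu_T) ** 2 +
                      omega2 * (mu2 - mu_T) ** 2)[valid]
    return int(np.argmax(sigma_b))

# bimodal toy image: dark background with a bright "cloud" block
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 3:7] = 200
t = otsu_threshold(img)
cloud_mask = img > t
```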
2.4. Connected Component Analysis
The cloud region is determined by means of a region based operation. The segmented cloud
region is scanned pixel by pixel in order to identify connected pixel regions, i.e. regions of
adjacent pixels which share the same set of intensity values 'V'. Connected component analysis
with 8-connectivity scans the image by moving along a row until it comes to a point p (where p
denotes the pixel to be labelled at any stage in the scanning process) for which V = {1}. When
this is true, it examines the four neighbours of p which have already been encountered in the scan.
After completing the scan, the equivalent label pairs are sorted into equivalence classes and a
unique label is assigned to each class. As a final step, a second scan is made through the image,
during which each label is replaced by the label assigned to its equivalence class. Thus the labels
represent the distinct regions of the segmented image.
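The two-pass labelling described above can be sketched with a small union-find structure (illustrative Python; not the authors' implementation):

```python
import numpy as np

def label_8(mask):
    """Classical two-pass connected-component labelling with
    8-connectivity: the first pass assigns provisional labels and
    records equivalences, the second rewrites each label with its
    equivalence-class representative."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                      # union-find forest; index 0 unused

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    next_label = 1
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            # the four already-visited 8-neighbours of (y, x)
            neigh = [labels[y + dy, x + dx]
                     for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1))
                     if 0 <= y + dy and 0 <= x + dx < w and labels[y + dy, x + dx]]
            if not neigh:
                parent.append(next_label)
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(find(n) for n in neigh)
                labels[y, x] = m
                for n in neigh:       # merge equivalence classes
                    parent[find(n)] = m
    for y in range(h):                # second pass: flatten labels
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels

mask = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 1]], dtype=bool)
labels = label_8(mask)   # diagonal pair joins under 8-connectivity
```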
2.5. Weak Boundary Elimination
Let R = {R1, R2, …, Rn} be the set of labelled regions and let ’M’ and ’N’ denote the rows and
columns of each region. The number of pixels representing each region Rk is given by
NP = Σ_{i=1}^{M} Σ_{j=1}^{N} Rk(i,j)    (13)
As the images are acquired in a non-ideal environment, there is a need to remove non-cloud
regions caused by interference from the aircraft systems and by complex terrain including snow
cover, land and ice. Continuous and massive cloud concentrated over a particular region is
identified as true cloud. In this work, regions with area less than 20 percent of the maximum
number of pixels are considered as noise and interference due to the cluttered environment. The
latitude and longitude of the cloud covered region has to be estimated to locate the region
geographically.
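The 20-percent area rule can be sketched as follows (illustrative Python, operating on a label image such as the one produced by connected component analysis):

```python
import numpy as np

def eliminate_weak_regions(labels, fraction=0.2):
    """Weak boundary elimination as described above: compute the pixel
    count NP of every labelled region and zero out regions whose area
    is below `fraction` of the largest region's area."""
    counts = np.bincount(labels.ravel())
    counts[0] = 0                     # label 0 is background
    cutoff = fraction * counts.max()
    weak = np.flatnonzero((counts > 0) & (counts < cutoff))
    out = labels.copy()
    out[np.isin(out, weak)] = 0
    return out

labels = np.zeros((4, 4), dtype=int)
labels[0:2, :] = 1                    # large region: 8 pixels
labels[3, 3] = 2                      # tiny region: 1 pixel, below 20%
cleaned = eliminate_weak_regions(labels)
```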
2.6 Lat-Long Identification
Latitude and longitude are the horizontal and vertical mapping lines of the earth. The latitude of
India ranges from 7º30'N to 35ºN and longitude from 65º66'E to 97º30'E. Cloud covered regions
are identified by forming a grid over the segmented image. The Universal Transverse Mercator
(UTM) coordinate system provides coordinates on a worldwide flat grid for easy computation,
which divides the world into 60 zones [17]. In this work, the Indian Territory is subdivided into
256 zones by forming a 16 × 16 grid. The latitude and longitude of each cloud covered region are
estimated approximately from the grid numbers as shown in Table 1. The block (1,1) represents
the leftmost top corner of the image, and the grid is traversed in the horizontal direction and then
in the vertical direction.
Table 1
Representative Look Up Table for Latitude and Longitude Identification

Block No | Latitude         | Longitude      | States covered
(1,1)    | 35°-28°7'30"     | 65°66'-73°57'  | Out of Boundary
(1,2)    | 35°-28°7'30"     | 73°57'-81°48'  | Chandigarh, Haryana, Himachal Pradesh, Jammu and Kashmir, Punjab, Uttarakhand
(1,3)    | 35°-28°7'30"     | 81°48'-89°39'  | Out of Boundary
(1,4)    | 35°-28°7'30"     | 89°39'-97°30'  | Out of Boundary
(2,1)    | 28°7'30"-21°15'  | 65°66'-73°57'  | Gujarat
(2,2)    | 28°7'30"-21°15'  | 73°57'-81°48'  | Delhi, Goa, Madhya Pradesh, Rajasthan, Uttar Pradesh
(2,3)    | 28°7'30"-21°15'  | 81°48'-89°39'  | Bihar, Chhattisgarh, Jharkhand, Sikkim, West Bengal
(2,4)    | 28°7'30"-21°15'  | 89°39'-97°30'  | Arunachal Pradesh, Assam, Manipur, Meghalaya, Mizoram, Nagaland, Tripura
(3,1)    | 21°15'-14°22'30" | 65°66'-73°57'  | Daman and Diu
(3,2)    | 21°15'-14°22'30" | 73°57'-81°48'  | Andhra Pradesh, Karnataka, Maharashtra
(3,3)    | 21°15'-14°22'30" | 81°48'-89°39'  | Orissa
(3,4)    | 21°15'-14°22'30" | 89°39'-97°30'  | Out of Boundary
(4,1)    | 14°22'30"-7°30'  | 65°66'-73°57'  | Lakshadweep
(4,2)    | 14°22'30"-7°30'  | 73°57'-81°48'  | Kerala, Tamil Nadu, Pondicherry
(4,3)    | 14°22'30"-7°30'  | 81°48'-89°39'  | Out of Boundary
(4,4)    | 14°22'30"-7°30'  | 89°39'-97°30'  | Andaman and Nicobar
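The grid-to-coordinates look-up can be sketched as follows (illustrative Python; the bounds follow the paper's stated extent, with the quoted longitude origin 65º66' kept verbatim as a decimal value, and the 4 × 4 representative grid of Table 1 is used rather than the full 16 × 16 grid):

```python
LAT_MAX, LAT_MIN = 35.0, 7.5        # degrees north, top to bottom
LON_MIN, LON_MAX = 65.66, 97.5      # degrees east, left to right

def block_bounds(row, col, n=4):
    """Lat-long span of block (row, col) in an n x n grid; block (1, 1)
    is the leftmost top corner, as in Table 1."""
    dlat = (LAT_MAX - LAT_MIN) / n
    dlon = (LON_MAX - LON_MIN) / n
    lat_top = LAT_MAX - (row - 1) * dlat
    lon_left = LON_MIN + (col - 1) * dlon
    return (lat_top, lat_top - dlat), (lon_left, lon_left + dlon)

# block (1, 1) spans roughly 35 deg N to 28 deg 7' 30" N
(lat_top, lat_bot), (lon_left, lon_right) = block_bounds(1, 1)
```

A segmented cloud region falling inside a block is then reported with that block's latitude-longitude span and the states listed against it in the look up table.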
3. RESULTS AND DISCUSSION
To demonstrate the efficacy of the proposed technique, 60 sample images with a spatial resolution
of 1 Km captured by KALPANA-1 satellite in infrared band were taken from the website of
Indian Meteorological Department. While SIFT is invariant to rotation, scale changes and affine
transformations, it is slow and not good at illumination changes; SURF showed its stability and
fast speed in the experiments. It is known that the ‘Fast-Hessian’ detector used in SURF is more
than 3 times faster than DoG (which was used in SIFT) and 5 times faster than Hessian-Laplace.
The image with the least cloud covered region from the collected dataset is chosen as the
reference image, which is shown in Fig.5, while the test images with different scales are shown in
Fig.6-8. Keypoint descriptors are found for the reference and test images by varying the threshold
value in the range 0.1 to 0.9, typically 0.75, and they are shown in Fig.8 and Fig.9. This is
followed by the calculation of transformation parameters for matched feature points, which are
applied to the test image as in Fig.6-8.
The proposed method is compared with the existing Adaptive Average Brightness Thresholding
technique [8]. In this method, the average brightness of the digitized, grayscale image is
calculated. Next, a threshold brightness value is chosen based on the average brightness of the
image. Finally, this threshold is applied to the image to divide it into different regions of
brightness. The first decision is to decide on the threshold brightness level. Weather satellite
images can range from ones with no cloud, over bodies of water, to ones with almost complete
cloud coverage. These two cases would provide extremely dark and extremely bright levels of
average brightness, respectively. Adaptive Average Brightness Thresholding uses an average-
cutoff function to determine the appropriate brightness threshold level. This function has the
characteristic that at low average brightness levels, the cutoff is very much above the average,
whereas at high average brightness levels, the cutoff is only marginally above the average
brightness.
cutoff(l) = (4/(r·c)) Σ_{i=k+1}^{k+r/2} Σ_{j=m+1}^{m+c/2} im(i,j) + f · ( ln(Gmax) − max_{i,j} ln(im(i,j)) )    (14)
where 'im' refers to the affine corrected test image, 'r' and 'c' refer to the size of the image, ln
denotes the natural logarithm, 'Gmax' is the number of grayscale values and 'f' is a multiplicative
coefficient determined empirically. The parameters 'k' and 'm' are used to navigate across the
four quadrants of the image.
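The average-cutoff behaviour described above can be sketched as follows (illustrative Python; the exact published form of equation (14) is hard to recover from the source, so this is an assumed reconstruction matching the qualitative description, not the authors' implementation):

```python
import numpy as np

def aabt_cutoff(quadrant, f=1.0, g_max=256):
    """Average-cutoff function in the spirit of equation (14): the
    quadrant's average brightness plus a logarithmic correction that is
    large for dark quadrants and only marginal for bright ones."""
    q = quadrant.astype(float) + 1.0          # shift to avoid ln(0)
    return q.mean() + f * (np.log(g_max) - np.log(q.max()))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
r2, c2 = img.shape[0] // 2, img.shape[1] // 2
# one cutoff per quadrant, mirroring the four values reported in the text
cutoffs = [aabt_cutoff(img[i:i + r2, j:j + c2])
           for i in (0, r2) for j in (0, c2)]
```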
The cutoff functions for the four quadrants of the test image using Adaptive Average Brightness
Thresholding are 148.3643, 158.7722, 93.3247, and 109.8715. The cloud image is segmented
from the original image using the cutoff function, and the segmented cloud image after
eliminating the weak boundaries is shown in Fig.16. The results depict that the segmented cloud
image obtained from the proposed technique is better than the Adaptive Average Brightness Algorithm.
The latitude and longitude of the cloud covered region is estimated by superimposing a 16 × 16
grid over the original image. The states that are covered by cloud are found by using the look up
table. Pseudo coloring is applied to the resultant segmented images from Otsu and AABT as
shown in Fig.17-Fig.18 and Fig.19-Fig.20. The segmentation of a NOAA satellite image was
performed using the Otsu and AABT algorithms; the NOAA satellite image is shown in Fig 21.
The satellite images are not ideal because of noise interference from the aircraft systems and
complex terrain, including land, snow cover and ice pixels with similar brightness values.
Segmented clouds are shown in green color and undetected clouds in red color. The results depict
that the cloud region was better segmented using the Otsu thresholding algorithm rather than the
Adaptive Average Brightness Algorithm. Hence segmentation by Otsu thresholding is chosen to
identify cloud cover regions. The proposed algorithm has been tested with different sets of images
from the website of the Indian Meteorological Department from June, 2011 to July, 2011; the
results are shown in Fig 11-20 and the cloud covered states are tabulated in Table 2.
Fig .11 Database of cloud covered images
Fig .12 Affine Corrected And Otsu Segmented Images
Table 2
Cloud Covered States

Test image | Latitude | Longitude | States covered
Fig.5      | 30°42'   | 76°54'    | Chandigarh
           | 30°15'   | 79°15'    | Uttar Khand
           | 28°      | 95°       | Arunachal Pradesh
           | 27°30'   | 88°3'     | Sikkim
           | 15°7'    | 75°2'     | Karnataka
           | 25°30'   | 91°1'     | Meghalaya
           | 29°8'    | 76°5'     | Delhi
           | 21°      | 75°       | Maharashtra
           | 9°7'     | 76°5'     | Kerala
Fig.6      | 30°42'   | 76°54'    | Chandigarh
           | 27°30'   | 88°3'     | Sikkim
           | 28°      | 95°       | Arunachal Pradesh
           | 29°85'   | 75°85'    | Punjab
           | 15°      | 75°       | Karnataka
           | 16°      | 80°       | Andhra Pradesh
           | 25°30'   | 91°       | Meghalaya
           | 11°      | 78°       | Tamil Nadu
Fig.7      | 11°      | 78°       | Delhi
           | 30°5'    | 74°9'     | Jammu and Kashmir
           | 20°8'    | 75°       | Maharashtra
           | 15°      | 75°       | Karnataka
           | 9°7'     | 76°5'     | Kerala
Fig. 13: AABT segmented image1
Fig. 14: AABT weak boundary elimination
Fig. 15: AABT segmented image2
Fig. 16: AABT weak boundary elimination
Fig. 17: Pseudo color map of Otsu segmented image3
Fig. 18: Pseudo color map of AABT segmented image3
Fig. 19: Pseudo color map of Otsu segmented image2
Fig. 20: Pseudo color map of AABT segmented image2
4. CONCLUSION
In this paper, a method for automatic weather forecasting by segmenting cloud images irrespective of scale, translation and rotation is proposed. The Speeded Up Robust Feature Transform (SURF) is incorporated in this work to align the captured satellite image with the reference image. The cloud regions are segmented using Otsu thresholding followed by connected component analysis. Latitude and longitude positions are identified by forming a grid over the segmented image, from which the cloud-covered states are determined. The algorithm has been tested on INSAT images and compared against Adaptive Average Brightness Thresholding (AABT); the proposed method outperforms the existing AABT. This work can be extended to identify cloud cover at the district level by increasing the number of latitude-longitude grid cells, and to cloud tracking by estimating cloud motion vectors for live weather forecasting.
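The segmentation pipeline summarised above, Otsu thresholding followed by connected component analysis, can be sketched as below. This is a minimal illustration of the two standard techniques, not the authors' implementation; the minimum-area criterion for discarding small components is an assumption:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Otsu's method: pick the gray level maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))   # cumulative mean up to level k
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def segment_clouds(img, min_area=20):
    """Threshold with Otsu, then keep connected components above min_area."""
    mask = img > otsu_threshold(img)
    labels, n = ndimage.label(mask)                      # 4-connectivity
    areas = ndimage.sum(mask, labels, range(1, n + 1))   # pixels per component
    keep = np.isin(labels, 1 + np.flatnonzero(areas >= min_area))
    return keep
```

Bright, compact regions survive both stages, while small specks (noise, isolated bright terrain pixels) are discarded by the area filter.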
Fig. 21: NOAA image, Otsu segmented and AABT segmented results
Authors
S.Mohamed Mansoor Roomi received his B.E degree in Electronics and communication
Engineering from Madurai Kamarajar University, in 1990.and the M. E(Power
Systems)&ME (Communication Systems) from Thiagarajar College of Engineering,
Madurai in 1992&1997 and PhD in 2009 from Madurai Kamarajar University . His
primary Research Interests include Image Enhancement and Analysis.
Bhargavi is currently pursuing her undergraduate degree in Electronics and Communication Engineering at Thiagarajar College of Engineering, Madurai, India. Her research interests are in the areas of image enhancement and image analysis.
M. Hajira Rahima Banu is currently pursuing her undergraduate degree in Electronics and Communication Engineering at Thiagarajar College of Engineering, Madurai, India. Her research interests are in the areas of image enhancement and image analysis.