In recent years, researchers in the remote sensing community have shown great interest in utilizing hyperspectral data for in-depth analysis of Earth's surface. Hyperspectral imaging produces high-dimensional data, creating a pressing need for approaches that can process such data efficiently. In this paper, we present an efficient approach for the analysis of hyperspectral data that combines non-linear manifold learning with k-nearest neighbor (k-NN) classification. Instead of dealing with the high-dimensional feature space directly, the proposed approach employs non-linear manifold learning, which determines a low-dimensional embedding of the original high-dimensional data by computing the geodesic distances between samples. First, the dimensionality of the hyperspectral data is reduced by building a pairwise distance matrix with Johnson's shortest-path algorithm and applying multidimensional scaling (MDS). Subsequently, the land cover regions in the hyperspectral data are classified based on the k nearest neighbors. The proposed k-NN based approach is evaluated on hyperspectral data collected by NASA's (National Aeronautics and Space Administration) AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) over the Kennedy Space Center, Florida. The resulting classification accuracies demonstrate the approach's effectiveness in land cover classification of hyperspectral data.
Multispectral (MS) images are used in aerial and space applications, target detection, and remote sensing. MS images are very rich in spectral resolution, but at the cost of spatial resolution. We propose a new method to increase the spatial resolution of MS images. For spatial resolution enhancement we employ a super-resolution technique based on Principal Component Analysis (PCA) that learns edge details from a database. Experiments have been carried out on real MS data. Extending the method to hyperspectral (HS) data is left as future work.
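The core PCA idea can be sketched as follows: a PCA basis is learned from a database of example patches and reused to reconstruct unseen patches. The oriented-edge patches below are synthetic, standing in for the method's real edge-detail training database; the patch size and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
xx, yy = np.meshgrid(np.arange(8), np.arange(8))
patches = []
for _ in range(500):
    theta = rng.uniform(0, np.pi)   # random edge orientation
    edge = (np.cos(theta) * (xx - 3.5) + np.sin(theta) * (yy - 3.5)) > 0
    patches.append(edge.astype(float).ravel())
patches = np.array(patches)                  # 500 patches of 8x8 = 64 pixels

pca = PCA(n_components=16).fit(patches)      # learned edge basis
recon = pca.inverse_transform(pca.transform(patches))
mse = float(np.mean((patches - recon) ** 2))
```

A compact basis like this is what lets a super-resolution step hallucinate plausible high-frequency edge detail from a learned database rather than from the low-resolution input alone.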
Spectroscopy, or hyperspectral imaging, consists of the acquisition, analysis, and extraction of spectral information measured over a specific region or object using an airborne or satellite device. Hyperspectral imaging has recently become an active field of research. One way of analysing such data is through clustering. However, due to the high dimensionality of the data and the small distances between different material signatures, clustering such data is a challenging task. In this paper, we empirically compare five clustering techniques on different hyperspectral data sets: K-means, K-medoids, fuzzy C-means, hierarchical clustering, and density-based spatial clustering of applications with noise (DBSCAN). Four data sets are used for this purpose: Botswana, Kennedy Space Center, Pavia, and Pavia University. Besides accuracy, we adopt four more similarity measures: the Rand statistic, the Jaccard coefficient, the Fowlkes-Mallows index, and the Hubert index. In terms of accuracy, we find that fuzzy C-means clustering performs best on the Botswana and Pavia data sets, K-means and K-medoids give better results on the Kennedy Space Center data set, and hierarchical clustering is best on Pavia University.
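The comparison protocol can be sketched on toy data: run two of the study's clustering algorithms and score them against ground-truth labels with external indices of the kind listed above. The blob data set is an illustrative stand-in for the hyperspectral scenes.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, fowlkes_mallows_score

# Toy data with known classes, standing in for labeled hyperspectral pixels.
X, y = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)

scores = {}
for name, algo in [("k-means", KMeans(n_clusters=4, n_init=10, random_state=0)),
                   ("hierarchical", AgglomerativeClustering(n_clusters=4))]:
    labels = algo.fit_predict(X)
    scores[name] = (adjusted_rand_score(y, labels),    # Rand-type statistic
                    fowlkes_mallows_score(y, labels))  # Fowlkes-Mallows index
```

External indices like these compare a clustering to reference labels independently of how cluster IDs are numbered, which is what makes cross-algorithm comparisons meaningful.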
Classification of Multi-date Image using NDVI values (ijsrd.com)
The Advanced Wide Field Sensor (AWiFS) of IRS-P6 is an improved version of the WiFS sensor of IRS-1C/1D. AWiFS operates in four spectral bands identical to those of LISS-III (Linear Imaging Self-Scanning Sensor). The Normalized Difference Vegetation Index (NDVI) is a simple graphical indicator that can be used to analyze remote sensing measurements, and such indices can be used to predict classes in Remote Sensing (RS) images. In this paper, we classify AWiFS imagery based on the NDVI values of five images captured by the AWiFS sensor on different dates. For classification, we use the Sum of Squared Differences (SSD): the clustered image is compared with the reference image, and the best match under SSD determines the class. This is a simple one-step process, which is faster than the classical approach.
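The two computations the method relies on, NDVI and SSD matching, can be sketched directly. The band arrays below are synthetic, and treating the red and NIR bands as separate inputs is an assumption about the data layout.

```python
import numpy as np

def ndvi(red, nir):
    # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids division by zero.
    return (nir - red) / (nir + red + 1e-9)

def ssd(a, b):
    # Sum of Squared Differences between two NDVI maps.
    return float(np.sum((a - b) ** 2))

rng = np.random.default_rng(2)
ref = ndvi(rng.uniform(0.1, 0.3, (64, 64)),    # synthetic red band
           rng.uniform(0.4, 0.8, (64, 64)))    # synthetic NIR band

# Candidate maps at increasing noise levels; the least-distorted one
# should win the SSD match against the reference.
candidates = [ref + rng.normal(0, s, ref.shape) for s in (0.01, 0.2, 0.5)]
best = int(np.argmin([ssd(c, ref) for c in candidates]))
```

The matching step is a plain nearest-template search, which is why the overall process is a single pass rather than an iterative classifier.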
Hyperspectral Data Compression Using Spatial-Spectral Lossless Coding Technique (CSCJournals)
Hyperspectral imaging is widely used in many applications, especially in vegetation, climate change, and desert studies. Such imaging produces a huge amount of data, which demands transmission, processing, and storage resources, especially for spaceborne imaging. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance on the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we analyze the spectral cross-correlation between bands for Hyperion hyperspectral data: the spectral cross-correlation matrix is calculated and its strength assessed, and we propose a new technique to find highly correlated groups of bands in the hyperspectral data cube based on the "inter-band correlation square". From the resulting groups of bands we derive a new predictor that can efficiently predict all bands within the data cube using a weighted combination of spectral and spatial prediction. The results are evaluated against other state-of-the-art predictors for lossless compression.
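The band-grouping analysis can be sketched as follows: compute the spectral cross-correlation matrix and group adjacent bands whose squared correlation exceeds a threshold. The bands are synthetic, and the adjacent-pair grouping rule is a simplified stand-in for the paper's actual "inter-band correlation square" criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
base = rng.normal(size=(1000, 2))                 # two latent "materials"
# 8 synthetic bands: bands 0-3 follow material 0, bands 4-7 material 1.
bands = np.column_stack([base[:, i // 4] + 0.1 * rng.normal(size=1000)
                         for i in range(8)])

C = np.corrcoef(bands, rowvar=False)              # 8x8 correlation matrix

# Greedy grouping: start a new group whenever the squared correlation
# between consecutive bands drops below the threshold.
groups, current = [], [0]
for i in range(1, 8):
    if C[i - 1, i] ** 2 > 0.5:
        current.append(i)
    else:
        groups.append(current)
        current = [i]
groups.append(current)
```

Grouping by correlation lets a predictor reuse one reference band per group, which is exactly what makes spectral prediction cheap without reordering the whole cube.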
Semi-Automatic Classification Algorithm: The differences between Minimum Dist... (Fatwa Ramdani)
This course will focus on semi-automatic classification algorithms: the differences between Minimum Distance, Maximum Likelihood, and Spectral Angle Mapper, based on remotely-sensed data.
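Two of the three classifiers differ only in their decision rule and can be sketched compactly; the class-mean spectra and test pixel below are invented for illustration (Maximum Likelihood would additionally require class covariance matrices).

```python
import numpy as np

means = np.array([[0.1, 0.4, 0.5],      # hypothetical "vegetation" mean spectrum
                  [0.3, 0.3, 0.2]])     # hypothetical "soil" mean spectrum
x = np.array([0.12, 0.38, 0.52])        # pixel spectrum to classify

# Minimum Distance: assign to the nearest class mean (Euclidean).
md_class = int(np.argmin(np.linalg.norm(means - x, axis=1)))

# Spectral Angle Mapper: assign to the class mean with the smallest
# angle to the pixel spectrum (insensitive to illumination scaling).
cosines = means @ x / (np.linalg.norm(means, axis=1) * np.linalg.norm(x))
sam_class = int(np.argmin(np.arccos(np.clip(cosines, -1.0, 1.0))))
```

The key practical difference: Minimum Distance is sensitive to overall brightness, while SAM compares only spectral shape, so the two can disagree on shadowed pixels.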
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne... (IOSR Journals)
Abstract: We investigated the classification of satellite images and multispectral remote sensing data, focusing on uncertainty analysis in the produced land-cover maps. We propose an efficient technique for classifying multispectral satellite images into road, building, and green areas using a Support Vector Machine (SVM). Classification is carried out in three modules: (a) preprocessing, using Gaussian filtering and conversion from RGB to the Lab color space; (b) object segmentation, using the proposed cluster-repulsion-based kernel Fuzzy C-Means (FCM); and (c) classification, using a one-to-many SVM classifier. The goal of this research is to achieve efficient classification of satellite images through object-based image analysis. The proposed work is evaluated on satellite images, and its accuracy is compared to FCM-based classification. The results show that the proposed technique achieves better results, reaching accuracies of 79%, 84%, 81%, and 97.9% for road, tree, building, and vehicle classification, respectively.
Keywords: satellite image, FCM clustering, classification, SVM classifier.
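The three-module pipeline above can be sketched end to end, under loud assumptions: the image is synthetic, the RGB-to-Lab conversion is omitted (we work on smoothed RGB values), and plain k-means stands in for the cluster-repulsion-based kernel FCM.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)
img = rng.uniform(size=(32, 32, 3))                 # synthetic RGB "satellite" image

# (a) Preprocessing: Gaussian filtering over the spatial axes only.
smooth = gaussian_filter(img, sigma=(1, 1, 0))
feats = smooth.reshape(-1, 3)                       # one feature row per pixel

# (b) Segmentation into regions (k-means stands in for kernel FCM).
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

# (c) One-vs-rest SVM trained to reproduce the segment labels.
clf = SVC(decision_function_shape="ovr").fit(feats, segments)
train_acc = float((clf.predict(feats) == segments).mean())
```

In the real method the SVM is trained on labeled objects, not on the segmentation output itself; this sketch only shows how the modules chain together.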
Automatic traffic light controller for emergency vehicle using peripheral int... (IJECEIAES)
Traffic lights play an important role in traffic management by controlling traffic on the road. The situation at traffic light junctions worsens especially during emergencies. During traffic congestion, it is difficult for an emergency vehicle to cross a road that involves many junctions, leading to unsafe conditions that may cause accidents. An Automatic Traffic Light Controller for Emergency Vehicles was designed and developed to help emergency vehicles cross traffic light junctions during emergencies. This project used a Peripheral Interface Controller (PIC) to program a priority-based traffic light controller for emergency vehicles. During an emergency, a vehicle such as an ambulance can trigger the traffic light to change from red to green, automatically clearing its path. Using Radio Frequency (RF) signalling, the traffic light returns to normal operation once the ambulance has crossed the road. Results showed the design can respond within a range of 55 meters. The project was successfully designed, implemented, and tested.
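The priority logic can be sketched as a tiny state machine. The real system runs on a PIC microcontroller; the class, method names, and two-state model below are invented for illustration.

```python
RED, GREEN = "RED", "GREEN"

class EmergencyTrafficLight:
    """Priority-based light: an RF trigger from an approaching ambulance
    preempts the normal cycle (states and names are illustrative)."""

    def __init__(self):
        self.state = RED
        self.preempted = False

    def rf_trigger(self):
        # Ambulance approaching: force green to clear its path.
        self.preempted = True
        self.state = GREEN

    def rf_clear(self):
        # Ambulance has crossed: resume the normal cycle at red.
        self.preempted = False
        self.state = RED

light = EmergencyTrafficLight()
light.rf_trigger()    # light is forced GREEN for the ambulance
light.rf_clear()      # light returns to the normal cycle
```

On the actual hardware the same two transitions would be interrupt handlers fired by the RF receiver rather than method calls.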
Separability Analysis of Integrated Spaceborne Radar and Optical Data: Sudan ... (rsmahabir)
Abstract: The purpose of this study was to determine, via spectral separability using divergence measures, the best individual bands and combinations of various numbers of bands for five land cover/land use classes along the Blue Nile in Sudan. The data for this analysis were a stack of 15 layers, including RADARSAT-2 C-band and PALSAR L-band quad-polarized radar registered with ASTER optical data, as well as four variance texture measures extracted from the RADARSAT-2 images. Spectral signatures were obtained for each class and examined with various separability measures. This examination is useful for better understanding the relative value of different types of remote sensing data and the best band combinations for possible visual analysis and for improving land cover/land use classification accuracy. Results show that the best single band for analysis was the RADARSAT-2 VH variance texture measure. The best pair of bands was the ASTER visible red and the RADARSAT-2 HV variance texture; adding the PALSAR VH band gave the best three-band combination, all three being very different data types. Further, based on the divergence values, only eight bands are needed to achieve maximum separation between the land cover/land use classes; beyond this point, classification accuracy is expected to decrease, and as few as six bands suffice to reach viable classification accuracy.
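A divergence separability measure of the kind used for band selection can be sketched for two classes modeled as Gaussians; the means and covariances below are synthetic, and this is the standard pairwise-divergence formula, not necessarily the exact variant the study used.

```python
import numpy as np

def divergence(m1, C1, m2, C2):
    # Pairwise divergence between two Gaussian class models
    # (mean vectors m, covariance matrices C).
    iC1, iC2 = np.linalg.inv(C1), np.linalg.inv(C2)
    d = m1 - m2
    term_cov = 0.5 * np.trace((C1 - C2) @ (iC2 - iC1))
    term_mean = 0.5 * np.trace((iC1 + iC2) @ np.outer(d, d))
    return float(term_cov + term_mean)

m_a = np.array([0.2, 0.5])
m_b = np.array([0.6, 0.1])
C = np.eye(2) * 0.01                         # shared covariance for simplicity

d_near = divergence(m_a, C, m_a + 0.01, C)   # similar classes: small value
d_far = divergence(m_a, C, m_b, C)           # distinct classes: large value
```

Band subsets are then ranked by the average (or minimum) pairwise divergence over all class pairs, which is how the study arrives at its best single, pair, and triple of bands.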
Investigation of Chaotic-Type Features in Hyperspectral Satellite Data (csandit)
Hyperspectral images provide detailed spectral information with more than several hundred channels. On the other hand, the high dimensionality of hyperspectral images also causes classification problems, due to the huge ratio between the number of features and the number of training samples. In this paper, Lyapunov Exponents (LEs) are used to determine the chaotic-type structure of an EO-1 Hyperion hyperspectral image of a mixed forest site in Turkey. Experimental results demonstrate that the EO-1 Hyperion image has a chaotic structure, verified by checking the distribution of the Lyapunov Exponents, and that LEs can be used as discriminative features to improve classification accuracy for hyperspectral images.
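Estimating a largest Lyapunov exponent, the chaotic-type feature used above, can be sketched on the logistic map, whose exponent at r = 4 is known to be ln 2; the spectral-series estimation in the paper is more involved than this textbook example.

```python
import numpy as np

def lyapunov_logistic(r, x0=0.2, n=10000, burn=100):
    # Largest Lyapunov exponent of f(x) = r*x*(1-x): the average of
    # log|f'(x)| = log|r*(1 - 2x)| along the orbit, after a burn-in.
    x, total = x0, 0.0
    for i in range(n + burn):
        if i >= burn:
            total += np.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

le = lyapunov_logistic(4.0)   # fully chaotic regime: expect roughly ln 2
```

A positive exponent signals sensitive dependence on initial conditions, which is the property the paper checks for in the Hyperion spectral series before using LEs as features.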
Robust High Resolution Image from the Low Resolution Satellite Image (idescitation)
In this paper, we propose a framework for detecting and locating land cover classes in a low-resolution image, which can play a very important role in satellite surveillance using MODIS data. The land cover classes are identified by constructing super-resolution images from the MODIS data, whose highest resolution is 250 meters per pixel, by magnifying and de-blurring the low-resolution satellite image through kernel regression. Super-resolution (SR) reconstruction is an image interpolation technique used to increase the size of a single image. The SRKR algorithm takes a single low-resolution (LR) image and generates a de-blurred high-resolution (HR) image: we first perform bi-cubic interpolation on the input LR image with the desired scaling factor, and the kernel regression (KR) model is then used to generate the de-blurred HR image. K-means is one of the simplest unsupervised learning algorithms for the well-known clustering problem; it generates a specific number of disjoint, flat (non-hierarchical) clusters. K-means clustering is employed to compare MODIS data and recognize land cover types, i.e., "Forest", "Land", "Sea", and "Ice".
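The KR model at the heart of the SRKR step can be sketched as a 1-D Nadaraya-Watson kernel regression; the sine-wave data and Gaussian bandwidth below are illustrative, and the real framework applies the same idea in 2-D to image intensities.

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, h=0.1):
    # Nadaraya-Watson estimator: Gaussian-kernel weighted average of
    # the training values around each query point (bandwidth h).
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)                 # known signal standing in for pixels
xq = np.array([0.25, 0.75])               # query locations (the "HR grid")
yq = kernel_regression(x, y, xq, h=0.05)  # smooth estimates near the extremes
```

In the SR setting the query grid is denser than the training grid, so the same weighted average fills in the magnified image while suppressing interpolation noise.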
Feature Based Image Classification by using Principal Component Analysis (IT Industry)
Classification of different types of cloud images is a primary step in forecasting precipitation and other weather constituents. A PCA-based classification system is presented in this paper to classify different types of single-layered and multi-layered clouds. Principal Component Analysis (PCA) provides enhanced accuracy in feature-based image identification and classification compared to other techniques. PCA is a feature-based classification technique characteristically used for image recognition; it relies on the principal features of an image, which compactly represent the image. The approach used in this research applies these principal features to identify different cloud image types with better accuracy. A classifier system has also been designed to demonstrate this enhancement: the system reads features of gray-level images to create an image space, which is then used for classification. In the testing phase, a new cloud image is classified by comparing it with the constructed image space using the PCA algorithm.
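The "image space" idea can be sketched as follows: project training images onto principal components and assign a new image to the class of its nearest neighbor in that space. The two synthetic gray-level "cloud types" below are invented stand-ins for single- and multi-layered clouds.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Two synthetic patterns on 8x8 = 64 pixel images:
# class 0 brightens the left half, class 1 brightens alternating columns.
blobs = rng.normal(1.0, 0.1, (20, 64))
blobs[:, :32] += 1.0
streaks = rng.normal(1.0, 0.1, (20, 64))
streaks[:, ::2] += 1.0
X = np.vstack([blobs, streaks])
y = np.array([0] * 20 + [1] * 20)

pca = PCA(n_components=5).fit(X)          # build the image space
Z = pca.transform(X)

def classify(img):
    # Project the new image and take its nearest training image's label.
    z = pca.transform(img.reshape(1, -1))
    return int(y[np.argmin(np.linalg.norm(Z - z, axis=1))])

test_img = rng.normal(1.0, 0.1, 64)
test_img[:32] += 1.0                      # left-half-bright, class-0-like
label = classify(test_img)
```

Projecting before matching is the whole point: distances in the 5-D image space ignore pixel noise directions that carry no class information.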
An Automatic Neural Networks System for Classifying Dust, Clouds, Water, and ... (Waqas Tariq)
This paper presents an automatic remote sensing system designed to classify dust, cloud, water, and vegetation features in the Red Sea area, allowing the system to run the testing and classification process without retraining. The system can rebuild the architecture of the neural network (NN) according to a linear combination of the number of epochs, the number of neurons, the training functions, the activation functions, and the number of hidden layers. The proposed system is trained on features of the provided images using 13 training functions and is designed to find the networks that best classify data not included in the training set. The system shows excellent classification of test data drawn from the training data; the performances of the best three training functions are 99.82%, 99.64%, and 99.28% on test data not included in the training data. Although the proposed system was trained on data selected from only one image, it correctly classifies the features in all images, and it can be applied to remotely sensed images to classify other features. The system was applied to several sub-images to classify the specified features; the classification performance was calculated by applying the proposed system to small sections selected from contiguous areas containing the features.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp... (CSCJournals)
Extraction of geospatial data from photogrammetric sensing images is becoming increasingly important with advances in technology. Today, Geographic Information Systems (GIS) are used in a large variety of applications in engineering, city planning, and the social sciences. Geospatial data such as roads, buildings, and rivers are the most critical inputs to a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there is considerable inhomogeneity due to varying hierarchy: building types and rooftop shapes vary widely, and in some areas buildings are placed irregularly or too close to each other. For these reasons, even with high-resolution IKONOS and QuickBird satellite imagery, the success rate of building extraction remains low. This paper proposes a solution to the problem of automatic, unsupervised extraction of building features in multispectral satellite images, irrespective of rooftop structure. Instead of detecting the region of interest, the algorithm eliminates the areas outside it, which extracts the rooftops completely regardless of their shapes. Extensive tests indicate that the methodology performs well at extracting buildings in complex environments.
Performance of RGB and L Base Supervised Classification Technique Using Multi... (IJERA Editor)
Recent growth in sensor technology has opened up new opportunities and applications in GIS. This technology enables new methods that do not merely rely on currently available products but automatically lead to new ones. The aim of this paper is to make maximum use of remote sensing data and GIS techniques to assess land use and land cover classification in the Kiliyar sub-basin sector of the Palar river in the northern part of Tamil Nadu. Merged IRS-P6 LISS-III data were used to perform the classification in ERDAS Imagine. The RGB and L based supervised classification was built on multispectral analysis and on land use and land cover information (maps and existing reports), involving advanced technology and complex data processing to derive detailed imagery of the study region. The ground surface reflects much of the radar energy emitted by the sensor over the study region, which makes it easy to distinguish between water bodies, hilly areas, agriculture, settlements, and wetlands.
Hyperspectral Image (HSI) classification amounts to classifying images that contain a multitude of spectral bands. In the H2I project we have been investigating how Convolutional Neural Networks (CNNs) can be adapted to perform HSI classification. In this lightning talk we present a novel way of viewing the HSI through a simple data-format transformation, together with a new design of the network training strategy. With a minor modification of a lightweight CNN classifier for Cifar10, the proposed approach enables the network to exploit the information between the different spectral bands. The classifier is evaluated extensively, using different strategies, on a dataset for wood recognition. The results obtained, in terms of accuracy and training time, show that the proposed approach is lightweight, simple to train, and effective.
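One plausible form of such a data-format transformation can be sketched as a plain reshape: each pixel's spectrum is folded into a small single-channel "image" that a Cifar10-style CNN can consume. The cube size and the 10x10 target shape below are illustrative assumptions, not the project's actual format.

```python
import numpy as np

rng = np.random.default_rng(6)
cube = rng.uniform(size=(64, 64, 100))           # H x W x 100 spectral bands

# Flatten the spatial dimensions, then fold each 100-band spectrum
# into a 10x10 single-channel image (NCHW batch layout for a CNN).
pixels = cube.reshape(-1, 100)                   # (4096, 100)
spectral_images = pixels.reshape(-1, 1, 10, 10)  # (4096, 1, 10, 10)
```

After this transformation, the CNN's spatial convolutions slide across neighboring bands instead of neighboring pixels, which is one way to let a lightweight 2-D classifier exploit inter-band structure.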
Performance evaluation of transfer learning based deep convolutional neural n... (IJECEIAES)
Deep learning (DL) techniques are effective in various applications, such as parameter estimation, image classification, recognition, and anomaly detection. They excel with abundant training data but struggle when data are limited. To overcome this, transfer learning is commonly used, leveraging complex learning abilities, saving time, and handling limited labeled data. This study assesses a transfer learning (TL) based pre-trained deep convolutional neural network (DCNN) for classifying land use and land cover using a limited and imbalanced dataset of fused spectro-temporal data. It compares the performance of shallow artificial neural networks (ANNs) and deep convolutional neural networks, utilizing multispectral Sentinel-2 and high-resolution PlanetScope data. Both the machine learning and deep learning algorithms successfully classified the fused data, but the transfer learning based deep convolutional neural network outperformed the artificial neural network. The evaluation considered a weighted average of the F1-score and overall classification accuracy. The transfer learning based convolutional neural network achieved a weighted average F1-score of 0.92 and a classification accuracy of 0.93, while the artificial neural network achieved a weighted average F1-score of 0.87 and a classification accuracy of 0.89. These results highlight the superior performance of the transfer-learned convolutional neural network on a limited and imbalanced dataset compared to the traditional artificial neural network algorithm.
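The evaluation combines overall accuracy with a class-frequency-weighted F1, which matters precisely because the dataset is imbalanced; a small sketch of that scoring on made-up three-class predictions:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical imbalanced labels (class 2 dominates) and predictions.
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])

acc = accuracy_score(y_true, y_pred)                # overall accuracy
f1w = f1_score(y_true, y_pred, average="weighted")  # support-weighted F1
```

Unlike plain accuracy, the weighted F1 penalizes poor precision/recall on each class in proportion to its support, so a model that only gets the majority class right cannot score well on both metrics at once.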
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
Automatic traffic light controller for emergency vehicle using peripheral int...IJECEIAES
Traffic lights play such important role in traffic management to control the traffic on the road. Situation at traffic light area is getting worse especially in the event of emergency cases. During traffic congestion, it is difficult for emergency vehicle to cross the road which involves many junctions. This situation leads to unsafe conditions which may cause accident. An Automatic Traffic Light Controller for Emergency Vehicle is designed and developed to help emergency vehicle crossing the road at traffic light junction during emergency situation. This project used Peripheral Interface Controller (PIC) to program a priority-based traffic light controller for emergency vehicle. During emergency cases, emergency vehicle like ambulance can trigger the traffic light signal to change from red to green in order to make clearance for its path automatically. Using Radio Frequency (RF) the traffic light operation will turn back to normal when the ambulance finishes crossing the road. Result showed the design is capable to response within the range of 55 meters. This project was successfully designed, implemented and tested.
Separability Analysis of Integrated Spaceborne Radar and Optical Data: Sudan ...rsmahabir
Abstract-The purpose of this study was to determine via spectral separability using divergence measures the best individual and combinations of various numbers of bands for five land cover/ land use classes along the Blue Nile in Sudan. The data for this analysis were a stack of 15 layers including RADARSAT-2 C-band and PALSAR L-band quad-polarized radar registered with ASTER optical data, as well as four variance texture measures extracted from the RADARSAT-2 images. Spectral signatures were obtained for each class and examined by various separability measures. This examination is useful for better understanding the relative value of different types of remote sensing data and best band combinations for possible visual analysis and for improving land cover/ land use classification accuracy. Results show that the best single band for analysis was the RADARSAT-2 VH variance texture measure. The best pair of bands was the ASTER visible red and the RADARSAT-2 HV variance texture, which also included the PALSAR VH band for the best three band combination, all bands being very different data types. Further, based upon the divergence values, only eight bands are needed to achieve maximum separation between land cover/ land use classes. Beyond this point, classification accuracy is expected to decrease, with as few as six bands needed to reach viable classification accuracy.
Investigation of Chaotic-Type Features in Hyperspectral Satellite Datacsandit
Hyperspectral images provide detailed spectral info
rmation with more than several hundred
channels. On the other hand, the high dimensionalit
y in hyperspectral images also causes to
classification problems due to the huge ratio betwe
en the number of training samples and the
features. In this paper, Lyapunov Exponents (LEs) a
re used to determine chaotic-type structure
of EO- 1 Hyperion hyperspectral image, a mixed fore
st site in Turkey. Experimental results
demonstrate that EO-1 Hyperion image has a chaotic
structure by checking distribution of
Lyapunov Exponents (LEs) and they can be used as d
iscriminative features to improve
classification accuracy for hyperspectral images.
Robust High Resolution Image from the Low Resolution Satellite Imageidescitation
In this paper, we propose a framework for detecting and locating land cover
classes from a low-resolution image, which can play a very important role in satellite
surveillance imagery from MODIS data. The land cover classes are identified by constructing super-
resolution images from the MODIS data, whose highest resolution is 250
meters per pixel. The low-resolution satellite image is magnified and de-blurred through
kernel regression. SR reconstruction is an image interpolation technique that has been used to
increase the size of a single image. The SRKR algorithm takes a single low-resolution image
and generates a de-blurred high-resolution image: we perform bi-cubic interpolation on the
input low-resolution (LR) image with a desired scaling factor, and the KR model is then
used to generate the de-blurred HR image. K-means is one of the simplest unsupervised
learning algorithms that solve the well-known clustering problem; it generates a
specific number of disjoint, flat (non-hierarchical) clusters. K-means clustering is employed in
order to compare MODIS data and recognize land cover types, i.e., “Forest”, “Land”, “Sea”,
and “Ice”.
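The k-means step described above can be sketched in a few lines. This toy version clusters scalar pixel brightness only; the real pipeline would operate on MODIS bands, and all names and values here are illustrative:

```python
import random

def kmeans(pixels, k, iters=20, seed=0):
    """Plain k-means on scalar pixel values: partition the image into k
    disjoint, flat clusters (e.g. forest / land / sea / ice labels)."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)            # initial centers from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:                       # assign to nearest center
            c = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[c].append(p)
        centers = [sum(g) / len(g) if g else centers[i]   # recompute means
                   for i, g in enumerate(groups)]
    labels = [min(range(k), key=lambda i: abs(p - centers[i])) for p in pixels]
    return labels, centers

# Two well-separated brightness populations should fall into two clusters.
pixels = [0.1, 0.12, 0.11, 0.9, 0.88, 0.91]
labels, centers = kmeans(pixels, 2)
print(labels)
```

With the well-separated toy values, the three dark pixels end up in one cluster and the three bright pixels in the other, regardless of which two pixels seed the centers.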
Feature Based Image Classification by using Principal Component Analysis
Classification of different types of cloud images is a primary step in forecasting precipitation and other weather constituents. A PCA based classification system is presented in this paper to classify different types of single-layered and multi-layered clouds. Principal Component Analysis (PCA) provides enhanced accuracy in feature based image identification and classification as compared to other techniques. PCA is a feature based classification technique that is characteristically used for image recognition. PCA is based on the principal features of an image, and these features distinctly represent the image. The approach used in this research applies the principal features of an image to identify different cloud image types with better accuracy. A classifier system has also been designed to exhibit this enhancement. The designed system reads features of gray-level images to create an image space, which is then used for classification of images. In the testing phase, a new cloud image is classified by comparing it with the specified image space using the PCA algorithm.
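The core of building a PCA image space, extracting a leading principal component, can be sketched with power iteration. This is a toy stand-in for the full eigen-decomposition a real classifier would use, with made-up data:

```python
def leading_component(rows, iters=200):
    """Power iteration for the leading principal component of a small
    feature matrix (each row is one feature vector), the first step in
    constructing a PCA image space."""
    d = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(d)]
    centered = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix of the centered features
    cov = [[sum(r[a] * r[b] for r in centered) / (len(rows) - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                      # repeated cov @ v, normalized
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Features vary mostly along the first axis, so the leading component
# should point (almost) along it.
data = [[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.1], [4.0, 0.0]]
pc = leading_component(data)
print(abs(pc[0]) > abs(pc[1]))
```

A full classifier would project both the training images and a new cloud image onto several such components and compare distances in that reduced space.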
An Automatic Neural Networks System for Classifying Dust, Clouds, Water, and ...
This paper presents an automatic remote sensing system designed to classify dust, clouds, water and vegetation features from the Red Sea area, thus enabling the system to perform the test and classification process without retraining. The system can rebuild the architecture of the neural network (NN) according to a linear combination among the number of epochs, the number of neurons, training functions, activation functions, and the number of hidden layers. The proposed system is trained on the features of the provided images using 13 training functions, and is designed to find the best networks that have the ability to best classify data not included in the training data. The system shows excellent classification of test data collected from the training data. The performances of the best three training functions are 99.82%, 99.64% and 99.28% for test data not included in the training data. Although the proposed system was trained on data selected from only one image, it correctly classifies the features in all images. The designed system can be applied to remotely sensed images for classifying other features. The system was applied on several sub-images to classify the specified features. The correct performance of classifying the features from the sub-images was calculated by applying the proposed system on some small sections selected from contiguous areas containing the features.
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Extraction of geospatial data from photogrammetric sensing images becomes more and more important with advances in technology. Today, Geographic Information Systems are used in a large variety of applications in engineering, city planning and the social sciences. Geospatial data like roads, buildings and rivers are the most critical feeds of a GIS database. However, extracting buildings is one of the most complex and challenging tasks, as there exists a lot of inhomogeneity due to varying hierarchy. The types of buildings and the shapes of rooftops are very inconsistent. Also, in some areas the buildings are placed irregularly or too close to each other. For these reasons, even when using high resolution IKONOS and QuickBird satellite imagery, the quality percentage of building extraction is very low. This paper proposes a solution to the problem of automatic and unsupervised extraction of building features irrespective of rooftop structures in multispectral satellite images. The algorithm, instead of detecting the region of interest, eliminates areas other than the region of interest, which extracts the rooftops completely irrespective of their shapes. Extensive tests indicate that the methodology performs well in extracting buildings in complex environments.
Performance of RGB and L Base Supervised Classification Technique Using Multi...
The present growth of sensor technology opens new opportunities and applications in GIS. This enhanced technology calls for new methods that do not focus only on currently available products but automatically lead to new ones. The aim of the paper is to make maximum use of remote sensing data and GIS techniques to assess land use and land cover classification in the Kiliyar sub-basin sector of the Palar river in the northern part of Tamil Nadu. IRS P6 LISS III merged data are used to perform the classification in ERDAS Imagine. The RGB and L based supervised classification was based upon multispectral analysis and land use and land cover information (maps and existing reports), which involves advanced technology and complex data processing to derive detailed imagery of the study region. The ground surface reflects more of the radar energy emitted by the sensor in the study region, which makes it easy to distinguish between the water body, hilly, agriculture, settlement and wetland classes.
International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Hyperspectral Image (HSI) classification amounts to classifying images that contain a multitude of spectral bands. In the H2I project we have been investigating how Convolutional Neural Networks (CNNs) can be adapted to perform HSI classification. In this lightning talk we present a novel way of viewing the HSI through a simple data format transformation and a new design of the network training strategy. With minor modification of the lightweight CNN based classifier Cifar10, the proposed approach enables the network to exploit the information between the different spectral bands. The classifier is evaluated extensively, using different strategies, on a dataset for wood recognition. The obtained results in terms of accuracy and training time prove that the proposed approach is lightweight, simple to train, and effective.
Similar to An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Regions in Hyperspectral Data via Non-Linear Dimensionality Reduction
Performance evaluation of transfer learning based deep convolutional neural n...
Deep learning (DL) techniques are effective in various applications, such as parameter estimation, image classification, recognition, and anomaly detection. They excel with abundant training data but struggle with limited data. To overcome this, transfer learning is commonly used, leveraging complex learning abilities, saving time, and handling limited labeled data. This study assesses a transfer learning (TL)-based pre-trained “deep convolutional neural network (DCNN)” for classifying land use land cover using a limited and imbalanced dataset of fused spectro-temporal data. It compares the performance of shallow artificial neural networks (ANNs) and deep convolutional neural networks, utilizing multi-spectral sentinel-2 and high-resolution planet scope data. Both machine learning and deep learning algorithms successfully classified the fused data, but the transfer learning-based deep convolutional neural network outperformed the artificial neural network. The evaluation considered a weighted average of F1-score and overall classification accuracy. The transfer learning-based convolutional neural network achieved a weighted average F1-score of 0.92 and a classification accuracy of 0.93, while the artificial neural network achieved a weighted average F1-score of 0.87 and a classification accuracy of 0.89. These results highlight the superior performance of the transfer learned convolutional neural network on a limited and imbalanced dataset compared to the traditional artificial neural network algorithm.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
An Integrated Inductive-Deductive Framework for Data Mapping in Wireless Sens...M H
Wireless sensor networks (WSNs) have an intrinsic interdependency with the environments in which they operate. The part of the world with which an application is concerned is defined as that application's domain. This paper advocates that an application domain of a WSN can serve as a supplement to analysis, interpretation, and visualisation methods and tools. We believe it is critical to elevate the capabilities of the data mapping services proposed in [1] to make use of the special characteristics of an application domain. In this paper, we propose an adaptive Multi-Dimensional Application Domain-driven (M-DAD) mapping framework that is suitable for mapping an arbitrary number of sense modalities and is capable of utilising the relations between different modalities as well as other parameters of the application domain to improve the mapping performance. M-DAD starts with an initial user defined model that is maintained and updated throughout the network lifetime. The experimental results demonstrate that the M-DAD mapping framework performs as well as or better than mapping services without its extended capabilities.
Forward-Backward Time-Stepping (FBTS) has proven its potential to reconstruct images of buried objects in an inhomogeneous medium with useful quantitative information about their size, shape, and location. The Total Variation (TV) regularization method was incorporated into the FBTS algorithm to deal with the ill-posedness or ill-conditionedness of the inverse problem. The effectiveness of the proposed technique is confirmed by numerical simulations. The numerical method was carried out on simple object detection through FBTS with and without the TV regularization method. The detection and reconstruction of the relative permittivity and conductivity of the simple object showed an improvement when the TV regularization method was applied, as it smoothed the oscillations in the images and gave a better estimation of the image boundaries.
Land scene classification from remote sensing images using improved artificia...
The images obtained from remote sensing contain background complexities and similarities among objects that act as a challenge during the classification of land scenes. Land scenes are utilized in various fields such as agriculture, urbanization, and disaster management, to detect the condition of land surfaces and help identify the suitability of the land surfaces for planting crops and building construction. The existing methods help in the classification of land scenes through the images obtained from remote sensing technology, but the background complexities and the presence of similar objects act as a barricade against providing better results. To overcome these issues, an improved artificial bee colony optimization algorithm with convolutional neural network (IABC-CNN) model is proposed to achieve better results in classifying land scenes. The images are collected from the aerial image dataset (AID), Northwestern Polytechnical University-Remote Sensing Image Scene 45 (NWPU-RESIS45), and University of California Merced (UCM) datasets. IABC effectively selects the best features from the features extracted using visual geometry group-16 (VGG-16). The features selected by the IABC are provided to the classification process using a multiclass support vector machine (MSVM). Results obtained from the proposed IABC-CNN achieve a better classification accuracy of 96.40% with an error rate of 3.6%.
study and analysis of hy si data in 400 to 500
The ability to extract information about the world and present it in a way that our visual perception can comprehend is the ultimate goal of imaging science in remote sensing. Hyperspectral imaging, also called imaging spectroscopy, is one of the most powerful tools in the field of remote sensing; it is a technique used by researchers to detect terrain, vegetation and minerals. This paper reports an analysis of hyperspectral images. First, a hyperspectral image of the Amravati region from the Maharashtra province of India is analyzed using supervised classification. The report reveals a spectral analysis of the Amravati region. We acquired satellite imagery to perform the classification using a maximum likelihood classifier. The analysis is performed in ERDAS to determine the spectral reflectance against the band number. The analytical outcome of the paper represents the soil, water and vegetation indices of the region.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...
Because of its spectral-spatial and temporal resolution over greater areas, hyperspectral imaging (HSI) has found widespread application in the field of object classification. The HSI is typically used to accurately determine an object's physical characteristics as well as to locate related objects with appropriate spectral fingerprints. As a result, the HSI has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, objects require a lot of time to classify; for this reason, both spectral and spatial feature fusion have been employed. The existing classification strategy leads to increased misclassification, and the feature fusion method is unable to preserve semantic object-inherent features. This study addresses these research difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique to minimize feature size while maintaining object intrinsic qualities. Lastly, a soft-margins kernel is proposed for the multi-layer deep support vector machine (MLDSVM) to reduce misclassification. The standard Indian Pines dataset is used for the experiment, and the outcome demonstrates that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
Hyperparameters analysis of long short-term memory architecture for crop cla...
Deep learning (DL) has seen a massive rise in popularity for remote sensing (RS) based applications over the past few years. However, the performance of DL algorithms is dependent on the optimization of various hyperparameters, since the hyperparameters have a huge impact on the performance of deep neural networks. The impact of hyperparameters on the accuracy and reliability of DL models is a significant area for investigation. In this study, the grid search algorithm is used for hyperparameter optimization of a long short-term memory (LSTM) network for RS-based classification. The hyperparameters considered for this study are the optimizer, activation function, batch size, and the number of LSTM layers. In this study, over 1,000 hyperparameter sets are evaluated and the results of all the sets are analyzed to see the effects of various combinations of hyperparameters as well as the effect of each individual parameter on the performance of the LSTM model. The performance of the LSTM model is evaluated using the performance metrics of minimum loss and average loss, and it was found that classification can be highly affected by the choice of optimizer; however, other parameters, such as the number of LSTM layers, have less influence.
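The grid-search procedure over such a hyperparameter space can be sketched as follows. The `evaluate` function is a stand-in for actually training and validating an LSTM; the grid values mirror the abstract, but the scores are invented:

```python
from itertools import product

# Hypothetical hyperparameter grid in the spirit of the study:
# optimizer, activation function, batch size, number of LSTM layers.
grid = {
    "optimizer": ["adam", "sgd", "rmsprop"],
    "activation": ["tanh", "relu"],
    "batch_size": [32, 64],
    "layers": [1, 2, 3],
}

def evaluate(cfg):
    """Stand-in for training an LSTM and returning its validation loss;
    a real study would fit and score the network here."""
    penalty = {"adam": 0.0, "sgd": 0.3, "rmsprop": 0.1}[cfg["optimizer"]]
    return penalty + 0.01 * cfg["layers"]

keys = list(grid)
configs = [dict(zip(keys, vals)) for vals in product(*grid.values())]
best = min(configs, key=evaluate)       # exhaustive search over the grid
print(len(configs), best["optimizer"])
```

The exhaustive enumeration is what makes grid search expensive: this toy grid already has 36 combinations, and the study's 1,000+ sets follow the same multiplicative growth.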
Comparison of Segmentation Algorithms and Estimation of Optimal Segmentation ...
Recent advancement in sensor technology allows very high spatial resolution along with multiple spectral bands. Many studies highlight that Object Based Image Analysis (OBIA) is more accurate than pixel-based classification for high resolution (< 2 m) imagery. Image segmentation is a crucial step for OBIA, and it is a very formidable task to estimate optimal parameters for segmentation as it does not have any unique solution. In this paper, we have studied different segmentation algorithms (both mono-scale and multi-scale) for different terrain categories and showed how the segmented output depends upon various parameters. Later, we introduce a novel method to estimate optimal segmentation parameters. The main objectives of this study are to highlight the effectiveness of presently available segmentation techniques on very high-resolution satellite data and to automate the segmentation process. Pre-estimation of segmentation parameters is more practical and efficient in OBIA. Assessment of segmentation algorithms and estimation of segmentation parameters are examined based on very high-resolution multi-spectral WorldView-3 (0.3 m, PAN sharpened) data.
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Dr.P.S.Jagadeesh Kumar
Dartmouth College, Hanover
New Hampshire, United States
Power System State Estimation - A Review
The aim of this article is to provide a comprehensive
survey on power system state estimation techniques. The
algorithms used for finding the system states under both static
and dynamic state estimations are discussed in brief. The
authors are of the opinion that pursuing research in the
area of state estimation with PMU and SCADA measurements
is state of the art and timely.
Artificial Intelligence Technique based Reactive Power Planning Incorporating...
Reactive Power Planning is a major concern in the
operation and control of power systems. This paper compares
the effectiveness of Evolutionary Programming (EP) and
New Improved Differential Evolution (NIMDE) to solve
Reactive Power Planning (RPP) problem incorporating
FACTS Controllers like Static VAR Compensator (SVC),
Thyristor Controlled Series Capacitor (TCSC) and Unified
power flow controller (UPFC) considering voltage stability.
With the help of the Fast Voltage Stability Index (FVSI), the critical
lines and buses are identified to install the FACTS controllers.
The optimal settings of the control variables of the generator
voltages, transformer tap settings, and allocation and parameter
settings of the SVC, TCSC and UPFC are considered for reactive
power planning. The testing and validation of the proposed
algorithm are conducted on the IEEE 30-bus system and a 72-bus
Indian system. Simulation results show that the UPFC gives
better results than SVC and TCSC and the FACTS controllers
reduce the system losses.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-...
Damping of power system oscillations with the help
of proposed optimal Proportional Integral Derivative Power
System Stabilizer (PID-PSS) and Static Var Compensator
(SVC)-based controllers are thoroughly investigated in this
paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi machine power systems by considering detailed model
of the generators (model 1.1). The effectiveness of FACTS-based
controllers in general and SVC-based controllers in
particular depends upon their proper location. Modal
controllability and observability are used to locate SVC–based
controller. The performance of the proposed controllers is
compared with conventional lead-lag power system stabilizer
(CPSS) and demonstrated on 10 machines, 39 bus New England
test system. Simulation studies show that the proposed genetic
based PID-PSS with SVC based controller provides better
performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi...
The need to operate the power
system economically and with optimum voltage levels has
led to an increasing interest in Distributed
Generation. In order to reduce the power losses and to improve
the voltage in the distribution system, distributed generators
(DGs) are connected to load bus. To reduce the total power
losses in the system, the most important process is to identify
the proper location for placement and sizing of DGs. It presents a
new methodology using a new population based meta heuristic
approach, namely the Artificial Bee Colony algorithm (ABC), for
the placement of Distributed Generators(DG) in the radial
distribution systems to reduce the real power losses and to
improve the voltage profile and mitigate voltage sags. Power
loss reduction is an important factor for utility companies because
it is directly proportional to the company's benefits in a
competitive electricity market, while reaching the better power
quality standards is too important as it has vital effect on
customer orientation. In this paper an ABC algorithm is
developed to achieve these goals altogether. In order to evaluate the
sag mitigation capability of the proposed algorithm, voltage
in voltage-sensitive buses is investigated. An existing 20 kV
network has been chosen as test network and results are
compared with the proposed method in the radial distribution
system.
Line Losses in the 14-Bus Power System Network using UPFC
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs little modification in the
Jacobian matrix. A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
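The Newton-Raphson iteration at the heart of such a load-flow calculation can be illustrated on the scalar single-line power flow equation P = (V1·V2/X)·sin(δ). This is a one-variable analogue of the full NRLF Jacobian update, not the paper's program; all values are illustrative:

```python
import math

def solve_angle(p_target, v1=1.0, v2=1.0, x=0.5, tol=1e-10):
    """Newton-Raphson on P = (V1*V2/X) * sin(delta): iterate the angle
    until the power mismatch vanishes. The derivative term plays the
    role of the Jacobian in the full multi-bus NRLF."""
    delta = 0.0
    for _ in range(50):
        mismatch = p_target - (v1 * v2 / x) * math.sin(delta)
        if abs(mismatch) < tol:
            break
        jacobian = (v1 * v2 / x) * math.cos(delta)   # d(P)/d(delta)
        delta += mismatch / jacobian                 # Newton update
    return delta

d = solve_angle(1.0)
print(math.sin(d) * 2.0)  # recovered power transfer
```

With V1 = V2 = 1 p.u. and X = 0.5 p.u., a 1 p.u. transfer corresponds to δ = π/6, and the quadratic convergence of the update is what gives NRLF its characteristic speed.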
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery...
The size and shape of an opening in a dam cause
stress concentration, and also cause stress variation in the
rest of the dam cross section. The gravity method of the analysis
does not consider the size of opening and the elastic property
of dam material. Thus this study employs
the Finite Element Method which considers the size of
opening, elastic property of material, and stress distribution
because of geometric discontinuity in cross section of dam.
Stress concentration inside the dam increases with the opening
in dam which results in the failure of dam. Hence it is
necessary to analyse large openings inside the dam. By making
the percentage area of opening constant and varying size and
shape of opening the analysis is carried out. For this purpose
a section of Koyna Dam is considered. The dam is defined as a
plane strain element in FEM, based on geometry and loading
condition. Thus this available information specified our path
of approach to carry out 2D plane strain analysis. The results
obtained are then compared mutually to get most efficient
way of providing large opening in the gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling
Pushover Analysis is a popular tool for seismic
performance evaluation of existing and new structures. It is
a nonlinear static procedure wherein monotonically increasing
loads are applied to the structure till the structure is unable
to resist further load. The strength of concrete and steel
adopted during the analysis may not be the same when the real
structure is
constructed and the pushover analysis results are very sensitive
to material model adopted, geometric model adopted, location
of plastic hinges and, in general, to the procedure followed by the
analyzer. In this paper attempt has been made to assess
uncertainty in pushover analysis results by considering user
defined hinges and frame modeled as bare frame and frame
with slab modeled as rigid diaphragm and results compared
with experimental observations. Uncertain parameters
considered include the strength of concrete, strength of steel
and cover to the reinforcement which are randomly generated
and incorporated into the analysis. The results are then
compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile...
This paper, based on auctions, presents a
framework for secure multi-party decision
protocols. In addition to the implementations, which are very
lightweight, the main focus is on synchronizing security
features for avoiding agreement manipulation and reducing
the user traffic. Through this paper one can understand that
the different auction protocols on top of the framework can
be run collaboratively using mobile devices. The paper presents the
negotiation between the auctioneer and the bidders, and this
negotiation shows that multiparty security is far better than
in the existing system.
Selfish Node Isolation & Incentivation using Progressive Thresholds
The problems associated with selfish nodes in
MANET are addressed by a collaborative watchdog approach
which reduces the detection time for selfish nodes, thereby
improving the performance and accuracy of watchdogs [1]. In
the related works they make use of credit based systems, reputation
based mechanisms, pathrater and watchdog mechanism
to detect such selfish nodes. In this paper we follow an approach
of collaborative watchdog which reduces the detection
time for selfish nodes and also involves the removal of such
selfish nodes based on some progressively assessed thresholds.
The threshold gives the nodes a chance to stop misbehaving
before it is permanently deleted from the network.
The node passes through several isolation processes before it
is permanently removed. Another version of AODV protocol
is used here which allows the simulation of selfish nodes in
NS2 by adding or modifying log files in the protocol.
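The progressive-threshold idea can be sketched as simple bookkeeping. The threshold values and class name below are illustrative assumptions, not those of the paper:

```python
class Watchdog:
    """Progressive-threshold bookkeeping for selfish nodes: a node is
    warned, then isolated, and only permanently removed after repeated
    misbehaviour (the thresholds here are made up for illustration)."""
    WARN, ISOLATE, REMOVE = 3, 6, 9   # assumed misbehaviour thresholds

    def __init__(self):
        self.counts = {}

    def report(self, node):
        """Record one misbehaviour report and return the node's state."""
        self.counts[node] = self.counts.get(node, 0) + 1
        c = self.counts[node]
        if c >= self.REMOVE:
            return "removed"
        if c >= self.ISOLATE:
            return "isolated"
        if c >= self.WARN:
            return "warned"
        return "ok"

    def redeem(self, node):
        """Cooperative behaviour lowers the count, giving the node a
        chance to stop misbehaving before permanent deletion."""
        if self.counts.get(node, 0) > 0:
            self.counts[node] -= 1

wd = Watchdog()
states = [wd.report("n1") for _ in range(9)]
print(states[-1])
```

A node thus passes through the warning and isolation stages before permanent removal, matching the staged isolation process the abstract describes.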
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS...
Wireless sensor networks are networks having a non-wired
infrastructure and dynamic topology. In the OSI model, each
layer is prone to various attacks, which halt the performance
of a network. In this paper, several attacks on four layers of the
OSI model are discussed and a security mechanism is described
to prevent an attack in the network layer, i.e. the wormhole attack. In
a wormhole attack, two or more malicious nodes make a covert
channel which attracts the traffic towards itself by depicting a
low latency link and then start dropping and replaying packets
in the multi-path route. This paper proposes a promiscuous mode
method to detect and isolate the malicious node during
wormhole attack by using Ad-hoc on demand distance vector
routing protocol (AODV) with omnidirectional antenna. The
implemented methodology shows that the nodes which are
not participating in multi-path routing generate an alarm
message during delay, and then the malicious node is detected
and isolated from the network. We also notice that not only
the same kind of attacks but also the same kind of
countermeasures can appear in multiple layers. For example,
misbehavior detection techniques can be applied to almost all
the layers we discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in...
The recent advancements in wireless technology
and their wide-spread deployment have made remarkable
enhancements in efficiency in the corporate, industrial
and military sectors. The increasing popularity and usage of
wireless technology is creating a need for more secure wireless
ad hoc networks. This paper researches and develops
a new protocol that prevents wormhole attacks on an ad hoc
network. A few existing protocols detect wormhole attacks but
they require highly specialized equipment not found on most
wireless devices. This paper aims to develop a defense against
wormhole attacks, an Anti-worm protocol, which is based on
responsive parameters and does not require a significant
amount of specialized equipment, tight clock synchronization,
or GPS dependencies.
Cloud Security and Data Integrity with Client Accountability Framework
The Cloud based services provide much efficient
and seamless ways for data sharing across the cloud. The fact
that the data owners no longer possess data makes it very
difficult to assure data confidentiality and to enable secure
data sharing in the cloud. Despite all its advantages, this
will remain a major limitation that acts as a barrier to the
wider deployment of cloud based services. One of the possible
ways for ensuring trust in this aspect is the introduction of
accountability feature in the cloud computing scenario. The
Cloud framework requires promotion of distributed
accountability for such a dynamic environment [1]. In some
works, there is an accountable framework suggested to ensure
distributed accountability for data sharing by the generation
of only a log of data access, but without any embedded feedback
mechanism for owner permission towards data
protection [2]. The proposed system is an enhanced client
accountability framework which provides an additional client
side verification for each access towards enhanced security of
data. The integrity of content of data which resides in the
cloud service provider is also maintained by secured
outsourcing. Besides, the authentication of JAR(Java Archive)
files are done to ensure file protection and to maintain a safer
environment for data sharing. The analysis of various
functionalities of the framework depicts both the
accountability and security feature in an efficient manner.
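The accountability framework above hinges on access logs that cannot be silently altered. The paper's concrete mechanism is not detailed in this abstract, but one common way to make each log record tamper-evident is to attach an HMAC computed over its fields; the sketch below is an illustrative stand-in, not the framework's actual implementation (function names and record fields are assumptions):

```python
import hashlib
import hmac
import json
import time

def log_access(key: bytes, user: str, action: str) -> dict:
    """Create a tamper-evident access-log entry by HMAC-ing its fields."""
    entry = {"user": user, "action": action, "ts": int(time.time())}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return entry

def verify_entry(key: bytes, entry: dict) -> bool:
    """Recompute the HMAC over everything except the tag and compare."""
    body = {k: v for k, v in entry.items() if k != "tag"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["tag"])
```

Any modification of a logged field after the fact invalidates the tag, which is the property an owner-side verification step would rely on.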
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet – IDES Editor
An HTTP botnet uses the HTTP protocol to create chains of bots, thereby compromising other systems. By using the HTTP protocol and port number 80, attacks can not only be hidden but can also pass through the firewall without being detected. DPR based detection leads to better analysis of botnet attacks [3]. However, it provides only probabilistic detection of the attacker and is also time consuming and error prone. This paper proposes a genetic algorithm based layered approach for detecting as well as preventing botnet attacks. The paper reviews a p2p firewall implementation which forms the basis of filtering. Performance evaluation is done based on precision, F-value and probability. The layered approach reduces the computation and the overall time requirement [7]. The genetic algorithm promises a low false positive rate.
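The precision and F-value metrics used in the evaluation are standard quantities derived from detection counts; a minimal sketch (the counts in the usage example are illustrative, not results from the paper):

```python
def precision_recall_f(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall and F-value from true/false positive and false
    negative counts of a detector."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_value = 2 * precision * recall / (precision + recall)
    return precision, recall, f_value

# e.g. 8 botnet flows caught, 2 benign flows flagged, 2 botnet flows missed:
p, r, f = precision_recall_f(tp=8, fp=2, fn=2)
```

A low false positive rate, as claimed for the genetic algorithm, shows up here as a small `fp` and hence a precision close to 1.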
Enhancing Data Storage Security in Cloud Computing Through Steganography – IDES Editor
In cloud computing, data storage is a significant issue because the entire data reside over a set of interconnected resource pools that enable the data to be accessed through virtual machines. Cloud computing moves application software and databases to large data centers, where the management of data is actually done. As the resource pools are situated in various corners of the world, the management of data and services may not be fully trustworthy. So, there are various issues that need to be addressed with respect to the management, service, privacy and security of data, among which privacy and security are the most challenging. To ensure privacy and security of data-at-rest in cloud computing, we have proposed an effective and novel approach to data security in cloud computing by means of hiding data within images, following the concept of steganography. The main objective of this paper is to prevent unauthorized users from accessing data in cloud data storage centers. This scheme stores data at cloud data storage centers and retrieves it when needed.
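The abstract does not specify which steganographic embedding the scheme uses; the most common baseline for hiding data within images is least-significant-bit (LSB) substitution over raw pixel bytes. The sketch below illustrates that general idea only (the byte-array image representation and function names are assumptions, not the paper's method):

```python
def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide message bits in the least-significant bits of pixel bytes."""
    out = bytearray(pixels)
    # MSB-first bit stream of the message
    bits = [(byte >> (7 - i)) & 1 for byte in message for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover image too small for message")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    """Recover `length` bytes from the LSBs of the first pixels."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
        for j in range(0, len(bits), 8)
    )
```

Because only the lowest bit of each pixel byte changes, the stego image is visually indistinguishable from the cover, which is what makes the hidden data-at-rest inconspicuous to unauthorized users.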
The main tasks of a Wireless Sensor Network
(WSN) are data collection from its nodes and communication
of this data to the base station (BS). The protocols used for
communication among the WSN nodes and between the WSN
and the BS, must consider the resource constraints of nodes,
battery energy, computational capabilities and memory. The
WSN applications involve unattended operation of the network
over an extended period of time. In order to extend the lifetime
of a WSN, efficient routing protocols need to be adopted. The proposed low power routing protocol, based on a tree network structure, reliably forwards the measured data towards the BS using TDMA. An energy consumption analysis of the WSN making use of this protocol is also carried out. It is found that the network is energy efficient, with an average duty cycle of 0.7% for the WSN nodes. The OMNeT++ simulation platform along with the MiXiM framework is used.
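A duty cycle of 0.7% translates directly into node lifetime: a TDMA node's average current draw is the duty-cycle-weighted mix of its active and sleep currents. The sketch below shows that back-of-the-envelope calculation; the battery capacity and current figures are illustrative assumptions, not values from the paper:

```python
def node_lifetime_days(battery_mah: float, i_active_ma: float,
                       i_sleep_ma: float, duty: float) -> float:
    """Estimate sensor-node lifetime under a TDMA schedule.

    duty is the fraction of time the radio/MCU is active (e.g. 0.007
    for the 0.7% average duty cycle reported in the abstract).
    """
    # average current = weighted mix of active and sleep currents
    i_avg_ma = duty * i_active_ma + (1 - duty) * i_sleep_ma
    hours = battery_mah / i_avg_ma
    return hours / 24

# Hypothetical node: 2400 mAh battery, 20 mA active, 20 uA asleep
lifetime = node_lifetime_days(2400, 20.0, 0.02, 0.007)
```

With these assumed figures the node would last well over a year, which is why driving the duty cycle below 1% matters so much for unattended WSN deployments.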
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... – IDES Editor
The authentication of internet based co-banking services should not be exposed to high risk. Passwords are highly vulnerable to virus attacks due to the lack of high-end embedded security methods. To make passwords more secure, people are generally compelled to select jumbled-up character based passwords, which are not only less memorable but equally prone to insecurity. The use of multiple distributed shares has been studied as a solution to the authentication problem, with algorithms based on thresholding of pixels drawn from image processing and visual cryptography, where a subset of the shares is used to recover the original image for authentication via a correlation function [1][2]. The main disadvantage of the above studies is the plain storage of the shares; moreover, one of the shares is supplied to the customer, which opens the possibility of misuse by a third party. This paper proposes a technique for scrambling the pixels within the shares by key based random permutation (KBRP) before authentication is attempted. The total number of shares to be created depends on the multiplicity of ownership of the account. By this method, customers' uncertainty regarding the security, storage and retrieval of their half of the shares is minimized.
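The core idea, scrambling share pixels with a permutation derived from a key so that a stored share is useless without that key, can be sketched as below. Note this uses a key-seeded Fisher-Yates shuffle as a stand-in; the actual KBRP algorithm of the paper derives its permutation differently, and the function names are illustrative:

```python
import random

def _key_order(n: int, key: str) -> list:
    """Derive a pixel permutation of length n from the key.

    Stand-in for KBRP: a Fisher-Yates shuffle seeded by the key, so the
    same key always yields the same permutation.
    """
    order = list(range(n))
    random.Random(key).shuffle(order)
    return order

def scramble(pixels: list, key: str) -> list:
    """Reorder share pixels according to the key-derived permutation."""
    return [pixels[i] for i in _key_order(len(pixels), key)]

def unscramble(scrambled: list, key: str) -> list:
    """Invert the permutation; only the correct key restores the share."""
    restored = [0] * len(scrambled)
    for pos, i in enumerate(_key_order(len(scrambled), key)):
        restored[i] = scrambled[pos]
    return restored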
This paper presents a trifocal Rotman lens design approach. The effects of focal ratio and element spacing on the performance of the Rotman lens are described. A three-beam prototype feeding a 4-element antenna array working in L-band has been simulated using the RLD v1.7 software. Simulated results show that the lens has a return loss of –12.4 dB at 1.8 GHz. The variation of the beam-to-array-port phase error with changes in focal ratio and element spacing has also been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images – IDES Editor
Hyperspectral images can be efficiently compressed through a linear predictive model, such as the one used in the SLSQ algorithm. In this paper we exploit this predictive model on the AVIRIS images by identifying, through an off-line approach, a common subset of bands which are not spectrally correlated with any other bands. These bands are not useful as prediction references for the SLSQ 3-D predictive model, so we need to encode them via other prediction strategies which consider only spatial correlation. We have obtained this subset by clustering the AVIRIS bands via the clustering-by-compression approach. The main result of this paper is the list of bands, for the AVIRIS images, that are not related to the others. The clustering trees obtained for AVIRIS, and the relationships among bands they depict, are also an interesting starting point for future research.
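Clustering by compression rests on the normalized compression distance (NCD): two band byte streams are close if compressing their concatenation costs little more than compressing the larger one alone. A minimal sketch with a general-purpose compressor (zlib here; the paper's exact compressor and preprocessing are not specified in this abstract):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte streams.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length. Values near 0 mean the two
    streams share most of their information; near 1 means unrelated.
    """
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Bands whose NCD to every other band stays high would be the spectrally unrelated ones that SLSQ's 3-D predictor cannot exploit, which is the subset the paper isolates.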
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ... – IDES Editor
A microelectronic circuit of block-elements functionally analogous to two hydrogen bonding networks is investigated. The hydrogen bonding networks are extracted from the β-lactamase protein and are formed in its active site. Each hydrogen bond of the network is described in the equivalent electrical circuit by a three- or four-terminal block-element. Each block-element is coded in Matlab. Static and dynamic analyses are performed. The resultant microelectronic circuit analogous to the hydrogen bonding network operates as a current mirror, sine pulse source and triangular pulse source, as well as a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... – IDES Editor
In this paper a method is proposed to discriminate real-world scenes into natural and manmade scenes of similar depth. The global roughness of a scene image varies as a function of image depth. An increase in image depth leads to an increase in roughness in manmade scenes; natural scenes, on the contrary, exhibit smooth behavior at greater image depth. This particular arrangement of pixels in the scene structure can be well explained by the local texture information in a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For the final classification we have used both supervised and unsupervised learning, via the K-Nearest Neighbor (KNN) classifier and the Self Organizing Map (SOM) respectively. This technique is suitable for online classification due to its very low computational complexity.
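The texture unit of a pixel, in the classical He and Wang formulation that the abstract's "texture unit matrix" presumably builds on (an assumption on our part), codes each of the 8 neighbors in a 3x3 window as a ternary digit relative to the center and combines them into a single number:

```python
def texture_unit(patch) -> int:
    """Texture unit number of a 3x3 patch (ternary neighbor coding).

    Each neighbor contributes 0 (darker than center), 1 (equal) or
    2 (brighter), weighted by powers of 3, giving a value in [0, 6560].
    """
    center = patch[1][1]
    # 8 neighbors, clockwise from the top-left corner
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    unit = 0
    for i, v in enumerate(neighbors):
        digit = 0 if v < center else (1 if v == center else 2)
        unit += digit * 3 ** i
    return unit
```

Computing this for every pixel yields the texture unit matrix, whose histogram (the texture spectrum) is a cheap global descriptor, consistent with the claimed low computational complexity for online classification.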
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk encourages a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Connector Corner: Automate dynamic content and events by pushing a button – DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He has around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
GraphRAG is All You need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality – Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.