Here we present an alternative ANN structure, the functional link ANN (FLANN), for image denoising. In contrast to a feed-forward ANN structure such as the multilayer perceptron (MLP), the FLANN is a single-layer structure in which non-linearity is introduced by enhancing the input pattern with a nonlinear functional expansion. In this work, three different expansions are applied. With a proper choice of functional expansion, the FLANN performs as well as, and in some cases better than, the MLP for denoising an image corrupted with salt-and-pepper noise. Because the single-layer FLANN eliminates the need for a hidden layer, it requires much less computation than an MLP. In the presence of additive white Gaussian noise in the image, the performance of the proposed network is also found to be superior to that of an MLP. In particular, the FLANN with a Chebyshev functional expansion works best for suppressing salt-and-pepper noise in an image.
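The Chebyshev functional expansion mentioned above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function name `chebyshev_expand` and the toy input are our own, and inputs are assumed normalized to [-1, 1]:

```python
import numpy as np

def chebyshev_expand(x, order=4):
    """Expand each input feature with Chebyshev polynomials T_0..T_order,
    using the recurrence T_0 = 1, T_1 = x, T_n = 2*x*T_{n-1} - T_{n-2}."""
    terms = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        terms.append(2 * x * terms[-1] - terms[-2])
    return np.concatenate(terms)

# A FLANN output is just a weighted sum of the expanded input pattern:
x = np.array([0.5, -0.25])           # e.g. a normalized pixel neighbourhood
phi = chebyshev_expand(x, order=4)   # 2 features -> 10 expanded terms
w = np.zeros_like(phi)               # weights, trained e.g. by LMS/BP
y = float(w @ phi)                   # single-layer output, no hidden layer
```

Because the nonlinearity lives entirely in the expansion, only the weight vector `w` has to be trained, which is where the computational saving over an MLP comes from.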
The document discusses denoising techniques for images captured by single-sensor digital cameras using a color filter array (CFA). It compares principal component analysis (PCA) and independent component analysis (ICA) based denoising of CFA images. PCA and ICA are linear adaptive transforms that can be used to represent image data in a way that better distinguishes signal from noise. The document outlines the PCA and ICA algorithms and discusses how K-means clustering can be used with them. It generates noise to add to a reference image and implements PCA and ICA based denoising in MATLAB. Performance is evaluated using metrics like PSNR, WPSNR, SSIM and correlation coefficient.
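To make the PCA side of this concrete, a minimal patch-domain PCA denoiser (our own sketch, not the paper's CFA-specific algorithm) projects patches onto the leading principal components and discards the rest, where most of the noise energy lies:

```python
import numpy as np

def pca_denoise_patches(patches, k):
    """Keep only the k leading principal components of a patch matrix.

    patches: (n_patches, patch_dim) array; k: components to retain.
    Noise spreads evenly across all components while signal concentrates
    in the leading ones, so truncating the basis suppresses noise.
    """
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)   # patch covariance
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    basis = vecs[:, -k:]                         # top-k eigenvectors
    return centered @ basis @ basis.T + mean

rng = np.random.default_rng(0)
clean = np.outer(rng.normal(size=200), rng.normal(size=16))  # rank-1 signal
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = pca_denoise_patches(noisy, k=1)
```

With a rank-1 signal and k=1, the projection keeps nearly all the signal while discarding the noise variance spread over the remaining 15 components.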
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
Curved Wavelet Transform For Image Denoising using MATLAB - Nikhil Kumar
This document summarizes a student project on image denoising using wavelet analysis. It introduces wavelet transforms as a method to denoise digital images corrupted by noise. The project uses MATLAB to apply a discrete wavelet transform with a Haar wavelet, thresholds wavelet coefficients at different levels to compress and denoise the image, and demonstrates the results on an example image.
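A minimal version of the Haar thresholding step described here can be written directly in NumPy. This is an illustrative sketch, assuming even image dimensions, rather than the project's MATLAB code:

```python
import numpy as np

def haar_1level(img):
    """One-level 2D Haar transform (averaging normalization)."""
    a = (img[:, ::2] + img[:, 1::2]) / 2   # column averages
    d = (img[:, ::2] - img[:, 1::2]) / 2   # column differences
    ll, hl = (a[::2] + a[1::2]) / 2, (a[::2] - a[1::2]) / 2
    lh, hh = (d[::2] + d[1::2]) / 2, (d[::2] - d[1::2]) / 2
    return ll, hl, lh, hh

def ihaar_1level(ll, hl, lh, hh):
    """Exact inverse of haar_1level."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    ae, ao = ll + hl, ll - hl              # even/odd rows of averages
    de, do = lh + hh, lh - hh              # even/odd rows of differences
    out[::2, ::2], out[::2, 1::2] = ae + de, ae - de
    out[1::2, ::2], out[1::2, 1::2] = ao + do, ao - do
    return out

def denoise(img, thr):
    """Soft-threshold the detail sub-bands, keep the approximation."""
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thr, 0)
    ll, hl, lh, hh = haar_1level(img)
    return ihaar_1level(ll, soft(hl), soft(lh), soft(hh))

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8))
rec = denoise(img, 0.0)   # threshold 0: perfect reconstruction
```

Raising the threshold zeroes small detail coefficients, which removes noise at the cost of some fine detail; this is the compression/denoising trade-off the project demonstrates.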
This document summarizes a method for acquiring stereo image pairs with pixel-accurate ground truth correspondence information using structured light. The method involves projecting patterns of structured light onto a scene using one or more light projectors while capturing images using a pair of cameras. By decoding the projected light patterns, each pixel can be uniquely labeled, allowing trivial determination of correspondences between camera views. The structured light patterns help overcome limitations of existing stereo datasets in evaluating stereo matching algorithms.
Survey on Single Image Super Resolution Techniques - IOSR Journals
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image from a set of images acquired from the same scene, denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and thereby facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas that promise performance improvements, discuss future directions for super-resolution algorithms, and finally present the results of available methods.
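Iterative back projection, the dominant algorithm named above, can be sketched with an average-pooling observation model. This is our own toy model; real SR systems use an estimated blur and warp:

```python
import numpy as np

def downsample(x, s=2):
    """Observation model: average-pool by factor s (blur + decimate)."""
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(e, s=2):
    """Back-project the low-resolution error by pixel replication."""
    return np.repeat(np.repeat(e, s, axis=0), s, axis=1)

def ibp(y, s=2, iters=10, step=1.0):
    """Iterative back projection: x <- x + step * BP(y - D(x))."""
    x = upsample(y, s)                     # initial HR estimate
    for _ in range(iters):
        x += step * upsample(y - downsample(x, s), s)
    return x

hr = np.arange(64.0).reshape(8, 8)         # "true" high-resolution image
y = downsample(hr)                         # simulated low-resolution input
x = ibp(y)                                 # HR estimate consistent with y
```

Each iteration simulates the acquisition, compares against the observed low-resolution image, and back-projects the residual, so the estimate is driven toward data consistency.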
An introduction to discrete wavelet transforms - Lily Rose
This document provides an overview of wavelet transforms and their applications. It introduces continuous and discrete wavelet transforms, including multiresolution analysis and the fast wavelet transform. It discusses how wavelet transforms can be used for image compression, edge detection, and digital watermarking due to properties like decomposing images into different frequency subbands. The fast wavelet transform allows efficient computation of wavelet coefficients by exploiting relationships between scales.
Impact of Spatial Correlation towards the Performance of MIMO Downlink Transm... - Rosdiadee Nordin
Rosdiadee Nordin, Mahamod Ismail, "Impact of Spatial Correlation towards the Performance of MIMO Downlink Transmissions", Proceedings of the 18th Asia-Pacific Conference on Communications (APCC 2012), Oct. 2012
This document summarizes a seminar presentation on an image denoising method based on the curvelet transform. The presentation covered:
1) How image noise occurs and traditional denoising methods like linear filters and edge-preserving smoothing.
2) The curvelet transform process including sub-band decomposition, smooth partitioning, renormalization, and ridgelet analysis.
3) An image denoising algorithm that applies wavelet and curvelet transforms, then combines results using quad tree decomposition.
Deep learning for image super resolution - Prudhvi Raj
Using deep convolutional networks, the machine can learn an end-to-end mapping between low- and high-resolution images. Unlike traditional methods, this method jointly optimizes all layers of the network. A lightweight CNN structure is used, which is simple to implement and provides a favorable trade-off compared with existing methods.
Performance Analysis of M-ary Optical CDMA in Presence of Chromatic Dispersion - IDES Editor
The performance of M-ary optical code division multiple access (OCDMA) is analytically investigated in the presence of chromatic dispersion. The study is carried out for single-mode dispersion-shifted and non-dispersion-shifted fibers. Walsh codes are used as user addresses, and a p-i-n photodetector is used for the optoelectronic conversion process. In our proposed model, 16 different symbols are modulated with different intensity levels and detected by the direct-detection technique. The numerical results show that reconstruction of the transmitted symbol depends strongly on the received symbol's magnitude, which is reduced by fiber length and symbol rate. It is found that the proposed OCDMA system performs better when dispersion-shifted fiber is used as the communication medium.
Adaptive Neuro-Fuzzy Inference System based Fractal Image Compression - IDES Editor
This paper presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for fractal image compression. Fractal Image Compression (FIC) is a spatial-domain image compression technique, but its main drawback with a traditional exhaustive search is the computational time consumed by the global search. To improve the computational time and compression ratio, an artificial intelligence technique, ANFIS, has been used. Feature extraction reduces the dimensionality of the problem and enables the ANFIS network to be trained on an image separate from the test image, reducing the computational time; lowering the dimensionality also reduces the computation required during the search. The main advantage of an ANFIS network is that it can adapt itself from the training data and produce a fuzzy inference system, adjusting to the distribution of the feature space observed during training. Computer simulations reveal that the network has been properly trained and that the evolved fuzzy system classifies the domains correctly with minimum deviation, which helps in encoding the image using FIC.
Image compression using Hybrid wavelet Transform and their Performance Compa... - IJMER
Images may be worth a thousand words, but they generally occupy much more space on a hard disk, or bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can of course be used on images, but the result is less than optimal, because images have certain statistical properties that can be exploited by encoders specifically designed for them; also, some of the finer details in an image can be sacrificed to save a little more bandwidth or storage space. Compression is the process of representing information in a compact form, and it is an essential method for creating image files of manageable and transmittable sizes. Data compression schemes can be divided into lossless and lossy compression. In lossless compression, the reconstructed image is exactly the same as the original image; in lossy compression, a high compression ratio is achieved at the cost of some error in the reconstructed image. Lossy compression generally provides much higher compression than lossless compression.
This document summarizes a research paper on fractal image compression using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. It begins with an introduction to fractal image compression and how it represents images using iterated function systems and affine transforms of image blocks. It then discusses wavelet image compression and how the SPIHT algorithm efficiently locates significant wavelet coefficients. The main features of the SPIHT algorithm are outlined, including how it partitions coefficients into lists and generates an embedded, progressive code. Experimental results showing reconstructed images at different bit rates and the SPIHT algorithm's rate-distortion performance are presented. Finally, generalized fractal-wavelet transforms that combine fractal and wavelet techniques are briefly mentioned.
The document summarizes key aspects of artificial neural networks and supervised learning. It discusses how biological neural networks inspired the development of artificial neural networks. The basic neuron model and perceptron are introduced as simple computing elements. Multilayer neural networks are presented as able to learn complex patterns through backpropagation algorithms that reduce errors by adjusting weights between layers.
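The error-driven weight adjustment described here is easiest to see in the original perceptron rule. A minimal sketch (our own example, learning the linearly separable AND function):

```python
import numpy as np

def train_perceptron(X, t, epochs=20, lr=0.1):
    """Rosenblatt perceptron: w <- w + lr * (t - y) * x per sample."""
    w = np.zeros(X.shape[1] + 1)               # weights plus bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input of 1
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            y = 1 if x @ w >= 0 else 0         # step activation
            w += lr * (target - y) * x         # adjust only on error
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])                     # AND truth table
w = train_perceptron(X, t)
preds = [1 if np.append(x, 1) @ w >= 0 else 0 for x in X]
```

Multilayer networks generalize this idea: backpropagation distributes the same error signal across hidden layers so that non-separable patterns (e.g. XOR) can also be learned.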
Images may contain different types of noise. Removing noise from an image is often the first step in image processing, and it remains a challenging problem despite the sophistication of recent research. This presentation describes an efficient image denoising scheme and reconstruction based on the Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IDWT).
1. The document proposes a new image denoising method called NormalShrink that uses wavelet thresholding with an adaptive threshold estimated based on the subband characteristics of the noisy image.
2. Experimental results on test images like Lena, Barbara and Goldhill show that NormalShrink outperforms other methods like SureShrink, BayesShrink and Wiener filtering in terms of PSNR for most noise levels, remaining within 4% of the best possible OracleShrink method.
3. NormalShrink is also computationally more efficient than BayesShrink, removing noise significantly while preserving important image features better than compared methods.
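The adaptive threshold can be computed per sub-band in a few lines. This sketch uses the form commonly cited for NormalShrink (T = beta * sigma_n^2 / sigma_y); treat the exact constants as our assumption, not a transcription of the paper:

```python
import numpy as np

def normalshrink_threshold(subband, hh, levels):
    """NormalShrink-style threshold for one detail sub-band.

    beta    = sqrt(ln(L_k / J)): L_k = sub-band size, J = decomposition levels
    sigma_n = median(|HH|) / 0.6745: robust noise estimate from the HH band
    sigma_y = standard deviation of the sub-band being thresholded
    """
    sigma_n = np.median(np.abs(hh)) / 0.6745
    sigma_y = subband.std()
    beta = np.sqrt(np.log(subband.size / levels))
    return beta * sigma_n ** 2 / max(sigma_y, 1e-12)

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
hh = rng.normal(0, 0.1, (64, 64))     # noise-dominated diagonal band
band = rng.normal(0, 0.3, (64, 64))   # a detail band to be denoised
t = normalshrink_threshold(band, hh, levels=1)
den = soft(band, t)
```

The threshold adapts per sub-band: a noisier image (larger sigma_n) raises it, while a sub-band with strong signal content (larger sigma_y) lowers it, preserving features.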
Satellite Image Resolution Enhancement Technique Using DWT and IWT - Editor IJCATR
Nowadays satellite images are widely used in many applications, such as astronomy, geographical information systems, and geosciences studies. In this paper, we propose a new satellite image resolution enhancement technique that generates a sharper high-resolution image based on the high-frequency sub-bands obtained from the DWT and IWT; the LL sub-band is not used. In this technique, the interpolated DWT and IWT high-frequency sub-band images and the input low-resolution image are combined by applying the inverse DWT (IDWT) to generate the final resolution-enhanced image. The proposed technique has been tested on satellite benchmark images. The quantitative results (peak signal-to-noise ratio and mean square error) and visual results show the superiority of the proposed technique over the conventional method and the standard image enhancement technique WZP.
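The two quantitative metrics used above take only a few lines each; a sketch, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def mse(ref, test):
    """Mean square error between a reference image and a test image."""
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(ref, test)
    return float("inf") if m == 0 else 10 * np.log10(peak ** 2 / m)

ref = np.full((4, 4), 100.0)
noisy = ref + 5.0          # a uniform error of 5 gray levels
```

For a uniform error of 5 gray levels the MSE is 25 and the PSNR is about 34.15 dB, which is the kind of number reported when comparing enhancement techniques.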
A beginner's guide to Style Transfer and recent trends - JaeJun Yoo
Style transfer techniques have evolved from matching Gram matrices to using neural networks. Early methods matched Gram statistics of CNN features to transfer texture styles. Recent work uses adaptive instance normalization and feed-forward networks. WCT2 achieves photorealistic transfer using wavelet transforms that satisfy the perfect reconstruction condition, enabling high-resolution stylization and temporal consistency in videos without post-processing.
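Adaptive instance normalization, mentioned above, simply re-statistics the content features: a minimal sketch (our own shapes and names) transfers the style's per-channel mean and standard deviation onto the content:

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization on (channels, height, width) features:
    normalize the content per channel, then apply the style's statistics."""
    cm = content.mean(axis=(1, 2), keepdims=True)
    cs = content.std(axis=(1, 2), keepdims=True) + eps
    sm = style.mean(axis=(1, 2), keepdims=True)
    ss = style.std(axis=(1, 2), keepdims=True)
    return ss * (content - cm) / cs + sm

rng = np.random.default_rng(0)
content = rng.normal(0.0, 1.0, (3, 8, 8))   # stand-ins for CNN feature maps
style = rng.normal(2.0, 3.0, (3, 8, 8))
out = adain(content, style)
```

In a real pipeline this operates on encoder feature maps, and a trained decoder maps the re-normalized features back to an image.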
The document discusses various frequency domain techniques used in image processing, including the Fourier transform, discrete Fourier transform (DFT), fast Fourier transform (FFT), and discrete cosine transform (DCT). It explains that the Fourier transform decomposes an image into real and imaginary frequency components, and the inverse transform reconstructs the image. The FFT is an efficient algorithm to perform the DFT and is widely used in digital image processing to convert images between the spatial and frequency domains. The DCT also transforms an image into different frequency bands and is useful for image compression applications.
This document summarizes a student project on implementing lossless discrete wavelet transform (DWT) and inverse discrete wavelet transform (IDWT). It provides an overview of the project, which includes introducing DWT, reviewing literature on lifting schemes for faster DWT computation, and simulating a 2D (5,3) DWT. The results show DWT blocks decomposing signals into high and low pass coefficients. Applications mentioned are in medical imaging, signal denoising, data compression and image processing. The conclusion discusses the need for lossless transforms in medical imaging. Future work could extend this to higher level transforms and applications like compression and watermarking.
Satellite image contrast enhancement using discrete wavelet transform - Harishwar Reddy
This document discusses contrast enhancement of satellite images using discrete wavelet transform and singular value decomposition. It provides background on contrast and techniques like histogram equalization. It then describes discrete wavelet transform and singular value decomposition, their applications, advantages, and uses. The document concludes that a new technique was proposed combining DWT and SVD for image equalization, which showed better results than conventional techniques in experiments.
International Journal of Computational Engineering Research (IJCER) is an international online journal in English, published monthly. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Coding and ANN-assisted Pseudo-Noise Sequence Generator for DS/FH Sprea... - IDES Editor
One of the challenging issues in Spread-Spectrum Modulation (SSM) is the design of the Pseudo-Random, or Pseudo-Noise (PN), sequence generator as an alternative to the already available methods. This work concerns the use of an Artificial Neural Network (ANN) to generate the PN sequence during transmission and reception in an SSM-based system. The benefit of the ANN-assisted PN generator is that it simplifies the design of the PN generator while providing high reliability against intentional disruptions and against degradation of signal quality resulting from variations in channel conditions. The performance of the SSM system can be further enhanced by coding; Hamming and cyclic redundancy check (CRC) codes have been applied to the data stream here to explore whether the performance of the SSM system improves further.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
1. The document describes using a deep neural network to detect changes between two SAR images by preclassifying the images, training the neural network on selected samples, and analyzing the results.
2. A similarity matrix and variance matrix are calculated during preclassification to identify and jointly label similar pixels, while different pixels are labeled separately. Good samples are selected to train the neural network.
3. The neural network is tested on images with different types and levels of noise and performs well at change detection, with performance increasing as noise decreases. Future work could focus on accelerating the training process.
The document discusses adaptive channel equalization using neural networks. It provides an overview of neural networks and their application to channel equalization. Specifically, it summarizes various neural network architectures that have been used for equalization, including multilayer perceptrons, functional link artificial neural networks, Chebyshev neural networks, and radial basis function networks. It compares the bit error rate performance of these different neural network equalizers with traditional linear equalizers such as LMS and RLS. Overall, the document finds that neural network equalizers can better handle nonlinear channel distortions compared to linear equalizers and that radial basis function networks provide particularly good performance for channel equalization applications.
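The LMS baseline that the neural equalizers are compared against is itself only a few lines; a sketch with a toy two-tap ISI channel (our own simulation, not the document's experiments):

```python
import numpy as np

def lms_equalize(x, d, taps=5, mu=0.05):
    """LMS adaptive equalizer: w <- w + mu * e * u for each received window."""
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        y[n] = w @ u                      # equalizer output
        e = d[n] - y[n]                   # error against training symbol
        w += mu * e * u                   # stochastic-gradient update
    return w, y

rng = np.random.default_rng(2)
d = rng.choice([-1.0, 1.0], size=2000)    # transmitted BPSK symbols
x = d + 0.5 * np.roll(d, 1)               # simple ISI channel 1 + 0.5 z^-1
w, y = lms_equalize(x, d)
```

Because this channel is linear and minimum-phase, a linear equalizer suffices; the document's point is that once the channel adds nonlinear distortion, FLANN, Chebyshev, and RBF equalizers outperform this LMS/RLS baseline.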
A CNN consists of convolutional layers that extract features, pooling layers that reduce resolution and add robustness, ReLU layers that introduce nonlinearity, and fully connected layers as in regular neural networks. Convolutional layers use small filters that are convolved across input to extract local features at each spatial point. Pooling layers downsample representations to reduce resolution. Fully connected layers at the end integrate all features for classification. CNNs are effective for computer vision tasks due to their ability to learn features directly from data that are robust to distortions and partial occlusions.
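The convolution, ReLU, pooling pipeline described here can be traced end to end on a tiny example (an illustrative sketch; real CNNs use many learned filters and channels):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Slide a filter over the image (valid cross-correlation, stride 1,
    as CNN frameworks implement "convolution")."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: pass positives, zero out negatives."""
    return np.maximum(x, 0)

def max_pool(x, s=2):
    """Downsample by taking the max over s-by-s windows."""
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

# One CNN "layer" on an image containing a vertical edge:
img = np.zeros((8, 8))
img[:, 4:] = 1.0                           # dark left half, bright right half
edge = np.array([[-1.0, 1.0]])             # responds to left-to-right steps
feat = max_pool(relu(conv2d_valid(img, edge)))
```

The filter fires only where the intensity steps up, ReLU keeps that positive response, and pooling shrinks the map while preserving the detection, which is the robustness-to-shift property the summary describes.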
This document summarizes a seminar presentation on an image denoising method based on the curvelet transform. The presentation covered:
1) How image noise occurs and traditional denoising methods like linear filters and edge-preserving smoothing.
2) The curvelet transform process including sub-band decomposition, smooth partitioning, renormalization, and ridgelet analysis.
3) An image denoising algorithm that applies wavelet and curvelet transforms, then combines results using quad tree decomposition.
Deep learning for image super resolutionPrudhvi Raj
Using Deep Convolutional Networks, the machine can learn end-to-end mapping between the low/high-resolution images. Unlike traditional methods, this method jointly optimizes all the layers of the image. A light-weight CNN structure is used, which is simple to implement and provides formidable trade-off from the existential methods.
Performance Analysis of M-ary Optical CDMA in Presence of Chromatic DispersionIDES Editor
The performance of M-ary optical code division
multiple access (OCDMA) is analytically investigated in
presence of chromatic dispersion. The study is carried out for
single mode dispersion shifted and non dispersion shifted
fibers. Walsh code is used as user address. The p-i-n
photodetector is used for optoelectronic conversion process.
In our proposed model 16 different symbols are modulated
with different intensity levels and detected by direct detection
technique. The numerical results show that, the reconstruction
of the transmitted symbol is strongly dependent on the received
symbols magnitude which is reduced by fiber length and
symbol rate. It is found that the proposed OCDMA system
shows better performance when dispersion shifted fiber is
used as a communication medium.
Adaptive Neuro-Fuzzy Inference System based Fractal Image CompressionIDES Editor
This paper presents an Adaptive Neuro-Fuzzy
Inference System (ANFIS) model for fractal image
compression. One of the image compression techniques in
the spatial domain is Fractal Image Compression (FIC)
but the main drawback of FIC using traditional
exhaustive search is that it involves more computational
time due to global search. In order to improve the
computational time and compression ratio, artificial
intelligence technique like ANFIS has been used. Feature
extraction reduces the dimensionality of the problem and
enables the ANFIS network to be trained on an image
separate from the test image thus reducing the
computational time. Lowering the dimensionality of the
problem reduces the computations required during the
search. The main advantage of ANFIS network is that it
can adapt itself from the training data and produce a
fuzzy inference system. The network adapts itself
according to the distribution of feature space observed
during training. Computer simulations reveal that the
network has been properly trained and the fuzzy system
thus evolved, classifies the domains correctly with
minimum deviation which helps in encoding the image
using FIC.
Image compression using Hybrid wavelet Transform and their Performance Compa...IJMER
Images may be worth a thousand words, but they generally occupy much more space in hard disk, or
bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly
different than compressing raw binary data. Of course, general purpose compression programs can be used to
compress images, but the result is less than optimal. This is because images have certain statistical properties
which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image
can be sacrificed for the sake of saving a little more bandwidth or storage space. Compression is the process of
representing information in a compact form. Compression is a necessary and essential method for creating
image files with manageable and transmittable sizes. The data compression schemes can be divided into
lossless and lossy compression. In lossless compression, reconstructed image is exactly same as compressed
image. In lossy image compression, high compression ratio is achieved at the cost of some error in reconstructed
image. Lossy compression generally provides much higher compression than lossless compression.
This document summarizes a research paper on fractal image compression using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. It begins with an introduction to fractal image compression and how it represents images using iterated function systems and affine transforms of image blocks. It then discusses wavelet image compression and how the SPIHT algorithm efficiently locates significant wavelet coefficients. The main features of the SPIHT algorithm are outlined, including how it partitions coefficients into lists and generates an embedded, progressive code. Experimental results showing reconstructed images at different bit rates and the SPIHT algorithm's rate-distortion performance are presented. Finally, generalized fractal-wavelet transforms that combine fractal and wavelet techniques are briefly mentioned.
The document summarizes key aspects of artificial neural networks and supervised learning. It discusses how biological neural networks inspired the development of artificial neural networks. The basic neuron model and perceptron are introduced as simple computing elements. Multilayer neural networks are presented as able to learn complex patterns through backpropagation algorithms that reduce errors by adjusting weights between layers.
Images may contain different types of noises. Removing noise from image is often the first step in image processing, and remains a challenging problem in spite of sophistication of recent research. This ppt presents an efficient image denoising scheme and their reconstruction based on Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IDWT).
1. The document proposes a new image denoising method called NormalShrink that uses wavelet thresholding with an adaptive threshold estimated based on the subband characteristics of the noisy image.
2. Experimental results on test images like Lena, Barbara and Goldhill show that NormalShrink outperforms other methods like SureShrink, BayesShrink and Wiener filtering in terms of PSNR for most noise levels, remaining within 4% of the best possible OracleShrink method.
3. NormalShrink is also computationally more efficient than BayesShrink, removing noise significantly while preserving important image features better than compared methods.
Satellite Image Resolution Enhancement Technique Using DWT and IWTEditor IJCATR
Now a days satellite images are widely used In many applications such as astronomy and
geographical information systems and geosciences studies .In this paper, We propose a new satellite image
resolution enhancement technique which generates sharper high resolution image .Based on the high
frequency sub-bands obtained from the dwt and iwt. We are not considering the LL sub-band here. In this
resolution-enhancement technique using interpolated DWT and IWT high-frequency sub band images and the
input low-resolution image. Inverse DWT (IDWT) has been applied to combine all these images to generate
the final resolution-enhanced image. The proposed technique has been tested on satellite bench mark images.
The quantitative (peak signal to noise ratio and mean square error) and visual results show the superiority of
the proposed technique over the conventional method and standard image enhancement technique WZP.
A beginner's guide to Style Transfer and recent trendsJaeJun Yoo
Style transfer techniques have evolved from matching gram matrices to using neural networks. Early methods matched gram statistics of CNN features to transfer texture styles. Recent work uses adaptive instance normalization and feed-forward networks. WCT2 achieves photorealistic transfer using wavelet transforms that satisfy the perfect reconstruction condition, enabling high resolution stylization and temporal consistency in videos without post-processing.
The document discusses various frequency domain techniques used in image processing, including the Fourier transform, discrete Fourier transform (DFT), fast Fourier transform (FFT), and discrete cosine transform (DCT). It explains that the Fourier transform decomposes an image into real and imaginary frequency components, and the inverse transform reconstructs the image. The FFT is an efficient algorithm to perform the DFT and is widely used in digital image processing to convert images between the spatial and frequency domains. The DCT also transforms an image into different frequency bands and is useful for image compression applications.
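The spatial-to-frequency round trip described above takes only a few lines with NumPy's FFT routines; the low-pass mask size below is an arbitrary illustrative choice.

```python
import numpy as np

def to_frequency(img):
    """Forward 2D DFT via the FFT; fftshift moves DC to the centre."""
    return np.fft.fftshift(np.fft.fft2(img))

def to_spatial(spec):
    """Inverse transform back to the spatial domain."""
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def lowpass(spec, keep):
    """Zero every frequency outside a centred keep-by-keep square."""
    out = np.zeros_like(spec)
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    k = keep // 2
    out[cy - k:cy + k, cx - k:cx + k] = spec[cy - k:cy + k, cx - k:cx + k]
    return out
```

Keeping only low frequencies blurs the image; keeping everything reconstructs it exactly, which is the perfect-invertibility property the document relies on.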
This document summarizes a student project on implementing lossless discrete wavelet transform (DWT) and inverse discrete wavelet transform (IDWT). It provides an overview of the project, which includes introducing DWT, reviewing literature on lifting schemes for faster DWT computation, and simulating a 2D (5,3) DWT. The results show DWT blocks decomposing signals into high and low pass coefficients. Applications mentioned are in medical imaging, signal denoising, data compression and image processing. The conclusion discusses the need for lossless transforms in medical imaging. Future work could extend this to higher level transforms and applications like compression and watermarking.
Satellite image contrast enhancement using discrete wavelet transform (Harishwar Reddy)
This document discusses contrast enhancement of satellite images using discrete wavelet transform and singular value decomposition. It provides background on contrast and techniques like histogram equalization. It then describes discrete wavelet transform and singular value decomposition, their applications, advantages, and uses. The document concludes that a new technique was proposed combining DWT and SVD for image equalization, which showed better results than conventional techniques in experiments.
International Journal of Computational Engineering Research (IJCER) is an international online monthly journal published in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
Coding and ANN - assisted Pseudo - Noise Sequence Generator for DS / FH Sprea... (IDES Editor)
One of the challenging issues in Spread-Spectrum Modulation (SSM) is the design of the Pseudo-Random or Pseudo-Noise (PN) sequence generator as an alternative to the already available methods. This work concerns the use of an Artificial Neural Network (ANN) to generate the PN sequence during transmission and reception in an SSM-based system. The benefit of the ANN-assisted PN generator is that it simplifies the design of the PN generator while providing high reliability against intentional disruptions and against degradation of signal quality caused by variations in channel condition. The performance of the SSM system can be further enhanced by the use of coding. Hamming and cyclic redundancy check (CRC) codes have been applied to the data stream to explore whether they improve the performance of the SSM system further.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
1. The document describes using a deep neural network to detect changes between two SAR images by preclassifying the images, training the neural network on selected samples, and analyzing the results.
2. A similarity matrix and variance matrix are calculated during preclassification to identify and jointly label similar pixels, while different pixels are labeled separately. Good samples are selected to train the neural network.
3. The neural network is tested on images with different types and levels of noise and performs well at change detection, with performance increasing as noise decreases. Future work could focus on accelerating the training process.
The document discusses adaptive channel equalization using neural networks. It provides an overview of neural networks and their application to channel equalization. Specifically, it summarizes various neural network architectures that have been used for equalization, including multilayer perceptrons, functional link artificial neural networks, Chebyshev neural networks, and radial basis function networks. It compares the bit error rate performance of these different neural network equalizers with traditional linear equalizers such as LMS and RLS. Overall, the document finds that neural network equalizers can better handle nonlinear channel distortions compared to linear equalizers and that radial basis function networks provide particularly good performance for channel equalization applications.
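For contrast with the neural equalizers discussed above, a minimal linear LMS equalizer (the baseline the document compares against) can be sketched as follows; the channel taps, noise level, and step size are illustrative choices, not values from the document.

```python
import numpy as np

def lms_equalize(rx, desired, taps=7, mu=0.01):
    """Adapt a linear transversal equalizer with the LMS rule."""
    w = np.zeros(taps)
    errs = []
    for n in range(taps - 1, len(rx)):
        x = rx[n - taps + 1:n + 1][::-1]   # newest received sample first
        e = desired[n] - w @ x             # error against the training symbol
        w += mu * e * x                    # LMS weight update
        errs.append(e * e)
    return w, np.array(errs)

# BPSK training sequence through a mildly dispersive channel
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=2000)
rx = np.convolve(sym, [1.0, 0.4, 0.2])[:len(sym)]   # channel ISI
rx += 0.05 * rng.normal(size=len(sym))              # additive noise
w, errs = lms_equalize(rx, sym)
```

The squared error falls as the taps converge toward an inverse of the channel; nonlinear channel distortions are exactly where such a linear filter stalls and the neural equalizers gain their advantage.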
A CNN consists of convolutional layers that extract features, pooling layers that reduce resolution and add robustness, ReLU layers that introduce nonlinearity, and fully connected layers as in regular neural networks. Convolutional layers use small filters that are convolved across input to extract local features at each spatial point. Pooling layers downsample representations to reduce resolution. Fully connected layers at the end integrate all features for classification. CNNs are effective for computer vision tasks due to their ability to learn features directly from data that are robust to distortions and partial occlusions.
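A toy NumPy version of the three core layer types (single channel, no learning loop) makes the data flow above concrete; real CNN libraries vectorize and batch all of this.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise nonlinearity: negative responses are clipped to zero."""
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (odd edges truncated)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

A fully connected layer would then flatten the pooled feature maps and apply an ordinary weight matrix, as the paragraph above describes.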
This document summarizes a research paper on using wavelet neural networks (WNNs) for adaptive equalization in digital communication systems. The paper proposes using WNNs structured with wavelet basis functions as the activation functions. The orthogonal least squares (OLS) algorithm is then used to update the weighting matrix and select the most important wavelet basis units, reducing redundancy. The experimental results showed that a WNN equalizer using OLS outperformed conventional neural network equalizers in terms of signal-to-noise ratio and ability to handle non-linear channels.
Deep learning (also known as deep structured learning or hierarchical learning) is the application of artificial neural networks (ANNs) to learning tasks that contain more than one hidden layer. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, partially supervised or unsupervised.
This document provides an introduction to speech recognition with deep learning. It discusses how speech recognition works, the development of the field from early methods like HMMs to modern deep learning approaches using neural networks. It defines deep learning and explains why it is called "deep" learning. It also outlines common deep learning architectures for speech recognition, including CNN-RNN models and sequence-to-sequence models. Finally, it describes the layers of a CNN like convolutional, pooling, ReLU and fully-connected layers.
Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are two common types of deep neural networks. RNNs include feedback connections so they can learn from sequence data like text, while CNNs are useful for visual data due to their translation invariance from pooling and convolutional layers. The document provides examples of applying RNNs and CNNs to tasks like sentiment analysis, image classification, and machine translation. It also discusses common CNN architecture components like convolutional layers, activation functions like ReLU, pooling layers, and fully connected layers.
This presentation is Part 2 of my September Lisp NYC presentation on Reinforcement Learning and Artificial Neural Nets. We will continue from where we left off by covering Convolutional Neural Nets (CNN) and Recurrent Neural Nets (RNN) in depth.
Time permitting I also plan on having a few slides on each of the following topics:
1. Generative Adversarial Networks (GANs)
2. Differentiable Neural Computers (DNCs)
3. Deep Reinforcement Learning (DRL)
Some code examples will be provided in Clojure.
After a very brief recap of Part 1 (ANN & RL), we will jump right into CNN and their appropriateness for image recognition. We will start by covering the convolution operator. We will then explain feature maps and pooling operations and then explain the LeNet 5 architecture. The MNIST data will be used to illustrate a fully functioning CNN.
Next we cover Recurrent Neural Nets in depth and describe how they have been used in Natural Language Processing. We will explain why gated networks and LSTM are used in practice.
Please note that some exposure or familiarity with Gradient Descent and Backpropagation will be assumed. These are covered in the first part of the talk for which both video and slides are available online.
A lot of material will be drawn from the new Deep Learning book by Goodfellow & Bengio as well as Michael Nielsen's online book on Neural Networks and Deep Learning, as well as several other online resources.
Bio
Pierre de Lacaze has over 20 years industry experience with AI and Lisp based technologies. He holds a Bachelor of Science in Applied Mathematics and a Master’s Degree in Computer Science.
https://www.linkedin.com/in/pierre-de-lacaze-b11026b/
Here, we have implemented a CNN in an FPGA by incorporating a novel convolution technique that combines pipelining with parallelism, optimizing between the two.
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE... (cscpconf)
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data. It is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. The approach is semi-supervised: labels are known for some training samples and unknown for the rest. The method starts with classification using the ETL algorithm, during which various classes are formed that separate the samples into two groups. Each class is then treated as a region, and the average of each region is computed separately; these averages serve as region centres for clustering with the FCM algorithm. Once clustering and labelling of the semi-supervised data are complete, all samples are classified by DIBNNFC. The method proposed here is exhaustively tested on different benchmark datasets, and it is found that as the training parameters increase, both the number of hidden neurons and the training time decrease. Results are reported on a real character recognition dataset and compared with an existing semi-supervised classifier; the proposed approach, learned semi-supervised, leads to higher classification accuracy.
Convolutional neural networks (CNNs) are a type of neural network used for processing grid-like data such as images. CNNs have an input layer, multiple hidden layers, and an output layer. The hidden layers typically include convolutional layers that extract features, pooling layers that reduce dimensionality, and fully connected layers similar to regular neural networks. CNNs are commonly used for computer vision tasks like image classification and object detection due to their ability to learn spatial hierarchies of features in the data. They have applications in areas like facial recognition, document analysis, and climate modeling.
This document describes analyzing images of English alphabets using a neural network with backpropagation. It preprocesses 26 alphabet images into 10x10 bipolar arrays as input data. A multilayer feedforward neural network with backpropagation training is used. The network has 100 input nodes, a hidden layer with variable nodes, and 26 output nodes. Different network structures are tested by varying the hidden nodes. The network is trained on the alphabet images and tested to evaluate performance for character recognition. Results are analyzed and conclusions are drawn based on the size and accuracy of the network structures.
This document presents a new image denoising technique using pixel-component-analysis. It begins by discussing existing denoising methods like spatial filtering, transform domain filtering using wavelets, and non-local mean approaches. It then proposes a two-stage denoising method using principal component analysis (PCA) on local pixel coherence (LPC) vectors. In the first stage, PCA is applied to transform and filter LPC vectors. In the second stage, denoising is repeated on the output of stage one to further reduce noise. Experimental results on test images show PSNR and SSIM improvements between the single-stage and two-stage approaches, demonstrating the effectiveness of the proposed two-stage LPC-PCA denoising method.
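The core PCA step in patch-based denoisers of this kind (a generic sketch, not the paper's exact LPC-PCA variant) reduces to projecting flattened patch vectors onto their top principal components, discarding the low-variance directions where noise dominates:

```python
import numpy as np

def pca_denoise(patches, k):
    """Project flattened patch vectors onto their top-k principal components."""
    mean = patches.mean(axis=0)
    x = patches - mean
    cov = x.T @ x / len(x)                  # sample covariance of the patches
    _, vecs = np.linalg.eigh(cov)           # eigenvectors, ascending eigenvalue
    basis = vecs[:, -k:]                    # keep the top-k directions
    return x @ basis @ basis.T + mean       # reconstruct in the reduced basis
```

Running the same projection again on the output is the essence of the two-stage idea: the first pass lowers the noise floor, so the second pass estimates the principal components more reliably.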
Neural networks and deep learning are machine learning techniques inspired by the human brain. Neural networks consist of interconnected nodes that process input data and pass signals to other nodes. The main types discussed are artificial neural networks (ANNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). ANNs can learn nonlinear relationships between inputs and outputs. CNNs are effective for image processing by learning relevant spatial features. RNNs capture sequential dependencies in data like text. Deep learning uses neural networks with many layers to learn complex patterns in large datasets.
Image classification is perhaps the most important part of digital image analysis. In this paper, we compare the most widely used models: the CNN (Convolutional Neural Network) and the MLP (Multilayer Perceptron). We aim to show how both models differ and how each approaches the final goal of image classification. Souvik Banerjee | Dr. A Rengarajan "Hand-Written Digit Classification" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd42444.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/42444/handwritten-digit-classification/souvik-banerjee
Deep learning for image super resolution (Prudhvi Raj)
Using deep convolutional networks, the machine can learn an end-to-end mapping between low- and high-resolution images. Unlike traditional methods, this method jointly optimizes all the layers. A lightweight CNN structure is used, which is simple to implement and provides a favorable trade-off against existing methods.
Similar to Chebyshev Functional Link Artificial Neural Networks for Denoising of Image Corrupted by Salt and Pepper Noise (20)
Power System State Estimation - A Review (IDES Editor)
This document provides a review of power system state estimation techniques. It discusses both static and dynamic state estimation algorithms. For static state estimation, it covers weighted least squares, decoupled, and robust estimation methods. Weighted least squares is commonly used but can have numerical instability issues. Decoupled state estimation approximates the gain matrix for faster computation. Robust estimation uses M-estimators and other techniques to handle outliers and bad data. Dynamic state estimation applies Kalman filtering, leapfrog algorithms, and other methods to continuously monitor system states over time.
Artificial Intelligence Technique based Reactive Power Planning Incorporating... (IDES Editor)
This document summarizes a research paper that proposes using artificial intelligence techniques and FACTS controllers for reactive power planning in real-time power transmission systems. The paper formulates the reactive power planning problem and incorporates flexible AC transmission system (FACTS) devices like static VAR compensators (SVC), thyristor controlled series capacitors (TCSC), and unified power flow controllers (UPFC). Evolutionary algorithms like evolutionary programming (EP) and differential evolution (DE) are applied to find the optimal locations and settings of the FACTS controllers to minimize losses and costs. Simulation results on IEEE 30-bus and 72-bus Indian test systems show that UPFC performs best in reducing losses compared to SVC and TCSC.
Design and Performance Analysis of Genetic based PID-PSS with SVC in a Multi-... (IDES Editor)
Damping of power system oscillations with the help
of proposed optimal Proportional Integral Derivative Power
System Stabilizer (PID-PSS) and Static Var Compensator
(SVC)-based controllers are thoroughly investigated in this
paper. This study presents robust tuning of PID-PSS and
SVC-based controllers using Genetic Algorithms (GA) in
multi machine power systems by considering detailed model
of the generators (model 1.1). The effectiveness of FACTS-based
controllers in general and SVC-based controller in
particular depends upon their proper location. Modal
controllability and observability are used to locate SVC–based
controller. The performance of the proposed controllers is
compared with conventional lead-lag power system stabilizer
(CPSS) and demonstrated on 10 machines, 39 bus New England
test system. Simulation studies show that the proposed genetic
based PID-PSS with SVC based controller provides better
performance.
Optimal Placement of DG for Loss Reduction and Voltage Sag Mitigation in Radi... (IDES Editor)
This paper addresses the need to operate the power
system economically and with optimum voltage levels, which has
further led to an increase in interest in Distributed
Generation. In order to reduce the power losses and to improve
the voltage in the distribution system, distributed generators
(DGs) are connected to load bus. To reduce the total power
losses in the system, the most important process is to identify
the proper location for fixing and sizing of DGs. It presents a
new methodology using a new population based meta heuristic
approach namely Artificial Bee Colony algorithm(ABC) for
the placement of Distributed Generators(DG) in the radial
distribution systems to reduce the real power losses and to
improve the voltage profile, voltage sag mitigation. The power
loss reduction is important factor for utility companies because
it is directly proportional to the company benefits in a
competitive electricity market, while reaching the better power
quality standards is too important as it has vital effect on
customer orientation. In this paper an ABC algorithm is
developed to gain these goals all together. In order to evaluate
sag mitigation capability of the proposed algorithm, voltage
in voltage sensitive buses is investigated. An existing 20KV
network has been chosen as test network and results are
compared with the proposed method in the radial distribution
system.
Line Losses in the 14-Bus Power System Network using UPFC (IDES Editor)
Controlling power flow in modern power systems
can be made more flexible by the use of recent developments
in power electronic and computing control technology. The
Unified Power Flow Controller (UPFC) is a Flexible AC
transmission system (FACTS) device that can control all the
three system variables namely line reactance, magnitude and
phase angle difference of voltage across the line. The UPFC
provides a promising means to control power flow in modern
power systems. Essentially the performance depends on proper
control setting achievable through a power flow analysis
program. This paper presents a reliable method to meet the
requirements by developing a Newton-Raphson based load
flow calculation through which control settings of UPFC can
be determined for the pre-specified power flow between the
lines. The proposed method keeps Newton-Raphson Load Flow
(NRLF) algorithm intact and needs (little modification in the
Jacobian matrix). A MATLAB program has been developed to
calculate the control settings of UPFC and the power flow
between the lines after the load flow is converged. Case studies
have been performed on IEEE 5-bus system and 14-bus system
to show that the proposed method is effective. These studies
indicate that the method maintains the basic NRLF properties
such as fast computational speed, high degree of accuracy and
good convergence rate.
Study of Structural Behaviour of Gravity Dam with Various Features of Gallery... (IDES Editor)
The size and shape of an opening in a dam causes
stress concentration, and also causes stress variation in the
rest of the dam cross section. The gravity method of analysis
does not consider the size of opening and the elastic property
of dam material. Thus the objective of this study comprises
the Finite Element Method which considers the size of
opening, elastic property of material, and stress distribution
because of geometric discontinuity in cross section of dam.
Stress concentration inside the dam increases with the opening
in dam which results in the failure of dam. Hence it is
necessary to analyse large openings inside the dam. By making
the percentage area of opening constant and varying size and
shape of opening the analysis is carried out. For this purpose
a section of Koyna Dam is considered. Dam is defined as a
plane strain element in FEM, based on geometry and loading
condition. Thus this available information specified our path
of approach to carry out 2D plane strain analysis. The results
obtained are then compared mutually to get most efficient
way of providing large opening in the gravity dam.
Assessing Uncertainty of Pushover Analysis to Geometric Modeling (IDES Editor)
Pushover Analysis, a popular tool for seismic
performance evaluation of existing and new structures, is a
nonlinear static procedure wherein monotonically increasing
loads are applied to the structure till it is unable
to resist further load. During the analysis, whatever the
strength of concrete and steel is adopted for analysis of
structure may not be the same when the real structure is
constructed and the pushover analysis results are very sensitive
to material model adopted, geometric model adopted, location
of plastic hinges and in general to procedure followed by the
analyzer. In this paper attempt has been made to assess
uncertainty in pushover analysis results by considering user
defined hinges and frame modeled as bare frame and frame
with slab modeled as rigid diaphragm and results compared
with experimental observations. Uncertain parameters
considered includes the strength of concrete, strength of steel
and cover to the reinforcement which are randomly generated
and incorporated into the analysis. The results are then
compared with experimental observations.
Secure Multi-Party Negotiation: An Analysis for Electronic Payments in Mobile... (IDES Editor)
This document summarizes and analyzes secure multi-party negotiation protocols for electronic payments in mobile computing. It presents a framework for secure multi-party decision protocols using lightweight implementations. The main focus is on synchronizing security features to avoid agreement manipulation and reduce user traffic. The paper describes negotiation between an auctioneer and bidders, showing multiparty security is better than existing systems. It analyzes the performance of encryption algorithms like ECC, XTR, and RSA for use in the multiparty negotiation protocols.
Selfish Node Isolation & Incentivation using Progressive Thresholds (IDES Editor)
The problems associated with selfish nodes in
MANET are addressed by a collaborative watchdog approach
which reduces the detection time for selfish nodes, thereby
improving the performance and accuracy of watchdogs [1]. In
the related works they make use of credit based systems, reputation
based mechanisms, pathrater and watchdog mechanism
to detect such selfish nodes. In this paper we follow an approach
of collaborative watchdog which reduces the detection
time for selfish nodes and also involves the removal of such
selfish nodes based on some progressively assessed thresholds.
The threshold gives the nodes a chance to stop misbehaving
before it is permanently deleted from the network.
The node passes through several isolation processes before it
is permanently removed. Another version of AODV protocol
is used here which allows the simulation of selfish nodes in
NS2 by adding or modifying log files in the protocol.
Various OSI Layer Attacks and Countermeasure to Enhance the Performance of WS... (IDES Editor)
Wireless sensor networks are networks having a non-wired
infrastructure and dynamic topology. In the OSI model each
layer is prone to various attacks, which halts the performance
of a network. In this paper several attacks on four layers of
the OSI model are discussed and a security mechanism is described
to prevent an attack in the network layer, i.e. the wormhole attack. In
a wormhole attack, two or more malicious nodes make a covert
channel which attracts the traffic towards itself by depicting a
low latency link and then start dropping and replaying packets
in the multi-path route. This paper proposes promiscuous mode
method to detect and isolate the malicious node during
wormhole attack by using Ad-hoc on demand distance vector
routing protocol (AODV) with omnidirectional antenna. The
methodology implemented notifies that the nodes which are
not participating in multi-path routing generates an alarm
message during delay and then detects and isolate the
malicious node from network. We also notice that not only
the same kind of attacks but also the same kind of
countermeasures can appear in multiple layer. For example,
misbehavior detection techniques can be applied to almost all
the layers we discussed.
Responsive Parameter based an AntiWorm Approach to Prevent Wormhole Attack in... (IDES Editor)
The recent advancements in the wireless technology
and their wide-spread deployment have made remarkable
enhancements in efficiency in the corporate and industrial
and Military sectors The increasing popularity and usage of
wireless technology is creating a need for more secure wireless
Ad hoc networks. This paper researches and develops
a new protocol that prevents wormhole attacks on an ad hoc
network. A few existing protocols detect wormhole attacks but
they require highly specialized equipment not found on most
wireless devices. This paper aims to develop a defense against
wormhole attacks, an Anti-worm protocol based on
responsive parameters, that does not require a significant
amount of specialized equipment, tight clock synchronization,
or GPS dependencies.
Cloud Security and Data Integrity with Client Accountability Framework (IDES Editor)
This document summarizes a proposed cloud security and data integrity framework that provides client accountability. The framework aims to address issues like lack of user control over cloud data, need for data transparency and tracking, and ensuring data integrity. It proposes using JAR (Java Archive) files for data sharing due to benefits like portability. The framework incorporates client-side verification using MD5 hashing, digital signature-based authentication of JAR files, and use of HMAC to ensure data integrity. It also uses password-based encryption of log files to keep them tamper-proof. The framework is intended to provide both accountability and security for data sharing in cloud environments.
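The HMAC-based integrity check mentioned above can be shown with Python's standard library; the key handling and log content here are simplified illustrations, not the framework's actual scheme.

```python
import hmac
import hashlib

def sign_log(key: bytes, data: bytes) -> str:
    """Attach an HMAC-SHA256 tag so any tampering with the log is detectable."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_log(key: bytes, data: bytes, tag: str) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign_log(key, data), tag)
```

Unlike a bare MD5 hash, the HMAC tag cannot be recomputed by an attacker who alters the log, because it depends on the secret key.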
Genetic Algorithm based Layered Detection and Defense of HTTP Botnet (IDES Editor)
A System state in HTTP botnet uses HTTP protocol
for the creation of chain of Botnets thereby compromising
other systems. By using HTTP protocol and port number 80,
attacks can not only be hidden but also pass through the
firewall without being detected. The DPR based detection
leads to better analysis of botnet attacks [3]. However, it
provides only probabilistic detection of the attacker and also
time consuming and error prone. This paper proposes a Genetic
algorithm based layered approach for detecting as well as
preventing botnet attacks. The paper reviews p2p firewall
implementation which forms the basis of filtering.
Performance evaluation is done based on precision, F-value
and probability. Layered approach reduces the computation
and overall time requirement [7]. Genetic algorithm promises
a low false positive rate.
Enhancing Data Storage Security in Cloud Computing Through Steganography (IDES Editor)
This document summarizes a research paper that proposes a method for enhancing data security in cloud computing through steganography. The method hides user data in digital images stored on cloud servers. When data needs to be accessed, it is extracted from the images. The document outlines the cloud architecture and security issues addressed. It then describes the proposed system architecture, security model, and data storage and retrieval process. Data is partitioned and hidden in multiple images to improve security. The goal is to prevent unauthorized access to user data stored on cloud servers.
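A minimal least-significant-bit (LSB) embedding, the simplest form of the image steganography described above, can be sketched as follows; the paper's partitioning of data across multiple images is omitted, and pixels are modelled as a flat list of 8-bit values.

```python
def embed(pixels, message):
    """Hide message bytes in the least-significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the message")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit        # overwrite only the lowest bit
    return out

def extract(pixels, n_bytes):
    """Read the hidden bytes back out of the pixel LSBs."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bits[b * 8 + i] << i for i in range(8))
                 for b in range(n_bytes))
```

Changing only the lowest bit alters each pixel value by at most one, which is why the hidden data is visually imperceptible.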
The main tasks of a Wireless Sensor Network
(WSN) are data collection from its nodes and communication
of this data to the base station (BS). The protocols used for
communication among the WSN nodes and between the WSN
and the BS, must consider the resource constraints of nodes,
battery energy, computational capabilities and memory. The
WSN applications involve unattended operation of the network
over an extended period of time. In order to extend the lifetime
of a WSN, efficient routing protocols need to be adopted. The
proposed low power routing protocol based on tree-based
network structure reliably forwards the measured data towards
the BS using TDMA. An energy consumption analysis of the
WSN making use of this protocol is also carried out. It is
found that the network is energy efficient with an average
duty cycle of 0.7% for the WSN nodes. The OMNeT++
simulation platform along with the MiXiM framework is used.
Permutation of Pixels within the Shares of Visual Cryptography using KBRP for... (IDES Editor)
Authentication for internet-based co-banking services should
not be exposed to high risk. Passwords are highly vulnerable
to virus attacks because strong security methods are rarely
embedded in them. To make passwords more secure, people are
generally compelled to select jumbled character-based
passwords, which are not only hard to remember but equally
prone to compromise. The use of multiple distributed shares
has been studied as a solution to the authentication problem,
with algorithms based on pixel thresholding in image
processing and visual cryptography, where a subset of the
shares is used to recover the original image for
authentication via a correlation function [1][2]. The main
disadvantage of this approach is the plain storage of the
shares; moreover, one of the shares is supplied to the
customer, which opens the possibility of misuse by a third
party. This paper proposes a technique that scrambles the
pixels within the shares by key based random permutation
(KBRP) before authentication is attempted. The total number
of shares to be created depends on the multiplicity of
ownership of the account. This method reduces the customers'
uncertainty regarding the security, storage and retrieval of
the shares they hold.
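The scramble/unscramble idea can be sketched as a key-driven, invertible pixel permutation. Note that a key-seeded Fisher-Yates shuffle stands in for the actual KBRP permutation derivation here, which differs in detail; this only shows how the same key scrambles and restores a share:

```python
import random

def scramble(share: list, key: str) -> list:
    """Permute share pixels with a key-derived permutation.
    (Seeded shuffle as an illustrative stand-in for KBRP.)"""
    rng = random.Random(key)             # deterministic for a given key
    perm = list(range(len(share)))
    rng.shuffle(perm)
    return [share[p] for p in perm]

def unscramble(scrambled: list, key: str) -> list:
    """Invert the permutation by regenerating it from the same key."""
    rng = random.Random(key)
    perm = list(range(len(scrambled)))
    rng.shuffle(perm)
    out = [0] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out

share = [10, 20, 30, 40, 50, 60]         # toy pixel values of one share
assert unscramble(scramble(share, "account-key"), "account-key") == share
```

Because the permutation is derived from the key rather than stored, a share at rest on the server (or in a customer's hands) is useless without that key.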
This paper presents a trifocal Rotman lens design
approach. The effects of focal ratio and element spacing on
the performance of the Rotman lens are described. A three-beam
prototype feeding a 4-element antenna array working in L-band
has been simulated using the RLD v1.7 software. Simulation
results show a return loss of -12.4 dB at 1.8 GHz. The
variation of the beam-to-array-port phase error with focal
ratio and element spacing has also been investigated.
Band Clustering for the Lossless Compression of AVIRIS Hyperspectral Images
Hyperspectral images can be efficiently compressed
through a linear predictive model, as for example the one
used in the SLSQ algorithm. In this paper we exploit this
predictive model on the AVIRIS images by identifying, through
an off-line approach, a common subset of bands that are not
spectrally correlated with any other band. These bands are
not useful as prediction references for the SLSQ 3-D
predictive model, so they must be encoded via other
prediction strategies that consider only spatial correlation.
We obtained this subset by clustering the AVIRIS bands via
the clustering-by-compression approach. The main result of
this paper is the list of bands, for the AVIRIS images, that
are not correlated with the others. The clustering trees
obtained for AVIRIS, and the relationships among bands that
they depict, are also an interesting starting point for
future research.
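Clustering by compression rests on the normalized compression distance (NCD): two byte strings are close when compressing their concatenation costs little more than compressing the larger one alone. A minimal sketch, using zlib as the compressor (the abstract does not state which compressor the authors use, and the "bands" below are toy byte strings):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small when x and y share structure."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy stand-ins for spectral bands: two related periodic strings
# and one with a different, unrelated structure.
band_a = bytes(i % 7 for i in range(4000))
band_b = bytes((i + 3) % 7 for i in range(4000))     # shifted copy of band_a
band_c = bytes((i * 131) % 256 for i in range(4000)) # unrelated pattern

assert ncd(band_a, band_b) < ncd(band_a, band_c)
```

Computing NCD between every pair of bands yields a distance matrix from which the clustering trees mentioned above can be built; bands that end up far from every cluster are the ones with no usable spectral reference.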
Microelectronic Circuit Analogous to Hydrogen Bonding Network in Active Site ...
A microelectronic circuit of block-elements
functionally analogous to two hydrogen bonding networks is
investigated. The hydrogen bonding networks are extracted
from the β-lactamase protein and are formed in its active
site. Each hydrogen bond of the network is described in the
equivalent electrical circuit by a three- or four-terminal
block-element. Each block-element is coded in Matlab. Static
and dynamic analyses are performed. The resultant
microelectronic circuit analogous to the hydrogen bonding
network operates as a current mirror, a sine pulse source, a
triangular pulse source, and a signal modulator.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...
In this paper a method is proposed to discriminate
real-world scenes into natural and man-made scenes of similar
depth. The global roughness of a scene image varies as a
function of image depth. An increase in image depth leads to
an increase in roughness in man-made scenes; on the contrary,
natural scenes exhibit smooth behavior at higher image depth.
This particular arrangement of pixels in the scene structure
can be well explained by the local texture information at a
pixel and in its neighborhood. Our proposed method analyses
the local texture information of a scene image using a
texture unit matrix. For the final classification we have
used both supervised and unsupervised learning, with the
K-Nearest Neighbor (KNN) classifier and the Self-Organizing
Map (SOM) respectively. This technique is suitable for online
classification due to its very low computational complexity.
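The texture unit matrix builds on the standard texture unit number of He and Wang's texture spectrum approach: each of a pixel's 8 neighbours is coded 0, 1 or 2 according to whether it is smaller than, equal to, or larger than the centre, and the codes are read as a base-3 number. A minimal sketch (the neighbour ordering is an assumption; the paper may use a different one):

```python
def texture_unit(img, r, c):
    """Texture unit number of pixel (r, c): neighbour codes in {0,1,2}
    read as a base-3 number, giving a value in 0..6560."""
    center = img[r][c]
    # 8-neighbourhood, clockwise from top-left (assumed ordering)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    n = 0
    for i, (dr, dc) in enumerate(offsets):
        v = img[r + dr][c + dc]
        e = 0 if v < center else (1 if v == center else 2)
        n += e * 3 ** i
    return n

img = [[5, 5, 5],
       [5, 5, 5],
       [5, 5, 5]]
# All neighbours equal the centre, so every code is 1.
assert texture_unit(img, 1, 1) == sum(3 ** i for i in range(8))  # 3280
```

A histogram of these values over all interior pixels gives the texture spectrum; rougher man-made structure spreads mass across many units, while smooth natural regions concentrate it, which is the cue the KNN and SOM classifiers exploit.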