IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W... (ijma)
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of the compression scheme. In this paper we extend commonly used algorithms to image compression and compare their performance. For the compression step, we couple wavelet techniques based on traditional mother wavelets, and on the lifting-based Cohen-Daubechies-Feauveau wavelet with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess compression quality. The index is used in place of the existing traditional Universal Image Quality Index (UIQI) "in one go": it offers extra information about the distortion between the original and compressed images compared with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion and shape distortion. It is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate that the choice of mother wavelet is very important for achieving superior wavelet compression performance, as measured by the proposed image quality index. Experimental results show that the proposed index plays a significant role in the quality evaluation of image compression on the open "BrainWeb: Simulated Brain Database (SBD)".
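The proposed index builds on the Universal Image Quality Index of Wang and Bovik, whose three classical factors (loss of correlation, luminance distortion, contrast distortion) combine into one closed form; the paper's fourth, shape-distortion factor is not reproduced here. A minimal sketch, assuming images given as flat lists of pixel intensities:

```python
from statistics import mean

def uiqi(x, y):
    """Universal Image Quality Index (Wang & Bovik) for two equal-size
    images given as flat lists of pixel intensities.

    Product of three factors: correlation, luminance, contrast."""
    mx, my = mean(x), mean(y)
    n = len(x)
    # sample (co)variances with n-1 denominator, as in the original paper
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    # combined closed form: 4*cov*mx*my / ((vx+vy)*(mx^2+my^2))
    return (4 * cxy * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

identical = [10, 20, 30, 40]
print(uiqi(identical, identical))  # ~1.0 for identical images
```

In practice the index is computed over sliding windows and averaged; the single-window form above is enough to show the structure the shape-distortion factor extends.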
High Speed Data Exchange Algorithm in Telemedicine with Wavelet based on 4D M... (Dr. Amarjeet Singh)
Existing medical imaging techniques such as fMRI, positron emission tomography (PET), dynamic 3D ultrasound and dynamic computerized tomography yield large four-dimensional data sets. 4D medical data sets are series of volumetric images captured over time; they are large and demand a great deal of resources for storage and transmission. In this paper we present a method in which a 3D image is taken, the Discrete Wavelet Transform (DWT) and the Dual-Tree Complex Wavelet Transform (DTCWT) are applied to it separately, and the image is split into sub-bands. Encoding and decoding are performed using 3D-SPIHT at different bits per pixel (bpp). The reconstructed image is synthesized using the inverse DWT. The quality of the compressed image is evaluated using factors such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).
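The two evaluation metrics mentioned are standard and easy to state; a minimal sketch on flattened pixel lists, assuming 8-bit images (peak value 255):

```python
import math

def mse(original, reconstructed):
    """Mean Square Error between two equal-size images (flat pixel lists)."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better reconstruction."""
    e = mse(original, reconstructed)
    if e == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / e)

orig = [52, 55, 61, 59]
rec  = [52, 55, 61, 63]   # one pixel off by 4 -> MSE = 4
print(mse(orig, rec))     # 4.0
print(psnr(orig, rec))    # ~42.11 dB
```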
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
NETWORK LEARNING AND TRAINING OF A CASCADED LINK-BASED FEED FORWARD NEURAL NE... (ijaia)
Given the technological advancement of our modern world, we are in dire need of systems that can learn new concepts and make decisions on their own, and artificial neural networks answer that need. In this paper, CLBFFNN is presented as a special, intelligent form of artificial neural network that can adapt to the training and learning of new ideas and make decisions in a trimodal biometric system involving fingerprint, face and iris data. The paper also gives an overview of neural networks.
Comparison Between Levenberg-Marquardt And Scaled Conjugate Gradient Training... (CSCJournals)
The Internet paved the way for information sharing all over the world decades ago, and its popularity for data distribution has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text all across the globe. Despite unprecedented progress in data storage, computing speed and data transmission speed, the demands of available data and its size (due to the increase in both quality and quantity) continue to overpower the supply of resources. One reason for this may be how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena or Lenna) and measuring accuracy and speed. Based on our results, we conclude that the two algorithms are comparable in speed and accuracy. However, the Levenberg-Marquardt algorithm showed slightly better accuracy (in average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fared better in speed (in average training iterations) on a simple MLP structure (2 hidden layers).
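Neither Levenberg-Marquardt nor Scaled Conjugate Gradient is shown here (both need Jacobian machinery); as an illustration of the underlying idea of MLP image compression, below is a bottleneck network trained with plain finite-difference gradient descent on tiny 4-pixel blocks. Every detail (block size, layer sizes, learning rate, data) is an assumption of this sketch, not taken from the paper:

```python
import random

# Toy MLP-as-autoencoder: 4 inputs -> 2 hidden -> 4 outputs.
# Compression comes from storing only the 2 hidden activations per block.
random.seed(0)
IN, HID = 4, 2
# weights: hidden layer (HID x IN) then output layer (IN x HID), flattened
w = [random.uniform(-0.5, 0.5) for _ in range(HID * IN + IN * HID)]

def forward(w, x):
    h = [sum(w[i * IN + j] * x[j] for j in range(IN)) for i in range(HID)]
    off = HID * IN
    return [sum(w[off + i * HID + j] * h[j] for j in range(HID)) for i in range(IN)]

def loss(w, data):
    # total squared reconstruction error over all blocks
    return sum(sum((a - b) ** 2 for a, b in zip(forward(w, x), x)) for x in data)

# training blocks: smooth 4-pixel blocks normalised to [0, 1]
data = [[0.1, 0.2, 0.2, 0.3], [0.5, 0.5, 0.6, 0.6], [0.8, 0.9, 0.9, 1.0]]

before = loss(w, data)
eps, lr = 1e-5, 0.02
for _ in range(300):
    # finite-difference gradient: a stand-in for LM / SCG updates
    grad = []
    for k in range(len(w)):
        w[k] += eps
        up = loss(w, data)
        w[k] -= 2 * eps
        down = loss(w, data)
        w[k] += eps
        grad.append((up - down) / (2 * eps))
    w = [wk - lr * g for wk, g in zip(w, grad)]
after = loss(w, data)
print(round(before, 4), round(after, 4))  # reconstruction error decreases
```

The point of the paper's comparison is precisely that the update rule (LM vs. SCG) changes how fast and how accurately this error is driven down.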
A broad ranging open access journal. Fast and efficient online submission. Expe... (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Mobile Network Coverage Determination at 900MHz for Abuja Rural Areas using A... (ijtsrd)
This study proposes Artificial Neural Network (ANN) based field strength prediction models for the rural areas of Abuja, the Federal Capital Territory of Nigeria. The ANN-based models were built on the Generalized Regression Neural Network (GRNN) and the Multi-Layer Perceptron Neural Network (MLP NN). These networks were created, trained and tested for field strength prediction using received power data recorded at 900 MHz from multiple Base Transceiver Stations (BTSs) distributed across the rural areas. Results indicate that the GRNN- and MLP NN-based models, with Root Mean Squared Error (RMSE) values of 4.78 dBm and 5.56 dBm respectively, offer a significant improvement over the empirical Hata-Okumura counterpart, which overestimates the signal strength with an RMSE of 20.17 dBm. Deme C. Abraham, "Mobile Network Coverage Determination at 900MHz for Abuja Rural Areas using Artificial Neural Networks", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30228.pdf
Paper Url : https://www.ijtsrd.com/computer-science/artificial-intelligence/30228/mobile-network-coverage-determination-at-900mhz-for-abuja-rural-areas-using-artificial-neural-networks/deme-c-abraham
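The RMSE figures quoted above (4.78 dBm, 5.56 dBm, 20.17 dBm) follow the usual definition; a minimal sketch with made-up sample values:

```python
import math

def rmse_dbm(measured, predicted):
    """Root Mean Squared Error between measured and predicted power (dBm)."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

# hypothetical received-power readings vs. model predictions
measured  = [-70.0, -75.0, -80.0]
predicted = [-68.0, -74.0, -83.0]
print(round(rmse_dbm(measured, predicted), 3))  # ~2.16 dB
```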
On the High Dimensional Information Processing in Quaternionic Domain and its... (IJAAS Team)
There are various high-dimensional engineering and scientific applications in communication, control, robotics, computer vision, biometrics, etc., where researchers face the problem of designing an intelligent and robust neural system that can process higher-dimensional information efficiently. Conventional real-valued neural networks have been tried on problems with high-dimensional parameters, but the required network structures are highly complex, very time consuming and weak to noise. These networks are also unable to learn magnitude and phase values simultaneously in space. A quaternion is a number that possesses magnitude in all four directions, with phase information embedded within it. This paper presents a well-generalized learning machine with a quaternionic-domain neural network that can finely process magnitude and phase information of high-dimensional data without any hassle. The learning and generalization capability of the proposed machine is demonstrated through a wide spectrum of simulations.
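The algebraic core of any quaternionic-domain network is the Hamilton product, which couples all four components at once; this coupling is what lets magnitude and phase be processed jointly. A quick sketch of the product itself, not of the paper's network:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# the defining identities: i*j = k, but j*i = -k (non-commutative)
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1) == k
```

In a quaternion-valued neuron, each weight-input multiplication is one such product, so a single quaternionic weight mixes all four signal components where a real-valued network would need a 4x4 block of parameters.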
X-TREPAN : A Multi Class Regression and Adapted Extraction of Comprehensible ... (csandit)
In this work, the TREPAN algorithm is enhanced and extended for extracting decision trees from neural networks. We empirically evaluated the performance of the algorithm on a set of databases from real-world events. This benchmark enhancement was achieved by adapting the single-test TREPAN and C4.5 decision tree induction algorithms to analyze the datasets. The models are then compared with X-TREPAN for comprehensibility and classification accuracy. Furthermore, we validate the experiments by applying statistical methods. Finally, the modified algorithm is extended to work with multi-class regression problems, and the ability to comprehend generalized feed-forward networks is achieved.
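TREPAN's defining move is to treat the trained network as an oracle: the tree is fitted to the network's predictions rather than to the raw labels, so the extracted tree explains the network. A minimal single-test sketch in that spirit, with a stand-in oracle function instead of a real network (the threshold search and the 0.6 boundary are inventions of this illustration):

```python
def oracle(x):
    # stand-in "trained network": classifies by a hidden threshold on feature 0
    return 1 if x[0] > 0.6 else 0

def best_split(points):
    """Pick the axis-aligned threshold that best matches the oracle's labels."""
    best = None
    for feat in range(len(points[0])):
        for p in points:
            thr = p[feat]
            err = sum((x[feat] > thr) != bool(oracle(x)) for x in points)
            err = min(err, len(points) - err)  # allow flipped polarity
            if best is None or err < best[0]:
                best = (err, feat, thr)
    return best  # (misclassified, feature, threshold)

# query the oracle on a grid of sample inputs, as TREPAN queries the network
samples = [(a / 10, b / 10) for a in range(11) for b in range(11)]
err, feat, thr = best_split(samples)
print(feat, thr, err)  # recovers the oracle's boundary: feature 0 at 0.6
```

A full TREPAN grows a tree of such tests, drawing extra oracle queries at each node; the sketch shows only the oracle-fitting step that distinguishes it from ordinary induction such as C4.5.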
AN EFFICIENT WAVELET BASED FEATURE REDUCTION AND CLASSIFICATION TECHNIQUE FOR... (ijcseit)
This paper proposes an improved feature reduction and classification technique to identify mild and severe dementia from brain MRI data. Manual interpretation of changes in brain volume based on visual examination by a radiologist or physician may lead to missed diagnoses when a large number of MRIs are analyzed. To avoid this human error, an automated intelligent classification system is proposed that caters to the need for classification of brain MRI, after identifying abnormal MRI volume, for the diagnosis of dementia. In this work, advanced classification techniques using Support Vector Machines (SVM) based on Particle Swarm Optimisation (PSO) and a Genetic Algorithm (GA) are compared, and feature reduction by wavelets and by PCA is analysed. From this analysis it is observed that the proposed PSO-based SVM classifier is more efficient than the SVM trained with GA, and that wavelet-based feature reduction yields better results than PCA.
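Wavelet-based feature reduction of the kind described keeps the low-frequency (approximation) coefficients and discards or truncates the details. One level of the orthonormal Haar transform, the simplest case, can be sketched as follows (the 8-sample row is made up):

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: pairwise sums (approximation coefficients)
    and differences (detail coefficients), with orthonormal 1/sqrt(2) scaling."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# feature reduction: keep only the half-length approximation vector
row = [4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 5.0, 1.0]
approx, detail = haar_step(row)
print(approx)  # smoothed features, half as many as the input
print(detail)
```

Because the transform is orthonormal, the total energy of `approx` plus `detail` equals that of the input, so dropping small detail coefficients loses little information, which is why this reduction tends to preserve class-relevant structure better than truncated PCA on some data.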
Comparative analysis of multimodal medical image fusion using pca and wavelet... (IJLT EMAS)
Nowadays there are a lot of medical images, and their number is increasing day by day. These medical images are stored in large databases. To minimize redundancy and optimize the storage capacity of images, medical image fusion is used. The main aim of medical image fusion is to combine complementary information from multiple imaging modalities (e.g. CT, MRI, PET) of the same scene. After image fusion, the resultant image is more informative and more suitable for patient diagnosis. This paper describes several fusion techniques for obtaining a fused image, presenting two approaches: spatial fusion and transform fusion. It covers Principal Component Analysis, a spatial-domain technique, and the Discrete Wavelet Transform and Stationary Wavelet Transform, which are transform-domain techniques. Performance metrics are implemented to evaluate the fusion algorithms. Experimental results show that image fusion based on the Stationary Wavelet Transform performs better than Principal Component Analysis and the Discrete Wavelet Transform.
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE... (cscpconf)
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data; it is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. It is a semi-supervised approach: the training samples are semi-labelled, i.e. labels are known for some samples and unknown for the rest. The method starts with classification using the ETL algorithm, in which various classes are formed, each separating the samples into two classes. Each class is then treated as a region, and the average of each region is calculated separately; these averages are the region centres, which are used for clustering with the FCM algorithm. Once clustering is over and the semi-supervised data is labelled, all samples are classified by DIBNNFC. The method proposed here is exhaustively tested on different benchmark datasets, and it is found that as the training parameters increase, both the number of hidden neurons and the training time decrease. Results are reported on a real character recognition data set and compared with an existing semi-supervised classifier; the proposed approach, trained in a semi-supervised manner, leads to higher classification accuracy.
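The FCM step applied to the region centres follows the standard fuzzy c-means updates (fuzzifier m = 2 is the common choice). A minimal 1-D sketch with made-up data, not the paper's implementation:

```python
def fcm(data, centers, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: alternate membership and centre updates."""
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]  # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # centre update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, u

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]      # two well-separated groups
centers, u = fcm(data, centers=[0.0, 10.0])
print([round(c, 2) for c in centers])      # near the two group means
```

Unlike hard k-means, every sample keeps a graded membership in every cluster, which is what lets the method assign soft labels to the unlabelled portion of the semi-supervised data.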
Evaluation of deep neural network architectures in the identification of bone... (TELKOMNIKA JOURNAL)
Automated medical image processing, particularly of radiological images, can reduce the number of diagnostic errors, improve patient care and reduce medical costs. This paper evaluates the performance of three recent convolutional neural networks in the autonomous identification of fissures in two-dimensional radiological images. These architectures are deep neural network types specially designed for image classification, which allows their integration with traditional image processing strategies for automatic analysis of medical images. In particular, we use three convolutional networks: ResNet (residual neural network), DenseNet (dense convolutional network) and NASNet (neural architecture search network) to learn from a set of 200 images, half labeled as fissured bones and half as seamless bones. All three networks were trained and tuned under the same conditions, and their performance was evaluated with the same metrics. The final results consider not only each model's ability to predict the characteristics of an unknown image but also its internal complexity. The three neural models were optimized to reduce classification errors without overfitting. In all three cases generalization was observed and the models were able to identify the images with fissures; however, the expected performance was achieved only with the NASNet model.
Abstract Face recognition is a form of computer vision that uses faces to identify a person or verify a person’s claimed identity. In this paper, a neural based algorithm is presented, to detect frontal views of faces. The dimensionality of input face image is reduced by the Principal component analysis and the Classification is by the neural back propagation network. This method is robust for a dataset of 300 face images and has better performance in terms of 80 – 90 % recognition rate.
Expert system design for elastic scattering neutrons optical model using bpnnijcsa
In present paper, a proposed expert system is designed to obtain a trained formulae for the optical model
parameters used in elastic scattering neutrons of light nuclei for (7Li), at energy range between [(1) to
(20)] MeV. A simple algorithm has used to design this expert system, while a multi-layer backwardpropagation
neural network (BPNN) is applied for training and testing the data used in this model. This
group of formulae may get a simple expert system occurring from governing formulae model, and predicts
the critical parameters usually resulted from the complicated computer coding methods. This expert system
may use in nuclear reactions yields in both fission and fusion nature who gives more closely results to the
real model.
Comparison on PCA ICA and LDA in Face Recognitionijdmtaiir
Face recognition is used in wide range of application.
In recent years, face recognition has become one of the most
successful applications in image analysis and understanding.
Different statistical method and research groups reported a
contradictory result when comparing principal component
analysis (PCA) algorithm, independent component analysis
(ICA) algorithm, and linear discriminant analysis (LDA)
algorithm that has been proposed in recent years. The goal of
this paper is to compare and analyze the three algorithms and
conclude which is best. Feret Dataset is used for consistency
Performance Evaluation of Object Tracking Technique Based on Position VectorsCSCJournals
In this paper, a novel algorithm for moving object tracking based on position vectors has proposed. The position vector of an object in first frame of a video has been extracted based on selection of region of interest. Based on position vector in first frame object direction has shown in nine different directions. We extract nine position vectors for nine different directions. With these position vectors next frame is cropped into nine blocks. We exploit block matching of the first frame with nine blocks of the next frame in a simple feature space by Descrete wavelet transform and dual tree complex wavelet transform. The matched block is considered as tracked object and its position vector is a reference location for the next successive frame. We describe performance evaluation and algorithm in detail to perform simulation experiments of object tracking using different feature vectors which verifies the tracking algorithm efficiency.
Multimodal Medical Image Fusion Based On SVDIOSR Journals
Image fusion is a promising process in the field of medical image processing, the idea behind is to
improve the content of medical image by combining two or more multimodal medical images. In this paper a
novel fusion framework based on singular value decomposition - based image fusion algorithm is proposed.
SVD is an image adaptive transform, it transforms the matrix of the given image into product USVT
, which
allows to refactor a digital image into three matrices called tensors. The proposed algorithm picks out
informative image patches of source images to constitute the fused image by processing the divided subtensors
rather than the whole tensor and a novel sigmoid-function-like coefficient-combining scheme is applied to
construct the fused result. Experimental results show that the proposed algorithm is an alternative image fusion
approach.
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times
that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to
be possible to perfectly reconstruct a signal from a series of measurements. Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if the signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly. The main idea is that with prior knowledge about constraints on the signal’s frequencies, fewer samples are needed to reconstruct the signal. Sparse sampling (also known as, compressive sampling, or compressed sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions tounder determined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Shannon-Nyquist sampling theorem. There are two conditions under which recovery is possible.[1] The first one is sparsity which requires the signal to be sparse in some domain. The second one is incoherence which is applied through the isometric property which is sufficient for sparse signals Possibility
of compressed data acquisition protocols which directly acquire just the important information Sparse sampling (CS) is a fast growing area of research. It neglects the extravagant acquisition process by measuring lesser values to reconstruct the image or signal. Sparse sampling is adopted successfully in various fields of image processing and proved its efficiency. Some of the image processing applications like face recognition, video encoding, Image encryption and reconstruction are presented here.
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVMsipij
The biometric is used to identify a person effectively and employ in almost all applications of day to day
activities. In this paper, we propose compression based face recognition using Discrete Wavelet Transform
(DWT) and Support Vector Machine (SVM). The novel concept of converting many images of single person
into one image using averaging technique is introduced to reduce execution time and memory. The DWT is
applied on averaged face image to obtain approximation (LL) and detailed bands. The LL band coefficients
are given as input to SVM to obtain Support vectors (SV’s). The LL coefficients of DWT and SV’s are fused
based on arithmetic addition to extract final features. The Euclidean Distance (ED) is used to compare test
image features with database image features to compute performance parameters. It is observed that, the
proposed algorithm is better in terms of performance compared to existing algorithms.
Medical Image Processing � Detection of Cancer Brainijcnes
The primary notion relying in image processing is image segmentation and classification. The intention behind the processing is to originate the image into regions. Variation formulations that effect in valuable algorithms comprise the essential attributes of its region and boundaries. Works have been carried out both in continuous and discrete formulations, though discrete version of image segmentation does not approximate continuous formulation. An existing work presented unsupervised graph cut method for image processing which leads to segmentation inaccuracy and less flexibility. To enhance the process, our first work describes the process of formation of kernel for the medical images by performing the deviation of mapped image data within the scope of each region. But the segmentation of image is not so effective based on the regions present in the given medical image. To overcome the issue, we implement a Bayesian classifier as our second work to classify the image effectively. The segmented image classification is done based on its classes and processes using Bayesian classifiers. With the classified image, it is necessary to identify the objects present in the image. For that, in this work, we exploit the use of sequential pattern matching algorithm to identify the feature space of the objects in the classified image that are highly of important that improves the speed and accuracy rate in a significant manner. An experimental evaluation is carried out to estimate the performance of the proposed efficient sequential pattern matching [ESPM] algorithm for classified brain image system in terms of estimation of object position, efficiency and compared the results with an existing multiregion classifier method.
Fast and robust tracking of multiple faces is receiving increased attention from computer vision researchers as it finds potential applications in many fields like video surveillance and computer mediated video conferencing. Real-time tracking of multiple faces in high resolution videos involve three basic tasks namely initialization, tracking and display. Among these, tracking is quite compute intensive as it involves particle filtering that won’t yield a real time performance if we use a conventional CPU based system alone.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Lalit P. Bhaiya, Virendra Kumar Verma / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 5, September-October 2012, pp. 751-756

Classification of MRI Brain Images Using Neural Network

Lalit P. Bhaiya¹, Virendra Kumar Verma²
¹ Associate Prof. & Head of ETC, RCET, Bhilai, C.G., India
² M.Tech Student, Department of Electronics & Tele-Communication, RCET, Bhilai, C.G., India
ABSTRACT
There are many difficult problems in the field of pattern recognition. These problems are the focus of much active research in order to find efficient approaches to address them. We have tried to address the problem of classifying MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. Magnetic Resonance Imaging (MRI) is the state-of-the-art medical imaging technology which allows cross-sectional views of the body with unprecedented tissue contrast. MRI plays an important role in assessing pathological conditions of the ankle, foot and brain.

In the proposed methodology three supervised neural networks have been used: the Back Propagation Algorithm (BPA), Learning Vector Quantization (LVQ) and Radial Basis Function (RBF). The features of the magnetic resonance images have been reduced, using principal component analysis (PCA), to the more essential features. The proposed technique has been carried out over a larger database compared to any previous work and is more robust and effective.

Keywords- Magnetic Resonance Image (MRI), Principal Component Analysis (PCA), Radial Basis Function (RBF), Back Propagation (BP), Learning Vector Quantization (LVQ), Multi-Layer Neural Network.

INTRODUCTION
Magnetic resonance imaging (MRI) is often the medical imaging method of choice when soft tissue delineation is necessary. This is especially true for any attempt to classify brain tissues [1]. The most important advantage of MR imaging is that it is a non-invasive technique [2]. The use of computer technology in medical decision support is now widespread and pervasive across a wide range of medical areas, such as cancer research, gastroenterology, heart diseases, brain tumors etc. [3, 4]. Fully automatic normal and diseased human brain classification from magnetic resonance images (MRI) is of great importance for research and clinical studies. Recent work [2, 5] has shown that classification of the human brain in magnetic resonance (MR) images is possible via supervised techniques such as artificial neural networks and support vector machines (SVM) [2], and unsupervised classification techniques such as self-organizing maps (SOM) [2] and fuzzy c-means combined with feature extraction techniques [5]. Other supervised classification techniques, such as k-nearest neighbors (k-NN), which groups pixels based on their similarities in each feature image [1, 6, 7, 8], can also be used to classify normal/pathological T2-weighted MRI images. We used supervised machine learning algorithms (ANN and k-NN) to obtain the classification of images under two categories, either normal or abnormal.

Usually an image of size p × q pixels is represented by a vector in a p·q-dimensional space. In practice, however, these p·q-dimensional spaces are too large to allow robust and fast object recognition. A common way to attempt to resolve this problem is to use dimension reduction techniques. In order to reduce the feature vector dimension and increase the discriminative power, principal component analysis (PCA) has been used.

In these approaches, the 2-dimensional image is considered as a vector, formed by concatenating each row or column of the image. Each classifier has its own representation of basis vectors of a high-dimensional face vector space. The dimension is reduced by projecting the face vector onto the basis vectors, and the projection is used as the feature representation of each image. [8],[15]

The Back Propagation (BP) algorithm looks for the minimum of the error function in weight space using the method of gradient descent. Properly trained back propagation networks tend to give reasonable answers when presented with inputs that they have never seen. Typically, a new input leads to an output similar to the correct output for input vectors used in training that are similar to the new input being presented. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs. [3]

The RBF network performs a similar function mapping to the BP network; however, its structure and function are much different. An RBF is a local network that is trained in a supervised manner, in contrast with the BP network, which is a global network. A BP network performs a global mapping, meaning all inputs cause an output, while an RBF performs a local mapping, meaning only inputs near a receptive field produce activation.

The LVQ network has two layers: a layer of input neurons, and a layer of output neurons. The
network is given by prototypes W=(w(i),...,w(n)). It N 1 N
changes the weights of the network in order to
classify the data correctly. For each data point, the S= i 1 (xi-µ) (xi-µ)T , µ = N i 1 xi . (3)
prototype (neuron) that is closest to it is determined After applying the linear transformation WT ,
(called the winner neuron). The weights of the the scatter of the transformed feature vectors
connections to this neuron are then adapted, i.e. made {y1,y2,…, yN} is WTSW. In PCA, the projection Wopt
closer if it correctly classifies the data point or made is chosen to maximize the determinant of the total
less similar if it incorrectly classifies it. [16] scatter matrix of the projected samples, i.e.,
We performed classification of MRI brain
images on a database of 192 images which contains max
107 normal images and 85 pathological images. We Wopt = arg W | WTSW | = [w1w2…wm] (4)
experimented with three different sets of training and
testing taken from clump of images. In first case s 98 Where {w i | i = 1, 2, … ,m} is the set of n –
(55 normal and 43 pathological) images have been dimensional eigenvectors of S corresponding to the m
used for training purpose and remaining 94 images largest eigen values. In other words, the input vector
for testing. In second case we swapped the testing (face) in an n -dimensional space is reduced to a
and training database and in third case we used 90(50 feature vector in an m -dimensional subspace. We can
normal and 40 pathological) images for training and see that the dimension of the reduced feature vector
remaining 102 images for testing. m is much less than the dimension of the input faces
For feature vector generation, images are preprocessed by PCA, which is described briefly below.

PCA Preprocessing
PCA can be used to approximate the original data with lower-dimensional feature vectors. The basic approach is to compute the eigenvectors of the covariance matrix of the original data and to approximate the data by a linear combination of the leading eigenvectors. Using the PCA procedure, a test image can be identified by first projecting the image onto the eigenface space to obtain the corresponding set of weights, and then comparing these with the sets of weights of the faces in the training set. [2],[5]

The problem of low-dimensional feature representation can be stated as follows. Let X = (x_1, x_2, ..., x_i, ..., x_N) represent the n x N data matrix, where each x_i is a face vector of dimension n, concatenated from a p x q face image. Here n represents the total number of pixels (p.q) in the face image and N is the number of face images in the training set. PCA can be considered a linear transformation (1) from the original image vector to a projection feature vector, i.e.

    Y = W^T X    (1)

where Y is the m x N feature-vector matrix, m is the dimension of the feature vector, and W is an n x m transformation matrix whose columns are the eigenvectors corresponding to the m largest eigenvalues, computed according to formula (2):

    \lambda e_i = S e_i    (2)

where e_i and \lambda are the eigenvectors and eigenvalues, respectively. Here the total scatter matrix S and the mean image \mu of all samples are defined in the standard way as

    S = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T,    \mu = \frac{1}{N} \sum_{i=1}^{N} x_i

PREPROCESSING OUTPUT
After preprocessing the images by PCA, feature vectors of reduced dimension are produced; here PCA produces feature vectors of dimension 20. We experimented with three different sets of training and testing images drawn from the image collection. In all cases, with n training samples, the input to the neural network is the 20 x n feature-vector matrix produced by PCA.

Classification
The input matrix to the neural network is of size 20 x n, while the size of the target matrix is determined by the number of classes. The target matrix is of size 2 x n: if an input feature vector (taken column-wise) belongs to class 2, the corresponding output vector has a 1 in the 2nd row and 0 in the other rows. A value of 1 in a target vector denotes that the image belongs to the class indexed by that row of the target vector.

To classify input feature vectors into target vectors, we used Back Propagation (BP), Radial Basis Function (RBF) and Learning Vector Quantization (LVQ) networks. We configured and tested each neural network with various configurations. Variations were made in the following components: the number of inputs to the network, the number of hidden layers, the number of nodes in the hidden layers, and the learning rate. For RBF, SPREAD is also varied, subject to the condition that SPREAD be large enough that the active input regions of the radial neurons overlap, so that several radial neurons always have fairly large outputs at any given moment; however, SPREAD should not be so large that each neuron is effectively responding in the same, large, area of the input space. [11],[13] The optimum configurations, which generated good testing results, are shown in the tables.
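The PCA feature extraction described above (eigenvectors of the scatter matrix, projection Y = W^T X) can be sketched as follows; the data, sizes and function name are hypothetical illustrations, not the authors' code:

```python
import numpy as np

def pca_features(X, m=20):
    """Project image vectors onto the m leading eigenvectors of the scatter matrix.

    X is the n x N data matrix (one image vector per column); the result is
    the m x N feature-vector matrix Y = W^T (X - mean), as in Eqs. (1)-(2).
    """
    mu = X.mean(axis=1, keepdims=True)        # mean image of all samples
    A = X - mu                                # centered data
    S = A @ A.T                               # total scatter matrix
    vals, vecs = np.linalg.eigh(S)            # eigh: S is symmetric
    W = vecs[:, np.argsort(vals)[::-1][:m]]   # m leading eigenvectors
    return W.T @ A

# toy example: 100-pixel "images", 12 training samples, 5 features kept
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
Y = pca_features(X, m=5)
print(Y.shape)  # (5, 12)
```

In the experiments reported here the same idea is applied with m = 20, giving the 20 x n input matrix fed to the classifiers.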
Back Propagation as Classifier
Lalit P. Bhaiya, Virendra Kumar Verma / International Journal of Engineering Research and
Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 2, Issue 5, September- October 2012, pp.751-756
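As a sketch of the gradient-descent step that Eqs. (5)-(11) below derive, using the configuration reported in Table-I (20 inputs, tansig hidden layers of 20 and 35 neurons, a purelin output layer of 2 neurons, learning rate 0.0001); the NumPy code, initialization and random data are our illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def tansig(x):
    """MATLAB-style hyperbolic tangent sigmoid transfer function."""
    return np.tanh(x)

# layer sizes from Table-I: 20 inputs, hidden layers of 20 and 35, 2 outputs
sizes = [20, 20, 35, 2]
# one weight matrix per layer; the extra column holds the bias
Ws = [rng.normal(scale=0.1, size=(o, i + 1)) for i, o in zip(sizes, sizes[1:])]

def forward(x):
    """tansig hidden layers followed by a purelin (linear) output layer."""
    a1 = tansig(Ws[0] @ np.append(x, 1.0))
    a2 = tansig(Ws[1] @ np.append(a1, 1.0))
    o = Ws[2] @ np.append(a2, 1.0)
    return a1, a2, o

def bp_step(x, t, eta=1e-4):
    """One update w <- w - eta * dE/dw for E = 1/2 * sum((t - o)^2)."""
    a1, a2, o = forward(x)
    d3 = o - t                                 # delta at the linear output
    d2 = (Ws[2][:, :-1].T @ d3) * (1 - a2**2)  # back through tansig layer 2
    d1 = (Ws[1][:, :-1].T @ d2) * (1 - a1**2)  # back through tansig layer 1
    for W, d, inp in zip(Ws, (d1, d2, d3), (x, a1, a2)):
        W -= eta * np.outer(d, np.append(inp, 1.0))
    return 0.5 * np.sum((t - o) ** 2)

x = rng.normal(size=20)         # a 20-dimensional PCA feature vector
t = np.array([1.0, 0.0])        # one-hot target: class 1
e0 = bp_step(x, t)              # error before the first update
e1 = bp_step(x, t)              # error after one update
print(e1 < e0)                  # True: the error decreases on this sample
```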
The weighting factor of the input-to-hidden neurons can be computed by (5):

    w_{ij}(k+1) = w_{ij}(k) - \eta \frac{\partial E(k)}{\partial w_{ij}}    (5)

where k is the iteration number; i, j are the indices of the input and hidden neuron, respectively; and \eta is the step size. \partial E / \partial w_{ij} can be calculated from the following series of equations (6)-(9). The error function is given by

    E = \frac{1}{2} \sum_{l=1}^{p} (t_l - o_l)^2    (6)

where p is the number of output neurons, l is the index of the neuron, and t_l and o_l are the target and output values, respectively. The activation function, net function and output function are given by

    s_i = \frac{1}{1 + e^{-net_i}}    (7)

    net_i = \sum_{l=1}^{n} w_{il} x_l + w_{i,n+1}    (8)

    o_i = \sum_{l=1}^{m} v_{il} s_l + v_{i,m+1}    (9)

where n is the number of input neurons and m is the number of output neurons. Let us define

    \delta_i = \frac{\partial E}{\partial net_i} = \frac{\partial E}{\partial s_i} \frac{\partial s_i}{\partial net_i}    (10)

and

    \frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial net_i} \frac{\partial net_i}{\partial w_{ij}}    (11)

Then we obtain the weight-update equation (5) for the input-to-hidden layer by computing Eq. (10) and Eq. (11) with Eqs. (6)-(9). The update for the hidden-to-output weights v_{ij} can be derived in the same way.

Back Propagation networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear and linear relationships between input and output vectors. The linear output layer lets the network produce values outside the range -1 to +1. [5],[6]

The optimum configuration of the BP neural network with PCA, used for training and testing, is shown in Table-I.

Table-I: BP neural network configuration
    Input vector nodes: 20
    Number of hidden layers: 2
    Number of neurons (hidden layer 1, hidden layer 2 & output layer): 20, 35, 2
    Transfer functions (hidden layer 1, hidden layer 2 & output layer): tansig, tansig, purelin
    Network learning rate: 0.0001

Radial Basis Function as Classifier
The RBF network performs a similar function mapping to the multi-layer neural network; however, its structure and function are much different. An RBF is a local network that is trained in a supervised manner: it performs a local mapping, meaning that only inputs near a receptive field produce an activation. [9],[10]

The input layer of this network is a set of n units, which accept the elements of an n-dimensional input feature vector. The n elements of the input vector x are fed to the l hidden functions; the output of each hidden function, multiplied by the weighting factor w(i, j), is input to the output layer of the network, y(x). For each RBF unit k, k = 1, 2, 3, ..., l, the center is selected as the mean value of the sample patterns belonging to class k, i.e.

    \mu_k = \frac{1}{N_k} \sum_{i=1}^{N_k} x_i^k,    k = 1, 2, 3, ..., l    (12)

where x_i^k is the eigenvector of the i-th image in class k, and N_k is the total number of trained images in class k.

Since the RBF neural network is a class of neural networks, the activation function of the hidden units is determined by the distance between the input vector and a prototype vector. Typically the activation function of the RBF units (hidden-layer units) is chosen as a Gaussian function with mean vector \mu_i and variance vector \sigma_i as follows:

    h_i(x) = \exp\left( - \frac{\lVert x - \mu_i \rVert^2}{\sigma_i^2} \right),    i = 1, 2, ..., l    (13)

Note that x is an n-dimensional input feature vector, \mu_i is an n-dimensional vector called the center of the RBF unit, \sigma_i is the width of the i-th RBF unit and l is the number of RBF units. The response of the j-th output unit for input x is given as:

    y_j(x) = \sum_{i=1}^{l} h_i(x) \, w(i, j)    (14)
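The RBF mapping of Eqs. (12)-(14) (class-mean centers, Gaussian hidden units, linear output weights) can be sketched as follows; the data, widths and output weights are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical training data: 20-dimensional feature vectors for two classes
class_samples = [rng.normal(loc=c, size=(10, 20)) for c in (-2.0, 2.0)]

# Eq. (12): each RBF center is the mean of the training patterns of one class
centers = np.stack([s.mean(axis=0) for s in class_samples])  # shape (l, 20)
sigma = np.full(len(centers), 4.0)                           # assumed widths

def rbf_output(x, W):
    """Eqs. (13)-(14): Gaussian hidden units, then a linear output layer."""
    h = np.exp(-np.sum((x - centers) ** 2, axis=1) / sigma**2)  # h_i(x)
    return h @ W                                                # y_j(x)

W = np.eye(2)             # toy output weights w(i, j): unit i drives output i
x = class_samples[1][0]   # a pattern drawn from class 2
y = rbf_output(x, W)
print(int(np.argmax(y)))  # 1: the unit centered on class 2 responds most
```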
Where w(i, j) is the connection weight of the i-th RBF unit to the j-th output node. The optimum configuration of RBF with PCA, used for training and testing, is shown in Table-II.

Table-II: RBF neural network configuration
    Number of radial basis layers: 1
    Number of neurons (input, radial basis & output layer): 20, 135, 2
    Spread: 0.8

Learning Vector Quantization as Classifier
The LVQ neural network combines competitive learning with supervised learning, and it can realize nonlinear classification effectively. There are several variations of the basic LVQ algorithm; the most common are LVQ1, LVQ2 and LVQ3. The basic LVQ neural network classifier (LVQ1), which is adopted in our work, divides the input space into disjoint regions. A prototype vector represents each region. In order to classify an input vector, it must be compared with all prototypes; the Euclidean distance metric is used to select the closest prototype, and the input vector is classified to the same class as the nearest prototype. The LVQ classifier consists of an input layer, a hidden unsupervised competitive layer, which classifies input vectors into subclasses, and a supervised linear output layer, which combines the subclasses into the target classes. In the hidden layer, only the winning neuron has an output of one; the other neurons have outputs of zero. The weight vectors of the hidden-layer neurons are the prototypes. The number of hidden neurons is defined before training and depends on the complexity of the input-output relationship; moreover, it significantly affects the results of differentiation. We carefully selected the number of hidden neurons based on extensive simulation experiments. [14]

The learning phase starts by initializing the weight vectors of the neurons in the hidden layer. The input vectors are then presented to the network in turn. For each input vector X_j, the weight vector W_c of the winning neuron c is adjusted. The winning neuron is chosen according to:

    \lVert X_j - W_c \rVert \le \lVert X_j - W_k \rVert,    for k \ne c    (15)

The weight vector W_c of the winning neuron is updated as follows. If X_j and W_c belong to the same class, then

    W_c(n+1) = W_c(n) + \alpha(n) (X_j - W_c(n))    (16)

If X_j and W_c do not belong to the same class, then

    W_c(n+1) = W_c(n) - \alpha(n) (X_j - W_c(n))    (17)

The weight vectors of the other neurons are kept constant:

    W_k(n+1) = W_k(n)    (18)

where 0 \le \alpha(n) \le 1 is the learning rate. The training algorithm is stopped after reaching a pre-specified error limit. Because the network combines competitive learning with supervised learning, its learning speed is faster than that of the BP network. The optimum configuration of LVQ with PCA & R-LDA, used for training and testing, is shown in Table-III.

Table-III: LVQ neural network configuration
    Number of competitive layers: 1
    Number of neurons (input, competitive & output layer): 30, 40, 2
    Transfer function: Lvq 1.0
    Network learning rate: 0.001

Training Graphs and Results
Each neural network took a different amount of time to train on the input feature vectors: the RBF network was the fastest, while LVQ took much more time than the others. Training graphs of BP applied to the PCA-preprocessed training set are shown in Figure 1.

Fig. 1. Learning of BP after preprocessing by PCA.

RBF creates radial-basis-layer neurons one at a time once training starts. In each iteration the network error is lowered by the appropriate input vector. This procedure is repeated until the error goal is met or the maximum number of neurons is reached; in our case RBF created 135 neurons for the PCA input vectors. Training graphs of RBF applied to the PCA-preprocessed training set are shown in Figure 2.

Fig. 2. Learning of RBF after preprocessing by PCA.

Accordingly, training graphs of LVQ applied to the PCA-preprocessed training set are shown in Figure 3.
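The LVQ1 update of Eqs. (15)-(18) can be sketched as follows; the prototypes, class labels and data here are hypothetical, not the trained network of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical prototypes: one 20-dimensional weight vector per class
prototypes = rng.normal(size=(2, 20))
proto_class = np.array([0, 1])

def lvq1_step(x, label, alpha=0.001):
    """One LVQ1 update, Eqs. (15)-(18): move the winning prototype toward x
    if its class matches the label (16), away from x otherwise (17); all
    other prototypes stay unchanged (18)."""
    c = int(np.argmin(np.linalg.norm(x - prototypes, axis=1)))  # Eq. (15)
    sign = 1.0 if proto_class[c] == label else -1.0
    prototypes[c] += sign * alpha * (x - prototypes[c])
    return c

x = prototypes[0] + 0.01 * rng.normal(size=20)  # a pattern near prototype 0
before = np.linalg.norm(x - prototypes[0])
winner = lvq1_step(x, label=0)
after = np.linalg.norm(x - prototypes[0])
print(winner, after < before)  # 0 True: the winner moved toward the input
```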
Fig. 3. Learning of LVQ after preprocessing by PCA.

Training results for the PCA-preprocessed input vectors in the first case are shown in Table-IV.

Table-IV: Recognition rate using PCA with BP, PCA with RBF and PCA with LVQ
    Method:               PCA with BP     PCA with RBF    PCA with LVQ
    No. of error images:  4               7               9
    Recognition rate:     95.7% (90/94)   92.5% (87/94)   90.4% (85/94)

Training results for the second case are shown in Table-V.

Table-V: Recognition rate using PCA with BP, PCA with RBF and PCA with LVQ
    Method:               PCA with BP     PCA with RBF    PCA with LVQ
    No. of error images:  3               6               8
    Recognition rate:     96.9% (95/98)   93.8% (92/98)   91.8% (90/98)

Training results for the third case are shown in Table-VI.

Table-VI: Recognition rate using PCA with BP, PCA with RBF and PCA with LVQ
    Method:               PCA with BP     PCA with RBF    PCA with LVQ
    No. of error images:  5               7               9
    Recognition rate:     95.0% (97/102)  93.1% (95/102)  91.1% (93/102)

CONCLUSION
In this study, we have developed a medical decision support system with normal and abnormal classes. The medical decision-making system, designed using the wavelet transform, principal component analysis (PCA) and supervised learning methods (BP, RBF and LVQ), gave very promising results in classifying healthy and pathological brain images. The benefit of the system is to assist the physician in making the final decision without hesitation.

REFERENCES
[1] L. M. Fletcher-Heath, L. O. Hall, D. B. Goldgof, F. Murtagh; "Automatic segmentation of non-enhancing brain tumors in magnetic resonance images"; Artificial Intelligence in Medicine 21 (2001), pp. 43-63.
[2] Sandeep Chaplot, L. M. Patnaik, N. R. Jagannathan; "Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network"; Biomedical Signal Processing and Control 1 (2006), pp. 86-92.
[3] http://www.abta.org/siteFiles/SitePages/5E8399DBEEA8F53CBBBBF21C63AE113.pdf
[4] A. Sengur; "An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases"; Comput. Biol. Med. (2007), doi:10.1016/j.compbiomed.2007.11.004.
[5] M. Maitra, A. Chatterjee; "Hybrid multiresolution Slantlet transform and fuzzy c-means clustering approach for normal-pathological brain MR image segregation"; Med. Eng. Phys. (2007), doi:10.1016/j.medengphy.2007.06.009.
[6] P. Abdolmaleki, Futoshi Mihara, Kouji Masuda, Lawrence Danso Buadu; "Neural network analysis of astrocytic gliomas from MRI appearances"; Cancer Letters 118 (1997), pp. 69-78.
[7] T. Rosenbaum, Volkher Engelbrecht, Wilfried Krölls, Ferdinand A. van Dorsten, Mathias Hoehn-Berlage, Hans-Gerd Lenard; "MRI abnormalities in neurofibromatosis type 1 (NF1): a study of men and mice"; Brain & Development 21 (1999), pp. 268-273.
[8] C. Cocosco, Alex P. Zijdenbos, Alan C. Evans; "A fully automatic and robust brain MRI tissue classification method"; Medical Image Analysis 7 (2003), pp. 513-527.
[9] Lisboa, P.J.G., Taktak, A.F.G.: “The use of
artificial neural networks in decision support
in cancer: a systematic review.” Neural
Networks 19, 408–415 (2006)
[10] Alfredo Vellido , Paulo J.G. Lisboa “Neural
Networks and Other Machine Learning
Methods in Cancer Research” F. Sandoval et
al. (Eds.): IWANN 2007, LNCS 4507, pp.
964–971, 2007.
[11] Lisboa, P.J.G., Wong, H., Harris, P., Swindell, R.: "A Bayesian neural network approach for modelling censored data with an application to prognosis after surgery for breast cancer". Artif. Intell. Med. 28, 1-25 (2003)
[12] Lisboa, P.J.G., Vellido, A., Wong, H.: “A
Review of Evidence of Health Benefit from
Artificial Neural Networks in Medical
Intervention.” In: Artificial Neural Networks
in Medicine and Biology, pp. 63–71.
Springer, London (2000).
[13] X. Lu, Y. Wang and A. K. Jain,
“Combining Classifier for Face
Recognition,” Proc. of IEEE 2003 Intern.
Conf. on Multimedia and Expo. Vol. 3. pp.
13-16, 2003.
[14] Xin Ma; Wei Liu; Yibin Li; Rui Song,
“LVQ Neural Network Based Target
Differentiation Method for Mobile Robot”
Advanced Robotics, 2005. ICAR '05.
Proceedings. 12th International Conference
on 18-20 July 2005.
[15] Xudong Jiang; Mandal, B.; Kot, A.
“Eigenfeature Regularization and Extraction
in Face Recognition” Pattern Analysis and
Machine Intelligence, IEEE Transactions on
Volume 30, Issue 3, March 2008.
[16] Yan Jun, Wang Dian-hong; "Sequential face recognition based on LVQ networks"; VLSI Design and Video Technology, Proceedings of the 2005 IEEE International Workshop, 2005.