
Classification of MRI Brain Images Using Neural Network

Lalit P. Bhaiya¹, Virendra Kumar Verma²
¹Associate Professor & Head of ETC, RCET, Bhilai, C.G., India
²M.Tech Student, Department of Electronics & Tele-Communication, RCET, Bhilai, C.G., India

International Journal of Engineering Research and Applications (IJERA), ISSN 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 751-756

ABSTRACT
There are many difficult problems in the field of pattern recognition, and they are the focus of much active research aimed at finding efficient approaches to address them. We have addressed the problem of classifying MRI brain images by creating a robust and more accurate classifier which can act as an expert assistant to medical practitioners. Magnetic Resonance Imaging (MRI) is the state-of-the-art medical imaging technology which allows cross-sectional views of the body with unprecedented tissue contrast. MRI plays an important role in assessing pathological conditions of the ankle, foot and brain.

In the proposed methodology three supervised neural networks have been used: the Back Propagation Algorithm (BPA), Learning Vector Quantization (LVQ) and the Radial Basis Function (RBF) network. The features of the magnetic resonance images have been reduced, using principal component analysis (PCA), to the more essential ones. The proposed technique has been carried out over a larger database than any previous work and is more robust and effective.

Keywords: Magnetic Resonance Image (MRI), Principal Component Analysis (PCA), Radial Basis Function (RBF), Back Propagation (BP), Learning Vector Quantization (LVQ), Multi-Layer Neural Network
INTRODUCTION
Magnetic resonance imaging (MRI) is often the medical imaging method of choice when soft tissue delineation is necessary. This is especially true for any attempt to classify brain tissues [1]. The most important advantage of MR imaging is that it is a non-invasive technique [2]. The use of computer technology in medical decision support is now widespread and pervasive across a wide range of medical areas, such as cancer research, gastroenterology, heart diseases and brain tumors [3, 4]. Fully automatic classification of normal and diseased human brains from magnetic resonance (MR) images is of great importance for research and clinical studies. Recent work [2, 5] has shown that classification of the human brain in MR images is possible via supervised techniques such as artificial neural networks and the support vector machine (SVM) [2], and via unsupervised techniques such as the self-organizing map (SOM) [2] and fuzzy c-means combined with feature extraction [5]. Other supervised classification techniques, such as k-nearest neighbors (k-NN), which group pixels based on their similarities in each feature image [1, 6, 7, 8], can also be used to classify normal/pathological T2-weighted MRI images. We used supervised machine-learning algorithms (artificial neural networks) to classify images under two categories, normal or abnormal.

Usually an image of size p × q pixels is represented by a vector in a p·q-dimensional space. In practice, however, these (p·q)-dimensional spaces are too large to allow robust and fast object recognition. A common way to resolve this problem is to use dimension reduction techniques. In order to reduce the feature vector dimension and increase the discriminative power, principal component analysis (PCA) has been used.

In these approaches, the 2-dimensional image is treated as a vector by concatenating the rows or columns of the image. Each classifier has its own representation of basis vectors of a high-dimensional image vector space. The dimension is reduced by projecting the image vector onto the basis vectors, and the projection is used as the feature representation of each image [8], [15].

The Back Propagation (BP) algorithm looks for the minimum of the error function in weight space using the method of gradient descent. Properly trained back-propagation networks tend to give reasonable answers when presented with inputs they have never seen: typically, a new input leads to an output similar to the correct output for training inputs that are similar to the new input being presented. This generalization property makes it possible to train a network on a representative set of input/target pairs and get good results without training the network on all possible input/output pairs [3].

The RBF network performs a similar function mapping to the BP network, but its structure and operation are quite different. An RBF network is a local network trained in a supervised manner, in contrast with the BP network, which is a global network: a BP network performs a global mapping, meaning all inputs contribute to an output, while an RBF network performs a local mapping, meaning only inputs near a receptive field produce activation.

The LVQ network has two layers: a layer of input neurons and a layer of output neurons. The network is given by prototypes W = (w(1), ..., w(n)), and it changes the weights of the network in order to classify the data correctly. For each data point, the prototype (neuron) closest to it, called the winner neuron, is determined. The weights of the connections to this neuron are then adapted, i.e. made closer if it classifies the data point correctly or made less similar if it classifies it incorrectly [16].

We performed classification of MRI brain images on a database of 192 images containing 107 normal images and 85 pathological images. We experimented with three different training/testing splits drawn from this pool of images. In the first case, 98 images (55 normal and 43 pathological) were used for training and the remaining 94 images for testing. In the second case we swapped the training and testing databases, and in the third case we used 90 images (50 normal and 40 pathological) for training and the remaining 102 images for testing.

For feature vector generation, images are preprocessed by PCA, which is described briefly below.
PCA Preprocessing
PCA can be used to approximate the original data with lower-dimensional feature vectors. The basic approach is to compute the eigenvectors of the covariance matrix of the original data, and to approximate each sample by a linear combination of the leading eigenvectors. Using the PCA procedure, a test image can be identified by first projecting the image onto the eigenimage space to obtain the corresponding set of weights, and then comparing these with the sets of weights of the images in the training set [2], [5].

The problem of low-dimensional feature representation can be stated as follows. Let X = (x_1, x_2, ..., x_i, ..., x_N) denote the n × N data matrix, where each x_i is an image vector of dimension n, obtained by concatenating a p × q image. Here n is the total number of pixels (p·q) in the image and N is the number of images in the training set. PCA can be considered as a linear transformation (1) from the original image vector to a projection feature vector:

Y = W^T X    (1)

where Y is the m × N feature vector matrix, m is the dimension of the feature vector, and W is an n × m transformation matrix whose columns are the eigenvectors corresponding to the m largest eigenvalues, computed from

\lambda e_i = S e_i    (2)

where e_i and \lambda are the eigenvectors and eigenvalues of the total scatter matrix S. The scatter matrix S and the mean image \mu of all samples are defined as

S = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T, \qquad \mu = \frac{1}{N} \sum_{i=1}^{N} x_i    (3)

After applying the linear transformation W^T, the scatter of the transformed feature vectors {y_1, y_2, ..., y_N} is W^T S W. In PCA, the projection W_opt is chosen to maximize the determinant of the total scatter matrix of the projected samples, i.e.

W_{opt} = \arg\max_W |W^T S W| = [w_1 \; w_2 \; \cdots \; w_m]    (4)

where {w_i | i = 1, 2, ..., m} is the set of n-dimensional eigenvectors of S corresponding to the m largest eigenvalues. In other words, an input image vector in the n-dimensional space is reduced to a feature vector in an m-dimensional subspace, where the reduced dimension m is much smaller than the input dimension n.
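The procedure of Eqs. (1)-(4) can be summarized in a few lines of NumPy. The paper does not publish its implementation, so the sketch below is illustrative only (names such as pca_reduce are ours); it uses the standard Gram-matrix trick to keep the eigendecomposition tractable when n is much larger than N.

```python
import numpy as np

def pca_reduce(X, m=20):
    """Project n-dimensional image vectors onto the m leading
    eigenvectors of the total scatter matrix S (Eqs. (1)-(4)).
    X is the n x N data matrix, one vectorized image per column."""
    mu = X.mean(axis=1, keepdims=True)        # mean image, Eq. (3)
    Xc = X - mu                               # centered data
    # For n >> N, eigendecompose the small N x N Gram matrix Xc^T Xc;
    # its eigenvectors map back to eigenvectors of S = Xc Xc^T.
    eigvals, V = np.linalg.eigh(Xc.T @ Xc)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:m]     # indices of the m largest
    W = Xc @ V[:, order]                      # eigenvectors of S, Eq. (2)
    W /= np.linalg.norm(W, axis=0)            # normalize columns to unit length
    return W.T @ Xc, W, mu                    # Y = W^T X, Eq. (1)

# Example: 192 images of 64 x 64 pixels -> a 20 x 192 feature matrix.
X = np.random.rand(64 * 64, 192)              # stand-in for real MRI data
Y, W, mu = pca_reduce(X, m=20)
print(Y.shape)                                # (20, 192)
```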
PREPROCESSING OUTPUT
After preprocessing the images with PCA, feature vectors of reduced dimension are produced; PCA produces feature vectors of dimension 20. We experimented with three different training and testing sets drawn from the pool of images. In all cases, for a training sample of n images, the input to the neural network is therefore a 20 × n feature vector matrix.

Classification
The input matrix to the neural network is of size 20 × n, while the target matrix size is determined by the number of classes. The target matrix is of size 2 × n: if an input feature vector (a column) belongs to class 2, the corresponding target vector has a 1 in the 2nd row and 0 in the other row. In general, a 1 in a target vector denotes that the image belongs to the class indexed by that row.

To classify input feature vectors into target vectors, we used Back Propagation (BP), Radial Basis Function (RBF) and Learning Vector Quantization (LVQ) networks. We configured and tested each neural network with various configurations, varying the following components: the number of inputs to the neural network, the number of hidden layers, the number of nodes in the hidden layers, and the learning rate. In the case of RBF, the SPREAD parameter is also varied, subject to the condition that SPREAD is large enough that the active input regions of the radial neurons overlap sufficiently for several radial neurons to have fairly large outputs at any given moment; however, SPREAD should not be so large that each neuron effectively responds over the same large area of the input space [11], [13]. The optimum configurations, which generated good testing results, are shown in the tables below.
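For concreteness, the target-matrix construction just described amounts to one-hot encoding the class labels column by column. A minimal sketch (the function name and label convention are ours, not the paper's):

```python
import numpy as np

def make_target_matrix(labels, n_classes=2):
    """Build the n_classes x n target matrix: column j holds a 1 in the
    row of sample j's class (classes numbered from 1) and 0 elsewhere."""
    T = np.zeros((n_classes, len(labels)))
    for j, c in enumerate(labels):
        T[c - 1, j] = 1.0
    return T

# Class 1 = normal, class 2 = pathological
print(make_target_matrix([1, 2, 2, 1]))
# [[1. 0. 0. 1.]
#  [0. 1. 1. 0.]]
```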
Back Propagation as Classifier
The weighting factors of the input-to-hidden neurons are updated according to (5):

w_{ij}^{(k+1)} = w_{ij}^{(k)} - \eta \frac{\partial E^{(k)}}{\partial w_{ij}}    (5)

where k is the iteration number, i and j index the input and hidden neurons respectively, and \eta is the step size. The derivative \partial E / \partial w_{ij} can be calculated from the following series of equations (6)-(9). The error function is

E = \frac{1}{2} \sum_{l=1}^{p} (t_l - o_l)^2    (6)

where p is the number of output neurons, l is the neuron index, and t_l and o_l are the target and output values, respectively. The activation function, net function and output function are given by

s_i = \frac{1}{1 + e^{-net_i}}    (7)

net_i = \sum_{l=1}^{n} w_{il} x_l + w_{i,n+1}    (8)

o_i = \sum_{l=1}^{m} v_{il} s_l + v_{i,m+1}    (9)

where n is the number of input neurons and m is the number of hidden neurons. Defining

\frac{\partial E}{\partial net_i} = \frac{\partial E}{\partial s_i} \frac{\partial s_i}{\partial net_i}    (10)

\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial net_i} \frac{\partial net_i}{\partial w_{ij}}    (11)

we obtain the weight update equation (5) for the input-to-hidden layer by evaluating Eqs. (10) and (11) with Eqs. (6)-(9). The update for the hidden-to-output weights v_{ij} can be derived in the same way.

Back Propagation networks often have one or more hidden layers of sigmoid neurons followed by an output layer of linear neurons. Multiple layers of neurons with nonlinear transfer functions allow the network to learn nonlinear and linear relationships between input and output vectors, and the linear output layer lets the network produce values outside the range -1 to +1 [5], [6].

The optimum configuration of the BP neural network for PCA, used for training and testing, is shown in Table I.

Table I: BP neural network configuration
    Input vector nodes                                      20
    Number of hidden layers                                 2
    Neurons (hidden layer 1, hidden layer 2, output layer)  20, 35, 2
    Transfer functions (hidden 1, hidden 2, output)         tansig, tansig, purelin
    Network learning rate                                   0.0001
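The following NumPy sketch makes the gradient-descent update (5) concrete for a single sigmoid hidden layer with a linear output layer, i.e. Eqs. (6)-(11) in batch form. It is a didactic simplification: the configuration actually used (Table I) has two tansig hidden layers and was evidently trained with a neural-network toolbox, so treat this as an assumed minimal implementation rather than the authors' own code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))              # Eq. (7)

def bp_epoch(X, T, W1, b1, W2, b2, eta=1e-3):
    """One batch gradient-descent step on E = 1/2 sum (t - o)^2, Eq. (6).
    X: inputs (n x N), T: targets (p x N). Weights updated in place."""
    net = W1 @ X + b1                            # net input, Eq. (8)
    S = sigmoid(net)                             # hidden activations
    O = W2 @ S + b2                              # linear output, Eq. (9)
    dO = O - T                                   # dE/dO at the output
    dS = (W2.T @ dO) * S * (1.0 - S)             # dE/dnet via chain rule, Eqs. (10)-(11)
    W2 -= eta * dO @ S.T                         # hidden-to-output update
    b2 -= eta * dO.sum(axis=1, keepdims=True)
    W1 -= eta * dS @ X.T                         # input-to-hidden update, Eq. (5)
    b1 -= eta * dS.sum(axis=1, keepdims=True)
    return 0.5 * np.sum(dO ** 2)                 # current error E

# 20 PCA features -> 35 hidden -> 2 outputs (loosely following Table I).
rng = np.random.default_rng(0)
X = rng.random((20, 98))                         # stand-in feature matrix
labels = rng.integers(1, 3, 98)
T = np.eye(2)[:, labels - 1]                     # 2 x 98 one-hot targets
W1, b1 = rng.normal(0, 0.1, (35, 20)), np.zeros((35, 1))
W2, b2 = rng.normal(0, 0.1, (2, 35)), np.zeros((2, 1))
for epoch in range(100):
    err = bp_epoch(X, T, W1, b1, W2, b2)
```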
Radial Basis Function as Classifier
The RBF network performs a similar function mapping to the multi-layer neural network, but its structure and operation are quite different. An RBF network is a local network trained in a supervised manner; it performs a local mapping, meaning only inputs near a receptive field produce activation [9], [10].

The input layer of this network is a set of n units, which accept the elements of an n-dimensional input feature vector. The n elements of the input vector x are fed to the l hidden functions; the output of each hidden function, multiplied by a weighting factor w(i, j), is fed to the output layer of the network, y(x). For each RBF unit k, k = 1, 2, 3, ..., l, the center is selected as the mean value of the sample patterns belonging to class k, i.e.

\mu_k = \frac{1}{N_k} \sum_{i=1}^{N_k} x_i^k, \qquad k = 1, 2, \ldots, l    (12)

where x_i^k is the feature (eigen-) vector of the i-th image in class k, and N_k is the total number of training images in class k.

Since the activation of the hidden units in an RBF network is determined by the distance between the input vector and a prototype vector, the activation function of the RBF units (hidden-layer units) is typically chosen as a Gaussian with mean vector \mu_i and width \sigma_i:

h_i(x) = \exp\!\left( -\frac{\| x - \mu_i \|^2}{\sigma_i^2} \right), \qquad i = 1, 2, \ldots, l    (13)

where x is an n-dimensional input feature vector, \mu_i is an n-dimensional vector called the center of the RBF unit, \sigma_i is the width of the i-th RBF unit and l is the number of RBF units. The response of the j-th output unit for input x is given as

y_j(x) = \sum_{i=1}^{l} h_i(x) \, w(i, j)    (14)

where w(i, j) is the connection weight of the i-th RBF unit to the j-th output node. The optimum configuration of RBF with PCA used for training and testing is shown in Table II.

Table II: RBF neural network configuration
    Number of radial basis layers                      1
    Neurons (input, radial basis, output layer)        20, 135, 2
    Spread                                             0.8
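As an illustration of Eqs. (12)-(14), the sketch below places one Gaussian unit at each class mean (Eq. (12)) and fits the output weights w(i, j) by linear least squares; sigma plays the role of the width \sigma_i. This is a deliberately simplified stand-in: the network reported in Table II instead grew 135 radial units incrementally (with SPREAD = 0.8), so the code shows the mapping defined by the equations, not the exact training procedure.

```python
import numpy as np

def rbf_hidden(X, mu, sigma):
    """Gaussian activations h_i(x) = exp(-||x - mu_i||^2 / sigma^2), Eq. (13).
    X: n x N inputs, mu: l x n centers; returns an l x N matrix."""
    d2 = ((X[None, :, :] - mu[:, :, None]) ** 2).sum(axis=1)
    return np.exp(-d2 / sigma ** 2)

def rbf_train(X, T, labels, sigma=0.8):
    """Centers = class means (Eq. (12)); output weights fitted by
    least squares on the hidden activations. T: p x N targets."""
    classes = np.unique(labels)
    mu = np.stack([X[:, labels == c].mean(axis=1) for c in classes])
    H = rbf_hidden(X, mu, sigma)                   # l x N activations
    W, *_ = np.linalg.lstsq(H.T, T.T, rcond=None)  # solve for w(i, j)
    return mu, W

def rbf_predict(X, mu, W, sigma=0.8):
    """y_j(x) = sum_i h_i(x) w(i, j), Eq. (14)."""
    return W.T @ rbf_hidden(X, mu, sigma)

rng = np.random.default_rng(1)
X = rng.random((20, 98))                           # stand-in PCA features
labels = rng.integers(1, 3, 98)                    # class 1 or 2 per column
T = np.eye(2)[:, labels - 1]                       # 2 x 98 one-hot targets
mu, W = rbf_train(X, T, labels)
pred = rbf_predict(X, mu, W).argmax(axis=0) + 1    # predicted class per column
```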
Learning Vector Quantization as Classifier
The LVQ neural network combines competitive learning with supervised learning and can realize nonlinear classification effectively. There are several variations of the basic LVQ algorithm; the most common are LVQ1, LVQ2 and LVQ3. The basic LVQ classifier (LVQ1), which is adopted in our work, divides the input space into disjoint regions, each represented by a prototype vector. To classify an input vector, it is compared with all prototypes; the Euclidean distance metric is used to select the closest prototype, and the input vector is assigned the class of that nearest prototype. The LVQ classifier consists of an input layer, a hidden unsupervised competitive layer, which classifies input vectors into subclasses, and a supervised linear output layer, which combines the subclasses into the target classes. In the hidden layer, only the winning neuron has an output of one; the other neurons have outputs of zero. The weight vectors of the hidden-layer neurons are the prototypes. The number of hidden neurons is fixed before training, depends on the complexity of the input-output relationship, and significantly affects the classification results; we selected it based on extensive simulation experiments [14].

The learning phase starts by initializing the weight vectors of the hidden-layer neurons. The input vectors are then presented to the network in turn. For each input vector X_j, the weight vector W_c of the winning neuron c is adjusted, where the winning neuron is chosen according to

\| X_j - W_c \| \le \| X_j - W_k \|, \qquad \text{for } k \ne c    (15)

The weight vector W_c of the winning neuron is updated as follows. If X_j and W_c belong to the same class, then

W_c(n+1) = W_c(n) + \alpha(n) (X_j - W_c(n))    (16)

If X_j and W_c do not belong to the same class, then

W_c(n+1) = W_c(n) - \alpha(n) (X_j - W_c(n))    (17)

The weight vectors of all other neurons remain constant:

W_k(n+1) = W_k(n), \qquad k \ne c    (18)

where 0 ≤ \alpha(n) ≤ 1 is the learning rate. Training stops after reaching a pre-specified error limit. Because the network combines competitive learning with supervised learning, its learning speed is faster than that of the BP network. The optimum configuration of LVQ with PCA, used for training and testing, is shown in Table III.

Table III: LVQ neural network configuration
    Number of competitive layers                       1
    Neurons (input, competitive, output layer)         30, 40, 2
    Transfer function                                  LVQ 1.0
    Network learning rate                              0.001
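Update rules (15)-(18) translate almost line for line into code. The sketch below implements plain LVQ1 with prototypes initialized from random training samples and a fixed prototype-to-class assignment; the initialization details and the omission of the linear output layer of Table III are our simplifications, not the paper's exact settings.

```python
import numpy as np

def lvq1_train(X, labels, n_proto=40, alpha=0.01, epochs=50, seed=0):
    """Basic LVQ1: move the winning prototype toward the input if the
    classes match (Eq. (16)), away otherwise (Eq. (17)); all other
    prototypes are unchanged (Eq. (18)). X: n x N, one sample per column."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[1], n_proto, replace=False)
    W = X[:, idx].copy()                        # prototypes seeded from data
    proto_class = labels[idx].copy()            # fixed prototype -> class map
    for _ in range(epochs):
        for j in rng.permutation(X.shape[1]):
            x = X[:, j]
            c = np.argmin(((W - x[:, None]) ** 2).sum(axis=0))  # winner, Eq. (15)
            sign = 1.0 if proto_class[c] == labels[j] else -1.0
            W[:, c] += sign * alpha * (x - W[:, c])  # Eqs. (16)-(17)
    return W, proto_class

def lvq1_predict(X, W, proto_class):
    """Assign each column of X the class of its nearest prototype."""
    d2 = ((X[:, None, :] - W[:, :, None]) ** 2).sum(axis=0)
    return proto_class[np.argmin(d2, axis=0)]

rng = np.random.default_rng(2)
X = rng.random((20, 98))                        # stand-in PCA features
labels = rng.integers(1, 3, 98)                 # class 1 or 2 per column
W, pc = lvq1_train(X, labels)
print(lvq1_predict(X, W, pc)[:10])
```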
Training Graphs and Results
Each neural network took a different amount of time to train on the input feature vectors: the RBF network was the fastest, while LVQ took much more time than the others. The training graph of BP applied to the PCA-preprocessed training set is shown in Figure 1.

[Figure 1: Learning curve of BP after preprocessing by PCA.]

The RBF network creates radial basis layer neurons one at a time as training proceeds. In each iteration the network error is lowered using the most appropriate input vector, and this procedure is repeated until the error goal is met or the maximum number of neurons is reached. In our case the RBF network created 135 neurons for the PCA input vectors. The training graph of RBF applied to the PCA-preprocessed training set is shown in Figure 2.

[Figure 2: Learning curve of RBF after preprocessing by PCA.]

Likewise, the training graph of LVQ applied to the PCA-preprocessed training set is shown in Figure 3.
[Figure 3: Learning curve of LVQ after preprocessing by PCA.]

The recognition results for the PCA-preprocessed input vectors in the first case are shown in Table IV.

Table IV: Recognition rates, first case (94 test images)
    Method          No. of error images    Recognition rate
    PCA with BP     4                      95.7% (90/94)
    PCA with RBF    7                      92.5% (87/94)
    PCA with LVQ    9                      90.4% (85/94)

The results for the second case are shown in Table V.

Table V: Recognition rates, second case (98 test images)
    Method          No. of error images    Recognition rate
    PCA with BP     3                      96.9% (95/98)
    PCA with RBF    6                      93.8% (92/98)
    PCA with LVQ    8                      91.8% (90/98)

The results for the third case are shown in Table VI.

Table VI: Recognition rates, third case (102 test images)
    Method          No. of error images    Recognition rate
    PCA with BP     5                      95.0% (97/102)
    PCA with RBF    7                      93.1% (95/102)
    PCA with LVQ    9                      91.1% (93/102)

CONCLUSION
In this study we have developed a medical decision support system with normal and abnormal classes. The decision-making system, built from principal component analysis (PCA) for feature reduction and supervised learning methods (BPA, RBFN and LVQ) for classification, gave very promising results in classifying healthy and pathological brains. The benefit of the system is to assist the physician in making the final decision without hesitation.
REFERENCES
[1] L. M. Fletcher-Heath, L. O. Hall, D. B. Goldgof and F. Murtagh, "Automatic segmentation of non-enhancing brain tumors in magnetic resonance images", Artificial Intelligence in Medicine 21 (2001), pp. 43-63.
[2] S. Chaplot, L. M. Patnaik and N. R. Jagannathan, "Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network", Biomedical Signal Processing and Control 1 (2006), pp. 86-92.
[3] http://www.abta.org/siteFiles/SitePages/5E8399DBEEA8F53CBBBBF21C63AE113.pdf
[4] A. Sengur, "An expert system based on principal component analysis, artificial immune system and fuzzy k-NN for diagnosis of valvular heart diseases", Computers in Biology and Medicine (2007), doi:10.1016/j.compbiomed.2007.11.004.
[5] M. Maitra and A. Chatterjee, "Hybrid multiresolution Slantlet transform and fuzzy c-means clustering approach for normal-pathological brain MR image segregation", Medical Engineering & Physics (2007), doi:10.1016/j.medengphy.2007.06.009.
[6] P. Abdolmaleki, F. Mihara, K. Masuda and L. D. Buadu, "Neural network analysis of astrocytic gliomas from MRI appearances", Cancer Letters 118 (1997), pp. 69-78.
[7] T. Rosenbaum, V. Engelbrecht, W. Krölls, F. A. van Dorsten, M. Hoehn-Berlage and H.-G. Lenard, "MRI abnormalities in neurofibromatosis type 1 (NF1): a study of men and mice", Brain & Development 21 (1999), pp. 268-273.
[8] C. A. Cocosco, A. P. Zijdenbos and A. C. Evans, "A fully automatic and robust brain MRI tissue classification method", Medical Image Analysis 7 (2003), pp. 513-527.
[9] P. J. G. Lisboa and A. F. G. Taktak, "The use of artificial neural networks in decision support in cancer: a systematic review", Neural Networks 19 (2006), pp. 408-415.
[10] A. Vellido and P. J. G. Lisboa, "Neural Networks and Other Machine Learning Methods in Cancer Research", in F. Sandoval et al. (Eds.): IWANN 2007, LNCS 4507, pp. 964-971, 2007.
[11] P. J. G. Lisboa, H. Wong, P. Harris and R. Swindell, "A Bayesian neural network approach for modelling censored data with an application to prognosis after surgery for breast cancer", Artificial Intelligence in Medicine 28 (2003), pp. 1-25.
[12] P. J. G. Lisboa, A. Vellido and H. Wong, "A Review of Evidence of Health Benefit from Artificial Neural Networks in Medical Intervention", in Artificial Neural Networks in Medicine and Biology, pp. 63-71, Springer, London (2000).
[13] X. Lu, Y. Wang and A. K. Jain, "Combining Classifiers for Face Recognition", Proc. IEEE 2003 International Conference on Multimedia and Expo, Vol. 3, pp. 13-16, 2003.
[14] X. Ma, W. Liu, Y. Li and R. Song, "LVQ Neural Network Based Target Differentiation Method for Mobile Robot", Proc. 12th International Conference on Advanced Robotics (ICAR '05), 18-20 July 2005.
[15] X. Jiang, B. Mandal and A. Kot, "Eigenfeature Regularization and Extraction in Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 30, Issue 3, March 2008.
[16] Y. Jun and W. Dian-hong, "Sequential face recognition based on LVQ networks", Proc. 2005 IEEE International Workshop on VLSI Design and Video Technology, 2005.
