
SIVAKUMAR R, Dr. M. SRIDHAR / International Journal of Engineering Research and Applications (IJERA)
ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 4, July-August 2012, pp. 1782-1785

Multi Class Different Problem Solving Using Intelligent Algorithm

1 SIVAKUMAR R, 2 Dr. M. SRIDHAR
1 Research Scholar, Dept. of ECE, Bharath University, India
2 Dept. of ECE, Bharath University, India

Abstract
Artificial Intelligence (AI) is a branch of computer science that investigates and demonstrates the potential for intelligent behavior in digital computer systems. Replicating human problem-solving competence in a machine has been a long-standing goal of AI research, and there are three main branches of AI, designed to emulate human perception, learning, and evolution. In this work we consider a pyramidal decomposition algorithm for the support vector machine classification problem. The algorithm is proposed to solve the multi-class classification problem using several binary SVM-classifiers under the "one-against-one" strategy.

Keywords — support vector machines, face recognition, decomposition algorithm

INTRODUCTION
One of the central problems of computer vision is face recognition from a 2D photograph. The task of face recognition has many solutions, all resting on similar basic principles of image processing and pattern recognition. Our approach is based on well-known methods and algorithms such as PCA and SVM. The process of face recognition consists of several basic stages: face detection, image enhancement of the region of interest (ROI), representation of the image as a feature vector, and pattern classification with an SVM.

IMAGE PREPROCESSING
Face detection is the first step of processing in many approaches to face recognition. We use a face detector trained on our own images, based on the Viola-Jones algorithm [1] with Haar-like features, to detect a region of interest bounded by the lines of the brows (see Figure 1). We then stretch the pixel values of the ROI over the whole intensity range and equalize its histogram. To reduce the dimensionality of the feature space and extract the principal components of the image, the NIPALS algorithm [2] is used. An SVM-classifier then solves the problem of training on and classifying the images.

Figure 1. Region of interest bounded by the lines of the brows

INTRODUCTION TO SUPPORT VECTOR MACHINES
Support Vector Machines (SVMs) [3] are one of the kernel-based techniques. SVM classifiers can be successfully applied to text categorization and face recognition. A special property of SVMs is that they simultaneously minimize the empirical classification error and maximize the geometric margin. SVMs are used for the classification of both linearly separable (see Figure 2) and non-separable data.

Figure 2. Optimal hyperplane of support vector machines

The basic idea of SVMs is to construct the optimal hyperplane for linearly separable patterns. The approach extends to patterns that are not linearly separable by transforming the original data into a new space, that is, by using the kernel trick. In the context of Figure 2, which illustrates 2-class linearly separable data, a conventional classifier would simply identify a decision boundary w between the two classes. SVMs instead identify the support vectors that define the bounding hyperplanes H1 and H2, creating a margin between the two classes and thus ensuring that the data are "more separable" than in the case of the conventional classifier.

Suppose we have N training data points $\{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{\pm 1\}$. We would like to learn a linear separating classifier:

$$f(x) = \operatorname{sgn}(w \cdot x - b) \qquad (1)$$
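As a concrete illustration of the preprocessing stage, the sketch below chains face detection, intensity stretching, and histogram equalization, assuming OpenCV. It is not the authors' code: the paper uses a custom Viola-Jones detector trained to bound the ROI by the brow lines, for which the stock frontal-face cascade stands in here.

```python
# Minimal sketch of the preprocessing pipeline described above, using OpenCV.
# The stock frontal-face Haar cascade is an assumption standing in for the
# authors' custom brow-bounded detector.
import cv2

def preprocess(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Viola-Jones detection with Haar-like features [1]
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]              # take the first detected region
    roi = gray[y:y + h, x:x + w]

    # Stretch pixel values to the whole intensity range ...
    roi = cv2.normalize(roi, None, 0, 255, cv2.NORM_MINMAX)
    # ... and equalize the histogram of the ROI
    return cv2.equalizeHist(roi)
```

The equalized ROI would then be flattened into a feature vector and reduced with NIPALS-based PCA before reaching the SVM stage.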
Furthermore, we want this hyperplane to have the maximum separating margin with respect to the two classes. Specifically, we wish to find the hyperplane

$$H: \; w \cdot x - b = 0$$

and two hyperplanes parallel to it, at equal distances from it:

$$H_1: \; w \cdot x - b = +1, \qquad (2)$$

$$H_2: \; w \cdot x - b = -1, \qquad (3)$$

subject to the condition that there are no data points between H1 and H2 and that the distance between H1 and H2 is maximized. For any separating plane H and corresponding H1 and H2 we can always "normalize" the coefficient vector w so that H1 is $w \cdot x - b = +1$ and H2 is $w \cdot x - b = -1$.

Because we maximize the distance between H1 and H2, some positive examples lie on H1 and some negative examples on H2. These examples are called support vectors: only they participate in the definition of the separating hyperplane, and the other examples can be removed or moved around as long as they do not cross the planes H1 and H2. Introducing Lagrange multipliers $\alpha_1, \alpha_2, \ldots, \alpha_N \ge 0$, we obtain the Lagrangian

$$L(w, b, \alpha) = \frac{1}{2} w^{T} w - \sum_{i=1}^{N} \alpha_i y_i (w \cdot x_i - b) + \sum_{i=1}^{N} \alpha_i \qquad (4)$$

If the surface separating the two classes is not linear, we can transform the data points into a high-dimensional space in which they become linearly separable. Let the transformation be $\Phi(\cdot)$. In the high-dimensional space we solve the dual problem

$$L_D = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, \Phi(x_i) \cdot \Phi(x_j) \qquad (5)$$

Suppose, in addition, that $\Phi(x_i) \cdot \Phi(x_j) = k(x_i, x_j)$; that is, the dot product in the high-dimensional space equals a kernel function evaluated in the input space. Then we need not be explicit about the transformation $\Phi(\cdot)$ at all, as long as we know that the kernel $k(x_i, x_j)$ is the dot product of some high-dimensional space. Many kernel functions can be used this way, for example the radial basis function (Gaussian kernel).
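To make the kernel trick concrete, the short sketch below (an illustration, not part of the paper) computes the RBF kernel matrix directly from the input vectors, playing the role of $\Phi(x_i) \cdot \Phi(x_j)$ in equation (5) without ever forming $\Phi$ explicitly.

```python
# Sketch of the kernel trick behind equation (5), using numpy.
# The RBF (Gaussian) kernel k(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)
# equals a dot product in an (implicit) high-dimensional feature space.
import numpy as np

def rbf_kernel_matrix(X, gamma=0.5):
    # squared Euclidean distances between all pairs of rows of X
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
print(rbf_kernel_matrix(X))   # symmetric, ones on the diagonal
```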
MULTICLASS CLASSIFICATION WITH SUPPORT VECTOR MACHINES
The support vector machine is a well-known and reliable technique for solving classification problems, but it is inherently a binary method. There are several strategies for combining binary classifiers into a multiclass classifier; the best known for multiclass SVM-classification are the "one-against-one", "one-against-all" and DAGSVM strategies. These approaches differ in their characteristics and in how they determine the "winner class", but all of them divide the general multiclass problem into minimal two-class problems and apply specified voting procedures.

The "one-against-all" technique [4] builds N binary classifiers to solve an N-class problem (see Figure 3). Each of the N binary classifiers is trained to distinguish one class from all the others: the ith SVM-classifier is trained with a positive label for the ith class and a negative label for the remaining classes. In the validation phase we choose the class whose classifier yields the maximum of the decision function.

Figure 3. One-against-all strategy of voting

The other basic strategy, "one-against-one", evaluates all N(N-1)/2 possible binary SVM-classifiers for an N-class problem (see Figure 4); each pair of classes thus produces one "winner class".

Figure 4. One-against-one strategy of voting

There are several methodologies for choosing the "winner" under "one-against-one". One of them is the DAGSVM strategy, which uses a Directed Acyclic Graph to estimate the "winner class", as shown in Figure 5, where each node is associated with a pair of classes and a binary SVM-classifier.

Figure 5. DAGSVM technique
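The difference in classifier counts between the two basic strategies can be seen in a few lines; this is an illustrative sketch assuming scikit-learn (whose SVC wraps LIBSVM [5] internally), not the authors' implementation.

```python
# One-against-one builds N(N-1)/2 binary SVMs; one-against-all builds N.
# Illustrated on the 3-class iris dataset with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                     # N = 3 classes
ovo = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)
ova = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

print(len(ovo.estimators_))   # 3 * 2 / 2 = 3 binary classifiers
print(len(ova.estimators_))   # N = 3 binary classifiers
```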
The most effective strategy for identifying the "winner class" is held to be the voting strategy: the pattern to be recognized is processed by all binary classifiers, and each time the ith class wins a binary decision, its total number of votes is incremented by one; otherwise the vote total of the jth class is incremented by one. The winner is defined as the class that scores the maximum number of votes. This methodology for detecting the winner is called the "MaxWin" strategy. When two or more classes receive the same number of votes, we choose the winner class with the smallest index, although this is not the best decision.

We use the "one-against-one" strategy as the most effective technique for solving the multiclass problem. The source code of the SVM algorithm is implemented in the LIBSVM library [5]. We built our face recognition system from the methods and algorithms mentioned above (see Figure 6).

Figure 6. Face recognition system blocks

As Figure 6 shows, the procedure of training the SVM-classifiers amounts to solving a quadratic programming problem and requires a search for the learning parameters. The sequential minimal optimization algorithm breaks the optimization problem into an array of the smallest possible sub-problems, which are then solved analytically using special constraints derived from the Karush-Kuhn-Tucker conditions.

The search for the optimal recognition parameters of the SVM-classifiers is carried out with a cross-validation procedure; in the basic setting the parameters found are the same for all binary SVM-classifiers. The main idea of cross-validation is shown in Figure 7.

Figure 7. Cross-validation algorithm

The search for the learning parameters is then performed for each binary classifier: the regularization parameter C and the parameter γ of the radial basis function are sought on a grid, as shown in Figure 8.

Figure 8. Search for learning parameters. Stage 1

This approach to choosing the learning parameters is described in [6], but it gives only a rough estimate because of its logarithmic scale; we therefore reject the logarithmic scale and switch to the immediate values of the parameter bounds. The extreme values of the learning parameters are chosen according to the boundary points of the line corresponding to the highest test recognition rate (see Figure 9).

Figure 9. Search for learning parameters. Stage 2
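Stage 1 of this search can be approximated as follows; the sketch assumes scikit-learn rather than the authors' LIBSVM scripts, and the grid ranges and dataset are illustrative assumptions, not values from the paper.

```python
# Stage 1: scan the regularization parameter C and the RBF parameter gamma
# on a coarse logarithmic grid with k-fold cross-validation. Stage 2 in the
# paper then refines the search on a linear scale around the best boundary
# points. Ranges below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in data, not FERET
param_grid = {
    "C":     np.logspace(-2, 4, 7),
    "gamma": np.logspace(-4, 2, 7),
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```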
To increase the speed of recognition at the classification stage we propose a new scheme for combining the binary classifiers, so that only a subset of them is used to classify a pattern (see Figure 10); all binary SVM-classifiers are still trained during the learning stage.

Figure 10. Pyramidal decomposition algorithm for SVM-classification

The total number of binary SVM-classifiers used in the plain "one-against-one" scheme is

$$K = \frac{N(N-1)}{2} \qquad (6)$$

We instead separate the primary training set into subsets of M classes each and perform the classification within each of them; the proposed algorithm thereby reduces the number of operations needed to compute the winner class. In each subset we use only those trained binary SVM-classifiers that correspond to the classes of that subset. The quantity of classifiers used in recognition on each layer is computed as

$$K = \left\lfloor \frac{N}{M} \right\rfloor \frac{M(M-1)}{2} + \frac{r(r-1)}{2}, \qquad 3 \le r \le M, \qquad (7)$$

$$K = \left( \left\lfloor \frac{N}{M} \right\rfloor - 1 \right) \frac{M(M-1)}{2} + \frac{(M+r)(M+r-1)}{2}, \qquad 0 \le r < 3, \qquad (8)$$

where K is the quantity of binary SVM-classifiers used on the layer, N is the quantity of classes entering the layer, M is the quantity of classes in a subset, N/M is rounded down to the nearest integer, and r is the remainder of classes not included in the full subsets.
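To make the bookkeeping concrete, the minimal sketch below evaluates these counts; the exact grouping of equations (7) and (8) is treated here as an assumption, so the helper is an illustration rather than the authors' exact formula.

```python
# Sketch of the classifier-count bookkeeping behind equations (6)-(8).
# The grouping of (7)-(8) is an assumption about the intended formulas.

def one_against_one_count(n_classes: int) -> int:
    """Equation (6): classifiers evaluated by plain one-against-one."""
    return n_classes * (n_classes - 1) // 2

def pyramidal_layer_count(n_classes: int, subset_size: int) -> int:
    """Equations (7)-(8): classifiers evaluated on one pyramid layer
    when the classes are split into subsets of `subset_size`."""
    full, r = divmod(n_classes, subset_size)
    pair = subset_size * (subset_size - 1) // 2
    if r >= 3:                 # eq. (7): leftover classes form their own subset
        return full * pair + r * (r - 1) // 2
    merged = subset_size + r   # eq. (8): a small remainder joins the last subset
    return (full - 1) * pair + merged * (merged - 1) // 2

print(one_against_one_count(611))       # 186355 classifiers for plain OvO
print(pyramidal_layer_count(611, 10))   # far fewer evaluations per layer
```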
EXPERIMENTS AND RESULTS
To test our face recognition system we used a sample collection of images of size 512×768 pixels from the FERET database [7], containing 611 classes (unique persons). The collection counts 1833 photos, with each class represented by 3 images. To train the SVM-classifiers we therefore used 1222 images (2 photos per class), and the remaining 611 images were used to test the system; no test image appears in the training process. The results of the experiments are shown in Table 1.

TABLE I. RESULTS OF EXPERIMENTS FOR THE FERET DATABASE

Recognition rate (1st-5th place), % | 1st place, % | 2nd-5th place, % | 6th-10th place, % | 11th-50th place, % | 51st-100th place, % | Over 100, %
94.272 | 80.524 | 13.748 | 1.800 | 2.291 | 1.146 | 0.491
93.126 | 79.051 | 14.075 | 1.309 | 1.964 | 2.619 | 2.946
94.435 | 80.851 | 13.584 | 2.291 | 2.128 | 0.164 | 0.982

The traditional PCA+SVM approach gives a recognition rate of about 85% on the FERET database [8] and 95% on the ORL database. We also evaluated the performance of our system against the traditional algorithms; the results of this second, speed-oriented test are shown in Table 2.

TABLE II. PERFORMANCE OF MULTICLASS SVM-CLASSIFICATION ALGORITHMS

Strategy of multiclassification | Training time, s | Time of 1 face recognition, s | Rate of face recognition (1st-5th places), %
Basic technique "one-against-one" | 21.7 | 1.2 | 84.8
"One-against-all" | 2.8 | 0.91 | 85.9
Pyramidal algorithm | 21.7 | 0.029 | 93.8

CONCLUSION
In this paper we proposed an efficient technique for combining binary SVM-classifiers, which we call the pyramidal decomposition algorithm. The algorithm decreases the classification time and improves the recognition rate. We also proposed using individual learning parameters for the binary SVM-classifiers, obtained through cross-validation, and we extended the cross-validation search beyond the purely logarithmic scale.

REFERENCES
[1] P. Viola, M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", Computer Vision and Pattern Recognition, 2001.
[2] Y. Miyashita, T. Itozawa, H. Katsumi, S.-I. Sasaki, "Comments on the NIPALS algorithm", Journal of Chemometrics, Vol. 4, Issue 1, 1990, pp. 97-100.
[3] C.J.C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery, Vol. 2, 1998, pp. 121-167.
[4] C.-W. Hsu, C.-J. Lin, "A comparison of methods for multi-class support vector machines", IEEE Transactions on Neural Networks, Vol. 13, 2002, pp. 415-425.
[5] C.-C. Chang, "LIBSVM: a library for support vector machines", ACM Transactions on Intelligent Systems and Technology, 2011, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[6] R. Kohavi, "A study of cross-validation and bootstrap for accuracy estimation and model selection", Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, Vol. 2(12), pp. 1137-1143.
[7] H. Moon, P.J. Phillips, "The FERET verification testing protocol for face recognition algorithms", Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 48-53.
[8] T.H. Le, L. Bui, "Face Recognition Based on SVM and 2DPCA", International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 4, No. 3, 2011, pp. 85-94.