Face Detection and Recognition Method Based on Skin Color and Depth Information


Junfeng Qian, Shiwei Ma, Zhonghua Hao, Yujie Shen
School of Mechatronics Engineering and Automation, Shanghai Key Laboratory of Power Station Automation Technology, Shanghai University, Shanghai (200072), China
E-mail: qianjunfeng666@163.com

Abstract—An improved face detection and recognition method based on skin color and depth information obtained by a binocular vision system is proposed in this paper. With this method, the face area is first detected using the Adaboost algorithm. A real face is then distinguished from a fake one using skin color information and depth data. Finally, using the PCA algorithm, a specific face is recognized by comparing the principal components of the current face to those of the known individuals in a facial database built in advance. The method was applied to a service robot equipped with a binocular camera system in a real-time face detection and recognition experiment, and satisfactory results were obtained.

Keywords—face detection; face recognition; binocular camera vision; depth data

This paper was financially supported by the 2011 Innovation Foundation of Graduate Students of Shanghai University. 978-1-61284-459-6/11/$26.00 ©2011 IEEE

I. INTRODUCTION

Face detection and recognition technology [5, 8] has been widely discussed in computer vision and pattern recognition, and numerous techniques have been developed owing to the growing number of real-world applications. For a service robot, face detection and recognition are extremely important, and the emphasis must be placed on security, real-time performance, and high detection and recognition rates.

For real-time face detection, P. Viola presented a machine learning approach based on the Adaboost and cascade algorithms that is capable of detecting faces in images in real time [9]. Building on this work, many researchers began to study boosting algorithms: Stan Z. Li proposed a multi-view face detection algorithm based on FloatBoost [11], and Viola later presented an asymmetric Adaboost algorithm that can be used for fast image retrieval and face detection. For face recognition, the eigenface approach was introduced by Turk and Pentland [3]. This approach is based on PCA and was later refined by Belhumeur et al. [12] and Frey et al. [13].

However, most of the methods above are based on 2D face images and are easily affected by variable factors such as pose, illumination, expression, makeup, and age. To overcome these problems, 3D face detection and recognition methods have developed rapidly in recent years [6]. Bronstein et al. presented a recognition framework based on 3D geometric invariants of the human face [14], and Wang et al. described a real-time algorithm based on fisherfaces [10]. Although these 3D methods, which emphasize the shape of the human face, are robust in variable environments, they in turn overlook the texture information of the face. Therefore, to achieve better performance, face data should be used more fully, and both 2D and 3D face information should be considered [1-3].

In this paper, we propose an improved method that employs the skin color information and depth data of the human face for detection and the PCA (principal component analysis) algorithm for recognition. Experimental results on a service robot are given and discussed.

II. IMPROVED FACE DETECTION METHOD

A. Adaboost algorithm in OpenCV

The Adaboost algorithm is often used to detect the face area in an image. The main idea of this algorithm is to boost a large number of generally weak classifiers into a strong classifier with strong classification ability [4]. Since OpenCV is a convenient tool for image processing and provides several examples including face detection, we use OpenCV as a development tool to implement this algorithm for real-time processing.

Figure 1. Example photo of face detection by using Adaboost algorithm

However, as shown in Fig. 1, though the face area can be easily detected using only the Adaboost algorithm in OpenCV, it cannot distinguish a fake face, such as a picture of a face, from a real human face.
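The boosting idea behind the Adaboost algorithm in Section II.A can be illustrated with a toy implementation over one-dimensional decision stumps. This is only a sketch of the boosting principle, not OpenCV's detector (which boosts Haar-feature classifiers and arranges them in a cascade); the function names and the toy data are ours.

```python
import math

# Toy AdaBoost: combine weak "decision stump" classifiers into a strong one.

def stump(threshold, polarity):
    """Weak classifier: sign depends on which side of the threshold x lies."""
    return lambda x: polarity * (1 if x >= threshold else -1)

def train_adaboost(xs, ys, candidates, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n                      # sample weights, initially uniform
    ensemble = []                          # (alpha, weak classifier) pairs
    for _ in range(rounds):
        # pick the weak classifier with the lowest weighted error
        best, best_err = None, None
        for h in candidates:
            err = sum(wi for wi, x, y in zip(w, xs, ys) if h(x) != y)
            if best_err is None or err < best_err:
                best, best_err = h, err
        best_err = min(max(best_err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best))
        # re-weight: emphasize the samples the chosen classifier got wrong
        w = [wi * math.exp(-alpha * y * best(x)) for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def strong_classify(ensemble, x):
    """Weighted vote of all weak classifiers."""
    score = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if score >= 0 else -1

# Toy data that no single stump can classify (note the outlier at x = 8).
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [-1, -1, -1, 1, 1, 1, 1, -1]
candidates = [stump(t, p) for t in range(10) for p in (1, -1)]
model = train_adaboost(xs, ys, candidates, rounds=5)
print([strong_classify(model, x) for x in xs])
```

No individual stump can separate this training set, but the weighted vote of five boosted stumps reproduces every label, which is exactly the "many weak classifiers form a strong classifier" effect the detector relies on.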
So, the detection algorithm needs to be improved for real-time applications such as vision-guided service robots. From the security point of view, the robot needs to distinguish a picture of a face from a real one; being able to avoid getting "cheated" is an important ability for a service robot.

B. Face detection with skin color and depth data

To avoid being cheated by a picture of a face, skin color and depth data should both be considered [1]. The skin color model in a color image is given as:

    r = R / (R + G + B),    g = G / (R + G + B)    (1)

    Y = 0.30 R + 0.59 G + 0.11 B    (2)

in which R, G, B are the values of the original pixel. When the values of r, g, and Y satisfy formula (3), the pixel is determined to belong to a human skin area:

    0.333 < r < 0.664
    0.246 < g < 0.398
    r > g                        (3)
    g ≥ 0.5 − 0.5 r
    Y > 40

The advantage of this model is its better detection of skin color density. By using this model, real faces carrying skin color information can be distinguished from drawn faces without it, as shown in Fig. 2.

Figure 2. Result of skin color detection

After this step, face detection can eliminate most drawn faces, which lack skin color. But a photograph of a real face is still hard to eliminate. To solve this problem, we use the 3D information of the face [2]. From the disparity field between the two stereo images of the binocular camera, a depth image of the face can be estimated [7]. Fig. 3 gives an example of the depth images and the right-camera images obtained from the binocular camera for a specific person's face; what is used here is the depth data.

Figure 3. Depth images and right images of a face

Let Di (i = 1…N) be the depth value of the i-th skin pixel in the face area, where N is the total number of such pixels. As can be observed in Fig. 4, the depth values of a real face generally vary over a much larger range than those of a picture of a face, which lies in a plane. Considering this, we define the average depth of all skin pixels, AvgD, as:

    AvgD = (Σ_{i=1}^{N} Di) / N    (4)

Then the variance of the depth values of the face can be calculated as:

    di = Di − AvgD    (5)

    S = Σ_{i=1}^{N} (di)²    (6)

In formula (5), di is the difference between the depth value of each skin pixel and the average depth value, and S in formula (6) gives the variance decision value of the detection result. For the same average depth, di varies much more on a real face than on a picture of a face, as depicted in Fig. 5, so the value S in (6) is appropriate for the decision.

Figure 4. Depth of real face (left) and picture face (right)

Figure 5. Distribution of the depth data of real face and picture face

III. FACE RECOGNITION BASED ON PCA

The PCA algorithm is based on the K-L transform, a useful orthogonal transformation [5]. After the K-L transform, an image can be dimensionally reduced to a point in a feature subspace. Any face image can be projected onto this subspace to obtain a set of coordinate coefficients, and this set of coefficients can be used as a basis for face recognition.
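The two anti-spoofing checks of Section II.B can be sketched as follows: the skin test follows formulas (1)-(3), and the liveness decision follows formulas (4)-(6). The threshold 4000 is the experimental setting reported later in the paper; the function names and the synthetic depth profiles are illustrative.

```python
import numpy as np

def is_skin(R, G, B):
    """Normalized-rg skin test, formulas (1)-(3)."""
    total = float(R + G + B)
    if total == 0:
        return False
    r, g = R / total, G / total          # formula (1)
    Y = 0.30 * R + 0.59 * G + 0.11 * B   # formula (2), luma
    return (0.333 < r < 0.664 and        # formula (3)
            0.246 < g < 0.398 and
            r > g and
            g >= 0.5 - 0.5 * r and
            Y > 40)

def depth_decision_value(depths):
    """Formulas (4)-(6): spread of the skin-pixel depth samples."""
    D = np.asarray(depths, dtype=float)
    avg_d = D.mean()                     # formula (4): AvgD
    d = D - avg_d                        # formula (5): per-pixel difference
    return float((d ** 2).sum())         # formula (6): decision value S

S_THRESHOLD = 4000                       # value used in the paper's experiments

# A real face has depth relief; a flat photo does not.
real_face = 100 + 20 * np.sin(np.linspace(0, np.pi, 200))   # synthetic relief
flat_photo = np.full(200, 100.0)                             # constant depth
print(is_skin(150, 80, 60))                             # reddish pixel -> True
print(depth_decision_value(real_face) > S_THRESHOLD)    # True
print(depth_decision_value(flat_photo) > S_THRESHOLD)   # False
```

Note that S is an unnormalized sum of squared deviations, so the decision threshold implicitly depends on the number of skin pixels N in the detected face region; a per-pixel variance (dividing by N) would make the threshold independent of face size.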
Such a feature subspace is also known as the eigenface space, so the method is also known as the eigenface method.

In our work, a specific face is recognized with the PCA algorithm by comparing the principal components of the current face to those of the known individuals in a facial database built in advance, and the method is applied to a service robot equipped with a binocular camera system for real-time face recognition experiments.

The detailed procedure of the PCA algorithm is as follows [3]. First, build a training database of human faces. Second, represent each image in the database as a vector, calculate the average face vector, and subtract it from the vector of each face image. Third, calculate the eigenface vectors and the eigenface space, and project the training faces into the eigenface space to obtain their coordinate coefficients. Finally, project the test face image into the eigenface space to obtain its coordinate coefficients, calculate the Euclidean distances between the coefficients of the test image and those of the images in the database, and classify the test image by the nearest distance.

IV. EXPERIMENTS AND RESULTS

In the experiments, a binocular camera (Point Grey, Canada, model BB2-08S2), a PC (AMD 64 processor, 991 MHz), and a service robot equipped with the same binocular camera were used. The robot moves on three wheels and supports speech interaction, as shown in Fig. 6. The proposed method was programmed and tested on both the PC and the robot; when the code runs on the robot, the recognition results are announced by voice interaction.

Figure 6. Experimental environment of a service robot with binocular camera vision system

A. Experiments for face detection

First, a face drawn on paper was presented to the binocular camera at different head rotation angles. Second, a real face and a picture of the same person's face were each presented to the camera. The detection results are shown in Fig. 7 and listed numerically in Table I. Here, the variance decision value S in formula (6) was set to 4000 for the distinction, and one detection takes nearly 1 s. It can be observed that the real face and the fake one can be distinguished accurately within a certain range of head angles, and the results for drawn faces are very good; in other words, nearly all drawn faces are excluded by the detection stage. However, as the angle increases, the results are not always satisfactory. The likely reason is that skin pixels are difficult to extract accurately under changing conditions such as illumination and skin tone.

Figure 7. Face detection of real face and picture face

TABLE I. RESULTS OF FACE DETECTION

Angle of head rotation | Total | Success | Detection rate | Average time (once)
-30° | 50 | 49 | 98% | 1022.34 ms
-15° | 50 | 50 | 100% | 980.65 ms
0° | 50 | 50 | 100% | 976.24 ms
15° | 50 | 50 | 100% | 989.21 ms
30° | 50 | 49 | 98% | 1006.15 ms

B. Experiments for face recognition on the robot

For the real-time face recognition experiments on the robot, a training face database was built in advance. The face images in it meet the following conditions: (a) they are all grayscale images; (b) each image is 50 × 50 in size; (c) the posture of each face is allowed to change only slightly. One of the experimental training face databases is shown in Fig. 8.

In the experiment, real faces of different persons were classified. Some faces are stored in the database and are therefore known to the robot, while others are strangers. Several real faces and pictures of faces, some known and some unknown, were presented to the camera.

Figure 8. Faces in the training database
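The four-step PCA procedure of Section III, including the nearest-Euclidean-distance classification used in the experiments, can be sketched in a few lines of NumPy. Random vectors stand in for the 50 × 50 grayscale training images, and the SVD of the centered data plays the role of the K-L transform; the names and sizes here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, img_size, k = 5, 50 * 50, 4

# Steps 1-2: training database as a matrix, one flattened face per row,
# then subtract the average face.
faces = rng.random((n_people, img_size))       # stand-ins for real face images
avg_face = faces.mean(axis=0)
centered = faces - avg_face

# Step 3: eigenface basis via SVD of the centered data (equivalent to the
# K-L transform); keep the k leading components.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:k]                            # k x img_size
train_coords = centered @ eigenfaces.T         # coefficients of known faces

# Step 4: project a test image and classify by nearest Euclidean distance.
def recognize(test_image):
    coords = (test_image - avg_face) @ eigenfaces.T
    dists = np.linalg.norm(train_coords - coords, axis=1)
    return int(np.argmin(dists))               # index of the matched person

# A slightly perturbed copy of person 2 should still match person 2.
probe = faces[2] + 0.05 * rng.random(img_size)
print(recognize(probe))                        # → 2
```

With only five training images the centered data has rank at most four, so k = 4 components capture it exactly; on a real database the paper instead keeps the eigenvectors of the 150 largest eigenvalues.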
For the PCA algorithm, the 150 largest eigenvalues and their eigenvectors are chosen to build the eigenface space, and Euclidean distance classification is used: the Euclidean distance between the projection of the test face and that of each stored face is calculated, and the test sample is assigned to the class with the smallest distance.

One successful face recognition result for a specific person is shown in Fig. 9. Table II lists the total and successful recognition counts at different head rotation angles, from which the recognition rate can be calculated; the average time for one recognition was also recorded. Although the recognition rate is not very high, the method is fast enough for the robot to perform its services.

Figure 9. Face recognition result of the robot

TABLE II. FACE RECOGNITION WITH DIFFERENT HEAD POSTURES

Angle of head rotation | Total | Success | Recognition rate | Average time (once)
-30° | 50 | 40 | 80% | 870.5 ms
-15° | 50 | 42 | 82% | 830.23 ms
0° | 50 | 44 | 88% | 785.72 ms
15° | 50 | 42 | 84% | 805.25 ms
30° | 50 | 41 | 82% | 826.36 ms

V. CONCLUSION

The proposed method improves 2D face detection techniques by additionally considering the 3D facial information obtained by a binocular camera vision system. The skin color information and depth data of the human face are employed for detection, and the PCA algorithm is employed for recognition. The method can not only detect a person close to the camera but also distinguish between a real face and a picture of a face: when a face is presented to the service robot's camera, the method determines whether it is real or a picture, and recognition is then performed only on real faces. The results show that the method is fast enough for the robot to perform its services, although the recognition rate is not yet high. The recognition accuracy may currently be affected by illumination, expressions, and mechanical vibration in some cases; this is left for future investigation.

REFERENCES

[1] S. Kosov, K. Scherbaum, K. Faber, T. Thormählen, and H.-P. Seidel, "Rapid stereo-vision enhanced face detection," in Proc. IEEE International Conference on Image Processing, 2009, pp. 1221–1224.
[2] S. Kosov, T. Thormählen, and H.-P. Seidel, "Accurate real-time disparity estimation with variational methods," in Proc. International Symposium on Visual Computing, 2009, pp. 796–807.
[3] T.-H. Sun, M. Chen, S. Lo, and F.-C. Tien, "Face recognition using 2D and disparity eigenface," Expert Syst. Appl., vol. 33, no. 2, pp. 265–273, 2007.
[4] R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," Springer-Verlag Berlin Heidelberg, LNCS 2781, pp. 297–304, 2003.
[5] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, pp. 1–15, 2006.
[6] F. Tsalakanidou and D. Tzovaras, "Use of depth and colour eigenfaces for face recognition," Pattern Recognition Letters, vol. 24, pp. 1427–1435, 2003.
[7] Y. Ming and Q. Ruan, "Face stereo matching and disparity calculation in binocular vision system," in Proc. 2nd International Conference on Industrial and Information Systems, 2010, pp. 281–284.
[8] A. F. Abate, M. Nappi, D. Riccio, and G. Sabatino, "2D and 3D face recognition: A survey," Pattern Recognition Letters, vol. 28, pp. 1885–1906, 2007.
[9] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Computer Vision and Pattern Recognition, 2001, p. 511.
[10] J.-G. Wang, E. T. Lim, X. Chen, and R. Venkateswarlu, "Real-time stereo face recognition by fusing appearance and depth fisherfaces," J. VLSI Signal Process. Syst., vol. 49, no. 3, pp. 409–423, 2007.
[11] S. Z. Li, L. Zhu, Z. Q. Zhang, and H. J. Zhang, "Learning to detect multi-view faces in real time," in Proc. 2nd International Conference on Development and Learning, Washington DC, 2002.
[12] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 711–720, 1997.
[13] B. J. Frey, A. Colmenarez, and T. S. Huang, "Mixtures of local linear subspaces for face recognition," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1998, pp. 32–37.
[14] A. M. Bronstein, M. M. Bronstein, and R. Kimmel, "Three-dimensional face recognition," Int. J. Comput. Vision, vol. 64, no. 1, pp. 5–30, 2005.