struct video_picture grab_pic; // video_picture contains a variety of properties of the captured image, such as brightness, hue, contrast, whiteness, depth and so on
struct video_mmap grab_buf; // video_mmap is used to map a frame into memory
struct video_mbuf grab_vm; // video_mbuf uses mmap to describe the frame information in the camera buffer memory: size, number of frames, offsets and so on

C. Embedded Human Face Recognition System

A traditional face recognition system includes five parts: (1) inputting an image from the camera, (2) face detection, (3) preprocessing, (4) feature extraction, and (5) feature matching. Figure 1 is the work flow chart of the embedded face recognition system in my project.

Fig. 1 Work flow chart of the embedded face recognition system.

In the flow above, the first step is to detect the human face. Face detection is one of the main technical parts of an automatic face recognition system: it must determine whether the given image contains a face and, if it is someone's face, return the position of the face. In order to better recognize all of the faces in the image, the system uses face-class Haar features to detect human faces. After all of the faces have been found, the next step is to find the specific face.

Compared with a traditional face recognition system, the embedded face recognition system in this thesis takes Gabor feature extraction on the face region and principal component analysis (PCA) with a nearest neighbor classifier as the core identification algorithms. The face samples are preprocessed and their Gabor feature vectors are extracted; PCA is then used to reduce the dimension of the feature vectors, which yields the PCA subspace of the samples' Gabor features, called the face space. The input faces are then projected into the PCA subspace to obtain the image face.
The distance from the face space to the image face is taken as the measure of face similarity.

Although this system uses face similarity to determine the recognition result without training a classifier, the human face database of the system is relatively large, and the feature extraction and the feature subspace transform need a large amount of memory, which embedded hardware is basically unable to provide. Therefore the subspace transform of the training samples is completed on a common PC, and the PC then transmits the transformed subspace files to the embedded platform through a wireless network or the ActiveSync synchronization software.

III. ANALYSIS OF THE MAIN ALGORITHMS

A. Face Detection Based on Haar Features

The paper Viola published in 2001 can be regarded as the watershed of real-time face detection technology: by integrating the Adaboost and cascade algorithms, real-time face detection was implemented, and face detection began to become practical. Papageorgiou and Viola extracted features from images by applying the wavelet transform and put forward the original concept of local Haar-like features. The feature database contains 3 types of features in 4 different forms. The 3 types are the 2-rectangle feature, the 3-rectangle feature, and the 4-rectangle feature. The 4 forms are shown in Figure 2.

Fig. 2 The original forms of Haar-like features.

Now we generate classifiers based on the basic principles of Adaboost face detection.

Simple classifier: For an input window x, let f_j(x) be the eigenvalue of the j-th rectangle feature. The j-th feature is turned into a simple classifier of the form

    h_j(x) = 1 if p_j f_j(x) < p_j θ_j, and h_j(x) = 0 otherwise,

where h_j indicates the value of the simple classifier, θ_j is the threshold, and p_j indicates the direction of the inequality, taking the value ±1.

Strong classifier: the learning process of the Adaboost algorithm can be understood as a "process of greedy feature selection."
A problem is solved by a weighted voting mechanism: a weighted combination of many classification functions makes the final determination. The key of the algorithm is to give a good classification function a bigger weight and a bad one a smaller weight. Adaboost is a very effective way to find a small number of features that classify the target well. The Haar features selected by Adaboost from my face image are shown in Figure 3. The function of the strong classifier is

    H(x) = 1 if Σ_{i=1}^{T} a_i h_i(x) ≥ (1/2) Σ_{i=1}^{T} a_i, and H(x) = 0 otherwise,

where T is the number of training rounds, a_i = log(1/β_i), β_i = ε_i / (1 − ε_i), and ε_i is the weighted error.

Fig. 3 Haar features of a face image selected by Adaboost.

Cascade of strong classifiers: With the Adaboost algorithm the strong classifier is made of the important features and can be used to detect faces. During the testing process, because it has to
scan windows of all different sizes at every position of the tested image, the number of detection windows is very large. During actual face detection we should therefore adopt a cascade of classifiers that works "heavy first, then light": at first, a simple strong classifier containing the most important features is used to find non-face windows and discard them. As the importance of the remaining features declines, the number of classifiers increases, but the number of windows still to be examined keeps decreasing.

B. Gabor Filters and Gabor Feature Extraction of the Human Face

Human face images represented directly by pixel gray values are vulnerable to facial expressions, illumination conditions and various geometric transformations, while the Gabor filter can capture the local structure information of the image corresponding to spatial location, spatial frequency and direction. It is not sensitive to the brightness of the face image or to changes of facial expression, which is beneficial for multi-pose face recognition.

The two-dimensional Gabor filter is a kind of band-pass filter. In both the spatial domain and the frequency domain the Gabor filter has good discriminating ability, such as direction selectivity in the spatial domain and frequency selectivity in the frequency domain. The parameters of the two-dimensional Gabor filter reflect the ways of sampling in the spatial and frequency domains and determine its ability to represent the signal.

The two-dimensional Gabor wavelet is the filter bank generated from the two-dimensional Gabor filter function by scale stretching and angle rotation; the choice of its parameters is usually considered in frequency space. In order to sample the entire frequency domain of an image, a Gabor filter bank with multiple center frequencies and different directions can be used to sample the image.
Different choices of the parameter K embody the different ways the two-dimensional Gabor wavelet samples the frequency and direction spaces, and the parameter σ decides the bandwidth of the filters. In the spatial domain, the parameters φ, K and σ reflect, respectively, the direction of the filter texture, the wavelength of the texture and the window size of the Gaussian envelope.

As the parameter φ changes, the real and imaginary parts of the two-dimensional Gabor filter function show different texture features, so different filters are able to respond to image texture in the corresponding direction. When the center frequency K = π/4, the variation of the real part of the two-dimensional Gabor filter function with the direction parameter φ ∈ {0, π/8, 2π/8, 3π/8, 4π/8} is shown in Figure 4.

Fig. 4 The variation with the direction parameter φ.

With the decrease of the parameter K, the variation of the wavelength (λ = 2π/K) is shown in Figure 5.

Fig. 5 The variation with the parameter K.

With the increase of the parameter σ, the variation of the window of the filter's real part is shown in Figure 6. The effective radius is r = 2√2·σ/K.

Fig. 6 The variation with the parameter σ.

C. Human Face Image Recognition Based on PCA

If the high-dimensional Gabor feature vectors above are used directly for classifier training and image recognition, they will cause the curse of dimensionality. Therefore, we need to reduce the dimension of the high-dimensional Gabor feature vectors.

PCA is a statistical dimensionality reduction method, which produces the optimal linear least-squares decomposition of a training set. Kirby and Sirovich (1990) applied PCA to representing faces, and Turk and Pentland (1991) extended PCA to recognizing faces. The algorithm is based on an information theory approach that decomposes face images into a small set of characteristic feature images called "eigenfaces", which may be thought of as the principal components of the initial training set of face images.
Automatically learning and later recognizing new faces is practical within this framework. Recognition under widely varying conditions is achieved by training on a limited number of characteristic views. The approach has advantages over other face recognition schemes in its speed and simplicity, its learning capacity, and its insensitivity to small or gradual changes in the face image.

The main idea of PCA (or the Karhunen-Loeve expansion) is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call the "face space". Each vector is of length N², describing an N by N image. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as "eigenfaces".

IV. SYSTEM TEST

A. Face Detection

The face-class Haar feature introduced above is used to detect human faces. The original images, which are my own images, are shown in Figure 7.

Fig. 7 The original images.

The core codes are as follows:

cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); // loads a trained cascade of Haar classifiers from a file
cvEqualizeHist( small_img, small_img ); // equalizes the histogram of the input image
CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0, cvSize(30, 30) ); // detects the faces in the input image
void detect_and_draw( IplImage* img ); // circles the marked human faces on the image

The final images of face detection by using the face-class Haar feature are shown in Figure 8.

Fig. 8 The final images of face detection.

B. Response Features of the Gabor Transform

The Gabor transform is suitable for analyzing multi-channel and multi-resolution images in order to obtain the spatial frequency, the spatial location and the direction-selective local feature information, such as the organs of the human face. The next figures show the response features of the image edge (Figure 9) and the spatial location (Figure 10) obtained by the Gabor transform introduced above.

The core codes are as follows:

CvGabor *gabor1 = new CvGabor; // creates a Gabor filter
kernel = gabor1->get_image(CV_GABOR_REAL); // gets the real part of the Gabor filter
gabor1->conv_img(img, reimg, CV_GABOR_REAL); // gets the real part of the Gabor filter response of the image
gabor1->conv_img(img, reimg, CV_GABOR_IMAG); // gets the imaginary part of the Gabor filter response of the image
gabor1->conv_img(img, reimg, CV_GABOR_MAG); // gets the modulus of the Gabor filter response of the image

Fig. 9 The features of the image edge.

Fig. 10 The features of the spatial location.

C. Training Data of PCA

Now the sample images are projected into the subspace, which is discussed in the above paragraph, and the training data of PCA shown in Figure 11 are obtained.

The core codes are as follows:

void doPCA(); // does PCA on the training faces
void storeTrainingData(); // stores the training data
int loadTrainingData(CvMat ** pTrainPersonNumMat); // loads the training data
double * findNearestNeighbor(float * projectedTestFace); // the distance as a measure of face similarity

Fig. 11 The sample images and the training data of PCA.

Then the system captures a human face image in real time, measures it with the Euclidean distance, and tests the similarity between the training faces and the image. If the measured difference is within the threshold, the real-time image meets the requirements and exists in the face space; the system then finds the most similar face in the face space and outputs the corresponding personal information. Otherwise the image cannot be recognized and the system logs out. The number of training images can be enlarged if the scope of recognition needs to be expanded. Several system test steps show that the system can recognize faces as required and obtain the corresponding person's information.

V. CONCLUSION

In this thesis, the development process of an embedded face recognition system is introduced in detail, including the transplantation of the Linux operating system, the development of drivers, and the research on the corresponding face recognition algorithms. Testing shows that the system basically achieves the desired result.

REFERENCES

[1] He Dong-feng, Ling Jie. Face Recognition: A Survey [J]. Microcomputer Development, 2003, 13(12): 75-78.
[2] Chellappa R, Wilson C L, Sirohey S. Human and Machine Recognition of Faces: A Survey [J]. Proceedings of the IEEE, 1995, 83(5): 705-741.
[3] Yang Jian-wei, Yang Jian-xiang. Linux Transplant Based on the Processor of S3C2410 [J]. Information Technology, 2007, (8): 97-100.
[4] Liu Chun-cheng. USB Webcam Driver Development Based on Embedded Linux [J]. Computer Engineering and Design, 2007, 28(8): 1885-1888.
[5] Chen Jian-zhou, Li Li, Hong Gang, Su Da-wei. An Approach of Face Detection in Color Images Based on Haar Wavelet [J]. Microcomputer Information, 2005, 21(10-1): 157-159.
[6] P. Viola, M. Jones. Rapid Object Detection Using a Boosted Cascade of Simple Features. In: IEEE Conference on Computer Vision and Pattern Recognition, 2001: 511-518.
[7] Zhu Jian-xiang, Su Guang-da, Li Ying-chun. Facial Expression Recognition Based on Gabor Feature and Adaboost [J]. Journal of Optoelectronics Laser, 2006, 17(8): 993-998.
[8] Cao Lin, Wang Dong-feng, Liu Xiao-jun, Zou Mou-yan. Face Recognition Based on Two-Dimensional Gabor Wavelets [J]. Journal of Electronics & Information Technology, 2006, 28(3): 490-494.
[9] Liu Jing, Zhou Ji-liu. Face Recognition Based on Gabor Features and Kernel Discriminant Analysis Method [J]. Computer Application, 2005, 25(9): 2131-2133.
[10] Turk M, Pentland A. Face Recognition Using Eigenfaces. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Oakland, CA: IEEE Computer Society Press, 1991: 586-591.