Research and Implementation of Embedded Face Recognition System Based on ARM9

Shimin Wang, Jihua Ye
College of Computer Information Engineering
Jiangxi Normal University
Nanchang, China

Abstract—Human face recognition is a widely used biometric recognition technology and a core issue in security. The embedded face recognition system described here is based on the MagicARM2410 development board and comprises the transplantation of the Linux operating system, the development of device drivers, face detection using face-class Haar features, and face recognition using Gabor features and the PCA transform. Tests on actual images show that the system basically achieves the desired result.

Keywords—embedded operating system; face recognition; Linux; Haar; Gabor; PCA

I. INTRODUCTION

Since the appearance of face recognition technology [1] in the 1980s, it has developed rapidly, especially in the 1990s. It has now achieved solid results and is used in security and other areas. Automatic face recognition is a widely used biometric recognition technology. Compared with other identification methods, face recognition is direct, friendly and convenient. People identify each other visually by the face, and can do so fairly reliably even under severe visual disturbance [2]. In the international biometric market, the share of face recognition is now second only to fingerprint recognition and is still growing, breaking the former monopoly of fingerprint recognition. Research on face recognition technology is therefore very important. This project, based on ARM9 and the Linux operating system, is a portable device that meets current identity-authentication requirements and can be used in many areas.

II. IMPLEMENTATION OF THE SYSTEM

The system is based on the MagicARM2410 board, designed by Guangzhou Zhiyuan Electronics Co., Ltd.
It provides ample storage resources and the typical interfaces of an embedded system. On this hardware platform, the embedded Linux operating system and device drivers are developed first, and the face recognition system is then built on top of the operating system.

A. Transplantation of the Embedded Linux Kernel

The kernel version chosen for the system is one intended for embedded systems, Linux kernel v2.4.18. Typing the "make menuconfig" command in the kernel source directory opens the configuration interface, where many aspects of the kernel can be configured. The commands generally used to compile the kernel are:

root# make dep
root# make zImage

Kernel C source files have dependency relationships with header files, and the Makefile has to know these relationships to determine which parts of the source need to be recompiled. When "make menuconfig" is used to configure the kernel, the command automatically generates the header files required to compile it. Therefore, after the kernel configuration is changed, the correct dependency relationships must be reestablished with "make dep"; this step is no longer required in kernel 2.6 [3].

B. Driver Development

A Linux device driver can be divided into the following parts: registration and deregistration of the driver; opening and releasing the device; reading and writing the device; controlling the device; and handling device interrupts and polling. The embedded Linux kernel already contains driver source code for many general-purpose hardware devices on different hardware platforms; these usually need only minor modification before use. For more specialized equipment (such as the camera), a detailed understanding of the hardware is needed before the driver can be developed [4].

Video4Linux provides a unified programming interface for the USB camera.
In this paper, the USB camera device file (/dev/video0) is used by the corresponding programs to capture and store images. The data structures commonly used in the program are as follows:

struct video_capability grab_cap; video_capability contains the camera's basic information, such as the device name, the maximum and minimum resolution, and signal-source information. Its member variables include: name[32], maxwidth, maxheight, minwidth, minheight, channels (the number of signal sources), type, and so on.

This work is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 60674054, the Research Plan of Jiangxi Province (2007-136) and the Research Plan of Jiangxi Normal University (2009-7).
978-1-4244-7739-5/10/$26.00 ©2010 IEEE
struct video_picture grab_pic; video_picture contains various properties of the captured images, such as: brightness, hue, contrast, whiteness, depth, and so on.

struct video_mmap grab_buf; video_mmap is used to map memory.

struct video_mbuf grab_vm; video_mbuf uses mmap to map frame information, which is in fact the frame information from the camera buffer memory. It contains: size, frames, offsets, and so on.

C. Embedded Human Face Recognition System

A traditional face recognition system includes five parts: (1) image input from the camera, (2) face detection, (3) preprocessing, (4) feature extraction, and (5) feature matching. Figure 1 is the work flow chart of the embedded face recognition system in this project.

Fig. 1 Work flow chart of the embedded face recognition system.

In the figure above, the first step is to detect the human face. Face detection is one of the main technical parts of an automatic face recognition system. It must determine whether the given image contains a face and, if so, return the position of the face. In order to better recognize all of the faces in the image, the system uses face-class Haar features to detect human faces [5]. After all of the faces have been found, the next step is finding the specific face. Compared with a traditional face recognition system, the embedded face recognition system in this paper uses Gabor feature extraction on the face region and principal component analysis (PCA) with a nearest-neighbor classifier as the core algorithms of system identification. Face samples are preprocessed and their Gabor feature vectors extracted; PCA is used to reduce the dimension of the feature vectors, giving the PCA subspace of the samples' Gabor features, which we call face space. The input faces are then projected into the PCA face subspace to obtain the image face.
The distance from face space to image face is used as a measure of face similarity.

Although this system uses face similarity to determine the recognition result without a trained classifier, the face database in the system is relatively large, and the feature extraction and feature-subspace transform need more memory than embedded hardware can generally provide. The subspace transform of the training samples is therefore completed on an ordinary PC, which then transmits the transformed subspace files to the embedded platform over a wireless network or via ActiveSync synchronization software.

III. ANALYSIS OF THE MAIN ALGORITHMS

A. Face Detection Based on Haar Features

The paper Viola [6] published in 2001 can be regarded as the watershed of real-time face detection technology: by integrating AdaBoost and cascade algorithms they implemented real-time face detection, and face detection began to become practical. Papageorgiou and Viola extracted features from images using the wavelet transform and put forward the original concept of local Haar-like features. The feature database contains 3 types and 4 different forms of features. The 3 types are: 2-rectangle features, 3-rectangle features, and 4-rectangle features. The 4 forms are shown in Figure 2.

Fig. 2 The original forms of Haar-like features.

We now generate classifiers following the basic principles of AdaBoost face detection.

Simple classifier: for an input window x, let f_j(x) be the eigenvalue of the j-th rectangle feature. The j-th feature generates a simple classifier of the form:

h_j(x) = 1 if p_j f_j(x) < p_j θ_j; 0 otherwise,

where h_j is the value of the simple classifier, θ_j is the threshold, and p_j indicates the direction of the inequality, taking the value ±1.

Strong classifier: the learning process of the AdaBoost algorithm can be understood as a "process of greedy feature selection."
The problem is solved by a weighted voting mechanism: the determination is made by a weighted combination of many classification functions. The key of the algorithm is to give good classification functions larger weights and bad ones smaller weights. AdaBoost is a very good way to find a few features that classify the target effectively. The Haar features of my face image selected by AdaBoost are shown in Figure 3. The strong classifier function is:

h(x) = 1 if Σ_{t=1}^{T} a_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} a_t; 0 otherwise,

where T is the number of training rounds, a_t = log(1/β_t), β_t = ε_t / (1 − ε_t), and ε_t is the weighted error.

Fig. 3 Haar features of a face image selected by AdaBoost.

Cascaded strong classifiers: the strong classifier built by AdaBoost from the important features can be used to detect faces. During testing, windows of many different sizes must be scanned over every position of the tested image, so the number of detection windows is very large. In actual face detection we therefore adopt a "heavy first, then light" arrangement of classifiers: at first, a simple strong classifier built from the most important features is used to find and discard non-face windows. As the importance of the features declines, the number of classifiers increases, but the number of windows still to be examined declines.

B. Gabor Filters and Gabor Feature Extraction of the Human Face

Face images represented directly by pixel gray values are vulnerable to facial expression, illumination conditions and various geometric transformations, whereas the Gabor filter can capture the local structure information of the image corresponding to spatial location, spatial frequency and direction. It is not sensitive to the brightness of the face image or to changes of facial expression, which is beneficial for multi-pose face recognition [7].

The two-dimensional Gabor filter is a kind of band-pass filter; it discriminates well in both the spatial and the frequency domain, namely direction selectivity in the spatial domain and frequency selectivity in the frequency domain. The parameters of the two-dimensional Gabor filter reflect the way the spatial and frequency domains are sampled, and determine its ability to express a signal.

The two-dimensional Gabor wavelet [8] is the filter bank generated from the two-dimensional Gabor filter function by scale stretching and angle rotation; the choice of its parameters is usually considered in frequency space. In order to sample the entire frequency domain of an image, a Gabor filter bank with multiple center frequencies and different directions can be used to sample the image.
The different choices of the parameters K and φ embody the sampling of the two-dimensional Gabor wavelet in frequency and in direction, and the parameter σ decides the bandwidth of the filters. In the spatial domain, the parameters φ, K and σ reflect, respectively, the direction of the filter texture, the wavelength of the texture and the window size of the Gaussian.

As the parameter φ changes, the real and imaginary parts of the two-dimensional Gabor filter function show different texture features, and the different filters respond to image texture features of the corresponding direction [9]. When the center frequency K = π/4, the variation of the real part of the two-dimensional Gabor filter function with the direction parameter φ ∈ {0, π/8, 2π/8, 3π/8, 4π/8} is shown in Figure 4.

Fig. 4 The variation with the direction parameter φ.

With decreasing K, the variation of the wavelength (λ = 2π/K) is shown in Figure 5.

Fig. 5 The variation with the parameter K.

With increasing σ, the variation of the window of the filter's real part is shown in Figure 6; the effective radius is r = 2√2 σ/K.

Fig. 6 The variation with the parameter σ.

C. Human Face Image Recognition Based on PCA

Direct use of the above high-dimensional Gabor feature vectors for classifier training and image recognition would cause the curse of dimensionality. We therefore need to reduce the dimension of the high-dimensional Gabor feature vectors.

PCA is a statistical dimensionality-reduction method, which produces the optimal linear least-squares decomposition of a training set. Kirby and Sirovich (1990) applied PCA to representing faces, and Turk and Pentland (1991) extended PCA to recognizing faces [10]. The algorithm is based on an information-theoretic approach that decomposes face images into a small set of characteristic feature images called "eigenfaces", which may be thought of as the principal components of the initial training set of face images.
Automatically learning and later recognizing new faces is practical within this framework. Recognition under widely varying conditions is achieved by training on a limited number of characteristic views. The approach has advantages over other face recognition schemes in its speed and simplicity, learning capacity, and insensitivity to small or gradual changes in the face image.

The main idea of PCA (or the Karhunen-Loeve expansion) is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call "face space". Each vector is of length N² and describes an N by N image. Because these vectors are the eigenvectors of the covariance matrix of the original face images, and because they are face-like in appearance, we refer to them as "eigenfaces".

IV. SYSTEM TEST

A. Face Detection

The face-class Haar features introduced above are used to detect human faces. The original images, which are my own images, are shown in Figure 7.

Fig. 7 The original images.

The core code is as follows:

cascade = (CvHaarClassifierCascade*)cvLoad( cascade_name, 0, 0, 0 ); // load a trained cascade of Haar classifiers from a file
cvEqualizeHist( small_img, small_img ); // equalize the histogram of the input image
CvSeq* faces = cvHaarDetectObjects( small_img, cascade, storage, 1.1, 2, 0, cvSize(30, 30) ); // detect the faces in the input image
void detect_and_draw( IplImage* img ); // circle the detected human faces on the image

The final images of face detection using the face-class Haar features are shown in Figure 8.

Fig. 8 The final images of face detection.

B. Response Features of the Gabor Transform

The Gabor transform is suitable for analyzing multi-channel and multi-resolution images in order to obtain spatial frequency, spatial location and direction-selective local feature information, such as the organs of the human face. The next figures show the response features of the image edge (Figure 9) and the spatial location (Figure 10) under the Gabor transform introduced above.

The core code is as follows:

Gabor *gabor1 = new CvGabor; // create a Gabor filter
kernel = gabor1->get_image(CV_GABOR_REAL); // get the real part of the Gabor filter
gabor1->conv_img(img, reimg, CV_GABOR_REAL); // get the real part of the Gabor filter response of the image
gabor1->conv_img(img, reimg, CV_GABOR_IMAG); // get the imaginary part of the Gabor filter response of the image
gabor1->conv_img(img, reimg, CV_GABOR_MAG); // get the modulus of the Gabor filter response of the image

Fig. 9 The features of the image edge.

Fig. 10 The features of the spatial location.

C. Training Data of PCA

Now the sample images are projected into the subspace discussed above, giving the training data of PCA in Figure 11:

Fig. 11 The sample images and the training data of PCA.

The core code is as follows:

void doPCA(); // do PCA on the training faces
void storeTrainingData(); // store the training data
int loadTrainingData(CvMat ** pTrainPersonNumMat); // load the training data
double * findNearestNeighbor(float * projectedTestFace); // the distance as a measure of face similarity

Then a human face image is acquired in real time, its distance to the training faces is measured with the Euclidean distance, and the similarity between the training faces and the image is tested. If the measured difference is within the threshold, the real-time image meets the requirements and exists in the face space; the most similar face in the face space is then found and the corresponding personal information is output. Otherwise the image cannot be recognized and the system logs out. The number of training images can be increased if needed to expand the scope of recognition. Several system tests show that the system can recognize faces as required and retrieve the corresponding person's information.

V. CONCLUSION

In this paper, the development process of the embedded face recognition system is introduced in detail, including transplantation of the Linux operating system, the development of drivers, and research on the corresponding face recognition algorithms. Tests show that the system basically achieves the desired result.

REFERENCES

[1] He Dong-feng, Ling Jie. Face Recognition: A Survey [J]. Microcomputer Development, 2003, 13(12): 0075-0078.
[2] Chellappa R, Wilson C L, Sirohey S. Human and machine recognition of faces: A survey [J]. Proceedings of the IEEE, 1995, 83(5): 705-741.
[3] Yang Jian-wei, Yang Jian-xiang. Linux Transplant Based on the Processor of S3C2410 [J]. Information Technology, 2007, (8): 0097-0100.
[4] Liu Chun-cheng. USB Webcam Driver Development Based on Embedded Linux [J]. Computer Engineering and Design, 2007, 28(8): 1885-1888.
[5] Chen Jian-zhou, Li Li, Hong Gang, Su Da-wei. An Approach of Face Detection in Color Images Based on Haar Wavelet [J]. Microcomputer Information, 2005, 21(10-1): 157-159.
[6] P. Viola, M. Jones. Rapid object detection using a boosted cascade of simple features. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 511-518, 2001.
[7] Zhu Jian-xiang, Su Guang-da, Li Ying-chun. Facial Expression Recognition Based on Gabor Feature and Adaboost [J]. Journal of Optoelectronics Laser, 2006, 17(8): 0993-0998.
[8] Cao Lin, Wang Dong-feng, Liu Xiao-jun, Zou Mou-yan. Face Recognition Based on Two-Dimensional Gabor Wavelets [J]. Journal of Electronics & Information Technology, 2006, 28(3).
[9] Liu Jing, Zhou Ji-liu. Face recognition based on Gabor features and kernel discriminant analysis method [J]. Computer Application, 2005, 25(9): 2131-2133.
[10] Turk M, Pentland A. Face Recognition Using Eigenfaces. Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Oakland CA: IEEE Computer Society Press, 1991: 586-591.