Proceedings of the 8th WSEAS International Conference on SIGNAL PROCESSING, ROBOTICS and AUTOMATION
ISSN: 1790-5117, ISBN: 978-960-474-054-3

Automatic Human Face Counting in Digital Color Images

MONA F.M. MURSI (1,2), GHAZY M.R. ASSASSA (1,3), ABEER AL-HUMAIMEEDY (1,2), KHALED ALGHATHBAR (1,4)
monmursi@coeia.edu.sa, gassassa@coeia.edu.sa, humaimeedy@coeia.edu.sa, ghathbar@coeia.edu.sa
1 Center of Excellence in Information Assurance (CoEIA), 2 Department of Information Technology, 3 Department of Computer Science, 4 Department of Information Systems
College of Computer and Information Sciences, King Saud University, Kingdom of Saudi Arabia

Abstract: Automatic human face detection is considered the initial process of any fully automatic system that analyzes the information contained in human faces (e.g., identity, gender, expression, age, race, and pose). In this paper, color segmentation is used as a first step in the human face detection process, followed by grouping likely face regions into clusters of connected pixels. Median filtering is then performed to eliminate the small clusters, and the resulting blobs are matched against a face pattern (ellipse) subject to constraints for rejecting non-face blobs. The system was implemented and validated for images with different formats, sizes, numbers of people, and complexity of the image background.

Key-words: Face detection, color segmentation, face pattern (ellipse).

1 Introduction
Face detection deals with finding faces in a given image and hence returns the face location and content. The problem of automatic human face detection can be stated as follows: given a still or video image, detect and localize an unknown number of faces, if any. The problem of human face detection is a complex and highly challenging one, having a variety of parameters including face color, scale, location, size, pose (frontal, profile), orientation (upright, rotated), illumination, partial occlusion, cluttered scenes, image background, and camera distance. Face detection is one of the visual tasks which humans can do effortlessly; in computer vision terms, however, this task is not trivial. Face detection has various applications in many areas, an important example of which is security-related surveillance in confined areas. Once an image is analyzed for face detection, the faces in the image are tallied. There are many research problems related to face detection; the list includes: (1) face localization: determining the image position of a single face [1]; (2) facial feature detection: detecting the presence and location of features such as eyes, nose, nostrils, eyebrows, mouth, lips, ears, etc. [2, 3]; (3) face recognition: comparing an input image against a database (gallery) and reporting a match, if any; (4) face tracking: continuously estimating the location and possibly the orientation of a face in an image sequence in real time; and (5) facial expression recognition: identifying the affective state, e.g., happy, sad, wondering, frightened, disgusted. The solution to the face detection problem involves various areas in image processing, including segmentation, extraction, and verification of faces and possibly facial features against an uncontrolled background. Segmentation schemes have been presented, particularly those using motion, color, and generalized information. A survey on pixel-based skin color detection techniques was presented in [4]. The use of statistics and neural networks has also enabled faces to be detected from cluttered scenes at different distances from the camera [5]. Many algorithms have been implemented for facial detection, including the use of color information, template matching, neural networks, edge detection, and Hough transforms [5, 6].
This paper is organized as follows: section 2 presents related work; section 3 discusses the proposed face detection approach; in section 4, test results are presented and discussed. Finally, section 5 concludes this paper.

2 Related Work
Face detection (FD) methods require a priori information about the face; these methods can generally be classified under two broad categories, with some methods overlapping the boundaries of these categories: feature-based and appearance-based methods.

2.1 Feature-based Approach
This approach makes explicit use of face knowledge and follows the classical detection methodology in which low-level features are derived prior to knowledge-based analysis [7]. The apparent properties of the face, such as skin color and face geometry, are exploited at different system levels. Typically, in these techniques, face detection tasks are accomplished by manipulating distance, angle, and area measurements of the visual features derived from the scene. Since features are the main ingredients, these techniques are termed the feature-based approach.

2.1.1 Low-level Analysis
This heading includes edges, gray information, color, and motion, as briefly introduced hereafter.

Edges: Edge detection is the foremost step in deriving an edge representation [8]. So far, many different types of edge operators have been applied, the Sobel operator being the most common filter. In an edge-detection-based approach to face detection, edges need to be labeled and matched to a face model in order to verify correct detections [9].

Gray Information: Besides edge details, the gray information within a face can also be used as features. Facial features such as eyebrows, pupils, and lips appear generally darker than their surrounding facial regions [6].

Color: It was found that different human skin colors give rise to a tight cluster in color spaces, even when faces of different races are considered [10]. This means the color composition of human skin differs little across individuals. Since the main variation in skin appearance is largely due to luminance change (brightness), normalized RGB colors are generally preferred, so that the effect of luminance can be filtered out. In skin color analysis, a color histogram based on r and g shows that face color occupies a small cluster in the histogram. By comparing the color information of a pixel with respect to the r and g values of the face cluster, the likelihood of the pixel belonging to a flesh region of the face can be deduced. The HSI color representation has advantages over other models in giving large variance among facial feature color clusters [11]; it is used to extract facial features such as lips, eyes, and eyebrows. Color segmentation can basically be performed using appropriate skin color thresholds, where skin color is modeled through histograms [12, 13]. More complex methods make use of statistical measures that model face variation within a wide user spectrum [10].

Motion: A straightforward way to achieve motion segmentation is by frame difference analysis. This approach, whilst simple, is able to discern a moving foreground efficiently regardless of the background content [6].

2.1.2 Feature Analysis
Features generated from low-level analysis are likely to be ambiguous. For instance, in locating facial regions using a skin color model, background objects of similar color can also be detected. This is a classical many-to-one mapping problem which can be solved by higher-level feature analysis. In many face detection techniques, the knowledge of face geometry has been employed to characterize and subsequently verify various features from their ambiguous state. There are two approaches in the application of face geometry: (1) the first involves sequential feature searching strategies based on the relative positioning of individual facial features; (2) the second involves grouping features as flexible constellations using various face models.
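The normalized-rg idea described above can be made concrete with a short sketch. The following numpy fragment is illustrative only (it is not code from the paper), and the (r, g) box bounds are hypothetical placeholders standing in for the face cluster one would measure from an r-g histogram of labeled skin pixels.

```python
import numpy as np

def normalized_rg(image):
    """Normalized r and g of an RGB image (H x W x 3); dividing by
    r + g + b removes most of the luminance (brightness) variation."""
    rgb = image.astype(float)
    total = rgb.sum(axis=2) + 1e-9          # guard against black pixels
    return rgb[..., 0] / total, rgb[..., 1] / total

def skin_mask(image, r_range=(0.36, 0.55), g_range=(0.28, 0.36)):
    """Flag pixels whose (r, g) falls inside a box around the skin
    cluster. These bounds are hypothetical, not values from the paper;
    in practice they come from a histogram of training skin pixels."""
    r, g = normalized_rg(image)
    return ((r >= r_range[0]) & (r <= r_range[1]) &
            (g >= g_range[0]) & (g <= g_range[1]))

# A 1x2 test "image": a skin-like pixel next to a blue pixel.
img = np.array([[[200, 120, 90], [30, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

Because r and g are chromaticity coordinates, the same skin tone photographed under brighter or dimmer light maps to roughly the same point in the (r, g) plane, which is precisely why the luminance-normalized space is preferred for clustering.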
Feature searching: The detection of the prominent features then allows the existence of other, less prominent features to be hypothesized using anthropometric measurements of face geometry. For instance, a small area on top of a larger area in a head-and-shoulder sequence implies a "face on top of shoulders" scenario, and a pair of dark regions found in the face area increases the confidence of a face's existence. In the surveyed literature, a pair of eyes is the most commonly applied reference feature [6], due to its distinct side-by-side appearance.

Constellation analysis: Some of the algorithms mentioned in the last section rely extensively on heuristic information taken from various face images modeled under fixed conditions. If given a more general task, such as locating faces of various poses in complex backgrounds, many such algorithms will fail because of their rigid nature. Later efforts in face detection research address this problem by grouping facial features in face-like constellations using more robust modeling methods such as statistical analysis [14].

2.2 Image-based Approach
Face detection by explicit modeling of facial features, presented above, is troubled by the variation of face appearance and environmental conditions. There is still a need for techniques that can perform in more hostile scenarios, such as detecting multiple faces in clutter-intensive backgrounds. This requirement has inspired a new research area in which face detection is treated as a pattern recognition problem. The basic approach to recognizing face patterns is via a training procedure which classifies examples into face and non-face prototype classes. Comparison between these classes and a 2D intensity array (hence the name image-based) extracted from an input image allows the decision on face existence to be made. The simplest image-based approaches rely on template matching [15], but these approaches do not perform as well as the more complex techniques presented in the following sections. Most of the image-based approaches apply a window scanning technique for detecting faces; the window scanning algorithm is in essence just an exhaustive search of the input image for possible face locations at all scales. The image-based approaches may be classified into linear subspace, neural network, and statistical methods, as briefly described below.

Linear Subspace Methods (PCA): Images of human faces lie in a subspace of the overall image space. To represent this subspace, many methods may be applied, such as principal component analysis (PCA) [16, 17]. Given an ensemble of different face images, the technique first finds the principal components of the distribution of faces, expressed in terms of the eigenvectors of the covariance matrix of the distribution. Each individual face in the face set can then be approximated by a linear combination of the largest eigenvectors, more commonly referred to as eigenfaces, using appropriate weights.

Neural Networks: Neural networks have become a popular technique for pattern recognition problems, including face detection. The first neural approaches to face detection were based on Multi-Layer Perceptrons (MLPs) [5]. Ref. [18] extracted possible face features (eyes and mouth) and passed them to a neural network to confirm face validation.

Statistical Approaches: There are several statistical approaches to face detection, including the support vector machine (SVM) and the Bayes decision rule. In [19], a system was developed for real-time tracking and analysis of faces by applying the SVM algorithm on segmented skin regions in the input images to avoid exhaustive scanning. SVMs have also been used for multi-view face detection by constructing separate SVMs for different parts of the view sphere [20]. In [21], an SVM filter and a color filter are applied to refine the final prediction of face detection. Recently, in [22], robust face and non-face discriminability was achieved via the introduction of a hybrid system for face detection based on Viola and Jones's work and the use of a Radial Basis Neural Network.
3 The Proposed Face Detection Approach
This section describes the main steps of the proposed face detection method: noise reduction, skin color segmentation, and morphological processing. Some details are given below; more may be found in [23].

3.1 Noise Reduction
The goal of noise reduction is to reduce the number of non-facial blobs in the image. Two algorithms were considered for noise removal: (1) the median filter, applied in the spatial domain of the image, and (2) the band-pass filter, applied in the frequency domain of the image. Numerical experiments showed that the two algorithms yielded approximately similar results. The median filter was selected in the present work because it showed faster performance.

3.2 Skin Color Segmentation
Under the assumption of a typical photographic scenario, it is clearly wise to take advantage of face-color correlations to limit the face search to areas of an input image that have at least the correct color components. We used the hue-saturation-value (HSV) color space because it is compatible with human color perception. Hue (H) is represented as an angle; the purity of a color is defined by its saturation (S), which varies from 0 to 1. The darkness of a color is specified by the value component (V), which also varies from 0 (root level) to 1 (top level). Hue and saturation were taken as the discriminating color information for the segmentation of skin-like regions, with the following values [24]: S(min) = 0.23, S(max) = 0.68, H(min) = 0° and H(max) = 50°.

3.3 Morphological Processing
Based on the HSV thresholding, a black-and-white mask is obtained containing all the faces in addition to some artifacts (body parts, background). This mask is then refined through binary morphological operations to reduce the background contribution and remove holes within faces. The image is first eroded to eliminate the small undesired knobs of the region which may affect the template matching (applied later) and cause it to give a false alarm. The eroded image is then dilated to fix some of the undesired effects of the erosion, such as the enlarging of holes occupying more than 1% of the total size of the image, and to refill the holes whose size is equivalent to or less than the size of the kernel. The last preparation before template matching is applying the median filter on the dilated image to remove the small areas which appear as noise in the skin color segmentation output; these arise from falsely considering some background areas as face regions, owing to the similarity of their color to the skin color space.

3.4 Face Pattern Information
The oval shape of a face can be approximated by an ellipse; therefore, face detection can be performed by detecting objects with an elliptical shape. This can be done based on edges or regions. As a first step, the connected components are located and extracted, and then each connected component is checked as to whether its shape can be approximated by an ellipse or not.

Fig. 1: (a) morphological processing, (b) the first connected component.

A connected component is identified by applying the region-growing algorithm described in [23, 24], which depends on finding the 8-connected neighbors of a starting pixel, taken as the first white pixel in the image. This process is iterated until no pixel is 8-connected to any pixel in the resulting region (see fig. 1). A hole-filling algorithm is used to refill the gaps inside the region which are not filled by dilation. The connected component C is defined by its center (x̄, ȳ), its orientation θ, and the lengths a and b of its minor and major axes, respectively. The center is computed as the center of mass (centroid) of the region.
The orientation θ can be computed by elongating the connected component and then finding the orientation of the axis of elongation; this axis corresponds to the least moment of inertia:

    θ = (1/2) arctan( 2 μ11 / (μ20 − μ02) )    (1)

where

    μ11 = Σ x̃ ỹ b(x,y),  μ20 = Σ x̃² b(x,y),  μ02 = Σ ỹ² b(x,y),
    x̃ = x − x̄,  ỹ = y − ȳ,

and b(x,y) = 1 if (x,y) ∈ C, zero otherwise. For further consideration of C, the value of the orientation, in degrees with respect to the vertical axis, is checked to lie in the interval [−45, +45]. Then the major and minor axes of the connected component are computed by evaluating the minimum and maximum moments of inertia of the connected component with orientation θ:

    I_min = Σ_{(x,y)∈C} [ (x − x̄) cos θ − (y − ȳ) sin θ ]²    (2)

    I_max = Σ_{(x,y)∈C} [ (x − x̄) sin θ − (y − ȳ) cos θ ]²    (3)

The lengths of the major and minor axes, a and b respectively, are given by:

    a = (4/π)^(1/4) [ I_max³ / I_min ]^(1/8),  b = (4/π)^(1/4) [ I_min³ / I_max ]^(1/8)    (4)

Based on the ratio between the major and minor axes, a decision is taken as to whether the connected component is a candidate face. In this respect, the ratio of the major to the minor axis is taken to satisfy the golden ratio,

    height / width = (1 + √5) / 2,

with an error range of (−0.5, +0.5). In effect, the distance between the connected component and the best-fit ellipse is determined by counting the holes inside the ellipse and the points of the connected component that lie outside the ellipse. The ratio of the number of false points to the number of points in the interior of the ellipse is calculated and used to determine the ellipses that are good approximations of connected components.

4 Validation
The proposed approach was validated for different configurations by testing images with various sizes, formats (JPEG, GIF, Bitmap), and numbers of people; the background varied from a simple white background to complex backgrounds with different colors. The system recorded an average detection time of 1.7 seconds on a 2.0 GHz processor. Samples of the experimental results are shown in figures 2 to 6. Figures 2, 3, 4, and 5 display successful counting cases for various image formats and sizes; on the contrary, figure 6 illustrates an incorrect counting case. Analysis of the reasons behind the unsuccessful detection in the image of figure 6 suggests that dark (black) faces are not correctly detected by the proposed approach. However, it should be noted that dark faces are correctly detected if there is enough brightness, as illustrated in figures 3 and 4; in such cases, the lighting renders the face lighter than its original dark color, and it can thus be detected. Much research is being carried out in this area to arrive at an optimal solution for all cases. Hence, inevitably, images of a person who is swimming, or a person who has part of his face shaded, would be miscounted.

5 Conclusion
In this paper, face detection was investigated using color hue and saturation segmentation as the first cue, followed by template (ellipse) matching. The proposed approach was validated for images with different formats, sizes, numbers of people, and complexity of the image background. Numerical results suggest that the proposed approach correctly detects and counts non-dark faces with reasonable background complexity. There are several possible extensions of the current work; these include: (a) accounting for dark (black) faces; (b) gender detection and counting of females vs. males; (c) color segmentation using adaptive thresholding techniques that would increase both the robustness to larger variations of illumination and the portability between different camera systems; and (d)
accounting for the separation of overlapping faces; creating more accurate eye and nose kernels can possibly help with this.

Fig. 2: 633x640 JPEG (22.6 KB) image, # of people = 2, computed # of people = 2
Fig. 3: 155x260 JPEG (11.2 KB) image, # of people = 4, computed # of people = 4
Fig. 4: 200x267 JPEG (6.93 KB) image, # of people = 1, computed # of people = 1
Fig. 5: 270x453 JPEG (34.1 KB) image, # of people = 0, computed # of people = 0
Fig. 6: 155x260 GIF (11.2 KB) image, # of people = 5, computed # of people = 4

References:
[1] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997, pp. 696-710.
[2] H. Verma, P. Sharma, and V.S. Sharma, "A Procedure for Face Detection & Recognition," Proceedings of the 18th IASTED International Conference: Modelling and Simulation, Montreal, Canada, 2007, pp. 471-476.
[3] M.-H. Yang, D.J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, January 2002, pp. 34-58.
[4] V. Vezhnevets, V. Sazonov, and A. Andreeva, "A Survey on Pixel-Based Skin Color Detection Techniques," Proc. Graphicon, Moscow, Russia, Sep. 2003, pp. 85-92.
[5] G. Burel and D. Caret, "Detection and localization of faces on digital images," Pattern Recog. Lett. 15, 1994, pp. 963-967.
[6] P.J.L. Van Beek, M.J.T. Reinders, B. Sankur, and J.C.A. Van Der Lubbe, "Semantic segmentation of videophone image sequences," in Proc. of SPIE Int. Conf. on Visual Communications and Image Processing, 1992, pp. 1182-1193.
[7] R. Brunelli and T. Poggio, "Face recognition: Features versus templates," IEEE Trans. Pattern Anal. Mach. Intell. 15, 1993, pp. 1042-1052.
[8] T. Sakai, M. Nagao, and T. Kanade, "Computer analysis and classification of photographs of human faces," Proc. First USA-Japan Computer Conference, 1972, pp. 2-7.
[9] I. Craw, H. Ellis, and J.R. Lishman, "Automatic extraction of face-features," Pattern Recog. Lett., Feb. 1987, pp. 183-187.
[10] J. Yang and A. Waibel, "A real-time face tracker," in IEEE Proc. of the 3rd Workshop on Applications of Computer Vision, Florida, 1996.
[11] R.C. Gonzalez and R.E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, New Jersey, USA, 2002.
[12] H. Hongo, M. Ohya, M. Yasumoto, Y. Niwa, and K. Yamamoto, "Focus of attention for face and hand gesture recognition using multiple cameras," in Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.
[13] W. Yoo and I.S. Oh, "A fast algorithm for tracking human faces based on chromatic histograms," Pattern Recog. Lett. 20, 1999, pp. 967-978.
[14] K.C. Yow and R. Cipolla, "Feature-based human face detection," Image Vision Comput. 15(9), 1997.
[15] G. Holst, "Face detection by facets: Combined bottom-up and top-down search using compound templates," in Proceedings of the 2000 International Conference on Image Processing, 2000, p. TA07.08.
[16] L. Sirovich and M. Kirby, "Low-dimensional procedure for the characterization of human faces," J. Opt. Soc. Amer. 4, 1987, pp. 519-524.
[17] H.A. Aboalsamh, "Human Face Recognition using Eigen and Fisher Faces," Egyptian Computer Science Journal, ISSN 1110-2586, vol. 31, no. 1, January 2009.
[18] I.M. Mansour and S.R. Weshah, "A Robust Algorithm for Face Detection in Color Images," Proceedings of the 16th Danish Conference on Pattern Recognition, 2005.
[19] V. Kumar and T. Poggio, "Learning-based approach to real time tracking and analysis of faces," in Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition, 2000.
[20] J. Ng and S. Gong, "Performing multi-view face detection and pose estimation using a composite support vector machine across the view sphere," in Proc. IEEE International Workshop on Recognition, Analysis and Tracking of Faces and Gesture in Real-Time Systems, 1999.
[21] R. Xiao, M.-J. Li, and H.-J. Zhang, "Robust multipose face detection in images," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, Jan. 2004, pp. 31-41.
[22] N. Farajzadeh and K. Faez, "Hybrid face detection system with robust face and non-face discriminability," 23rd International Conference on Image and Vision Computing New Zealand, 2008, pp. 1-6.
[23] A.S. Al-Humaimeedy, "Automatic Face Detection for Count Estimates," Master Project in Computer Sciences, College of Computer and Information Sciences, King Saud University, 2003.
[24] K. Sobottka and I. Pitas, "A novel method for automatic face segmentation, facial feature extraction and tracking," Signal Processing: Image Communication 12, 1998, pp. 263-281.