Soft biometric classification using local appearance periocular region features


Pattern Recognition 45 (2012) 3877–3885
Contents lists available at SciVerse ScienceDirect. Journal homepage: www.elsevier.com/locate/pr

Soft biometric classification using local appearance periocular region features

Jamie R. Lyle, Philip E. Miller, Shrinivas J. Pundlik, Damon L. Woodard
School of Computing, Biometrics and Pattern Recognition Lab, Clemson University, Clemson, SC 29634, USA

Article history: received 13 August 2011; received in revised form 14 March 2012; accepted 26 April 2012; available online 4 May 2012.

Keywords: biometrics; gender classification; ethnic classification; periocular.

Abstract: This paper investigates the effectiveness of local appearance features such as Local Binary Patterns, Histograms of Oriented Gradients, Discrete Cosine Transform, and Local Color Histograms, extracted from periocular region images, for soft classification of gender and ethnicity. These features are classified by an Artificial Neural Network or a Support Vector Machine. Experiments are performed on visible and near-IR spectrum images derived from the FRGC and MBGC datasets. For 4232 FRGC images of 404 subjects, we obtain baseline gender and ethnicity classification rates of 97.3% and 94%. For 350 MBGC images of 60 subjects, we obtain baseline gender and ethnicity results of 90% and 89%. © 2012 Elsevier Ltd. All rights reserved.

1. Introduction

The periocular region biometric has lately gained attention as an alternative method for recognition when the face and iris modalities are captured under non-ideal conditions [1,2]. Several studies have shown the periocular region to be one of the most discriminative regions of the face for partial face recognition [3–6]. The periocular region, when used independently, has achieved limited success in highly controlled settings. Recent work has investigated periocular recognition under non-ideal conditions [7].
Other studies have investigated using the periocular region to boost performance in identification tasks by fusing it with face or iris information [8]. Previous work has also postulated that periocular region features can be used independently for soft biometric classification [9]. Soft biometric information can be used to describe an individual in broad categories, but is not specific enough to identify the individual uniquely [10]. Soft biometric information can include attributes such as gender, ethnicity, or even age. While soft biometric information does not uniquely identify a subject, it can narrow the search space or provide additional information to boost performance during recognition tasks.

Few studies have used the periocular region for soft biometric classification. Merkow et al. investigated gender classification of the periocular region using Local Binary Patterns (LBP) and pixels as features with LDA and SVM as classifiers, achieving 85% performance on images collected from the web [24]. Lyle et al. performed gender and ethnicity classification using the same features and SVM classification; they achieved 93% for gender and 91% for ethnicity using images derived from the FRGC database [9].

Due to the popularity of facial recognition, face images have been used quite frequently to obtain both gender and ethnicity information. Table 1 details other key approaches to gender and ethnicity classification which use facial images. The majority of the approaches listed use very small images or feature vectors to perform classification while achieving very good results.

(Corresponding authors: Jamie R. Lyle and Damon L. Woodard. E-mail addresses: jlyle@clemson.edu (J.R. Lyle), pemille@clemson.edu (P.E. Miller), spundli@clemson.edu (S.J. Pundlik), woodard@clemson.edu (D.L. Woodard). doi:10.1016/j.patcog.2012.04.027. © 2012 Elsevier Ltd.)
A few detail the variables present in the dataset, such as varying illumination and expression. Most of the approaches rely on appearance-based features. For classification, the most popular schemes are Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Adaboost (including various boosting variants). In most cases the approaches were evaluated using k-fold cross-validation. While the feature vector sizes differ, this paper endeavors to apply the most successful, widely used approaches for face classification to soft biometric classification of periocular images. The goal of this paper is to investigate the effectiveness of different classifiers and periocular features across varying conditions for soft biometric classification. We explore the utility of various features, classifiers, sub-regions of a periocular image, and different combinations thereof. Within this work we concentrate on gender and ethnicity classification using the periocular and eye regions, comparing the results to similar full face experiments. Fig. 1 shows examples of periocular regions belonging to the different classes used in this work. For this work we define the periocular region as the region surrounding the eye
which may or may not include the eyebrow. The periocular experiments for this work have the eye masked out. The eye experiments include the iris, sclera, eyelashes, and some of the eyelid: in effect, the inverse of the periocular mask. These regions can be seen in Fig. 2.

[Table 1. A summary of the gender and ethnic classification approaches using face images; the recognition results are the best combined results reported [9]. The approaches compared are Gutta et al. [11], Moghaddam and Yang [12], Balci and Atalay [13], Wu et al. [14], Hosoi et al. [15], BenAbdelkader and Griffin [16], Lapedriza et al. [17], Lu et al. [18], Yang et al. [19], Yang and Ai [20], Makinen and Raisamo [21], Xu et al. [22], and Gao and Ai [23]. Features range over grayscale pixel intensities, PCA eigenvectors, Gabor wavelets with retina sampling, DoG/LoG filter responses, LBP, and Haar-like features; classifiers include SVM, LDA/FLD, multilayer perceptrons, RBF ensembles, and Adaboost variants (Jointboost, probabilistic boosting trees); datasets include FERET, PIE, FRGC, HOIP, AR, IMM, and web-collected images. Reported gender accuracies span roughly 84–97%, and ethnicity accuracies roughly 92–98% (e.g. Asian vs. non-Asian).]

[Fig. 2. Example of the different masks on the periocular region. From left to right: original MBGC periocular region, periocular mask, eye mask.]

[Fig. 1. Examples of right periocular region images for different classes in the FRGC dataset. Top row shows the ethnicity classes; the bottom row shows the gender classes considered in this work.]

This paper investigates whether periocular images contain enough information to accurately obtain soft biometric classification performance similar to that acquired from full face images. The images used in this work are subsets of the FRGC and MBGC face datasets. The experiments utilize appearance cues present in periocular images using multiple low level features. These features include various texture measures: Histograms of Oriented Gradients (HOG), Local Binary Patterns (LBP), and a Discrete Cosine Transform (DCT) of the LBP features. These features are chosen on the assumption that the success of similar texture features in separating the different classes in face images would translate well to periocular images. Color information is also utilized in the visible light images using Local Color Histograms (LCH). For classification, both ANN and SVM classifiers are used to provide a comparison of two widely used classification schemes. As an investigative work, the focus of this paper is to establish a baseline for comparison. To achieve this focus, the complexity of the classifiers was kept to a minimum, so boosting was not included at this time. Performance is evaluated using stratified 5-fold cross-validation. Fig. 3 shows the overall view of the proposed approach. The approach is divided into separate training and testing phases for both classifiers used. In the training phase, features are extracted from the preprocessed periocular images, which are used for training the classifiers.
Gender and ethnicity training proceed separately, and four classifiers are trained: two SVM and two ANN. In the testing phase, appearance features are extracted and given to the classifiers as input, and the classifiers output a class label associated with gender or ethnicity. The next section describes
the datasets and preprocessing. Section 3 covers the proposed approach in detail. Experimental results and discussion are provided in Section 4, followed by conclusions and future work.

[Fig. 3. Overview of the proposed gender and ethnicity classification approach.]

2. Datasets and preprocessing

2.1. Visible spectrum data (FRGC)

The visible spectrum periocular images used in this work are obtained from high resolution frontal face images in the FRGC 2.0 dataset [25]. This data was collected at the University of Notre Dame over a two-year period. The dataset includes high resolution color still images and 3D images of a large number of subjects, mostly between the ages of 18 and 22. The images were captured during multiple recording sessions under varying illumination and expression. The high resolution still images (≈1200 × 1400 pixels, 72 dpi) from the FRGC 2.0 dataset allow the periocular texture to be captured in significant detail for our experiments. Also, in the FRGC dataset, ground truth eye centers are provided, making it simpler to extract the periocular regions and scale them to the necessary size. For this work, the extracted periocular images are scaled to a uniform size of 251 × 251 pixels. The distance from the camera to the subject is assumed constant for controlled settings, thus minimizing the effects of a scale change. The FRGC dataset also contains ground truth labels for gender and ethnicity of the 466 subjects included. The distribution according to gender in the FRGC subjects is 57% male and 43% female. The subjects are of a variety of ethnicities, but they can primarily be divided into three classes: Caucasian (68%), Asian (22%), and other (10%). For this work, we consider only two ethnicity classes, Asian and non-Asian, since the other ethnicity class is so sparsely represented in the dataset: due to the small percentage of the other class, training samples are sparse, leading to poor classification performance.

During preprocessing for the texture features, the images are converted to grayscale and histogram normalization is performed. For the color feature preprocessing, the RGB image is first converted to the CIE-Lab color space. The histogram of the luminance channel is normalized, and the image is converted back to the RGB color space. These steps are followed in order to preserve color information. In the periocular experiments, an elliptical mask of neutral color centered over the eye is placed in the image. The dimensions of the ellipse are preset for all images based on the dimensions of the image rather than the dimensions of the subject's eye. The basic assumption is that changes in the eye are due mainly to the opening and closing of the eye rather than to changes in scale. This assumption, combined with the fact that the images are centered on the eye and scaled to the same dimensions, allows the placing of a fixed-size ellipse over the eye such that a significant amount of periocular skin remains visible. In the eye experiments, a neutral mask is overlaid on the image so that the region masked out in the periocular experiments is showing. Fig. 2 contains examples of masked images used in periocular and eye experiments. The face images used in this portion of the work are cropped to face only, converted to grayscale, histogram normalized, and non-face pixels masked out. The color face images are also cropped with non-face pixels masked out.

2.2. Near infra-red data (MBGC)

The near infra-red (NIR) periocular images used in this work are extracted from stills of the NIR face videos in the Multi Biometric Grand Challenge (MBGC) dataset [26]. This dataset includes 149 facial video recordings of 114 subjects walking through a portal with intermittent NIR illumination.
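The fixed-size elliptical eye mask of Section 2.1 can be sketched as follows. The axis fractions `rx_frac` and `ry_frac` are illustrative assumptions, since the paper does not report its ellipse dimensions; what matters is that the ellipse is sized from the image dimensions, not the subject's eye:

```python
def elliptical_eye_mask(height, width, rx_frac=0.30, ry_frac=0.18):
    """Boolean mask that is True inside a fixed-size ellipse centered on
    the image. The semi-axes are preset fractions of the image size
    (values here are illustrative assumptions, not the paper's)."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    rx, ry = rx_frac * width, ry_frac * height
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            # standard ellipse inequality: a point is inside if <= 1
            row.append(((x - cx) / rx) ** 2 + ((y - cy) / ry) ** 2 <= 1.0)
        mask.append(row)
    return mask

m = elliptical_eye_mask(251, 251)  # the 251 x 251 periocular crop size
```

In the periocular experiments, pixels where the mask is True would be set to a neutral color; the eye experiments use the inverse of this mask.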
The videos are stored in AVI format with a size of 2048 × 2048 pixels and a frame rate of 15 frames per second. Each frame of video is extracted and stored as an image. Many of these images contain motion blur, occlusions, and insufficient illumination, which would need to be accounted for in order to use them. This paper focuses more on investigating the use of the periocular region for soft biometric classification than on accounting for non-ideal conditions, so many of these images were omitted from the final subset used for experiments. Some of the frames with insufficient illumination were excluded automatically, using the global average intensity value to quantify the illumination level and thresholding for sufficient illumination. Eye centers were manually marked in the usable images and used as centers for cropping the periocular region images. The images were preprocessed similarly to the grayscale FRGC images. The distribution according to gender and ethnicity of the images used can be found in Section 4.1.2. An example of this dataset can be seen in Fig. 2.

3. Methods

3.1. Feature extraction

The features used for this work are Local Binary Patterns (LBP), Histograms of Oriented Gradients (HOG), Discrete Cosine Transform (DCT), and Local Color Histograms (LCH). For each of these features, a local feature approach is used where the image is subdivided into blocks and the features from each block are combined to form the feature vector for the entire image.

3.1.1. Local Binary Pattern features

Texture features as computed by Local Binary Patterns are extracted from the preprocessed grayscale images. The texture for a particular pixel is given by

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) \, 2^p, \quad s(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}   (1)

where P is the number of pixels in the neighborhood set along a circle of radius R, g_c refers to the center pixel's grayscale value, and g_p refers to the value of one of the pixels in the neighborhood.
For each type of texture found, a bin is incremented in the corresponding histogram [27]. These bins encode various textures that represent curved edges, spots, flat areas, and others. Using the parameters P = 8 and R = 1, each LBP histogram captures 59 different texture representations, giving each block a feature vector of 59 elements.
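A minimal sketch of the LBP_{8,1} operator of Eq. (1), together with the uniform-pattern count behind the 59-bin histogram (58 uniform codes plus one shared bin for the rest). Sampling the radius-1 circle with the 8-connected square ring is a common approximation, and the function names are ours:

```python
def uniform_patterns():
    """Uniform LBP codes have at most 2 bitwise 0/1 transitions in the
    circular 8-bit string; there are 58 of them, and all non-uniform
    codes share one extra histogram bin, giving 59 bins total."""
    uniform = []
    for code in range(256):
        bits = [(code >> i) & 1 for i in range(8)]
        transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
        if transitions <= 2:
            uniform.append(code)
    return uniform

def lbp_code(img, y, x):
    """LBP_{8,1} code for pixel (y, x): threshold the 8 neighbors g_p
    against the center g_c and weight by 2^p, as in Eq. (1)."""
    gc = img[y][x]
    # neighbors on the unit ring, enumerated clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    return sum(((img[y + dy][x + dx] >= gc) << p)
               for p, (dy, dx) in enumerate(offs))
```

On a flat region every neighbor satisfies s(g_p - g_c) = 1, so the code is 255; per-block histograms of these codes over the 59 bins form the LBP feature vector.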
3.1.2. Histograms of Oriented Gradients features

Histograms of Oriented Gradients (HOG) are also extracted from the preprocessed grayscale image. HOG is a popular appearance-based feature extraction method originally used to detect pedestrians [28]. The main idea behind HOG is that objects in an image can be described by the distribution of their intensity gradients. The HOG features in this work use a variation of that approach for extracting features from the periocular region. The first step is to compute the image gradient; in this work, a Prewitt convolution kernel is used. The gradient magnitude G_M and gradient angle G_A are computed as G_M = \sqrt{G_X^2 + G_Y^2} and G_A = \mathrm{atan2}(G_Y, G_X), where G_X and G_Y are the image gradients in the horizontal and vertical directions. Each pixel location is then used to increment orientation bins. The orientation bins represent evenly spaced segments of a circle; the HOG features use 12 bins, so orientations are separated into 30° segments. For each pixel location (x, y), the orientation bin containing G_A(x, y) is incremented by G_M(x, y). The feature vector for each block consists of 12 elements.

3.1.3. Discrete Cosine Transform features

The Discrete Cosine Transform features are computed from the LBP texture features described in Section 3.1.1. Features similar to the DCT features used in this work can be found in [29], where they were used for classification in several medical databases, for pedestrian detection, and to classify the social impression a certain face makes.
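The transform applied to each block's 59-bin LBP histogram in Section 3.1.3 is the standard orthonormal DCT-II; a minimal sketch (the function name is ours):

```python
import math

def dct_feature(x):
    """Orthonormal DCT-II of a histogram x:
        y(k) = w(k) * sum_{n=1..N} x(n) * cos(pi*(2n-1)*(k-1) / (2N)),
    with w(1) = 1/sqrt(N) and w(k) = sqrt(2/N) for 2 <= k <= N.
    The output has the same length as the input (59 for LBP blocks)."""
    N = len(x)
    y = []
    for k in range(1, N + 1):
        w = math.sqrt(1.0 / N) if k == 1 else math.sqrt(2.0 / N)
        s = sum(x[n - 1] * math.cos(math.pi * (2 * n - 1) * (k - 1) / (2 * N))
                for n in range(1, N + 1))
        y.append(w * s)
    return y
```

Because the orthonormal DCT-II is an orthogonal transform, it preserves the energy of the histogram (Parseval), and the first coefficient is the scaled sum of the bins; all N coefficients are retained here.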
The features for this work are extracted from the LBP histograms by the following equation:

y(k) = w(k) \sum_{n=1}^{N} x(n) \cos\left( \frac{\pi (2n-1)(k-1)}{2N} \right), \quad w(k) = \begin{cases} 1/\sqrt{N} & k = 1 \\ \sqrt{2/N} & 2 \le k \le N \end{cases}   (2)

where x is the LBP histogram, N is the length of x (the number of bins in the histogram), and k = 1, …, N [30]. The LBP feature x extracted from each block and the DCT feature y are of the same dimension. All the elements are retained, providing a feature vector of 59 elements.

3.1.4. Local Color Histograms

Color histograms have previously been used as color features for periocular recognition [2]. Woodard et al. performed experiments using histograms which utilized only the red and green color channels. The two-dimensional color histograms are computed by splitting each color range into four regions; each 4 × 4 histogram thus provides a 16-element feature vector per block.

3.2. Classification

Two different classifiers are employed in this work, Support Vector Machines (SVM) and Artificial Neural Networks (ANN), to investigate their suitability to different features and image conditions.

3.2.1. Support Vector Machine

The fundamental principle of the SVM training phase is to find the optimal hyperplane that separates the classes with the maximal margin [31]. Given a set of M training samples x_i (i.e. any of the features mentioned in the previous section) and a set of M labels y_i (i.e. gender or ethnicity labels), where x_i ∈ R^N and y_i ∈ {−1, 1}, the SVM training algorithm finds the best hyperplane that classifies the largest subset of the training samples correctly, while maximizing the margin (or distance) between samples of either class and the hyperplane. A test sample x is classified using the separating hyperplane, given by

f(x) = w \cdot \phi(x) + b, \quad \text{where } w = \sum_{i=1}^{M} a_i y_i \phi(x_i)   (3)

where w is the weight vector, b is a bias term, \phi(x) is a transform related to the chosen kernel function by k(x, x_i) = \phi(x) \cdot \phi(x_i), the a_i denote the Lagrange multipliers, and the sign of f(x) determines the class of x. In many cases a non-linear kernel is chosen to project the samples to a higher dimension, allowing for better separation of the data. The results in this paper were found using a radial basis function (RBF) kernel and the LIBSVM implementation [32] of the Support Vector Machine algorithm. Parameters for each feature were found using a grid search tool provided by the LIBSVM implementation.

3.2.2. Artificial Neural Network

The individual neurons in the ANN learn weights during training which can model relationships between inputs and outputs or expose patterns present in the data. In this instance, neural networks are used as classifiers to explore patterns in the data according to gender and ethnicity. The networks used are two-layer, fully connected, feed-forward networks. The input layer has the same dimension as the feature vectors, the hidden layer is of 1 × m dimension, and the output layer is a single neuron. Neural networks with hidden layers of sizes 15, 30, 60, and 120 were investigated in this work. In a feed-forward network, each hidden neuron computes the weighted sum of its inputs as its net activation value, net_i [33]. Since there are m neurons in the hidden layer, the net activations are labeled net_1 to net_m. Each net_i is the input of a sigmoid function which produces the output for neuron i to be passed to the next layer. The output neuron also computes a weighted sum of its inputs (the hidden layer outputs) as its net activation and uses that as input to its activation function, producing the final output. The training method used was scaled conjugate gradient backpropagation. For training, a sample is presented to the network and the output calculated. The output is compared to the target output (i.e. the ethnicity and gender training labels) and an error is computed. The weights are updated to minimize this error. The learning rule is based on scaled conjugate gradient descent, a fast supervised learning method outlined in [34]. The results in this paper were found using neural networks implemented with the Matlab Neural Network Toolbox.

4. Experimental results

4.1. Experimental setup

4.1.1. FRGC experiments

Left and right periocular images and the corresponding face images from the FRGC face dataset are used in the experiments. Similar feature extraction methods are used for the periocular and face experiments, varying in block size. The experiments using the periocular region are divided into periocular and eye experiments as described in Section 2.1, with a feature-level fusion experiment performed as well. Feature-level fusion experiments are performed by concatenating the eye features to the periocular features; this fusion of the periocular and eye regions is performed for both left and right images, and periocular features are fused with eye features of the same type. Each feature set is classified by both SVM and ANN classifiers. There are 404 subjects
total in the experiments. Each subject has 2–6 images captured under different conditions, giving a total of 2116 images for each set: face, left periocular, and right periocular regions. These images are divided into five sets for the experiments: one gallery set (G) and four probe sets (P1, P2, P3, and P4). These lists are the same across left and right periocular regions and face. A description of the image lists can be found in Table 2.

Table 2. Parameters of image lists for the FRGC dataset.

Image list   Abbreviation   Lighting       Expression   Session (to gallery)   Number of subjects
Gallery      G              Controlled     Neutral      Same                   404
Probe 1      P1             Controlled     Neutral      Different              356
Probe 2      P2             Controlled     Alternate    Same                   402
Probe 3      P3             Controlled     Alternate    Different              353
Probe 4      P4             Uncontrolled   Neutral      Different              197

The session column denotes whether images in the list were taken in the same session as the gallery images or in a later session. The gallery set contains two images per subject, while the probe sets contain a single image for each subject present in the list. Seven experiments are performed with each classifier to evaluate soft biometric classification performance. The experiments use various combinations of training and testing sets from the lists of images. The training and testing set configurations for the FRGC experiments are labeled (ALL,ALL), (G,ALL), (G,G), (G,P1), (G,P2), (G,P3), and (G,P4), where the training set is given first, followed by the testing set. The ALL set is the superset of the gallery images and all four probe sets. For each experiment, stratified 5-fold cross-validation is used to report the classification accuracy. A 5-fold cross-validation scheme requires five rounds where, in each round, images belonging to 80% of the subjects (S_train) are used for training and the remaining 20% (S_test) are used for testing.
The sets S_train, S_test, I_train, and I_test of subjects and images used for training and testing are created such that S_train ∩ S_test = ∅ and I_train ∩ I_test = ∅. Each subject appears in S_test exactly once over the five rounds. In making the sets each round, the ratios of classes in the testing and training sets are kept as equal as possible. The experiment (ALL,ALL) can be considered our baseline. For this experiment, images are drawn from all five of the lists for subjects in S_train, and testing is done with images from all the lists for subjects in S_test. In the experiment (G,ALL), for each image I in the set I_train, I ∈ G and Subject(I) ∈ S_train. The test set I_test contains images I such that I ∈ ALL and Subject(I) ∈ S_test. For each of the remaining experiments (G,G), (G,P1), (G,P2), (G,P3), and (G,P4), the classifiers are trained on the gallery images, two per subject, and testing is performed on the image(s) associated with the testing subjects from each individual list.

4.1.2. MBGC experiments

For the NIR images from the MBGC dataset, experiments are performed with left and right periocular images. The left and right periocular regions are classified by both SVM and ANN. For experiments in this dataset, there are two image lists for each periocular region: a probe (P) and a gallery (G). Both gallery lists contain two images per subject. For the left region there are 61 subjects in the gallery and 54 subjects in the probe set, so the gallery contains 122 images and the probe set 54, giving a total of 176 left periocular region images. For the right region there are 60 subjects, with two images per subject, in the gallery and 54 subjects, with one image per subject, in the probe, for a total of 174 right periocular region images. Due to the uneven illumination across the face, the number of images for the left and right periocular regions is not the same. For similar reasons, face experiments were not performed with this data.
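The stratified, subject-disjoint folding of Section 4.1.1 — class ratios balanced per fold, and no subject split across S_train and S_test — can be sketched as follows (the function name and round-robin dealing strategy are ours):

```python
import random

def stratified_subject_folds(subject_labels, k=5, seed=0):
    """Split subjects (not images) into k folds, keeping the class
    ratio in each fold as equal as possible. subject_labels maps
    subject id -> class label; returns a list of k subject sets.
    Testing on fold i and training on the rest guarantees
    S_train and S_test are disjoint."""
    rng = random.Random(seed)
    by_class = {}
    for subj, lab in subject_labels.items():
        by_class.setdefault(lab, []).append(subj)
    folds = [set() for _ in range(k)]
    for lab, subjects in by_class.items():
        rng.shuffle(subjects)
        # deal this class's subjects round-robin so every fold
        # receives a near-equal share of each class
        for i, subj in enumerate(subjects):
            folds[i % k].add(subj)
    return folds
```

Each subject lands in exactly one fold, so over the five rounds every subject appears in S_test exactly once, matching the protocol above.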
For both periocular region experiment lists, the gender distribution of the subjects is approximately 44% female and 56% male. The ethnic distribution is approximately 84% non-Asian and 16% Asian. The experimental setup is similar to the FRGC setup, but since there is only one probe list instead of four, there are only four experiments. The experiments follow the same naming convention as the FRGC experiments and are labeled NIR(ALL,ALL), NIR(G,ALL), NIR(G,G), and NIR(G,P). Experiments are performed over the periocular and eye regions, and fusion experiments are performed as well.

4.2. Discussion

The average results for the FRGC ethnic experiments (ALL,ALL), (G,ALL), (G,G), (G,P1), (G,P2), and (G,P3) can be found in Fig. 4, with standard deviations for the averages in Table 3. The uncontrolled lighting experiments (G,P4) had a significant drop in performance and were not included in the average. The (G,P2) and (G,P3) experiments with varying expression performed on par with the first four experiments, so those results were included in the averages. The (G,P3) results can be seen separately in Fig. 5.

[Fig. 4. Average ethnic performance over (ALL,ALL), (G,ALL), (G,G), (G,P1), (G,P2), and (G,P3) by region. Averages include the best performing classification scheme for each region and feature (includes both SVM and ANN results).]

[Fig. 5. Performance on the (G,P3) experiment. Left chart is ethnic performance; right is gender. Results reported are the highest performing classification scheme for each feature (includes both SVM and ANN results).]

[Fig. 6. Performance on the (G,P4) experiment. Left chart is ethnic performance; right is gender. Results reported are the highest performing classification scheme for each feature (includes both SVM and ANN results).]

[Fig. 7. Average gender performance over (ALL,ALL), (G,ALL), (G,G), (G,P1), (G,P2), and (G,P3) by region. Averages include the best performing classification scheme for each region and feature (includes both SVM and ANN results).]

Table 3. Standard deviations for average performance of the (ALL,ALL), (G,ALL), (G,G), (G,P1), (G,P2), and (G,P3) experiments, given in percentages (gender / ethnic for each feature).

Region             LBP        HOG        DCT        LCH
Periocular Left    0.9 / 0.9  1.3 / 1.1  0.8 / 0.4  1.1 / 1.2
Periocular Right   0.5 / 0.7  1.3 / 1.6  1.5 / 0.7  1.5 / 1.3
Eye Left           1.1 / 0.5  1.1 / 0.7  1.0 / 0.5  1.0 / 1.5
Eye Right          1.5 / 0.7  1.1 / 0.9  0.9 / 1.5  1.0 / 1.9
Fusion Left        0.6 / 0.4  1.4 / 1.1  0.5 / 0.5  1.5 / 1.6
Fusion Right       0.4 / 0.4  1.3 / 1.3  1.7 / 1.0  1.8 / 1.8
Face               0.9 / 1.3  0.9 / 1.2  0.6 / 1.5  1.0 / 1.9

The eye regions and periocular regions have similar results across left and right, with the eye being slightly lower than the periocular results. The periocular results are within a few percentage points of the face experiments, and the fusion results are equal to or better than face. Ethnic classification on (G,P3) is slightly lower than the average, but on the whole still classifies fairly well. The results for (G,P4), seen in Fig. 6, drop even further, to the mid-eighties range, but the periocular, eye, and fusion experiments achieve performance very close to the face experiments.

The average FRGC gender experiment results are shown in Fig. 7, with standard deviations of the averages in Table 3. Gender classification results on the whole are lower than the ethnic results. This could be due to the ratio of classes in the experiment set. In the ethnic experiments, the ratio of classes is highly skewed toward the non-Asian class: if the classifier performs well on the non-Asian class and very poorly on the Asian subjects, the poor performance does not offset the good performance on non-Asian subjects very much. In gender classification, however, the classes are more evenly distributed, so poor performance on either class impacts the total performance. We noted a larger disparity between the performance of the eye region and the periocular region in gender classification. One reason might be that the eye experiments contain less information than the periocular experiments.
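The class-ratio argument above can be made concrete with a quick calculation. The per-class accuracies below are hypothetical, chosen only to illustrate the effect; the priors mirror the paper's roughly 84/16 ethnic split and near-even gender split:

```python
def overall_accuracy(priors, per_class_acc):
    """Overall accuracy is the prior-weighted mean of the per-class
    accuracies: poor minority-class accuracy is hidden under a skewed
    prior but fully exposed under a balanced one."""
    return sum(p * a for p, a in zip(priors, per_class_acc))

# Ethnicity-like split (~84% non-Asian): weak minority-class accuracy
# barely dents the total.
skewed = overall_accuracy([0.84, 0.16], [0.95, 0.50])    # ~0.878
# Gender-like split (~50/50): the same per-class accuracies cost far more.
balanced = overall_accuracy([0.50, 0.50], [0.95, 0.50])  # ~0.725
```

With identical per-class behavior, the skewed split reports roughly 88% while the balanced split reports roughly 73%, which is the direction of the gap observed between the ethnic and gender results.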
It also seems, at least for color, that the eye region provides more information about ethnicity than gender: males and females can have the same eye color, but this trait tends to be more of a distinguishing factor for ethnicity. Fusion experiments do not perform as well as in the ethnic experiments due to the low performance of the eye region, which affects the fusion performance on (G,P3), seen in Fig. 5. However, when images are under uncontrolled lighting, the fusion technique does perform as well as face (see Fig. 6). Overall, in the FRGC experiments, texture features seemed to provide more discriminating data than the color features used. In the (G,P2) and (G,P3) experiments with alternate expression, all regions and features encounter only a slight drop, if any, in performance compared to the neutral expression experiments (G,G) and (G,P1). This indicates that the periocular region contains adequate information across expression change to classify according to ethnicity and gender. The results for (G,P4), the uncontrolled lighting experiment, drop significantly from the other experiments, but the periocular, eye, and fusion results are all comparable to the face results of the same experiment.

The average ethnic classification results for the NIR data from the MBGC dataset are found in Fig. 8, with standard deviations for these averages in Table 4. The periocular and eye regions tend to perform about the same over left and right regions, with a small drop on the right eye. LBP is the feature that most often performs best for ethnicity, followed by the DCT feature. In general the fusion results are lower than the periocular results, due to the large gap between periocular and eye results. Fig. 9 contains the gender classification results for the NIR data. The gender results are generally lower than the ethnic results.
This is most likely due to the same factors mentioned for the FRGC results, since the ratio of non-Asians to Asians is close to 5:1 while the male to female ratio is nearly 1:1. The HOG feature performs very well on every region tested; in the experiments where it does not have the best performance, it is very close. The right eye results for the gender experiments are
consistently 10% or more lower than those of the other regions. We are unsure why this is the case. It could be due to a more severe lighting change between images of the same subject in the eye region, but that would most likely affect the right periocular results as well, and it does not.

Figs. 10 and 11 show the distribution of classifiers used in the highest performing feature/classifier combination for all experiments. A large portion of the best NIR results use ANN classifiers: with the MBGC data, close to half of the experiments achieve their best results with the ANN classifier, whereas with the FRGC data ANN results make up only one third of the best results. We believe this difference is due to the nature of the NIR experiment images, which are of lower quality than the visible light images, and to the ability of neural networks to tolerate noise and fill in gaps in the information. For all regions in the FRGC experiments, the majority of experiments achieved their best performance with SVM classifiers. For the NIR MBGC experiments, the picture is less clear. Fig. 12 separates classifier usage by region, while Fig. 13 separates it by feature. For ethnic classification, both periocular regions seem to perform better using ANN, while the eye and fusion experiments perform best with SVM. The gender results are very mixed, with at least one result (left or right) from each region classifying best more often with ANN. Looking by feature, LBP tends to classify better with ANN for both gender and ethnicity. In the ethnic experiments with HOG and DCT, the margin between ANN and SVM is smaller than in the gender experiments.

Fig. 8. Average ethnic performance over NIR(ALL,ALL), NIR(G,ALL), NIR(G,G), and NIR(G,P) by region. Averages include the best performing classification scheme for each region and feature (includes both SVM and ANN results).

Fig. 9. Average gender performance over NIR(ALL,ALL), NIR(G,ALL), NIR(G,G), and NIR(G,P) by region. Averages include the best performing classification scheme for each region and feature (includes both SVM and ANN results).

Fig. 10. Distribution of best performing classifiers in the FRGC experiments. Left to right: ethnic, gender.

Fig. 11. Distribution of best performing classifiers in the MBGC experiments. Left to right: ethnic, gender.

Table 4. Standard deviations for the average performance of all NIR experiments, given in percentages. Each feature column lists the gender and ethnic values as paired in the original (where gender is set in bold and ethnic in italics).

Region             LBP        HOG        DCT
Periocular Left    0.8  1.5   2.2  4.4   2.8  1.2
Periocular Right   2.1  2.2   1.7  0.4   3.5  2.2
Eye Left           2.8  0.7   3.9  1.3   2.5  1.3
Eye Right          1.7  1.9   3.3  1.1   1.3  1.1
Fusion Left        1.4  0.7   1.7  2.8   2.0  0.7
Fusion Right       0.4  1.3   3.4  0.7   2.6  1.7

5. Conclusions and future work

This paper investigates the effectiveness of a soft biometric classification approach using various local appearance based features derived from periocular region images with multiple classification schemes. Experiments were performed on the periocular and eye regions of subjects from the FRGC and MBGC datasets; face experiments were performed on the FRGC dataset. The experiments included variations in lighting and expression to test the robustness of the periocular region under these conditions. The results indicate that the face and periocular regions, as well as the fusion of the periocular and eye regions, perform comparably in gender and ethnic classification. This comparable performance holds across the expression and lighting change experiments as well, indicating that the periocular region is an effective region for gender and ethnicity classification. For future work we plan to perform more experiments with a finer ethnic classification and more complex classifiers. We also plan to examine other features for gender and ethnic classification, possibly shape-based features.
Fig. 12. Distribution of best performing classifiers in the NIR experiments by region. Left to right: ethnic, gender.

Fig. 13. Distribution of best performing classifiers in the NIR experiments by feature. Left to right: ethnic, gender.

Acknowledgment

This research was funded by the Office of the Director of National Intelligence (ODNI), Center for Academic Excellence (CAE) for the multi-university Center for Advanced Studies in Identity Sciences (CASIS).

References

[1] U. Park, A. Ross, A.K. Jain, Periocular biometrics in the visible spectrum: a feasibility study, in: Biometrics: Theory, Applications and Systems, 2009.
[2] D. Woodard, S. Pundlik, J. Lyle, P. Miller, Periocular region appearance cues for biometric identification, in: CVPR Workshop on Biometrics, 2010.
[3] B. Heisele, P. Ho, J. Wu, T. Poggio, Face recognition: component-based versus global approaches, Computer Vision and Image Understanding 91 (2003) 6–21.
[4] B. Heisele, T. Serre, T. Poggio, A component-based framework for face detection and identification, International Journal of Computer Vision 74 (2007) 167–181.
[5] H.K. Ekenel, R. Stiefelhagen, Generic versus salient region based partitioning for local appearance face recognition, in: IAPR/IEEE International Conference on Biometrics (ICB), 2009.
[6] M. Savvides, R. Abiantun, J. Heo, C. Xie, B.V.K. Vijaya Kumar, Partial and holistic face recognition on FRGC II using support vector machines, in: Proceedings of the IEEE Computer Vision Workshop (CVPW), 2006, p. 48.
[7] P. Miller, J. Lyle, S. Pundlik, D. Woodard, Performance evaluation of local appearance based periocular recognition, in: Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010, pp. 1–6.
[8] D. Woodard, S. Pundlik, P. Miller, R. Jillela, A. Ross, On the fusion of periocular and iris biometrics in non-ideal imagery, in: Proceedings of the IAPR International Conference on Pattern Recognition, 2010.
[9] J. Lyle, P. Miller, S. Pundlik, D. Woodard, Soft biometric classification using periocular region features, in: Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010.
[10] A.K. Jain, S.C. Dass, K. Nandakumar, Can soft biometric traits assist user recognition?, Proceedings of SPIE 5404 (2004) 561–572.
[11] S. Gutta, J. Huang, P. Jonathon, H. Wechsler, Mixture of experts for classification of gender, ethnic origin, and pose of human faces, IEEE Transactions on Neural Networks 11 (2000).
[12] B. Moghaddam, M.-H. Yang, Learning gender with support faces, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002).
[13] K. Balci, V. Atalay, PCA for gender estimation: which eigenvectors contribute?, in: Proceedings of the IAPR International Conference on Pattern Recognition, 2002.
[14] B. Wu, H. Ai, C. Huang, LUT-based AdaBoost for gender classification, in: Audio- and Video-Based Biometric Person Authentication, 2003.
[15] S. Hosoi, E. Takikawa, M. Kawade, Ethnicity estimation with facial images, in: Automatic Face and Gesture Recognition, 2004.
[16] C. BenAbdelkader, P. Griffin, A local region based approach to gender classification from face images, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[17] A. Lapedriza, M. Marin-Jimenez, J. Vitria, Gender recognition in non-controlled environments, in: Proceedings of the IAPR International Conference on Pattern Recognition, 2006.
[18] X. Lu, H. Chen, A.K. Jain, Multimodal facial gender and ethnicity identification, in: International Conference on Biometrics, 2007.
[19] Z. Yang, M. Li, H. Ai, An experimental study on automatic face gender classification, in: Proceedings of the IAPR International Conference on Pattern Recognition, 2006.
[20] Z. Yang, H. Ai, Demographic classification with local binary patterns, in: International Conference on Biometrics, 2008.
[21] E. Makinen, R. Raisamo, Evaluation of gender classification methods with automatically detected and aligned faces, IEEE Transactions on Pattern Analysis and Machine Intelligence 30 (2008) 541–547.
[22] Z. Xu, L. Lu, P. Shi, A hybrid approach to gender classification from face images, in: Proceedings of the IAPR International Conference on Pattern Recognition, 2008.
[23] W. Gao, H. Ai, Face gender classification on consumer images in a multiethnic environment, in: International Conference on Biometrics, 2009.
[24] J. Merkow, B. Jou, M. Savvides, An exploration of gender identification using only the periocular region, in: Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2010, pp. 1–5.
[25] P.J. Phillips, P.J. Flynn, T. Scruggs, K.W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, W. Worek, Overview of the face recognition grand challenge, in: IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[26] P.J. Phillips, Multiple Biometrics Grand Challenge, 2010. http://face.nist.gov/mbgc/
[27] T. Ojala, M. Pietikäinen, T. Mäenpää, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2002) 971–987.
[28] N. Dalal, B. Triggs, Histograms of oriented gradients for human detection, in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), vol. 1, pp. 886–893.
[29] L. Nanni, S. Brahnam, A. Lumini, Combining different local binary pattern variants to boost performance, Expert Systems with Applications 38 (2011) 6209–6216.
[30] A.K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[31] B.E. Boser, I.M. Guyon, V.N. Vapnik, A training algorithm for optimal margin classifiers, in: COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory, ACM, New York, NY, USA, 1992, pp. 144–152.
[32] C.-C. Chang, C.-J. Lin, LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[33] R.O. Duda, P.E. Hart, D.G. Stork, Pattern Classification, 2nd ed., John Wiley & Sons, New York, NY, USA, 2001.
[34] M.F. Møller, A scaled conjugate gradient algorithm for fast supervised learning, Neural Networks 6 (1993) 525–533.
Jamie R. Lyle received her MS degree in computer science from Clemson University in 2009. She is working toward the PhD degree in computer science in the School of Computing at Clemson University. Her research interests include biometrics and pattern recognition.

Philip E. Miller received his MS degree in computer science from Clemson University in 2010. He is currently a PhD student in the School of Computing at Clemson University. His research interests include biometrics and image processing.

Shrinivas J. Pundlik received the PhD degree in electrical engineering in 2009 from Clemson University. He is currently a postdoctoral fellow at the Schepens Eye Research Institute at Harvard Medical School. His research interests include image and motion segmentation, pattern recognition, and biometrics.

Damon L. Woodard received his PhD in computer science and engineering from the University of Notre Dame. He is currently an assistant professor in the Human-Centered Computing (HCC) Division in the School of Computing at Clemson University. His research interests include biometrics, pattern recognition, and image processing.