INTERNATIONAL JOURNAL FOR TRENDS IN ENGINEERING & TECHNOLOGY
VOLUME 4 ISSUE 2 – APRIL 2015 – ISSN: 2349-9303
Robust Human Emotion Analysis Using LBP, GLCM
and PNN Classifier
S. Seedhana Devi(1), Assistant Professor, Department of Information Technology, Sri Vidya College of Engineering & Technology, Seedhana19@gmail.com
S. Jasmine Rumana(2), UG Student, Department of Information Technology, Sri Vidya College of Engineering & Technology, jasminerumana@gmail.com
G. Jayalakshmi(3), UG Student, Department of Information Technology, Sri Vidya College of Engineering & Technology, jayalakshmi.tech@gmail.com
Abstract- This paper presents recognition of facial expressions based on textural analysis and a probabilistic neural network (PNN) classifier. Automatic facial expression recognition (FER) plays an important role in human-computer interaction (HCI) systems, measuring people's emotions by mapping expressions to a set of basic emotions such as disgust, sadness, anger, surprise and normal. The approach can also serve as a biometric safeguard, helping to protect a network from unauthorized users. The recognition system comprises face detection, feature extraction and classification with the PNN classifier. The face detection module yields only face regions, with normalized intensity and uniform size and shape. Distinct LBP and GLCM are used to extract texture features from the face regions while remaining robust to illumination changes; these features discriminate the maximum number of samples accurately. A PNN classifier based on discriminant analysis classifies the six different expressions. Simulation results show better accuracy and lower algorithmic complexity than comparable facial expression recognition approaches.
Index Terms- Distinct Local Binary Pattern (DLBP), First Order Compressed Image, Gray Level Co-occurrence Matrix (GLCM), Probabilistic Neural Network (PNN) Classifier, Triangular Pattern
INTRODUCTION
Facial expressions convey highly recognizable emotional signals; their specific forms may have originated not for communication but as functional adaptations of more direct benefit to the expresser. Common facial expressions convey happy or angry thoughts and feelings, the speaker's understanding, expected or unexpected responses from listeners, sympathy, or even what the speaker is discussing with the person opposite them.
Traditional face detection extracts the face area from an original image and then locates the positions of the eyes, mouth and eyebrow outlines within that area. Facial expressions are then recognized for accurate detection; accuracy can still be predicted even when the original face is hidden behind a duplicate face. Recognizing the whole face image yields low performance, so facial expressions, and mostly local features, are used instead. Beyond recognition performance, facial expression also plays a vital role in communication through sign language.
Many geometric approaches exist for face analysis, including techniques such as Linear Discriminant Analysis (LDA) [1], Independent Component Analysis (ICA) [3], Principal Component Analysis (PCA) [14] and the Support Vector Machine (SVM) [4]. Object recognition has also been performed with Gabor wavelet features, where global features are detected and classified using an SVM classifier [2]. Facial expression illustrates the intention, personality and psychopathology of a person [9]. These methods suffer from the generality problem: test images might differ greatly from the training face images. To avoid this problem, a non-statistical face analysis method using the Local Binary Pattern (LBP) has been proposed. LBP was first introduced by Ojala et al. [5] and showed high discriminative power for texture classification due to its invariance to monotonic gray level changes.
In the existing system, facial expressions are recognized using Principal Component Analysis and geometric methods [3]. These methods suffer from low discriminatory power and high computational load, and geometric features do not provide optimal results. A PNN classifier with local binary patterns is proposed to overcome these problems; the PNN classifier has been used widely to identify facial areas with greater discrimination.
Each method in the literature above has its own limitations for recognizing facial expressions. To avoid such problems, a novel facial expression method is derived in the present paper. The paper is organized as follows. Section I presents an overview of the proposed system. Sections II-IV describe the methods of expression recognition. Section V covers classification, Sections VI and VII evaluate performance and results, and conclusions follow in Section VIII.
I. OVERVIEW OF THE PROPOSED METHOD
In order to detect facial expressions accurately and to show their variations, the original images are converted from the RGB to the HSV color plane. The texture regions from which expressions are read are then cropped from the HSV image. Both preprocessing steps play an important role in face detection. Facial expressions are derived using features calculated from the Gray Level Co-occurrence Matrix and the DLBPs of the First Order Compressed Image. The images are trained and tested with a PNN classifier. The proposed method comprises the steps shown in Figure 1.
Figure 1. Block diagram of the overall proposed system
II. RGB TO HSV COLOR MODEL CONVERSION
In order to extract gray level features from color information, the proposed method uses the HSV color space. Color images can be processed in either the RGB or the HSV color space. RGB describes colors in terms of red, green and blue; HSV describes hue, saturation and value. For color description, the HSV model is often preferred over RGB because it describes colors closer to how the human eye tends to classify them. RGB defines a color as a mixture of primary colors, whereas HSV describes it using more familiar comparisons (color, vibrancy and brightness).
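As a concrete illustration (not the authors' code), Python's standard colorsys module performs the same RGB-to-HSV conversion used in this preprocessing step:

```python
import colorsys

# Convert an RGB triple (each channel in [0, 1]) to an HSV triple.
# Pure red maps to hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# A darker, less saturated green: hue 1/3, value 0.6.
h, s, v = colorsys.rgb_to_hsv(0.2, 0.6, 0.2)
```

In an image pipeline the same formula would be applied per pixel (or vectorized) before the gray level features are extracted.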
III. CROPPING OF IMAGE
Cropping is the elimination of the outer parts of an image to get better framing, emphasize the subject matter, or change the aspect ratio. The grayscale facial image is cropped based on the two eye locations. An original image and the cropped image are shown in Figure 2.
Figure 2. a) An original image b) Cropped image
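A minimal sketch of eye-based cropping with NumPy array slicing; the margin fractions below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def crop_face(gray, left_eye, right_eye, margin=0.6):
    """Crop a grayscale face image around the two detected eye locations.

    left_eye, right_eye: (row, col) pixel coordinates of the eyes.
    margin: fraction of the inter-eye distance kept around the eyes
            (an illustrative choice, not a value from the paper).
    """
    (r1, c1), (r2, c2) = left_eye, right_eye
    d = abs(c2 - c1)                                   # inter-eye distance
    top = max(0, int(min(r1, r2) - margin * d))        # a little above the eyes
    bottom = min(gray.shape[0], int(max(r1, r2) + 2 * margin * d))  # down past the mouth
    left = max(0, int(min(c1, c2) - margin * d))
    right = min(gray.shape[1], int(max(c1, c2) + margin * d))
    return gray[top:bottom, left:right]

img = np.zeros((100, 100), dtype=np.uint8)
face = crop_face(img, (40, 30), (40, 70))
```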
IV. FEATURE EXTRACTION
When the input data to an algorithm is too large to be processed and is suspected to be redundant, it can be transformed into a reduced set of features. Features are extracted from each cropped 5x5 pixel neighborhood, and edge information is collected based on neighborhood pixel analysis. The representation of a 5x5 pixel locality is shown in Table 1. A 5x5 neighborhood comprises 25 pixel elements {P11...P15; P21...P25; P31...P35; P41...P45; P51...P55}, with P33 as the center pixel.
[Figure 1: training and testing samples each pass through Face Detection and Feature Extraction into the PNN Classifier, which outputs the decision: Anger / Disgust / Fear / Sadness / Happiness / Surprise.]
Table 1: Representation of a 5x5 neighborhood pixel
A. Formation of the First Order Compressed Image Matrix (FCIM) of size 3x3 from 5x5
The FCIM is a 3x3 matrix with nine pixel elements (FCP1 to FCP9). Nine overlapped 3x3 sub-matrices are extracted from the 5x5 matrix, and the FCIM maintains local neighborhood properties, including edge information: FCPi = average of n_i for i = 1, 2, ..., 9. The nine gray level pixel elements of the FCIM are given in Table 2.
Table 2. Representation of Gray level FCIM
The nine overlapped 3x3 sub-neighborhoods formed from the 5x5 neighborhood to build the FCIM are given in Table 3.
Table 3. Formation of nine overlapped 3x3 neighborhoods {n1, n2, n3, ..., n9}
n1 n2 n3
n4 n5 n6
n7 n8 n9
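The averaging step above can be sketched in a few lines of NumPy (a sketch of the described construction, not the authors' implementation): each FCP is the mean of one overlapped 3x3 window of the 5x5 block.

```python
import numpy as np

def fcim(block5):
    """First Order Compressed Image Matrix: the 3x3 matrix whose entry
    (i, j) is the average of the overlapped 3x3 window of a 5x5 block
    whose top-left corner is at (i, j)."""
    block5 = np.asarray(block5, dtype=float)
    assert block5.shape == (5, 5)
    out = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            out[i, j] = block5[i:i + 3, j:j + 3].mean()
    return out

block = np.arange(25).reshape(5, 5)
print(fcim(block))  # center entry equals block[1:4, 1:4].mean() = 12.0
```

Because the windows overlap, each FCP still reflects the center pixel's surroundings, which is how the compression preserves edge information.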
B. Formation of two Distinct LBPs from the FCIM
From the binary FCIM of 3x3 neighborhoods, four Triangular LBP (TLBP) unit values are derived, as shown in Figure 3. Each TLBP unit contains only three pixels. The Upper TLBPs (UTLBP) and Lower TLBPs (LTLBP) are formed from the pixel combinations (FCP1, FCP2, FCP4), (FCP2, FCP3, FCP6), (FCP4, FCP7, FCP8) and (FCP6, FCP8, FCP9). The two DLBPs are formed from the sum of the UTLBP values (SUTLBP) and the sum of the LTLBP values (SLTLBP) of the FCIM:
SUTLBP = TLBP1 + TLBP2 (sum of Triangular Local Binary Patterns 1 and 2)
SLTLBP = TLBP3 + TLBP4 (sum of Triangular Local Binary Patterns 3 and 4)
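One plausible reading of this construction can be sketched as follows. Two details are assumptions not spelled out in the paper: each FCP is binarized by thresholding against the center pixel FCP5, and each triangular unit is scored with binary weights 1, 2, 4 over its three pixels.

```python
import numpy as np

def dlbp(fcim3):
    """Sketch of the two Distinct LBP values from a 3x3 FCIM.

    Assumed details: binarization thresholds each FCP against the center
    pixel FCP5, and each triangular unit uses bit weights 1, 2, 4.
    """
    f = np.asarray(fcim3, dtype=float).ravel()   # FCP1..FCP9 in raster order
    b = (f >= f[4]).astype(int)                  # binarize against FCP5
    w = np.array([1, 2, 4])
    # Upper triangles: (FCP1, FCP2, FCP4) and (FCP2, FCP3, FCP6)
    tlbp1 = int(w @ b[[0, 1, 3]])
    tlbp2 = int(w @ b[[1, 2, 5]])
    # Lower triangles: (FCP4, FCP7, FCP8) and (FCP6, FCP8, FCP9)
    tlbp3 = int(w @ b[[3, 6, 7]])
    tlbp4 = int(w @ b[[5, 7, 8]])
    return tlbp1 + tlbp2, tlbp3 + tlbp4          # (SUTLBP, SLTLBP)
```

With three binary bits per unit, each TLBP lies in 0-7, so SUTLBP and SLTLBP each lie in 0-14, which is consistent with the 15x15 GLCM range described later in the paper.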
Table 1 (5x5 neighborhood):
P11 P12 P13 P14 P15
P21 P22 P23 P24 P25
P31 P32 P33 P34 P35
P41 P42 P43 P44 P45
P51 P52 P53 P54 P55

Table 2 (gray level FCIM):
FCP1 FCP2 FCP3
FCP4 FCP5 FCP6
FCP7 FCP8 FCP9
Table 3 contents (the nine overlapped 3x3 windows):
n1: P11 P12 P13 / P21 P22 P23 / P31 P32 P33
n2: P12 P13 P14 / P22 P23 P24 / P32 P33 P34
n3: P13 P14 P15 / P23 P24 P25 / P33 P34 P35
n4: P21 P22 P23 / P31 P32 P33 / P41 P42 P43
n5: P22 P23 P24 / P32 P33 P34 / P42 P43 P44
n6: P23 P24 P25 / P33 P34 P35 / P43 P44 P45
n7: P31 P32 P33 / P41 P42 P43 / P51 P52 P53
n8: P32 P33 P34 / P42 P43 P44 / P52 P53 P54
n9: P33 P34 P35 / P43 P44 P45 / P53 P54 P55
Figure 3: Formation of DLBP on the FCIM
C. Formation of GLCM based on DLBP and FCIM
Features are formed from the GLCM built over the DLBPs, i.e. the SUTLBP and SLTLBP values of the FCIM. The GLCM on DLBP is obtained by placing the SUTLBP values on the X-axis and the SLTLBP values on the Y-axis; its elements are the relative frequencies of the joint SUTLBP and SLTLBP values.
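This co-occurrence construction can be sketched as below (an assumed reading of the text, not the authors' code): one (SUTLBP, SLTLBP) pair per 5x5 block, accumulated into a matrix of relative frequencies.

```python
import numpy as np

def glcm_on_dlbp(sutlbp, sltlbp, levels=15):
    """Co-occurrence matrix whose entry (u, l) is the relative frequency
    of the pair (SUTLBP=u, SLTLBP=l) over all blocks of the image.
    SUTLBP and SLTLBP each lie in 0..14, giving the 15x15 matrix
    described in the paper."""
    sutlbp = np.asarray(sutlbp).ravel()
    sltlbp = np.asarray(sltlbp).ravel()
    m = np.zeros((levels, levels))
    for u, l in zip(sutlbp, sltlbp):
        m[u, l] += 1
    return m / m.sum()                       # normalize to relative frequencies

# Four example (SUTLBP, SLTLBP) pairs from four hypothetical blocks.
p = glcm_on_dlbp([14, 14, 0, 7], [14, 0, 0, 7])
```

Restricting both axes to 0-14 keeps the matrix small regardless of the image's gray level range, which is the complexity reduction claimed in Section VII.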
V. CLASSIFICATION
The PNN, a feed-forward network, is used to classify the test images against the trained images. It executes quickly, and it approaches the optimal classifier as the size of the representative training set grows. The PNN classifier produces the final output. For the expressions used in the proposed method, the PNN consists of four layers: the input layer, pattern layer, summation layer and output layer.
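The four layers map directly onto a short NumPy sketch (a generic PNN with Gaussian kernels; the smoothing parameter sigma is an assumed value, not one from the paper):

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Minimal probabilistic neural network sketch.

    Input layer: the feature vector x.
    Pattern layer: one Gaussian kernel per training sample.
    Summation layer: average kernel response per class.
    Output layer: the class with the largest summed response.
    """
    x = np.asarray(x, dtype=float)
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y)
    # Pattern layer: Gaussian activation for every training sample.
    d2 = ((train_X - x) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    # Summation layer: mean activation per class; output layer: argmax.
    classes = np.unique(train_y)
    scores = [k[train_y == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]

X = [[0, 0], [0, 1], [5, 5], [6, 5]]
y = ["neutral", "neutral", "happy", "happy"]
print(pnn_classify([5, 4], X, y))  # happy
```

In the paper's setting, x would be the GLCM feature vector of a test face and train_y one of the expression labels.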
VI. PERFORMANCE EVALUATION
The approach exposes the discrepancies between a duplicate image and the original image through the PNN classifier. The performance is evaluated with the four measures shown below, where p_ij is the (i, j) entry of the normalized GLCM and N is the number of levels.
Contrast = Σ_{i,j=0}^{N-1} p_ij (i − j)²
Homogeneity = Σ_{i,j=0}^{N-1} p_ij / (1 + (i − j)²)
Correlation = Σ_{i,j=0}^{N-1} p_ij (i − μ)(j − μ) / σ²
Energy = Σ_{i,j=0}^{N-1} p_ij²
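These four standard GLCM measures can be computed in a few lines of NumPy (an illustrative sketch; using the symmetric-GLCM mean and variance for the correlation term is an assumption):

```python
import numpy as np

def glcm_features(p):
    """Compute contrast, homogeneity, correlation and energy from a
    normalized GLCM p (entries sum to 1)."""
    p = np.asarray(p, dtype=float)
    n = p.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mu = (i * p).sum()                       # GLCM mean (symmetric assumption)
    var = (((i - mu) ** 2) * p).sum()        # GLCM variance
    contrast = (((i - j) ** 2) * p).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    correlation = ((i - mu) * (j - mu) * p).sum() / var
    energy = (p ** 2).sum()
    return contrast, homogeneity, correlation, energy

# A perfectly diagonal GLCM: zero contrast, maximal homogeneity and correlation.
c, h, r, e = glcm_features(np.eye(4) / 4.0)
```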
VII. RESULTS AND DISCUSSION
The proposed method is evaluated on a database containing 35 expressions: 18 expressions are used for training and 17 for testing. The set of seven expressions is collected from five distinct face images. A few sample expressions are shown in Figure 4.
The proposed GLCM-on-DLBP method gives complete information about an image. The GLCM depends on the gray level range of the image; the DLBP on the FCI reduces that range to 0-14 and thus reduces the overall complexity.
In the proposed GLCM-based DLBP method, the example images are grouped into seven types of expression and stored in the database. Features are extracted with the GLCM on DLBP for the seven types of facial expression. Images from a Google database are considered, and the images used are scanned images. The numerical features extracted from the test images are stored in the test database.
Figure4: Sample Images collected as database
Finally, the feature database and the test database are compared. Test images are classified using the PNN classifier and the results are predicted. The approach is evaluated with the performance measures shown in Table 4, and the measures are compared graphically in Figure 5.
Table 4: Performance Measure Evaluation
Image | Homogeneity | Contrast | Energy | Correlation
1 | 0.0723 | 1.7845 | 2.7156 | 84.0213
2 | 0.0670 | 1.7371 | 2.6165 | 112.3886
3 | 0.0694 | 1.7138 | 2.5960 | 37.4682
4 | 0.0745 | 1.7121 | 2.7458 | 150.2601
5 | 0.0729 | 1.8101 | 2.7276 | 42.7056
Figure 5: Graphical representation of the performance measures
VIII. CONCLUSIONS
The proposed method captures the complete information of the facial expressions. The GLCM on the DLBP for HCI is a multi-stage model for recognizing facial expressions. The first stage reduces each 5x5 image block to a 3x3 sub-image without losing important information. GLCM features on the Distinct LBP are derived in the second and third stages; the computational cost and other complexity involved in forming the GLCM are reduced by shrinking the GLCM to 15x15 using the DLBP. In the fourth stage, a PNN classifier is used in place of a multilayer perceptron; the PNN classifier uses the overall features so that the result is exact, and the proposed method copes with an unpredictable distribution of the facial expressions. The performance estimates for a few sample expressions are shown in Table 4. The approach mainly aims at network security; in future, the work will be refined for further biometric applications such as border security systems.
References
[1] Aswathy R., "A Literature Review on Facial Expression Recognition Techniques", IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 11, Issue 1 (May-Jun. 2013), pp. 61-64.
[2] S. Arivazhagan, R. A. Priyadharshini, S. Seedhanadevi, "Object recognition based on Gabor wavelet features", 2012 International Conference on Devices, Circuits and Systems (ICDCS), pp. 340-344, 15-16 March 2012.
[3] M. Bartlett, J. Movellan, T. Sejnowski, "Face recognition by independent component analysis", IEEE Transactions on Neural Networks 13 (2002) 1450-1464.
[4] B. Heisele, Y. Ho, T. Poggio, "Face recognition with support vector machines: global versus component-based approach", in: Proceedings of the International Conference on Computer Vision, 2001, pp. 688-694.
[5] T. Ojala, M. Pietikainen, D. Harwood, "A comparative study of texture measures with classification based on feature distributions", Pattern Recognition 29 (1996) 51-59.
[6] B. Fasel and J. Luettin, "Automatic facial expression analysis: A survey", Pattern Recognition, 2003.
[7] S. M. Lajevardi and H. R. Wu, "Facial expression recognition in perceptual color space", IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3721-3732, 2012.
[8] Marian Stewart Bartlett, Gwen Littlewort, Ian Fasel, and Javier R. Movellan, "Real time face detection and facial expression recognition: Development and applications to human computer interaction", in Proceedings of the 2003 Conference on Computer Vision and Pattern Recognition Workshop, 2003.
[9] Shyna Dutta, V. B. Baru, "Review of facial expression recognition system and used datasets", International Journal of Research in Engineering and Technology, eISSN: 2319-1163, pISSN: 2321-7308, Volume 02, Issue 12, Dec. 2013.
[10] F. De la Torre and J. F. Cohn, "Facial expression analysis", in Th. B. Moeslund, A. Hilton, V. Kruger, and L. Sigal, editors, Guide to Visual Analysis of Humans: Looking at People, pages 377-410, Springer, 2011.
[11] S. Moore and R. Bowden, "Local binary patterns for multi-view facial expression recognition", Computer Vision and Image Understanding, vol. 115, no. 4, pp. 541-558, 2011.
[12] M. Pantic, L. J. M. Rothkrantz, "Automatic analysis of facial expressions: the state of the art", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, 2000.
[13] Anitha C, M. K. Venkatesha, B. Suryanarayana Adiga, "A survey on facial expression databases", International Journal of Engineering Science and Technology, vol. 2(10), 2010, pp. 5158-5174.
[14] Timo Ahonen, Abdenour Hadid, and Matti Pietikainen, "Face recognition with local binary patterns", Machine Vision Group, Infotech Oulu, University of Oulu, Finland.