Journal for Research | Volume 02 | Issue 02 | April 2016
ISSN: 2395-7549
All rights reserved by www.journalforresearch.org 30
Detecting Facial Expression in Images
Ghazaala Yasmin Prof. Samir K. Bandyopadhyay
M.Tech. Student Professor
Department of Computer Science & Engineering Department of Computer Science & Engineering
University of Calcutta University of Calcutta
Abstract
Nowadays, face recognition is a widely used application of image analysis and pattern recognition. In the biometric research area, automatic face and facial expression recognition attracts considerable interest. To classify facial expressions into different categories, it is necessary to extract the important facial features that contribute to identifying particular expressions. Recognition and classification of human facial expressions by computer is an important problem for developing automatic facial expression recognition systems in the vision community. In this paper a facial expression recognition system is
proposed.
Keywords: Facial feature detection, Template matching and Face position detection
_______________________________________________________________________________________________________
I. INTRODUCTION
Facial expression recognition [1] [2] is another fruitful area of computer vision research. Computer vision is the way to electronically represent human vision with the help of data analysis techniques. In a human-computer interaction (HCI) system, the communication between human and computer can take place through different aspects (verbal and non-verbal) of a human. Here we consider only the non-verbal aspects of a human being, such as facial expression and body movement. In this paper we are mainly concerned with the facial expression recognition process, which needs a face image on which to apply our facial expression recognition algorithm. Once we have the face image data, we need to apply processing techniques drawn from pattern recognition, artificial intelligence, mathematics, computer science, electronics or other scientific fields. There are a huge number of applications in computer vision research, but we will discuss only face recognition and facial expression.
There are many applications where the facial expression detection process plays an important role. Research in the field of social psychology shows that facial expressions are more natural than the speaker's spoken words and truly reflect the emotion of a person. According to statistical reports, the verbal part of a message contributes only 7 percent to the effect of the message as a whole. The vocal part contributes 38 percent, while the facial expression of the speaker contributes 55 percent to the effect of the spoken message. The first facial expression recognition system was introduced in 1978 by Suwa et al. [4]. The main issues in building a facial expression recognition system are face detection [3] and alignment, image normalization, feature extraction, and classification.
The analysis of the human face via images (and video) has been one of the most active research topics of recent years in the image community. From the analysis (sensing, computing, and perception) of face images, much information can be extracted, such as the sex/gender, age, facial expression, emotion/temper, mentality/mental processes and behaviour/psychology, and the health of the person captured. From this information, many practical tasks can be performed; these include not only person identification or verification (face recognition), but also the estimation and/or determination of a person's profession, hobby, name (recovered from memory), etc.
Research on face image analysis has been carried out and is being conducted around various application topics, such as (in
alphabetical order) age estimation, biometrics, biomedical instrumentations, emotion assessment, face recognition, facial
expression classification, gender determination, human-computer/human-machine interaction, human behaviour and emotion
study, industrial automation, military service, psychosis judgment, security checking systems, social signal processing,
surveillance systems, sport training, tele-medicine service, etc.
Facial expressions are therefore the most important information for the perception of emotions in face-to-face communication. This paper explains an approach to the problem of facial feature extraction from a posed face image. For face portion segmentation, basic image processing operations such as morphological dilation, erosion and reconstruction with a disk structuring element are used. Six permanent facial features, the eyebrows (left and right), eyes (left and right), mouth and nose, are extracted using facial geometry, edge projection analysis and distance measures. A feature vector is formed from the height and width of the left eye, left eyebrow, right eye, right eyebrow, nose and mouth, along with the distance between the left eye and eyebrow, the distance between the right eye and eyebrow, and the distance between the nose and mouth.
Human face detection has drawn considerable attention in the past decades as it is one of the fundamental problems in
computer vision. Given a single image, the ideal face detection should identify and locate all faces regardless of its three-
dimensional position, orientation, and lighting conditions. The existing face detection techniques can be classified into four
categories, namely, knowledge-based methods, feature invariant approaches, template matching methods, and appearance-based methods.
Human face detection and segmentation has been an active research area until recently. This field of research plays an important role in many applications such as face identification systems, face tracking, video surveillance and security control systems, and human-computer interfaces.
These applications often require a segmented human face that is ready to be processed. Many factors influence the success of human face detection and segmentation, including a complex colour background, illumination conditions, changes of position and expression, rotation of the head, and the distance between camera and subject.
Face detection is a sub branch of object detection. The human face is a dynamic object and has a high degree of variability in
its appearance, which makes face detection a difficult problem in computer vision.
Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face
processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported
methods assume that the faces in an image or an image sequence have been identified and localized. To build fully automated
systems that analyse the information contained in face images, robust and efficient face detection algorithms are required.
Given a single image, the goal of face detection is to identify all image regions which contain a face, regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, colour, and texture. Numerous techniques have been developed to detect faces in a single image.
Face detection and localization is the task of checking whether the given input image contains any human face, and if so,
returning the location of the human face in the image. The wide variety of applications and the difficulty of face detection have
made it an interesting problem for the researchers in recent years.
Face detection is difficult mainly due to a large component of non-rigidity and textural differences among faces. The great challenge for the face detection problem is the large number of factors that govern the problem space. These factors include the pose, orientation, facial expressions, facial sizes found in the image, luminance conditions, occlusion, structural components, the gender and ethnicity of the subject, and the scene and complexity of the image's background.
The scene in which the face is placed ranges from a simple uniform background to a highly complex background. In the latter case it is obviously more difficult to detect a face. Faces appear totally different under different lighting conditions. Not only do different persons have different sized faces, but faces closer to the camera appear larger than faces that are far away from it. Basic emotions are emotions that have been scientifically shown to have a particular facial expression associated with them, and different emotional states are indicated by these expressions.
Facial feature detection systems are fast becoming a familiar feature in apps and on websites for different purposes. Human face identification and detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition and image database management [1]. Furthermore, facial feature characteristics are very effective in biometric identification, which automatically identifies a person from a digital image or a video image. Facial expressions are used not only to express our emotions, but also to provide important communicative cues during social interaction, such as our level of interest [2]. Among facial features, the eye features have the widest application domain. It is reported that facial expressions have a considerable effect on the listener; about 55 percent of the effect of the spoken words depends on the eye movements and facial expressions of the speaker.
Facial expressions play a major role in face recognition systems and in the image processing techniques of human-machine interfaces. There are several techniques for facial feature selection, such as Principal Component Analysis, distance calculation among face components, and template matching. This algorithm describes a simple template matching based facial feature selection technique and detects facial expressions based on the distances between facial features using a set of image databases. The facial expression recognition process involves three stages: pre-processing, facial feature extraction and classification of facial expressions.
II. REVIEW WORKS
Human face detection has drawn considerable attention in the past decades as it is one of the fundamental problems in computer
vision. Given a single image, the ideal face detection should identify and locate all faces regardless of its three-dimensional
position, orientation, and lighting conditions. The existing face detection techniques can be classified into four categories,
namely, knowledge-based methods, feature invariant approaches, template matching methods, and appearance-based methods.
The use of colour information has been introduced to the face-locating problem in recent years. Most publications [1-5] have shown that colour is a powerful descriptor that has practical use in face detection.
Modelling skin colour requires choosing an appropriate colour space and identifying a cluster associated with skin colour in this space. The YIQ colour space is used in commercial colour television broadcasting. The YCbCr space is a hardware-orientated model and is used in most video standards, so an effective use of the chrominance information for modelling human skin colour can be achieved in these colour spaces. Moreover, this format is typically used in video coding, and therefore using the same format for segmentation avoids the extra computation required in conversion. Many research studies [6-8] assume that the chrominance components of skin-tone colour are independent of the luminance component. In fact, the skin-tone colour is non-linearly dependent on luminance. Researchers have found that although the skin colours of different people appear to vary over a wide range, they differ less in chrominance than in brightness; in particular, the skin colours form a compact area in the YCbCr plane [9-10].
Human-like robots and machines that are expected to enjoy truly intelligent and transparent communication with humans can be created using automatic facial expression recognition with a set of specific desired accuracy and performance requirements.
Expression recognition involves a variety of subjects such as perceptual recognition, machine learning, affective computing etc.
One case study uses skin colour range of human face to localize face area. After face detection, various facial features are
identified by calculating the ratio of width of multiple regions in human face. Finally the test image is partitioned into a set of
sub-images and each of these sub-images is matched against a set of sub-pattern training set. Partitioning is done using Aw-
SpPCA algorithm. Given as input any emotion of face, this pattern training set will classify the particular emotion [1].
Face component extraction by dividing the face region into eye pair and mouth region and measurement of Euclidean distance
among various facial features is also adopted by a case study. Similar study is done by Neha Gupta to detect emotions. This
research includes four steps: pre-processing, edge detection, feature extraction and distance measurement among the features to
classify different emotions. This type of approach is classified as Geometric Approach [3].
Another research work presents a face detection method using a segmentation technique. First, the face area of the test image is detected using skin colour detection: the RGB colour space of the image is transformed into the YCbCr colour space and skin block quantization is performed to detect skin colour blocks. Next, a face cropping algorithm is used to localize the face region. Then, different facial features are extracted by segmenting each component region (eyes, nose, mouth). Finally, the vertical and angular distances between various facial features are measured, and based on these a unique facial expression is identified. This approach can be used in any biometric recognition system [2].
A template matching based facial feature detection technique is used in a different case study [4,7-8]. Different methods of
face detection and their comparative study are done in another review work. Face detection methods are divided into two primary
techniques: Feature based & View based methods [5, 9].
Gabor filters are used to extract facial features in another study. This approach is called Appearance based approach. This
classification based facial expression recognition method uses a bank of multilayer perceptron neural networks. Feature size
reduction is done by Principal Component Analysis (PCA) [6, 10]. Thus existing works primarily focus on detecting facial features, which then serve as input to an emotion recognition algorithm. In this study, a template based feature detection technique is used for facial feature selection, and then the distance between the eye and mouth regions is measured.
III. FACIAL EXPRESSION RECOGNITION SYSTEM AND RESULTS
Recent technology has shown how advanced image processing techniques, with the help of pattern recognition and artificial intelligence, can be effectively used in the automatic detection and classification of various facial signals. Among them, face recognition and facial expression recognition best describe the concept of man-machine interaction. In both of these techniques we perform pattern recognition. For example, in face recognition we consider two patterns, 'known' and 'unknown', whereas in facial expression recognition we consider patterns such as 'neutral', 'happy', 'sad', 'angry' and 'disgust'. Facial expression recognition can be used in behaviour monitoring and medical systems. In this paper we will show how the concept of face recognition, with the help of a neural network, can be used in the facial expression recognition process. The following figure shows the concept of a typical facial expression recognition system.
Figure 1: Block diagram of a typical facial expression recognition system.
At first we need to acquire the image on which we will apply our facial expression recognition techniques. Input image can be
captured by any kind of imaging system. If the Input Image is colour (RGB), then convert it to Gray scale image.
The input image can be of different sizes, formats, colours (RGB or grey), etc. Hence we should pre-process the input image so that we can efficiently apply our algorithm and get a better result. In the pre-processing stage we use a compression technique such as the 2D-DCT to compress the data, because an image consists of a large number of data points, which increases the computation time. We can also apply filtering techniques to remove noise from the input image, because the presence of any artefact can lead to false detection of facial features and hence wrong output.
As in all imaging processes, artefacts can occur, resulting in degraded image quality that can compromise evaluation. An artefact is a feature appearing in an image that is not present in the original object. Artefacts remain a problematic area and affect the quality of the image. Pre-processing (artefact removal) techniques are used to detect and remove the unwanted portions of the given images.
Algorithm for Artefact Removal:
Step 1. Greyscale facial images are taken as input.
Step 2. The threshold value of the image is calculated using the standard deviation technique.
Step 3. The image is binarized using the threshold value, i.e. pixels with a value greater than the threshold are set to 1 and pixels less than the threshold are set to 0.
Step 4. The binarized image is labelled and the areas of the connected components are calculated using equivalence classes.
Step 5. The connected component with the maximum area and the connected component with the second highest area are found.
Step 6. The ratio of the maximum area to the second maximum area is calculated.
Step 7. On the basis of this ratio, if the ratio is high only the component with the highest area is kept and all others are removed; otherwise, if the ratio is low, the components with the highest and second highest areas are kept and all others are removed.
Step 8. A convex hull is calculated for the retained pixels, and all regions within the convex hull are set to one.
Step 9. The image matrix obtained above is multiplied with the original image matrix to obtain the face image without any artefact.
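The steps above can be sketched in Python. The exact threshold rule and the high/low ratio cutoff are not given numerically in the text, so the mean-plus-standard-deviation threshold and the `ratio_cutoff` value below are assumptions; the convex hull step is omitted for brevity.

```python
import numpy as np

def remove_artefacts(gray, ratio_cutoff=4.0):
    """Sketch of the artefact-removal algorithm (Steps 2-7 and 9)."""
    # Steps 2-3: binarize with a statistics-based threshold (assumed rule).
    thr = gray.mean() + gray.std()
    binary = gray > thr
    # Step 4: label 4-connected components with a simple flood fill.
    labels = np.zeros(gray.shape, dtype=int)
    areas = {}
    next_label = 0
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                stack, count = [(i, j)], 0
                while stack:
                    y, x = stack.pop()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                areas[next_label] = count
    # Steps 5-7: keep the largest component, plus the second if the ratio is low.
    ranked = sorted(areas, key=areas.get, reverse=True)
    keep = {ranked[0]}
    if len(ranked) > 1 and areas[ranked[0]] / areas[ranked[1]] < ratio_cutoff:
        keep.add(ranked[1])
    mask = np.isin(labels, list(keep))
    # Step 9: multiply the mask with the original image.
    return gray * mask
```

On a test image containing one large face-like blob and a small isolated artefact, only the large component survives the ratio test.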
In RGB images each pixel has a particular colour, described by the amounts of red, green and blue in it. If each of these components has a range of 0-255, this gives a total of 256^3 different possible colours. Such an image is a "stack" of three matrices representing the red, green and blue values for each pixel, so for every pixel there correspond three values. In a greyscale image, by contrast, each pixel is a shade of grey, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte. Other greyscale ranges are used, but they are generally a power of 2. So we can say that a grey image takes less space in memory than an RGB image.
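The storage argument can be illustrated with a small sketch; the BT.601 luminance weights used here are a common convention, since the text does not specify a conversion formula.

```python
import numpy as np

# A 2x2 RGB image: three one-byte channel values per pixel.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)                       # one pure red pixel

# Weighted-sum conversion to greyscale (ITU-R BT.601 weights).
gray = (0.299 * rgb[..., 0]
        + 0.587 * rgb[..., 1]
        + 0.114 * rgb[..., 2]).astype(np.uint8)

print(rgb.nbytes, gray.nbytes)                # 12 bytes vs 4 bytes
print(gray[0, 0])                             # pure red maps to grey level 76
```

The greyscale array needs exactly one third of the RGB storage, matching the one-byte-per-pixel argument above.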
Edge detection refers to the process of identifying and locating sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize the boundaries of objects in a scene. Edges characterize boundaries and are therefore of fundamental importance in image processing and an important tool for image segmentation. The concept of an edge is highly useful in dealing with regions and boundaries, as an edge point is a transition in grey level of a point with respect to its background. Edges typically occur on the boundary between two regions. The following algorithm is used for edge detection as a pre-processing step.
Algorithm for Edge Detection:
Basic functions are defined which are used in algorithms:
Parent (i)
Return ⌊i/2⌋
Left (i)
Return 2i
Right (i)
Return 2i + 1
Total number of nodes in a Complete Binary Tree of height h
tNode (h)
Return 2^h − 1
The number of terminal nodes (leaf nodes) in a Complete Binary Tree
lNode (h)
Return 2^(h−1)
The number of internal nodes (non-leaf nodes) in a Complete Binary Tree
iNode (h)
Return tNode (h) − lNode (h)
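These index and count functions translate directly into code. Here h counts levels with the root at level 1 (our reading of the pseudocode), so a full 256-bin greyscale histogram sits at the leaves of a tree of height 9.

```python
def parent(i): return i // 2                  # ⌊i/2⌋
def left(i):   return 2 * i
def right(i):  return 2 * i + 1

def t_node(h): return 2 ** h - 1              # total nodes in a complete binary tree
def l_node(h): return 2 ** (h - 1)            # terminal (leaf) nodes
def i_node(h): return t_node(h) - l_node(h)   # internal (non-leaf) nodes

print(t_node(9), l_node(9), i_node(9))        # 511 256 255
```

With h = 9 there are 256 leaves, exactly one per intensity bin, which is why the histogram algorithms below index leaves starting at iNode(h) + 1.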
Algorithms for: Storing Original Colour Space at Leaf Nodes of Tree
ORIGINAL-HISTOGRAM (Image, height, width)
Loop x← 1 to height
Do Loop y← 1 to width
Do Intensity ← Image [x, y]
Tree [(iNode (h) + 1) + Intensity].count ← Tree [(iNode (h) + 1) + Intensity].count + 1
x←x +1
y←y +1
Return Tree
Algorithms for: generate quantize colour spaces in different level of tree
LEVEL-HISTOGRAM (Tree)
Loop x ⟵ iNode (h) + 1 To tNode (h)
Do Lcount ⟵ Tree (x).count
Loop y ⟵ Parent (x) down To 0
Do If x mod 2 ≠ 0
Then Tree [y].intensity ⟵ Tree [x].intensity
Tree [y].count ⟵ Tree [y].count + Lcount
Else If Tree [y].count < Tree [x].count
Then Tree [y].intensity ⟵ Tree [x].intensity
Tree [y].count ⟵ Tree [y].count + Lcount
x ⟵ y
y ⟵ Parent (x)
x ⟵ x + 1
Return Tree
Algorithms to: Calculate the Average Bin Distance
BIN-DISTANCE (Tree, h1)
TotBin ⟵ 0
TotBinDist ⟵ 0
Loop x ⟵ iNode (h1) + 2 to tNode (h1)
Do TotBin ⟵ TotBin + 1
TotBinDist ⟵TotBinDist + (Tree [x] .intensity - Tree[x - 1] .intensity)
x ⟵ x + 1
AvgBinDist ⟵ TotBinDist / TotBin
Return AvgBinDist
Algorithms for: Calculation of MDT by Identifying the Prominent Bins and Truncate the Non-Prominent Bins
CALCULATE-MDT (Tree, h1)
Tree [iNode (h1) + 1].prominent ⟵1
TotPrmBin ⟵ 0
TotPrmBinDist ⟵ 0
Loop x ⟵ iNode (h1) + 2 to tNode (h1)
Do If Tree [x].intensity − Tree [x − 1].intensity ≥ AvgBinDist
Then Tree [x].prominent ⟵ 1
TotPrmBin ⟵ TotPrmBin + 1
TotPrmBinDist ⟵ TotPrmBinDist + (Tree [x].intensity − Tree [x − 1].intensity)
Else Tree [x].prominent ⟵ 0
x ⟵ x + 1
MDT ⟵ TotPrmBinDist / TotPrmBin
Return MDT
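Working on the list of occupied bin intensities at the leaf level, BIN-DISTANCE and CALCULATE-MDT reduce to the following sketch; the tree bookkeeping is replaced by a plain sorted list for clarity.

```python
def average_bin_distance(intensities):
    """BIN-DISTANCE: average gap between consecutive occupied bins."""
    gaps = [b - a for a, b in zip(intensities, intensities[1:])]
    return sum(gaps) / len(gaps)

def calculate_mdt(intensities):
    """CALCULATE-MDT: mean gap over the prominent bins only, i.e. the bins
    whose distance from the previous bin is at least the average gap."""
    avg = average_bin_distance(intensities)
    prominent = [b - a for a, b in zip(intensities, intensities[1:]) if b - a >= avg]
    return sum(prominent) / len(prominent)

bins = [10, 12, 14, 50, 90]            # gaps: 2, 2, 36, 40
print(average_bin_distance(bins))      # 20.0
print(calculate_mdt(bins))             # (36 + 40) / 2 = 38.0
```

Large gaps mark the prominent bins, so MDT ends up measuring only the significant intensity jumps, which is what the edge maps below compare against.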
Algorithms for: Redraw the Image Using Truncated Histogram
REDRAW - IMAGE (Image, height, width, Tree, h1, h)
Loop x ⟵ 1 to height
Do Loop y ⟵ 1 to width
Do NewIntensity ⟵ (Image [x, y] / (tNode (h) / tNode (h1))) + 1
If Tree [iNode (h1) + NewIntensity + 1].prominent ≠ 1
Then While Tree [iNode (h1) + NewIntensity + 1].prominent ≠ 1
Do NewIntensity ⟵ NewIntensity − 1
NewImage [x, y] ⟵ NewIntensity
y ⟵ y + 1
x ⟵ x + 1
Return NewImage
Algorithms for: derive the Horizontal Edge of the image
HozEdgeMap (NewImage, height, width, MDT)
Loop x ⟵ 1 to height
Do Flag ⟵ 1
Loop y ⟵ 1 to width − 1
Do If Flag = 1
Then NewIntensity ⟵ NewImage [x, y]
NxtNewIntensity ⟵ NewImage [x, y + 1]
If |NewIntensity − NxtNewIntensity| ≥ MDT
Then Flag ⟵ 1
HozEdgeMapImage [x, y] ⟵ BLACK
Else
Flag ⟵ 0
HozEdgeMapImage [x, y] ⟵ WHITE
y ⟵ y + 1
x ⟵ x + 1
Return HozEdgeMapImage
Algorithms for: derive the Edge of the image
EDGEMAP (HozEdgeMapImage, VerEdgeMapImage, height, width)
Loop x ⟵ 1 to height
Do Loop y ⟵ 1 to width
Do EdgeMapImage [x, y] ⟵ HozEdgeMapImage [x, y] OR VerEdgeMapImage [x, y]
y ⟵ y + 1
x ⟵ x + 1
Return EdgeMapImage
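Vectorised over the whole image, the horizontal and vertical passes and their OR combination amount to the sketch below; the Flag bookkeeping of the pseudocode is folded into a plain neighbour comparison.

```python
import numpy as np

def edge_map(img, mdt):
    """Mark a pixel as an edge when the intensity jump to its right or lower
    neighbour is at least MDT, then OR the two maps (HozEdgeMap, VerEdgeMap
    and EDGEMAP combined)."""
    img = img.astype(int)
    hoz = np.zeros(img.shape, dtype=bool)
    ver = np.zeros(img.shape, dtype=bool)
    hoz[:, :-1] = np.abs(img[:, 1:] - img[:, :-1]) >= mdt   # row-wise jumps
    ver[:-1, :] = np.abs(img[1:, :] - img[:-1, :]) >= mdt   # column-wise jumps
    return hoz | ver

step = np.array([[0, 0, 100]] * 3)       # a vertical intensity step
print(edge_map(step, 50).astype(int))    # edges appear at the step column only
```

A jump at least as large as MDT, in either direction, is enough to mark the pixel, matching the BLACK/WHITE assignment in the pseudocode.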
The results obtained are shown in Figure 2.
Fig. 2: (a) Template of the Face (b) Edge Detection of Side Face (c) Edge Detection of Front Face
Once we acquire the face image, we need to extract the facial features from the background of the image. In this paper we use a skin colour based face detection technique, which uses the RGB and HSV colour models. Another colour model (YCbCr) can also be used to detect the skin colour region.
We use the 2D-DCT to compress the extracted facial features, which makes our processing faster. As our algorithm uses an image database, we have to apply the compression technique to all the images in the database. An example image on which we execute our proposed algorithm is shown in Figure 5.
Fig. 3: Original image containing a single frontal viewed face. Fig. 4: Image in HSV colour space. Fig. 5: Extracted skin colour region from the image.
Fig. 6: Binary image showing region boundary. Fig. 7: Input image. Fig. 8: Detected face region.
The following algorithm extracts facial features.
Facial_Feature_Detection (Input Image, Template Images)
Step1. Start
Step2. Read Input Human Face Image.
If the Input Image is colour (RGB), then
convert it to Gray scale Image and save the pixel values to a 2D array let gface.
Else
save the pixel values of the input image to a 2D array let gface.
Step3. Read Left eye template image.
If the template image is colour (RGB), then
convert it to Gray scale Image and save the pixel values to a 2D array let gleft.
Else
save the pixel values of the input image to a 2D array let gleft.
Step4. Read Right eye template image.
If the template image is colour (RGB), then
convert it to Gray scale Image and save the pixel values to a 2D array let gright.
Else
save the pixel values of the input image to a 2D array let gright.
Step5. Read Nose template image.
If the template image is colour (RGB), then
convert it to Gray scale Image and save the pixel values to a 2D array let gnose.
Else
save the pixel values of the input image to a 2D array let gnose.
Step6. Read Mouth template image.
If the template image is colour (RGB), then
convert it to Gray scale Image and save the pixel values to a 2D array let gmouth.
Else
save the pixel values of the input image to a 2D array let gmouth.
Step7. Declare 4 2D Array C1, C2, C3 & C4 of size m*n where m*n is the size of gface.
Step8. Calculate C1[][] = 2D_norm_crosscorr(gleft,gface)
C2[][]= 2D_norm_crosscorr(gright,gface)
C3[][] = 2D_norm_crosscorr(gnose,gface)
C4[][] = 2D_norm_crosscorr(gmouth,gface)
Step9. Call (x11,y11,w1,h1) = Find_max(C1)
(x21,y21,w2,h2) = Find_max(C2)
(x31,y31,w3,h3) = Find_max(C3)
(x41,y41,w4,h4) = Find_max(C4)
where (x11,y11,w1,h1), (x21,y21,w2,h2), (x31,y31,w3,h3), (x41,y41,w4,h4) are top – left pixel
coordinate, width, height of the matched rectangular area around left eye, right eye,
nose and mouth respectively.
Step10. Calculate x12 = x11 + w1 & y12 = y11 + h1
x22 = x21 + w2 & y22 = y21 + h2
x32 = x31 + w3 & y32 = y31 + h3
x42 = x41 + w4 & y42 = y41 + h4
where (x12,y12), (x22,y22), (x32,y32), (x42,y42) are bottom right pixel coordinate of the
matched rectangular area around left eye, right eye, nose and mouth respectively.
Step11. Draw Boundary Rectangle around left eye in gface with top left, top right, bottom
left and bottom right pixel coordinates as (x11,y11), (x12,y11), (x11,y12) & (x12,y12)
respectively.
Draw Boundary Rectangle around right eye in gface with top left, top right, bottom
left and bottom right pixel coordinates as (x21,y21), (x22,y21), (x21,y22) & (x22,y22)
respectively.
Draw Boundary Rectangle around nose in gface with top left, top right, bottom left
and bottom right pixel coordinates as (x31,y31), (x32,y31), (x31,y32) & (x32,y32)
respectively.
Draw Boundary Rectangle around mouth in gface with top left, top right, bottom left
and bottom right pixel coordinates as (x41,y41), (x42,y41), (x41,y42) & (x42,y42)
respectively.
Calculate middle point pixel coordinate (x1mid,y1mid) of the boundary rectangle around
Left eye as x1mid = (x11+x12)/2 and y1mid = (y11+y12)/2.
Step12. Calculate Euclidian Distance between middle point pixel coordinate (x1mid,y1mid) of
the boundary rectangle around left eye and top – left pixel coordinate (x41,y41) of the
boundary rectangle around mouth as:
Dist1 = √((x1mid − x41)² + (y1mid − y41)²) units.
Step13. Calculate middle point pixel coordinate (x2mid,y2mid) of the boundary rectangle around
right eye as x2mid = (x21+x22)/2 and y2mid = (y21+y22)/2.
Step14. Calculate Euclidian Distance between middle point pixel coordinate (x2mid,y2mid) of
the boundary rectangle around right eye and top – right pixel coordinate (x42,y41) of
the boundary rectangle around mouth as:
Dist2 = √((x2mid − x42)² + (y2mid − y41)²) units.
Step15. Write the value of Dist1 and Dist2 in a output text file for comparison.
Step16. Repeat steps 1 to 15 for the same human face but with a smiling facial expression.
Step17. Compare both input face images according to the distances measured between the eyes and
mouth. The image with the larger distance is, in general, considered the happy or smiling
face.
Step18. Exit
2D_Norm_Crosscorr (Template Gray scale Image, Input Gray scale Image)
Step1. Start
Step2. Perform 2D Cross Correlation between Template Image and Input Image pixel values
and return 2D array C of size m*n with values of the corresponding Cross correlation
where m*n is the size of the Input Image.
Step3. End
Find_Max(C[ ][ ])
Step1. Start
Step2. Find Maximum Value of 2D Array C[ ][ ] and determine the corresponding
rectangular region where the maximum value is found.
Step3. Find top – left position coordinate (x,y), width (w) and height (h) of the rectangular
region and return the values.
Step4. End
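A direct (unoptimised) Python sketch of 2D_Norm_Crosscorr and Find_Max follows. As a simplification of the m*n output array in the pseudocode, the correlation surface here is only defined where the template fits entirely inside the image.

```python
import numpy as np

def norm_crosscorr(template, image):
    """Normalised cross-correlation of a template against every same-sized
    window of the image (a direct sketch of 2D_Norm_Crosscorr)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw].astype(float)
            w = w - w.mean()
            denom = tnorm * np.sqrt((w ** 2).sum())
            out[y, x] = (t * w).sum() / denom if denom > 0 else 0.0
    return out

def find_max(c, th, tw):
    """Top-left corner, width and height of the best-matching window (Find_Max)."""
    y, x = np.unravel_index(np.argmax(c), c.shape)
    return x, y, tw, th
```

Embedding a template in an otherwise blank image yields a correlation of exactly 1 at the true position, so the left-eye, right-eye, nose and mouth rectangles of Steps 8-10 fall out of four such calls.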
Facial_expression_recognition (Input Image, 3 Training Image Databases)
Step1. Start
Step2. Read the input human face image and store its pixel values in an array, say face.
Step3. Call Processed_Face = imPreprocess(face)
Step4. Set i=1
Step5. Repeat Step 6 to 9 for every image of the Train_Neutral_Other image database
Step6. Read the image from the database and store its pixel values in an array, say t.
Step7. Call t1 = imPreprocess(t)
Step8. Store t1 into the image cell Train_Neutral_Other_Cell as Train_Neutral_Other_Cell(1,i) = t1
Step9. Set i=i+1
Step10. Set i=1
Step11. Repeat Step 12 to 15 for every image of the Train_Smiling_Other image database
Step12. Read the image from the database and store its pixel values in an array, say t.
Step13. Call t1 = imPreprocess(t)
Step14. Store t1 into the image cell Train_Smiling_Other_Cell as Train_Smiling_Other_Cell(1,i) = t1
Step15. Set i=i+1
Step16. Set i=1
Step17. Repeat Step 18 to 21 for every image of the Train_Angry_Sad image database
Step18. Read the image from the database and store its pixel values in an array, say t.
Step19. Call t1 = imPreprocess(t)
Step20. Store t1 into the image cell Train_Angry_Sad_Cell as Train_Angry_Sad_Cell(1,i) = t1
Step21. Set i=i+1
Step22. Create 3 2D arrays of size (no_of_images * mn) for the 3 training databases, where no_of_images is the total number of images in the corresponding training database and m, n refer to the predefined size used in the imPreprocess function. Let traindata1 (n1*mn), traindata2 (n2*mn) and traindata3 (n3*mn) be the arrays for the Train_Neutral_Other, Train_Smiling_Other and Train_Angry_Sad training image databases respectively, where n1, n2 and n3 are the numbers of images in the corresponding databases.
Step23. Initialize all elements of the traindata1, traindata2 and traindata3 arrays to 0.
Step24. Set i=1
Step25. Repeat Step 26 to 27 for n1 times
Step26. Set traindata1(i,:) = Train_Neutral_Other_Cell(1,i)
Step27. Set i=i+1
Step28. Set i=1
Step29. Repeat Step 30 to 31 for n2 times
Step30. Set traindata2(i,:) = Train_Smiling_Other_Cell(1,i)
Step31. Set i=i+1
Step32. Set i=1
Step33. Repeat Step 34 to 35 for n3 times
Step34. Set traindata3(i,:) = Train_Angry_Sad_Cell(1,i)
Step35. Set i=i+1
Step36. Create 3 1D arrays, namely class1, class2 and class3, of size n1, n2 and n3 respectively, corresponding to the Train_Neutral_Other, Train_Smiling_Other and Train_Angry_Sad training image databases.
Step37. Set i=1
Step38. Repeat Step 39 to 40 for all images of the Train_Neutral_Other image database
Step39. If the ith image of Train_Neutral_Other is of Neutral expression
then set class1(i) = 1
else set class1(i) = -1
Step40. Set i=i+1
Step41. Set i=1
Step42. Repeat Step 43 to 44 for all images of the Train_Smiling_Other image database
Step43. If the ith image of Train_Smiling_Other is of Smiling expression
then set class2(i) = 2
else set class2(i) = -2
Step44. Set i=i+1
Step45. Set i=1
Step46. Repeat Step 47 to 48 for all images of the Train_Angry_Sad image database
Step47. If the ith image of Train_Angry_Sad is of Angry expression
then set class3(i) = 3
else set class3(i) = -3
Step48. Set i=i+1
Step49. Call SVMTrained1 = SVM_Training(traindata1, class1)
SVMTrained2 = SVM_Training(traindata2, class2)
SVMTrained3 = SVM_Training(traindata3, class3)
Step50. Call result1 = SVM_Classify(SVMTrained1, Processed_Face)
result2 = SVM_Classify(SVMTrained2, Processed_Face)
result3 = SVM_Classify(SVMTrained3, Processed_Face)
Step51. Call FinalExpression = Recognize_Expression(result1, result2, result3)
Step52. Display FinalExpression as output
Step53. Exit
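The main procedure above builds, for each of the three training databases, a 2D matrix whose rows are flattened images, together with a signed label vector (+k for images showing the target expression, -k for the rest). A minimal Python sketch of that data layout (the paper gives no code; the function and argument names here are illustrative):

```python
def build_training_set(images, is_target, k):
    """Build one training matrix and its signed label vector.

    images: list of flattened pixel rows (each a list of floats, length m*n)
    is_target: is_target[i] is True when image i shows the target expression
    k: class code (1 = neutral, 2 = smiling, 3 = angry); non-targets get -k
    """
    traindata = [list(row) for row in images]       # n_images x (m*n) matrix
    classes = [k if t else -k for t in is_target]   # +k / -k labels
    return traindata, classes
```

The three calls would use k = 1, 2 and 3 for the neutral-vs-other, smiling-vs-other and angry-vs-sad databases respectively.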
imPreprocess (Image_Pixel_Array)
Step1. Start
Step2. Convert Image_Pixel_Array to its corresponding double format, say Image_Pixel_Array_Double.
Step3. If Image_Pixel_Array_Double is of format a*b*3 (colour), then
convert it to the corresponding grayscale image and save the pixel values in a 2D array, say gImage.
Else
save the pixel values of the input image in a 2D array, say gImage.
Step4. Resize gImage to a predefined size, say m*n, and save the pixel values in a 2D array, say gImage_Resized.
Step5. Reshape the gImage_Resized array to a 2D array of size 1*(mn) and save the pixel values in an array, say gImage_Reshaped.
Step6. Return the array gImage_Reshaped.
Step7. End
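The preprocessing routine above (colour to grayscale, resize to m*n, reshape to a 1*(mn) row) can be sketched in plain Python. The nearest-neighbour resize and the specific luminance weights are illustrative choices, not taken from the paper:

```python
def im_preprocess(pixels, m=32, n=32):
    # pixels: 2D list (grayscale) or 3D list (H x W x 3 colour), values 0..255.
    # Returns a single flattened row of length m*n (floats).
    if isinstance(pixels[0][0], (list, tuple)):
        # Colour to grayscale with the usual Rec. 601 luminance weights.
        g = [[0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2] for p in row]
             for row in pixels]
    else:
        g = [[float(v) for v in row] for row in pixels]
    h, w = len(g), len(g[0])
    # Nearest-neighbour resize to the predefined m x n size.
    resized = [[g[i * h // m][j * w // n] for j in range(n)] for i in range(m)]
    # Reshape m x n to a single 1 x (m*n) row.
    return [v for row in resized for v in row]
```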
SVM_Training (Training_Data, Group_Membership_Class)
Step1. Start
Step2. Train a linear Support Vector Machine with Training_Data and Group_Membership_Class, and store the trained model in an array, say SVM1.
Step3. Return the array SVM1
Step4. End
SVM_Classify (SVM_Trained, Img_Array)
Step1. Start
Step2. Classify Img_Array into one of the two classes using the trained model SVM_Trained as a binary SVM classifier, and store the result in a variable, say Classifier1.
Step3. Return the value Classifier1
Step4. End
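SVM_Training and SVM_Classify are left abstract above; in practice one would call an existing SVM library. As a self-contained stand-in, here is a linear SVM trained with Pegasos-style stochastic subgradient descent on the hinge loss. The hyper-parameters and the signed +k/-k output coding follow the paper's label scheme, but the training method itself is an illustrative substitute:

```python
def svm_training(traindata, classes, lam=0.01, epochs=200):
    # Linear SVM via Pegasos-style subgradient descent on the hinge loss.
    # Only the sign of each label is used; a bias feature is appended.
    dim = len(traindata[0]) + 1
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, c in zip(traindata, classes):
            t += 1
            eta = 1.0 / (lam * t)                 # decreasing step size
            xi = list(x) + [1.0]                  # bias feature
            y = 1 if c > 0 else -1
            margin = y * sum(wi * v for wi, v in zip(w, xi))
            decay = 1.0 - eta * lam               # regularization shrinkage
            if margin < 1.0:                      # hinge loss is active
                w = [decay * wi + eta * y * v for wi, v in zip(w, xi)]
            else:
                w = [decay * wi for wi in w]
    return w

def svm_classify(w, img_row, k):
    # Signed decision: +k on the positive side of the hyperplane, else -k.
    score = sum(wi * v for wi, v in zip(w, list(img_row) + [1.0]))
    return k if score >= 0 else -k
```

With the cascade's label coding, the first classifier returns +1/-1, the second +2/-2 and the third +3/-3.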
Recognize_Expression (Val1, Val2, Val3)
Step1. Start
Step2. If Val1=1
Set Expression=Neutral
Else if Val1= -1 and Val2=2
Set Expression=Smiling
Else if Val1= -1 and Val2= -2 and Val3=3
Set Expression=Angry
Else if Val1= -1 and Val2= -2 and Val3= -3
Set Expression=Sad
Step3. Return the value of Expression
Step4. End
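The decision cascade above maps the three signed classifier outputs to a final expression label. Written directly in Python:

```python
def recognize_expression(val1, val2, val3):
    # Cascade over the three binary SVM outputs:
    # neutral vs other, then smiling vs other, then angry vs sad.
    if val1 == 1:
        return "Neutral"
    elif val1 == -1 and val2 == 2:
        return "Smiling"
    elif val1 == -1 and val2 == -2 and val3 == 3:
        return "Angry"
    elif val1 == -1 and val2 == -2 and val3 == -3:
        return "Sad"
```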
The following figures describe the output of the proposed system:
Fig. 9: Test image; Fig. 10: Templates of this image; Fig. 11: Left eye template; Fig. 12: Detected face region; Fig. 13: Neutral face; Fig. 14: Neutral face template matching; Fig. 15: Smiling face
IV. CONCLUSIONS
A facial expression recognition or emotion detection system has numerous applications in the image processing and security domains, and in biometric systems of any kind.
REFERENCES
[1] Farah Azirar, "Facial Expression Recognition", Bachelor Thesis, School of Electronics and Physical Sciences, Department of Electronic Engineering, University of Surrey, 2004.
[2] L. Franco and A. Treves, "A Neural Network Face Expression Recognition System using Unsupervised Local Processing", Proceedings of the Second International Symposium on Image and Signal Processing and Analysis (ISPA 01), Croatia, pp. 628-632, June 2001.
[3] Angel Noe Martinez-Gonzalez and Victor Ayala-Ramirez, "Real Time Face Detection Using Neural Networks", 10th Mexican International Conference on Artificial Intelligence, 2011.
[4] M. Suwa, N. Sugie, and K. Fujimora, "A Preliminary Note on Pattern Recognition of Human Emotional Expression", International Joint Conference on Pattern Recognition, pp. 408-410, 1978.
[5] H. A. Rowley, S. Baluja and T. Kanade, "Rotation Invariant Neural Network-Based Face Detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998, pp. 38-44.
[6] C. H. Lee, J. S. Kim, K. H. Park, "Automatic Human Face Location in a Complex Background Using Motion and Colour Information", Pattern Recognition, vol. 29, no. 11, 1996, pp. 129-140.
[7] K. Sobottka, I. Pitas, "A Novel Method for Automatic Face Segmentation, Facial Feature Extraction and Tracking", Signal Processing: Image Communication, vol. 12, no. 3, 1998, pp. 263-281.
[8] Rein-Lien Hsu, Mohamed Abdel-Mottaleb, and Anil K. Jain, "Face Detection in Colour Images", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, 2002, pp. 696-706.
[9] M. Turk and A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991, pp. 71-86.
[10] H. A. Rowley, "Neural Network-Based Face Detection", PhD thesis, Carnegie Mellon University, Pittsburgh, 1999.
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
 
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
 
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
 
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
 
HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014
HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014
HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014
 
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
 
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
 
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
 
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
 
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
 
LINE FOLLOWER ROBOT | J4RV4I1010
LINE FOLLOWER ROBOT | J4RV4I1010LINE FOLLOWER ROBOT | J4RV4I1010
LINE FOLLOWER ROBOT | J4RV4I1010
 
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
 
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
 
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
 
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
 
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
 
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
 
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
 

Recently uploaded

ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYKayeClaireEstoconing
 
Judging the Relevance and worth of ideas part 2.pptx
Judging the Relevance  and worth of ideas part 2.pptxJudging the Relevance  and worth of ideas part 2.pptx
Judging the Relevance and worth of ideas part 2.pptxSherlyMaeNeri
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxAnupkumar Sharma
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONHumphrey A Beña
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parentsnavabharathschool99
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4MiaBumagat1
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designMIPLM
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersSabitha Banu
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 

Recently uploaded (20)

ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITYISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
ISYU TUNGKOL SA SEKSWLADIDA (ISSUE ABOUT SEXUALITY
 
Judging the Relevance and worth of ideas part 2.pptx
Judging the Relevance  and worth of ideas part 2.pptxJudging the Relevance  and worth of ideas part 2.pptx
Judging the Relevance and worth of ideas part 2.pptx
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptxLEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
LEFT_ON_C'N_ PRELIMS_EL_DORADO_2024.pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptxYOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
YOUVE_GOT_EMAIL_PRELIMS_EL_DORADO_2024.pptx
 
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptxMULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
MULTIDISCIPLINRY NATURE OF THE ENVIRONMENTAL STUDIES.pptx
 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
 
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATIONTHEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
THEORIES OF ORGANIZATION-PUBLIC ADMINISTRATION
 
Choosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for ParentsChoosing the Right CBSE School A Comprehensive Guide for Parents
Choosing the Right CBSE School A Comprehensive Guide for Parents
 
ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4ANG SEKTOR NG agrikultura.pptx QUARTER 4
ANG SEKTOR NG agrikultura.pptx QUARTER 4
 
Keynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-designKeynote by Prof. Wurzer at Nordex about IP-design
Keynote by Prof. Wurzer at Nordex about IP-design
 
DATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginnersDATA STRUCTURE AND ALGORITHM for beginners
DATA STRUCTURE AND ALGORITHM for beginners
 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptxYOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
YOUVE GOT EMAIL_FINALS_EL_DORADO_2024.pptx
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 

DETECTING FACIAL EXPRESSION IN IMAGES

Now once we have the face image data, we need to apply processing techniques drawn from pattern recognition, artificial intelligence, mathematics, computer science, electronics or related fields. Computer vision research spans a huge number of applications, but here we discuss only face recognition and facial expression recognition. Facial expression detection plays an important role in many of these applications.

Research in the field of social psychology shows that facial expressions are more natural than the speaker's spoken words and truly reflect the emotion of a person. According to statistical reports, the verbal part of a message contributes only 7 percent to the effect of the message as a whole; the vocal part contributes 38 percent, while the facial expression of the speaker contributes 55 percent to the effect of the spoken message. The first facial expression recognition system was introduced in 1978 by Suwa et al. [4]. The main issues in building a facial expression recognition system are face detection [3] and alignment, image normalization, feature extraction, and classification.

The analysis of the human face via images (and video) has been one of the most active research topics in the image community in recent years. From the analysis (sensing, computing, and perception) of face images, much information can be extracted, such as the sex/gender, age, facial expression, emotion/temper, mental state and behaviour/psychology, and the health of the person captured. With this information, many practical tasks can be performed; these include not only person identification or verification (face recognition), but also the estimation and/or determination of a person's profession, hobby, name (recovered from memory), etc.
Research on face image analysis has been and is being carried out around various application topics, such as (in alphabetical order) age estimation, biometrics, biomedical instrumentation, emotion assessment, face recognition, facial expression classification, gender determination, human-computer/human-machine interaction, human behaviour and emotion study, industrial automation, military service, psychosis judgment, security checking systems, social signal processing, surveillance systems, sport training, tele-medicine service, etc. Facial expressions are therefore the most important information for emotion perception in face-to-face communication.

This paper describes an approach to the problem of facial feature extraction from a normal frontal posed image. For face portion segmentation, basic image processing operations such as morphological dilation, erosion and reconstruction with a disk structuring element are used. Six permanent facial features, namely the left and right eyebrows, the left and right eyes, the mouth and the nose, are extracted using facial geometry, edge projection analysis and distance measures. A feature vector is formed from the height and width of each of these six features (left eye, left eyebrow, right eye, right eyebrow, nose and mouth), along with the distance between the left eye and eyebrow, the distance between the right eye and eyebrow, and the distance between the nose and mouth.

Human face detection has drawn considerable attention in the past decades as it is one of the fundamental problems in computer vision. Given a single image, the ideal face detector should identify and locate all faces regardless of their three-dimensional position, orientation, and lighting conditions. The existing face detection techniques can be classified into four
categories, namely: knowledge-based methods, feature invariant approaches, template matching methods, and appearance-based methods.

Human face detection and segmentation has been an active research area until recently. This field of research plays an important role in many applications such as face identification systems, face tracking, video surveillance and security control systems, and human-computer interfaces. These applications often require a segmented human face which is ready to be processed. Many factors influence the success of human face detection and segmentation, including a complex colour background, illumination conditions, changes of position and expression, rotation of the head, and the distance between camera and subject.

Face detection is a sub-branch of object detection. The human face is a dynamic object and has a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision. Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. However, many reported methods assume that the faces in an image or an image sequence have already been identified and localized. To build fully automated systems that analyse the information contained in face images, robust and efficient face detection algorithms are required.

Given a single image, the goal of face detection is to identify all image regions which contain a face regardless of its three-dimensional position, orientation, and lighting conditions. Such a problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, colour, and texture. Numerous techniques have been developed to detect faces in a single image.
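The morphological operations mentioned earlier for face portion segmentation (dilation and erosion with a disk structuring element) can be sketched as follows. This is a minimal illustrative implementation in plain numpy, not the authors' code; real systems would typically use an optimized library routine.

```python
import numpy as np

def disk(radius):
    """Disk-shaped structuring element, as used for face-region segmentation."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y <= radius * radius).astype(np.uint8)

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if the SE hits any foreground pixel."""
    r, c = se.shape
    padded = np.pad(img, ((r // 2, r // 2), (c // 2, c // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.any(padded[i:i + r, j:j + c] & se)
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the SE fits inside the foreground."""
    r, c = se.shape
    padded = np.pad(img, ((r // 2, r // 2), (c // 2, c // 2)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.all(padded[i:i + r, j:j + c][se == 1])
    return out
```

An erosion followed by a dilation (morphological opening) removes isolated noise pixels while preserving the bulk of a segmented face region, which is the typical use in this pipeline.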
Face detection and localization is the task of checking whether a given input image contains any human face and, if so, returning the location of the face in the image. The wide variety of applications and the difficulty of face detection have made it an interesting problem for researchers in recent years. Face detection is difficult mainly due to a large component of non-rigidity and textural differences among faces. The great challenge of the face detection problem is the large number of factors that govern the problem space: the pose, orientation, facial expressions and facial sizes found in the image, luminance conditions, occlusion, structural components, gender and ethnicity of the subject, and the scene and complexity of the image's background. The scene in which the face is placed ranges from a simple uniform background to highly complex backgrounds; in the latter case it is obviously more difficult to detect a face. Faces appear totally different under different lighting conditions. Not only do different persons have different sized faces, but faces closer to the camera appear larger than faces that are far away from it.

Basic emotions are emotions that have been scientifically shown to have a certain facial expression associated with them. The different emotional states are indicated as follows.
Facial feature detection systems are fast becoming a familiar feature in 'apps' and on websites for different purposes. Human face identification and detection is often the first step in applications such as video surveillance, human-computer interfaces, face recognition and image database management [1]. Furthermore, facial feature characteristics are very effective in biometric identification, which automatically identifies a person from a digital image or a video image. Facial expressions are used not only to express our emotions, but also to provide important communicative cues during social interaction, such as our level of interest [2]. Among facial features, the eye features have the widest application domain. It is reported that facial expressions have considerable effects on the listener: about 55% of the effect of the spoken words depends on the eye movements and facial expressions of the speaker. Facial expressions play a major role in face recognition systems and in image processing techniques for human-machine interfaces.

There are several techniques for facial feature selection, such as Principal Component Analysis, distance calculation among face components, and template matching. This algorithm describes a simple template matching based facial feature selection technique and detects facial expressions based on distances between facial features, using a set of image databases. The facial expression recognition process involves three stages: pre-processing, facial feature extraction, and classification of facial expressions.

II. REVIEW WORKS

Human face detection has drawn considerable attention in the past decades as it is one of the fundamental problems in computer vision. Given a single image, the ideal face detector should identify and locate all faces regardless of their three-dimensional position, orientation, and lighting conditions.
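The template matching idea described above can be sketched with a normalized cross-correlation search, implemented here from scratch rather than with any particular library. The function names, the exhaustive sliding-window search and the Euclidean distance helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left (row, col)
    of the window with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum() * (t * t).sum())
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos

def feature_distance(p, q):
    """Euclidean distance between two detected feature positions."""
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))
```

Once eye and mouth templates have been located this way, the distances between their positions form the kind of expression features the distance-based stage works on.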
The existing face detection techniques can be classified into four categories, namely: knowledge-based methods, feature invariant approaches, template matching methods, and appearance-based methods. The use of colour information has been introduced to the face-locating problem in recent years. Most publications [1-5] have shown that colour is a powerful descriptor with practical use in face detection. Modelling skin colour requires choosing an appropriate colour space and identifying a cluster associated with skin colour in this space. The YIQ colour space is used in commercial colour television broadcasting. The YCbCr space is a hardware-oriented model and is used in most video standards, so an effective use of the chrominance information for modelling human skin colour can be achieved in these colour spaces. Moreover, this format is typically used in video coding, and therefore using the same format for segmentation avoids the extra computation required for conversion. Many research studies [6-8] assume that the chrominance components of skin-tone colour are independent of the luminance component. In fact, skin-tone colour is non-linearly dependent on luminance. Researchers have found that although the skin colours of different people appear to vary over a wide range, they differ less in chrominance than in brightness, and skin colours form a compact area in the YCbCr plane [9-10].

Human-like robots and machines that are expected to enjoy truly intelligent and transparent communication with humans can be created using automatic facial expression recognition with a set of specific desired accuracy and performance requirements. Expression recognition involves a variety of subjects such as perceptual recognition, machine learning, affective computing, etc. One case study uses the skin colour range of the human face to localize the face area.
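A chrominance-based skin localization of the kind this case study describes can be sketched as follows. The Cb/Cr threshold ranges are common values from the skin-detection literature, not figures taken from this paper, and the BT.601 conversion matrix is the standard one used in video coding.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold only the chrominance plane, reflecting the observation that
    skin colours cluster compactly in Cb/Cr regardless of brightness."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The resulting boolean mask marks candidate skin blocks; the face region is then localized from the largest connected block of skin pixels.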
After face detection, various facial features are identified by calculating the ratios of the widths of multiple regions of the human face. Finally, the test image is partitioned into a set of sub-images, and each of these sub-images is matched against a sub-pattern training set. Partitioning is done using the Aw-SpPCA algorithm. Given any facial emotion as input, this pattern training set classifies the particular emotion [1]. Another case study extracts face components by dividing the face region into an eye-pair region and a mouth region and measuring the Euclidean distance among various facial features. A similar study by Neha Gupta detects emotions in four steps: pre-processing, edge detection, feature extraction, and distance measurement among the features to classify different emotions. This type of approach is classified as a geometric approach [3].

Another line of research uses a segmentation technique for face detection. First, the face area of the test image is detected using skin colour detection: the image is transformed from the RGB colour space into the YCbCr colour space, and skin block quantization is performed to detect skin-coloured blocks. Next, a face cropping algorithm is used to localize the face region. Then, different facial features are extracted by segmenting each component region (eyes, nose, mouth). Finally, vertical and angular distances between the facial features are measured, and based on these a unique facial expression is identified. This approach can be used in any biometric recognition system [2]. A template matching based facial feature detection technique is used in a different case study [4, 7-8]. A comparative study of face detection methods in another review work divides them into two primary techniques: feature-based and view-based methods [5, 9]. Gabor filters are used to extract facial features in yet another study; this is called an appearance-based approach.
This classification based facial expression recognition method uses a bank of multilayer perceptron neural networks, with feature size reduction done by Principal Component Analysis (PCA) [6, 10]. Thus, existing works have primarily focused on detecting facial features, which then serve as input to an emotion recognition algorithm. In this study, a template based feature detection technique is used for facial feature selection, and then the distance between the eye and mouth regions is measured.

III. FACIAL EXPRESSION RECOGNITION SYSTEM AND RESULTS

Recent technology has shown how advanced image processing techniques, with the help of pattern recognition and artificial intelligence, can be effectively used in the automatic detection and classification of various facial signals. Among these, face recognition and facial expression recognition best illustrate the concept of man-machine interaction. Both techniques perform pattern recognition: in face recognition we consider two patterns, 'known' and 'unknown', whereas in facial expression recognition we consider five patterns, 'neutral', 'happy', 'sad', 'angry' and 'disgust'. Facial expression recognition can be used in behaviour monitoring systems and medical systems. In this paper we show how the concept of face recognition, with the help of a neural network, can be used in the facial expression recognition process. The following figure shows the concept of a typical facial expression recognition system.

Figure 1: Block diagram of a typical facial expression recognition system.

First we need to acquire the image on which we will apply our facial expression recognition techniques. The input image can be captured by any kind of imaging system. If the input image is colour (RGB), it is converted to a greyscale image. The input image can be of different sizes, formats, and colour modes (RGB or grey).
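The RGB-to-greyscale conversion in this acquisition step can be sketched with the standard luminance weighting. The ITU-R BT.601 weights used here are a conventional choice, not ones specified by the paper.

```python
import numpy as np

def to_grayscale(rgb):
    """Weighted sum of the R, G, B channels; the weights sum to 1, so the
    result stays within the 0-255 range of an 8-bit greyscale image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.rint(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
```

The green channel gets the largest weight because the human eye is most sensitive to green; a plain channel average would also work but matches perceived brightness less well.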
Hence we should preprocess the input image, so that we can efficiently apply our algorithm to get better result. In the preprocessing technique we use some compression technique like 2D-DCT to compress the data, because an image consists a large number of data, which increases the computation time. Also, we can apply some filtering techniques to remove the noise from the input image, because the presence of any artifacts can lead to false detection of facial features, which could produce wrong output. As in all imaging process, artifacts can occur, resulting in degraded quality of image which can compromise imaging evaluation. An artifact is a feature appearing in an image that is not present in the original object. Artefacts remain a problematic area and it affects the quality of the image. Pre-processing (artefact removal) techniques are used to improve the detection of the unwanted portion from the given images. Algorithm for Artefact Removal: 1) Step 1. Grayscale facial images are taken as input. 2) Step 2.Threshold value of the image is calculated using the standard deviation technique described above. 3) Step 3. The image is binarized using the threshold value. i.e. pixels having value greater than the threshold is set to 1 and pixels less than the threshold are set to 0. 4) Step 4. The binarized image is labelled and areas of connected components are calculated using equivalence classes. 5) Step 5. The connected component with the maximum area and the connected component with the second highest area are found out. 6) Step 6. The ratio of the maximum area to that of second maximum area are calculated.
7) Step 7. If the ratio is high, only the component with the highest area is kept and all others are removed; if the ratio is low, the components with the highest and second-highest areas are kept and all others are removed.
8) Step 8. A convex hull is calculated for the pixels set to one in the image, and all regions within the convex hull are set to one.
9) Step 9. The image matrix obtained above is multiplied with the original image matrix to obtain the image without any artefact.

In RGB images each pixel has a particular colour, described by its amounts of red, green and blue. If each of these components has a range of 0–255, this gives a total of 256^3 possible colours. Such an image is a "stack" of three matrices representing the red, green and blue values for each pixel, so every pixel corresponds to 3 values. In a grayscale image, by contrast, each pixel is a shade of gray, normally from 0 (black) to 255 (white). This range means that each pixel can be represented by eight bits, or exactly one byte. Other grayscale ranges are used, but they are generally a power of 2. A gray image therefore takes less memory than an RGB image.

Edge detection refers to the process of identifying and locating sharp discontinuities in an image. The discontinuities are abrupt changes in pixel intensity which characterize the boundaries of objects in a scene. Because edges characterize boundaries, edge detection is a problem of fundamental importance in image processing and an important tool for image segmentation. The concept of an edge is highly useful in dealing with regions and boundaries, as an edge point is a transition in gray level between a point and its background. Edges typically occur on the boundary between two regions.
The following algorithm is used for edge detection as a preprocessing step.

Algorithm for Edge Detection:

Basic functions used in the algorithms (for a complete binary tree of height h):

Parent (i): return ⌊i/2⌋
Left (i): return 2i
Right (i): return 2i + 1
tNode (h): return 2^h − 1            (total number of nodes)
lNode (h): return 2^(h−1)            (number of terminal, i.e. leaf, nodes)
iNode (h): return tNode (h) − lNode (h)   (number of internal, non-leaf, nodes)

Algorithm for storing the original colour space at the leaf nodes of the tree:

ORIGINAL-HISTOGRAM (Image, height, width)
  Loop x ← 1 to height Do
    Loop y ← 1 to width Do
      Intensity ← Image [x, y]
      Tree [(iNode (h) + 1) + Intensity].count ← Tree [(iNode (h) + 1) + Intensity].count + 1
      y ← y + 1
    x ← x + 1
  Return Tree

Algorithm to generate quantized colour spaces at the different levels of the tree:

LEVEL-HISTOGRAM (Tree)
  Loop x ← iNode (h) + 1 To tNode (h) Do
    Lcount ← Tree [x].count
    Loop y ← Parent (x) down To 0 Do
      If x mod 2 ≠ 0 Then
        Tree [y].intensity ← Tree [x].intensity
        Tree [y].count ← Tree [y].count + Lcount
      Else If Tree [y].count < Tree [x].count Then
        Tree [y].intensity ← Tree [x].intensity
        Tree [y].count ← Tree [y].count + Lcount
      x ← y
      y ← Parent (x)
    x ← x + 1
  Return Tree

Algorithm to calculate the average bin distance:

BIN-DISTANCE (Tree, h1)
  TotBin ← 0
  TotBinDist ← 0
  Loop x ← iNode (h1) + 2 to tNode (h1) Do
    TotBin ← TotBin + 1
    TotBinDist ← TotBinDist + (Tree [x].intensity − Tree [x − 1].intensity)
    x ← x + 1
  AvgBinDist ← TotBinDist / TotBin
  Return AvgBinDist

Algorithm to calculate the MDT by identifying the prominent bins and truncating the non-prominent bins:

CALCULATE-MDT (Tree, h1)
  Tree [iNode (h1) + 1].prominent ← 1
  TotPrmBin ← 0
  TotPrmBinDist ← 0
  Loop x ← iNode (h1) + 2 to tNode (h1) Do
    If Tree [x].intensity − Tree [x − 1].intensity ≥ AvgBinDist Then
      Tree [x].prominent ← 1
      TotPrmBin ← TotPrmBin + 1
      TotPrmBinDist ← TotPrmBinDist + (Tree [x].intensity − Tree [x − 1].intensity)
    Else
      Tree [x].prominent ← 0
    x ← x + 1
  MDT ← TotPrmBinDist / TotPrmBin
  Return MDT

Algorithm to redraw the image using the truncated histogram:

REDRAW-IMAGE (Image, height, width, Tree, h1, h)
  Loop x ← 1 to height Do
    Loop y ← 1 to width Do
      NewIntensity ← (Image [x, y] / (tNode (h) / tNode (h1))) + 1
      If Tree [iNode (h1) + NewIntensity + 1].prominent ≠ 1 Then
        While Tree [iNode (h1) + NewIntensity + 1].prominent ≠ 1 Do
          NewIntensity ← NewIntensity − 1
      NewImage [x, y] ← NewIntensity
      y ← y + 1
    x ← x + 1
  Return NewImage

Algorithm to derive the horizontal edge map of the image:

HozEdgeMap (NewImage, height, width, MDT)
  Loop x ← 1 to height Do
    Flag ← 1
    Loop y ← 1 to width Do
      If Flag = 1 Then
        NewIntensity ← NewImage [x, y]
      NxtNewIntensity ← NewImage [x, y + 1]
      If |NewIntensity − NxtNewIntensity| ≥ MDT Then
        Flag ← 1
        HozEdgeMapImage [x, y] ← BLACK
      Else
        Flag ← 0
        HozEdgeMapImage [x, y] ← WHITE
      y ← y + 1
    x ← x + 1
  Return HozEdgeMapImage
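The core idea of HozEdgeMap above — marking a pixel as an edge when the intensity jump to its right-hand neighbour reaches the threshold MDT — can be sketched as below. This simplified version drops the Flag bookkeeping and uses 0 (background) / 1 (edge) output values; the function name and list-based image format are illustrative choices, not the paper's implementation.

```python
# Simplified sketch of a horizontal edge map using an MDT-style threshold.

def horizontal_edge_map(image, mdt):
    """Mark pixel (x, y) as 1 when |I(x, y) - I(x, y+1)| >= mdt."""
    edges = []
    for row in image:
        edge_row = [0] * len(row)
        for x in range(len(row) - 1):
            # An abrupt change between neighbouring pixels marks a boundary.
            if abs(row[x] - row[x + 1]) >= mdt:
                edge_row[x] = 1
        edges.append(edge_row)
    return edges

# One row with a sharp step from 10 to 200 between columns 1 and 2.
print(horizontal_edge_map([[10, 10, 200, 200]], mdt=50))  # [[0, 1, 0, 0]]
```

A vertical pass over columns works the same way, and OR-ing the two maps gives the combined edge map, as in the EDGEMAP algorithm that follows.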
Algorithm to derive the edge map of the image:

EDGEMAP (HozEdgeMapImage, VerEdgeMapImage, height, width)
  Loop x ← 1 to height Do
    Loop y ← 1 to width Do
      EdgeMapImage [x, y] ← HozEdgeMapImage [x, y] OR VerEdgeMapImage [x, y]
      y ← y + 1
    x ← x + 1
  Return EdgeMapImage

The results obtained are shown in Figure 2.

Fig. 2: (a) Template of the Face (b) Edge Detection of Side Face (c) Edge Detection of Front Face

Once the face image is acquired, the facial features must be extracted from the background of the image. In this paper we use a skin-colour-based face detection technique, which uses the RGB and HSV colour models. Another colour model (YCbCr) can also be used to detect skin-colour regions. We use the 2D-DCT to compress the extracted facial features, which makes our processing faster. As our algorithm uses an image database, the compression technique must be applied to all images in the database. An example image on which the proposed algorithm is executed is shown in Figure 5.

Fig. 3: Original image containing a single frontal viewed face
Fig. 4: Image in HSV colour space
Fig. 5: Extracted skin colour region from the image
Fig. 6: Binary image showing region boundary
Fig. 7: Input image
Fig. 8: Detected face region

The following algorithm extracts facial features.

Facial_Feature_Detection (Input Image, Template Images)
Step1. Start
Step2. Read the input human face image. If the input image is colour (RGB), convert it to a grayscale image and save the pixel values to a 2D array, say gface; else save the pixel values of the input image to gface.
Step3. Read the left eye template image.
If the template image is colour (RGB), convert it to a grayscale image and save the pixel values to a 2D array, say gleft; else save the pixel values of the template image to gleft.
Step4. Read the right eye template image. If the template image is colour (RGB), convert it to a grayscale image and save the pixel values to a 2D array, say gright; else save the pixel values of the template image to gright.
Step5. Read the nose template image. If the template image is colour (RGB), convert it to a grayscale image and save the pixel values to a 2D array, say gnose; else save the pixel values of the template image to gnose.
Step6. Read the mouth template image. If the template image is colour (RGB), convert it to a grayscale image and save the pixel values to a 2D array, say gmouth; else save the pixel values of the template image to gmouth.
Step7. Declare four 2D arrays C1, C2, C3 and C4 of size m*n, where m*n is the size of gface.
Step8. Calculate
    C1[][] = 2D_Norm_Crosscorr(gleft, gface)
    C2[][] = 2D_Norm_Crosscorr(gright, gface)
    C3[][] = 2D_Norm_Crosscorr(gnose, gface)
    C4[][] = 2D_Norm_Crosscorr(gmouth, gface)
Step9. Call
    (x11,y11,w1,h1) = Find_Max(C1)
    (x21,y21,w2,h2) = Find_Max(C2)
    (x31,y31,w3,h3) = Find_Max(C3)
    (x41,y41,w4,h4) = Find_Max(C4)
where (x11,y11,w1,h1), (x21,y21,w2,h2), (x31,y31,w3,h3) and (x41,y41,w4,h4) are the top-left pixel coordinate, width and height of the matched rectangular areas around the left eye, right eye, nose and mouth respectively.
Step10. Calculate
    x12 = x11 + w1 and y12 = y11 + h1
    x22 = x21 + w2 and y22 = y21 + h2
    x32 = x31 + w3 and y32 = y31 + h3
    x42 = x41 + w4 and y42 = y41 + h4
where (x12,y12), (x22,y22), (x32,y32) and (x42,y42) are the bottom-right pixel coordinates of the matched rectangular areas around the left eye, right eye, nose and mouth respectively.
Step11.
Draw a boundary rectangle around the left eye in gface with top-left, top-right, bottom-left and bottom-right pixel coordinates (x11,y11), (x12,y11), (x11,y12) and (x12,y12) respectively. Draw a boundary rectangle around the right eye in gface with top-left, top-right, bottom-left and bottom-right pixel coordinates (x21,y21), (x22,y21), (x21,y22) and (x22,y22) respectively. Draw a boundary rectangle around the nose in gface with top-left, top-right, bottom-left and bottom-right pixel coordinates (x31,y31), (x32,y31), (x31,y32) and (x32,y32) respectively. Draw a boundary rectangle around the mouth in gface with top-left, top-right, bottom-left and bottom-right pixel coordinates (x41,y41), (x42,y41), (x41,y42) and (x42,y42) respectively. Calculate the middle point pixel coordinate (x1mid,y1mid) of the boundary rectangle around the left eye as x1mid = (x11+x12)/2 and y1mid = (y11+y12)/2.
Step12. Calculate the Euclidean distance between the middle point pixel coordinate (x1mid,y1mid) of the boundary rectangle around the left eye and the top-left pixel coordinate (x41,y41) of the boundary rectangle around the mouth as:
    Dist1 = √{(x1mid − x41)^2 + (y1mid − y41)^2} units.
Step13. Calculate the middle point pixel coordinate (x2mid,y2mid) of the boundary rectangle around the right eye as x2mid = (x21+x22)/2 and y2mid = (y21+y22)/2.
Step14. Calculate the Euclidean distance between the middle point pixel coordinate (x2mid,y2mid) of the boundary rectangle around the right eye and the top-right pixel coordinate (x42,y41) of the boundary rectangle around the mouth as:
    Dist2 = √{(x2mid − x42)^2 + (y2mid − y41)^2} units.
Step15. Write the values of Dist1 and Dist2 to an output text file for comparison.
Step16. Repeat Steps 1 to 15 for another image of the same human face but with a smiling facial expression.
Step17. Compare both input face images according to the distances measured between the eyes and mouth. The image with the larger distances is, in general, considered the happy or smiling face.
Step18. Exit

2D_Norm_Crosscorr (Template Gray scale Image, Input Gray scale Image)
Step1. Start
Step2. Perform 2D normalized cross-correlation between the template image and input image pixel values and return a 2D array C of size m*n with the corresponding cross-correlation values, where m*n is the size of the input image.
Step3. End

Find_Max (C[][])
Step1. Start
Step2. Find the maximum value of the 2D array C[][] and determine the corresponding rectangular region where the maximum value is found.
Step3. Find the top-left position coordinate (x,y), width (w) and height (h) of the rectangular region and return these values.
Step4. End

Facial_expression_recognition (Input Image, 3 Training Image Databases)
Step19. Start
Step20. Read the input human face image and store the pixel values to an array, say face.
Step21. Call Processed_Face = Impreprocess(face)
Step22. Set i=1
Step23. Repeat Steps 24 to 27 for every image of the Train_Neutral_Other image database.
Step24. Read the image from the database and store the pixel values to an array, say t.
Step25. Call t1 = Impreprocess(t)
Step26. Store t1 into the image cell Train_Neutral_Other_Cell as Train_Neutral_Other_Cell(1,i) = t1
Step27. Set i=i+1
Step28. Set i=1
Step29. Repeat Steps 30 to 33 for every image of the Train_Smiling_Other image database.
Step30. Read the image from the database and store the pixel values to an array, say t.
Step31. Call t1 = Impreprocess(t)
Step32.
Store t1 into the image cell Train_Smiling_Other_Cell as Train_Smiling_Other_Cell(1,i) = t1
Step33. Set i=i+1
Step34. Set i=1
Step35. Repeat Steps 36 to 39 for every image of the Train_Angry_Sad image database.
Step36. Read the image from the database and store the pixel values to an array, say t.
Step37. Call t1 = Impreprocess(t)
Step38. Store t1 into the image cell Train_Angry_Sad_Cell as Train_Angry_Sad_Cell(1,i) = t1
Step39. Set i=i+1
Step40. Create three 2D arrays of size (no_of_images * mn) for the three training databases, where no_of_images is the total number of images in the corresponding training database and m,n is the predefined size mentioned in the Impreprocess function. Let traindata1 (n1*mn), traindata2 (n2*mn) and traindata3 (n3*mn) be the three arrays for the Train_Neutral_Other, Train_Smiling_Other and Train_Angry_Sad training image databases respectively, with n1, n2 and n3 the number of images in the corresponding databases.
Step41. Initialize all elements of the traindata1, traindata2 and traindata3 arrays to 0.
Step42. Set i=1
Step43. Repeat Steps 44 to 45 for n1 times.
Step44. Set traindata1(i,:) = Train_Neutral_Other_Cell(1,i)
Step45. Set i=i+1
Step46. Set i=1
Step47. Repeat Steps 48 to 49 for n2 times.
Step48. Set traindata2(i,:) = Train_Smiling_Other_Cell(1,i)
Step49. Set i=i+1
Step50. Set i=1
Step51. Repeat Steps 52 to 53 for n3 times.
Step52. Set traindata3(i,:) = Train_Angry_Sad_Cell(1,i)
Step53. Set i=i+1
Step54. Create three 1D arrays, namely class1, class2 and class3, of size n1, n2 and n3 respectively, corresponding to the Train_Neutral_Other, Train_Smiling_Other and Train_Angry_Sad training image databases.
Step55. Set i=1
Step56. Repeat Steps 57 to 58 for all images of the Train_Neutral_Other image database.
Step57. If the ith image of Train_Neutral_Other is of neutral expression then set class1(i) = 1, else set class1(i) = -1
Step58. Set i=i+1
Step59. Set i=1
Step60. Repeat Steps 61 to 62 for all images of the Train_Smiling_Other image database.
Step61. If the ith image of Train_Smiling_Other is of smiling expression then set class2(i) = 2, else set class2(i) = -2
Step62. Set i=i+1
Step63. Set i=1
Step64. Repeat Steps 65 to 66 for all images of the Train_Angry_Sad image database.
Step65. If the ith image of Train_Angry_Sad is of angry expression then set class3(i) = 3, else set class3(i) = -3
Step66. Set i=i+1
Step67. Call
    SVMTrained1 = SVM_Training(traindata1, class1)
    SVMTrained2 = SVM_Training(traindata2, class2)
    SVMTrained3 = SVM_Training(traindata3, class3)
Step68. Call
    result1 = SVM_Classify(SVMTrained1, Processed_Face)
    result2 = SVM_Classify(SVMTrained2, Processed_Face)
    result3 = SVM_Classify(SVMTrained3, Processed_Face)
Step69. Call FinalExpression = Recognize_Expression(result1, result2, result3)
Step70. Display FinalExpression as output.
Step71. Exit

Impreprocess (Image_Pixel_Array)
Step1. Start
Step2. Convert Image_Pixel_Array to its corresponding double format, say Image_Pixel_Array_Double.
Step3.
If Image_Pixel_Array_Double is of format a*b*3, then convert it to the corresponding grayscale and save the pixel values to a 2D array, say gImage; else save the pixel values of the input image to gImage.
Step4. Resize gImage to a predefined size, say m*n, and save the pixel values to a 2D array, say gImage_Resized.
Step5. Reshape the gImage_Resized array to an array of size 1*(mn) and save the pixel values to an array, say gImage_Reshaped.
Step6. Return the array gImage_Reshaped.
Step7. End

SVM_Training (Training_Data, Group_Membership_Class)
Step1. Start
Step2. Train a linear Support Vector Machine with Training_Data and Group_Membership_Class and store the result in an array, say SVM1.
Step3. Return the array SVM1
Step4. End

SVM_Classify (SVM_Trained, Img_Array)
Step1. Start
Step2. Classify Img_Array into one of the classes with SVM_Trained and the SVM binary classifier, and store the value in a variable, say Classifier1.
Step3. Return the value Classifier1
Step4. End

Recognize_Expression (Val1, Val2, Val3)
Step1. Start
Step2. If Val1 = 1, set Expression = Neutral
    Else if Val1 = -1 and Val2 = 2, set Expression = Smiling
    Else if Val1 = -1 and Val2 = -2 and Val3 = 3, set Expression = Angry
    Else if Val1 = -1 and Val2 = -2 and Val3 = -3, set Expression = Sad
Step3. Return the value of Expression
Step4. End

The following figures describe the output of the proposed system.

Fig. 9: Test image
Fig. 10: Templates of this image
Fig. 11: Left eye template
Fig. 12: Detected face region
Fig. 13: Neutral face
Fig. 14: Neutral face
Fig. 15: Smiling face template matching

IV. CONCLUSIONS

Facial expression recognition, or emotion detection, systems have numerous applications in image processing domains, security application domains and any type of biometric system.

REFERENCES

[1] Farah Azirar, "Facial Expression Recognition", Bachelor Thesis, School of Electronics and Physical Sciences, Department of Electronic Engineering, University of Surrey, 2004.
[2] L. Franco and A. Treves, "A Neural Network Face Expression Recognition System using Unsupervised Local Processing", Proceedings of the Second International Symposium on Image and Signal Processing and Analysis (ISPA 01), Croatia, pp. 628-632, June 2001.
[3] Angel Noe Martinez-Gonzalez and Victor Ayala-Ramirez, "Real Time Face Detection Using Neural Networks", 2011 10th Mexican International Conference on Artificial Intelligence.
[4] M. Suwa, N. Sugie and K. Fujimora, "A preliminary note on pattern recognition of human emotional expression", International Joint Conference on Pattern Recognition, pp. 408-410, 1978.
[5] H. A. Rowley, S. Baluja and T. Kanade, "Rotation Invariant Neural Network-Based Face Detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998, pp. 38-44.
[6] C. H. Lee, J. S. Kim and K. H. Park, "Automatic Human Face Location in a Complex Background Using Motion and Colour Information", Pattern Recognition, vol. 29, no. 11, 1996, pp. 129-140.
[7] K. Sobottka and I. Pitas, "A Novel Method for Automatic Face Segmentation, Facial Feature Extraction and Tracking", Signal Process. Image Communication, vol. 12, no. 3, 1998, pp. 263-281.
[8] R.-L. Hsu, M. Abdel-Mottaleb and A. K. Jain, "Face Detection in Colour Images", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 5, 2002, pp. 696-706.
[9] M. Turk and A. Pentland, "Eigenfaces for recognition", Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[10] H. A. Rowley, "Neural Network-Based Face Detection", PhD thesis, Carnegie Mellon University, Pittsburgh, 1999.