LEARNING BASES OF ACTIVITY
1. Learning Bases of Activity for Facial
Expression Recognition
K.PADMA PRIYA
ASSISTANT PROFESSOR
SBK COLLEGE
ARUPPUKOTTAI
2. Domain Introduction:
Image processing is a method to perform some operations on an image, in order to get an enhanced
image or to extract some useful information from it.
It is a type of signal processing in which the input is an image and the output may be an image or
characteristics/features associated with that image.
Nowadays, image processing is among the most rapidly growing technologies, and it forms a core
research area within both engineering and computer science.
◦ Image processing basically includes the following three steps:
◦ Importing the image via image acquisition tools;
◦ Analyzing and manipulating the image;
◦ Output in which result can be altered image or report that is based on image analysis.
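The three steps above can be sketched in code. The following is a minimal illustration using NumPy, in which a synthetic random array stands in for image acquisition, a simple contrast stretch stands in for analysis and manipulation, and a small summary dictionary stands in for the report; none of these specifics come from the slides.

```python
import numpy as np

# Step 1: acquisition (a synthetic 8x8 grayscale image stands in for a camera or file read)
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

# Step 2: analysis and manipulation (a contrast stretch to the full 0-255 range)
lo, hi = image.min(), image.max()
stretched = ((image - lo).astype(np.float64) / (hi - lo) * 255).astype(np.uint8)

# Step 3: output - either the altered image or a report based on the analysis
report = {"min": int(stretched.min()), "max": int(stretched.max()), "mean": float(stretched.mean())}
print(report)
```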
3. Objective:
The main objective of this process is to identify a person's facial expression in order to classify their
mental state; a typical application is gauging customer feedback from facial expressions.
The main goal of the process is to improve the facial expression recognition rate.
4. Abstract:
In this approach, we propose a novel data-driven feature extraction framework that represents facial expression variations
as a linear combination of localized basis functions, whose coefficients are proportional to movement intensity.
We show that the linear basis functions of the proposed framework can be obtained by training a sparse linear model with
Gabor phase shifts computed from facial videos.
The proposed framework addresses generalization issues that are not tackled by existing learnt representations and
achieves, with the same learning parameters, state-of-the-art results in recognizing both posed expressions and
spontaneous micro-expressions. It has potential applications in many aspects of day-to-day life that remain
unrealized for want of effective expression recognition techniques.
This approach applies Gabor-filter-based feature extraction in combination with a feed-forward neural network
classifier to recognize seven different facial expressions from still pictures of the human face.
5. Existing System:
In existing system, several methods are used to extract image face features vector, which presents small inter-person
variation.
This feature vector is fed to a multilayer perceptron to carry out face recognition or identity verification.
The system combines Gabor features and Eigenfaces to obtain the feature vector.
Evaluation results show that this system is robust to changes in illumination, wardrobe, facial expression, scale,
and position inside the captured image, as well as to inclination, noise contamination, and filtering.
DISADVANTAGES:
One problem with these methods is that they are very sensitive to variations in pose, illumination, occlusion,
aging, and rotation of the face.
These techniques do not work well when the sample size is small.
LBP does not provide directional information about the facial frame.
The performance of this approach also degrades with variation in illumination.
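To make the LBP limitation above concrete, here is a minimal sketch of the basic 8-neighbour local binary pattern for one pixel; the 3x3 patch values are made up for illustration. The code thresholds each neighbour against the centre and packs the bits into a byte, which captures local texture contrast but not its orientation.

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch.

    Each neighbour is thresholded against the centre and the resulting bits
    are packed clockwise from the top-left corner into one byte.
    """
    center = patch[1, 1]
    # neighbours in clockwise order starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(offsets):
        if patch[r, c] >= center:
            code |= 1 << bit
    return code

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # one 8-bit texture code per pixel
```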
6. Proposed system:
In most facial expression classification pipelines, feature extraction yields a very large number of
features, so a smaller subset of features must then be selected according to some optimality criterion.
Gabor filters have proved effective for expression recognition because of their superior capability for
multi-scale representation.
Gabor wavelets closely model the receptive fields of biological visual neurons, and their spatial and
frequency properties can be tuned to the facial expression characteristics of interest, which makes them
well suited to the analysis of faces and expressions.
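The tunable spatial and frequency properties mentioned above correspond directly to the parameters of the Gabor kernel. The sketch below builds the real part of a 2-D Gabor kernel in NumPy; the parameter values and the 4-orientation bank are illustrative defaults, not values taken from the slides.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel.

    theta sets the orientation and lam (wavelength) sets the spatial frequency,
    which is how a filter bank is tuned to different scales and directions.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * x_t / lam + psi)                     # sinusoidal carrier
    return envelope * carrier

# A small bank over 4 orientations, as typically used for multi-orientation features
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)
```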
7. Advantages:
It has good predictive ability and is computationally less expensive than other existing methods.
It is very efficient at seeking out localized features.
It also provides good recognition rates when used across multiple datasets.
The proposed HCRF model also showed significant improvement over existing work in terms of
recognition accuracy.
8. Modules:
Input Face image
Face Detection and extraction
Feature Extraction
Classification
Performance Estimation
9. Input Image
An image is a rectangular array of values (pixels). Each pixel represents the measurement of some
property of a scene measured over a finite area.
The property could be many things, but we usually measure either the average brightness (one
value) or the brightnesses of the image filtered through red, green, and blue filters (three values).
The values are normally represented by an eight-bit integer, giving a range of 256 levels of
brightness.
We talk about the resolution of an image: this is defined by the number of pixels and number of
brightness values.
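The description above maps directly onto an array representation. Here is a minimal NumPy illustration of a grayscale image as a rectangular array of eight-bit pixel values; the specific pixel values are made up.

```python
import numpy as np

# A grayscale image as a rectangular array of 8-bit pixel values.
# uint8 gives exactly 2**8 = 256 brightness levels, from 0 (black) to 255 (white).
image = np.array([[0, 64, 128],
                  [192, 255, 32]], dtype=np.uint8)

print(image.shape)        # spatial resolution: 2 rows x 3 columns of pixels
print(image.dtype)        # uint8 -> 256 levels of brightness
print(int(image.max()))   # no pixel value can exceed 255
```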
10. Face Detection and extraction
The Viola–Jones object detection framework, proposed in 2001 by Paul Viola and Michael Jones, was the
first object detection framework to provide competitive object detection rates in real time.
Although it can be trained to detect a variety of object classes, it was motivated primarily by the problem
of face detection.
The problem to be solved is detection of faces in an image. A human can do this easily, but a computer
needs precise instructions and constraints. To make the task more manageable, Viola–Jones requires full
view frontal upright faces.
11. Face Detection and extraction
Thus in order to be detected, the entire face must point towards the camera and should not be tilted to
either side.
While these constraints might seem to diminish the algorithm's utility somewhat, in practice they are
quite acceptable, because the detection step is most often followed by a recognition step. In MATLAB,
the algorithm is available through the Computer Vision Toolbox's vision.CascadeObjectDetector object.
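The speed of Viola–Jones comes from the integral image (summed-area table), which lets any rectangular region be summed in constant time so that thousands of Haar-like features can be evaluated per window. Below is a minimal NumPy sketch of that data structure, with a made-up 4x4 test image; it illustrates the core trick rather than the full detector.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[:r, :c], with a zero padding row/column."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # sum of the central 2x2 block
```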
12. Classification
Image classification refers to the task of extracting information classes from a multiband raster image.
The resulting raster from image classification can be used to create thematic maps. Depending on the
interaction between the analyst and the computer during classification, there are two types of
classification: supervised and unsupervised.
Supervised classification uses the spectral signatures obtained from training samples to classify an
image. With the assistance of the Image Classification toolbar, you can easily create training samples to
represent the classes you want to extract.
Unsupervised classification finds spectral classes (or clusters) in a multiband image without the
analyst’s intervention.
The Image Classification toolbar aids in unsupervised classification by providing access to the tools to
create the clusters, capability to analyze the quality of the clusters, and access to classification tools.
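Supervised classification as described above can be sketched very simply: class means (spectral signatures) are computed from labelled training samples, and each pixel is assigned to the nearest signature. The two-band pixel values and the class names below are hypothetical, purely for illustration.

```python
import numpy as np

# Supervised: spectral signatures (class means) come from labelled training samples.
train = {
    "water":      np.array([[0.10, 0.20], [0.15, 0.25]]),
    "vegetation": np.array([[0.80, 0.60], [0.75, 0.65]]),
}
signatures = {name: samples.mean(axis=0) for name, samples in train.items()}

def classify(pixel):
    """Assign the pixel to the class with the nearest spectral signature."""
    return min(signatures, key=lambda name: np.linalg.norm(pixel - signatures[name]))

print(classify(np.array([0.12, 0.22])))   # falls near the water signature
print(classify(np.array([0.70, 0.60])))   # falls near the vegetation signature
```

Unsupervised classification would instead find these cluster centres automatically, without the labelled training samples.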
13. Performance Analysis
Sensitivity and specificity are statistical measures of the performance of a binary classification test,
also known in statistics as classification functions:
Sensitivity (also called the true positive rate, the recall, or probability of detection in some fields)
measures the proportion of positives that are correctly identified as such (i.e. the percentage of sick people
who are correctly identified as having the condition).
Specificity (also called the true negative rate) measures the proportion of negatives that are correctly
identified as such (i.e., the percentage of healthy people who are correctly identified as not having the
condition).
◦ True positive: Sick people correctly identified as sick
◦ False positive: Healthy people incorrectly identified as sick
◦ True negative: Healthy people correctly identified as healthy
◦ False negative: Sick people incorrectly identified as healthy.
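The four outcomes above plug directly into the two measures. Below is a minimal sketch with made-up confusion counts: sensitivity uses the sick people (TP and FN), specificity uses the healthy people (TN and FP).

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts: 90 sick people correctly flagged, 10 missed;
# 80 healthy people correctly cleared, 20 wrongly flagged.
sens, spec = sensitivity_specificity(tp=90, fp=20, tn=80, fn=10)
print(sens, spec)  # 0.9 0.8
```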