CHAPTER-1
INTRODUCTION
A smart environment is one that is able to identify people, interpret
their actions, and react appropriately. Thus, one of the most important building
blocks of smart environments is a person identification system. Face recognition
devices are ideal for such systems, since they have recently become fast, cheap,
unobtrusive, and, when combined with voice-recognition, are very robust against
changes in the environment. Moreover, since humans primarily recognize each
other by their faces and voices, they feel comfortable interacting with an
environment that does the same.
Facial recognition systems are built on computer programs that
analyze images of human faces for the purpose of identifying them. The programs
take a facial image, measure characteristics such as the distance between the eyes,
the length of the nose, and the angle of the jaw, and create a unique file called a
"template." Using templates, the software then compares that image with another
image and produces a score that measures how similar the images are to each
other. Typical sources of images for use in facial recognition include video camera
signals and pre-existing photos such as those in driver's license databases.
Facial recognition systems are computer-based security systems that
are able to automatically detect and identify human faces. These systems depend
on a recognition algorithm, such as eigenface or the hidden Markov model. The
first step for a facial recognition system is to recognize a human face and extract it
from the rest of the scene. Next, the system measures nodal points on the face, such
as the distance between the eyes, the shape of the cheekbones and other
distinguishable features.
These nodal points are then compared to the nodal points computed
from a database of pictures in order to find a match. Obviously, such a system is
limited based on the angle of the face captured and the lighting conditions present.
New technologies are currently in development to create three-dimensional models
of a person's face based on a digital photograph in order to create more nodal
points for comparison. However, such technology is inherently susceptible to error
given that the computer is extrapolating a three-dimensional model from a two-
dimensional photograph.
Principal Component Analysis is an eigenvector method designed to model linear variation in high-dimensional data. PCA performs dimensionality reduction by projecting the original n-dimensional data onto the k << n -dimensional linear subspace spanned by the leading eigenvectors of the data's covariance matrix. Its goal is to find a set of mutually orthogonal basis functions that capture the directions of maximum variance in the data and for which the coefficients are pairwise decorrelated. For linearly embedded manifolds, PCA is guaranteed to discover the dimensionality of the manifold and produces a compact representation.
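As an illustration of the idea above, the following sketch estimates the leading principal direction of a small data set by forming the covariance matrix and running power iteration. This is a minimal illustration, not the system's actual code; the class and method names (PcaSketch, covariance, leadingEigenvector) are our own.

```java
// Minimal PCA sketch: estimate the leading principal direction of a data set
// via the sample covariance matrix and power iteration.
public class PcaSketch {

    // Sample covariance matrix of row-vector data (n samples x d features).
    public static double[][] covariance(double[][] x) {
        int n = x.length, d = x[0].length;
        double[] mean = new double[d];
        for (double[] row : x)
            for (int j = 0; j < d; j++) mean[j] += row[j] / n;
        double[][] c = new double[d][d];
        for (double[] row : x)
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++)
                    c[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / n;
        return c;
    }

    // Leading eigenvector of a symmetric matrix by power iteration.
    public static double[] leadingEigenvector(double[][] c, int iters) {
        int d = c.length;
        double[] v = new double[d];
        v[0] = 1.0;                       // arbitrary starting vector
        for (int t = 0; t < iters; t++) {
            double[] w = new double[d];
            for (int i = 0; i < d; i++)
                for (int j = 0; j < d; j++) w[i] += c[i][j] * v[j];
            double norm = 0;
            for (double wi : w) norm += wi * wi;
            norm = Math.sqrt(norm);
            for (int i = 0; i < d; i++) v[i] = w[i] / norm;
        }
        return v;
    }
}
```

A full eigenface implementation would keep the k leading eigenvectors (for example with a library eigensolver) rather than only the first, but the principle is the same.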
Facial Recognition Applications:
Facial recognition is deployed in large-scale citizen identification
applications, surveillance applications, law enforcement applications such as
booking stations, and kiosks.
1.1 Problem Definition
Facial recognition systems are computer-based security systems that are able to automatically detect and identify human faces. These systems depend on a recognition algorithm. However, most of these algorithms consider only global data patterns during the recognition process, which does not yield an accurate recognition system. We therefore propose a face recognition system that can recognize faces with the maximum possible accuracy.
1.2 System Environment
The front end is designed and executed with J2SDK 1.4.0, handling the core Java part with Swing user-interface components. Java is a robust, object-oriented, multi-threaded, distributed, secure, and platform-independent language. It has a wide variety of packages to implement our requirements, and a number of classes and methods can be utilized for programming purposes. These features make it much easier for programmers to implement the required concepts and algorithms in Java.
The features of Java are as follows:
Core Java contains concepts like exception handling, multithreading, and streams, which can be well utilized in the project environment.
Exception handling can be done with predefined exceptions, and there is provision for writing custom exceptions for our application.
Garbage collection is done automatically, which makes memory management safe.
The user interface can be built with the Abstract Window Toolkit (AWT) and the Swing classes. These provide a variety of classes for components and containers. We can make instances of these classes, and these instances denote particular objects that can be utilized in our program.
Event handling is performed with the delegation event model. Objects are assigned to listeners that watch for events; when an event takes place, the corresponding method to handle that event is called by the listener, which is in the form of an interface, and executed.
This application makes use of the ActionListener interface, and click events get handled by it. A separate actionPerformed() method contains details about the response to the event.
Java also contains concepts like Remote method invocation;
Networking can be useful in distributed environment.
CHAPTER-2
SYSTEM ANALYSIS
2.1 Existing System:
Many face recognition techniques have been developed over the past
few decades. One of the most successful and well-studied techniques to face
recognition is the appearance-based method. When using appearance-based
methods, we usually represent an image of size n × m pixels by a vector in an n × m-dimensional space. In practice, however, these n × m-dimensional spaces are too
large to allow robust and fast face recognition. A common way to attempt to
resolve this problem is to use dimensionality reduction techniques.
Two of the most popular techniques for this purpose are:
2.1.1 Principal Component Analysis (PCA).
2.1.2 Linear Discriminant Analysis (LDA).
2.1.1 Principal Component Analysis (PCA):
The purpose of PCA is to reduce the large dimensionality of the data
space (observed variables) to the smaller intrinsic dimensionality of feature space
(independent variables), which are needed to describe the data economically. This
is the case when there is a strong correlation between observed variables. The jobs
which PCA can do are prediction, redundancy removal, feature extraction, data compression, etc. Because PCA is a powerful technique for the linear domain, it suits applications having linear models, such as signal processing, image processing, system and control theory, communications, etc.
The main idea of using PCA for face recognition is to express the large 1-D vector of pixels constructed from a 2-D face image in terms of the compact principal components of the feature space. This is called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of face images (vectors).
2.1.2 Linear Discriminant Analysis (LDA):
LDA is a supervised learning algorithm. It searches for the projection axes on which the data points of different classes are far from each other
while requiring data points of the same class to be close to each other. Unlike
PCA which encodes information in an orthogonal linear space, LDA encodes
discriminating information in a linearly separable space using bases that are not
necessarily orthogonal. It is generally believed that algorithms based on LDA
are superior to those based on PCA.
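For two classes, the LDA projection axis can be written in closed form as w = Sw⁻¹(m1 - m2), where Sw is the within-class scatter matrix and m1, m2 are the class means. The sketch below computes this for two-dimensional data with a hand-coded 2×2 inverse; it is an illustrative sketch with our own names, not part of the system.

```java
// Two-class LDA sketch in two dimensions: the discriminant axis is
// w = Sw^{-1} (m1 - m2), where Sw is the within-class scatter matrix.
public class LdaSketch {

    static double[] mean(double[][] x) {
        double[] m = new double[2];
        for (double[] p : x) { m[0] += p[0] / x.length; m[1] += p[1] / x.length; }
        return m;
    }

    // Accumulate (x - m)(x - m)^T into the 2x2 scatter matrix s.
    static void addScatter(double[][] s, double[][] x, double[] m) {
        for (double[] p : x) {
            double dx = p[0] - m[0], dy = p[1] - m[1];
            s[0][0] += dx * dx; s[0][1] += dx * dy;
            s[1][0] += dy * dx; s[1][1] += dy * dy;
        }
    }

    public static double[] fisherAxis(double[][] class1, double[][] class2) {
        double[] m1 = mean(class1), m2 = mean(class2);
        double[][] sw = new double[2][2];
        addScatter(sw, class1, m1);
        addScatter(sw, class2, m2);
        // Invert the 2x2 within-class scatter matrix and apply to (m1 - m2).
        double det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0];
        double[] d = {m1[0] - m2[0], m1[1] - m2[1]};
        return new double[] {
            ( sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
            (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det
        };
    }
}
```

When the two classes have equal spherical within-class scatter and differ only along one coordinate, the recovered axis points along that coordinate, as expected.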
However, most of these algorithms consider only global data patterns during the recognition process, which does not yield an accurate recognition system.
The existing system has the following drawbacks:
• Less accurate
• Does not deal with manifold structure
• Does not deal with biometric characteristics
2.2 Proposed System:
PCA and LDA aim to preserve the global structure. However, in
many real-world applications, the local structure is more important. In this section,
we describe Locality Preserving Projection (LPP), a new algorithm for learning a
locality preserving subspace.
The objective function of LPP is as follows:

min ∑ij (yi - yj)² Sij

where yi = wT xi is the one-dimensional representation of xi, and the weight matrix S is defined in the algorithm below.
The manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP). Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacianfaces. The face subspace preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes. We also provide theoretical analysis to show that PCA, LDA, and LPP can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. In our theoretical analysis, we show how PCA, LDA, and LPP arise from the same principle applied to different choices of this graph structure.
It is worthwhile to highlight several aspects of the proposed approach here:
1. While the Eigenfaces method aims to preserve the global structure of the image space, and the Fisherfaces method aims to preserve the discriminating information, our Laplacianfaces method aims to preserve the local structure of the image space, which real-world applications mostly need.
2. An efficient subspace learning algorithm for face recognition should be able to discover the nonlinear manifold structure of the face space. Our proposed Laplacianfaces method explicitly considers the manifold structure, which is modeled by an adjacency graph that reflects the intrinsic face manifold structure.
3. LPP shares some similar properties with LLE. LPP is linear, while LLE is nonlinear. Moreover, LPP is defined everywhere, while LLE is defined only on the training data points, and it is unclear how to evaluate the maps for new test points. In contrast, LPP may simply be applied to any new data point to locate it in the face subspace.
The algorithmic procedure of Laplacianfaces is formally stated below:
1. PCA projection.
We project the image set into the PCA subspace by throwing away the smallest principal components. In our experiments, we kept 98 percent of the information in the sense of reconstruction error. For the sake of simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote by WPCA the transformation matrix of PCA.
2. Constructing the nearest-neighbor graph.
Let G denote a graph with n nodes. The ith node corresponds to the face image xi. We put an edge between nodes i and j if xi and xj are "close," i.e., xj is among the k nearest neighbors of xi, or xi is among the k nearest neighbors of xj. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that here we do not use the ε-neighborhood to construct the graph. This is simply because it is often difficult to choose an optimal ε in real-world applications, while the k-nearest-neighbor graph can be constructed more stably. The disadvantage is that the k-nearest-neighbor search will increase the computational complexity of our algorithm. When the computational complexity is a major concern, one can switch to the ε-neighborhood.
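The graph construction of this step can be sketched as follows. This is a minimal illustration (brute-force search, squared Euclidean distance) with our own class name, not the system's code:

```java
// Sketch of step 2: build the symmetric k-nearest-neighbor adjacency used
// to approximate the local manifold structure.
public class NeighborGraph {

    // Squared Euclidean distance between two points.
    static double dist2(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }

    // adj[i][j] is true if xj is among the k nearest neighbors of xi,
    // or xi is among the k nearest neighbors of xj (the "or" rule).
    public static boolean[][] build(double[][] x, int k) {
        int n = x.length;
        boolean[][] adj = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            boolean[] taken = new boolean[n];
            taken[i] = true;                         // skip the node itself
            for (int m = 0; m < k && m < n - 1; m++) {
                int best = -1;                       // closest untaken node
                for (int j = 0; j < n; j++)
                    if (!taken[j] && (best < 0
                            || dist2(x[i], x[j]) < dist2(x[i], x[best])))
                        best = j;
                taken[best] = true;
                adj[i][best] = true;
                adj[best][i] = true;                 // symmetrize the edge
            }
        }
        return adj;
    }
}
```

The brute-force scan is O(k·n²) overall, which reflects the computational-cost remark above; an ε-neighborhood variant would instead connect all pairs within a fixed distance.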
3. Choosing the weights. If nodes i and j are connected, put

Sij = exp(-||xi - xj||² / t)

where t is a suitable constant. Otherwise, put Sij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure. The justification for this choice of weights can be traced to the heat kernel.
4. Eigenmap. Compute the eigenvectors and eigenvalues for the generalized eigenvector problem:

XLXT w = λ XDXT w

where D is a diagonal matrix whose entries are column (or row, since S is symmetric) sums of S, Dii = ∑j Sji. L = D - S is the Laplacian matrix. The ith column of matrix X is xi.

Let w0, w1, ..., wk-1 be the solutions of the equation above, ordered according to their eigenvalues, λ0 ≤ λ1 ≤ ... ≤ λk-1. These eigenvalues are equal to or greater than zero because the matrices XLXT and XDXT are both symmetric and positive semidefinite. Thus, the embedding is as follows:

x → y = WT x, with W = WPCA WLPP and WLPP = [w0, w1, ..., wk-1]
where y is a k-dimensional vector. W is the transformation matrix. This linear
mapping best preserves the manifold’s estimated intrinsic geometry in a linear
sense. The column vectors of W are the so-called Laplacianfaces.
This principle is implemented as an unsupervised learning procedure with training and test data.
The system is required to implement Principal Component Analysis to reduce the image dimensionality to less than n using the covariance of the data.
The system uses an unsupervised learning algorithm, so it must be trained properly with relevant data sets. Based on this training, the input data is tested by the application and the result is displayed to the user.
2.3 System Requirement
Hardware specifications:
Processor : Intel Processor IV
RAM : 128 MB
Hard disk : 20 GB
CD drive : 40 x Samsung
Floppy drive : 1.44 MB
Monitor : 15" Samtron color
Keyboard : 108 mercury keyboard
Mouse : Logitech mouse
Software Specification:
Operating System – Windows XP/2000
Language used – J2sdk1.4.0
2.4 System Analysis Methods
System analysis can be defined as a method of determining how best to use resources and machines to perform tasks that meet the information needs of an organization. It is also a management technique that helps us in designing new systems or improving existing ones. The four basic elements in system analysis are
• Output
• Input
• Files
• Process
These four elements form the basis of system analysis.
2.5 Feasibility Study
Feasibility is the study of whether or not the project is worth doing.
The process that follows this determination is called a feasibility study. This study is carried out under the given time constraints and normally culminates in a written and oral feasibility report. This feasibility study is categorized into six different types. They are
• Technical Analysis
• Economical Analysis
• Performance Analysis
• Control and Security Analysis
• Efficiency Analysis
• Service Analysis
2.5.1 Technical Analysis
This analysis is concerned with specifying the software that will successfully satisfy the user requirements. The technical needs of a system include the facility to produce the outputs in a given time and to meet the response-time requirements under certain conditions.
2.5.2 Economic Analysis
Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system. This is called cost/benefit analysis. It is used to determine the benefits and savings that are expected from a proposed system and compare them with the costs. If the benefits outweigh the costs, the decision is made to proceed to the design phase and implement the system.
2.5.3 Performance Analysis
The analysis of the performance of a system is also very important. It examines the performance of the system both before and after the proposed system is introduced. If the analysis proves satisfactory from the company's side, its result is moved to the next analysis phase. Performance analysis is nothing but looking at program execution to pinpoint where bottlenecks or other performance problems, such as memory leaks, might occur. If a problem is spotted, it can be rectified.
2.5.4 Efficiency Analysis
This analysis mainly deals with the efficiency of the system developed in this project. The resources required by the program to perform a particular function are analyzed in this phase. It also checks how efficiently the project runs on the system, in spite of any changes in the system. The efficiency of the system should be analyzed in such a way that the user does not feel any difference in the way of working. Besides, it should be taken into consideration that the project should last on the system for a long time.
CHAPTER-3
SYSTEM DESIGN
Design is concerned with identifying software components, specifying the relationships among them, specifying the software structure, and providing a blueprint for the implementation phase.
Modularity is one of the desirable properties of large systems. It implies that the system is divided into several parts in such a manner that the interaction between parts is minimal and clearly specified.
Design will explain software components in detail. This will help the
implementation of the system. Moreover, this will guide the further changes in the
system to satisfy the future requirements.
3.1 Project modules:
3.1.1 Read/Write Module:
Here, the basic operations are loading and saving the input and resultant images of the algorithms, respectively. The image files are read and processed, and new images are written to the output files.
3.1.2 Resizing Module:
Here, the faces are converted to equal size using a linear scaling algorithm, for calculation and comparison. In this module, larger or smaller images are converted to a standard size.
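A minimal sketch of such standardization, assuming grayscale images stored as 2-D arrays and bilinear interpolation (the module's exact scaling method is not specified here), could look like this:

```java
// Sketch of the resizing module: scale a grayscale image (as a 2-D array)
// to a standard size using bilinear interpolation.
public class ResizeSketch {

    public static double[][] resize(double[][] src, int newH, int newW) {
        int h = src.length, w = src[0].length;
        double[][] out = new double[newH][newW];
        for (int i = 0; i < newH; i++) {
            for (int j = 0; j < newW; j++) {
                // Map output coordinates back into the source image.
                double y = (newH == 1) ? 0 : i * (h - 1.0) / (newH - 1);
                double x = (newW == 1) ? 0 : j * (w - 1.0) / (newW - 1);
                int y0 = (int) Math.floor(y), x0 = (int) Math.floor(x);
                int y1 = Math.min(y0 + 1, h - 1), x1 = Math.min(x0 + 1, w - 1);
                double fy = y - y0, fx = x - x0;
                // Blend the four surrounding source pixels.
                out[i][j] = (1 - fy) * ((1 - fx) * src[y0][x0] + fx * src[y0][x1])
                          +      fy  * ((1 - fx) * src[y1][x0] + fx * src[y1][x1]);
            }
        }
        return out;
    }
}
```

In the actual system the target size would be fixed (the standard enrollment size), so every face vector fed to PCA has the same length.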
3.1.3 Image Manipulation:
Here, the face recognition algorithm using Locality Preserving Projections (LPP) is developed for the various faces enrolled in the database.
3.1.4 Testing Module:
Here, the input images are resized and then compared with the intermediate image to find the tested image, which is then compared again with the Laplacianfaces to find the accurate face.
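The final comparison can be sketched as a nearest-neighbor search by Euclidean distance over the enrolled templates. This is an illustrative sketch with our own names, not the system's actual code:

```java
// Sketch of the testing module's final step: find the enrolled template
// closest to the projected test image by (squared) Euclidean distance.
public class MatchSketch {

    // Returns the index of the nearest template, or -1 if there are none.
    public static int nearest(double[][] templates, double[] query) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < templates.length; i++) {
            double d = 0;
            for (int j = 0; j < query.length; j++) {
                double diff = templates[i][j] - query[j];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}
```

Here `templates` would hold the low-dimensional Laplacianface coefficients of the enrolled faces and `query` the projected test image; a rejection threshold on the best distance could decide the "not recognized" case.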
This system is developed to implement Principal Component Analysis.
Image manipulation: This module is designed to view all the faces that are considered in our training case. Principal Component Analysis, the eigenvector method described in Chapter 1, is applied here to model linear variation in the high-dimensional face data and to produce a compact representation.
1) Training module:
Unsupervised learning - this is learning from observation and discovery. The data mining system is supplied with objects, but no classes are defined, so it has to observe the examples and recognize patterns (i.e., class descriptions) by itself. This process requires a training data set. This system provides a training set of 17 faces, each with three different poses. It undergoes an iterative process and stores the required details in a two-dimensional faceTemplate array.
2) Test module:
After the training process is over, it processes the input face image through the eigenface procedure and can then say whether it is recognized or not.
CHAPTER-4
IMPLEMENTATION
Implementation includes all those activities that take place to convert from the old system to the new. The new system may be totally new, replacing an existing system, or it may be a major modification to the system currently in use.
This system, "Face Recognition," is a new system. Implementation as a whole involves all the tasks performed to successfully replace the existing software or introduce new software to satisfy the requirements.
The entire work can be described as the retrieval of faces from the database, processing by the eigenface training method, execution of the test cases, and finally display of the result to the user.
The test cases were performed in all aspects, and the system gave correct results in all cases.
4.1. Implementation Details:
4.1.1 Form design
A form is a tool with a message; it is the physical carrier of data or information. It can also constitute authority for actions. In the form design, separate files are used for each module. The following is a list of forms used in this project:
1) Main Form
Contains options for viewing faces from the database. The system retrieves
the images stored in the train and test folders, which are available in the bin
folder of the application.
2) View database Form:
This form retrieves the faces available in the train folder. It is
provided purely for viewing purposes.
3) Recognition Form :
This form provides an option for loading an input image from the test folder.
The user then clicks the Train button, which makes the application train itself
to gain knowledge, since it uses an unsupervised learning algorithm.
Unsupervised learning produces a set of class descriptions, one for each
class discovered in the environment; this is similar to cluster analysis in
statistics.
The user can then click the Test button to check the matching of the
faces. The matched face is displayed in the space provided for the matched
face; in case of any difference, a message is displayed in the space provided
on the form.
4.1.2 Input design
Inaccurate input data is the most common cause of errors in data
processing. Errors entered by data entry operators can be controlled through
input design. Input design is the process of converting user-originated inputs
to a computer-based format. Input data are collected and organized into groups
of similar data.
4.1.3 Menu Design
The menus in this application are organized in an MDI form that supports
viewing image files from the folders. It also provides options for loading an
image as input, running the training method, and testing whether the face is
recognized or not.
4.1.4 Data base design:
A database is a collection of related data. The database has the following
properties:
i) The database reflects changes to the information.
ii) A database is a logically coherent collection of data with some
inherent meaning.
This application takes its images from the train and test folders set as
defaults for the application. The image files use the .jpeg extension.
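Loading the face images from the default folders might look like the following sketch, which simply lists the .jpeg files in a given directory. The FaceFolder class and its method are hypothetical, not part of the project listing.

```java
import java.io.File;

// Illustrative helper: list the .jpeg files in a folder such as the
// application's bin/train or bin/test directory.
public class FaceFolder {
    public static String[] listFaces(String folderPath) {
        File dir = new File(folderPath);
        // keep only files whose name ends with .jpeg (case-insensitive)
        File[] files = dir.listFiles((d, name) -> name.toLowerCase().endsWith(".jpeg"));
        if (files == null) return new String[0]; // folder missing or unreadable
        String[] names = new String[files.length];
        for (int i = 0; i < files.length; i++) names[i] = files[i].getName();
        java.util.Arrays.sort(names); // stable, predictable ordering
        return names;
    }
}
```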
4.1.5 Code Design
o Face Enrollment
-a new face can be added by the user to the face-space database
o Face Verification
-verifies a person's face in the database with reference to his/her
identity.
o Face Recognition
-compares a person's face with all the images in the database and chooses
the closest match. Here, Principal Component Analysis is performed on the
training data set, and the result is evaluated on the test data set.
o Face Retrieval
-displays all the faces and their templates in the database
o Statistics
-stores a list of recognition results for analyzing the FRR (False
Rejection Rate) and FAR (False Acceptance Rate)
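The FRR and FAR mentioned above can be computed from recorded match distances; the sketch below assumes an attempt is accepted when its distance falls at or below a chosen threshold. MatchStats and its method names are illustrative, not from the project code.

```java
// Illustrative FRR/FAR computation over recorded match distances.
public class MatchStats {
    // FRR: fraction of genuine attempts wrongly rejected
    // (distance above the acceptance threshold).
    public static double frr(double[] genuineScores, double threshold) {
        int rejected = 0;
        for (double s : genuineScores) if (s > threshold) rejected++;
        return (double) rejected / genuineScores.length;
    }
    // FAR: fraction of impostor attempts wrongly accepted
    // (distance at or below the acceptance threshold).
    public static double far(double[] impostorScores, double threshold) {
        int accepted = 0;
        for (double s : impostorScores) if (s <= threshold) accepted++;
        return (double) accepted / impostorScores.length;
    }
}
```

Raising the threshold lowers FRR but raises FAR, so the two rates are traded off against each other when tuning the system.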
4.2 Coding:
import java.io.*;
public class PGM_ImageFilter
{
//fields (these are used below but were not declared in the original listing)
private String inFilePath;
private String outFilePath;
private boolean printStatus;
//constructor
public PGM_ImageFilter()
{
inFilePath="";
outFilePath="";
}
//get functions
public String get_inFilePath()
{
return(inFilePath);
}
public String get_outFilePath()
{
return(outFilePath);
}
//set functions
public void set_inFilePath(String tFilePath)
{
inFilePath=tFilePath;
}
public void set_outFilePath(String tFilePath)
{
outFilePath=tFilePath;
}
//methods
public void resize(int wout,int hout)
{
PGM imgin=new PGM();
PGM imgout=new PGM();
if(printStatus==true)
{
System.out.print("\nResizing...");
//... (the remainder of the resize method is truncated in the original listing)
CHAPTER-5
SYSTEM TESTING
5.1 Software Testing
Software testing is the process of confirming the functionality and
correctness of software by running it. Software testing is usually performed
for one of two reasons:
i) Defect detection
ii) Reliability estimation
Software testing comprises two types of testing:
1) White Box Testing
2) Black Box Testing
1) White Box Testing
White box testing is concerned only with testing the software product
itself; it cannot guarantee that the complete specification has been
implemented. White box testing tests against the implementation and will
discover faults of commission, indicating that part of the implementation is
faulty.
2) Black Box Testing
Black box testing is concerned only with testing the specification; it
cannot guarantee that all parts of the implementation have been tested. Thus,
black box testing tests against the specification and will discover faults of
omission, indicating that part of the specification has not been fulfilled.
Functional testing is a testing process that is black box in nature. It is
aimed at examining the overall functionality of the product. It usually
includes testing of all the interfaces and should therefore involve the clients
in the process.
The key to software testing is trying to find the myriad of failure modes,
something that would require exhaustively testing the code on all possible
inputs. For most programs, this is computationally infeasible. Techniques that
attempt to exercise as many of the syntactic features of the code as possible
(within some set of resource constraints) are called white box testing
techniques. Techniques that do not consider the code's structure when test
cases are selected are called black box techniques.
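To make the distinction concrete, consider a small method with a simple specification: clampPixel must map any integer into [0, 255]. The black-box tests below are chosen purely from the specification's boundaries; from a white-box viewpoint, the same three inputs also happen to cover all three branches of this particular implementation. The example is illustrative, not from the project.

```java
// Illustrative example of black-box vs. white-box test selection.
public class ClampExample {
    // Specification: map any int into the pixel range [0, 255].
    public static int clampPixel(int v) {
        if (v < 0) return 0;     // branch 1: below range
        if (v > 255) return 255; // branch 2: above range
        return v;                // branch 3: in range
    }
    public static void main(String[] args) {
        // Black-box tests: derived from the specification's boundaries only.
        if (clampPixel(-1) != 0) throw new AssertionError();
        if (clampPixel(256) != 255) throw new AssertionError();
        if (clampPixel(128) != 128) throw new AssertionError();
        // White-box observation: these three inputs also exercise
        // every branch of the implementation above.
    }
}
```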
In order to fully test a software product, both black box and white box
testing are required. The problem with applying software testing to defect
detection is that testing can only suggest the presence of flaws, not their
absence (unless the testing is exhaustive). The problem with applying software
testing to reliability estimation is that the input distribution used for
selecting test cases may be flawed. In both of these cases, the mechanism used
to determine whether program output is correct is often impossible to develop.
Obviously, the benefit of the entire software testing process is highly
dependent on many different pieces. If any of these parts is faulty, the entire
process is compromised.
Software is unique, unlike other physical processes where inputs are
received and outputs are produced. Where software differs is in the manner in
which it fails. Most physical systems fail in a fixed (and reasonably small)
set of ways. By contrast, software can fail in many bizarre ways. Detecting all
of the different failure modes of software is generally infeasible.
The final stage of the testing process should be system testing. This type
of test examines the whole computer system: all the software components, all
the hardware components, and any interfaces. The complete computer-based
system is checked not only for validity but also to confirm that it meets the
objectives.
5.2 Efficiency of Laplacian Algorithm
Now, consider a simple example of image variability. Imagine that a set of
face images is generated while a human face rotates slowly. We can then say
that this set of face images is intrinsically one dimensional.
Many recent works show that face images reside on a low-dimensional
manifold in the image space. Therefore, an effective subspace learning
algorithm should be able to detect the nonlinear manifold structure. PCA and
LDA effectively see only the Euclidean structure; thus, they fail to detect the
intrinsic low dimensionality. With its neighborhood-preserving character, the
Laplacianfaces method captures the intrinsic face manifold structure.
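The neighborhood-preserving character comes from the nearest-neighbor graph LPP builds over the data. A minimal sketch of that graph construction, using the common heat-kernel weighting, could look like the following; the class and parameter names are illustrative.

```java
// Illustrative construction of LPP's nearest-neighbour weight matrix:
// W[i][j] = exp(-||x_i - x_j||^2 / t) when j is among the k nearest
// neighbours of i (made symmetric), and 0 otherwise.
public class LppGraph {
    public static double[][] weights(double[][] x, int k, double t) {
        int n = x.length;
        double[][] w = new double[n][n];
        for (int i = 0; i < n; i++) {
            // squared distances from point i to every point
            double[] d2 = new double[n];
            for (int j = 0; j < n; j++)
                for (int f = 0; f < x[0].length; f++) {
                    double diff = x[i][f] - x[j][f];
                    d2[j] += diff * diff;
                }
            // select the k nearest neighbours of i (excluding i itself)
            boolean[] used = new boolean[n];
            used[i] = true;
            for (int m = 0; m < k; m++) {
                int best = -1;
                for (int j = 0; j < n; j++)
                    if (!used[j] && (best == -1 || d2[j] < d2[best])) best = j;
                if (best == -1) break;
                used[best] = true;
                double wij = Math.exp(-d2[best] / t);
                w[i][best] = wij;
                w[best][i] = wij; // keep W symmetric
            }
        }
        return w;
    }
}
```

The graph Laplacian derived from this W is what the LPP eigenproblem is solved against, so nearby images receive large weights and distant ones receive none.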
FIGURE:2
Two-dimensional linear embedding of face images by Laplacianfaces
Fig. 1 shows an example in which face images of one person, with various
poses and expressions, are mapped into a two-dimensional subspace. The size of
each image is 20 × 28 pixels, with 256 gray levels per pixel. Thus, each face
image is represented by a point in the 560-dimensional ambient space. However,
these images are believed to come from a submanifold with few degrees of
freedom.
The face images are mapped into a two-dimensional space with continuous
changes in pose and expression. Representative face images are shown in
different parts of the space. The face images are divided into two parts: the
left part includes the face images with open mouths, and the right part
includes the face images with closed mouths. This is because the method tries
to preserve local structure; specifically, it maps neighboring points in the
image space to nearby points in the face space. The 10 testing samples can
simply be located in the reduced representation space by the Laplacianfaces
(column vectors of the matrix W).
FIGURE:3
Fig. 2. Distribution of the 10 testing samples in the reduced representation subspace.
As can be seen, these testing samples optimally find coordinates that
reflect their intrinsic properties, i.e., pose and expression. This observation
tells us that the Laplacianfaces are capable of capturing the intrinsic face
manifold structure.
FIGURE:4
The eigenvalues of LPP and LaplacianEigenmap.
Fig. 3 shows the eigenvalues computed by the two methods. As can be seen,
the eigenvalues of LPP are consistently greater than those of the Laplacian
Eigenmaps.
5.2.1 Experimental Results
A face image can be represented as a point in image space. However, due to
the unwanted variations resulting from changes in lighting, facial expression,
and pose, the image space might not be an optimal space for visual
representation.
We can display the eigenvectors as images; these images may be called
Laplacianfaces. Using the Yale face database as the training set, we present
the first 10 Laplacianfaces in Fig. 4, together with the Eigenfaces and
Fisherfaces. A face image can be mapped into the locality-preserving subspace
by using the Laplacianfaces.
FIGURE:5
Fig. 4. (a) Eigenfaces, (b) Fisherfaces, and (c) Laplacianfaces calculated from the face images in the Yale database.
5.2.2 Face Recognition Using Laplacianfaces
In this section, we investigate the performance of our proposed
Laplacianfaces method for face recognition. The system's performance is
compared with the Eigenfaces method and the Fisherfaces method.
In this study, three face databases were tested: the first is the PIE
(pose, illumination, and expression) database, the second is the Yale database,
and the third is the MSRA database.
In short, the recognition process has three steps. First, we calculate the
Laplacianfaces from the training set of face images; then, the new face image
to be identified is projected into the face subspace spanned by the
Laplacianfaces; finally, the new face image is identified by a nearest-neighbor
classifier.
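The last two of these steps can be sketched directly, assuming the Laplacianfaces have been collected as the columns of a matrix W (d rows, one column per basis vector). SubspaceNN and its method names are illustrative, not from the project code.

```java
// Illustrative projection into the Laplacianface subspace followed by
// nearest-neighbour classification in the reduced space.
public class SubspaceNN {
    // Step 2: project image vector x (length d) onto the m columns of W,
    // giving an m-dimensional representation y.
    public static double[] project(double[][] w, double[] x) {
        int d = w.length, m = w[0].length;
        double[] y = new double[m];
        for (int j = 0; j < m; j++)
            for (int i = 0; i < d; i++) y[j] += w[i][j] * x[i];
        return y;
    }
    // Step 3: return the label of the training projection closest to y
    // (squared Euclidean distance; the minimiser is the same as for the
    // plain distance).
    public static int classify(double[][] trainY, int[] labels, double[] y) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int t = 0; t < trainY.length; t++) {
            double d = 0;
            for (int j = 0; j < y.length; j++) {
                double diff = trainY[t][j] - y[j];
                d += diff * diff;
            }
            if (d < bestD) { bestD = d; best = labels[t]; }
        }
        return best;
    }
}
```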
FIGURE:6
Fig. 5. The original face image and the cropped image.
5.2.3 Yale Database
The Yale face database was constructed at the Yale Center for
Computational Vision and Control. It contains 165 grayscale images of 15
individuals. The images demonstrate variations in lighting condition
(left-light, center-light, right-light), facial expression (normal, happy, sad,
sleepy, surprised, and wink), and with/without glasses.
A random subset of six images per individual was taken as the training
set; the rest were used for testing. The testing samples were then projected
into the low-dimensional representation, and recognition was performed using a
nearest-neighbor classifier.
In general, the performance of the Eigenfaces method and the
Laplacianfaces method varies with the number of dimensions. We show the best
results obtained by Fisherfaces, Eigenfaces, and Laplacianfaces. The
recognition results are shown in Table 1; the Laplacianfaces method
significantly outperforms both the Eigenfaces and Fisherfaces methods.
TABLE :1
Performance Comparison on the Yale Database
FIGURE:7
Fig. 6 shows the plots of error rate versus dimensionality reduction.
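The random train/test split used for the Yale experiment can be sketched as follows, picking a fixed number of image indices per subject for training and leaving the rest for testing. RandomSplit is a hypothetical helper, not part of the project code.

```java
import java.util.*;

// Illustrative per-subject random train/test split, as described for the
// Yale experiment (a fixed number of training images per subject).
public class RandomSplit {
    // subjectOf[i] is the subject id of image i. Returns {trainIdx, testIdx}.
    public static int[][] split(int[] subjectOf, int trainPerSubject, long seed) {
        // group image indices by subject, preserving subject order
        Map<Integer, List<Integer>> bySubject = new LinkedHashMap<>();
        for (int i = 0; i < subjectOf.length; i++)
            bySubject.computeIfAbsent(subjectOf[i], s -> new ArrayList<>()).add(i);
        Random rnd = new Random(seed);
        List<Integer> train = new ArrayList<>(), test = new ArrayList<>();
        for (List<Integer> idx : bySubject.values()) {
            Collections.shuffle(idx, rnd); // random choice within each subject
            for (int m = 0; m < idx.size(); m++)
                (m < trainPerSubject ? train : test).add(idx.get(m));
        }
        int[][] out = new int[2][];
        out[0] = train.stream().mapToInt(Integer::intValue).toArray();
        out[1] = test.stream().mapToInt(Integer::intValue).toArray();
        return out;
    }
}
```

Fixing the seed makes a given split reproducible, which helps when comparing methods on the same partition.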
5.2.4 PIE Database
Fig. 7 shows some of the faces with pose, illumination, and expression
variations from the PIE database. Table 2 shows the recognition results. As can
be seen, Fisherfaces performs comparably to our algorithm on this database,
while Eigenfaces performs poorly. The error rate of our Laplacianfaces method
decreases quickly as the dimensionality of the face subspace increases.
FIGURE:8
Fig. 7. Sample cropped face images of one individual from the PIE database. The original face images were taken
under varying pose, illumination, and expression.
TABLE :2
Performance Comparison on the PIE Database
5.2.5 MSRA Database
This database was collected at Microsoft Research Asia. Sixty-four to
eighty face images were collected for each individual in each session. All the
faces are frontal. Fig. 8 shows sample cropped face images from this database.
In this test, one session was used for training and the other was used for
testing.
FIGURE:9
Fig. 8. Sample cropped face images of one individual from the MSRA database. The original face images were taken
under varying pose, illumination, and expression.
TABLE :3
Performance Comparison on the MSRA Database with Different Numbers of Training Samples
Table 3 shows the recognition results: the Laplacianfaces method has a
lower error rate than both the Eigenfaces and Fisherfaces methods.
CHAPTER-6
CONCLUSION
Our system uses Locality Preserving Projections for face recognition,
which eliminates the flaws in the existing system. The system reduces the faces
to a lower-dimensional representation, and the LPP algorithm is then applied
for recognition. The application was developed and implemented successfully as
described above.
The system works correctly: it provides the proper training data set and
test input for recognition. If the input face matches, the matched image is
displayed; otherwise, a text message is shown.
REFERENCES
1. X. He and P. Niyogi, "Locality Preserving Projections," Proc. Conf.
Advances in Neural Information Processing Systems, 2003.
2. A.U. Batur and M.H. Hayes, "Linear Subspaces for Illumination Robust
Face Recognition," Dec. 2001.
3. M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques
for Embedding and Clustering."
4. P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs.
Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans.
Pattern Analysis and Machine Intelligence, July 1997.
5. M. Belkin and P. Niyogi, "Using Manifold Structure for Partially
Labeled Classification," 2002.
6. M. Brand, "Charting a Manifold," Proc. Conf. Advances in Neural
Information Processing Systems, 2002.
7. F.R.K. Chung, "Spectral Graph Theory," Proc. Regional Conf. Series in
Math., no. 92, 1997.
8. Y. Chang, C. Hu, and M. Turk, "Manifold of Facial Expression," Proc.
IEEE Int'l Workshop Analysis and Modeling of Faces and Gestures, Oct. 2003.
9. R. Gross, J. Shi, and J. Cohn, "Where to Go with Face Recognition,"
Proc. Third Workshop Empirical Evaluation Methods in Computer Vision, Dec. 2001.
10. A.M. Martinez and A.C. Kak, "PCA versus LDA," IEEE Trans. Pattern
Analysis and Machine Intelligence, Feb. 2001.