Criminal Detection System (CDS)
Gurvinder Singh(COE-4)
Harshdeep Singh(COE-5)
Amit Mangotra(COE-2)
Amit kumar(COE-4)
 
 
•Mentor Consent Form
•Project Overview
•Need Analysis
•Literature Survey
•Objectives
•Methodology
•Work Plan
•Project Outcomes & Individual Roles
•Course Subjects
•References
 
 
Introduction
● In the present scenario, there is a great need to maintain information security and to protect physical property.
● Information and property can be secured by verifying the “true” identity of an individual.
● Unlike other forms of identification such as fingerprint analysis and iris scans, face recognition is user-friendly and non-intrusive.
● It consists of analysing the unique shape, pattern and positioning of facial features.
Motivation
● Humans have the ability to recognize a face against any background, even after years of not seeing it. This motivates us to mimic such a system; however, recognizing a face obscured by a beard, makeup, a covering or scars is a very challenging task.
● Numerous studies have explored the various concepts and problems involved in the face recognition process.
● Artificial neural networks have been researched heavily in the recent past because of their similarity to the human brain.
● Our facial recognition algorithm compares a captured image against a database of stored faces and tries to match them under extreme conditions.
 
 
● Background, illumination, angle and other factors make recognition difficult.
● The system is designed to accurately classify images subject to a variety of unpredictable conditions.
Overview
The title of this project is “Criminal Detection System”. The application detects faces from different angles, runs a search through the database for the nearest match and, if one is found, displays the matched face. This is done using neural network technology.
The face is our primary focus of attention in social life, playing an important role in conveying identity and emotion. We can identify a number of faces learned throughout our lifespan and recognize familiar faces at a glance even after years of separation. This skill is quite robust despite large variations in the visual stimulus due to changing conditions, aging and distractions such as a beard, glasses or a change of hairstyle.
Computational models of face recognition are interesting because they can contribute not only to theoretical knowledge but also to practical applications. Computers that detect and recognize faces could be applied to a wide variety of tasks including criminal identification, security systems, image and film processing, identity verification, tagging and human-computer interaction. Unfortunately, developing a computational model of face detection and recognition is quite difficult because faces are complex, multidimensional and meaningful visual stimuli.
In our project, we have studied and implemented a simple but effective face detection algorithm which takes human skin colour and facial patterns into account. An individual can be identified even when the face is partly covered. Our algorithm mainly focuses on the stages of facial complexity and on weighted patterns to recognise a face in difficult conditions.
Our aim, which we believe we have reached, was to develop a method of face recognition that is fast, robust, reasonably simple and accurate, using algorithms and techniques that are relatively simple and easy to understand.
Technologies Needed
•Software
MATLAB 8.1 (r2015a)
•Hardware
 
 
USB PC Camera
•Required Products
Image Acquisition Toolbox
Image Processing Toolbox
Computer Vision System Toolbox
Neural Network Toolbox
Methodology
Face recognition consists of two major tasks:
Locating a face in an image
1. Determine parameters of the image such as colour, stillness, etc.
2. Check the lighting conditions and filter them out.
3. Make an image restoration point for the various parts of the face.
Recognizing the face
1. Ensure proper capture of the image and filter out the background.
2. Match the features with the database.
3. Output the result.
2 FACE RECOGNITION
The face recognition algorithms used here are Principal Component Analysis (PCA), Multilinear Principal Component Analysis (MPCA) and Linear Discriminant Analysis (LDA). Every algorithm has its own advantages. While PCA is the simplest and fastest algorithm, MPCA and LDA, which have been applied together as a single algorithm named MPCALDA, provide better results under complex circumstances such as varying face position, luminance variation etc. Each of them is discussed below.
2.1 PRINCIPAL COMPONENT ANALYSIS (PCA)
PCA is a mathematical procedure that transforms a number of possibly correlated variables into a number of uncorrelated variables called principal components, related to the original variables by an orthogonal transformation. The transformation is defined in such a way that the first principal component has as high a variance as possible (that is, it accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it be orthogonal to the preceding components. PCA is sensitive to the relative scaling of the original variables. PCA aims to find the projection directions with the minimum reconstruction error and then maps the face dataset to a low-dimensional space spanned by the directions corresponding to the top eigenvalues. Traditional PCA face recognition technology can reach accuracy rates of 70%–92%. However, it is still not fully practical.
The major advantage of PCA is that the eigenface approach helps reduce the size of the database required for recognition of a test image. The trained images are not stored as raw images; rather, they are stored as the weights found by projecting each trained image onto the set of eigenfaces obtained.
2.1.1 The eigenface approach
In the language of information theory, the relevant information in a face needs to be extracted and encoded efficiently, and one face encoding is compared with a similarly encoded database. The trick behind extracting this kind of information is to capture as many variations as possible from the set of training images. Mathematically, the principal components of the distribution of faces are found using the eigenface approach. First the eigenvectors of the covariance matrix of the set of face images are found and sorted according to their corresponding eigenvalues. Then a threshold eigenvalue is chosen, and eigenvectors with eigenvalues below that threshold are discarded, so that ultimately the eigenvectors with the most significant eigenvalues are selected. The set of face images is then projected onto these significant eigenvectors to obtain a set called eigenfaces. Every face contributes to the eigenfaces obtained. The best M eigenfaces span an M-dimensional subspace called the “face space” [2].
Each individual face can be represented exactly as a linear combination of the eigenfaces, or approximated using only the significant eigenfaces corresponding to the most significant eigenvalues.
The test image subjected to recognition is also projected into the face space, and the weights corresponding to each eigenface are found. The weights of all the training images are likewise found and stored. The weights of the test image are then compared with the set of weights of the training images and the best possible match is found.
The comparison is done using the “Euclidean distance” measurement: the smaller the distance, the better the match.
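The eigenface pipeline described above can be sketched in a few lines. The project itself uses MATLAB, so the following Python/NumPy version is purely an illustration: the random stand-in training set, the array sizes (10 images of 20x32 pixels) and the variable names are all assumptions, not the project's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: 10 face images of 20x32 pixels, flattened to vectors.
faces = rng.random((10, 20 * 32))

# Centre the data on the mean face.
mean_face = faces.mean(axis=0)
A = faces - mean_face

# Eigenvectors of the covariance matrix via SVD; the rows of Vt are already
# sorted by decreasing singular value (i.e. decreasing eigenvalue).
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the M most significant eigenfaces.
M = 5
eigenfaces = Vt[:M]                  # shape (M, 640)

# Each training image is stored only as its M projection weights.
train_weights = A @ eigenfaces.T     # shape (10, M)

# Project a test image and find the closest match by Euclidean distance.
test = faces[3] + 0.01 * rng.random(20 * 32)
w = (test - mean_face) @ eigenfaces.T
dists = np.linalg.norm(train_weights - w, axis=1)
best = int(np.argmin(dists))         # index of the best-matching training face
```

Because the test image is a lightly perturbed copy of training face 3, the minimum-distance match recovers that face.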
 
 
Euclidean distance
This is the distance by which we identify the images and find the matched image for further neural network recognition. There is a threshold value λ for the Euclidean distance: if the computed distance is less than λ, the image is selected for neural recognition.
The approach to face recognition involves the following initialisation operations:
1. Fetch an initial set of N face images (training images).
2. Calculate the eigenfaces from the training set, keeping only the M images that correspond to the highest eigenvalues. These M images define the “face space”. As new faces are encountered, the eigenfaces can be updated or recalculated accordingly, and this reduces the dimensionality of the images to an efficient number.
3. Calculate the corresponding distribution in the M-dimensional weight space for each known individual by projecting their face images onto the face space.
4. Calculate a set of weights by projecting the input image onto the M eigenfaces.
5. Determine whether the image is a face or not by checking the closeness of the image to the face space and the capsule neural network.
6. If it is close enough, classify the weight pattern as either a known person or as unknown based on the Euclidean distance measured.
7. If it is close enough, report the recognition as successful and provide relevant information about the recognised face from the database, which contains information about the faces.
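Steps 4-7 above amount to a small decision routine. The Python/NumPy sketch below is an illustration only (the project's implementation is in MATLAB); the toy face space, thresholds and names are assumptions.

```python
import numpy as np

def classify(test, mean_face, eigenfaces, train_weights, labels,
             face_space_thresh, match_thresh):
    """Steps 4-7: project, check closeness to the face space, then match."""
    phi = test - mean_face
    w = phi @ eigenfaces.T      # step 4: weights on the M eigenfaces
    recon = w @ eigenfaces      # reconstruction from the face space
    # Step 5: distance to the face space (reconstruction error).
    if np.linalg.norm(phi - recon) > face_space_thresh:
        return "not a face"
    # Step 6: Euclidean distance to every stored weight pattern.
    dists = np.linalg.norm(train_weights - w, axis=1)
    best = int(np.argmin(dists))
    # Step 7: close enough -> report the stored identity, else unknown.
    return labels[best] if dists[best] < match_thresh else "unknown"

# Toy face space: two orthonormal eigenfaces over 4-pixel "images".
mean_face = np.zeros(4)
eigenfaces = np.eye(2, 4)
train_weights = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["person_A", "person_B"]

known = classify(np.array([1.0, 0.0, 0.0, 0.0]), mean_face, eigenfaces,
                 train_weights, labels, face_space_thresh=1.0, match_thresh=0.5)
```

An input lying far from the span of the eigenfaces is rejected at step 5; an input inside the face space but far from every stored weight vector is reported as unknown at step 7.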
2.3 Advantages of PCA
1. It is the simplest approach and can be used for both data compression and face recognition.
2. It operates at a faster rate.
3. Its main features include dimension reduction, relevance removal and probability estimation.
2.4 Limitations of PCA
1. Requires a full frontal view of the face.
2. Sensitive to lighting conditions and to the position of the face.
3. Considers every face in the database as a different image; faces of the same person are not classified into classes.
 
 
A better approach, called MPCALDA, was studied and used to compensate for these limitations. It is a combination of MPCA and LDA: while MPCA accounts for the different variations in images, LDA classifies the images according to whether they belong to the same or different persons.
2.5 MPCALDA (Multilinear Principal Component Analysis and Linear Discriminant Analysis)
2.5.1 Multilinear Principal Component Analysis (MPCA)
MPCA is an extension of PCA that uses multilinear algebra and is capable of learning the interactions of multiple factors such as different viewpoints, different lighting conditions and different expressions.
Compared with current traditional face recognition methods, our MPCALDA approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. LDA is used to project samples into a new discriminant feature space, while the K-nearest-neighbour (KNN) rule is adopted for classifying the sample set.[3] In PCA the aim was to reduce the dimensionality of the images: for example, a 20x32x30 dataset was converted to 640x30, that is, the images were converted to 1D vectors before the eigenfaces were found. But this approach ignores the other dimensions of an image: an image of size 20x32 carries structure in each dimension of the face, and 1D vectorization does not take advantage of all those features. Therefore a dimensionality reduction technique operating directly on the tensor object, rather than on its 1D vectorized version, is applied here.[4]
The approach is similar to PCA in that the features representing a face are reduced by the eigenface approach. But while in PCA only one transformation vector is used, in MPCA N different transformation vectors representing the different dimensions of the face images are applied.
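The difference between 1D vectorization and working on the tensor directly can be seen in a short sketch of mode-n unfolding, the matricization that multilinear methods like MPCA operate on. This Python/NumPy fragment is an illustration only; the `unfold` helper is an assumed name, not part of the project.

```python
import numpy as np

# A stand-in dataset: 30 images of 20x32 pixels kept as a 3rd-order tensor.
tensor = np.arange(20 * 32 * 30, dtype=float).reshape(20, 32, 30)

# 1D vectorization (what PCA uses): every image becomes a 640-vector,
# discarding the row/column structure of the face.
vectorized = tensor.reshape(20 * 32, 30)      # shape (640, 30)

# Mode-n unfolding (what MPCA operates on): matricize along one mode at a
# time, so each mode keeps its own transformation.
def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

mode0 = unfold(tensor, 0)   # rows:    shape (20, 32*30)
mode1 = unfold(tensor, 1)   # columns: shape (32, 20*30)
mode2 = unfold(tensor, 2)   # images:  shape (30, 20*32)
```

MPCA then learns one projection per mode from these unfoldings, instead of the single projection PCA learns from the vectorized matrix.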
2.5.2 Linear Discriminant Analysis (LDA)
LDA is a computational scheme for evaluating the significance of different facial attributes in terms of their discriminative power. The database is divided into a number of classes; each class contains a set of images of the same person under different viewing conditions, such as different frontal views, facial expressions, lighting and background conditions, and images with or without glasses. It is also assumed that all images consist of only the face region and are of the same size.
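The discriminative idea behind LDA can be sketched with Fisher's two-class criterion: choose the projection that separates class means while keeping each class compact. The Python/NumPy example below uses made-up 2-D feature vectors for illustration; it is not the project's implementation.

```python
import numpy as np

# Two classes of face-feature vectors (stand-in 2-D features per image).
c1 = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
c2 = np.array([[6.0, 5.0], [7.0, 8.0], [8.0, 7.0]])

m1, m2 = c1.mean(axis=0), c2.mean(axis=0)

# Within-class scatter: spread of each class around its own mean.
Sw = (c1 - m1).T @ (c1 - m1) + (c2 - m2).T @ (c2 - m2)

# Fisher's direction maximizes between-class scatter relative to
# within-class scatter: w is proportional to Sw^{-1} (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)

# Projected onto w, the two classes separate cleanly.
p1, p2 = c1 @ w, c2 @ w
```

On the projected axis the two classes occupy disjoint intervals, which is exactly the property a class-based recognizer needs.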
2.5.3 Face Slicing (FS)
Splitting the data makes face identification an easier task. To construct a new image, we can combine any facial clipping with any other clipping to build a new face. Based on these newly constructed faces, we compare them with the images previously saved in the database and match the complete image against the images sharing some similarities, so that we obtain the best match available in the database.
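The slicing-and-matching idea can be sketched as follows. This Python/NumPy fragment is illustrative only: the helper names, the horizontal-strip slicing and the random stand-in images are assumptions. Each database face is scored slice by slice, so visible strips can still match when part of the face is covered.

```python
import numpy as np

def slice_face(img, n_strips):
    """Split a face image into n horizontal strips (brow band, eye band, ...)."""
    return np.array_split(img, n_strips, axis=0)

def best_match(test_slices, database):
    """Score each database face by its summed per-slice similarity and
    return the index of the best-scoring face."""
    scores = []
    for face in database:
        db_slices = slice_face(face, len(test_slices))
        score = -sum(np.linalg.norm(t - d)
                     for t, d in zip(test_slices, db_slices))
        scores.append(score)
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
database = [rng.random((8, 8)) for _ in range(3)]

# Slices taken from database face 1 should match it exactly.
probe = slice_face(database[1], n_strips=4)
match = best_match(probe, database)
```

In a covered-face setting one would drop the occluded slices from `probe` and score only the remaining ones.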
Capsule Network (Network Classifier)
This has three steps:
1. Feed-forward of the input training pattern.
2. Backpropagation through the hidden layers for the associated image slice. Each image slice is matched with the database image to ensure accurate matching of the image.
3. Adjustment of the weights.
Neural Networks
● Inspired by biological networks of neurons.
● The system trains itself initially and keeps learning over time as faces are used.
● Each neuron processes data individually and produces an output.
● Layers of neurons form the entire system.
● For a faster convolutional network, we use an improved variant known as a capsule network.
● Weights are adjusted during the training period and through real-time learning.
● Possesses a remarkable ability to recognize the pattern of a known image under face-cover conditions such as a beard, makeup, enhanced makeup, scars on the face, etc.
 
 
Procedure
Sliced Matrix and Training of Images
•Comparison of sliced images to match the complete image.
•Recursive match computation applied to all parts of the image against the database images.
•The classifier identifies each part of the face and trains itself for better results.
Thus, we arrive at the particular image which shows the maximum number of matches.
Image Segmentation
• For segmentation we use an edge-based segmentation technique.
• In edge detection, the image is split by spotting differences in the pixels or intensity of the digital image.
• Edge detection determines the value of pixels on the boundaries of a region. Segmentation is performed by the edge detection method, which notices pixels or edges between diverse sections.
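A minimal sketch of edge-based segmentation with the Sobel operator (one common choice; the project does not name a specific edge detector). The Python/NumPy code and the toy image are illustrative assumptions: gradient magnitudes are computed per pixel and thresholded to mark region boundaries.

```python
import numpy as np

def sobel_edges(img, thresh):
    """Mark boundary pixels where the intensity-gradient magnitude
    exceeds a threshold (edge-based segmentation)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)   # horizontal intensity difference
            gy = np.sum(ky * patch)   # vertical intensity difference
            mag[i, j] = np.hypot(gx, gy)
    return mag > thresh

# A bright square on a dark background: edges appear only at its border.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
edges = sobel_edges(img, thresh=1.0)
```

Interior pixels of the square have zero gradient and are not marked; only the boundary between the two regions is.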
Clustering
 
 
•Clustering is a method in which objects are unified into groups based on their
characteristics.
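As an illustration of grouping objects by their characteristics, here is a minimal k-means sketch in Python/NumPy. The algorithm choice and every name in it are assumptions for illustration; the text above does not specify a particular clustering method.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Group points around k centroids by nearest-centroid assignment."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = points[labels == c].mean(axis=0)
    return labels

# Two well-separated blobs resolve into two groups.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
labels = kmeans(pts, k=2)
```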
Uses
All of this makes face recognition ideal for high traffic areas open to the general public, such as:
● Airports and railway stations
● Corporations
● Cashpoints
● Stadiums
● Public transportation
● Financial institutions
● Government offices
● Businesses of all kinds
4 References
[1] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991, pp. 71–86.
[2] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Multilinear Principal Component Analysis of Tensor Objects,” IEEE Transactions on Neural Networks, 2008.
[3] I. Kim, J. H. Shim, and J. Yang, “Face Detection.”
 