A PROJECT REPORT
On
FACE RECOGNITION AND TRACKING SYSTEM
Bachelor of Technology
In
Electronics & Communication Engineering
Submitted By
ABHISHEK GUPTA (1002831004)
A. SANDEEP (1002831001)
ABHISHEK SOAM (1002831007)
ASHISH KUMAR (1002831029)
Under the Guidance of
Mr. Prashant Gupta
Assistant Professor
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGG.
IDEAL INSTITUTE OF TECHNOLOGY
GHAZIABAD (INDIA)
ACKNOWLEDGEMENT
I take this opportunity to express my profound gratitude and deep regards to our
Guide Mr. PRASHANT GUPTA, Assistant Professor Department of Electronics &
Communication Engineering, Ideal Institute of Technology, Ghaziabad for his
exemplary guidance, advice and constant encouragement throughout the course of this
project. The blessing, help and guidance given by his time to time shall carry me a
long way in the journey of life on which I am about to embark.
I am also very thankful to Mr. NARBADA PRASAD GUPTA, H.O.D. ECE
Department, Ideal Institute of Technology, Ghaziabad for approving this project as a
final year major project.
I want to thank my teammates A.SANDEEP, ABHISHEK GUPTA, ASHISH
KUMAR and ABHISHEK SOAM for their valuable role in this project. A.SANDEEP
and ABHISHEK SOAM did the motor movement programming and hardware
building and synchronizing delays and pauses for perfection in motor movement
programming. ASHISH KUMAR and ABHISHEK GUPTA helped in making the face
recognition program and the report work. All the team members have a keen role in
the research and development of this project.
I am also thankful to my father and mother for motivating me and helping for
the testing of this project. I want to acknowledge all my friends, who donated their
faces for testing and training algorithm.
A.SANDEEP (1002831001)
ABHISHEK GUPTA (1002831004)
ABHISHEK SOAM (1002831007)
ASHISH KUMAR(1002831029)
ABSTRACT
Image processing is the future of the world: through this amazing tool we can control or operate almost everything around us, in terms of security, controlling the computer, and much more. In this project the camera is used as a sensor, or we can say an input device, which takes input in the form of photos or video, and that input feeds the algorithms. MATLAB is one of the most powerful software packages, with which we can build almost any kind of project according to our requirements. MATLAB contains many toolboxes, including the Image Processing Toolbox and the Image Acquisition Toolbox. These toolboxes contain various functions and libraries with which we can perform several tasks; they turn complex tasks into simple ones. MATLAB can also convert code into C/C++ form and generate a HEX file for a microcontroller or processor. This is all about image processing.
The project, named “Face recognition and tracking system”, is built on the concept of image processing, using the Image Processing and Computer Vision toolboxes. In this project we use two different cameras: one for generating the database, and the other, which is movable, for recognition and tracking. When the face of any person matches a face from the generated database, the detection and tracking operation starts.
TABLE OF CONTENTS
1. WHAT IS IMAGE PROCESSING? .…………5
1.1 Introduction ...………. 5
1.2 Overview ………… 6
2. PROJECT INTRODUCTION …….…… 8
2.1 Introduction …………. 8
2.2 Objective ………… 8
2.3 Project overview ………….8
3. SOFTWARE AND HARDWARE USED …………. 9
3.1 Introduction ………….. 9
3.2 Hardware Requirements ………….. 9
3.3 Software Requirements …………..9
3.4 Introduction to hardware: Arduino ….…………9
3.5 Introduction to software: MATLAB & Arduino IDE ……...14
4. BLOCK DIAGRAM …….…….20
5. PRINCIPLE OF OPERATION ....................21
5.1 Eigen Face Algorithm …………….21
5.2 Viola Jones algorithm …………….23
6. RESULT ……………49
7. PROBLEMS FACED ……….…….51
8. APPLICATIONS OF THIS PROJECT ....…………. 52
9. APPENDIX …………….
10.REFERENCES ......………..55
WHAT IS IMAGE PROCESSING?
1.1 Introduction
Image processing, for which MATLAB provides dedicated toolboxes, is used for the following purposes:
- Transforming digital information representing images
- Improving pictorial information for human interpretation
- Removing noise
- Correcting for motion, camera position and distortion
- Enhancement by changing contrast or color
- Processing pictorial information by machine
- Segmentation: dividing an image up into constituent parts
- Representation: describing an image by some more abstract model
- Classification
- Reducing the size of image information for efficient handling
- Compression with loss of digital information that minimizes the loss of "perceptual" information, e.g. JPEG, GIF and MPEG, and multi-resolution representations
The human visual system:
- The lens focuses an image on the retina (like a camera).
- The perceived pattern is affected by the distribution of light receptors (rods and cones).
- The (6-7 million) cones are in the center of the retina (the fovea) and are sensitive to color; each is connected to its own neuron.
- The (75-150 million) rods are distributed everywhere, connected in clusters to a neuron.
- Unlike an ordinary camera, the eye is flexible.
- The range of intensity levels supported by the human visual system is about 10^10.
- It uses brightness adaptation to set sensitivity.
Color Vision
The color-responsive chemicals in the cones are called cone pigments and are very similar to the chemicals in the rods. The retinal portion of the chemical is the same; however, the scotopsin is replaced with photopsins. Therefore, the color-responsive pigments are made of retinal and photopsins. There are three kinds of color-sensitive pigments:
- Red-sensitive pigment
- Green-sensitive pigment
- Blue-sensitive pigment
Image processing involves changing the nature of an image in order either to improve its pictorial information for human interpretation or to render it more suitable for autonomous machine perception.
1.2 Overview
What Is An Image?
- An image is a 2D rectilinear array of pixels.
- Any image from a scanner, from a digital camera, or in a computer is a digital image.
- A digital image is formed by a collection of pixels of different colors.
Fig. 1.1 B&W image Fig. 1.2 Gray scale image Fig. 1.3 RGB image
Types of image:
There are three different types of images in MATLAB:
- Binary images, or B&W images
- Intensity images, or grayscale images
- Indexed and RGB (color) images
Binary Image:
These are also called B&W images, containing ‘1’ for white and ‘0’ for black.
Fig. 1.4 B&W image with region pixel values
Intensity image:
These are also called ‘grayscale images’, containing values in the range 0 to 1.
Fig. 1.5 Intensity image with region pixel values
Indexed image:
These are the color images, also represented as ‘RGB images’.
Fig. 1.6 Color spectrum Fig. 1.7 Indexed image with region pixel values
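To make the three image types concrete, the following sketch shows how each is represented as an array (a Python/NumPy illustration rather than MATLAB, with made-up 4x4 values; the conventions mirror the MATLAB ones described above):

```python
import numpy as np

# A hypothetical 4x4 intensity (grayscale) image with values in [0, 1],
# the same convention MATLAB uses for intensity images.
gray = np.array([
    [0.0, 0.2, 0.4, 0.6],
    [0.1, 0.3, 0.5, 0.7],
    [0.2, 0.4, 0.6, 0.8],
    [0.3, 0.5, 0.7, 0.9],
])

# Binary (B&W) image: threshold the grayscale values,
# giving 1 for white and 0 for black.
binary = (gray > 0.5).astype(np.uint8)

# RGB (color) image: a height x width x 3 array, one plane per channel.
rgb = np.stack([gray, gray, gray], axis=-1)

print(binary)
print(rgb.shape)   # (4, 4, 3)
```

Thresholding at 0.5 is just one illustrative choice; any cut-off in [0, 1] produces a valid binary image.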
PROJECT INTRODUCTION
2.1 Introduction
The project, named “Face recognition and tracking system”, is built on the concept of image processing, using the Image Processing and Computer Vision toolboxes of the MATLAB software. In this project we use two different cameras: one for generating the database, and the other, which is movable, for recognition. When the face of any person matches a face from the generated database, the detection and tracking operation starts.
2.2 Objective
The prime objective of the project is to develop a standalone application, with interlinked hardware, that can recognize live faces against a generated database of images. The application includes features such as recognition of faces from a still image taken by the camera or loaded from the hard drive, tracking of faces, observed as movement of the camera in the direction of the face, and real-time recognition against the database.
2.3 Project overview
In our project we have to develop a standalone application, interlinked with hardware, that can do the following tasks:
1. Face detection from live image.
2. Face detection from drive.
3. Tracking any face.
4. Recognize and track a face from live image.
5. Recognize and track a face from drive.
6. Determine the population density.
SOFTWARE AND HARDWARE USED
3.1 Introduction
To build such a project we need software as well as hardware, because we control an embedded product from software running on a computer.
Hardware: the physical part of our project that interacts with the real world, e.g. the Arduino kit, microcontroller, etc.
Software: the set of programs and instructions that tells the hardware which task to perform, e.g. the set of instructions written in the MATLAB IDE.
3.2 Hardware Requirements
The following are the hardware components that are required during this project :
A) 1 Personal computer.
B) 1 Arduino kit (with ATmega328).
C) 2 1.3MP cameras.
D) 1 Motor driving shield (with L293D IC).
E) 1 Seven segment display shield.
F) 1 Program burner wire.
G) 2 Line wires.
H) 20 Single stand wires.
I) 2 Batteries.
J) 2 Battery holders.
K) 2 DC Motors.
L) 1 Holding frame.
3.3 Software Requirements
The software required to program our project and its microcontroller is given as follows:
A) MATLAB R2012b software.
B) Arduino IDE software.
C) Arduino to MATLAB interfacing files.
3.4 Introduction to hardware: ARDUINO
Arduino is a single-board microcontroller platform designed to make using electronics in multidisciplinary projects more accessible. The hardware consists of a simple open-source board designed around an 8-bit Atmel AVR microcontroller, or a 32-bit Atmel ARM. The software consists of a standard programming language compiler and a boot loader that executes on the microcontroller.
Fig 3.1: Arduino board
Arduino boards can be purchased pre-assembled or as do-it-yourself kits. Hardware design information
is available for those who would like to assemble an Arduino by hand. It was estimated in mid-2011
that over 300,000 official Arduinos had been commercially produced.
The Arduino board exposes most of the microcontroller's I/O pins for use by other circuits. The
Diecimila, Duemilanove, and current Uno provide 14 digital I/O pins, six of which can produce pulse-
width modulated signals, and six analog inputs. These pins are on the top of the board, via female 0.1-
inch (2.5 mm) headers. Several plug-in application shields are also commercially available.
The Arduino Nano, and Arduino-compatible Bare Bones Board and Boarduino boards may provide
male header pins on the underside of the board to be plugged into solderless breadboards.
There are a great many Arduino-compatible and Arduino-derived boards. Some are functionally
equivalent to an Arduino and may be used interchangeably. Many are the basic Arduino with the
addition of commonplace output drivers, often for use in school-level education to simplify the
construction of buggies and small robots. Others are electrically equivalent but change the form factor,
sometimes permitting the continued use of Shields, sometimes not. Some variants use completely
different processors, with varying levels of compatibility.
OFFICIAL BOARDS
The original Arduino hardware is manufactured by the Italian company Smart Projects. Some Arduino-
branded boards have been designed by the American company SparkFun Electronics. Sixteen versions
of the Arduino hardware have been commercially produced to date.
Examples include the Duemilanove (rev 2009b), Arduino Uno, Arduino Leonardo, Arduino Mega, Arduino Nano, Arduino Due (ARM-based), and LilyPad (rev 2007).
SHIELDS
Arduino and Arduino-compatible boards make use of shields: printed circuit expansion boards that plug into the normally supplied Arduino pin headers. Shields can provide motor control, GPS, Ethernet, an LCD display, or breadboarding (prototyping). A number of shields can also be built DIY.
Fig 3.2: Arduino shields
3.5 Introduction to software : MATLAB & ARDUINO IDE
An introduction to MATLAB
MATLAB = Matrix Laboratory
“MATLAB is a high-level language and interactive environment that enables you to perform
computationally intensive tasks faster than with traditional programming languages such as C, C++ and
Fortran.”
MATLAB is an interactive, interpreted language that is designed for fast numerical matrix calculations.
The MATLAB Environment
Fig. 3.3 MATLAB Environment
MATLAB window components
1. Workspace
> Displays all the defined variables
2. Command Window
> To execute commands in the MATLAB environment
3. Command History
> Displays record of the commands used
4. File Editor Window
> Define your functions
MATLAB Help
Fig. 3.4: MATLAB Help
MATLAB Help is an extremely powerful aid to learning MATLAB. Help not only contains the theoretical background, but also shows demos for implementation. MATLAB Help can be opened from the HELP pull-down menu.
The purpose of this tutorial is to gain familiarity with MATLAB's Image Processing Toolbox. This tutorial does not cover all of the functions available in MATLAB. It is very useful to go to Help → MATLAB Help in the MATLAB window if you have any questions not answered by this tutorial. Many of the examples in this tutorial are modified versions of MATLAB's help examples. The help tool is especially useful in image processing applications, since there are numerous filter examples.
Fig. 3.5: M-file for Loading Images
Fig. 3.6: BitmapImage Fig. 3.7: Grayscale Image
BASIC EXAMPLES
EXAMPLE 1
How do you build a matrix (or image)?
r = 256;
c = 256;
img = zeros(r, c);
img(100:105, :) = 0.5;
img(:, 100:105) = 1;
figure;
imshow(img);
OUTPUT
Fig.3.8: Example 1 output
EXAMPLE 2
PROGRAM:
r = 256;
c = 256;
img = rand(r,c);
img = round(img);
figure;
imshow(img);
Fig. 3.9: Example 2 output
An introduction to Arduino IDE
The Arduino integrated development environment (IDE) is a cross-platform application written
in Java, and is derived from the IDE for the Processing programming language and
the Wiring projects. It is designed to introduce programming to artists and other newcomers
unfamiliar with software development. It includes a code editor with features such as syntax
highlighting, brace matching, and automatic indentation, and is also capable of compiling and
uploading programs to the board with a single click. A program or code written for Arduino is
called a "sketch".
Arduino programs are written in C or C++. The Arduino IDE comes with a software library called "Wiring", from the original Wiring project, which makes many common input/output operations much easier. Users need only define two functions to make a runnable cyclic-executive program:
- setup(): a function run once at the start of a program that can initialize settings
- loop(): a function called repeatedly until the board powers off
#define LED_PIN 13

void setup() {
  pinMode(LED_PIN, OUTPUT);     // Enable pin 13 for digital output
}

void loop() {
  digitalWrite(LED_PIN, HIGH);  // Turn on the LED
  delay(1000);                  // Wait one second (1000 milliseconds)
  digitalWrite(LED_PIN, LOW);   // Turn off the LED
  delay(1000);                  // Wait one second
}
It is a feature of most Arduino boards that they have an LED and load resistor connected between
pin 13 and ground, a convenient feature for many simple tests.[9] The previous code would not be
seen by a standard C++ compiler as a valid program, so when the user clicks the "Upload to I/O
board" button in the IDE, a copy of the code is written to a temporary file with an extra include
header at the top and a very simple main() function at the bottom, to make it a valid C++ program.
The Arduino IDE uses the GNU toolchain and AVR Libc to compile programs, and uses avrdude
to upload programs to the board.
BLOCK DIAGRAM
The complete hardware arrangement is shown by the following block diagram:
Fig. 4.1 Block diagram
Schematic diagram
Fig 4.2: Schematic of the actual circuit
Actual Hardware Design
Fig. 4.3: Actual Hardware Design
PRINCIPLE OF OPERATION
5.1 Menu
We have created a standalone application that is used for face detection and recognition. For this we created a graphical menu containing the choices; simply clicking the desired choice with the mouse performs the corresponding action.
The menu is built from simple if statements, and the desired function calls are written inside each if branch. The figure given below shows the menu we have made.
Fig5.1: Menu
5.2 Database generation
The next step was to create a database. We created the database by taking face photos through the camera, renaming and saving the face images in .jpeg format in a particular folder, and then comparing the face to be recognized with the database faces. The number of face images saved in the database equals M * times, where M is the number of persons entered by the user and times is the number of images per person, used to increase accuracy; e.g. with times = 5, there are 5 face images per person. The flow chart for database generation is given below.
Fig 5.2: flow diagram for generating database
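The bookkeeping described above can be sketched as follows (a Python illustration of the naming scheme only; `database_filenames` and `person_of` are hypothetical helpers, not files from the project):

```python
import math

def database_filenames(M, times):
    """File names for M persons with `times` captures each:
    1.jpg, 2.jpg, ..., (M*times).jpg."""
    return [f"{i}.jpg" for i in range(1, M * times + 1)]

def person_of(i, times):
    """Person number that image i.jpg belongs to."""
    return math.ceil(i / times)

names = database_filenames(2, 5)
print(names[0], names[-1])   # 1.jpg 10.jpg
print(person_of(7, 5))       # 2 (image 7 belongs to person 2)
```

The same ceiling-division idea is what the recognition code uses when it rounds Min_id/times to recover the detected person's number.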
5.3 Recognition
The recognition process is done by Eigen face algorithm. The flow diagram for recognition
process is given below.
Fig 5.3: flow diagram for recognition process
Eigenface Algorithm
The eigenface algorithm used for recognizing a face is described as follows:
1. The first step is to obtain a set S of M face images Γ_1 … Γ_M; in our case M = 25. Each image is transformed into a vector of size N (its pixels stacked into a column) and placed into the set.
2. After you have obtained your set, compute the mean image Ψ = (1/M) · (Γ_1 + … + Γ_M).
3. Then find the difference Φ_i = Γ_i − Ψ between each image and the mean image.
4. Next we seek a set of M orthonormal vectors u_n which best describe the distribution of the data. The k-th vector u_k is chosen such that
λ_k = (1/M) · Σ_{n=1..M} (u_k^T Φ_n)^2
is a maximum, subject to the orthonormality constraint u_l^T u_k = δ_lk (1 if l = k, 0 otherwise).
Note: u_k and λ_k are the eigenvectors and eigenvalues of the covariance matrix C.
5. We obtain the covariance matrix C in the following manner:
C = (1/M) · Σ_{n=1..M} Φ_n Φ_n^T = A A^T, where A = [Φ_1 Φ_2 … Φ_M].
6. Since C is N×N and too large to work with directly, we instead compute the eigenvectors v_l of the much smaller M×M matrix L = A^T A.
7. Once we have found the eigenvectors v_l of L, the eigenvectors of C are obtained as u_l = A v_l.
These are the eigenfaces of our set of original images.
Recognition Procedure
1. A new face is transformed into its eigenface components. First we compare our input image with our mean image and multiply their difference with each eigenvector of the L matrix. Each value represents a weight and is saved in a vector Ω.
2. We now determine which face class provides the best description for the input image. This is done by minimizing the Euclidean distance ε_k = ||Ω − Ω_k||, where Ω_k is the weight vector of the k-th face class.
3. The input face is considered to belong to a class if ε_k is below an established threshold θ_ε; the face image is then considered a known face. If the difference is above that threshold but below a second threshold, the image is classified as an unknown face. If the input image is above both thresholds, the image is determined not to be a face.
4. If the image is found to be an unknown face, you can decide whether or not to add it to your training set for future recognitions. You would then repeat steps 1 through 7 to incorporate the new face image.
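The algorithm and recognition procedure above can be sketched compactly in Python/NumPy (a toy illustration in which random vectors stand in for face images; this is not the project's MATLAB code, and the normalization and threshold steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: M flattened "face" vectors of N pixels each
# (random stand-ins for the project's real camera captures).
M, N = 10, 64
faces = rng.random((N, M))                    # each column is one face, Gamma_i

mean_face = faces.mean(axis=1, keepdims=True) # Psi
A = faces - mean_face                         # columns are Phi_i

# Covariance trick (steps 6-7): eigenvectors v of the small MxM matrix
# L = A^T A give the eigenfaces u = A v after normalisation.
L = A.T @ A
eigvals, V = np.linalg.eigh(L)
keep = eigvals > 1e-8 * eigvals.max()         # drop near-zero eigenvalues
U = A @ V[:, keep]
U /= np.linalg.norm(U, axis=0)                # normalise each eigenface

# Project the training faces and a probe face onto the eigenfaces (Omega).
train_weights = U.T @ A
probe = faces[:, [3]] + 0.01 * rng.random((N, 1))   # noisy copy of face 3
probe_weights = U.T @ (probe - mean_face)

# Classify by minimum Euclidean distance between weight vectors.
dists = np.linalg.norm(train_weights - probe_weights, axis=0)
best = int(np.argmin(dists))
print("closest training face:", best)
```

With a small perturbation the probe projects closest to the training face it was derived from, which is exactly the minimum-distance rule of step 2 above.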
5.4 Tracking
Face tracking is done using the Viola-Jones algorithm. The motors are driven through an Arduino UNO board, whose ATmega328 microcontroller runs the DC motors in the direction of the face. The flow diagram for tracking is given below.
Fig 5.4: flow diagram for tracking process
The live image coming from the camera has a resolution of 320×240. The script file vj_track_face.m uses the Viola-Jones algorithm to return the coordinates (x and y) of the starting corner of the bounding box that surrounds the face.
These x and y coordinates are passed to the motor_motion.m file. This function is responsible for moving the motors in the direction of the face.
We have used two DC motors: the lower one for left and right motion and the upper/front one for up and down motion. The motor action is determined by the coordinates as follows:
1. If x is between 120 and 200 and y is between 100 and 160, there is no movement.
2. If x is less than 120, the camera moves left.
3. If x is greater than 200, the camera moves right.
4. If y is greater than 160, the camera moves up.
5. If y is less than 100, the camera moves down.
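The five rules above can be sketched as a small function (a Python illustration; `motor_commands` is a hypothetical name — in the real project the equivalent logic lives in motor_motion.m and drives the motors through the Arduino):

```python
# Map a bounding-box corner (x, y) in a 320x240 frame to motor commands.
# The dead zone x in [120, 200], y in [100, 160] means the face is roughly
# centred, so neither motor moves.
def motor_commands(x, y):
    if x < 120:
        pan = "left"
    elif x > 200:
        pan = "right"
    else:
        pan = "stop"
    if y < 100:
        tilt = "down"
    elif y > 160:
        tilt = "up"
    else:
        tilt = "stop"
    return pan, tilt

print(motor_commands(160, 130))   # ('stop', 'stop')  face centred
print(motor_commands(90, 170))    # ('left', 'up')    face left and high
```

The dead zone prevents the motors from jittering when the face is already near the centre of the frame.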
Viola-Jones algorithm
The Viola-Jones algorithm is given as follows:
– In the Viola-Jones algorithm, detection is done by feature extraction and feature evaluation. Rectangular features are used; with a new image representation, their calculation is very fast.
Fig 5.5: Rectangular features
Fig 5.6: Rectangular feature matching with face
– They are easy to calculate.
– The white areas are subtracted from the black ones.
– A special representation of the sample, called the integral image, makes feature extraction faster.
– Features are extracted from sub-windows of a sample image.
– The base size for a sub-window is 24 by 24 pixels.
– Each of the four feature types is scaled and shifted across all possible combinations.
– In a 24×24 pixel sub-window there are ~160,000 possible features to be calculated.
– A real face may result in multiple nearby detections.
– Detected sub-windows are post-processed to combine overlapping detections into a single detection.
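The integral-image idea above can be sketched as follows (a Python/NumPy illustration; `integral_image` and `rect_sum` are hypothetical helper names). Any rectangle sum, and hence any rectangular feature, costs at most four array lookups:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of the image over rows r0..r1 and columns c0..c1 (inclusive),
    using at most four lookups in the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16).reshape(4, 4)     # toy 4x4 "image"
ii = integral_image(img)

# A two-rectangle feature: white strip minus black strip.
white = rect_sum(ii, 0, 0, 3, 1)      # columns 0-1
black = rect_sum(ii, 0, 2, 3, 3)      # columns 2-3
print(white, black, white - black)    # 52 68 -16
```

Because each feature value needs only a handful of lookups regardless of rectangle size, evaluating the ~160,000 candidate features per sub-window becomes practical.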
Fig 5.7: A Cascade of Classifiers
Fig 5.8: Notice detection at multiple scales
PROCESS FLOW DIAGRAM
Fig 5.9: Process flow diagram
RESULT
The results of our project are summarized as follows:
1. When the code is compiled, the set of input images is displayed.
2. The mean image for the respective set of images is then computed and shown.
3. The eigenfaces for the given set of images, with respect to the evaluated mean image, are then displayed.
Real-time tracking
Fig. Tracking one person in real time (seven-segment display showing 1)
PROBLEMS FACED
There were a number of challenges and problems faced by us; some of them were:
1. In generating the database, renaming the images and saving them in an orderly way, so that they could be easily accessed during processing, was the main problem. We therefore used numbers for naming faces: 1.jpg, 2.jpg, and so on.
2. Coding the eigenface algorithm with live images was complex.
3. Choosing a tolerance (threshold) for real-time recognition.
4. Assembling the motors on the frame, as the motors did not fix onto the frame properly.
5. We first tried to move the frame using stepper motors, but the code we wrote did not work properly, and tracking with the stepper covers only 180 degrees. DC motors work fine with the code and can also track through 360 degrees.
6. Tuning pauses and delays so that the motors work smoothly.
APPLICATIONS OF THIS PROJECT
This project can have many applications. It can be used:
1. For accessing a secure area by face.
Fig 8.1: A person entering an organization; the camera detects the face and marks attendance.
2. For attendance registration.
Fig 8.2: Attendance is marked with a count.
3. For anti-terror activity.
Fig 8.3: A camera in the station detects a most-wanted criminal and an alarm is raised.
Fig 8.4: A CCTV camera tracks the position of the criminal.
Fig 8.5: After all the efforts, security guards finally capture the criminal.
4. For automatic videography
5. For counting the number of persons in a room
Fig 8.6: Counting the no. of persons
APPENDIX
Program code
%% face_choice.m- main script file
imaqreset;
clear all;
close all;
clc;
i=1;
global M; %input no. of faces
global face_id
global times %no of faces for one person
times=5; %no. of pics capture for an individual
N=4; %default no. of faces
config_arduino(); %function for configuring Arduino board
vid = videoinput('winvideo',2,'YUY2_320x240');
while (1==1)
choice=menu('Face Recognition',...
'Generate database',...
'Recognize face from drive',...
'Recognize face from camera',...
'Track face from camera',...
'Track the recognized face from camera',...
'Exit');
if (choice==1)
choice1=menu('Face Recognition',...
'Enter no. of faces',...
'Exit');
if (choice1==1)
M=input('Enter : ');
preview(vid);
while(i<((M*times)+1))
choice2=menu('Face Recognition',...
'Capture');
if(choice2==1)
g=getsnapshot(vid);
%saving rgb image in specified folder
rgbImage=ycbcr2rgb(g);
str=strcat(int2str(i),'.jpg');
fullImageFileName = fullfile('E:\New Folder',str);
imwrite(rgbImage,fullImageFileName);
%saving grayscale image in current directory
grayImage=rgb2gray(rgbImage);
Dir_name=fullfile(pwd,str);
imwrite(grayImage,Dir_name);
i=(i+1);
end
end
closepreview(vid);
end
if (choice1==2)
clear choice1;
end
end
if(choice==2)
if(isempty(M)==1)
default=N*times;
face_id=recognize_face_drive((default));
else
faces=M*times;
face_id=recognize_face_drive(faces);
end
end
if(choice==3)
if(isempty(M)==1)
default=N*times;
face_id=recognize_face_cam(default);
else
faces=M*times;
face_id=recognize_face_cam(faces);
end
end
if (choice==4)
vj_track_face();
end
if (choice==5)
vj_faceD_live();
end
if (choice==6)
close all;
return;
end
end
stop(vid);
%% config_arduino.m– for configuring Arduino board
function config_arduino() %function for configuring Arduino board
global b %global arduino class object
b=arduino('COM29');
b.pinMode(4,'OUTPUT'); % pin 4&5 for right & left and pin 6&7 for up & down
b.pinMode(5,'OUTPUT');
b.pinMode(6,'OUTPUT');
b.pinMode(7,'OUTPUT');
b.pinMode(2,'OUTPUT');
b.pinMode(3,'OUTPUT');
b.pinMode(8,'OUTPUT');
b.pinMode(9,'OUTPUT');
b.pinMode(10,'OUTPUT');
b.pinMode(11,'OUTPUT');
b.pinMode(12,'OUTPUT');
b.pinMode(13,'OUTPUT'); % pin 2,3,8,9,10,11,12,13 for seven segment display 8 leds
b.pinMode(14,'OUTPUT');
b.pinMode(15,'OUTPUT');
b.pinMode(16,'OUTPUT');
b.pinMode(17,'OUTPUT'); % pin 14,15,16,17 for multiplexing 4 seven segment display
end
%% recognize_face_drive.m – function for matching a face from the hard drive using the Eigenface algorithm
% Thanks to Santiago Serrano
function Min_id = recognize_face_drive(M)
close all
clc
% number of images on your training set.
%Chosen std and mean.
%It can be any number that it is close to the std and mean of most of the images.
um=100;
ustd=80;
person_no=0;
times=5;
%read and show images(bmp);
S=[]; %img matrix
figure(1);
for i=1:M
str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image
eval('img=imread(str);');
%eval('img=rgb2gray(image);');
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
if i==3
title('Training set','fontsize',18)
end
drawnow;
[irow icol]=size(img); % get the number of rows (N1) and columns (N2)
temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 matrix
S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence
%this is our S
end
%Here we change the mean and std of all images. We normalize all images.
%This is done to reduce the error due to lighting conditions.
for i=1:size(S,2)
temp=double(S(:,i));
m=mean(temp);
st=std(temp);
S(:,i)=(temp-m)*ustd/st+um;
end
%show normalized images
figure(2);
for i=1:M
str=strcat(int2str(i),'.jpg');
img=reshape(S(:,i),icol,irow);
img=img';
eval('imwrite(img,str)');
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
drawnow;
if i==3
title('Normalized Training Set','fontsize',18)
end
end
%mean image;
m=mean(S,2); %obtains the mean of each row instead of each column
tmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255
img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matrix
img=img'; %creates a N1xN2 matrix by transposing the image.
figure(3);
imshow(img);
title('Mean Image','fontsize',18)
% Change image for manipulation
dbx=[]; % A matrix
for i=1:M
temp=double(S(:,i));
dbx=[dbx temp];
end
%Covariance matrix C=A'A, L=AA'
A=dbx';
L=A*A';
% vv are the eigenvector for L
% dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';
[vv dd]=eig(L);
% Sort and eliminate those whose eigenvalue is zero
v=[];
d=[];
for i=1:size(vv,2)
if(dd(i,i)>1e-4)
v=[v vv(:,i)];
d=[d dd(i,i)];
end
end
%sort, will return an ascending sequence
[B index]=sort(d);
ind=zeros(size(index));
dtemp=zeros(size(index));
vtemp=zeros(size(v));
len=length(index);
for i=1:len
dtemp(i)=B(len+1-i);
ind(i)=len+1-index(i);
vtemp(:,ind(i))=v(:,i);
end
d=dtemp;
v=vtemp;
%Normalization of eigenvectors
for i=1:size(v,2) %access each column
kk=v(:,i);
temp=sqrt(sum(kk.^2));
v(:,i)=v(:,i)./temp;
end
%Eigenvectors of C matrix
u=[];
for i=1:size(v,2)
temp=sqrt(d(i));
u=[u (dbx*v(:,i))./temp];
end
%Normalization of eigenvectors
for i=1:size(u,2)
kk=u(:,i);
temp=sqrt(sum(kk.^2));
u(:,i)=u(:,i)./temp;
end
% show eigenfaces;
figure(4);
for i=1:size(u,2)
img=reshape(u(:,i),icol,irow);
img=img';
img=histeq(img,255);
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
drawnow;
if i==3
title('Eigenfaces','fontsize',18)
end
end
% Find the weight of each face in the training set.
omega = [];
for h=1:size(dbx,2)
WW=[];
for i=1:size(u,2)
t = u(:,i)';
WeightOfImage = dot(t,dbx(:,h)');
WW = [WW; WeightOfImage];
end
omega = [omega WW];
end
% Acquire new image
% Note: the input image must have a bmp or jpg extension.
% It should have the same size as the ones in your training set.
% It should be placed on your desktop
InputImage = input('Please enter the name of the image and its extension \n','s');
InputImage = imread(strcat('E:\',InputImage));
figure(5)
subplot(1,2,1)
imshow(InputImage); colormap('gray');title('Input image','fontsize',18)
input_img=rgb2gray(InputImage);
%imshow(input_img);
InImage=reshape(double(input_img)',irow*icol,1);
temp=InImage;
me=mean(temp);
st=std(temp);
temp=(temp-me)*ustd/st+um;
NormImage = temp;
Difference = temp-m;
p = [];
aa=size(u,2);
for i = 1:aa
pare = dot(NormImage,u(:,i));
p = [p; pare];
end
ReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u is the eigenvector
ReshapedImage = reshape(ReshapedImage,icol,irow);
ReshapedImage = ReshapedImage';
%show the reconstructed image.
subplot(1,2,2)
imagesc(ReshapedImage); colormap('gray');
title('Reconstructed image','fontsize',18)
InImWeight = [];
for i=1:size(u,2)
t = u(:,i)';
WeightOfInputImage = dot(t,Difference');
InImWeight = [InImWeight; WeightOfInputImage];
end
ll = 1:M;
figure(68)
subplot(1,2,1)
stem(ll,InImWeight)
title('Weight of Input Face','fontsize',14)
% Find Euclidean distance
e=[];
for i=1:size(omega,2)
q = omega(:,i);
DiffWeight = InImWeight-q;
mag = norm(DiffWeight);
e = [e mag];
end
kk = 1:size(e,2);
subplot(1,2,2)
stem(kk,e)
title('Eucledian distance of input image','fontsize',14)
MaximumValue=max(e)
MinimumValue=min(e)
Min_id=find(e==min(e));
person_no=Min_id/times;
p1=(round(person_no));
if(person_no<p1)
p1=(p1-1);
display('Detected face number')
display(p1)
write_digit(14,15,16,17,p1);
end
if(person_no>p1)
p1=(p1+1);
display('Detected face number')
display(p1)
write_digit(14,15,16,17,p1);
end
if(person_no==p1)
display('Detected face number')
display(p1)
write_digit(14,15,16,17,p1);
end
end
%% recognize_face_cam.m – function for matching a face from the camera using the Eigenface algorithm
% Thanks to Santiago Serrano
function Min_id = recognize_face_cam(M)
imaqreset;
close all
clc
% number of images on your training set.
vid = videoinput('winvideo',1,'YUY2_320x240');
%Chosen std and mean.
%It can be any number that it is close to the std and mean of most of the images.
um=100;
ustd=80;
person_no=0;
times=5;
%read and show images(bmp);
S=[]; %img matrix
figure(1);
for i=1:M
str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image
eval('img=imread(str);');
%eval('img=rgb2gray(image);');
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
if i==3
title('Training set','fontsize',18)
end
drawnow;
[irow icol]=size(img); % get the number of rows (N1) and columns (N2)
temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 matrix
S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence
%this is our S
end
%Here we change the mean and std of all images. We normalize all images.
%This is done to reduce the error due to lighting conditions.
for i=1:size(S,2)
temp=double(S(:,i));
m=mean(temp);
st=std(temp);
S(:,i)=(temp-m)*ustd/st+um;
end
%show normalized images
figure(2);
for i=1:M
str=strcat(int2str(i),'.jpg');
img=reshape(S(:,i),icol,irow);
img=img';
eval('imwrite(img,str)');
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
drawnow;
if i==3
title('Normalized Training Set','fontsize',18)
end
end
%mean image;
m=mean(S,2); %obtains the mean of each row instead of each column
tmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255
img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matrix
img=img'; %creates a N1xN2 matrix by transposing the image.
figure(3);
imshow(img);
title('Mean Image','fontsize',18)
% Change image for manipulation
dbx=[]; % A matrix
for i=1:M
temp=double(S(:,i));
dbx=[dbx temp];
end
%Covariance matrix C=A'A, L=AA'
A=dbx';
L=A*A';
% vv are the eigenvector for L
% dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';
[vv dd]=eig(L);
% Sort and eliminate those whose eigenvalue is zero
v=[];
d=[];
for i=1:size(vv,2)
if(dd(i,i)>1e-4)
v=[v vv(:,i)];
d=[d dd(i,i)];
end
end
%sort, will return an ascending sequence
[B index]=sort(d);
ind=zeros(size(index));
dtemp=zeros(size(index));
vtemp=zeros(size(v));
len=length(index);
for i=1:len
dtemp(i)=B(len+1-i);
ind(i)=len+1-index(i);
vtemp(:,ind(i))=v(:,i);
end
d=dtemp;
v=vtemp;
%Normalization of eigenvectors
for i=1:size(v,2) %access each column
kk=v(:,i);
temp=sqrt(sum(kk.^2));
v(:,i)=v(:,i)./temp;
end
%Eigenvectors of C matrix
u=[];
for i=1:size(v,2)
temp=sqrt(d(i));
u=[u (dbx*v(:,i))./temp];
end
%Normalization of eigenvectors
for i=1:size(u,2)
kk=u(:,i);
temp=sqrt(sum(kk.^2));
u(:,i)=u(:,i)./temp;
end
% show eigenfaces;
figure(4);
for i=1:size(u,2)
img=reshape(u(:,i),icol,irow);
img=img';
img=histeq(img,255);
subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
imshow(img)
drawnow;
if i==3
title('Eigenfaces','fontsize',18)
end
end
% Find the weight of each face in the training set.
omega = [];
for h=1:size(dbx,2)
WW=[];
for i=1:size(u,2)
t = u(:,i)';
WeightOfImage = dot(t,dbx(:,h)');
WW = [WW; WeightOfImage];
end
omega = [omega WW];
end
% Acquire new image from camera
preview(vid);
choice=menu('Push CAM button for taking pic',...
'CAM');
if(choice==1)
g=getsnapshot(vid);
end
rgbImage=ycbcr2rgb(g);
imwrite(rgbImage,'camshot.jpg');
closepreview(vid);
InputImage = imread('camshot.jpg');
figure(5)
subplot(1,2,1)
imshow(InputImage); colormap('gray');title('Input image','fontsize',18)
input_img=rgb2gray(InputImage);
%imshow(input_img);
InImage=reshape(double(input_img)',irow*icol,1);
temp=InImage;
me=mean(temp);
st=std(temp);
temp=(temp-me)*ustd/st+um;
NormImage = temp;
Difference = temp-m;
p = [];
aa=size(u,2);
for i = 1:aa
pare = dot(NormImage,u(:,i));
p = [p; pare];
end
ReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u is the eigenvector
ReshapedImage = reshape(ReshapedImage,icol,irow);
ReshapedImage = ReshapedImage';
%show the reconstructed image.
subplot(1,2,2)
imagesc(ReshapedImage); colormap('gray');
title('Reconstructed image','fontsize',18)
InImWeight = [];
for i=1:size(u,2)
t = u(:,i)';
WeightOfInputImage = dot(t,Difference');
InImWeight = [InImWeight; WeightOfInputImage];
end
ll = 1:M;
figure(68)
subplot(1,2,1)
stem(ll,InImWeight)
title('Weight of Input Face','fontsize',14)
% Find Euclidean distance
e=[];
for i=1:size(omega,2)
q = omega(:,i);
DiffWeight = InImWeight-q;
mag = norm(DiffWeight);
e = [e mag];
end
kk = 1:size(e,2);
subplot(1,2,2)
stem(kk,e)
title('Euclidean distance of input image','fontsize',14)
MaximumValue=max(e)
MinimumValue=min(e)
Min_id=find(e==MinimumValue,1); % take the first match in case of ties
person_no=Min_id/times;
p1=ceil(person_no); % matched image Min_id belongs to person ceil(Min_id/times)
display('Detected face number')
display(p1);
write_digit(14,15,16,17,p1);
stop(vid);
end
%% vj_track_face.m – function for tracking faces
% created and coded by Abhishek Gupta (abhishekgpt10@gmail.com)
function vj_track_face()
imaqreset;
close all;
clc
no_face=0;
%Detect objects using Viola-Jones Algorithm
vid = videoinput('winvideo',1,'YUY2_320x240');
set(vid,'ReturnedColorSpace','rgb');
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 1;
vid.FramesPerTrigger=20;
figure; % Ensure smooth display
set(gcf,'doublebuffer','on');
start(vid);
while(vid.FramesAcquired<=1000);
FDetect = vision.CascadeObjectDetector; %To detect Face
I = getsnapshot(vid); %Read the input image
BB = step(FDetect,I); %Returns Bounding Box values based on number of objects
hold on
figure(1),imshow(I);
title('Face Detection');
for i = 1:size(BB,1)
no_face=size(BB,1);
write_digit(14,15,16,17,no_face);
rectangle('Position',BB(i,:),'LineWidth',2,'LineStyle','-','EdgeColor','y');
display(BB(1));
display(BB(2));
motor_motion(BB(1),BB(2));
hold off;
flushdata(vid);
end
end
stop(vid);
end
%% vj_faceD_live.m – function for tracking a recognized face
function vj_faceD_live(std,mean)
imaqreset;
close all
clc
detect=0;
std_2=0;
mean_2=0;
tlrnce=7;
while (1==1)
choice=menu('Face Recognition',...
'Real time recognition',...
'Track last recognised face',...
'Exit');
if (choice==1)
[std,mean]= face_stdmean();
%Detect objects using Viola-Jones Algorithm
vid = videoinput('winvideo',1,'YUY2_320x240');
set(vid,'ReturnedColorSpace','rgb');
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 1;
vid.FramesPerTrigger=20;
figure; % Ensure smooth display
set(gcf,'doublebuffer','on');
start(vid);
while(vid.FramesAcquired<=1500);
FDetect = vision.CascadeObjectDetector; %To detect Face
I = getsnapshot(vid); %Read the input image
BB = step(FDetect,I); %Returns Bounding Box values based on number of objects
hold on
if(size(BB,1) == 1)
I2=imcrop(I,BB);
gray_face=rgb2gray(I2);
std_2 = std2(gray_face);
mean_2 = mean2(gray_face);
%figure(1),imshow(gray_face);
end
figure(1),imshow(I);
title('Face Recognition');
display(std);
display(mean);
display(std_2);
display(mean_2);
for i = 1:size(BB,1)
if((((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce))))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce))))
rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g');
display('DETECTED');
detect=(detect+1)
if(detect==2)
display('tracking....');
detect=0;
motor_motion(BB(1),BB(2));%for motion of motors in direction of faces
end
else
rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
display('NOT DETECTED');
end
hold off;
flushdata(vid);
end
end
stop(vid);
end
if(choice==2)
[std,mean]= face_stdmean_recgnz();
%Detect objects using Viola-Jones Algorithm
vid = videoinput('winvideo',1,'YUY2_320x240');
set(vid,'ReturnedColorSpace','rgb');
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 1;
vid.FramesPerTrigger=20;
figure; % Ensure smooth display
set(gcf,'doublebuffer','on');
start(vid);
while(vid.FramesAcquired<=600);
FDetect = vision.CascadeObjectDetector; %To detect Face
I = getsnapshot(vid); %Read the input image
BB = step(FDetect,I); %Returns Bounding Box values based on number of objects
hold on
if(size(BB,1) == 1)
I2=imcrop(I,BB);
gray_face=rgb2gray(I2);
std_2 = std2(gray_face);
mean_2 = mean2(gray_face);
%figure(1),imshow(gray_face);
end
figure(1),imshow(I);
title('Face Recognition');
display(std);
display(mean);
display(std_2);
display(mean_2);
for i = 1:size(BB,1)
if((((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce))))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce))))
rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g');
display('DETECTED');
detect=(detect+1)
if(detect==2)
display('tracking....');
detect=0;
motor_motion(BB(1),BB(2));%for motion of motors in direction of faces
end
else
rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
display('NOT DETECTED');
end
hold off;
flushdata(vid);
end
end
stop(vid);
end
if(choice==3)
return
end
end
%% face_stdmean.m – function returns standard deviation and mean for real-time recognition
function [std_f,mean_f] = face_stdmean()
imaqreset;
close all;
clc;
i=1;
global std
global mean
global times
std=0;mean=0;
vid = videoinput('winvideo',1,'YUY2_320x240');
while (1==1)
choice=menu('Face Recognition',...
'Taking photos for recognition',...
'Exit');
if (choice==1)
FDetect = vision.CascadeObjectDetector;
preview(vid);
while(i<(times+1))
choice2=menu('Face Recognition',...
'Capture');
if(choice2==1)
g=getsnapshot(vid);
%saving rgb image in specified folder
rgbImage=ycbcr2rgb(g);
str=strcat(int2str(i),'f.jpg');
fullImageFileName = fullfile('E:\New Folder',str);
imwrite(rgbImage,fullImageFileName);
BB = step(FDetect,rgbImage);
I2=imcrop(rgbImage,BB);
%saving grayscale image in current directory
grayImage=rgb2gray(I2);
Dir_name=fullfile(pwd,str);
imwrite(grayImage,Dir_name);
std = (std+std2(grayImage));
mean =(mean+mean2(grayImage));
i=(i+1);
end
std_f=(std/times)
mean_f=(mean/times)
end
end
closepreview(vid);
if (choice==2)
std_f=(std/times)
mean_f=(mean/times)
return;
end
end
end
%% face_stdmean_recgnz.m – function returns standard deviation and mean of the last recognized face
function [std_f,mean_f] = face_stdmean_recgnz()
close all;
clc;
global face_id
global std
global mean
global times
std=0;mean=0;
%i=face_id;
i=input('Enter face id for live recognition: ');
FDetect = vision.CascadeObjectDetector;
j=(i*times);
k=(j-times);
while(j>=(k+1))
str=strcat(int2str(j),'.jpg');
fullImageFileName = fullfile('E:\New Folder',str);
I=imread(fullImageFileName);
BB = step(FDetect,I);
I2=imcrop(I,BB);
grayImage=rgb2gray(I2);
str2=strcat(int2str(j),'f.jpg');
Dir_name=fullfile(pwd,str2);
imwrite(grayImage,Dir_name);
std = (std+std2(grayImage));
mean =(mean+mean2(grayImage));
j=(j-1);
end
std_f=(std/times)
mean_f=(mean/times)
end
%% motor_motion.m – function for movement of motors
function motor_motion(x,y) %for motion of motors in direction of faces
global b %global arduino class object
if ((x<200 && x>120)&&(y<160 && y>100))
disp('Stop');
b.digitalWrite(4,0);
b.digitalWrite(5,0);
b.digitalWrite(6,0);
b.digitalWrite(7,0);
end
if (x<120)
disp('right');
b.digitalWrite(4,0);
b.digitalWrite(5,1);
pause(0.05);
b.digitalWrite(4,0);
b.digitalWrite(5,0);
pause(0.1);
end
if (x>200)
disp('left');
b.digitalWrite(4,1);
b.digitalWrite(5,0);
pause(0.05);
b.digitalWrite(4,0);
b.digitalWrite(5,0);
pause(0.1);
end
if (y>160)
disp('up');
b.digitalWrite(6,1);
b.digitalWrite(7,0);
pause(0.05);
b.digitalWrite(6,0);
b.digitalWrite(7,0);
pause(0.1);
end
if (y<100)
disp('down');
b.digitalWrite(6,0);
b.digitalWrite(7,1);
pause(0.05);
b.digitalWrite(6,0);
b.digitalWrite(7,0);
pause(0.1);
end
end
%% write_digit.m – function for switching digits on the seven-segment display
function write_digit(w,x,y,z,a)
global b
switch a
case 1
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
one();
pause(.001);
case 2
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
two();
pause(.001);
case 3
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
three();
pause(.001);
case 4
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
four();
pause(.001);
case 5
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
five();
pause(.001);
case 6
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
six();
pause(.001);
case 7
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
seven();
pause(.001);
case 8
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
eight();
pause(.001);
case 9
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
nine();
pause(.001);
case 0
b.digitalWrite(w,1);
b.digitalWrite(x,0);
b.digitalWrite(y,0);
b.digitalWrite(z,0);
zero();
pause(.001);
end
end
%% zero.m - function to write zero
function zero()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
%% one.m - function to write one
function one()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end
%% two.m - function to write two
function two()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,1);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,1);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
%% three.m - function to write three
function three()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
%% four.m - function to write four
function four()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end
%% five.m - function to write five
function five()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,1);
b.digitalWrite(13,0);
end
%% six.m - function to write six
function six()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end
%% seven.m - function to write seven
function seven()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
%% eight.m - function to write eight
function eight()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
%% nine.m - function to write nine
function nine()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
Arduino Uno board Schematics
Fig: Arduino Uno Schematics
Driver for Arduino UNO
The driver used for the Arduino UNO is the Prolific PL2303. It is available from the site below:
http://www.prolific.com
MATLAB interfacing files for Arduino
MATLAB provides interfacing files for almost all Arduino boards. These files should be
placed in the working directory. The folder 'ardiosrv' contains a file named
'ardiosrv.pde', which must be burned onto the Arduino UNO board. The files are available
at the link below:
http://www.mathworks.com/academia/arduino-software/arduino-matlab.html
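Once the 'ardiosrv.pde' sketch is running on the board, the listings above assume a global MATLAB object 'b' wrapping the serial link. A minimal sketch of how that object could be created with the legacy ArduinoIO interfacing files is shown below; the COM port name is an assumption and will differ from machine to machine.

```
% Hedged example: create the global Arduino object 'b' used by
% motor_motion.m and write_digit.m. 'COM3' is a placeholder port name.
global b
b = arduino('COM3');       % connect to the board running ardiosrv.pde
b.pinMode(13, 'output');   % configure digital pin 13 as an output
b.digitalWrite(13, 1);     % drive the pin high, as the listings do
b.digitalWrite(13, 0);     % and low again
```

This setup must run once before any of the functions above are called; otherwise 'b' is undefined and every digitalWrite call fails.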
REFERENCES
The useful topics for this project were taken from the following references:
[1] S. Serrano, Eigenfaces tutorial, Drexel University.
[2] P. Smyth, Face Detection using the Viola-Jones Method, Department of Computer Science, University of California, Irvine.
[3] B. Abboud, F. Davoine, and M. Dang, "Facial expression recognition and synthesis based on an appearance model," Signal Processing: Image Communication, 2004.
[4] J. Ahlberg, "Candide-3 – an updated parameterised face," technical report, Linköping University, 2001.
[5] J. Ahlberg and R. Forchheimer, "Face tracking for model-based coding and face animation," International Journal of Imaging Systems and Technology, 2003.
[6] A. Azarbayejani and A. Pentland, "Recursive estimation of motion, structure, and focal length," IEEE PAMI, 1995.
[7] V. Belle, T. Deselaers, and S. Schiffer, "Randomized trees for real-time one-step face detection and recognition," 2008.
[8] M. J. Black and Y. Yacoob, "Recognizing facial expressions in image sequences using local parameterized models of image motion," IJCV, 1997.
[9] S. Basu, I. Essa, and A. Pentland, "Motion regularization for model-based head tracking," in CVPR, 1996.
[10] M. La Cascia, S. Sclaroff, and V. Athitsos, "Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3D models," IEEE PAMI, 2000.

MAJOR PROJECT

  • 1.
    - 1 - APROJECT REPORT On FACE RECOGNITION AND TRACKING SYSTEM Bachelor of Technology In Electronics & Communication Engineering Submitted By ABHISHEK GUPTA (1002831004) A. SANDEEP (1002831001) ABHISHEK SOAM (1002831007) ASHISH KUMAR (1002831029) Under the Guidance of Mr. Prashant Gupta Assistant Professor DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGG. IDEAL INSTITUTE OF TECHNOLOGY GHAZIABAD (INDIA)
  • 2.
    - 2 - ACKNOWLEDGEMENT Itake this opportunity to express my profound gratitude and deep regards to our Guide Mr. PRASHANT GUPTA, Assistant Professor Department of Electronics & Communication Engineering, Ideal Institute of Technology, Ghaziabad for his exemplary guidance, advice and constant encouragement throughout the course of this project. The blessing, help and guidance given by his time to time shall carry me a long way in the journey of life on which I am about to embark. I am also very thankful to Mr. NARBADA PRASAD GUPTA, H.O.D. ECE Department, Ideal Institute of Technology, Ghaziabad for approving this project as a final year major project. I want to thank my teammates A.SANDEEP, ABHISHEK GUPTA, ASHISH KUMAR and ABHISHEK SOAM for their valuable role in this project. A.SANDEEP and ABHISHEK SOAM did the motor movement programming and hardware building and synchronizing delays and pauses for perfection in motor movement programming. ASHISH KUMAR and ABHISHEK GUPTA helped in making the face recognition program and the report work. All the team members have a keen role in the research and development of this project. I am also thankful to my father and mother for motivating me and helping for the testing of this project. I want to acknowledge all my friends, who donated their faces for testing and training algorithm. A.SANDEEP (1002831001) ABHISHEK GUPTA (1002831004) ABHISHEK SOAM (1002831007) ASHISH KUMAR(1002831029)
  • 3.
    - 3 - ABSTRACT Imageprocessing is the future of the world, through this amazing tool we can control or operate almost everything in this world in terms of security, controlling the computer and lots more. In this image project, camera is used as sensor or we can say input device which takes input in the form of photos or video and that input is used to make algorithms. MATLAB is one of the most powerful software through which we can build any kind of project according to our requirement. MATLAB contains many toolboxes including image processing toolbox and image acquisition toolbox. These toolboxes contain various functions and libraries with which we can perform several task. It converts complex tasks into simple ones. This language converts the codes into C/C++ formats and generate HEX file for the microcontroller or processor. This is all about image processing. The project named, “Face recognition and tracking system”is basically build on the conceptof image processingby using the image processingand computer vision toolboxes . In this project we use two different cameras one for the purposeof generate the data base and other for the recognition and tracking purposewhich is movable ,when the face of any personmatches with the face from generated database the operation of detection and tracking get started.
  • 4.
    - 4 - TABLEOF CONTENTS 1. WHAT IS IMAGE PROCESSING?? .…………5 1.1 Introduction ...………. 5 1.2 Overview ………… 6 2. PROJECT INTRODUCTION …….…… 8 2.1 Introduction …………. 8 2.2 Objective ………… 8 2.3 Project overview ………….8 3. SOFTWARE AND HARDWARE USED …………. 9 3.1 Introduction ………….. 9 3.2 Hardware Requirements ………….. 9 3.3 Software Requirements …………..9 3.4 Introduction to hardware: Arduino ….…………9 3.5 Introduction to software: MATLAB & Arduino IDE ……...14 4. BLOCKDIAGRAM …….…….20 5. PRINCIPLE OF OPERATION ....................21 5.1 Eigen Face Algorithm …………….21 5.2 Viola Jones algorithm …………….23 6. RESULT ……………49 7. PPOBLEMS FACED ……….…….51 8. APPLICATIONS OF THIS PROJECT ....…………. 52 9. APPENDIX ……………. 10.REFERENCES ......………..55
  • 5.
    - 5 - Chapte WHATIS IMAGE PROCESSING?? 1.1 Introduction Image processing is a MATLAB software tool which is used for the following purposes:  Transforming digital information representing images  Improve pictorial information for human interpretation  Remove noise  Correct for motion, camera position, distortion  Enhance by changing contrast, color  Process pictorial information by machine.  Segmentation - dividing an image up into constituent parts  Representation - representing an image by some more abstract  Models Classification  Reduce the size of image information for efficient handling.  Compression with loss of digital information that minimizes loss of "perceptual" information. JPEG and GIF, MPEG, Multi-resolution representations.  Lens focuses an image on the retina (like a camera).  Pattern is affected by distribution of light receptors (rods and cones)  The (6-7 million) cones are in the center of the retina (fovea) and are sensitive to color - each connected to own neuron.  The (75-150 million) rods are distributed everywhere, connected in clusters to a neuron.  Unlike ordinary camera, the eye is flexible.  Range of intensity levels supported by the human visual system is 1010.  Uses brightness adaptation to set sensitivity. ColorVision The color-responsive chemicals in the cones are called cone pigments and are very similar to the chemicals in the rods. The retinal portion of the chemical is the same, however the scotopsin is replaced with photopsins. Therefore, the color-responsive pigments are made of retinal and photopsins. There are three kinds of color-sensitive pigments:  Red-sensitive pigment  Green-sensitive pigment  Blue-sensitive pigment Image processing involves changing the nature of an image in order to either
  • 6.
    - 6 - 1.2Overview What Is An Image?  An image is a 2D rectilinear array of pixels  Any image from a scanner, or from a digital camera, or in a computer, is a digital image.  A digital image is formed by the collections of different color pixel. Fig. 1.1 B&W image Fig. 1.2 Gray scale image Fig. 1.3 RGB image Types of image: There are three different types of image in MATLAB  Binary images or B&W images  Intensity images or Gray scale images  Indexed images or RGB images Binary Image: They are also called B&W images, containing ‘1’ for white and ‘0’ for Black.
  • 7.
    - 7 - Fig.1.4 B&W image with regionpixel value Intensity image: They are also called ‘Gray Scale images’, containing values in the range of 0 to 1. Fig. 1.5 Intensityimage withregionpixel value Indexed image: These are the color images and also represented as ‘RGB image’. Fig. 1.6 Color spectrum Fig. 1.7 Indexed image with region pixel value
  • 8.
    - 8 - PROJECTINTRODUCTION 2.1 Introduction The project named, “Face recognition and tracking system” is basically build on the concept of image processing by using image processing and computer vision tool box of MATLAB software. In this project we use two different cameras one for the purpose of generate the data base and other for the recognition purpose which is movable ,when the face of any person matches with the face from generated database the operation of detection and tracking get started. 2.2 Objective The prime objective of the project is to develop a standalone application and an interlinking hardware that can recognize live faces from a generated database of images. This application will contain features like recognition of faces from still image taken by camera or from hard drive, tracking of faces which will observed by movement of camera in direction of faces, real time recognition from database. 2.3. Projectoverview In our project we have to develop a standalone application and interlinking a hardware that can do following tasks: 1. Face detection from live image. 2. Face detection from drive. 3. Tracking any face. 4. Recognize and track a face from live image. 5. Recognize and track a face from drive. 6. Determine the population density.
  • 9.
    - 9 - SOFTWAREAND HARDWARE USED 3.1 Introduction For making such a project we need software as well as hardware because here we handle a embedded product with the help of our gesture. Hardware : It is the physical part of our project that relates with the real world . E.g. Arduino kit ,microcontroller etc. Software : It is the set of programs and instructions that tells hardware which task is to be performed. E.g. Set of instructions written on MATLAB IDE. 3.2 Hardware Requirements The following are the hardware components that are required during this project : A) 1 Personal computer. B) 1 Arduino kit (with ATmega328). C) 2 1.3MP cameras. D) 1 Motor driving shield (with L293D IC). E) 1 Seven segment display shield. F) 1 Program burner wire. G) 2 Line wires. H) 20 Single stand wires. I) 2 Batteries. J) 2 Battery holders. K) 2 DC Motors. L) 1 Holding frame. 3.3 Software Requirements
  • 10.
    - 10 - Softwarerequired to program our project hardware part microcontroller are given as follow: A) MATLAB R2012b software. B) Arduino IDE software. C) Arduino to MATLAB interfacing files. 3.4 Introduction to hardware: ARDUINO Arduino is a single-board microcontroller to make using electronics in multidisciplinary projects more accessible. The hardware consists of a simple open source hardware board designed around an 8-bit Atmel AVR microcontroller, or a 32-bit Atmel ARM. The software consists of a standard programming language compiler and a boot loader that executes on the microcontroller. Fig 3.1: Arduino board Arduino boards can be purchased pre-assembled or as do-it-yourself kits. Hardware design information is available for those who would like to assemble an Arduino by hand. It was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced. The Arduino board exposes most of the microcontroller's I/O pins for use by other circuits. The Diecimila, Duemilanove, and current Uno provide 14 digital I/O pins, six of which can produce pulse- width modulated signals, and six analog inputs. These pins are on the top of the board, via female 0.1- inch (2.5 mm) headers. Several plug-in application shields are also commercially available. The Arduino Nano, and Arduino-compatible Bare Bones Board and Boarduino boards may provide male header pins on the underside of the board to be plugged into solderless breadboards.
  • 11.
    - 11 - Thereare a great many Arduino-compatible and Arduino-derived boards. Some are functionally equivalent to an Arduino and may be used interchangeably. Many are the basic Arduino with the addition of commonplace output drivers, often for use in school-level education to simplify the construction of buggies and small robots. Others are electrically equivalent but change the form factor, sometimes permitting the continued use of Shields, sometimes not. Some variants use completely different processors, with varying levels of compatibility. OFFICIAL BOARDS The original Arduino hardware is manufactured by the Italian company Smart Projects. Some Arduino- branded boards have been designed by the American company SparkFun Electronics. Sixteen versions of the Arduino hardware have been commercially produced to date. Duemilanove (rev 2009b) Arduino UNO Arduino Leonardo Arduino Mega
  • 12.
    - 12 - ArduinoNano Arduino Due (ARM-based) LilyPad (rev 2007) SHIELDS Arduino and Arduino-compatible boards make use of shields, printed circuit expansion boards that plug into the normally supplied Arduino pin-headers. Shields can provide motor controls, GPS, ethernet, LCD display, or bread boarding (prototyping). A number of shields can also be made DIY. Fig 3.2: Arduino shields
  • 13.
    - 13 - 3.5Introduction to software : MATLAB & ARDUINO IDE An introduction to MATLAB MATLAB = Matrix Laboratory “MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++ and Fortran.” MATLAB is an interactive, interpreted language that is designed for fast numerical matrix calculations. The MATLAB Environment Fig. 3.3 MATLAB Environment
  • 14.
    - 14 - MATLABwindow components 1. Workspace > Displays all the defined variables 2. Command Window > To execute commands in the MATLAB environment 3. Command History > Displays record of the commands used 4. File Editor Window > Define your functions MATLAB Help Fig. 3.4: MATLAB Help MATLAB Help is an extremely powerful assistance to learning MATLAB Help not only contains the theoretical background, but also shows demos for implementation MATLAB Help can be opened by using the HELP pull-down menu The purpose of this tutorial is to gain familiarity with MATLAB’s Image Processing Toolbox. This tutorial does not contain all of the functions available in MATLAB. It is very useful to go to HelpMATLAB Help in the MATLAB window if you have any
  • 15.
    - 15 - questionsnot answered by this tutorial. Many of the examples in this tutorial are modified versions of MATLAB’s help examples. The help tool is especially useful in image processing applications, since there are numerous filter examples. Fig. 3.5: M-file for Loading Images Fig. 3.6: BitmapImage Fig. 3.7: Grayscale Image
  • 16.
    - 16 - BASICEXAMPLES EXAMPLE 1 How to build a matrix(or image)? r = 256;img = zeros(r, c); img(100:105, :) = 0.5; img(:, 100:105) = 1; figure; imshow(img); OUTPUT
  • 17.
    - 17 - Fig.3.8:Example 1 output EXAMPLE 2 PROGRAM: r = 256; c = 256; img = rand(r,c); img = round(img); figure; imshow(img); 3.9:Example 2 output
  • 18.
    - 18 - Anintroduction to Arduino IDE The Arduino integrated development environment (IDE) is a cross-platform application written in Java, and is derived from the IDE for the Processing programming language and the Wiring projects. It is designed to introduce programming to artists and other newcomers unfamiliar with software development. It includes a code editor with features such as syntax highlighting, brace matching, and automatic indentation, and is also capable of compiling and uploading programs to the board with a single click. A program or code written for Arduino is called a "sketch". Arduino programs are written in C or C++. The Arduino IDE comes with a software library called "Wiring" from the original Wiring project, which makes many common input/output operations much easier. Users only need define two functions to make a runnable cyclic executive program:  setup(): a function run once at the start of a program that can initialize settings  loop(): a function called repeatedly until the board powers off  #define LED_PIN 13   void setup () {  pinMode (LED_PIN, OUTPUT); // Enable pin 13 for digital output  }   void loop () {  digitalWrite (LED_PIN, HIGH); // Turn on the LED  delay (1000); // Wait one second (1000 milliseconds)  digitalWrite (LED_PIN, LOW); // Turn off the LED  delay (1000); // Wait one second  } It is a feature of most Arduino boards that they have an LED and load resistor connected between pin 13 and ground, a convenient feature for many simple tests.[9] The previous code would not be seen by a standard C++ compiler as a valid program, so when the user clicks the "Upload to I/O board" button in the IDE, a copy of the code is written to a temporary file with an extra include header at the top and a very simple main() function at the bottom, to make it a valid C++ program.
  • 19.
    - 19 - TheArduino IDE uses the GNU toolchain and AVR Libc to compile programs, and uses avrdude to upload programs to the board. BLOCK DIAGRAM The complete hardware arrangement is shown by the following block diagram:- Fig. 4.1 Block diagram Schemetic diagram
  • 20.
    - 20 - Fig4.2:Schematic of actual circuit Actual Hardware Design Fig. 4.3: Actual Hardware Design
  • 21.
PRINCIPLE OF OPERATION

5.1 Menu
We have created a standalone application for face detection and recognition. For this we built a graphical menu containing the available choices; clicking a choice with the mouse performs the desired action. The menu is built from simple if statements, with the desired function calls written inside each if branch. The figure below shows the menu we have made.
Fig 5.1: Menu
5.2 Database generation
The next step was to create a database. We created it by taking face photos through the camera, renaming them, and saving the face images in .jpeg format in a particular folder; the face to be recognized is then compared against the database faces. The number of faces saved in the database equals M * times, where M is the number of persons entered by the user and times is a repetition factor used to increase accuracy; for example, if times is 5, five face images are stored per person. The flow chart for database generation is given below.
Fig 5.2: Flow diagram for generating the database

5.3 Recognition
The recognition process is done by the Eigenface algorithm. The flow diagram for the recognition process is given below.
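The M * times naming scheme described above can be sketched in a few lines. The following is a hypothetical Python illustration (the project itself does this in MATLAB with strcat and imwrite); the function name is ours, not the report's:

```python
def database_filenames(M, times=5):
    """Sequential file names for M persons with `times` shots each;
    shot k of person p (both 1-based) becomes ((p-1)*times + k) + '.jpg'."""
    return [f"{(p - 1) * times + k}.jpg"
            for p in range(1, M + 1)
            for k in range(1, times + 1)]

names = database_filenames(2)   # 2 persons, 5 shots each -> 10 files
print(len(names), names[0], names[-1])   # 10 1.jpg 10.jpg
```

With this scheme, person p owns files (p-1)*times+1 through p*times, which is exactly the range face_stdmean_recgnz.m walks in the appendix.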
Fig 5.3: Flow diagram for the recognition process

Eigenface Algorithm
The Eigenface algorithm used for face recognition is described as follows:
1. The first step is to obtain a set S with M face images (in our example M = 25). Each image is transformed into a vector of size N (the total number of pixels) and placed into the set S = {Γ1, Γ2, ..., ΓM}.
2. After you have obtained your set, compute the mean image Ψ = (1/M) Σn Γn.
3. Then find the difference Φi = Γi − Ψ between each input image and the mean image.
4. Next we seek a set of M orthonormal vectors uk which best describes the distribution of the data. The k-th vector uk is chosen such that λk = (1/M) Σn (ukᵀ Φn)² is a maximum, subject to ulᵀ uk = δlk (1 if l = k, 0 otherwise). Note: uk and λk are the eigenvectors and eigenvalues of the covariance matrix C.
5. We obtain the covariance matrix C in the following manner: C = (1/M) Σn Φn Φnᵀ = A Aᵀ, where A = [Φ1 Φ2 ... ΦM].
6. Since C is N × N and far too large to diagonalize directly, we instead consider the much smaller M × M matrix L = Aᵀ A, with elements Lmn = Φmᵀ Φn, and compute its eigenvectors vl.
7. Once we have found the eigenvectors vl of L, the eigenvectors (eigenfaces) ul of C are obtained as ul = Σk vlk Φk, for l = 1, ..., M.
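To see why working with the small matrix L = AᵀA instead of the full covariance matrix matters, compare matrix sizes for this project's numbers (320x240 live images, M = 25 training faces). A quick back-of-envelope computation:

```python
N = 320 * 240          # pixels per image -> length of each face vector
M = 25                 # number of training images

cov_entries = N * N    # entries in the full covariance matrix C (N x N)
small_entries = M * M  # entries in the surrogate matrix L = A^T A (M x M)

print(cov_entries)     # 5898240000 entries -> infeasible to eigendecompose directly
print(small_entries)   # 625 entries -> trivial
```

The eigendecomposition therefore runs on a 25 x 25 matrix rather than a roughly 76800 x 76800 one, which is what makes the Eigenface method practical on ordinary hardware.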
These are the eigenfaces of our set of original images.

Recognition Procedure
1. A new face is transformed into its eigenface components. First we compare our input image with our mean image and multiply the difference with each eigenvector of the L matrix. Each resulting value represents a weight and is saved in a vector Ω.
2. We then determine which face class provides the best description of the input image. This is done by minimizing the Euclidean distance εk = ||Ω − Ωk||.
3. The input face is considered to belong to class k if εk is below an established threshold θε; the face image is then considered a known face. If the difference is above that threshold but below a second threshold, the image is classified as an unknown face. If the input image is above both thresholds, the image is determined NOT to be a face.
4. If the image is found to be an unknown face, you can decide whether or not to add it to your training set for future recognitions; you would then repeat steps 1 through 7 to incorporate the new face image.

5.4 Tracking
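The training, projection, and nearest-class steps above can be condensed into a short sketch. The following is an illustrative NumPy version (the project itself implements this in MATLAB in recognize_face_drive.m); the tiny 4-pixel "faces" and the function names are ours, for demonstration only:

```python
import numpy as np

def train_eigenfaces(faces):
    """faces: (M, N) array, one flattened face image per row."""
    mean = faces.mean(axis=0)              # mean image Psi
    A = (faces - mean).T                   # columns are differences Phi_i, shape (N, M)
    L = A.T @ A                            # small M x M matrix instead of N x N covariance
    eigvals, V = np.linalg.eigh(L)         # eigenvectors v_l of L
    keep = eigvals > 1e-6                  # drop near-zero eigenvalues, as the report's code does
    U = A @ V[:, keep]                     # eigenfaces u_l = A v_l
    U /= np.linalg.norm(U, axis=0)         # normalize each eigenface
    weights = U.T @ A                      # training weight vectors Omega_k, one column per face
    return mean, U, weights

def classify(img, mean, U, weights):
    """Index of the training face whose weight vector is nearest in Euclidean distance."""
    omega = U.T @ (img - mean)             # weight vector Omega of the input image
    dists = np.linalg.norm(weights - omega[:, None], axis=0)
    return int(np.argmin(dists))

faces = np.array([[10., 20., 30., 40.],
                  [12., 18., 33., 39.],
                  [90., 80., 10.,  5.]])
mean, U, W = train_eigenfaces(faces)
print(classify(np.array([11., 19., 31., 41.]), mean, U, W))  # nearest training face (0 or 1 here)
```

A real system would also apply the two thresholds from step 3 to the minimum distance before accepting the match, rather than always returning the nearest class.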
The face tracking is done using the Viola-Jones algorithm. The motors are driven through an Arduino UNO board, whose ATmega328 microcontroller runs the DC motors in the direction of the face. The flow diagram for tracking is given below.
Fig 5.4: Flow diagram for the tracking process

The live image coming from the camera has a resolution of 320*240. The script file vj_track_face.m returns the starting coordinates (x and y) of the bounding box that surrounds the face, obtained with the Viola-Jones algorithm. These x and y coordinates are passed to motor_motion.m, the function responsible for moving the motors in the direction of faces. We have used two DC motors: the lower one for left/right motion and the upper/front one for up/down motion. The motor action is determined by the coordinates as follows:
1. If x is between 120 and 200 and y is between 100 and 160, there is no movement.
2. If x is less than 120, there is left motion.
3. If x is greater than 200, there is right motion.
4. If y is greater than 160, there is up motion.
5. If y is less than 100, there is down motion.

Viola-Jones Algorithm
The Viola-Jones algorithm is given as follows:
– In the Viola-Jones algorithm, detection is done by feature extraction and feature evaluation. Rectangular features are used; with a new image representation, their calculation is very fast.
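The coordinate thresholds above amount to a simple dead-band controller: no movement while the bounding-box corner sits in the central 120-200 x 100-160 window, a correcting motion otherwise. An illustrative Python version (the project implements this in MATLAB in motor_motion.m; the function name and string commands here are ours):

```python
def motor_command(x, y):
    """Map the top-left corner (x, y) of the face bounding box in a
    320x240 frame to a (pan, tilt) command, per the thresholds in the text."""
    pan = "stop"
    tilt = "stop"
    if x < 120:
        pan = "left"     # face drifted past the left edge of the dead band
    elif x > 200:
        pan = "right"    # face drifted past the right edge
    if y < 100:
        tilt = "down"    # face drifted above the dead band (small y = top of frame)
    elif y > 160:
        tilt = "up"
    return pan, tilt

print(motor_command(160, 130))  # -> ('stop', 'stop'): face centred, no movement
```

The dead band prevents the motors from chattering when the face is already roughly centred; only excursions beyond the window trigger a pulse.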
Fig 5.5: Rectangular features
Fig 5.6: Rectangular features matched against a face
– They are easy to calculate: the pixel sum over the white areas is subtracted from the pixel sum over the black areas.
– A special representation of the sample, called the integral image, makes feature extraction faster.
– Features are extracted from sub-windows of a sample image. The base size for a sub-window is 24 by 24 pixels.
– Each of the four feature types is scaled and shifted across all possible combinations; in a 24 by 24 pixel sub-window there are ~160,000 possible features to be calculated.
– A real face may result in multiple nearby detections, so detected sub-windows are post-processed to combine overlapping detections into a single detection.
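The integral image mentioned above turns any rectangle sum into four array lookups, which is what makes evaluating ~160,000 features per sub-window feasible. A minimal illustrative sketch in Python (a toy 4x4 "image"; the helper names are ours):

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img over rows [0..r) and columns [0..c),
    with an extra zero row/column so lookups need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of the h x w rectangle with top-left corner (top, left),
    from exactly four integral-image lookups."""
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image"
ii = integral_image(img)
# two-rectangle Haar-like feature: left half minus right half of the top 2 rows
feature = rect_sum(ii, 0, 0, 2, 2) - rect_sum(ii, 0, 2, 2, 2)
print(feature)  # -> -8
```

Because each rectangle costs the same four lookups regardless of its size, scaling the features up for larger detection windows adds no extra per-feature cost.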
Fig 5.7: A cascade of classifiers
Fig 5.8: Detection at multiple scales

PROCESS FLOW DIAGRAM
Fig 5.9: Process flow diagram

RESULT
The result of our project is summarized as follows:
1. When the code is compiled, the set of input images is displayed.
2. The mean image for the given set of images is then computed and displayed.
3. The eigenfaces for the given set of images, with respect to the evaluated mean image, are displayed.
Fig: Tracking one person in real time (SSD showing 1)

PROBLEMS FACED
We faced a number of challenges and problems; some of them were:
1. In generating the database, the main problems were renaming the images and saving them in a way that lets them be accessed easily during processing. We therefore used numbers for naming faces: 1.jpg, 2.jpg, and so on.
2. Coding the Eigenface algorithm with live images was complex.
3. Choosing a tolerance for real-time recognition.
4. Assembling the motors on the frame, as the motors would not fix onto the frame properly.
5. We first tried to move the frame using stepper motors, but the code we wrote did not work properly, and tracking with a stepper covers only 180 degrees. DC motors work fine with the code and can also track through 360 degrees.
6. Tuning pauses and delays so that the motors work smoothly.

APPLICATIONS OF THIS PROJECT
This project can have many applications. It can be used:
1. For accessing a secure area by face.
Fig 8.1: A person entering an organization; the camera detects the face and marks attendance.
2. For attendance registration.
Fig 8.2: Attendance is marked with a count.
3. For anti-terror activity.
Fig 8.3: A camera in the station detects a most-wanted criminal; an alarm is raised.
Fig 8.4: CCTV camera tracks the position of the criminal.
Fig 8.5: After all the efforts, security guards finally capture the criminal.
4. For automatic videography.
5. For counting the number of persons in a room.
Fig 8.6: Counting the number of persons
APPENDIX

Program Code

%% face_choice.m - main script file
imaqreset;
clear all;
close all;
clc;
i=1;
global M; %input no. of faces
global face_id
global times %no. of faces for one person
times=5; %no. of pics captured per individual
N=4; %default no. of faces
config_arduino(); %function for configuring the Arduino board
vid = videoinput('winvideo',2,'YUY2_320x240');
while (1==1)
    choice=menu('Face Recognition',...
        'Generate database',...
        'Recognize face from drive',...
        'Recognize face from camera',...
        'Track face from camera',...
        'Track the recognized face from camera',...
        'Exit');
    if (choice==1)
        choice1=menu('Face Recognition',...
            'Enter no. of faces',...
            'Exit');
        if (choice1==1)
            M=input('Enter : ');
            preview(vid);
            while(i<((M*times)+1))
                choice2=menu('Face Recognition',...
                    'Capture');
                if(choice2==1)
                    g=getsnapshot(vid);
                    %saving rgb image in specified folder
                    rgbImage=ycbcr2rgb(g);
                    str=strcat(int2str(i),'.jpg');
                    fullImageFileName = fullfile('E:New Folder',str);
                    imwrite(rgbImage,fullImageFileName);
                    %saving grayscale image in current directory
                    grayImage=rgb2gray(rgbImage);
                    Dir_name=fullfile(pwd,str);
                    imwrite(grayImage,Dir_name);
                    i=(i+1);
                end
            end
            closepreview(vid);
        end
        if (choice1==2)
            clear choice1;
        end
    end
    if(choice==2)
        if(isempty(M)==1)
            default=N*times;
            face_id=recognize_face_drive(default);
        else
            faces=M*times;
            face_id=recognize_face_drive(faces);
        end
    end
    if(choice==3)
        if(isempty(M)==1)
            default=N*times;
            face_id=recognize_face_cam(default);
        else
            faces=M*times;
            face_id=recognize_face_cam(faces);
        end
    end
    if (choice==4)
        vj_track_face();
    end
    if (choice==5)
        vj_faceD_live();
    end
    if (choice==6)
        close all;
        return;
    end
end
stop(vid);
%% config_arduino.m - for configuring the Arduino board
function config_arduino()
global b %global arduino class object
b=arduino('COM29');
b.pinMode(4,'OUTPUT'); % pins 4&5 for right & left, pins 6&7 for up & down
b.pinMode(5,'OUTPUT');
b.pinMode(6,'OUTPUT');
b.pinMode(7,'OUTPUT');
b.pinMode(2,'OUTPUT');
b.pinMode(3,'OUTPUT');
b.pinMode(8,'OUTPUT');
b.pinMode(9,'OUTPUT');
b.pinMode(10,'OUTPUT');
b.pinMode(11,'OUTPUT');
b.pinMode(12,'OUTPUT');
b.pinMode(13,'OUTPUT'); % pins 2,3,8,9,10,11,12,13 drive the 8 LEDs of a seven-segment display
b.pinMode(14,'OUTPUT');
b.pinMode(15,'OUTPUT');
b.pinMode(16,'OUTPUT');
b.pinMode(17,'OUTPUT'); % pins 14,15,16,17 multiplex the 4 seven-segment displays
end

%% recognize_face_drive.m - function for matching a face from the hard drive using the Eigenface algorithm
% Thanks to Santiago Serrano
function Min_id = recognize_face_drive(M)
close all
clc
% M is the number of images in the training set.
% Chosen std and mean; any values close to the std and mean of most of the images will do.
um=100;
ustd=80;
person_no=0;
times=5;
%read and show images
S=[]; %img matrix
figure(1);
for i=1:M
    str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image
    eval('img=imread(str);');
    %eval('img=rgb2gray(image);');
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    if i==3
        title('Training set','fontsize',18)
    end
    drawnow;
    [irow icol]=size(img); % get the number of rows (N1) and columns (N2)
    temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 vector
    S=[S temp]; %S is a (N1*N2)xM matrix after finishing the sequence
end
%Here we change the mean and std of all images. We normalize all images.
%This is done to reduce the error due to lighting conditions.
for i=1:size(S,2)
    temp=double(S(:,i));
    m=mean(temp);
    st=std(temp);
    S(:,i)=(temp-m)*ustd/st+um;
end
%show normalized images
figure(2);
for i=1:M
    str=strcat(int2str(i),'.jpg');
    img=reshape(S(:,i),icol,irow);
    img=img';
    eval('imwrite(img,str)');
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    drawnow;
    if i==3
        title('Normalized Training Set','fontsize',18)
    end
end
%mean image
m=mean(S,2); %obtains the mean of each row instead of each column
tmimg=uint8(m); %converts to unsigned 8-bit integer; values range from 0 to 255
img=reshape(tmimg,icol,irow); %takes the (N1*N2)x1 vector and creates an N2xN1 matrix
img=img'; %creates an N1xN2 matrix by transposing the image
figure(3);
imshow(img);
title('Mean Image','fontsize',18)
% Change image for manipulation
dbx=[]; % A matrix
for i=1:M
    temp=double(S(:,i));
    dbx=[dbx temp];
end
%Covariance matrix C=A'A, L=AA'
A=dbx';
L=A*A';
% vv are the eigenvectors for L
% dd are the eigenvalues for both L=dbx'*dbx and C=dbx*dbx'
[vv dd]=eig(L);
% Sort and eliminate those whose eigenvalue is zero
v=[];
d=[];
for i=1:size(vv,2)
    if(dd(i,i)>1e-4)
        v=[v vv(:,i)];
        d=[d dd(i,i)];
    end
end
%sort, will return an ascending sequence
[B index]=sort(d);
ind=zeros(size(index));
dtemp=zeros(size(index));
vtemp=zeros(size(v));
len=length(index);
for i=1:len
    dtemp(i)=B(len+1-i);
    ind(i)=len+1-index(i);
    vtemp(:,ind(i))=v(:,i);
end
d=dtemp;
v=vtemp;
%Normalization of eigenvectors
for i=1:size(v,2) %access each column
    kk=v(:,i);
    temp=sqrt(sum(kk.^2));
    v(:,i)=v(:,i)./temp;
end
%Eigenvectors of C matrix
u=[];
for i=1:size(v,2)
    temp=sqrt(d(i));
    u=[u (dbx*v(:,i))./temp];
end
%Normalization of eigenvectors
for i=1:size(u,2)
    kk=u(:,i);
    temp=sqrt(sum(kk.^2));
    u(:,i)=u(:,i)./temp;
end
% show eigenfaces
figure(4);
for i=1:size(u,2)
    img=reshape(u(:,i),icol,irow);
    img=img';
    img=histeq(img,255);
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    drawnow;
    if i==3
        title('Eigenfaces','fontsize',18)
    end
end
% Find the weight of each face in the training set.
omega = [];
for h=1:size(dbx,2)
    WW=[];
    for i=1:size(u,2)
        t = u(:,i)';
        WeightOfImage = dot(t,dbx(:,h)');
        WW = [WW; WeightOfImage];
    end
    omega = [omega WW];
end
% Acquire new image
% Note: the input image must have a bmp or jpg extension.
% It should have the same size as the ones in your training set,
% and be placed in the E: drive folder used above.
InputImage = input('Please enter the name of the image and its extension\n','s');
InputImage = imread(strcat('E:',InputImage));
figure(5)
subplot(1,2,1)
imshow(InputImage);
colormap('gray');
title('Input image','fontsize',18)
input_img=rgb2gray(InputImage);
%imshow(input_img);
InImage=reshape(double(input_img)',irow*icol,1);
temp=InImage;
me=mean(temp);
st=std(temp);
temp=(temp-me)*ustd/st+um;
NormImage = temp;
Difference = temp-m;
p = [];
aa=size(u,2);
for i = 1:aa
    pare = dot(NormImage,u(:,i));
    p = [p; pare];
end
ReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u holds the eigenvectors
ReshapedImage = reshape(ReshapedImage,icol,irow);
ReshapedImage = ReshapedImage';
%show the reconstructed image
subplot(1,2,2)
imagesc(ReshapedImage);
colormap('gray');
title('Reconstructed image','fontsize',18)
InImWeight = [];
for i=1:size(u,2)
    t = u(:,i)';
    WeightOfInputImage = dot(t,Difference');
    InImWeight = [InImWeight; WeightOfInputImage];
end
ll = 1:M;
figure(68)
subplot(1,2,1)
stem(ll,InImWeight)
title('Weight of Input Face','fontsize',14)
% Find Euclidean distance
e=[];
for i=1:size(omega,2)
    q = omega(:,i);
    DiffWeight = InImWeight-q;
    mag = norm(DiffWeight);
    e = [e mag];
end
kk = 1:size(e,2);
subplot(1,2,2)
stem(kk,e)
title('Euclidean distance of input image','fontsize',14)
MaximumValue=max(e)
MinimumValue=min(e)
Min_id=find(e==min(e));
person_no=Min_id/times;
p1=(round(person_no));
if(person_no<p1)
    p1=(p1-1);
    display('Detected face number')
    display(p1)
    write_digit(14,15,16,17,p1);
end
if(person_no>p1)
    p1=(p1+1);
    display('Detected face number')
    display(p1)
    write_digit(14,15,16,17,p1);
end
if(person_no==p1)
    display('Detected face number')
    display(p1)
    write_digit(14,15,16,17,p1);
end
end

%% recognize_face_cam.m - function for matching a face from the camera using the Eigenface algorithm
% Thanks to Santiago Serrano
function Min_id = recognize_face_cam(M)
imaqreset;
close all
clc
% M is the number of images in the training set.
vid = videoinput('winvideo',1,'YUY2_320x240');
% Chosen std and mean; any values close to the std and mean of most of the images will do.
um=100;
ustd=80;
person_no=0;
times=5;
%read and show images
S=[]; %img matrix
figure(1);
for i=1:M
    str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image
    eval('img=imread(str);');
    %eval('img=rgb2gray(image);');
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    if i==3
        title('Training set','fontsize',18)
    end
    drawnow;
    [irow icol]=size(img); % get the number of rows (N1) and columns (N2)
    temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 vector
    S=[S temp]; %S is a (N1*N2)xM matrix after finishing the sequence
end
%Here we change the mean and std of all images. We normalize all images.
%This is done to reduce the error due to lighting conditions.
for i=1:size(S,2)
    temp=double(S(:,i));
    m=mean(temp);
    st=std(temp);
    S(:,i)=(temp-m)*ustd/st+um;
end
%show normalized images
figure(2);
for i=1:M
    str=strcat(int2str(i),'.jpg');
    img=reshape(S(:,i),icol,irow);
    img=img';
    eval('imwrite(img,str)');
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    drawnow;
    if i==3
        title('Normalized Training Set','fontsize',18)
    end
end
%mean image
m=mean(S,2); %obtains the mean of each row instead of each column
tmimg=uint8(m); %converts to unsigned 8-bit integer; values range from 0 to 255
img=reshape(tmimg,icol,irow); %takes the (N1*N2)x1 vector and creates an N2xN1 matrix
img=img'; %creates an N1xN2 matrix by transposing the image
figure(3);
imshow(img);
title('Mean Image','fontsize',18)
% Change image for manipulation
dbx=[]; % A matrix
for i=1:M
    temp=double(S(:,i));
    dbx=[dbx temp];
end
%Covariance matrix C=A'A, L=AA'
A=dbx';
L=A*A';
% vv are the eigenvectors for L
% dd are the eigenvalues for both L=dbx'*dbx and C=dbx*dbx'
[vv dd]=eig(L);
% Sort and eliminate those whose eigenvalue is zero
v=[];
d=[];
for i=1:size(vv,2)
    if(dd(i,i)>1e-4)
        v=[v vv(:,i)];
        d=[d dd(i,i)];
    end
end
%sort, will return an ascending sequence
[B index]=sort(d);
ind=zeros(size(index));
dtemp=zeros(size(index));
vtemp=zeros(size(v));
len=length(index);
for i=1:len
    dtemp(i)=B(len+1-i);
    ind(i)=len+1-index(i);
    vtemp(:,ind(i))=v(:,i);
end
d=dtemp;
v=vtemp;
%Normalization of eigenvectors
for i=1:size(v,2) %access each column
    kk=v(:,i);
    temp=sqrt(sum(kk.^2));
    v(:,i)=v(:,i)./temp;
end
%Eigenvectors of C matrix
u=[];
for i=1:size(v,2)
    temp=sqrt(d(i));
    u=[u (dbx*v(:,i))./temp];
end
%Normalization of eigenvectors
for i=1:size(u,2)
    kk=u(:,i);
    temp=sqrt(sum(kk.^2));
    u(:,i)=u(:,i)./temp;
end
% show eigenfaces
figure(4);
for i=1:size(u,2)
    img=reshape(u(:,i),icol,irow);
    img=img';
    img=histeq(img,255);
    subplot(ceil(sqrt(M)),ceil(sqrt(M)),i)
    imshow(img)
    drawnow;
    if i==3
        title('Eigenfaces','fontsize',18)
    end
end
% Find the weight of each face in the training set.
omega = [];
for h=1:size(dbx,2)
    WW=[];
    for i=1:size(u,2)
        t = u(:,i)';
        WeightOfImage = dot(t,dbx(:,h)');
        WW = [WW; WeightOfImage];
    end
    omega = [omega WW];
end
% Acquire new image from camera
preview(vid);
choice=menu('Push CAM button for taking pic',...
    'CAM');
if(choice==1)
    g=getsnapshot(vid);
end
rgbImage=ycbcr2rgb(g);
imwrite(rgbImage,'camshot.jpg');
closepreview(vid);
InputImage = imread('camshot.jpg');
figure(5)
subplot(1,2,1)
imshow(InputImage);
colormap('gray');
title('Input image','fontsize',18)
input_img=rgb2gray(InputImage);
%imshow(input_img);
InImage=reshape(double(input_img)',irow*icol,1);
temp=InImage;
me=mean(temp);
st=std(temp);
temp=(temp-me)*ustd/st+um;
NormImage = temp;
Difference = temp-m;
p = [];
aa=size(u,2);
for i = 1:aa
    pare = dot(NormImage,u(:,i));
    p = [p; pare];
end
ReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u holds the eigenvectors
ReshapedImage = reshape(ReshapedImage,icol,irow);
ReshapedImage = ReshapedImage';
%show the reconstructed image
subplot(1,2,2)
imagesc(ReshapedImage);
colormap('gray');
title('Reconstructed image','fontsize',18)
InImWeight = [];
for i=1:size(u,2)
    t = u(:,i)';
    WeightOfInputImage = dot(t,Difference');
    InImWeight = [InImWeight; WeightOfInputImage];
end
ll = 1:M;
figure(68)
subplot(1,2,1)
stem(ll,InImWeight)
title('Weight of Input Face','fontsize',14)
% Find Euclidean distance
e=[];
for i=1:size(omega,2)
    q = omega(:,i);
    DiffWeight = InImWeight-q;
    mag = norm(DiffWeight);
    e = [e mag];
end
kk = 1:size(e,2);
subplot(1,2,2)
stem(kk,e)
title('Euclidean distance of input image','fontsize',14)
MaximumValue=max(e)
MinimumValue=min(e)
Min_id=find(e==min(e));
person_no=Min_id/times;
p1=(round(person_no));
if(person_no<p1)
    p1=(p1-1);
    display('Detected face number')
    display(p1);
    write_digit(14,15,16,17,p1);
end
if(person_no>p1)
    p1=(p1+1);
    display('Detected face number')
    display(p1);
    write_digit(14,15,16,17,p1);
end
if(person_no==p1)
    display('Detected face number')
    display(p1);
    write_digit(14,15,16,17,p1);
end
stop(vid);
end

%% vj_track_face.m - function for tracking a face
% created and coded by Abhishek Gupta (abhishekgpt10@gmail.com)
function vj_track_face()
imaqreset;
close all;
clc
no_face=0;
%Detect objects using the Viola-Jones algorithm
vid = videoinput('winvideo',1,'YUY2_320x240');
set(vid,'ReturnedColorSpace','rgb');
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 1;
vid.FramesPerTrigger=20;
figure;
% Ensure smooth display
set(gcf,'doublebuffer','on');
start(vid);
while(vid.FramesAcquired<=1000)
    FDetect = vision.CascadeObjectDetector; %To detect faces
    I = getsnapshot(vid); %Read the input image
    BB = step(FDetect,I); %Returns bounding box values based on the number of objects
    hold on
    figure(1),imshow(I);
    title('Face Detection');
    for i = 1:size(BB,1)
        no_face=size(BB,1);
        write_digit(14,15,16,17,no_face);
        rectangle('Position',BB(i,:),'LineWidth',2,'LineStyle','-','EdgeColor','y');
        display(BB(1));
        display(BB(2));
        motor_motion(BB(1),BB(2));
        hold off;
        flushdata(vid);
    end
end
stop(vid);
end

%% vj_faceD_live.m - function for tracking a recognized face
function vj_faceD_live(std,mean)
imaqreset;
close all
clc
detect=0;
std_2=0;
mean_2=0;
tlrnce=7;
while (1==1)
    choice=menu('Face Recognition',...
        'Real time recognition',...
        'Track last recognised face',...
        'Exit');
    if (choice==1)
        [std,mean]= face_stdmean();
        %Detect objects using the Viola-Jones algorithm
        vid = videoinput('winvideo',1,'YUY2_320x240');
        set(vid,'ReturnedColorSpace','rgb');
        set(vid,'TriggerRepeat',Inf);
        vid.FrameGrabInterval = 1;
        vid.FramesPerTrigger=20;
        figure; % Ensure smooth display
        set(gcf,'doublebuffer','on');
        start(vid);
        while(vid.FramesAcquired<=1500)
            FDetect = vision.CascadeObjectDetector; %To detect faces
            I = getsnapshot(vid); %Read the input image
            BB = step(FDetect,I); %Returns bounding box values based on the number of objects
            hold on
            if(size(BB,1) == 1)
                I2=imcrop(I,BB);
                gray_face=rgb2gray(I2);
                std_2 = std2(gray_face);
                mean_2 = mean2(gray_face);
                %figure(1),imshow(gray_face);
            end
            figure(1),imshow(I);
            title('Face Recognition');
            display(std);
            display(mean);
            display(std_2);
            display(mean_2);
            for i = 1:size(BB,1)
                if(((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce)))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce))))
                    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g');
                    display('DETECTED');
                    detect=(detect+1)
                    if(detect==2)
                        display('tracking....');
                        detect=0;
                        motor_motion(BB(1),BB(2)); %for motion of motors in direction of faces
                    end
                else
                    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
                    display('NOT DETECTED');
                end
                hold off;
                flushdata(vid);
            end
        end
        stop(vid);
    end
    if(choice==2)
        [std,mean]=face_stdmean_recgnz();
        %Detect objects using the Viola-Jones algorithm
        vid = videoinput('winvideo',1,'YUY2_320x240');
        set(vid,'ReturnedColorSpace','rgb');
        set(vid,'TriggerRepeat',Inf);
        vid.FrameGrabInterval = 1;
        vid.FramesPerTrigger=20;
        figure; % Ensure smooth display
        set(gcf,'doublebuffer','on');
        start(vid);
        while(vid.FramesAcquired<=600)
            FDetect = vision.CascadeObjectDetector; %To detect faces
            I = getsnapshot(vid); %Read the input image
            BB = step(FDetect,I); %Returns bounding box values based on the number of objects
            hold on
            if(size(BB,1) == 1)
                I2=imcrop(I,BB);
                gray_face=rgb2gray(I2);
                std_2 = std2(gray_face);
                mean_2 = mean2(gray_face);
                %figure(1),imshow(gray_face);
            end
            figure(1),imshow(I);
            title('Face Recognition');
            display(std);
            display(mean);
            display(std_2);
            display(mean_2);
            for i = 1:size(BB,1)
                if(((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce)))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce))))
                    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g');
                    display('DETECTED');
                    detect=(detect+1)
                    if(detect==2)
                        display('tracking....');
                        detect=0;
                        motor_motion(BB(1),BB(2)); %for motion of motors in direction of faces
                    end
                else
                    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
                    display('NOT DETECTED');
                end
                hold off;
                flushdata(vid);
            end
        end
        stop(vid);
    end
    if(choice==3)
        return
    end
end

%% face_stdmean.m - function returning the standard deviation and mean for real-time recognition
function [std_f,mean_f] = face_stdmean()
imaqreset;
close all;
clc;
i=1;
global std
global mean
global times
std=0;
mean=0;
vid = videoinput('winvideo',1,'YUY2_320x240');
while (1==1)
    choice=menu('Face Recognition',...
        'Taking photos for recognition',...
        'Exit');
    if (choice==1)
        FDetect = vision.CascadeObjectDetector;
        preview(vid);
        while(i<(times+1))
            choice2=menu('Face Recognition',...
                'Capture');
            if(choice2==1)
                g=getsnapshot(vid);
                %saving rgb image in specified folder
                rgbImage=ycbcr2rgb(g);
                str=strcat(int2str(i),'f.jpg');
                fullImageFileName = fullfile('E:New Folder',str);
                imwrite(rgbImage,fullImageFileName);
                BB = step(FDetect,rgbImage);
                I2=imcrop(rgbImage,BB);
                %saving grayscale image in current directory
                grayImage=rgb2gray(I2);
                Dir_name=fullfile(pwd,str);
                imwrite(grayImage,Dir_name);
                std = (std+std2(grayImage));
                mean = (mean+mean2(grayImage));
                i=(i+1);
            end
            std_f=(std/times)
            mean_f=(mean/times)
        end
    end
    closepreview(vid);
    if (choice==2)
        std_f=(std/times)
        mean_f=(mean/times)
        return;
    end
end
end

%% face_stdmean_recgnz.m - function returning the standard deviation and mean of the last recognized face
function [std_f,mean_f] = face_stdmean_recgnz()
close all;
clc;
global face_id
global std
global mean
global times
std=0;
mean=0;
%i=face_id;
i=input('Enter face id for live recognition: ');
FDetect = vision.CascadeObjectDetector;
j=(i*times);
k=(j-times);
while(j>=(k+1))
    str=strcat(int2str(j),'.jpg');
    fullImageFileName = fullfile('E:New Folder',str);
    I=imread(fullImageFileName);
    BB = step(FDetect,I);
    I2=imcrop(I,BB);
    grayImage=rgb2gray(I2);
    str2=strcat(int2str(j),'f.jpg');
    Dir_name=fullfile(pwd,str2);
    imwrite(grayImage,Dir_name);
    std = (std+std2(grayImage));
    mean = (mean+mean2(grayImage));
    j=(j-1);
end
std_f=(std/times)
mean_f=(mean/times)
end

%% motor_motion.m - function for movement of motors
function motor_motion(x,y) %for motion of motors in direction of faces
global b %global arduino class object
if ((x<200 && x>120)&&(y<160 && y>100))
    disp('Stop');
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);
end
if (x<120)
    disp('right');
    b.digitalWrite(4,0);
    b.digitalWrite(5,1);
    pause(0.05);
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    pause(0.1);
end
if (x>200)
    disp('left');
    b.digitalWrite(4,1);
    b.digitalWrite(5,0);
    pause(0.05);
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    pause(0.1);
end
if (y>160)
    disp('up');
    b.digitalWrite(6,1);
    b.digitalWrite(7,0);
    pause(0.05);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);
    pause(0.1);
end
if (y<100)
    disp('down');
    b.digitalWrite(6,0);
    b.digitalWrite(7,1);
    pause(0.05);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);
    pause(0.1);
end
end

%% write_digit.m - function for switching digits on the seven-segment display
function write_digit(w,x,y,z,a)
global b
switch a
    case 1
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        one();
        pause(.001);
    case 2
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        two();
        pause(.001);
    case 3
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        three();
        pause(.001);
    case 4
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        four();
        pause(.001);
    case 5
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        five();
        pause(.001);
    case 6
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        six();
        pause(.001);
    case 7
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        seven();
        pause(.001);
    case 8
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        eight();
        pause(.001);
    case 9
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        nine();
        pause(.001);
    case 0
        b.digitalWrite(w,1);
        b.digitalWrite(x,0);
        b.digitalWrite(y,0);
        b.digitalWrite(z,0);
        zero();
        pause(.001);
end
end

%% zero.m - function to write zero
function zero()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end

%% one.m - function to write one
function one()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end

%% two.m - function to write two
function two()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,1);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,1);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end

%% three.m - function to write three
function three()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end

%% four.m - function to write four
function four()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end

%% five.m - function to write five
function five()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,1);
b.digitalWrite(13,0);
end

%% six.m - function to write six
function six()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,1);
end
%% seven.m - function to write seven
function seven()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,1);
b.digitalWrite(8,1);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end

%% eight.m - function to write eight
function eight()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,0);
b.digitalWrite(10,0);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end

%% nine.m - function to write nine
function nine()
global b
b.digitalWrite(2,1);
b.digitalWrite(3,0);
b.digitalWrite(8,0);
b.digitalWrite(9,1);
b.digitalWrite(10,1);
b.digitalWrite(11,0);
b.digitalWrite(12,0);
b.digitalWrite(13,0);
end
Arduino Uno Board Schematics
Fig: Arduino Uno schematics

Driver for the Arduino UNO
The driver used for the Arduino UNO is the Prolific PL2303, available from the site given below.
http://www.prolific.com

MATLAB interfacing files for Arduino
MATLAB provides interfacing files for almost all Arduino boards. These files should be included in the working directory. In the folder 'ardiosrv' there is a file named 'ardiosrv.pde'; this file needs to be burned onto the Arduino UNO board. These files are available at the link given below.
http://www.mathworks.com/academia/arduino-software/arduino-matlab.html
REFERENCES
The useful material for this project was taken from the following references:
[1] Santiago Serrano, Eigenfaces tutorial, Drexel University.
[2] Padhraic Smyth, Face Detection using the Viola-Jones Method, Department of Computer Science, University of California, Irvine.
[3] Abboud, F. Davoine, and M. Dang. Facial expression recognition and synthesis based on an appearance model. Signal Process., Image Commun., 2004.
[4] J. Ahlberg. Candide-3 – an updated parameterised face. Technical report, Linköping University, 2001.
[5] J. Ahlberg and R. Forchheimer. Face tracking for model-based coding and face animation. International Journal of Imaging Systems and Technology, 2003.
[6] A. Azarbayejani and A. Pentland. Recursive estimation of motion, structure, and focal length. IEEE PAMI, 1995.
[7] V. Belle, T. Deselaers, and S. Schiffer. Randomized trees for real-time one-step face detection and recognition. 2008.
[8] M. J. Black and Y. Yacoob. Recognizing facial expressions in image sequences using local parameterized models of image motion. IJCV, 1997.
[9] S. Basu, I. Essa, and A. Pentland. Motion regularization for model-based head tracking. In CVPR 1996.
[10] M. La Cascia, S. Sclaroff, and V. Athitsos. Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3D models. IEEE PAMI, 2000.