This document summarizes a research project on face and facial feature detection from images. The project uses the Viola-Jones object detection framework combined with geometric information to detect faces and locate features such as the eyes, nose, and mouth. Key steps include face detection using Haar-like features and AdaBoost classification, followed by detection of facial features based on characteristics such as size, shape, color, and position relative to other facial features. MATLAB functions such as videoinput and getsnapshot are used to acquire video frames and capture images for processing.
1. Face and its Parts Detection System
Inamullah
Electrical Engineering Department
COMSATS Institute of Information Technology, Abbottabad
Email: inamullah638@yahoo.com
ABSTRACT This project paper presents a new face-parts information analyzer, a promising model for detecting faces and locating facial features in images. The main objective is to build a fully automated human facial measurement system from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks. The core task of face detection is to locate each facial part and mark it with a circle or rectangle. In this paper, face detection depends on matching the face against learned face patterns, i.e. pattern recognition. The study presents a novel and simple model based on a mixture of techniques and algorithms in a shared pool: the Viola-Jones object detection framework combined with geometric and symmetric information about the face parts in the image.
General Terms
Image processing, human face detection algorithms.
Keywords
Face detection, video frames, Viola-Jones, skin detection, skin color classification, gender detection and recognition, machine learning.
INTRODUCTION It is a true challenge to build an automated system that equals the human ability to detect faces and to recognize or estimate human body dimensions or body parts from an image or a video. The problem poses real conceptual and intellectual challenges: faces are non-rigid and have a high degree of variability in size, shape, color, and texture. Applications include auto-focus in cameras, visual surveillance, traffic safety monitoring, and human-computer interaction. Face recognition follows a pattern, focused on the face or body. Face detection is the stepping stone for all facial analysis algorithms, including face alignment, face modeling, head pose tracking, face verification and authentication, face relighting, facial expression tracking and recognition, gender/age recognition, face recognition, and many more. Only when computers can recognize faces, by computing the logic of facial expressions and matching those expressions against the facial structure, can they begin to truly understand people's thoughts and intentions. Given an arbitrary image, the goal of face detection is to determine whether there are any faces in the image and, if so, to return the location and extent of each face.
OBJECT DETECTION FRAMEWORK
The Viola-Jones object detection framework is the first object detection framework to provide competitive object detection rates in real time. It was motivated primarily by the problem of face detection, although it can be trained to detect a variety of object classes. It is usually adopted in this field to find an object of unknown size, and it locates the face region in an image with high efficiency and accuracy. The Viola-Jones method combines three techniques:
1. Integral image for feature extraction: the rectangular Haar-like features are computed efficiently from the integral image.
2. AdaBoost, a machine-learning method used for face detection. The word "boosted" means that the classifiers at every stage of the cascade are complex themselves, built out of basic classifiers using one of four boosting techniques (weighted voting). The AdaBoost algorithm is a learning process that starts from weak classifiers and uses weight values to learn and construct a strong classifier.
3. A cascade classifier, used to combine many features efficiently. The word "cascade" in the classifier name means that the resultant classifier consists of several simpler classifiers (stages) that are applied in sequence to a region of interest until, at some stage, the candidate is rejected or all the stages are passed. Finally, the model separates non-face regions from face regions after cascading the strong classifiers.
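The integral-image technique described above can be illustrated with a short sketch (shown here in Python for readability; this is an illustrative reconstruction, not the paper's MATLAB code). integral_image builds the summed-area table, rect_sum evaluates any rectangle sum from at most four table lookups, and haar_two_rect computes a simple two-rectangle Haar-like feature as the difference of two adjacent sums:

```python
def integral_image(img):
    """Build a summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y][x] = row + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle whose top-left corner is (x, y)."""
    A = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    B = ii[y - 1][x + w - 1] if y > 0 else 0
    C = ii[y + h - 1][x - 1] if x > 0 else 0
    D = ii[y + h - 1][x + w - 1]
    return D - B - C + A

def haar_two_rect(ii, x, y, w, h):
    """A two-rectangle Haar-like feature: left half minus right half."""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right
```

Because every rectangle sum costs a constant four lookups regardless of its size, all the Haar-like features at any scale can be evaluated cheaply, which is what makes real-time detection possible.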
Face Detection Algorithm
Face detection techniques can be categorized into two major groups: feature-based approaches and image-based approaches. Image-based approaches use linear subspace methods, neural networks, and statistical approaches for face detection. Feature-based approaches can be subdivided into low-level analysis, feature analysis, and active shape models.
Face detection is driven by specially trained scanning-window classifiers. The Viola-Jones face detection algorithm is the first real-time face detection system. Three ingredients work in concert to enable fast and accurate detection: the integral image for feature computation, AdaBoost for feature selection, and the cascade for efficient allocation of computational resources.
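The cascade's early-rejection behavior described above can be sketched as follows (an illustrative Python sketch; the two toy stages and their thresholds are invented placeholders, not trained classifiers, and real stages would be boosted sums of Haar-feature responses):

```python
def cascade_classify(window, stages):
    """Run a window through cascade stages; reject at the first failing stage.

    Each stage is a (score_fn, threshold) pair: score_fn maps the window
    to a number, and the window must meet every stage's threshold to be
    accepted as a face candidate.
    """
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # rejected early: later (costlier) stages never run
    return True            # passed all stages

# Toy stages for illustration only (cheap test first, costlier test second):
stages = [
    (lambda w: sum(w) / len(w), 10),   # stage 1: mean intensity
    (lambda w: max(w) - min(w), 30),   # stage 2: contrast range
]
```

Most scanning windows contain no face, so most are discarded by the first cheap stage; only the rare promising candidates pay the cost of the later stages.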
Eye Detection Algorithm
Eyes are detected based on the hypothesis that they are darker than the rest of the face. Eye-analogue segments are found by searching for small patches in the input image that are roughly as large as an eye and darker than their neighborhoods. A pair of potential eye regions is accepted as eyes if it satisfies constraints based on anthropological characteristics of human eyes. To discard regions corresponding to eyebrows, the model uses the fact that the center part of an eye region is darker than its other parts. A simple histogram analysis of each region is then used to select eye regions, since an eye region should exhibit two peaks while an eyebrow region shows only one. A final constraint is the alignment of the two major axes, so that the two eye regions lie on the same line. The study proposes a new algorithm for eye detection that uses iris geometrical information to determine the image region containing an eye, and then applies symmetry to select both eyes.
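The two-peak histogram test for separating eye regions from eyebrow regions can be sketched like this (an illustrative Python sketch; the bin count and the local-maximum peak rule are assumptions made for this example, not details taken from the paper):

```python
def count_histogram_peaks(values, bins=8, lo=0, hi=256):
    """Histogram the pixel intensities and count local maxima (peaks)."""
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        idx = min(int((v - lo) / width), bins - 1)
        hist[idx] += 1
    peaks = 0
    for i, h in enumerate(hist):
        left = hist[i - 1] if i > 0 else -1
        right = hist[i + 1] if i < bins - 1 else -1
        if h > left and h > right and h > 0:
            peaks += 1
    return peaks

def looks_like_eye(region_pixels):
    """An eye region (dark iris against bright sclera) should show two
    intensity peaks; an eyebrow region (uniformly dark) only one."""
    return count_histogram_peaks(region_pixels) >= 2
```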
Mouth Detection Algorithm
Features are detected and extracted from the mouth region. This model is composed of weak classifiers based on decision stumps, which use Haar features to encode mouth details. Experimental results show that the algorithm divides the face image based on a physical approximation of the locations of the eyes, nose, and mouth on the face, and can find the mouth region rapidly. It is useful in a wide range of settings; moreover, it is effective against complex backgrounds, such as mouth detection in public scenes.
2. MATLAB Functions Used in This Code:
1 - videoinput
Creates a video input object.
Syntax:
obj = videoinput(adaptorname)
obj = videoinput(adaptorname,deviceID)
obj = videoinput(adaptorname,deviceID,format)
obj = videoinput(adaptorname,deviceID,format,P1,V1,...)
Description:
obj = videoinput(adaptorname) constructs the video input object obj. A video input object represents the connection between MATLAB and a particular image acquisition device. adaptorname is a text string that specifies the name of the adaptor used to communicate with the device. Use the imaqhwinfo function to determine the adaptors available on your system.
obj = videoinput(adaptorname,deviceID) constructs a video input object obj, where deviceID is a numeric scalar value that identifies a particular device available through the specified adaptor, adaptorname. Use the imaqhwinfo(adaptorname) syntax to determine the devices available through the specified adaptor. If deviceID is not specified, the first available device ID is used. As a convenience, a device's name can be used in place of the deviceID. If multiple devices have the same name, the first available device is used.
obj = videoinput(adaptorname,deviceID,format) constructs a video input object, where format is a text string that specifies a particular video format supported by the device or the full path of a device configuration file (also known as a camera file). To get a list of the formats supported by a particular device, view the DeviceInfo structure for the device that is returned by the imaqhwinfo function. Each DeviceInfo structure contains a SupportedFormats field. If format is not specified, the device's default format is used. When the video input object is created, its VideoFormat field contains the format name or device configuration file that you specify.
obj = videoinput(adaptorname,deviceID,format,P1,V1,...) creates a video input object obj with the specified property values. If an invalid property name or property value is specified, the object is not created.
Examples:
% Construct a video input object.
obj = videoinput('matrox', 1);
% Select the source to use for acquisition.
obj.SelectedSourceName = 'input1';
% View the properties for the selected video source object.
src_obj = getselectedsource(obj);
get(src_obj)
% Preview a stream of image frames.
preview(obj);
% Acquire and display a single image frame.
frame = getsnapshot(obj);
image(frame);
% Remove the video input object from memory.
delete(obj)
2 - getsnapshot
Immediately returns a single image frame.
Syntax:
frame = getsnapshot(obj)
[frame, metadata] = getsnapshot(obj)
Description:
frame = getsnapshot(obj) immediately returns one single image frame, frame, from the video input object obj. The frame of data returned is independent of the video input object's FramesPerTrigger property and has no effect on the value of the FramesAvailable or FramesAcquired properties. The object obj must be a 1-by-1 video input object.
frame is returned as an H-by-W-by-B matrix, where:
H   Image height, as specified in the ROIPosition property
W   Image width, as specified in the ROIPosition property
B   Number of bands associated with obj, as specified in the NumberOfBands property
frame is returned to the MATLAB workspace in its native data type, using the color space specified by the ReturnedColorSpace property. You can use the MATLAB image or imagesc function to view the returned data.
[frame, metadata] = getsnapshot(obj) also returns metadata, a 1-by-1 array of structures. This structure contains information about the corresponding frame, including the field AbsTime, the absolute time the frame was acquired, expressed as a time vector. In addition to that field, some adaptors may add other adaptor-specific metadata.
3 - vision.CascadeObjectDetector
System object. Package: vision.
Detects objects using the Viola-Jones algorithm.
Description:
The cascade object detector uses the Viola-Jones algorithm to detect people's faces, noses, eyes, mouths, or upper bodies. You can also use the Training Image Labeler to train a custom classifier to use with this System object.
detector = vision.CascadeObjectDetector creates a System object, detector, that detects objects using the Viola-Jones algorithm. The ClassificationModel property controls the type of object to detect. By default, the detector is configured to detect faces.
detector = vision.CascadeObjectDetector(MODEL) creates a System object, detector, configured to detect objects defined by the input string MODEL. There are several valid MODEL strings, such as 'FrontalFaceCART', 'UpperBody', and 'ProfileFace'. See the ClassificationModel property description for a full list of available models.
detector = vision.CascadeObjectDetector(XMLFILE) creates a System object, detector, and configures it to use the custom classification model specified with the XMLFILE input. The XMLFILE can be created using the trainCascadeObjectDetector function or the OpenCV (Open Source Computer Vision) training functionality. You must specify a full or relative path to the XMLFILE if it is not on the MATLAB path.
detector = vision.CascadeObjectDetector(Name,Value) configures the cascade object detector's properties, specified as one or more name-value pair arguments. Unspecified properties have default values.
4 - step
Use the step syntax with an input image, I, the selected cascade object detector object, and any optional properties to perform detection.
BBOX = step(detector,I) returns BBOX, an M-by-4 matrix defining M bounding boxes containing the detected objects. This method performs multiscale object detection on the input image, I. Each row of the output matrix BBOX contains a four-element vector, [x y width height], that specifies, in pixels, the upper-left corner and size of a bounding box. The input image I must be a grayscale or truecolor (RGB) image.
BBOX = step(detector,I,roi) detects objects within the rectangular search region specified by roi. You must specify roi as a 4-element vector, [x y width height], that defines a rectangular region of interest within image I. Set the 'UseROI' property to true to use this syntax.
5 - ClassificationModel — Trained cascade classification model
Trained cascade classification model, specified as a comma-separated pair consisting of 'ClassificationModel' and a string. This value sets the classification model for the detector. You may set this string to an XML file containing a custom classification model, or to one of the valid model strings. You can train a custom classification model using the trainCascadeObjectDetector function. The function can train the model using Haar-like features, histograms of oriented gradients (HOG), or local binary patterns (LBP). For details on how to use the function, see Train a Cascade Object Detector.
MinSize — Size of smallest detectable object
Size of the smallest detectable object, specified as a comma-separated pair consisting of 'MinSize' and a two-element [height width] vector. Set this property, in pixels, to the minimum-size region containing an object. It must be greater than or equal to the image size used to train the model. Use this property to reduce computation time when you know the minimum object size prior to processing the image. When you do not specify a value for this property, the detector sets it to the size of the image used to train the classification model. This property is tunable.
MaxSize — Size of largest detectable object
Size of the largest detectable object, specified as a comma-separated pair consisting of 'MaxSize' and a two-element [height width] vector. Specify the size, in pixels, of the largest object to detect. Use this property to reduce computation time when you know the maximum object size prior to processing the image. When you do not specify a value for this property, the detector sets it to size(I). This property is tunable.
ScaleFactor — Scaling for multiscale object detection
Scaling for multiscale object detection, specified as a comma-separated pair consisting of 'ScaleFactor' and a value greater than 1.0001. The scale factor incrementally scales the detection resolution between MinSize and MaxSize. You can set the scale factor to an ideal value using size(I)/(size(I)-0.5). The detector scales the search region at increments between MinSize and MaxSize using the following relationship:

search region = round((Training Size) * (ScaleFactor ^ N))

where N is the current increment, an integer greater than zero, and Training Size is the image size used to train the classification model. This property is tunable. Default: 1.1.
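The search-region relationship above can be enumerated directly (an illustrative Python sketch; scalar sizes are used for simplicity, whereas the real detector works with [height width] pairs):

```python
def search_region_sizes(training_size, scale_factor, min_size, max_size):
    """List the window sizes the detector scans, following
    size = round(training_size * scale_factor ** N), keeping only
    sizes within [min_size, max_size]."""
    sizes, n = [], 0
    while True:
        size = round(training_size * scale_factor ** n)
        if size > max_size:
            break
        if size >= min_size:
            sizes.append(size)
        n += 1
    return sizes

# e.g. a 24-pixel training size scanned up to 40 pixels at the
# default ScaleFactor of 1.1:
# search_region_sizes(24, 1.1, 24, 40)
```

A scale factor closer to 1 scans more sizes (slower but less likely to miss a face between scales); a larger factor scans fewer sizes and runs faster.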
6 - MergeThreshold — Detection threshold
Detection threshold, specified as a comma-separated pair consisting of 'MergeThreshold' and a scalar integer. This value defines the criteria needed to declare a final detection in an area where there are multiple detections around an object. Groups of co-located detections that meet the threshold are merged to produce one bounding box around the target object. Increasing this threshold may help suppress false detections by requiring that the target object be detected multiple times during the multiscale detection phase. When you set this property to 0, all detections are returned without performing thresholding or merging. This property is tunable. Default: 4.
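The merging behavior controlled by this threshold can be sketched as follows (an illustrative Python sketch; the greedy grouping and box averaging are simplifications assumed for this example, not the detector's actual internal algorithm):

```python
def overlap(a, b):
    """True if two [x, y, w, h] boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def merge_detections(boxes, threshold):
    """Greedily group overlapping boxes; keep groups with at least
    `threshold` members and average each kept group into one box."""
    groups = []
    for box in boxes:
        for group in groups:
            if any(overlap(box, member) for member in group):
                group.append(box)
                break
        else:
            groups.append([box])
    merged = []
    for group in groups:
        if len(group) >= threshold:
            n = len(group)
            merged.append([round(sum(b[i] for b in group) / n)
                           for i in range(4)])
    return merged
```

With a higher threshold, an isolated single detection (often a false positive) is discarded, while a face detected at several nearby positions and scales survives as one merged box.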
Conclusions:
This project takes an image at any chosen instant from a live video stream and detects the face, mouth, eyes, and nose.
Reference: www.mathworks.com
Figures: The figures (omitted here) show frames taken from the live video stream: the original picture, the face detection result, and the eye crop.
5. MATLAB Code:

close all; clear all; clc;

vid = videoinput('winvideo', 1);   % input the video from the webcam
preview(vid);

% Grab frames until the instant we want, keeping the last one
for j = 1:100
    I = getsnapshot(vid);          % take an image from the video at any instant
end
imshow(I);

% Face detection: returns bounding box values, one row per detected object
FDetector = vision.CascadeObjectDetector;
BB = step(FDetector, I);
figure, imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',5,'LineStyle','-','EdgeColor','r');
end
title('Face Detection');
hold off;

% To detect the eyes (assumes a single eye pair is found)
EyeDetect = vision.CascadeObjectDetector('EyePairBig');
BB = step(EyeDetect, I);
figure, imshow(I);
rectangle('Position',BB(1,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');
title('Eyes Detection');
Eyes = imcrop(I, BB(1,:));         % crop the detected eye pair
figure, imshow(Eyes);

% To detect the nose
NoseDetect = vision.CascadeObjectDetector('Nose','MergeThreshold',16);
BB = step(NoseDetect, I);
figure, imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','b');
end
title('Nose Detection');
hold off;

% To detect the mouth
MouthDetect = vision.CascadeObjectDetector('Mouth','MergeThreshold',111);
BB = step(MouthDetect, I);
figure, imshow(I); hold on
for i = 1:size(BB,1)
    rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r');
end
title('Mouth Detection');
hold off;