Nisha Soni, Garima Mathur, Mahendra Kumar / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 4, Jul-Aug 2013, pp. 785-790
A MATLAB Based High-Speed Face Recognition System Using SOM Neural Networks
Nisha Soni (1), Garima Mathur (2), Mahendra Kumar (3)
(1, 2, 3) Department of Electronics
(1, 2) JEC Kukas, Jaipur, India; (3) Mewar University, Gangrar, Chittorgarh, India
Abstract
Face recognition (FR) is a challenging issue due to variations in pose, illumination, and expression. The results of most existing FR methods are satisfactory, but they still include images irrelevant to the target image. This paper presents a new technique for human face recognition. It uses an image-based approach towards artificial intelligence, removing redundant data from face images through image compression using Sobel Edge Detection (SED) and comparing this with the two-dimensional discrete cosine transform (2D-DCT) method for better speed and efficiency. SED, a popular edge detection method, is considered in this work; it is available through the edge.m function in the image toolbox. In the edge function, the Sobel method uses a derivative approximation to find edges, and therefore returns edges at those points where the gradient of the considered image is maximum. The developed face recognition algorithm formulates an image-based approach, using the Two-Dimensional Discrete Cosine Transform (2D-DCT) or SED for image compression and the Self-Organizing Map (SOM) Neural Network for recognition, simulated in MATLAB.
Key Words -- Face recognition; discrete cosine transform (DCT); Sobel edge detection (SED); SOM network.
I. INTRODUCTION
Using computers for image processing has two objectives: first, to create images more suitable for people to observe and identify; second, to allow computers to automatically recognize and understand images. The edge of an image is its most basic feature and contains a wealth of the image's internal information. Edge detection is the process of localizing pixel intensity transitions, and it is used in object recognition, target tracking, segmentation, and so on. Edge detection is therefore one of the most important parts of image processing. Current image edge detection methods are mainly differential operator techniques and high-pass filtration. Among these, the most primitive differential and gradient edge detection methods are complex and their effects are not satisfactory. The widely used operators such as Sobel, Prewitt, Roberts and Laplacian are sensitive to noise and their anti-noise performance is poor. The LoG and Canny edge detection operators use a Gaussian function to smooth (convolve) the original image, but the computations are very large. This paper mainly uses the Sobel operator to perform edge detection on the images. It has been shown that the edge detection effect of this method is very good and that its anti-noise performance is also very strong. The Sobel edge detector uses two masks, one vertical and one horizontal. These masks are generally 3×3 matrices; in particular, 3×3 matrices are used in MATLAB (see edge.m). In this work, the Sobel edge detection masks are also extended to 5×5 dimensions [1], and a MATLAB function, called Sobel 5×5, is developed using these new matrices. MATLAB, a product of The MathWorks Company, contains many toolboxes. One of these is the image toolbox, which provides many functions and algorithms [2]; its edge function offers several detection methods (Sobel, Prewitt, Roberts, Canny, etc.) to the user.
The image set, which consists of 8 images (256×256), is used to test the Sobel 3×3 and Sobel 5×5 edge detectors in MATLAB.
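As an illustrative sketch (not taken from the paper) of how the toolbox Sobel detector mentioned above might be invoked on one of the 256×256 test images; the file name is a placeholder, and imshowpair assumes a reasonably recent Image Processing Toolbox:

```matlab
% Minimal sketch: Sobel edge detection with the Image Processing Toolbox.
% 'face01.pgm' is a placeholder for one of the 256x256 grayscale test images.
I  = imread('face01.pgm');      % read a test image
BW = edge(I, 'sobel');          % built-in 3x3 Sobel detector (edge.m)
imshowpair(I, BW, 'montage');   % compare original image and detected edges
```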
II. THE PRINCIPLE OF EDGE
DETECTION
In a digital image, the so-called edge is a collection of pixels whose gray value has a step or roof change; it also refers to the part of the image where the brightness of a local area changes significantly. The gray profile in this region can generally be seen as a step: in a small buffer area, one gray value changes rapidly to another that differs from it considerably. Edges widely exist between objects and backgrounds, between objects and objects, and between primitives and primitives. The edge of an object is reflected in the discontinuity of the gray values. Therefore, the general method of edge detection is to study the changes of a single image pixel in a gray area and to use the variation of the edge neighbourhood in the first or second order derivative to detect the edge. This method is usually referred to as the local-operator edge detection method. Edge detection is mainly the measurement, detection and location of changes in image gray level. The image edge is the most basic feature of the image. When we observe objects, the clearest parts we see first are edges and lines, and from the composition of edges and lines we can infer the object structure. Therefore, edge extraction is an important technique in graphics processing and feature extraction. The basic idea of edge detection is as follows: first, use an edge enhancement operator to highlight the local edges of the image; then, define the pixel "edge strength" and set a threshold to extract the edge point set. However, because of noise and image blurring, the detected edge may not be continuous. So edge detection includes two steps: first, using the edge operator to extract the edge point set; second, removing some points from the edge point set, filling it with some others, and linking the obtained edge point set into lines.
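The "enhance, then threshold" idea described above can be sketched in MATLAB roughly as follows; this is an assumed illustration, not the paper's code, and the file name and threshold value are placeholders:

```matlab
% Sketch of the basic idea: edge enhancement followed by thresholding.
% 'face01.pgm' and the 0.25 threshold factor are illustrative choices.
I = im2double(imread('face01.pgm'));     % grayscale test image
[Gmag, ~] = imgradient(I, 'sobel');      % edge strength (gradient magnitude)
edges = Gmag > 0.25 * max(Gmag(:));      % keep pixels whose strength exceeds the threshold
imshow(edges);
```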
III. AN EDGE DETECTION SOBEL
FILTER DESIGN
Compared to other edge operators, Sobel has two main advantages: 1) because of the introduction of the averaging factor, it has some smoothing effect on the random noise of the image; 2) because it is the differential of two rows or two columns, the edge elements on both sides are enhanced, so that the edge appears thick and bright. Most edge detection methods work on the assumption that an edge occurs where there is a discontinuity in the intensity function or a very steep intensity gradient in the image. Using this assumption, if one takes the derivative of the intensity values across the image and finds the points where the derivative is maximum, the edges can be located. The gradient is a vector whose components measure how rapidly the pixel values change with distance in the x and y directions. Thus, the components of the gradient may be found using the following approximations:
∂f/∂x ≈ Δf/Δx = [f(x + dx, y) − f(x, y)] / dx        (2.1)
∂f/∂y ≈ Δf/Δy = [f(x, y + dy) − f(x, y)] / dy        (2.2)
where dx and dy measure distance along the x and y directions respectively. In discrete images, one can consider dx and dy in terms of the number of pixels between two points. Taking dx = dy = 1 (pixel spacing) at the point with pixel coordinates (i, j) gives
Gx = f(i + 1, j) − f(i, j)                           (2.3)
Gy = f(i, j + 1) − f(i, j)                           (2.4)
In order to detect the presence of a gradient discontinuity, one can calculate the change in the gradient at (i, j). This can be done by finding the following magnitude measure
M = sqrt(Gx^2 + Gy^2)                                (2.5)
and the gradient direction is given by
θ = arctan(Gy / Gx)                                  (2.6)
A. METHOD OF FILTER DESIGN
There are many methods of detecting edges; the majority of them may be grouped into two categories:
i. Gradient: The gradient method detects edges by looking for the maximum and minimum in the first derivative of the image, e.g. Roberts, Prewitt and Sobel, where the detected features have very sharp edges (see Fig. 3.1).
ii. Laplacian: The Laplacian method searches for zero crossings in the second derivative of the image to find edges, e.g. Marr-Hildreth and Laplacian of Gaussian. An edge has the one-dimensional shape of a ramp, and calculating the derivative of the image can highlight its location (see Fig. 3.2).
Edges may be viewpoint dependent: these are edges that may change as the viewpoint changes, and they typically reflect the geometry of the scene, which in turn reflects the properties of the viewed objects such as surface markings and surface shape. A typical edge might be the border between a block of red colour and a contrasting block of yellow. However, when one looks at the pixels of such an image, the visible portions of an edge appear compacted into a narrow band of pixels.
Fig. 3.1: The Gradient Method (input image and output edges)
Fig. 3.2: The Laplacian Method (input image and output edges)
The Sobel operator is a discrete differentiation operator which computes an approximation of the gradient of the image intensity function (Sobel, 1968). The difference operators in eq. (2.3) and (2.4) correspond to convolving the image with the following masks:
[ −1  +1 ]  and  [ −1  +1 ]ᵀ                         (3.1)
When this is done:
i. The top left-hand corner of the appropriate mask is superimposed over each pixel of the image in turn.
ii. A value is calculated for x or y by using the mask coefficients in a weighted sum of the value of pixel (i, j) and its neighbours.
iii. These masks are referred to as convolution masks or sometimes convolution kernels. Instead of finding approximate gradient components along the x and y directions, the gradient components can also be approximated along directions at 45° and 135° to the axes.
iv. An advantage of using a larger mask size is that errors due to the effects of noise are reduced by local averaging within the neighbourhood of the mask. An advantage of using a mask of odd size is that the operators are centred and can therefore provide an estimate based on a centre pixel (i, j).
The Sobel edge operator masks are given as
Gx = [ −1  0  +1 ;  −2  0  +2 ;  −1  0  +1 ]         (3.2)
Gy = [ +1  +2  +1 ;   0   0   0 ;  −1  −2  −1 ]      (3.3)
The operator calculates the gradient of the image intensity at every point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and hence how likely it is that that part of the image represents an edge and how the edge is likely to be oriented. In practice, the magnitude calculation (the likelihood of an edge) is more reliable and easier to interpret than the direction calculation. The gradient of a two-variable function (the image intensity function) at each image point is a 2D vector whose components are given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of the largest possible increase in intensity, and the length of the gradient vector corresponds to the rate of change in that direction. As a result, the Sobel operator yields a zero vector at any image point in a region of constant image intensity, and at a point on an edge it yields a vector that points across the edge, from darker to brighter values. The algorithm for developing the Sobel model for edge detection is given below.
PSEUDO CODE
Input: A sample image
Output: Detected edges
Step 1: Accept the input image.
Step 2: Apply the masks Gx and Gy to the input image.
Step 3: Apply the Sobel edge detection algorithm and compute the gradient.
Step 4: Manipulate the masks Gx and Gy separately on the input image.
Step 5: Combine the results to find the absolute magnitude of the gradient:
|G| = sqrt(Gx^2 + Gy^2)                              (3.4)
Step 6: The absolute magnitude is the output edge map (see the sketch below).
Fig 3.3: Detected Image
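As an illustrative sketch (not the authors' implementation) of the pseudocode, the 3×3 Sobel masks of eqs. (3.2)-(3.4) can be applied directly with conv2; the file name and threshold value are placeholders:

```matlab
% Sketch of Steps 1-6: Sobel gradient masks applied by 2-D convolution.
I  = im2double(imread('face01.pgm'));   % Step 1: input image (placeholder file name)
Gx = [-1 0 1; -2 0 2; -1 0 1];          % eq. (3.2): horizontal-gradient mask
Gy = [ 1 2 1;  0 0 0; -1 -2 -1];        % eq. (3.3): vertical-gradient mask
gx = conv2(I, Gx, 'same');              % Steps 2-4: apply each mask to the image
gy = conv2(I, Gy, 'same');
G  = sqrt(gx.^2 + gy.^2);               % Step 5: eq. (3.4), absolute gradient magnitude
edges = G > 0.3 * max(G(:));            % Step 6: threshold the magnitude (illustrative value)
imshow(edges);
```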
IV. SELF-ORGANIZING MAPS
A. OVERVIEW
The self-organizing map (SOM), also known as a Kohonen map, is a well-known artificial neural network. It is an unsupervised learning process, which learns the distribution of a set of patterns without any class information, and it has the property of topology preservation. There is a competition among the neurons to be activated or fired, and only the one neuron that wins the competition is fired; it is called the "winner" [5]. A SOM network identifies a winning neuron using the same procedure as employed by a competitive layer. However, instead of updating only the winning neuron, all neurons within a certain neighborhood of the winning neuron are updated using the Kohonen rule. The Kohonen rule allows the weights of a neuron to learn an input vector, which makes it useful in recognition applications. Hence, in this system, a SOM is employed to classify DCT-based vectors into groups in order to identify whether the subject in the input image is "present" or "not present" in the image database [3].
B. NETWORK ARCHITECTURE
SOMs can be one-dimensional, two-
dimensional or multi-dimensional maps. The
number of input connections in a SOM network
depends on the number of attributes to be used in
the classification [4].
Fig. 4 Architecture of a simple SOM
The input vector p shown in Fig. 4 is the row of pixels of the input compressed image. The ||dist|| box accepts the input vector p and the input weight matrix IW^{1,1} and produces a vector having S^1 elements. The elements are the negatives of the distances between the input vector and the vectors iIW^{1,1} formed from the rows of the input weight matrix; that is, the ||dist|| box computes the net input n^1 of a competitive layer by finding the Euclidean distance between the input vector p and the weight vectors. The competitive transfer function C accepts a net input vector for a layer and returns neuron outputs of 0 for all neurons except the winner, the neuron associated with the most positive element of the net input n^1; the winner's output is 1. The neuron whose weight vector is closest to the input vector has the least negative net input and therefore wins the competition to output a 1. Thus the competitive transfer function C produces a 1 for the output element a^1_i corresponding to i*, the "winner"; all other output elements in a^1 are 0 [6].
n^1_i = −|| iIW^{1,1} − p ||                         (4.1)
a^1 = compet(n^1)                                    (4.2)
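A small sketch (assumed, not from the paper) of what eqs. (4.1) and (4.2) compute for one input vector; the weight matrix, its size and the input are placeholders (element-wise expansion requires MATLAB R2016b or later):

```matlab
% Sketch of eqs. (4.1)-(4.2): negative Euclidean distances followed by a winner-take-all.
% IW is a placeholder S1-by-64 weight matrix; p is one 64-by-1 compressed image vector.
S1 = 16;                                  % number of neurons (illustrative)
IW = rand(S1, 64);                        % random weights standing in for a trained map
p  = rand(64, 1);                         % one input vector
n1 = -sqrt(sum((IW - p.').^2, 2));        % eq. (4.1): negative distance to each weight row
a1 = zeros(S1, 1);                        % eq. (4.2): competitive transfer function
[~, winner] = max(n1);                    % most positive (least negative) net input wins
a1(winner) = 1;
```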
Thus, when a vector p is presented, the
weights of the winning neuron and its close
neighbours move toward p. Consequently, after
many presentations, neighbouring neurons learn
vectors similar to each other[6]. Hence, the SOM
network learns to categorize the input vectors it
sees.
The SOM network used here contains N nodes ordered in a two-dimensional lattice structure. In one- and two-dimensional maps, each node has 2 or 4 neighboring nodes, respectively. Typically, a SOM has a life cycle of three phases: the learning phase, the training phase and the testing phase.
C. UNSUPERVISED LEARNING
During the learning phase, the neuron whose weights are closest to the input data vector is declared the winner. Then the weights of all neurons in the neighborhood of the winning neuron are adjusted by an amount inversely proportional to the Euclidean distance. The map clusters and classifies the data set based on the set of attributes used. The learning algorithm is summarized as follows [4]:
1. Initialization: Choose random values for the initial weight vectors wj(0), the weight vectors being different for j = 1, 2, ..., l, where l is the total number of neurons:
wj = [wj1, wj2, ..., wjn]^T ∈ R^n                    (4.3)
where n is the dimension of the input space.
2. Sampling: Draw a sample x from the input space with a certain probability:
x = [x1, x2, ..., xn]^T ∈ R^n                        (4.4)
3. Similarity matching: Find the best-matching (winning) neuron i(x) at time t, 0 < t ≤ n, using the minimum-distance Euclidean criterion:
i(x) = arg min_j ||x(n) − wj||,   j = 1, 2, ..., l   (4.5)
4. Updating: Adjust the synaptic weight vectors of all neurons using the update formula:
wj(n+1) = wj(n) + η(n) hj,i(x)(n) (x(n) − wj(n))     (4.6)
where η(n) is the learning-rate parameter and hj,i(x)(n) is the neighborhood function centered around the winning neuron i(x). Both η(n) and hj,i(x)(n) are varied dynamically during learning for best results.
5. Continue with step 2 until no noticeable changes in the feature map are observed (a minimal sketch of this loop is given below).
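A compact sketch of steps 1-5 for a small one-dimensional map; this is an assumption for illustration, not the authors' implementation, and the data matrix, map size and decay schedules are placeholders (implicit expansion requires MATLAB R2016b or later):

```matlab
% Sketch of the Kohonen learning loop (steps 1-5) for a 1-D map of l neurons.
% X is a placeholder d-by-N data matrix (one column per training vector).
d = 64;  N = 30;  l = 16;  nEpochs = 200;
X = rand(d, N);                                   % placeholder training data
W = rand(d, l);                                   % step 1: random initial weight vectors
eta0 = 0.5;  sigma0 = l/2;                        % initial learning rate and neighborhood width
for t = 1:nEpochs
    eta   = eta0   * exp(-t/nEpochs);             % decay the learning rate
    sigma = sigma0 * exp(-t/nEpochs);             % shrink the neighborhood
    for k = randperm(N)                           % step 2: sample inputs in random order
        x = X(:, k);
        [~, win] = min(sum((W - x).^2, 1));       % step 3: best-matching neuron
        h = exp(-((1:l) - win).^2 / (2*sigma^2)); % neighborhood function around the winner
        W = W + eta * (x - W) .* h;               % step 4: update all weight vectors
    end
end                                               % step 5: repeat until the map stabilizes
```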
Training images are mapped into a lower dimension using the SOM network, and the weight matrix of each image is stored in the training database. During recognition, the trained images are reconstructed from their weight matrices, and untrained test images are recognized using the Euclidean distance as the similarity measure. Training and testing for our system were performed using the MATLAB Neural Network Toolbox.
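How such a map might be created and trained with the Neural Network Toolbox is sketched below; the paper used MATLAB 7.2, where the older newsom interface applies, so the newer selforgmap call shown here is only an assumed equivalent, and the data matrix and map size are placeholders:

```matlab
% Sketch: training a SOM on 64-element image vectors with the Neural Network Toolbox.
% trainData stands in for the 64-by-30 matrix of compressed training images.
trainData = rand(64, 30) * 255;      % placeholder data in the [0 255] intensity range
net = selforgmap([8 8]);             % two-dimensional SOM lattice (illustrative size)
net.trainParam.epochs = 200;         % epoch count used in Table 5.1
net = train(net, trainData);         % unsupervised training with the Kohonen rule
winners = vec2ind(net(trainData));   % winning neuron index for each training vector
```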
D. TRAINING
During the training phase, labeled Sobel vectors are presented to the SOM one at a time. For each node, the number of "wins" is recorded along with the label of the input sample. The weight vectors of the nodes are updated as described in the learning phase. By the end of this stage, each node of the SOM has two recorded values: the total number of winning times for subjects present in the image database, and the total number of winning times for subjects not present in the image database [2].
E. TESTING
During the testing phase, each input vector is compared with all nodes of the SOM, and the best match is found based on the minimum Euclidean distance, as given in (4.5) [4]. The final output of the system, based on this recognition, displays whether the test image is "present" or "not present" in the image database.
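A toy sketch (an assumption about how the final decision could be coded, not the authors' method) of this minimum-distance test; all matrices and labels are placeholders:

```matlab
% Sketch: classify one 64-by-1 test vector by nearest SOM node (minimum Euclidean distance).
% W holds placeholder trained node weights; labels marks nodes whose training wins came
% mostly from subjects present in the database (true) or not present (false).
W      = rand(64, 16);                      % placeholder trained weights (one column per node)
labels = rand(1, 16) > 0.5;                 % placeholder node labels from the training phase
pTest  = rand(64, 1);                       % one compressed test image vector
[~, best] = min(sum((W - pTest).^2, 1));    % best-matching node, as in eq. (4.5)
if labels(best)
    disp('present');
else
    disp('not present');
end
```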
V. EXPERIMENTAL RESULTS
A. IMAGE DATABASE
A face image database was created for the purpose of benchmarking the face recognition system. The image database is divided into two subsets, for separate training and testing purposes. During SOM training, 30 images were used, covering six subjects with five images per subject showing different facial expressions. Fig. 5.1 shows the training and testing image database constructed.
Fig. 5.1: (a) Image database for training. (b) Untrained image for testing.
The face recognition system presented in this paper
was developed, trained, and tested using
MATLAB™ 7.2. The computer was a Windows XP
machine with a 3.00 GHz Intel Pentium 4 processor
and 1 GB of RAM.
B. VALIDATION OF TECHNIQUE
The pre-processed grayscale images of size
8 × 8 pixels are reshaped in MATLAB to form a 64
× 1 array with 64 rows and 1 column for each
image. This technique is performed on all 5 test
images to form the input data for testing the
recognition system. Similarly, the image database
for training uses 30 images and forms a matrix of 64
× 30 with 64 rows and 30 columns. The input
vectors defined for the SOM are distributed over a
2D-input space varying over [0 255], which
represents intensity levels of the gray scale pixels.
These are used to train the SOM with dimensions
[64 2], where 64 minimum and 64 maximum values
of the pixel intensities are represented for each
image sample. The resulting SOM created with
these parameters is a single-layer feed forward SOM
map with 128 weights and a competitive transfer
function. The weight function of this network is the
negative of the Euclidean distance[3]. As many as 5
test images are used with the image database for
performing the experiments. Training and testing
sets were used without any overlapping. Fig. 5.3
shows the result of the face recognition system
simulated in MATLAB using the image database
and test input image shown in Fig. 5.1.
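A minimal sketch (assumed file handling, not the authors' script) of this preprocessing step, building the 64 × 30 training matrix from the 8 × 8 compressed grayscale images; the file names are placeholders:

```matlab
% Sketch: build the 64-by-30 training matrix from 30 pre-processed 8x8 grayscale images.
% The file naming pattern 'train_XX.png' is a placeholder.
nTrain = 30;
trainData = zeros(64, nTrain);
for k = 1:nTrain
    I = imread(sprintf('train_%02d.png', k));    % one 8x8 compressed face image
    trainData(:, k) = double(reshape(I, 64, 1)); % 64-by-1 column per image, range [0 255]
end
```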
Fig. 5.2: Training and testing image database
Fig. 5.3: Result of the face recognition system. (a) Untrained input image for testing. (b) Best-match image of the subject found in the training database.
Fig. 5.4: (a) Weight vector graph (axes W(i,1), W(i,2), W(i,3)). (b) SOM layer weights graph (layer weights vs. magnitude).
Table 5.1: Comparison between the DCT- and SED-based face recognition systems for 5 images (200 epochs)

Image   Time             DCT        SED
1       Training time    10.2594    Unmatched
        Execution time   27.9148    -
2       Training time    10.1553    10.1383
        Execution time   36.0948    25.0202
3       Training time    10.3623    9.9243
        Execution time   26.3857    29.4493
4       Training time    10.3039    10.0291
        Execution time   36.8931    29.0859
5       Training time    10.2635    10.0649
        Execution time   24.2630    12.0557
Recognition rate         100%       83.33%
This table shows that the training and execution times with SED are less than with DCT, but the recognition rate provided by DCT is 100%.
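The paper does not describe how the timings in Tables 5.1 and 5.2 were collected; one plausible way, reusing the hedged toolbox sketch from Section IV, is MATLAB's tic/toc timers (data matrices are placeholders):

```matlab
% Sketch: measuring training and execution (recognition) time with tic and toc.
trainData = rand(64, 30) * 255;        % placeholder 64-by-30 training matrix
testData  = rand(64, 5)  * 255;        % placeholder 64-by-5 test matrix
net = selforgmap([8 8]);
net.trainParam.epochs = 200;
tic;
net = train(net, trainData);           % training time for 200 epochs
trainingTime = toc;
tic;
winners = vec2ind(net(testData));      % execution (recognition) time for the test vectors
executionTime = toc;
fprintf('Training: %.4f s, Execution: %.4f s\n', trainingTime, executionTime);
```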
Table 5.2: Comparison for one image at different epoch counts between the DCT- and SED-based face recognition systems

Image    Epochs   Time             DCT        Sobel
S01_01   100      Training time    5.9940     4.6029
                  Execution time   115.7445   14.0523
S01_01   500      Training time    24.8952    24.4737
                  Execution time   43.3684    43.3329
S01_01   1000     Training time    46.2868    45.6684
                  Execution time   59.6358    68.7433
This table shows that Sobel is faster than DCT.
VI. CONCLUSION
This paper met all of its objectives through the successful design of an efficient high-speed face recognition system. The SED implementation in MATLAB was successful: all face images were compressed to the desired size and quality, and the Self-Organizing Maps (SOMs) proved to be highly accurate for recognizing a variety of face images with different facial expressions under uniform lighting conditions with light backgrounds. In conclusion, SED and the SOM neural network are at the heart of the design and implementation, and they are the final algorithms used for the design of an efficient high-speed face recognition system.
References
[1] D. Kumar, C.S. Rai, and S. Kumar, "Face Recognition using Self-Organizing Map and Principal Component Analysis", in Proc. Int. Conf. on Neural Networks and Brain, ICNNB 2005, Vol. 3, Oct 2005, pp. 1469-1473.
[2] Y. Zi Lu and Z. You Wei, "Facial Expression Recognition Based on Wavelet Transform and MLP Neural Network", in Proc. 7th International Conference on Signal Processing, ICSP 2004, Vol. 2, Aug 2004, pp. 1340-1343.
[3] E. Aybar, "Topolojik Kenar İşleçleri" (Topological Edge Operators), Ph.D. thesis, Anadolu Üniversitesi, Fen Bilimleri Enstitüsü, 2003.
[4] Image Processing Toolbox (for use with MATLAB) User's Guide, The MathWorks Inc., 2000.
[5] J. Nagi, "Design of an Efficient High-speed Face Recognition System", Department of Electrical and Electronics Engineering, College of Engineering, Universiti Tenaga Nasional, March 2007.
[6] A. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, "A New Face Detection Technique using 2D DCT and Self Organizing Feature Map", in Proc. of World Academy of Science, Engineering and Technology, Vol. 21, May 2007, pp. 15-19.
About the Author
Nisha Soni was born on 7 November 1984 at Bhanpura, Madhya Pradesh (India). She completed her Bachelor of Engineering degree in the Electronics & Communication Engineering branch at M.L.V. Textile & Tech. Engineering College, Bhilwara, University of Rajasthan, in 2007. She has registered for an M.Tech (full time) in the Digital Communication branch at the Electronics Engineering Department, Jaipur Engineering College, Kukas, under Rajasthan Technical University (Kota).
More Related Content

What's hot

2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...IOSR Journals
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET Journal
 
Improvement of the Recognition Rate by Random Forest
Improvement of the Recognition Rate by Random ForestImprovement of the Recognition Rate by Random Forest
Improvement of the Recognition Rate by Random ForestIJERA Editor
 
Improvement oh the recognition rate by random forest
Improvement oh the recognition rate by random forestImprovement oh the recognition rate by random forest
Improvement oh the recognition rate by random forestYoussef Rachidi
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...eSAT Journals
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...eSAT Publishing House
 
A new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowA new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowAlexander Decker
 
An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...IJCI JOURNAL
 
Edge detection by modified otsu method
Edge detection by modified otsu methodEdge detection by modified otsu method
Edge detection by modified otsu methodcsandit
 
EDGE DETECTION BY MODIFIED OTSU METHOD
EDGE DETECTION BY MODIFIED OTSU METHOD EDGE DETECTION BY MODIFIED OTSU METHOD
EDGE DETECTION BY MODIFIED OTSU METHOD cscpconf
 
Image segmentation ajal
Image segmentation ajalImage segmentation ajal
Image segmentation ajalAJAL A J
 
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...IOSR Journals
 

What's hot (19)

2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...
 
By33458461
By33458461By33458461
By33458461
 
Ex4301908912
Ex4301908912Ex4301908912
Ex4301908912
 
F045033337
F045033337F045033337
F045033337
 
PPT s06-machine vision-s2
PPT s06-machine vision-s2PPT s06-machine vision-s2
PPT s06-machine vision-s2
 
IRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation PrincipleIRJET- Image Feature Extraction using Hough Transformation Principle
IRJET- Image Feature Extraction using Hough Transformation Principle
 
Gabor Filter
Gabor FilterGabor Filter
Gabor Filter
 
Improvement of the Recognition Rate by Random Forest
Improvement of the Recognition Rate by Random ForestImprovement of the Recognition Rate by Random Forest
Improvement of the Recognition Rate by Random Forest
 
Improvement oh the recognition rate by random forest
Improvement oh the recognition rate by random forestImprovement oh the recognition rate by random forest
Improvement oh the recognition rate by random forest
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...
 
A new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowA new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial window
 
An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...
 
Edge detection by modified otsu method
Edge detection by modified otsu methodEdge detection by modified otsu method
Edge detection by modified otsu method
 
EDGE DETECTION BY MODIFIED OTSU METHOD
EDGE DETECTION BY MODIFIED OTSU METHOD EDGE DETECTION BY MODIFIED OTSU METHOD
EDGE DETECTION BY MODIFIED OTSU METHOD
 
Image segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
Image segmentation using wvlt trnsfrmtn and fuzzy logic. pptImage segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
Image segmentation using wvlt trnsfrmtn and fuzzy logic. ppt
 
Vol2no2 17
Vol2no2 17Vol2no2 17
Vol2no2 17
 
Image segmentation ajal
Image segmentation ajalImage segmentation ajal
Image segmentation ajal
 
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...
Design of Gabor Filter for Noise Reduction in Betel Vine leaves Disease Segme...
 

Viewers also liked (20)

Da34617622
Da34617622Da34617622
Da34617622
 
Dr34722727
Dr34722727Dr34722727
Dr34722727
 
Dy34760764
Dy34760764Dy34760764
Dy34760764
 
Jt3417681771
Jt3417681771Jt3417681771
Jt3417681771
 
Ab32189194
Ab32189194Ab32189194
Ab32189194
 
At32307310
At32307310At32307310
At32307310
 
Ap32283286
Ap32283286Ap32283286
Ap32283286
 
Am32265271
Am32265271Am32265271
Am32265271
 
Ae32208215
Ae32208215Ae32208215
Ae32208215
 
Mr3422682272
Mr3422682272Mr3422682272
Mr3422682272
 
Projecte d'aprofundiment dels coneixements definitivo
Projecte d'aprofundiment dels coneixements definitivoProjecte d'aprofundiment dels coneixements definitivo
Projecte d'aprofundiment dels coneixements definitivo
 
นำเสนอผอ.ใหม่
นำเสนอผอ.ใหม่นำเสนอผอ.ใหม่
นำเสนอผอ.ใหม่
 
Estadísticas.
Estadísticas.Estadísticas.
Estadísticas.
 
Sgc ava (notas)
Sgc ava (notas)Sgc ava (notas)
Sgc ava (notas)
 
Historia fotografica
Historia fotograficaHistoria fotografica
Historia fotografica
 
Guilherme Abbud 14.30 Sala C
Guilherme Abbud 14.30 Sala CGuilherme Abbud 14.30 Sala C
Guilherme Abbud 14.30 Sala C
 
OBJETOS DE APRENDIZAJE
OBJETOS DE APRENDIZAJEOBJETOS DE APRENDIZAJE
OBJETOS DE APRENDIZAJE
 
para papai
para papaipara papai
para papai
 
Leithold el calculo - español - 7a.ed.
Leithold   el calculo - español - 7a.ed.Leithold   el calculo - español - 7a.ed.
Leithold el calculo - español - 7a.ed.
 
Curso 100h
Curso 100hCurso 100h
Curso 100h
 

Similar to Ed34785790

ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEY
ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEYALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEY
ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEYsipij
 
EDGE Detection Filter for Gray Image and Observing Performances
EDGE Detection Filter for Gray Image and Observing PerformancesEDGE Detection Filter for Gray Image and Observing Performances
EDGE Detection Filter for Gray Image and Observing PerformancesIOSR Journals
 
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...paperpublications3
 
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...IJECEIAES
 
A NOBEL HYBRID APPROACH FOR EDGE DETECTION
A NOBEL HYBRID APPROACH FOR EDGE  DETECTIONA NOBEL HYBRID APPROACH FOR EDGE  DETECTION
A NOBEL HYBRID APPROACH FOR EDGE DETECTIONijcses
 
Study and Comparison of Various Image Edge Detection Techniques
Study and Comparison of Various Image Edge Detection TechniquesStudy and Comparison of Various Image Edge Detection Techniques
Study and Comparison of Various Image Edge Detection TechniquesCSCJournals
 
Land Boundary Detection of an Island using improved Morphological Operation
Land Boundary Detection of an Island using improved Morphological OperationLand Boundary Detection of an Island using improved Morphological Operation
Land Boundary Detection of an Island using improved Morphological OperationCSCJournals
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Editor IJARCET
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Editor IJARCET
 
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
Performance of Efficient Closed-Form Solution to Comprehensive Frontier ExposurePerformance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposureiosrjce
 
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...IJCSEIT Journal
 
Algorithm for the Comparison of Different Types of First Order Edge Detection...
Algorithm for the Comparison of Different Types of First Order Edge Detection...Algorithm for the Comparison of Different Types of First Order Edge Detection...
Algorithm for the Comparison of Different Types of First Order Edge Detection...IOSR Journals
 
International Journal of Image Processing (IJIP) Volume (3) Issue (1)
International Journal of Image Processing (IJIP) Volume (3) Issue (1)International Journal of Image Processing (IJIP) Volume (3) Issue (1)
International Journal of Image Processing (IJIP) Volume (3) Issue (1)CSCJournals
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Evaluate Combined Sobel-Canny Edge Detector for Image Procssing
Evaluate Combined Sobel-Canny Edge Detector for Image ProcssingEvaluate Combined Sobel-Canny Edge Detector for Image Procssing
Evaluate Combined Sobel-Canny Edge Detector for Image Procssingidescitation
 

Similar to Ed34785790 (20)

ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEY
ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEYALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEY
ALGORITHM AND TECHNIQUE ON VARIOUS EDGE DETECTION: A SURVEY
 
EDGE Detection Filter for Gray Image and Observing Performances
EDGE Detection Filter for Gray Image and Observing PerformancesEDGE Detection Filter for Gray Image and Observing Performances
EDGE Detection Filter for Gray Image and Observing Performances
 
G010124245
G010124245G010124245
G010124245
 
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...
Hardware Unit for Edge Detection with Comparative Analysis of Different Edge ...
 
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing...
 
A NOBEL HYBRID APPROACH FOR EDGE DETECTION
A NOBEL HYBRID APPROACH FOR EDGE  DETECTIONA NOBEL HYBRID APPROACH FOR EDGE  DETECTION
A NOBEL HYBRID APPROACH FOR EDGE DETECTION
 
Study and Comparison of Various Image Edge Detection Techniques
Study and Comparison of Various Image Edge Detection TechniquesStudy and Comparison of Various Image Edge Detection Techniques
Study and Comparison of Various Image Edge Detection Techniques
 
Land Boundary Detection of an Island using improved Morphological Operation
Land Boundary Detection of an Island using improved Morphological OperationLand Boundary Detection of an Island using improved Morphological Operation
Land Boundary Detection of an Island using improved Morphological Operation
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251
 
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
Performance of Efficient Closed-Form Solution to Comprehensive Frontier ExposurePerformance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure
 
I010634450
I010634450I010634450
I010634450
 
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
 
Iw3515281533
Iw3515281533Iw3515281533
Iw3515281533
 
A010110104
A010110104A010110104
A010110104
 
Algorithm for the Comparison of Different Types of First Order Edge Detection...
Algorithm for the Comparison of Different Types of First Order Edge Detection...Algorithm for the Comparison of Different Types of First Order Edge Detection...
Algorithm for the Comparison of Different Types of First Order Edge Detection...
 
International Journal of Image Processing (IJIP) Volume (3) Issue (1)
International Journal of Image Processing (IJIP) Volume (3) Issue (1)International Journal of Image Processing (IJIP) Volume (3) Issue (1)
International Journal of Image Processing (IJIP) Volume (3) Issue (1)
 
Bx4301429434
Bx4301429434Bx4301429434
Bx4301429434
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Evaluate Combined Sobel-Canny Edge Detector for Image Procssing
Evaluate Combined Sobel-Canny Edge Detector for Image ProcssingEvaluate Combined Sobel-Canny Edge Detector for Image Procssing
Evaluate Combined Sobel-Canny Edge Detector for Image Procssing
 

Recently uploaded

Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfjimielynbastida
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 

Recently uploaded (20)

Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Science&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdfScience&tech:THE INFORMATION AGE STS.pdf
Science&tech:THE INFORMATION AGE STS.pdf
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 

Ed34785790

  • 1. Nisha Soni, Garima Mathur, Mahendra Kumar / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.785-790 785 | P a g e A Matlab Based High Speed Face Recognition System Using Som Neural Networks Nisha Soni1 , Garima Mathur2 , Mahendra Kumar3 Department of Electronics1, 2, 3 JEC Kukas1, 2 , Mewar University Gangrar3 Jaipur1, 2 , Chittorgarh3 , India Abstract Face recognition (FR) is a challenging issue due to variations in pose, illumination, and expression. The search results for most of the existing FR methods are satisfactory but still included irrelevant images for the target image. We have introduced a new technique uses an image-based approach towards artificial presents a new technique for human face recognition. This intelligence by removing redundant data from face images through image compression using Sobel Edge Detection (SED) and comparing this with the two-dimensional discrete cosine transform (2D- DCT) method for the better speed and efficiency. SED which is a popular edge detection method is considered in this work. There exists a function, edge.m which is in the image toolbox. In the edge function, the Sobel method uses the derivative approximation to find edges. Therefore, it returns edges at those points where the gradient of the considered image is maximum. The developed algorithm for the face recognition system formulates an image-based approach, using the Two-Dimensional Discrete Cosine Transform (2D- DCT) or SED for image compression and the Self- Organizing Map (SOM) Neural Network for recognition purposed, simulated in MATLAB. Key Words-- Face recognition; discrete cosine transform (DCT); sobel edge detection (SED); SOM network. I. INTRODUCTION Using computers to do image processing has two objectives: First, create more suitable images for people to observe and identify. Second, we wish that computers can automatically recognize and understand images. The edge of an image is the most basic features of the image. It contains a wealth of internal information of the image. Edge detection is the process of localizing pixel intensity transitions. The edge detection has been used by object recognition, target tracking, segmentation, and etc. Therefore, the edge detection is one of the most important parts of image processing. The current image edge detection methods are mainly differential operator technique and high-pass filtration. Among these methods, the most primitive of the differential and gradient edge detection methods are complex and the effects are not satisfactory. The widely used operators such as Sobel, Prewitt, Roberts and Laplacian are sensitive to noises and their anti-noise performances are poor. The Log and Canny edge detection operators which have been proposed use Gaussian function to smooth or do convolution to the original image, but the computations are very large. This paper mainly used the Sobel operator method to do edge detection processing on the images It has been proved that the effect by using this method to do edge detection is very good and its anti-noise performance is very strong too accuracy. The Sobel edge detector uses two masks, one vertical and one horizontal. These masks are generally used 3×3 matrices. Especially, the matrices which have 3×3 dimensions are used in MATLAB (see, edge.m). The masks of the Sobel edge detection are extended to 5×5 dimensions [1], are constructed in this work. 
A matlab function, called as Sobel 5×5, is developed by using these new matrices. Matlab, which is a product of The Mathworks Company, contains has a lot of toolboxes. One of these toolboxes is image toolbox which has many functions and algorithms [2]. Edge function which contains several detection methods (Sobel, Prewitt, Roberts, Canny, etc) is used by the user. The image set, which consist of 8 images (256×256), is used to test Sobel3×3 and Sobel5×5 edge detectors in Matlab. II. THE PRINCIPLE OF EDGE DETECTION In digital image, the so-called edge is a collection of the pixels whose gray value has a step or roof change, and it also refers to the part where the brightness of the image local area changes significantly. The gray profile in this region can generally be seen as a step. That is, in a small buffer area, a gray value rapidly changes to another whose gray value is largely different with it. Edge widely exists between objects and backgrounds, objects and objects, primitives and primitives. The edge of an object is reflected in the discontinuity of the gray. Therefore, the general method of edge detection is to study the changes of a single image pixel in a gray area, use the variation of the edge neighbouring
  • 2. Nisha Soni, Garima Mathur, Mahendra Kumar / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 4, Jul-Aug 2013, pp.785-790 786 | P a g e first order or second-order to detect the edge. This method is used to refer as local operator edge detection method. Edge detection is mainly the measurement, detection and location of the changes in image gray. Image edge is the most basic features of the image. When we observe the objects, the clearest part we see firstly is edge and line. According to the composition of the edge and line, we can know the object structure. Therefore, edge extraction is an important technique in graphics processing and feature extraction. The basic idea of edge detection is as follows: First, use edge enhancement operator to highlight the local edge of the image. Then, define the pixel "edge strength" and set the threshold to extract the edge point set. However, because of the noise and the blurring image, the edge detected may not be continuous. So, edge detection includes two contents. First is using edge operator to extract the edge point set. Second is removing some of the edge points from the edge point set, filling it with some another and linking the obtained edge point set into lines. III. AN EDGE DETECTION SOBEL FILTER DESIGN Compared to other edge operator, Sobel has two main advantages: 1) since the introduction of the average factor, it has some smoothing effect to the random noise of the image. 2) Because it is the differential of two rows or two columns, so the element of the edge on both sides has been enhanced, so that the edge seems thick and bright. Most edge detection methods work on the assumption that the edge occurs where there is a discontinuity in the intensity function or a very steep intensity gradient in the image. Using this assumption, if one take the derivative of the intensity value across the image and find points where the derivative is maximum, then the edge could be located. The gradient is a vector, whose components measure how rapid pixel value are changing with distance in the x and y direction. Thus, the components of the gradient may be found using the following approximation: (2.1) (2.2) where dx and dy measure distance along the x and y directions respectively. In discrete images, one can consider dx and dy in terms of numbers of pixel between two points. dx = dy = 1 (pixel spacing) is the point at which pixel coordinates are(i,j) thus, (2.3) (2.4) In order to detect the presence of a gradient discontinuity, one could calculate the change in the gradient at (i, j) .This can be done by finding the following magnitude measure (2.5) and the gradient direction is given by (2.6) A. METHOD OF FILTER DESIGN There are many methods of detecting edges; the majority of different methods may be grouped into these two categories: i. Gradient: The gradient method detects the edges by looking for the maximum and minimum in the first derivative of the image. For example Roberts, Prewitt, Sobel where detected features have very sharp edges. (see Figure 1) ii. Laplacian: The Laplacian method searches for zero crossings in the second derivative of the image to find edges e.g. Marr-Hildreth, Laplacian of Gaussian etc. An edge has one dimensional shape of a ramp and calculating the derivative of the image can highlight its location (see Figure 2). 
Edges may be viewpoint dependent: such edges change as the viewpoint changes and typically reflect the geometry of the scene, which in turn reflects properties of the viewed objects such as surface markings and surface shape. A typical edge might be the border between a block of red and a block of yellow. In practice, however, when one looks at the pixels of such an image, the visible portions of one edge are compacted into only a few pixels.

Fig. 3.1: The gradient method (input image and output edges)
Fig. 3.2: The Laplacian method (input image and output edges)
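Both families of detectors are available through the Image Processing Toolbox edge function. A minimal usage sketch comparing one detector from each family is given below; the sample image is an assumption, not the paper's data set.

% Minimal sketch: gradient-family vs. Laplacian-family detectors via edge().
I       = imread('cameraman.tif');              % sample image (assumed)
bwSobel = edge(I, 'sobel');                     % gradient method (first derivative)
bwLog   = edge(I, 'log');                       % Laplacian of Gaussian (second derivative)
imshowpair(bwSobel, bwLog, 'montage');          % compare the two edge maps side by side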
The Sobel operator is a discrete differentiation operator which computes an approximation of the gradient of the image intensity function (Sobel, 1968). The difference operators in Eqs. (2.3) and (2.4) correspond to convolving the image with the following masks:

$M_x = \begin{bmatrix} -1 & 1 \end{bmatrix}, \qquad M_y = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$   (3.1)

When this is done:
i. The top left-hand corner of the appropriate mask is superimposed over each pixel of the image in turn.
ii. A value is calculated for x or y by using the mask coefficients in a weighted sum of the value of pixel (i, j) and its neighbours.
iii. These masks are referred to as convolution masks or sometimes convolution kernels. Instead of finding approximate gradient components along the x and y directions, the gradient components can also be approximated along directions at 45° and 135° to the axes.
iv. An advantage of using a larger mask is that errors due to the effects of noise are reduced by local averaging within the neighbourhood of the mask. An advantage of using a mask of odd size is that the operator is centred and can therefore provide an estimate based on a centre pixel (i, j).

The Sobel edge operator masks are given as

$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}$   (3.2)

$G_y = \begin{bmatrix} +1 & +2 & +1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$   (3.3)

The operator calculates the gradient of the image intensity at every point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction. The result therefore shows how "abruptly" or "smoothly" the image changes at that point, and hence how likely it is that that part of the image represents an edge and how the edge is likely to be oriented. In practice, the magnitude (likelihood of an edge) is more reliable and easier to interpret than the direction. The gradient of a two-variable function (the image intensity function) at each image point is a 2D vector whose components are the derivatives in the horizontal and vertical directions. At each image point the gradient vector points in the direction of the largest possible increase in intensity, and its length corresponds to the rate of change in that direction. Consequently, the Sobel operator yields a zero vector at any point lying in a region of constant intensity, and at a point on an edge it yields a vector that points across the edge, from darker to brighter values. The algorithm for developing the Sobel model for edge detection is given below (a MATLAB sketch of this procedure follows Section IV-A).

PSEUDOCODE
Input: a sample image
Output: detected edges
Step 1: Accept the input image.
Step 2: Apply the masks Gx and Gy to the input image.
Step 3: Apply the Sobel edge detection algorithm and compute the gradient.
Step 4: Manipulate the masks Gx and Gy separately on the input image.
Step 5: Combine the results to find the absolute magnitude of the gradient,
$|G| = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|$   (3.4)
Step 6: The absolute magnitude gives the output edges.

Fig. 3.3: Detected image

IV. SELF-ORGANIZING MAPS
A. OVERVIEW
The self-organizing map (SOM), also known as a Kohonen map, is a well-known artificial neural network. It uses an unsupervised learning process, which learns the distribution of a set of patterns without any class information, and it has the property of topology preservation. There is a competition among the neurons to be activated or fired; the result is that only the one neuron that wins the competition is fired, and it is called the "winner" [5].
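The Sobel procedure outlined in the pseudocode above can be sketched in MATLAB as follows. This is an illustrative sketch rather than the authors' implementation; the sample image and the threshold value are assumptions, and the explicit result is compared with the toolbox edge function for reference.

% Sketch of Steps 1-6: explicit Sobel convolution, compared with edge().
I  = im2double(imread('cameraman.tif'));          % Step 1: input image (assumed sample)
Sx = [-1 0 1; -2 0 2; -1 0 1];                    % horizontal mask Gx, Eq. (3.2)
Sy = [ 1 2 1;  0 0 0; -1 -2 -1];                  % vertical mask Gy, Eq. (3.3)
Gx = conv2(I, Sx, 'same');                        % Steps 2-4: apply each mask separately
Gy = conv2(I, Sy, 'same');
G  = sqrt(Gx.^2 + Gy.^2);                         % Step 5: gradient magnitude, Eq. (3.4)
edges = G > 0.5;                                  % Step 6: threshold (value is an assumption)
imshowpair(edges, edge(I, 'sobel'), 'montage');   % compare with the toolbox Sobel detector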
A SOM network identifies a winning neuron using the same procedure as employed by a competitive layer. However, instead of updating only the winning neuron, all neurons within a certain neighborhood of the winning neuron are updated using the Kohonen rule. The Kohonen rule allows the weights of a neuron to learn an input vector, and because of this it is useful in recognition applications. Hence, in this system, a SOM is employed to classify DCT-based vectors into groups to identify if the subject in the input image is "present" or "not present" in the image database [3].

B. NETWORK ARCHITECTURE
SOMs can be one-dimensional, two-dimensional, or multi-dimensional maps. The number of input connections in a SOM network depends on the number of attributes to be used in the classification [4].
Fig. 4: Architecture of a simple SOM

The input vector p shown in Fig. 4 is the row of pixels of the input compressed image. The ||dist|| box accepts the input vector p and the input weight matrix IW^{1,1} and produces a vector with S^1 elements. The elements are the negatives of the distances between the input vector and the vectors _iIW^{1,1} formed from the rows of the input weight matrix; in other words, the ||dist|| box computes the net input n^1 of the competitive layer from the Euclidean distance between the input vector p and the weight vectors. The competitive transfer function C accepts a net input vector for a layer and returns neuron outputs of 0 for all neurons except the winner, the neuron associated with the most positive element of the net input n^1; the winner's output is 1. The neuron whose weight vector is closest to the input vector has the least negative net input and therefore wins the competition to output a 1. Thus the competitive transfer function C produces a 1 for the output element a^1_i corresponding to i*, the "winner", and all other output elements in a^1 are 0 [6]:

$n^1_i = -\left\| {}_i IW^{1,1} - p \right\|$   (4.1)

$a^1 = \mathrm{compet}(n^1)$   (4.2)

Thus, when a vector p is presented, the weights of the winning neuron and its close neighbours move toward p. Consequently, after many presentations, neighbouring neurons learn vectors similar to each other [6]. Hence, the SOM network learns to categorize the input vectors it sees. The SOM network used here contains N nodes ordered in a two-dimensional lattice structure; in a one-dimensional or two-dimensional lattice, each node has 2 or 4 neighbouring nodes, respectively. Typically, a SOM has a life cycle of three phases: the learning phase, the training phase, and the testing phase.

C. UNSUPERVISED LEARNING
During the learning phase, the neuron with weights closest to the input data vector is declared the winner. Then the weights of all the neurons in the neighborhood of the winning neuron are adjusted by an amount inversely proportional to the Euclidean distance. The map clusters and classifies the data set based on the set of attributes used. The learning algorithm is summarized as follows [4]:

1. Initialization: choose random values for the initial weight vectors wj(0), the weight vectors being different for j = 1, 2, ..., l, where l is the total number of neurons:
$w_j = [w_{j1}, w_{j2}, \ldots, w_{jn}]^T \in \mathbb{R}^n$   (4.3)

2. Sampling: draw a sample x from the input space with a certain probability:
$x = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^n$   (4.4)

3. Similarity matching: find the best matching (winning) neuron i(x) at time t, 0 < t ≤ n, using the minimum-distance Euclidean criterion:
$i(x) = \arg\min_j \left\| x(n) - w_j \right\|, \quad j = 1, 2, \ldots, l$   (4.5)

4. Updating: adjust the synaptic weight vectors of all neurons using the update formula
$w_j(n+1) = w_j(n) + \eta(n)\, h_{j,i(x)}(n) \left( x(n) - w_j(n) \right)$   (4.6)
where η(n) is the learning-rate parameter and h_{j,i(x)}(n) is the neighborhood function centered around the winning neuron i(x). Both η(n) and h_{j,i(x)}(n) are varied dynamically during learning for best results.

5. Continue with step 2 until no noticeable changes in the feature map are observed.

Training images are mapped into a lower dimension using the SOM network, and the weight matrix of each image is stored in the training database. During recognition, trained images are reconstructed from their weight matrices, and recognition of untrained test images is carried out using the Euclidean distance as the similarity measure.
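A from-scratch MATLAB sketch of the learning loop in Eqs. (4.3)–(4.6) is given below. It is not the toolbox implementation used in this work; the Gaussian neighbourhood function, the exponential decay schedules, the map size, and the random stand-in data are all assumptions made for illustration.

% Minimal sketch of the Kohonen learning rule, Eqs. (4.3)-(4.6).
% X holds P training vectors of dimension n; the map is an L1-by-L2 lattice.
n = 64; P = 30; L1 = 8; L2 = 8; L = L1*L2;         % illustrative dimensions
X = rand(n, P);                                    % stand-in for the training vectors
W = rand(n, L);                                    % Eq. (4.3): random initial weights
[gx, gy] = meshgrid(1:L1, 1:L2);                   % lattice coordinates of the L neurons
pos = [gx(:) gy(:)];
epochs = 200; eta0 = 0.5; sigma0 = max(L1, L2)/2;
for t = 1:epochs
    eta   = eta0   * exp(-t/epochs);               % decaying learning rate eta(n)
    sigma = sigma0 * exp(-t/epochs);               % shrinking neighbourhood radius
    for k = randperm(P)                            % Eq. (4.4): sample the inputs
        x = X(:, k);
        [~, win] = min(sum((W - x).^2, 1));        % Eq. (4.5): winning neuron i(x)
        d2 = sum((pos - pos(win, :)).^2, 2);       % squared lattice distance to the winner
        h  = exp(-d2 / (2*sigma^2))';              % neighbourhood function h_{j,i(x)}
        W  = W + eta * (x - W) .* h;               % Eq. (4.6): Kohonen update
    end
end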
Training and testing for our system were performed using the MATLAB Neural Network Toolbox.

D. TRAINING
During the training phase, labeled Sobel-based vectors are presented to the SOM one at a time. For each node, the number of "wins" is recorded along with the label of the input sample. The weight vectors of the nodes are updated as described in the learning phase. By the end of this stage, each node of the SOM has two recorded values: the total number of winning times for subjects present in the image database, and the total number of winning times for subjects not present in the image database [2].

E. TESTING
During the testing phase, each input vector is compared with all nodes of the SOM, and the best match is found based on the minimum Euclidean distance, as given in (4.5) [4]. Based on this recognition, the final output of the system displays whether the test image is "present" or "not present" in the image database.

V. EXPERIMENTAL RESULTS
A. IMAGE DATABASE
A face image database was created for the purpose of benchmarking the face recognition system. The image database is divided into two subsets, for separate training and testing purposes. During SOM training, 30 images were used,
containing six subjects, with each subject having 5 images with different facial expressions. Fig. 5.1 shows the training and testing image database constructed.

Fig. 5.1: Image database: (a) image database for training; (b) untrained image for testing

The face recognition system presented in this paper was developed, trained, and tested using MATLAB 7.2. The computer was a Windows XP machine with a 3.00 GHz Intel Pentium 4 processor and 1 GB of RAM.

B. VALIDATION OF TECHNIQUE
The pre-processed grayscale images of size 8 × 8 pixels are reshaped in MATLAB to form a 64 × 1 array (64 rows, 1 column) for each image. This is performed on all 5 test images to form the input data for testing the recognition system. Similarly, the image database for training uses 30 images and forms a 64 × 30 matrix (64 rows, 30 columns). The input vectors defined for the SOM are distributed over a 2D input space varying over [0, 255], which represents the intensity levels of the grayscale pixels. These are used to train the SOM with dimensions [64 2], where the 64 minimum and 64 maximum values of the pixel intensities are represented for each image sample. The resulting SOM created with these parameters is a single-layer feed-forward SOM with 128 weights and a competitive transfer function; the weight function of this network is the negative of the Euclidean distance [3]. An illustrative MATLAB sketch of this pre-processing and training pipeline is given after the figure captions below. As many as 5 test images are used with the image database for performing the experiments. The training and testing sets were used without any overlap. Fig. 5.3 shows the result of the face recognition system simulated in MATLAB using the image database and test input image shown in Fig. 5.1.

Fig. 5.2: Training and testing image database
Fig. 5.3: Result of the face recognition system: (a) untrained input image for testing; (b) best-match image of the subject found in the training database
Fig. 5.4: (a) Weight vector graph; (b) SOM layer weight graph
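The pre-processing and training described above can be illustrated with the following MATLAB sketch. It is not the authors' code: the file names and the 8×8 map size are assumptions, and selforgmap, train, and vec2ind are Neural Network Toolbox functions used here in place of the exact [64 2] configuration reported in the paper.

% Illustrative sketch of the pre-processing and SOM training pipeline (assumptions noted above).
numTrain  = 30;
trainData = zeros(64, numTrain);                      % 64 x 30 training matrix
for k = 1:numTrain
    img = imread(sprintf('train_%02d.jpg', k));       % hypothetical file names
    if size(img, 3) == 3, img = rgb2gray(img); end
    img = imresize(img, [8 8]);                       % pre-processed 8 x 8 grayscale image
    trainData(:, k) = double(reshape(img, 64, 1));    % 64 x 1 column per image
end

net = selforgmap([8 8]);                              % 2-D SOM lattice (size assumed)
net = train(net, trainData);                          % unsupervised training

testImg = imresize(rgb2gray(imread('test_01.jpg')), [8 8]);   % hypothetical test image
testVec = double(reshape(testImg, 64, 1));
winner  = vec2ind(net(testVec));                      % winning neuron for the test vector
matches = vec2ind(net(trainData));                    % winning neurons for training images
bestIdx = find(matches == winner, 1);                 % training image mapped to the same node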
Table 5.1: Comparison between DCT- and SED-based face recognition systems for 5 images (200 epochs)

Image   Measure            DCT        SED
1       Training time      10.2594    Unmatched
1       Execution time     27.9148    —
2       Training time      10.1553    10.1383
2       Execution time     36.0948    25.0202
3       Training time      10.3623     9.9243
3       Execution time     26.3857    29.4493
4       Training time      10.3039    10.0291
4       Execution time     36.8931    29.0859
5       Training time      10.2635    10.0649
5       Execution time     24.2630    12.0557
        Recognition rate   100%       83.33%

This table shows that the training and execution times with SED are lower than with DCT, but the recognition rate provided by DCT is 100%.

Table 5.2: Comparison between DCT- and Sobel-based face recognition for one image at different numbers of epochs

Image    Epochs   Measure          DCT        Sobel
S01_01   100      Training time      5.9940     4.6029
S01_01   100      Execution time   115.7445    14.0523
S01_01   500      Training time     24.8952    24.4737
S01_01   500      Execution time    43.3684    43.3329
S01_01   1000     Training time     46.2868    45.6684
S01_01   1000     Execution time    59.6358    68.7433

This table shows that the Sobel-based system is generally faster than the DCT-based system.

VI. CONCLUSION
This paper met its objectives through the successful design of an efficient high-speed face recognition system. The SED implementation in MATLAB was successful: all face images were compressed to the desired size and quality, and the Self-Organizing Map (SOM) proved highly accurate in recognizing a variety of face images with different facial expressions under uniform lighting conditions and light backgrounds. In conclusion, SED and the SOM neural network are at the heart of the design and implementation, and are the final algorithms used for this efficient high-speed face recognition system.

References
[1] D. Kumar, C. S. Rai, and S. Kumar, "Face Recognition using Self-Organizing Map and Principal Component Analysis," in Proc. Int. Conf. on Neural Networks and Brain (ICNNB 2005), vol. 3, Oct. 2005, pp. 1469-1473.
[2] Y. Zi Lu and Z. You Wei, "Facial Expression Recognition Based on Wavelet Transform and MLP Neural Network," in Proc. 7th Int. Conf. on Signal Processing (ICSP 2004), vol. 2, Aug. 2004, pp. 1340-1343.
[3] E. Aybar, "Topolojik Kenar İşleçleri" (Topological Edge Operators), Ph.D. thesis, Fen Bilimleri Enstitüsü, Anadolu Üniversitesi, 2003.
[4] Image Toolbox (for use with MATLAB) User's Guide, The MathWorks, Inc., 2000.
[5] J. Nagi, "Design of an Efficient High-Speed Face Recognition System," Department of Electrical and Electronics Engineering, College of Engineering, Universiti Tenaga Nasional, Mar. 2007.
[6] A. Abdallah, M. Abou El-Nasr, and A. Lynn Abbott, "A New Face Detection Technique Using 2D DCT and Self Organizing Feature Map," in Proc. of World Academy of Science, Engineering and Technology, vol. 21, May 2007, pp. 15-19.

About the Author
Nisha Soni was born on 7 November 1984 at Bhanpura, Madhya Pradesh (India). She completed a Bachelor of Engineering degree in Electronics & Communication Engineering at M.L.V. Textile & Tech. Engineering College, Bhilwara, University of Rajasthan, in 2007. She is registered for a full-time M.Tech. in Digital Communication at the Electronics Engineering Department, Jaipur Engineering College, Kukas, under Rajasthan Technical University (Kota).