V.Karthikeyan * et al. / (IJITR) INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY AND RESEARCH
Volume No. 1, Issue No. 2, February - March 2013, 207 - 210

Reflectance Perception Model Based Face
Recognition in Dissimilar Illumination
Conditions
V. KARTHIKEYAN
Department of ECE,
SVS College of Engineering,
Coimbatore-109,
Tamilnadu, India

V. J. VIJAYALAKSHMI
Department of EEE,
SKCET,
Coimbatore-46,
Tamilnadu, India

P. JEYAKUMAR
Student,
Department of ECE,
Karpagam University,
Coimbatore-105

Abstract—Reflectance perception based face recognition under different illumination conditions is presented. Face recognition algorithms have to deal with significant illumination variations between gallery and probe images, and many state-of-the-art commercial face recognition algorithms still struggle with this problem. In the proposed work a new preprocessing algorithm is presented that compensates for illumination variations in images, together with a robust Principal Component Analysis (PCA) based facial feature extraction that reduces the dimension of the image by removing unwanted vectors through weighted eigenfaces. The proposed work demonstrates large performance improvements with several standard face recognition algorithms across multiple, publicly available face databases.
Keywords- Face Recognition, Principal Component Analysis, Reflectance Perception, Illumination Variations

I. INTRODUCTION
Humans often use faces to recognize individuals, and advancements in computing capability over the past few decades now make similar recognition possible automatically. Early face recognition algorithms used simple geometric models, but the recognition process has since matured into a science of sophisticated mathematical representations and matching processes.
Major advancements and initiatives in the past ten to fifteen years have propelled face recognition technology into the spotlight; it can be used for both identification and verification. In order to build a robust and efficient face recognition system, the problem of lighting variation is one of the main technical challenges facing designers, and the focus of this paper is mainly on robustness to lighting variations. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image. Hybrid Fourier features are then extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by Principal Component Analysis. Along with this, multiple face models are generated from several normalized face images that have different eye distances. Finally, the scores from the multiple complementary classifiers are combined by a weighted score fusion scheme.
II. RELATED WORKS
Automated face recognition is a relatively new
concept. Developed in the 1960s, the first semi-automated system for face recognition required the
administrator to locate features such as eyes, ears,
nose and mouth on the photographs before it
calculated distances and ratios to a common reference
point which were then compared to the referenced
data. Along with pose variation, illumination is the most significant factor affecting the appearance of faces. Significant changes in lighting occur between days and between indoor and outdoor environments. Even greater variation can be seen between two images of the same person taken under two different conditions, say one in a studio and the other outdoors. Due to the 3D shape of the face, a direct lighting source can cast strong shadows that highlight or diminish certain facial features. Evaluations of face recognition algorithms consistently show that state-of-the-art systems cannot deal with large differences in illumination conditions between gallery and probe images [1-3].
In recent years many appearance-based algorithms have been proposed to deal with this problem and find a proper solution to it [4-7]. [5] showed that the set of images of an object in fixed pose but under varying illumination forms a convex cone in the space of images, and the illumination cones of human faces can be approximated well by low-dimensional linear subspaces [8]. The linear subspaces are typically estimated from training data, requiring multiple images of the object under different illumination conditions. Alternatively, model-based approaches have been proposed to address the problem: [9] showed how to fit a previously constructed morphable 3D model to single images. Though the algorithm works well across pose and illumination, its computational expense is very high. To eliminate the tarnished halo effect, [10] introduced the low curvature image simplifier (LCIS), a hierarchical decomposition of an image. Each component in this
hierarchy is computed by solving a partial differential
equation inspired by anisotropic diffusion [11]. At
each hierarchical level the method segments the
image into smooth (low-curvature) regions while
stopping at sharp discontinuities. The algorithm is
computationally intensive and requires manual
selection of no less than 8 different parameters.
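To make the notion of anisotropic diffusion concrete, the sketch below shows one explicit iteration of the classic Perona-Malik scheme that such decompositions are inspired by. It is a generic, illustrative Python sketch rather than the LCIS algorithm itself; the exponential conductance function and the kappa and gamma values are assumptions chosen only for the example.

```python
import numpy as np

def anisotropic_diffusion_step(img, kappa=20.0, gamma=0.2):
    """One explicit Perona-Malik iteration: smooth within regions while
    damping diffusion across strong edges (large intensity differences)."""
    img = img.astype(float)
    # differences towards the four neighbours (np.roll wraps at the border,
    # which is acceptable for a small illustration)
    north = np.roll(img, -1, axis=0) - img
    south = np.roll(img,  1, axis=0) - img
    east  = np.roll(img, -1, axis=1) - img
    west  = np.roll(img,  1, axis=1) - img
    # conductance: close to 1 in flat areas, close to 0 at sharp discontinuities
    c = lambda d: np.exp(-(d / kappa) ** 2)
    return img + gamma * (c(north) * north + c(south) * south +
                          c(east) * east + c(west) * west)
```

Iterating a few such steps smooths low-curvature regions while largely stopping at sharp discontinuities, which is the behaviour the hierarchical decomposition exploits at each level.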
III. PREPROCESSING ALGORITHM
3.1. The Reflectance Perception Model
This algorithm makes use of two widely accepted assumptions about human vision: 1) human vision is mostly sensitive to scene reflectance and mostly insensitive to the illumination conditions, and 2) human vision responds to local changes in contrast rather than to global brightness levels. These two assumptions are closely related, since local contrast is a function of reflectance. Our main goal is to find an estimate of L(x, y) such that, when it divides I(x, y), it produces R(x, y) in which the local contrast is appropriately and accurately enhanced. In this view R(x, y) takes the place of the perceived sensation, while I(x, y) takes the place of the input stimulus; 1/L(x, y) is then called the perception gain, which maps the input stimulus into the perceived sensation:

R(x, y) = I(x, y) \cdot \frac{1}{L(x, y)}            (1)

Here R is mostly the reflectance of the scene and L is mostly the illumination field. After all, humans perceive reflectance details both in shadows and in bright regions, but they are also aware of the presence of shadows.
To derive our model, we look for evidence from experiments on brightness perception. Weber's law states that the sensitivity threshold to a small intensity change increases proportionally to the signal level [12]. The law follows from experiments in which an observer is exposed to a uniform field of intensity I containing a disk whose brightness is gradually increased by a quantity \Delta I. The value \Delta I_{th} at which the observer first perceives the disk against the background is called the brightness discrimination threshold. Weber concluded that \Delta I_{th} / I is constant over a wide range of intensity values.
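A small numerical illustration of Weber's conclusion may help: if the Weber fraction is constant, the discrimination threshold grows in proportion to the background intensity. The fraction of 0.02 below is an assumed, illustrative value, not one taken from this work.

```python
# Weber's law: the just-noticeable intensity increment grows with the background level.
weber_fraction = 0.02                    # assumed constant Weber fraction for illustration
for I in (10, 50, 100, 200):             # background intensity levels
    delta_I_th = weber_fraction * I      # brightness discrimination threshold
    print(f"I = {I:3d}  ->  delta_I_th = {delta_I_th:.1f}")
# The same +1 step in intensity is clearly visible against a dark background (I = 10)
# but falls below the threshold against a bright one (I = 200).
```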

Fig 1: Compressive logarithmic mapping emphasizes
changes at low stimulus levels and attenuates changes
at high stimulus levels.
Weber's law gives a theoretical justification for assuming a logarithmic mapping from input stimulus to perceived sensation, as illustrated in Fig. 1. Because of the logarithmic mapping, when the stimulus is weak, for example in deep shadows, small changes in the input stimulus elicit large changes in perceived sensation; when the stimulus is strong, small changes in the input stimulus are mapped to even smaller changes in perceived sensation. In fact, local variations \Delta I in the input stimulus are mapped to perceived-sensation variations \Delta R with the gain 1/L, which is given as

\Delta R = \Delta I \cdot \frac{1}{L},  \quad  L = I_n(x, y),            (2)

where I_n(x, y) is the stimulus level in a small neighborhood of (x, y) in the input image. Comparing equations (1) and (2), we finally obtain the model for the perception gain:

\frac{1}{L(x, y)} = \frac{1}{I_n(x, y)}            (3)

where the neighborhood stimulus level I_n(x, y) is by definition taken to be the stimulus at the point (x, y). The solution for L can be found by minimizing the functional

J(L) = \iint_\Omega \rho(x, y)\,(L - I)^2 \, dx\, dy \;+\; \lambda \iint_\Omega (L_x^2 + L_y^2) \, dx\, dy            (4)
where the first term drives the solution to follow the perception gain model, the second term imposes a smoothness constraint, and \Omega refers to the image. The parameter \lambda controls the relative importance of the two terms, while the space-varying permeability weight \rho(x, y) controls the anisotropic nature of the smoothing constraint. The Euler-Lagrange equation for this calculus-of-variations problem, \rho(L - I) - \lambda(L_{xx} + L_{yy}) = 0, yields

L - \frac{\lambda}{\rho}\,(L_{xx} + L_{yy}) = I            (5)

On a rectangular lattice, the linear differential equation becomes


L_{i,j} + \frac{\lambda}{h^2} \sum_{(k,l) \in N_4(i,j)} \frac{1}{\rho_{(i,j),(k,l)}} \left( L_{i,j} - L_{k,l} \right) = I_{i,j}            (6)

where h denotes the pixel grid size, N_4(i,j) is the set of four neighbors of pixel (i, j), and each \rho value is taken in the middle of the edge between the center pixel and the corresponding neighbor. In this formulation \rho controls the anisotropic nature of the smoothing by modulating the permeability between pixel neighbors: the higher the local contrast across an edge, the weaker the smoothing across it. Multigrid algorithms solve this sparse linear system fairly efficiently, with complexity roughly linear in the number of pixels. To get a relative measure of local contrast that equally respects boundaries in shadows and in brightly lit regions (i.e., under different illumination conditions), the weight is taken as the Weber contrast

\rho_{a,b} = \frac{|I_a - I_b|}{\min(I_a, I_b)}            (7)

where \rho_{a,b} is the weight between two neighboring pixels whose intensities are I_a and I_b.
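To make the preprocessing step concrete, the sketch below assembles the sparse linear system of equation (6) with Weber-contrast edge weights as in (7), solves it for the illumination field L, and divides the input image by L to obtain the compensated image R of equation (1). It is a minimal illustration, not the authors' implementation: the grid size is taken as h = 1, the value of lambda and the small regulariser that caps the coupling in perfectly flat regions are assumed, and SciPy's direct sparse solver is used in place of a multigrid scheme.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def estimate_reflectance(I, lam=5.0, eps=1e-3):
    """Sketch of the reflectance perception model preprocessing.

    Solves  L_p + lam * sum_q (1/rho_pq) * (L_p - L_q) = I_p  (eq. 6, h = 1)
    for the illumination field L and returns R = I / L (eq. 1), where rho_pq
    is the Weber contrast between neighbouring pixels (eq. 7).
    lam and eps are assumed values for this illustration only.
    """
    I = I.astype(float) + eps                    # avoid division by zero in dark pixels
    height, width = I.shape
    n = height * width
    idx = np.arange(n).reshape(height, width)
    flat = I.ravel()

    rows, cols, vals = [np.arange(n)], [np.arange(n)], [np.ones(n)]   # the L_p term
    for dy, dx in ((0, 1), (1, 0)):              # right and down neighbours: each edge once
        p = idx[:height - dy, :width - dx].ravel()
        q = idx[dy:, dx:].ravel()
        rho = np.abs(flat[p] - flat[q]) / np.minimum(flat[p], flat[q])  # Weber contrast (eq. 7)
        wgt = lam / (rho + 0.01)                 # weak coupling across high-contrast edges;
                                                 # 0.01 is an assumed flat-region regulariser
        # each edge contributes symmetrically to both endpoint equations
        rows += [p, p, q, q]
        cols += [p, q, q, p]
        vals += [wgt, -wgt, wgt, -wgt]

    A = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, n)).tocsr()  # duplicate entries are summed
    L = spsolve(A, flat).reshape(height, width)
    return I / np.maximum(L, eps)                # R = I / L, the illumination-compensated image
```

Dividing by a heavily smoothed version of the image (for example, a Gaussian blur) is a cheaper surrogate for L; the variational solution above differs in that the contrast-dependent edge weights keep the illumination estimate from smoothing across strong discontinuities, which is what the anisotropic weighting is intended to achieve.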

IV. PRINCIPAL COMPONENT ANALYSIS
BASED FEATURE EXTRACTION
PCA, commonly referred to as the use of eigenfaces, is the technique pioneered by Kirby and Sirovich.
With PCA, the probe and gallery images must be the
same size and must first be normalized to line up the
eyes and mouth of the subjects within the image. The
PCA approach is then used to reduce the dimension
of the data by means of data compression basics and
reveals the most effective low dimensional structure
of facial patterns. This reduction in dimensions
removes information that is not useful and precisely
decomposes the face structure into orthogonal
components known as eigenfaces. Each face image may be represented as a weighted sum of the eigenfaces, which are stored in a 1D array. A probe image
is compared against a gallery image by measuring the
distance between their respective feature vectors. The
PCA approach typically requires the full frontal face to be presented each time; otherwise, recognition performance degrades. The primary advantage of this technique is that it can reduce the data needed to identify an individual to 1/1000th of the data presented.
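As an illustration of the eigenface computation described above, the following sketch performs PCA by a singular value decomposition of the mean-centred gallery and matches a probe by nearest neighbour in the reduced space. This is a generic textbook-style sketch, not the authors' code; the number of components and the Euclidean distance are assumptions.

```python
import numpy as np

def train_eigenfaces(gallery, n_components=20):
    """gallery: (n_images, height*width) array of flattened, geometrically
    normalized face images (same size, eyes and mouth aligned)."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # rows of Vt are orthonormal eigenfaces; keep the leading n_components
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T        # each face as a weighted sum of eigenfaces
    return mean_face, eigenfaces, weights

def match(probe, mean_face, eigenfaces, gallery_weights):
    """Project a flattened probe image and return the index of the closest gallery face."""
    w = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(distances))
```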
The application of the PCA algorithm to the Yale face databases illustrates the improvement in accuracy; this is shown in Fig. 2.

Fig. 2. (a) Image before PCA; (b) image after the PCA algorithm.
SCORE FUSION BASED WEIGHTED SUM METHOD

There are many ways to combine the scores computed from the extracted facial features; among them, a simple, accurate and robust method is the weighted-sum technique. The fused score is computed as

s = \sum_i w_i s_i            (8)

where the weight w_i is the amount of confidence we have in the i-th classifier and s_i is its score. In this work we use 1/EER (the reciprocal of the equal error rate) as the measure of such confidence. Thus, we have a new score

s = \sum_i \frac{1}{\mathrm{EER}_i}\, s_i            (9)

The weighted-sum method based upon the EER is heuristic but intuitively appealing and easy to implement. In addition, it has the advantage of being robust to differences in statistics between the training data and the test data: even though the two may have different statistics, the relative strength of the component classifiers is less likely to change significantly, suggesting that the weighting still makes sense.
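The weighted-sum fusion of equations (8) and (9) translates directly into code. The sketch below uses made-up scores and EER values purely for illustration.

```python
import numpy as np

def fuse_scores(scores, eers):
    """Weighted-sum fusion (eqs. 8-9): weight each classifier's scores by 1/EER."""
    scores = np.asarray(scores, dtype=float)      # shape: (n_classifiers, n_gallery)
    weights = 1.0 / np.asarray(eers, dtype=float) # confidence of each classifier
    return weights @ scores                       # fused score per gallery entry

# illustrative usage with made-up numbers
scores = [[0.72, 0.40, 0.55],   # similarity scores from classifier 1
          [0.65, 0.48, 0.60]]   # similarity scores from classifier 2
eers = [0.05, 0.10]             # assumed equal error rates; classifier 1 gets twice the weight
print(fuse_scores(scores, eers))
```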
V. CONCLUSION
Thus, a robust face recognition system that is insensitive to illumination variations has been proposed. The RPM preprocessing algorithm builds on the experiments on brightness perception in which an observer is exposed to a uniform field of intensity. The PCA-based feature extraction reduces the dimension of the image by removing information that is not useful. Finally, the score fusion of the extracted features is done by the weighted-sum method. We introduced a
simple and automatic image-processing algorithm for
compensation of illumination-induced variations in
images. At the high level, the algorithm mimics some
aspects of human visual perception. If desired, the
user may adjust a single parameter whose meaning is
intuitive and simple to understand. The algorithm
delivers large performance improvements for standard face recognition algorithms across multiple
face databases.
VI. EXPERIMENTAL RESULTS

Fig. 3. Test image.
Fig. 4. Preprocessed image.
Fig. 5. Preprocessed image 2.
Fig. 6. Preprocessed image 3.
Fig. 7. Database image.
VII. REFERENCES
[1] Phillips, P., Moon, H., Rizvi, S., Rauss, P.: The FERET evaluation methodology for face-recognition algorithms. IEEE PAMI 22 (2000) 1090-1104.
[2] Blackburn, D., Bone, M., Phillips, P.: Facial recognition vendor test 2000: evaluation report (2000).
[3] Gross, R., Shi, J., Cohn, J.: Quo vadis face recognition? In: Third Workshop on Empirical Evaluation Methods in Computer Vision (2001).
[4] Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE PAMI 19 (1997).
[5] Belhumeur, P., Kriegman, D.: What is the set of images of an object under all possible lighting conditions? Int. J. of Computer Vision 28 (1998).
[6] Georghiades, A., Kriegman, D., Belhumeur, P.: From few to many: generative models for recognition under variable pose and illumination. IEEE PAMI (2001).
[7] Riklin-Raviv, T., Shashua, A.: The quotient image: class-based re-rendering and recognition with varying illumination conditions. IEEE PAMI (2001).
[8] Georghiades, A., Kriegman, D., Belhumeur, P.: Illumination cones for recognition under variable lighting: faces. In: Proc. IEEE Conf. on CVPR (1998).
[9] Blanz, V., Romdhani, S., Vetter, T.: Face identification across different poses and illumination with a 3D morphable model. In: IEEE Conf. on Automatic Face and Gesture Recognition (2002).
[10] Tumblin, J., Turk, G.: LCIS: a boundary hierarchy for detail-preserving contrast reduction. In: ACM SIGGRAPH (1999).

