International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 2, April 2022, pp. 1468~1476
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i2.pp1468-1476
Journal homepage: http://ijece.iaescore.com
A face recognition system using convolutional feature extraction
with linear collaborative discriminant regression classification
Sangamesh Hosgurmath¹, Viswanatha Vanjre Mallappa², Nagaraj B. Patil³, Vishwanath Petli²
¹Department of Electronics and Communication Engineering, Visvesvaraya Technological University, Belagavi, India
²Department of Electronics and Communication Engineering, H.K.E.S's S.L.N. College of Engineering, Raichur, India
³Department of Computer Science and Engineering, Government Engineering College Gangavathi, Karnataka, India
Article Info

Article history:
Received Dec 19, 2020
Revised Aug 3, 2021
Accepted Sep 1, 2021

ABSTRACT
Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, human face recognition remains a major problem for machine learning and deep learning techniques, since input images vary with people's poses, lighting conditions, expressions, ages, and illumination conditions, which makes the face recognition process poor in accuracy. In the present research, the resolution of the image patches is reduced by the max-pooling layer of a convolutional neural network (CNN), which also makes the model more robust than a traditional feature extraction technique such as the local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the optimization provided by the CNN features in LCDRC, the distance ratio between classes is maximized while the distance between features inside a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved 83.35% and 77.70% mean recognition accuracy on the ORL and YALE databases respectively, for training number 8 (i.e., 80% training and 20% testing data).
Keywords:
Convolutional neural network
Face recognition
Feature extraction techniques
Illumination conditions
High-dimensional features
This is an open access article under the CC BY-SA license.
Corresponding Author:
Sangamesh Hosgurmath
Department of Electronics and Communication Engineering, Visvesvaraya Technological University
Machhe
Belagavi, Karnataka 590018, India
Email: sangamesh.hosgurmath@gmail.com
1. INTRODUCTION
Biometric face recognition is an evolving area of work in the field of image processing [1]. The primary aim of biometrics is to distinguish people by visible characteristics such as their face, fingerprints, voice, and iris [2]. These traits are ideal for biometric identification, but the human face offers stronger protection for individual authentication [3], [4]. Face recognition is an effective biometric authentication technique because it is widely used in a number of applications, including social networking, access control, video monitoring, and law enforcement [5], [6]. There are two types of face recognition systems: appearance-based [7] and feature-based [8]. For successful face recognition, a feature-based framework is introduced in this research study. Variations in occlusion and illumination cause critical degradation of the image [9], [10]. The common issues in face recognition are illumination variations, frontal vs. profile views, expression variations, aging, and occlusions [11]. To tackle the above-stated problems, an effective convolutional neural network-linear collaborative discriminant regression classification (CNN-LCDRC) model is proposed in this research. The researchers developed five major
methods for face recognition: the geometrical characterization method, the subspace analysis method, the elastic graph matching method, the hidden Markov model method [12], and the neural network method [13]. The first four approaches are generally known as shallow learning because they can only exploit certain simple image features and rely on manually engineered sample features. Neural network-based methods are considered deep learning since they can extract more complicated features such as corner-point and plane features [14], [15]. Popular deep-learning approaches include the CNN, the deep belief network (DBN) [16], and the stacked denoising autoencoder (SDAE) [17]. In this analysis, the CNN is used to extract facial characteristics because it takes images as direct input and is resilient to image deformation, rotation, translation, and scaling. Additionally, it is relatively difficult to manually acquire facial features from face images, whereas a CNN can automatically extract effective facial features. Thus, the proposed combined CNN-LCDRC model increases recognition accuracy. Experiments are conducted on two datasets, ORL and YALE, in terms of mean accuracy and maximum accuracy. The scientific contributions of the proposed face recognition method in real time are: i) providing security to devices; ii) investigating criminals; iii) supporting law enforcement; iv) protecting against threats in business fields; v) controlling access to sensitive areas; and vi) facilitating secure transactions.
This research paper is organized as follows: section 2 presents a study of existing techniques related to face recognition. The proposed method is explained in section 3, and the validation of CNN-LCDRC on two datasets with a comparative study is described in section 4. Finally, the conclusion of the study is drawn in section 5.
2. LITERATURE REVIEW
In this section, a study of various existing techniques used to recognize faces is presented. In addition, the key benefits of the existing techniques and their limitations are described. Yang et al. [18] recognized the face and extracted the features by using a local multiple pattern (LMP) feature descriptor based on Weber's law. The changing directions were described by generating multiple feature maps using a modified ratio of Weber's law. For image representation, the histograms of non-overlapping regions in the feature maps were concatenated by LMP. However, the improved LMP was computationally efficient only when using integral images. Ouyang et al. [19] recognized the input face by implementing a hybrid approach that combined improved kernel linear discriminant analysis (IKLD) and a probabilistic neural network (PNN). At first, the relevant data were obtained by extracting the sample features, and the recognition problems were solved by the PNN. The computation time of IKLD with PNN was higher than that of the existing KLD with PNN when the training number was high.
Bhattacharya et al. [20] developed a local force pattern (LFP) based on local appearance for recognizing faces effectively. Compared to traditional pattern techniques, a discriminating code was produced by encoding the textures' directional information using LFP. The Van der Waals force between pixel pairs was computed for the micro-pattern structure, which was used to extract the information, and that directional information was encoded by a two-dimensional vector. The LFP method had high computation time when the training samples were imbalanced. Nikan and Ahmadi [21] identified input face images by implementing improved face recognition algorithms under degraded conditions. The optimum distinctive features were obtained by combining a discriminative feature extractor with pre-processing techniques used for classification. The facial images were pre-processed using enhanced complex wavelet transforms and multi-scale Weber pre-processing techniques. In order to improve the recognition rate, different feature extraction techniques, namely block-based local phase quantization and Gabor filters with principal component analysis, were used in that study. However, the algorithm was limited by the peak signal-to-noise ratio (PSNR) when the recognition rate was 0.8.
Peng et al. [22] solved the issues of cross-modality face recognition by developing a sparse graphical representation based discriminant analysis (SGR-DA). Adaptive sparse vectors were generated by constructing a Markov network model, which was used to represent face images from various modalities. SGR-DA improved discriminability, handled complex facial structures, and refined the adaptive sparse vectors for facial matching. In the thermal infrared image recognition experiments, SGR-DA provided poor performance because the infrared images contained less facial structure and more detailed information. The major problems faced by the existing feature extraction techniques include high computation time for extracting huge amounts of data as well as for training the classifier. To address these issues, this research study develops the CNN as the feature extraction technique and classifies the extracted features using the LCDRC classifier.
3. PROPOSED METHOD
In this research, the proposed CNN-LCDRC model comprises four phases: data collection, feature extraction, classification, and performance analysis, as shown in Figure 1.
Figure 1. Working flow of the proposed CNN-LCDRC technique
3.1. Collection of dataset
In this research study, two online datasets namely YALE and ORL are used for image collection. In
ORL database, 40 classes of ten facial images are collected for each and every individual, therefore 400 facial
images are presented in this database. In the research, large number of images from ORL database is taken and
YALE is taken from different angles for consideration. 40 images are considered for feature extraction but not
for classification. Figure 2 shows the sample images of ORL database, where some individual's facial images
are collected under different lighting variations, different facial expressions include eye open/closed, smiling
and without smiling, facial details include with glass and without glass. Moreover, 165 grayscale images are
presented in the YALE datasets, which are collected from 15 individuals. Figure 3 describes the sample images
of YALE dataset, where 11 facial images are collected from each subject. Each image is taken with various
facial expressions like normal, sleepy, wink, happy, with/without glasses, center-light, surprised and sad [23].
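These per-subject image counts map directly onto the "training number" used in section 4: for the ORL database (ten images per subject), training number 8 means eight images per subject for training and two for testing, i.e., an 80%/20% split. A minimal sketch (the function name and index-based selection are illustrative assumptions, not the paper's protocol):

```python
# Hypothetical per-subject split: "training number" n takes the first n
# images of each subject for training and the remainder for testing.
def split_by_training_number(images_per_subject, n_subjects, training_number):
    """Return (train, test) lists of (subject, image_index) pairs."""
    train, test = [], []
    for subject in range(n_subjects):
        for idx in range(images_per_subject):
            (train if idx < training_number else test).append((subject, idx))
    return train, test

# ORL: 40 subjects x 10 images, training number 8 -> 320 train / 80 test.
train, test = split_by_training_number(10, 40, 8)
print(len(train), len(test))  # 320 80
```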
Figure 2. Sample images of ORL dataset
Figure 3. Sample images of YALE dataset
3.2. Feature extraction
A CNN, which has convolution and pooling layers, is used as the feature extraction technique in this study. The CNN takes raw images as input, which avoids the complicated feature extraction and massive data handling of a conventional recognition algorithm. The complexity of the CNN can be reduced by using a weight sharing approach, which also reduces the number of parameters to be learned. The process
in which the entire image is filtered, which is defined as in (1) and (2):
𝑦𝑗
𝑙
= 𝑓(βˆ‘ π‘₯𝑖
π‘™βˆ’1
βˆ— π‘˜π‘–π‘—
𝑙
+ 𝑏𝑗
𝑙
π‘–βˆˆπ‘€ ) (1)
𝑓(π‘₯) = π‘šπ‘Žπ‘₯( 0, π‘₯) (2)
where π‘₯𝑖
π‘™βˆ’1
is the th
i input feature map at the 𝑙 βˆ’ 1π‘‘β„Ž
layer, 𝑦𝑗
𝑙
is the π‘—π‘‘β„Ž
output feature map at the l layer, M is
the set of feature maps at the 𝑙 βˆ’ 1π‘‘β„Ž
layer. π‘˜π‘–π‘—
𝑙
is the convolution kernel between the π‘–π‘‘β„Ž
input map at the layer
𝑙 βˆ’ 1π‘‘β„Ž
and the π‘—π‘‘β„Ž
Output map at the layer 𝑙, 𝑏𝑗
𝑙
is the bias of the π‘—π‘‘β„Ž
output map at the π‘™π‘‘β„Ž
layer. 𝑓(π‘₯) is
Rectified Linear Unit’s function.
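Equations (1) and (2) can be sketched in pure Python as follows; the tiny 3x3 input map, 2x2 kernel, and zero bias are illustrative values, not the paper's trained parameters:

```python
# Minimal sketch of equations (1)-(2): one output feature map
# y_j = ReLU(sum_i x_i * k_ij + b_j), using "valid" 2-D cross-correlation.
def conv2d_valid(x, k):
    """2-D valid cross-correlation of feature map x with kernel k."""
    kh, kw = len(k), len(k[0])
    out_h, out_w = len(x) - kh + 1, len(x[0]) - kw + 1
    return [[sum(x[r + i][c + j] * k[i][j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

def relu(v):
    return max(0.0, v)

def conv_layer_output(inputs, kernels, bias):
    """Equation (1): sum the per-input convolutions, add bias, apply ReLU."""
    h = len(inputs[0]) - len(kernels[0]) + 1
    w = len(inputs[0][0]) - len(kernels[0][0]) + 1
    acc = [[bias] * w for _ in range(h)]
    for x, k in zip(inputs, kernels):
        conv = conv2d_valid(x, k)
        for r in range(h):
            for c in range(w):
                acc[r][c] += conv[r][c]
    return [[relu(v) for v in row] for row in acc]

x = [[1, 2, 0], [0, 1, 3], [2, 1, 1]]   # one 3x3 input map (illustrative)
k = [[1, 0], [0, 1]]                    # one 2x2 kernel (illustrative)
print(conv_layer_output([x], [k], bias=0.0))  # [[2.0, 5.0], [1.0, 2.0]]
```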
In this paper, the CNN adopts max pooling. The max-pooling formula used in this research study is defined in (3):

$y_{j(m,n)}^{l+1} = \max_{0 \le r,\, k < s} \left\{ x_{(m \cdot s + r,\; n \cdot s + k)}^{l} \right\}$  (3)
where π‘š β‰₯ 0, 𝑛 β‰₯ 0, 𝑠 β‰₯ 0 and 𝑦𝑗(π‘š,𝑛)
𝑙+1
is the value of the neuron unit (π‘š, 𝑛) that in the π‘—π‘‘β„Ž
output feature map
𝑦𝑗
𝑙+1
at the 𝑙 + 1 layers, (π‘š. 𝑠 + π‘Ÿ, 𝑛. 𝑠 + π‘˜) is neuron unit in the π‘–π‘‘β„Ž
input map π‘₯𝑖
𝑙
at the 𝑙 + 1 layers, whose
corresponding value is π‘₯(π‘š.𝑠+π‘Ÿ,𝑛.𝑠+π‘˜)
𝑙
, 𝑦𝑗(π‘š,𝑛)
𝑙+1
is obtained by computing the big value over an 𝑠 Γ— 𝑠 non-
overlapping local region in the input map π‘₯𝑖
𝑙
. The CNN structure has nine layers as shown in the Figure 4,
where the layers includes batch normalization (BN)-3, convolution layers-3 and pool layers-3.
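Equation (3) amounts to sliding an s x s window over the input map without overlap and keeping only the maximum of each block. A minimal pure-Python sketch (the 4x4 feature map is an illustrative value):

```python
# Non-overlapping s x s max pooling as in equation (3); real code would
# use an optimized library, this is only a readable sketch.
def max_pool(feature_map, s):
    """Downsample a 2-D feature map (list of lists) by taking the max
    over each s x s non-overlapping block."""
    rows, cols = len(feature_map), len(feature_map[0])
    pooled = []
    for m in range(rows // s):
        row = []
        for n in range(cols // s):
            block = [feature_map[m * s + r][n * s + k]
                     for r in range(s) for k in range(s)]
            row.append(max(block))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 4],
        [5, 0, 1, 2],
        [7, 8, 9, 1],
        [2, 3, 4, 5]]
print(max_pool(fmap, 2))  # [[5, 4], [8, 9]]
```

Each pooled value halves the resolution per axis for s = 2, which is how the pooling layers reduce the resolution of the image patches mentioned in the abstract.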
Figure 4. Structure of proposed CNN model
The layers C1, C2, and C3 are convolution layers consisting of 30, 60, and 80 feature maps, respectively, which extract and combine features. Layers S1, S2, and S3 are sub-sampling layers whose number of feature maps equals the number of maps in their preceding convolution layers. The output layer must identify all the input images based on the extracted features. Two optimization methods are used in the design of the convolutional neural network. First, the rectified linear unit function $f(x)$, which describes neural signal activation well, replaces the sigmoid function in the convolutional layers. Second, BN is a core design block of the CNN structure; a typical CNN model has a large number of BN layers in its deep architecture. During training, BN requires mean and variance calculations over each mini-batch.
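To make the stage-by-stage structure concrete, the following sketch walks the feature-map size through the three conv/BN/pool stages. The 64x64 input size, 3x3 "valid" kernels, stride 1, and 2x2 pooling are assumptions for illustration; the paper does not state them:

```python
# Shape walk-through of a 3-stage conv + BN + pool stack (sizes assumed).
def conv_out(size, kernel=3, stride=1):   # "valid" convolution output size
    return (size - kernel) // stride + 1

def pool_out(size, s=2):                  # s x s non-overlapping pooling
    return size // s

size = 64
for n_maps in (30, 60, 80):               # C1/S1, C2/S2, C3/S3 map counts
    size = pool_out(conv_out(size))       # BN leaves the shape unchanged
    print(f"{n_maps} maps of {size}x{size}")
# 30 maps of 31x31, 60 maps of 14x14, 80 maps of 6x6
```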
3.3. Classification using linear collaborative discriminant regression classification
Classification is carried out using linear collaborative discriminant regression classification (LCDRC) after extracting the 2048 features from the facial images. LCDRC is a feature extraction method that employs discriminant analysis to achieve adequate discrimination in linear regression classification. It uses the labeled training data to create a more robust subspace on which to perform efficient discriminant regression classification. Discriminant analysis effectively learns a subspace in which data samples from different classes are far from each other while samples from the same class are close to each other. Therefore, LCDRC calculates an optimal projection that maximizes the ratio of the between-class reconstruction error over the within-class reconstruction error for linear regression classification. Due to the optimization provided by the CNN features in LCDRC, the distance ratio between classes is greatly maximized and the distance between features inside a class is greatly reduced. The proposed CNN feature extractor therefore supplies the representation with which LCDRC can best reduce the reconstruction error. Consequently, the proposed approach seeks a discriminating subspace by maximizing the between-class reconstruction error and minimizing the within-class reconstruction error.
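The reconstruction-error decision rule that LCDRC builds on (linear regression classification) can be sketched in pure Python: each class reconstructs the probe from its own training samples by least squares, and the class with the smallest reconstruction error wins. The toy feature vectors and the two-samples-per-class restriction (which keeps the normal equations a 2x2 solve) are assumptions for illustration, not the paper's data:

```python
# Least-squares class-wise reconstruction, as in linear regression
# classification; LCDRC applies this rule in a learned discriminant subspace.
def reconstruction_error(samples, probe):
    """Reconstruction error of probe from a class's two training samples."""
    a, b = samples
    # Normal equations for beta in min ||probe - (b1*a + b2*b)||^2.
    aa = sum(u * u for u in a); bb = sum(u * u for u in b)
    ab = sum(u * v for u, v in zip(a, b))
    ap = sum(u * v for u, v in zip(a, probe))
    bp = sum(u * v for u, v in zip(b, probe))
    det = aa * bb - ab * ab
    b1 = (bb * ap - ab * bp) / det
    b2 = (aa * bp - ab * ap) / det
    recon = [b1 * u + b2 * v for u, v in zip(a, b)]
    return sum((p - r) ** 2 for p, r in zip(probe, recon)) ** 0.5

def classify(class_samples, probe):
    """Assign the probe to the class with minimum reconstruction error."""
    errors = {c: reconstruction_error(s, probe)
              for c, s in class_samples.items()}
    return min(errors, key=errors.get)

classes = {"A": ([1.0, 0.0, 0.1], [0.9, 0.1, 0.0]),
           "B": ([0.0, 1.0, 0.2], [0.1, 0.9, 0.3])}
print(classify(classes, [0.95, 0.05, 0.05]))  # A
```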
4. RESULTS AND DISCUSSION
In this section, the validation of the proposed CNN-LCDRC is conducted on two datasets, ORL and YALE, in terms of mean recognition accuracy (average accuracy) and maximum recognition accuracy. The algorithm is simulated on a computer with a 1.8 GHz Pentium IV processor using MATLAB Version 18.a. Recognition accuracy is the parameter used to assess the efficiency of the proposed CNN-LCDRC. The mathematical expression for recognition accuracy is shown in (4).
π‘…π‘’π‘π‘œπ‘”π‘›π‘–π‘‘π‘–π‘œπ‘›π΄π‘π‘π‘’π‘Ÿπ‘Žπ‘π‘¦ =
𝑇𝑃+𝑇𝑁
𝐹𝑁+𝑇𝑃+𝑇𝑁+𝐹𝑃
Γ— 100 (4)
Where, true positive is described as TP, false positive is represented as FP, true negative is defined as TN and
false negative is illustrated as FN.
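Equation (4) translates directly into code; the confusion-matrix counts below are made up for illustration:

```python
# Recognition accuracy as in equation (4), as a percentage.
def recognition_accuracy(tp, tn, fp, fn):
    return 100 * (tp + tn) / (tp + tn + fp + fn)

print(recognition_accuracy(tp=72, tn=21, fp=4, fn=3))  # 93.0
```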
4.1. Analysis of proposed CNN-LCDRC on ORL database
The performance of the proposed CNN-LCDRC is compared with traditional LCDRC for different training set sizes. In the following tables, the training number represents the ratio of training to testing data for the input facial images. Table 1 presents the validated results of the proposed method in terms of mean accuracy, and Figure 5 illustrates the graphical representation of CNN-LCDRC.
For instance, at the ratio of 40% training and 60% testing data (training number 4), the proposed CNN-LCDRC method achieved 92.78% mean accuracy, whereas the LCDRC method achieved only 83.20%. The reason for the better performance over LCDRC is the use of the CNN as the feature extraction technique in this research study. Table 2 provides the comparative study of LCDRC and the proposed CNN-LCDRC in terms of maximum recognition accuracy, and Figure 6 presents the graphical presentation of CNN-LCDRC.
Table 1. Performance of proposed method on ORL database in terms of mean recognition accuracy (%)
Method               Training number
                     2      3      4      5      6      7      8
LCDRC                74.47  80.18  83.20  84.33  84.27  83.88  83.35
Proposed CNN-LCDRC   83.65  90.27  92.78  93.20  93.60  93.24  93.10
Figure 5. Graphical illustration of proposed CNN-LCDRC in terms of maximum recognition accuracy
Table 2. Performance of proposed method on ORL database in terms of maximum recognition accuracy (%)
Method               Training number
                     2      3      4      5      6      7      8
LCDRC                90.31  94.64  97.08  99.00  99.90  99.91  99.91
Proposed CNN-LCDRC   93.75  98.21  99.99  99.99  99.99  99.99  99.99
From Table 2, it is clear that the proposed CNN-LCDRC achieved higher maximum recognition accuracy than LCDRC. The proposed CNN-LCDRC achieved the highest maximum recognition accuracy (99.99%) because 50 iterations and 300 dimensions saturate the performance of CNN-LCDRC. The results prove that the CNN effectively extracts the input features from the ORL facial images. The next section explains the performance analysis of the proposed CNN-LCDRC on the YALE dataset.
4.2. Analysis of proposed CNN-LCDRC on YALE database
In this section, the performance of CNN-LCDRC and the traditional LCDRC technique on the YALE database is evaluated. The validated results are presented in Table 3, and the graphical representation on the basis of mean recognition accuracy is illustrated in Figure 6. For instance, CNN-LCDRC achieved only 82.18% mean accuracy on the YALE database, whereas the same method achieved 92.78% mean accuracy on the ORL database at the ratio of 40% training and 60% testing data (training number 4). Table 3 and Figure 7 describe the experimental results of CNN-LCDRC against traditional LCDRC in terms of maximum recognition accuracy on the YALE dataset. From these experiments, it is clear that the proposed CNN-LCDRC achieved better performance on the YALE database in terms of maximum recognition accuracy. At the ratio of 70% training and 30% testing data (training number 7), the proposed CNN-LCDRC method achieved nearly 98% maximum recognition accuracy, whereas traditional LCDRC achieved only 95%. The reason behind the high performance is the effective use of the CNN for feature extraction with the fewest number of layers.
Table 3. Performance of proposed method on YALE database in terms of mean recognition accuracy (%) and maximum recognition accuracy (%)
Method                         Training number
                               2      3      4      5      6      7      8
LCDRC (Mean)                   57.03  67.39  72.50  75.04  75.51  76.65  77.70
LCDRC (Maximum)                68.89  80.00  87.62  90.00  92.00  95.00  97.78
Proposed CNN-LCDRC (Mean)      67.50  76.95  82.18  85.61  86.30  86.83  87.60
Proposed CNN-LCDRC (Maximum)   77.04  85.00  93.33  95.56  97.33  98.33  99.99
Figure 6. Graphical representation of proposed CNN-LCDRC in terms of mean recognition accuracy on
YALE database
Figure 7. Graphical illustration of CNN-LCDRC in terms of maximum recognition accuracy (%) on YALE
dataset
4.3. Comparative study
In this section, the performance of CNN-LCDRC is compared with other existing techniques, namely sparse multiple maximum scatter difference (SMMSD) [24] and multiple maximum scatter difference (MMSD) [25]. The maximum scatter difference (MSD) discriminant criterion presents a binary discriminant criterion for pattern classification that uses the generalized scatter difference rather than the generalized Rayleigh quotient as the measure of class separability, which avoids the singularity problem by addressing small-sample-size issues. The comparison of the proposed CNN-LCDRC with the existing methods in terms of MSD is shown in Table 4. The error value of SMMSD [24] was 0.1907, and for MMSD [25] the error value was 0.0395. The comparison graph of CNN-LCDRC with the existing methods in terms of MSD is shown in Figure 8. The existing IKLD with PNN provides poor performance on the YALE database [26], but the proposed CNN-LCDRC provides better performance (87.60% mean recognition accuracy) when the training number increases.
Table 4. Comparison of proposed CNN-LCDRC with existing methods in terms of MSD
Authors            Methods      Maximum scatter difference
Yu et al. [24]     SMMSD        0.1907
Song [25]          MMSD         0.0395
Proposed method    CNN-LCDRC    0.0302
Figure 8. Comparison graph of CNN-LCDRC with existing methods in terms of error value for MSD
5. CONCLUSION
The main challenging area in the field of biometrics is the recognition of the face, due to various difficulties such as occlusions, variations of emotion, and aging factors. In order to solve these issues, the research study implemented a novel algorithm combining the CNN with the LCDRC classifier. In this study, the results showed that the proposed CNN-LCDRC technique achieved 99.99% maximum recognition accuracy on both the YALE and ORL databases at the ratio of 80% training and 20% testing data (training number 8). The LCDRC technique achieved only 97.78% to 99.91% maximum recognition accuracy on the YALE and ORL databases. The proposed CNN-LCDRC achieved the highest maximum recognition accuracy (99.99%) because 50 iterations and 300 dimensions saturate the performance of CNN-LCDRC. The efficiency of the CNN is high because it reduces the number of parameters. In future work, meta-heuristic algorithms will be developed to recognize input facial images more effectively.
REFERENCES
[1] M. Moussa, M. Hmila, and A. Douik, β€œFace recognition using fractional coefficients and discrete cosine transform tool,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 1, pp. 892–899, Feb. 2021, doi:
10.11591/ijece.v11i1.pp892-899.
[2] M. Fachrurrozi et al., β€œReal-time multi-object face recognition using content based image retrieval (CBIR),” International Journal
of Electrical and Computer Engineering (IJECE), vol. 8, no. 5, pp. 2812–2817, Oct. 2018, doi: 10.11591/ijece.v8i5.pp2812-2817.
[3] V. Ε truc and N. PaveΕ‘iΔ‡, β€œThe complete Gabor-Fisher classifier for robust face recognition,” EURASIP Journal on Advances in
Signal Processing, vol. 2010, no. 1, p. 847680, Dec. 2010, doi: 10.1155/2010/847680.
[4] R. Sharma and M. S. Patterh, β€œA new hybrid approach using PCA for pose invariant face recognition,” Wireless Personal
Communications, vol. 85, no. 3, pp. 1561–1571, Dec. 2015, doi: 10.1007/s11277-015-2855-7.
[5] W. Xu and E.-J. Lee, “A hybrid method based on dynamic compensatory fuzzy neural network algorithm for face recognition,” International Journal of Control, Automation and Systems, vol. 12, no. 3, pp. 688–696, Jun. 2014, doi: 10.1007/s12555-013-0338-8.
[6] H. Mliki, E. Fendri, and M. Hammami, β€œFace recognition through different facial expressions,” Journal of Signal Processing
Systems, vol. 81, no. 3, pp. 433–446, Dec. 2015, doi: 10.1007/s11265-014-0967-z.
[7] R. Senthilkumar and R. K. Gnanamurthy, β€œA robust wavelet based decomposition of facial images to improve recognition accuracy
in standard appearance based statistical face recognition methods,” Cluster Computing, vol. 22, no. S5, pp. 12785–12794, Sep.
2019, doi: 10.1007/s10586-018-1759-1.
[8] C. Singh, N. Mittal, and E. Walia, β€œComplementary feature sets for optimal face recognition,” EURASIP Journal on Image and
Video Processing, vol. 2014, no. 1, p. 35, Dec. 2014, doi: 10.1186/1687-5281-2014-35.
[9] C. Fookes, F. Lin, V. Chandran, and S. Sridharan, β€œEvaluation of image resolution and super-resolution on face recognition
performance,” Journal of Visual Communication and Image Representation, vol. 23, no. 1, pp. 75–93, Jan. 2012, doi:
10.1016/j.jvcir.2011.06.004.
[10] H. H. Abbas, A. A. Altameemi, and H. R. Farhan, β€œBiological landmark Vs quasi-landmarks for 3D face recognition and gender
classification,” International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 5, pp. 4069–4076, Oct. 2019,
doi: 10.11591/ijece.v9i5.pp4069-4076.
[11] M. A. Naji, G. A. Salman, and M. J. Fadhil, β€œFace recognition using selected topographical features,” International Journal of
Electrical and Computer Engineering (IJECE), vol. 10, no. 5, pp. 4695–4700, Oct. 2020, doi: 10.11591/ijece.v10i5.pp4695-4700.
[12] S. Guo, S. Chen, and Y. Li, β€œFace recognition based on convolutional neural network and support vector machine,” in 2016 IEEE
International Conference on Information and Automation (ICIA), 2016, pp. 1787–1792, doi: 10.1109/ICInfA.2016.7832107.
[13] M. J. Er, S. Wu, J. Lu, and H. L. Toh, “Face recognition with radial basis function (RBF) neural networks,” IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 697–710, May 2002, doi: 10.1109/TNN.2002.1000134.
[14] A. A. Moustafa, A. Elnakib, and N. F. F. Areed, β€œOptimization of deep learning features for age-invariant face recognition,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 2, p. 1833, Apr. 2020, doi:
10.11591/ijece.v10i2.pp1833-1841.
[15] M. Nimbarte and K. Bhoyar, β€œAge Invariant Face Recognition using Convolutional Neural Network,” International Journal of
Electrical and Computer Engineering (IJECE), vol. 8, no. 4, pp. 2126–2138, Aug. 2018, doi: 10.11591/ijece.v8i4.pp2126-2138.
[16] K.-E. Ko and K.-B. Sim, β€œDevelopment of a Facial Emotion Recognition Method Based on Combining AAM with DBN,” in 2010
International Conference on Cyberworlds, 2010, pp. 87–91, doi: 10.1109/CW.2010.65.
[17] R. Xia, J. Deng, B. Schuller, and Y. Liu, β€œModeling gender information for emotion recognition using Denoising autoencoder,” in
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 990–994, doi:
10.1109/ICASSP.2014.6853745.
[18] W. Yang, X. Zhang, and J. Li, β€œA local multiple patterns feature descriptor for face recognition,” Neurocomputing, vol. 373, pp.
109–122, Jan. 2020, doi: 10.1016/j.neucom.2019.09.102.
[19] A. Ouyang, Y. Liu, S. Pei, X. Peng, M. He, and Q. Wang, β€œA hybrid improved kernel LDA and PNN algorithm for efficient face
recognition,” Neurocomputing, vol. 393, pp. 214–222, Jun. 2020, doi: 10.1016/j.neucom.2019.01.117.
[20] S. Bhattacharya, G. S. Nainala, S. Rooj, and A. Routray, β€œLocal force pattern (LFP): Descriptor for heterogeneous face recognition,”
Pattern Recognition Letters, vol. 125, pp. 63–70, Jul. 2019, doi: 10.1016/j.patrec.2019.03.028.
[21] S. Nikan and M. Ahmadi, β€œA modified technique for face recognition under degraded conditions,” Journal of Visual Communication
and Image Representation, vol. 55, pp. 742–755, Aug. 2018, doi: 10.1016/j.jvcir.2018.08.007.
[22] C. Peng, X. Gao, N. Wang, and J. Li, β€œSparse graphical representation based discriminant analysis for heterogeneous face
recognition,” Signal Processing, vol. 156, pp. 46–61, Mar. 2019, doi: 10.1016/j.sigpro.2018.10.015.
[23] M. Khoshdeli, R. Cong, and B. Parvin, “Detection of nuclei in H&E stained sections using convolutional neural networks,” in 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), 2017, pp. 105–108, doi: 10.1109/BHI.2017.7897216.
[24] L. Yu, D. Yang, and H. Wang, β€œSparse multiple maximum scatter difference for dimensionality reduction,” Digital Signal
Processing, vol. 62, pp. 91–100, Mar. 2017, doi: 10.1016/j.dsp.2016.11.005.
[25] F. Song, D. Zhang, D. Mei, and Z. Guo, “A multiple maximum scatter difference discriminant criterion for facial feature extraction,” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 37, no. 6, pp. 1599–1606, Dec. 2007, doi: 10.1109/TSMCB.2007.906579.
[26] I. S. Kwak, J.-Z. Guo, A. Hantman, K. Branson, and D. Kriegman, β€œDetecting the starting frame of actions in video,” in 2020 IEEE
Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 478–486, doi: 10.1109/WACV45572.2020.9093405.
BIOGRAPHIES OF AUTHORS
Sangamesh Hosgurmath completed his M.Tech at PDA College of Engineering, Visvesvaraya Technological University (VTU), Belagavi (Karnataka) in 2010. He has been working as an Assistant Professor in the Department of Electronics and Communication Engineering, SLN College of Engineering, Raichur, since 2005. His areas of interest are image processing and embedded systems. He is presently pursuing a PhD in the Department of Electronics and Communication Engineering, VTU Belagavi. He can be contacted at email: sangamesh.hosgurmath@gmail.com.
Vanjre Mallappa Viswanatha completed his M.Tech at NITK in 1986 and his PhD in 2012 at Singhania University, Rajasthan. He has been working as Professor and HOD in the Department of Electronics and Communication Engineering since 1986. His areas of interest are image processing and mobile communication. He has published nine papers in international journals and presented six papers at national and international conferences. He can be contacted at email: vmviswanatha@gmail.com.
Nagaraj B. Patil received the B.E. degree from Gulbarga University, Gulbarga, Karnataka, in 1993, the M.Tech. degree from AAIDU, Allahabad, in 2005, and the Ph.D. degree from Singhania University, Rajasthan, India, in 2012. From 1993 to 2010, he worked as a Lecturer, Senior Lecturer, Assistant Professor, and HOD of the Departments of CSE and ISE at SLN College of Engineering, Raichur, Karnataka. From 2010 to June 2019 he worked as an Associate Professor and HOD in the Department of Computer Science and Engineering at Government Engineering College Raichur, Karnataka. He has been working as Principal of Government Engineering College Gangavathi, Karnataka, since July 2019. His research interests are in image processing and computer networks. He can be contacted at email: nagarajbpatil@gmail.com.
Vishwanath Petli completed his M.E. at PDA College of Engineering, Gulbarga University, Gulbarga (Karnataka) in 2003 and his PhD in 2017 at NIMS University, Jaipur (Rajasthan). He has been working as an Associate Professor in the Department of Electronics and Communication Engineering since 1998. His areas of interest are image processing and embedded systems. He has published four papers in international journals and presented one paper at an international conference. He can be contacted at email: nayana8223@gmail.com.
Facial image retrieval on semantic features using adaptive mean genetic algor...TELKOMNIKA JOURNAL
Β 
Technique for recognizing faces using a hybrid of moments and a local binary...
Technique for recognizing faces using a hybrid of moments and  a local binary...Technique for recognizing faces using a hybrid of moments and  a local binary...
Technique for recognizing faces using a hybrid of moments and a local binary...IJECEIAES
Β 
Independent Component Analysis of Edge Information for Face Recognition
Independent Component Analysis of Edge Information for Face RecognitionIndependent Component Analysis of Edge Information for Face Recognition
Independent Component Analysis of Edge Information for Face RecognitionCSCJournals
Β 
Deep hypersphere embedding for real-time face recognition
Deep hypersphere embedding for real-time face recognitionDeep hypersphere embedding for real-time face recognition
Deep hypersphere embedding for real-time face recognitionTELKOMNIKA JOURNAL
Β 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...IOSR Journals
Β 
IRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
IRJET- A Survey on Medical Image Interpretation for Predicting PneumoniaIRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
IRJET- A Survey on Medical Image Interpretation for Predicting PneumoniaIRJET Journal
Β 
Age Invariant Face Recognition using Convolutional Neural Network
Age Invariant Face Recognition using Convolutional  Neural Network Age Invariant Face Recognition using Convolutional  Neural Network
Age Invariant Face Recognition using Convolutional Neural Network IJECEIAES
Β 
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...sipij
Β 
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...CSCJournals
Β 
Gender classification using custom convolutional neural networks architecture
Gender classification using custom convolutional neural networks architecture Gender classification using custom convolutional neural networks architecture
Gender classification using custom convolutional neural networks architecture IJECEIAES
Β 

Similar to A face recognition system using convolutional feature extraction with linear collaborative discriminant regression classification (20)

IRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face RecognitionIRJET- Spot Me - A Smart Attendance System based on Face Recognition
IRJET- Spot Me - A Smart Attendance System based on Face Recognition
Β 
Real time multi face detection using deep learning
Real time multi face detection using deep learningReal time multi face detection using deep learning
Real time multi face detection using deep learning
Β 
FACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEW
FACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEWFACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEW
FACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEW
Β 
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesImplementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Β 
Facial recognition based on enhanced neural network
Facial recognition based on enhanced neural networkFacial recognition based on enhanced neural network
Facial recognition based on enhanced neural network
Β 
Real Time Implementation Of Face Recognition System
Real Time Implementation Of Face Recognition SystemReal Time Implementation Of Face Recognition System
Real Time Implementation Of Face Recognition System
Β 
19. 22068.pdf
19. 22068.pdf19. 22068.pdf
19. 22068.pdf
Β 
Deep learning for understanding faces
Deep learning for understanding facesDeep learning for understanding faces
Deep learning for understanding faces
Β 
Deep learning based masked face recognition in the era of the COVID-19 pandemic
Deep learning based masked face recognition in the era of the COVID-19 pandemicDeep learning based masked face recognition in the era of the COVID-19 pandemic
Deep learning based masked face recognition in the era of the COVID-19 pandemic
Β 
Facial image retrieval on semantic features using adaptive mean genetic algor...
Facial image retrieval on semantic features using adaptive mean genetic algor...Facial image retrieval on semantic features using adaptive mean genetic algor...
Facial image retrieval on semantic features using adaptive mean genetic algor...
Β 
Technique for recognizing faces using a hybrid of moments and a local binary...
Technique for recognizing faces using a hybrid of moments and  a local binary...Technique for recognizing faces using a hybrid of moments and  a local binary...
Technique for recognizing faces using a hybrid of moments and a local binary...
Β 
Independent Component Analysis of Edge Information for Face Recognition
Independent Component Analysis of Edge Information for Face RecognitionIndependent Component Analysis of Edge Information for Face Recognition
Independent Component Analysis of Edge Information for Face Recognition
Β 
Deep hypersphere embedding for real-time face recognition
Deep hypersphere embedding for real-time face recognitionDeep hypersphere embedding for real-time face recognition
Deep hypersphere embedding for real-time face recognition
Β 
IJECET_13_02_003.pdf
IJECET_13_02_003.pdfIJECET_13_02_003.pdf
IJECET_13_02_003.pdf
Β 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Β 
IRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
IRJET- A Survey on Medical Image Interpretation for Predicting PneumoniaIRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
IRJET- A Survey on Medical Image Interpretation for Predicting Pneumonia
Β 
Age Invariant Face Recognition using Convolutional Neural Network
Age Invariant Face Recognition using Convolutional  Neural Network Age Invariant Face Recognition using Convolutional  Neural Network
Age Invariant Face Recognition using Convolutional Neural Network
Β 
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...
Β 
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Β 
Gender classification using custom convolutional neural networks architecture
Gender classification using custom convolutional neural networks architecture Gender classification using custom convolutional neural networks architecture
Gender classification using custom convolutional neural networks architecture
Β 

More from IJECEIAES

Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...IJECEIAES
Β 
Prediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regressionPrediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regressionIJECEIAES
Β 
Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...IJECEIAES
Β 
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...IJECEIAES
Β 
Improving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learningImproving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learningIJECEIAES
Β 
Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...IJECEIAES
Β 
Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...IJECEIAES
Β 
Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...IJECEIAES
Β 
Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...IJECEIAES
Β 
A systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancyA systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancyIJECEIAES
Β 
Agriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimizationAgriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimizationIJECEIAES
Β 
Three layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performanceThree layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performanceIJECEIAES
Β 
Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...IJECEIAES
Β 
Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...IJECEIAES
Β 
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...IJECEIAES
Β 
Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...IJECEIAES
Β 
On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...IJECEIAES
Β 
Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...IJECEIAES
Β 
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...IJECEIAES
Β 
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...IJECEIAES
Β 

More from IJECEIAES (20)

Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...
Β 
Prediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regressionPrediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regression
Β 
Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Β 
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...
Β 
Improving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learningImproving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learning
Β 
Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...
Β 
Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...
Β 
Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...
Β 
Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...
Β 
A systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancyA systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancy
Β 
Agriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimizationAgriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimization
Β 
Three layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performanceThree layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performance
Β 
Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...
Β 
Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...
Β 
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Β 
Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...
Β 
On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...
Β 
Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...
Β 
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
Β 
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
Β 

Recently uploaded

High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
Β 
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”soniya singh
Β 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
Β 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
Β 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoΓ£o Esperancinha
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
Β 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
Β 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
Β 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
Β 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
Β 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
Β 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
Β 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
Β 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
Β 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
Β 
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)Suman Mia
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
Β 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
Β 

Recently uploaded (20)

High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
Β 
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Model Call Girl in Narela Delhi reach out to us at πŸ”8264348440πŸ”
Β 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
Β 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
Β 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Β 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
Β 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
A face recognition system using convolutional feature extraction with linear collaborative discriminant regression classification

International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 2, April 2022, pp. 1468-1476
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i2.pp1468-1476
Journal homepage: http://ijece.iaescore.com

A face recognition system using convolutional feature extraction with linear collaborative discriminant regression classification

Sangamesh Hosgurmath (1), Viswanatha Vanjre Mallappa (2), Nagaraj B. Patil (3), Vishwanath Petli (2)
(1) Department of Electronics and Communication Engineering, Visvesvaraya Technological University, Belagavi, India
(2) Department of Electronics and Communication Engineering, H.K.E.S's S.L.N. College of Engineering, Raichur, India
(3) Department of Computer Science and Engineering, Government Engineering College Gangavathi, Karnataka, India

Article history: Received Dec 19, 2020; Revised Aug 3, 2021; Accepted Sep 1, 2021

ABSTRACT
Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, human face recognition remains a major challenge for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, age, and illumination, which degrades recognition accuracy. In the present research, the resolution of the image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the optimization obtained by using CNN features in LCDRC, the between-class distance ratio is maximized and the within-class distance of the features is reduced.
The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, whereas traditional LCDRC achieved only 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e., 80% training and 20% testing data).

Keywords: Convolutional neural network; Face recognition; Feature extraction techniques; Illumination conditions; High-dimensional features

This is an open access article under the CC BY-SA license.

Corresponding Author:
Sangamesh Hosgurmath
Department of Electronics and Communication Engineering, Visvesvaraya Technological University
Machhe Belagavi, Karnataka 590018, India
Email: sangamesh.hosgurmath@gmail.com

1. INTRODUCTION
Biometric face recognition is an evolving area of work in the field of image processing [1]. The primary aim of biometry is to distinguish people by visible characteristics such as the face, fingerprints, voice, and iris [2]. These traits are all suitable for biometric identification, but the human face offers additional protection for individual authentication [3], [4]. Face recognition is an effective biometric authentication technique and is widely used in applications including social networking, access control, video monitoring, and law enforcement [5], [6]. There are two types of face recognition systems: appearance-based [7] and feature-based [8]. For successful face recognition, a feature-based framework is introduced in this study. Variations in occlusion and illumination cause critical degradation of the image [9], [10]. The common issues in face recognition are illumination variations, frontal vs. profile views, expression variations, aging, and occlusions [11]. To tackle the above problems, an effective convolutional neural network-linear collaborative discriminant regression classification (CNN-LCDRC) model is proposed in this research. Researchers have developed five major
methods for face recognition: the geometrical characterization method, the subspace analysis method, the elastic graph matching method, the hidden Markov model method [12], and the neural network method [13]. The first four approaches are known as shallow learning because they can exploit only certain simple image features and rely on hand-engineered processing to obtain sample features. Neural network-based methods are considered deep learning since they can extract more complicated features such as corner-point and plane features [14], [15]. Popular deep-learning approaches include the CNN, the deep belief network (DBN) [16], and the stacked denoising autoencoder (SDAE) [17]. In this analysis, CNN is used to extract facial characteristics, as it takes images as direct input and is resilient to image deformation, rotation, translation, and scaling. Additionally, it is relatively difficult to manually engineer facial features from face images, whereas a CNN can automatically extract effective facial features. Thus, the proposed combined CNN-LCDRC model increases the recognition accuracy. The experiments are conducted on two datasets, ORL and YALE, in terms of mean accuracy and maximum accuracy. The practical contributions of the proposed face recognition method in real-time settings are: i) providing security to devices; ii) assisting criminal investigation; iii) supporting law enforcement; iv) protecting against threats in business; v) controlling access to sensitive areas; and vi) facilitating secure transactions. This paper is organized as follows: section 2 presents the study of existing techniques related to face recognition.
The proposed method is explained in section 3, the validation of CNN-LCDRC on two datasets with a comparative study is described in section 4, and the conclusion is drawn in section 5.

2. LITERATURE REVIEW
This section surveys existing techniques for face recognition, together with their key benefits and limitations. Yang et al. [18] recognized faces by extracting features with a local multiple pattern (LMP) feature descriptor based on Weber's law. Changing directions were described by generating multiple feature maps using a modified ratio of Weber's law. For image representation, LMP concatenated the histograms of non-overlapping regions of the feature maps. However, the improved LMP was only computationally efficient when using integral images. Ouyang et al. [19] recognized input faces with a hybrid approach combining improved kernel linear discriminant analysis (IKLD) and a probabilistic neural network (PNN). First, relevant data were obtained by extracting sample features, and the recognition problem was then solved by the PNN. The computation time of IKLD with PNN was higher than that of existing KLD with PNN when the number of training samples was large. Bhattacharya et al. [20] developed a local force pattern (LFP) based on local appearance for effective face recognition. Compared with traditional pattern techniques, a discriminating code was produced by encoding the directional information of textures using LFP. The van der Waals force between pixel pairs was computed over the micro-pattern structure to extract information, and that directional information was encoded as a two-dimensional vector. The LFP method had high computation time when the training samples were imbalanced.
Nikan and Ahmadi [21] identified input face images by implementing improved face recognition algorithms under degrading conditions. Optimal distinctive features were obtained by combining a discriminative feature extractor with pre-processing techniques used for classification. The facial images were pre-processed using enhanced complex wavelet transforms and multi-scale Weber pre-processing. To improve the recognition rate, different feature extraction techniques, namely block-based local phase quantization and Gabor filters with principal component analysis, were used. However, the algorithm reached its peak signal-to-noise ratio (PSNR) only when the recognition rate was 0.8. Peng et al. [22] addressed cross-modality face recognition by developing sparse graphical representation based discriminant analysis (SGR-DA). Adaptive sparse vectors were generated by constructing a Markov network model to represent face images from various modalities. SGR-DA improved discriminability, handled complex facial structure, and refined the adaptive sparse vectors for facial matching. In thermal infrared recognition experiments, however, SGR-DA performed poorly because infrared images contain less facial structure and detail. The major problems of existing feature extraction techniques are high computation time for extracting huge amounts of data and for training the classifier. To address these issues, this study develops the CNN as the feature extraction technique and classifies with the LCDRC classifier.

3. PROPOSED METHOD
In this research, the proposed CNN-LCDRC model comprises four phases: data collection, feature extraction, classification, and performance analysis, as shown in Figure 1.
Figure 1. Working flow of the proposed CNN-LCDRC technique

3.1. Collection of dataset
Two online datasets, YALE and ORL, are used for image collection. The ORL database contains 40 classes with ten facial images per individual, i.e., 400 facial images in total. In this research, a large number of images is taken from the ORL database, while the YALE images are taken from different angles; 40 images are considered for feature extraction but not for classification. Figure 2 shows sample images of the ORL database, where individuals' facial images are collected under different lighting variations, facial expressions (eyes open/closed, smiling/not smiling), and facial details (with/without glasses). The YALE dataset contains 165 grayscale images collected from 15 individuals. Figure 3 shows sample images of the YALE dataset, with 11 facial images collected per subject. Each image is taken with various facial expressions and conditions: normal, sleepy, wink, happy, with/without glasses, center-light, surprised, and sad [23].

Figure 2. Sample images of ORL dataset
Figure 3. Sample images of YALE dataset

3.2. Feature extraction
A CNN, comprising convolution and pooling layers, is used as the feature extraction technique in this study. The CNN takes raw images as input, which avoids the complicated feature extraction and massive data handling of a conventional recognition algorithm. The complexity of the CNN can be reduced by weight sharing and by reducing the number of parameters.
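To make the weight-sharing point concrete, here is a quick parameter count (an illustrative calculation with assumed layer sizes, not figures from the paper): a convolution layer reuses one small kernel across the whole image, whereas a fully connected layer needs one weight per input-output pair.

```python
def conv_params(in_maps, out_maps, kh, kw):
    """Weights of a convolution layer: one kh x kw kernel per
    (input map, output map) pair, plus one bias per output map."""
    return in_maps * out_maps * kh * kw + out_maps

def dense_params(in_units, out_units):
    """Weights of a fully connected layer: one weight per
    input-output pair, plus one bias per output unit."""
    return in_units * out_units + out_units

# Assumed example: one grayscale input map feeding 30 feature maps
# through 5x5 kernels (C1-like), versus fully connecting a 32x32
# image to just 30 units.
conv_c1 = conv_params(1, 30, 5, 5)       # 30*25 + 30 = 780 parameters
dense_equiv = dense_params(32 * 32, 30)  # 1024*30 + 30 = 30750 parameters
```

Even against this small dense layer, weight sharing cuts the parameter count by roughly a factor of 40, which is the source of the complexity reduction noted above.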
The filtering of the entire image is defined in (1) and (2):

$$y_j^l = f\Big(\sum_{i \in M} x_i^{l-1} * k_{ij}^{l} + b_j^{l}\Big) \qquad (1)$$

$$f(x) = \max(0, x) \qquad (2)$$

where $x_i^{l-1}$ is the $i$-th input feature map at layer $l-1$, $y_j^l$ is the $j$-th output feature map at layer $l$, and $M$ is the set of feature maps at layer $l-1$. $k_{ij}^{l}$ is the convolution kernel between the $i$-th input map at layer $l-1$ and the $j$-th output map at layer $l$, $b_j^{l}$ is the bias of the $j$-th output map at layer $l$, and $f(x)$ is the rectified linear unit (ReLU) function. In this paper, the CNN adopts max pooling; the max pooling formula used in this study is given in (3):

$$y_{j(m,n)}^{l+1} = \max_{0 \le r,\,k < s} \big\{ x_{(m \cdot s + r,\; n \cdot s + k)}^{l} \big\} \qquad (3)$$
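Equations (1)-(3) can be transcribed directly into NumPy for a single input map and kernel (an illustrative sketch, not the paper's MATLAB implementation):

```python
import numpy as np

def conv_relu(x, k, b):
    """'Valid' convolution of one feature map x with kernel k plus bias b,
    followed by the ReLU activation f(x) = max(0, x), as in eqs. (1)-(2)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            out[m, n] = np.sum(x[m:m + kh, n:n + kw] * k) + b
    return np.maximum(0.0, out)

def max_pool(x, s):
    """Non-overlapping s x s max pooling, eq. (3): each output unit is the
    maximum over one s x s block of the input map."""
    H, W = x.shape
    x = x[:H - H % s, :W - W % s]          # drop any ragged border
    return x.reshape(H // s, s, W // s, s).max(axis=(1, 3))
```

For example, `max_pool` halves each spatial dimension when `s = 2`, which is how the pooling layers reduce the resolution of the image patches.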
where $m \ge 0$, $n \ge 0$, $s \ge 0$; $y_{j(m,n)}^{l+1}$ is the value of the neuron unit $(m, n)$ in the $j$-th output feature map $y_j^{l+1}$ at layer $l+1$; $(m \cdot s + r,\; n \cdot s + k)$ is a neuron unit in the $i$-th input map $x_i^{l}$, whose value is $x_{(m \cdot s + r,\, n \cdot s + k)}^{l}$; and $y_{j(m,n)}^{l+1}$ is obtained by taking the maximum value over an $s \times s$ non-overlapping local region of the input map $x_i^{l}$. The CNN structure has nine layers, as shown in Figure 4: three batch normalization (BN) layers, three convolution layers, and three pooling layers.

Figure 4. Structure of proposed CNN model

Layers C1, C2, and C3 are convolution layers with 30, 60, and 80 feature maps respectively, which extract and combine features. Layers S1, S2, and S3 are sub-sampling layers whose numbers of feature maps equal those of their preceding convolution layers. The output layer must identify all the input images based on the extracted features. Two optimization methods are used in designing the convolutional neural network. First, the rectified linear unit function $f(x)$, which describes neural signal activation well, replaces the sigmoid function in the convolutional layers. Second, BN is a core design block of the CNN structure; a typical CNN model has many BN layers in its deep architecture. During training, BN requires the mean and variance to be computed over each mini-batch.

3.3. Classification using linear collaborative discriminant regression classification
After extracting the 2048 features of the facial images, classification is carried out using linear collaborative discriminant regression classification (LCDRC).
LCDRC is a feature extraction method that employs discriminant analysis to achieve adequate discrimination in linear regression classification. It uses the labeled training data to learn a more robust subspace on which to perform efficient discriminant regression classification: a subspace in which samples from different classes lie far from each other while samples from the same class lie close together. LDRC therefore computes an optimal projection that maximizes the ratio of the between-class reconstruction error to the within-class reconstruction error of linear regression classification. Due to the optimization obtained by using CNN features in LCDRC, the between-class distance ratio is greatly maximized and the within-class feature distance is greatly reduced. The proposed CNN pipeline thus selects the best weight values in LCDRC to reduce the reconstruction error. Consequently, the proposed approach seeks a discriminating subspace by maximizing the between-class reconstruction error and minimizing the within-class reconstruction error.

4. RESULTS AND DISCUSSION
The proposed CNN-LCDRC is validated on two datasets, ORL and YALE, in terms of mean recognition accuracy (average accuracy) and maximum recognition accuracy. The algorithm is implemented on a computer with a 1.8 GHz Pentium IV processor using MATLAB R2018a. Recognition accuracy is the parameter used to quantify the efficiency of the proposed CNN-LCDRC; its mathematical expression is shown in (4):

$$\text{Recognition Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \qquad (4)$$

where TP is true positive, FP is false positive, TN is true negative, and FN is false negative.
4.1. Analysis of proposed CNN-LCDRC on ORL database
The performance of the proposed CNN-LCDRC is compared with traditional LCDRC for different training splits. In the following tables, the training number represents the ratio of training to testing data for the input facial images. Table 1 presents the validated results in terms of mean accuracy, and Figure 5 illustrates them graphically. For instance, at 40% training and 60% testing data (training number 4), the proposed CNN-LCDRC achieved 92.78% mean accuracy, whereas LCDRC achieved only 83.20%. The better performance over LCDRC stems from the use of CNN as the feature extraction technique. Table 2 compares LCDRC and the proposed CNN-LCDRC in terms of maximum recognition accuracy, with Figure 6 presenting the results graphically.

Table 1. Performance of proposed method on ORL database in terms of mean recognition accuracy (%)
Method             | TN 2  | TN 3  | TN 4  | TN 5  | TN 6  | TN 7  | TN 8
LCDRC              | 74.47 | 80.18 | 83.20 | 84.33 | 84.27 | 83.88 | 83.35
Proposed CNN-LCDRC | 83.65 | 90.27 | 92.78 | 93.20 | 93.60 | 93.24 | 93.10

Figure 5. Graphical illustration of proposed CNN-LCDRC in terms of maximum recognition accuracy

Table 2. Performance of proposed method on ORL database in terms of maximum recognition accuracy (%)
Method             | TN 2  | TN 3  | TN 4  | TN 5  | TN 6  | TN 7  | TN 8
LCDRC              | 90.31 | 94.64 | 97.08 | 99.00 | 99.90 | 99.91 | 99.91
Proposed CNN-LCDRC | 93.75 | 98.21 | 99.99 | 99.99 | 99.99 | 99.99 | 99.99

Table 2 clearly shows that the proposed CNN-LCDRC achieved higher maximum recognition accuracy than LCDRC. The proposed CNN-LCDRC reached its highest maximum recognition accuracy (99.99%) because 50 iterations and 300 dimensions saturate the performance of CNN-LCDRC.
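The "training number" convention used in these tables (n training images per subject, the remainder for testing) can be sketched as a per-class split; this is a generic illustration with random stand-in features, not the authors' evaluation script:

```python
import numpy as np

def split_per_class(features, labels, train_number, seed=0):
    """For each class, use `train_number` samples for training and the
    rest for testing, matching the training-number convention
    (e.g. 8 of the 10 ORL images per subject for training number 8)."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        train_idx.extend(idx[:train_number])
        test_idx.extend(idx[train_number:])
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])

# Toy stand-in for ORL: 40 subjects x 10 images, 64-D feature vectors.
X = np.random.rand(400, 64)
y = np.repeat(np.arange(40), 10)
Xtr, ytr, Xte, yte = split_per_class(X, y, train_number=8)
```

With `train_number=8` this yields 320 training and 80 testing images, i.e., the 80%/20% split reported for training number 8.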
The results show that CNN effectively extracts features from the ORL facial images. The next section presents the performance analysis of the proposed CNN-LCDRC on the YALE dataset.

4.2. Analysis of proposed CNN-LCDRC on YALE database
This section evaluates CNN-LCDRC against the traditional LCDRC technique on the YALE database. The validated results are presented in Table 3, with the mean recognition accuracy illustrated graphically in Figure 6. For instance, CNN-LCDRC achieved only 82.18% mean accuracy on the YALE database, whereas it achieved 92.78% on the ORL database at 40% training and 60% testing data (training number 4). Table 3 and Figure 7 give the results of CNN-LCDRC and traditional LCDRC in terms of maximum recognition accuracy on the YALE dataset. These experiments clearly show that the proposed CNN-LCDRC achieved better maximum recognition accuracy on the YALE database. At 70% training and 30% testing data (training number 7), the proposed CNN-LCDRC achieved nearly 98% maximum recognition accuracy, whereas traditional LCDRC achieved only 95%. The high performance is due to the effective use of CNN for feature extraction with a small number of layers.

Table 3. Performance of proposed method on YALE database in terms of mean and maximum recognition accuracy (%)
Method                       | TN 2  | TN 3  | TN 4  | TN 5  | TN 6  | TN 7  | TN 8
LCDRC (mean)                 | 57.03 | 67.39 | 72.50 | 75.04 | 75.51 | 76.65 | 77.70
LCDRC (maximum)              | 68.89 | 80.00 | 87.62 | 90.00 | 92.00 | 95.00 | 97.78
Proposed CNN-LCDRC (mean)    | 67.50 | 76.95 | 82.18 | 85.61 | 86.30 | 86.83 | 87.60
Proposed CNN-LCDRC (maximum) | 77.04 | 85.00 | 93.33 | 95.56 | 97.33 | 98.33 | 99.99

Figure 6. Graphical representation of proposed CNN-LCDRC in terms of mean recognition accuracy on YALE database
Figure 7. Graphical illustration of CNN-LCDRC in terms of maximum recognition accuracy (%) on YALE dataset

4.3. Comparative study
This section compares the performance of CNN-LCDRC with existing techniques: sparse multiple maximum scatter difference (SMMSD) [24] and multiple maximum scatter difference (MMSD) [25]. The maximum scatter difference (MSD) discriminant criterion presents a binary discriminant criterion for
pattern classification, which uses the general scatter difference rather than the generalized Rayleigh quotient as the measure of class separability, thereby avoiding the singularity problem caused by small sample sizes. The comparison of the proposed CNN-LCDRC with existing methods in terms of MSD is shown in Table 4. The error value of SMMSD [24] was 0.1907 and that of MMSD [25] was 0.0395. The comparison graph of CNN-LCDRC with the existing methods in terms of MSD is shown in Figure 8. The existing IKLD with PNN performs poorly on the YALE database [26], but the proposed CNN-LCDRC performs better (87.60% mean recognition accuracy) as the training number increases.

Table 4. Comparison of proposed CNN-LCDRC with existing methods in terms of MSD
Authors         | Methods   | Maximum Scatter Difference
Yu et al. [24]  | SMMSD     | 0.1907
Song [25]       | MMSD      | 0.0395
Proposed method | CNN-LCDRC | 0.0302

Figure 8. Comparison graph of CNN-LCDRC with existing methods in terms of error value for MSD

5. CONCLUSION
The main challenge in the field of biometrics is face recognition, owing to difficulties such as occlusions, variations of emotion, and aging. To address these issues, this study implemented a novel algorithm combining CNN with the LCDRC classifier. The results show that the proposed CNN-LCDRC technique achieved 99.99% maximum recognition accuracy on both the YALE and ORL databases at 80% training and 20% testing data (training number 8), whereas the LCDRC technique achieved only 97% to 99.90% maximum recognition accuracy on the two databases. The proposed CNN-LCDRC reached its highest maximum recognition accuracy (99.99%) because 50 iterations and 300 dimensions saturate its performance. The CNN is highly efficient because it reduces the number of parameters.
In future work, meta-heuristic algorithms will be developed to recognize input facial images more effectively.

REFERENCES
[1] M. Moussa, M. Hmila, and A. Douik, "Face recognition using fractional coefficients and discrete cosine transform tool," International Journal of Electrical and Computer Engineering (IJECE), vol. 11, no. 1, pp. 892-899, Feb. 2021, doi: 10.11591/ijece.v11i1.pp892-899.
[2] M. Fachrurrozi et al., "Real-time multi-object face recognition using content based image retrieval (CBIR)," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 5, pp. 2812-2817, Oct. 2018, doi: 10.11591/ijece.v8i5.pp2812-2817.
[3] V. Štruc and N. Pavešić, "The complete Gabor-Fisher classifier for robust face recognition," EURASIP Journal on Advances in Signal Processing, vol. 2010, no. 1, p. 847680, Dec. 2010, doi: 10.1155/2010/847680.
[4] R. Sharma and M. S. Patterh, "A new hybrid approach using PCA for pose invariant face recognition," Wireless Personal Communications, vol. 85, no. 3, pp. 1561-1571, Dec. 2015, doi: 10.1007/s11277-015-2855-7.
[5] W. Xu and E.-J. Lee, "A hybrid method based on dynamic compensatory fuzzy neural network algorithm for face recognition," International Journal of Control, Automation and Systems, vol. 12, no. 3, pp. 688-696, Jun. 2014, doi: 10.1007/s12555-013-0338-8.
[6] H. Mliki, E. Fendri, and M. Hammami, "Face recognition through different facial expressions," Journal of Signal Processing Systems, vol. 81, no. 3, pp. 433-446, Dec. 2015, doi: 10.1007/s11265-014-0967-z.
[7] R. Senthilkumar and R. K. Gnanamurthy, "A robust wavelet based decomposition of facial images to improve recognition accuracy in standard appearance based statistical face recognition methods," Cluster Computing, vol. 22, no. S5, pp. 12785-12794, Sep. 2019, doi: 10.1007/s10586-018-1759-1.
[8] C. Singh, N. Mittal, and E. Walia, "Complementary feature sets for optimal face recognition," EURASIP Journal on Image and Video Processing, vol. 2014, no. 1, p. 35, Dec. 2014, doi: 10.1186/1687-5281-2014-35.
[9] C. Fookes, F. Lin, V. Chandran, and S. Sridharan, "Evaluation of image resolution and super-resolution on face recognition performance," Journal of Visual Communication and Image Representation, vol. 23, no. 1, pp. 75-93, Jan. 2012, doi: 10.1016/j.jvcir.2011.06.004.
[10] H. H. Abbas, A. A. Altameemi, and H. R. Farhan, "Biological landmark vs quasi-landmarks for 3D face recognition and gender classification," International Journal of Electrical and Computer Engineering (IJECE), vol. 9, no. 5, pp. 4069-4076, Oct. 2019, doi: 10.11591/ijece.v9i5.pp4069-4076.
[11] M. A. Naji, G. A. Salman, and M. J. Fadhil, "Face recognition using selected topographical features," International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 5, pp. 4695-4700, Oct. 2020, doi: 10.11591/ijece.v10i5.pp4695-4700.
[12] S. Guo, S. Chen, and Y. Li, "Face recognition based on convolutional neural network and support vector machine," in 2016 IEEE International Conference on Information and Automation (ICIA), 2016, pp. 1787-1792, doi: 10.1109/ICInfA.2016.7832107.
[13] M. J. Er, S. Wu, J. Lu, and H. L. Toh, "Face recognition with radial basis function (RBF) neural networks," IEEE Transactions on Neural Networks, vol. 13, no. 3, pp. 697-710, May 2002, doi: 10.1109/TNN.2002.1000134.
[14] A. A. Moustafa, A. Elnakib, and N. F. F. Areed, "Optimization of deep learning features for age-invariant face recognition," International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 2, pp. 1833-1841, Apr. 2020, doi: 10.11591/ijece.v10i2.pp1833-1841.
[15] M. Nimbarte and K. Bhoyar, "Age invariant face recognition using convolutional neural network," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 4, pp. 2126-2138, Aug. 2018, doi: 10.11591/ijece.v8i4.pp2126-2138.
[16] K.-E. Ko and K.-B. Sim, "Development of a facial emotion recognition method based on combining AAM with DBN," in 2010 International Conference on Cyberworlds, 2010, pp. 87-91, doi: 10.1109/CW.2010.65.
[17] R. Xia, J. Deng, B. Schuller, and Y. Liu, "Modeling gender information for emotion recognition using denoising autoencoder," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, pp. 990-994, doi: 10.1109/ICASSP.2014.6853745.
[18] W. Yang, X. Zhang, and J. Li, "A local multiple patterns feature descriptor for face recognition," Neurocomputing, vol. 373, pp. 109-122, Jan. 2020, doi: 10.1016/j.neucom.2019.09.102.
[19] A. Ouyang, Y. Liu, S. Pei, X. Peng, M. He, and Q. Wang, "A hybrid improved kernel LDA and PNN algorithm for efficient face recognition," Neurocomputing, vol. 393, pp. 214-222, Jun. 2020, doi: 10.1016/j.neucom.2019.01.117.
[20] S. Bhattacharya, G. S. Nainala, S. Rooj, and A. Routray, "Local force pattern (LFP): Descriptor for heterogeneous face recognition," Pattern Recognition Letters, vol. 125, pp. 63-70, Jul. 2019, doi: 10.1016/j.patrec.2019.03.028.
[21] S. Nikan and M. Ahmadi, "A modified technique for face recognition under degraded conditions," Journal of Visual Communication and Image Representation, vol. 55, pp. 742-755, Aug. 2018, doi: 10.1016/j.jvcir.2018.08.007.
[22] C. Peng, X. Gao, N. Wang, and J. Li, "Sparse graphical representation based discriminant analysis for heterogeneous face recognition," Signal Processing, vol. 156, pp. 46-61, Mar. 2019, doi: 10.1016/j.sigpro.2018.10.015.
[23] M. Khoshdeli, R. Cong, and B. Parvin, "Detection of nuclei in H&E stained sections using convolutional neural networks," in 2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), 2017, pp. 105-108, doi: 10.1109/BHI.2017.7897216.
[24] L. Yu, D. Yang, and H. Wang, "Sparse multiple maximum scatter difference for dimensionality reduction," Digital Signal Processing, vol. 62, pp. 91-100, Mar. 2017, doi: 10.1016/j.dsp.2016.11.005.
[25] F. Song, D. Zhang, D. Mei, and Z. Guo, "A multiple maximum scatter difference discriminant criterion for facial feature extraction," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 37, no. 6, pp. 1599-1606, Dec. 2007, doi: 10.1109/TSMCB.2007.906579.
[26] I. S. Kwak, J.-Z. Guo, A. Hantman, K. Branson, and D. Kriegman, "Detecting the starting frame of actions in video," in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 478-486, doi: 10.1109/WACV45572.2020.9093405.

BIOGRAPHIES OF AUTHORS
Sangamesh Hosgurmath completed his M.Tech at PDA College of Engineering, Visvesvaraya Technological University (VTU), Belagavi (Karnataka) in 2010. He has been working as Assistant Professor in the Department of Electronics and Communication Engineering, SLN College of Engineering, Raichur, since 2005. His areas of interest are image processing and embedded systems. He is currently pursuing a PhD in the Department of Electronics and Communication Engineering, VTU Belagavi. He can be contacted at email: sangamesh.hosgurmath@gmail.com.

Vanjre Mallappa Viswanatha completed his M.Tech at NITK in 1986 and his PhD in 2012 at Singhania University, Rajasthan. He has been working as Professor and HOD in the Department of Electronics and Communication Engineering since 1986. His areas of interest are image processing and mobile communication.
He has published nine papers in international journals and presented six papers at national and international conferences. He can be contacted at email: vmviswanatha@gmail.com.
Nagaraj B. Patil received the B.E. degree from Gulbarga University, Gulbarga, Karnataka in 1993, the M.Tech. degree from AAIDU Allahabad in 2005, and the Ph.D. degree from Singhania University, Rajasthan, India in 2012. From 1993 to 2010, he worked as a Lecturer, Senior Lecturer, Assistant Professor, and HOD of the Departments of CSE and ISE at SLN College of Engineering, Raichur, Karnataka. From 2010 to June 2019 he worked as an Associate Professor and HOD in the Department of Computer Science and Engineering at Government Engineering College Raichur, Karnataka. Since July 2019 he has been Principal of Government Engineering College Gangavathi, Karnataka. His research interests are image processing and computer networks. He can be contacted at email: nagarajbpatil@gmail.com.

Vishwanath Petli completed his M.E. at PDA College of Engineering, Gulbarga University, Gulbarga (Karnataka) in 2003 and his PhD in 2017 at NIMS University, Jaipur (Rajasthan). He has been working as Associate Professor in the Department of Electronics and Communication Engineering since 1998. His areas of interest are image processing and embedded systems. He has published four papers in international journals and presented one paper at an international conference. He can be contacted at email: nayana8223@gmail.com.