This paper presents a real-time face recognition system based on a face descriptor, with an application to door locking. The face descriptor combines local and global information. The local information, the dominant frequency content of each sub-face, is extracted by zoned discrete cosine transform (DCT), while the global information, the dominant frequency content and shape of the whole face, is extracted by DCT and Hu moments. The resulting descriptor therefore carries rich information about a face image, which tends to give good performance for real-time recognition. To reduce the dimensionality of the descriptor, predictive linear discriminant analysis (PDLDA) is employed, and classification is done by kNN. Experimental results show that the proposed real-time face recognition performs well, with 98.30% accuracy, 21.99% FPR, and 1.8% FNR. In addition, it needs only a short computational time (1 second).
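The zoned-DCT idea above can be sketched in a few lines: divide the face into sub-faces, keep the lowest-frequency DCT coefficients of each, and classify by nearest neighbour. This is a minimal NumPy sketch under stated assumptions; the PDLDA reduction and Hu-moment global features are omitted, and all function names are illustrative, not from the paper.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix (row k = frequency k)
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def zoned_dct_descriptor(face, grid=4, k=3):
    """Split the face into grid x grid zones; keep the k x k lowest-frequency
    DCT coefficients of each zone (the 'dominant frequency content')."""
    h, w = face.shape
    zh, zw = h // grid, w // grid
    Mr, Mc = dct_matrix(zh), dct_matrix(zw)
    feats = []
    for i in range(grid):
        for j in range(grid):
            zone = face[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
            feats.append((Mr @ zone @ Mc.T)[:k, :k].ravel())  # 2D DCT, low band
    return np.concatenate(feats)

def nearest_neighbour(query, gallery, labels):
    # the k = 1 special case of the paper's kNN classifier
    return labels[int(np.argmin(np.linalg.norm(gallery - query, axis=1)))]
```

A 32x32 face with a 4x4 grid and k=3 yields a 144-dimensional descriptor, which the paper would then compress with PDLDA before matching.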
This document summarizes a research paper on face recognition using image processing techniques. It begins with an introduction to biometrics and face detection methods. It then describes principal component analysis (PCA) and the "eigenface" approach, in which faces are represented as combinations of eigenvectors derived from training images. The document also discusses using the Radon transform to capture directional features, and applying the wavelet transform to the Radon space for multi-resolution features. It provides equations for these transforms. The paper was implemented and tested on databases of faces to evaluate recognition accuracy of different methods including PCA, Radon transform, wavelet transform, and their combination via Radon-wavelet transform.
This document summarizes a research paper on face recognition using image processing techniques. It begins with an introduction to biometrics and face detection methods. It then describes principal component analysis (PCA) and the "eigenface" approach, in which faces are represented as combinations of eigenvectors corresponding to variations in face images. The document also discusses using the Radon transform to capture directional features and the wavelet transform to provide multi-resolution analysis. It presents the methodology used, including applying these transforms to face images and classifying them. The paper aims to develop an efficient and robust system for face recognition.
IRJET- Face Spoof Detection using Machine Learning with Colour Features (IRJET Journal)
This document proposes a machine learning approach to detect face spoofing using color features. It extracts local texture features from face images converted to different color spaces like RGB, HSV, and YCbCr. These features along with distortion features are used to train an SVM classifier to detect genuine faces and spoofed faces like photos and videos. Prior work on face spoofing detection mainly focused on intensity and avoided chroma components, but the chroma components in color spaces are effective for distinguishing real and fake faces. The proposed approach extracts color-based texture features to help identify spoofed faces.
Deep hypersphere embedding for real-time face recognition (TELKOMNIKA JOURNAL)
With the advancement of robots' human-computer interaction capabilities, computer vision surveillance systems for security have had a large impact on research by helping to digitalize certain security processes. Recognizing a face in computer vision involves identifying and classifying which faces belong to the same person by comparing face embedding vectors. In an organization with a large and diverse labelled dataset trained over many epochs, incompatibilities between different versions of face embeddings often create training difficulties that lead to poor face recognition accuracy. In this paper, we design and implement a robotic vision security surveillance system incorporating a hybrid combination of MTCNN for face detection and FaceNet as the unified embedding for face recognition and clustering.
A Novel Mathematical Based Method for Generating Virtual Samples from a Front... (CSCJournals)
This paper deals with one-sample face recognition, a challenging new problem in pattern recognition. In the proposed method, the frontal 2D face image of each person is divided into sub-regions. After the 3D shape of each sub-region is computed, a fusion scheme combines the sub-regions into a total 3D shape for the whole face. The 2D face image is then added to the corresponding 3D shape to construct a 3D face image. Finally, by rotating the 3D face image, virtual samples with different views are generated. Experimental results on the ORL dataset with a nearest-neighbour classifier show an improvement of about 5% in recognition rate for one sample per person when the training set is enlarged with the generated virtual samples. Compared with other related works, the proposed method has the following advantages: 1) only a single frontal face is required, and the outputs are virtual images with varying views for each individual; 2) only 3 key points of the face (eyes and nose) are needed; 3) the 3D shape estimation used to generate virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) it is fully mathematical, with no training phase, and the estimated 3D model is unique to each individual.
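The view-synthesis step described above amounts to rotating the reconstructed 3D face points about the vertical axis and re-projecting. A minimal sketch, assuming the face is an N x 3 array of (x, y, z) points; the function name and angle convention are illustrative, not from the paper.

```python
import numpy as np

def rotate_y(points, angle_deg):
    """Rotate 3D face points (N x 3 array) about the vertical (y) axis to
    synthesize a virtual out-of-plane view of the reconstructed face."""
    a = np.radians(angle_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return points @ R.T
```

Dropping the z coordinate of the rotated points gives the 2D virtual sample for that view.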
This document discusses a face recognition system that uses Gabor feature extraction and neural networks. 40 Gabor filters are applied to images to extract features with different orientations. Fiducial points are identified based on maximum intensity points and distances between points are calculated. These distances are compared to a pre-defined database to recognize faces. A neural network with multiple layers is used to classify faces based on the Gabor filter outputs. The system was able to accurately detect faces in test images by comparing distances between fiducial points to the stored database.
This document presents a facial expression recognition system that uses Bezier curves to approximate facial features. Facial features are extracted using knowledge of face geometry and approximated by 3rd order Bezier curves representing the relationship between feature movements and expression changes. Experimental results show the method can recognize faces with over 90% accuracy. The proposed system first detects the face, then extracts features like eyes, mouth, and eyebrows. Contours are extracted from these regions and used to generate feature vectors, which are then processed with a neural network to determine the expression. The methodology involves preprocessing the image, skin color conversion, connected component analysis, binary conversion, and extracting eye and mouth features to detect expressions by comparing Bezier curves to a database of expressions.
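The feature approximation above relies on 3rd-order Bezier curves; evaluating one is a one-liner once the four control points are fixed. A minimal sketch, assuming 2D control points as NumPy arrays; the function name is illustrative.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Evaluate a 3rd-order Bezier curve at n parameter values in [0, 1].
    p0 and p3 are the endpoints; p1 and p2 shape the contour between them."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
```

Fitting such curves to the extracted eye and mouth contours yields compact feature vectors (the control points) that can be compared against a database of expressions.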
Robust 3D face recognition in presence of... (ijfcstjournal)
In this paper, we propose a robust 3D face recognition system which can handle pose as well as occlusions in the real world. The system first takes a 3D range image as input and registers it using the ICP (Iterative Closest Point) algorithm. ICP, as used in this work, registers facial surfaces to a common model by minimizing the distances between a probe model and a gallery model. However, the performance of ICP relies heavily on the initial conditions, so it is necessary to provide an initial registration, which is then improved iteratively until it converges to the best possible alignment. Once the faces are registered, occlusions are automatically extracted by thresholding the depth-map values of the 3D image. After the occluded regions are detected, restoration is done by Principal Component Analysis (PCA). The restored images, with occlusions removed, are then fed to the recognition system for classification. Features are extracted from the reconstructed non-occluded face images in the form of face normals. Experimental results on occluded facial images from the Bosphorus 3D face database show that our occlusion compensation scheme attains a recognition accuracy of 91.30%.
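The PCA restoration step described above can be sketched as projecting an occluded face onto an eigenbasis learned from clean faces and reconstructing from the coefficients. A minimal NumPy sketch under stated assumptions; faces are vectorized rows, and the function names are illustrative, not from the paper.

```python
import numpy as np

def pca_basis(X, n_components):
    """X: rows are vectorized clean training faces.
    Returns the mean face and the top principal directions."""
    mean = X.mean(axis=0)
    # rows of Vt are the principal directions of the centered data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_restore(face, mean, basis):
    """Project a (possibly occluded) face onto the eigenbasis and
    reconstruct; occluded pixels are replaced by their subspace estimate."""
    coeffs = basis @ (face - mean)
    return mean + basis.T @ coeffs
```

With enough components the reconstruction of a clean face is near exact; for an occluded probe, the reconstruction fills the masked region with a statistically plausible estimate.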
An Assimilated Face Recognition System with effective Gender Recognition Rate (IRJET Journal)
This document summarizes an assimilated face recognition system that can also perform gender recognition. The system conducts experiments using databases like GENDER-FERET and Cambridge AT&T. For face recognition, it uses the Eigenfaces algorithm to extract features and classify faces. For gender recognition, it uses a trainable COSFIRE filter with Gabor filters to obtain face descriptors, which are classified using an SVM classifier. The experiments achieve a gender recognition rate of over 90%. The paper shows that the gender recognition approach outperforms other methods using handcrafted features and raw pixels.
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters (IJERA Editor)
Biometrics recognize a person more effectively than traditional methods of identification. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features. The Euclidean distance between the features of database images and test face images is used to compute performance parameters. The performance of STWT is compared with DTCWT, and DTCWT is observed to give better results than the STWT technique.
IRJET- Analysis of Face Recognition using Docface+ Selfie Matching (IRJET Journal)
1. The document proposes a method called DocFace+ to automatically match ID document photos to selfie images in real-time with high accuracy.
2. It uses transfer learning by fine-tuning a base model trained on unconstrained face images on a private ID-selfie dataset. A pair of sibling networks are used to learn unified face representations.
3. Experimental results on an ID-selfie dataset show the proposed DocFace+ system achieves better generalization performance than publicly available general face matchers.
IRJET- Face Recognition of Criminals for Security using Principal Component A... (IRJET Journal)
This document presents a face recognition system using principal component analysis to identify criminals at airports. The system is trained on images of known criminals collected from law enforcement agencies. It uses PCA for dimensionality reduction to generate eigenfaces from the training images. During testing, it generates an eigenface from the input image and calculates the Euclidean distance between this eigenface and the eigenfaces of the training images. It identifies the criminal as the one corresponding to the training image with the minimum distance, alerting authorities. The document outlines the methodology, including preprocessing steps like subtracting the mean face, and reviews prior work applying PCA and other algorithms to face recognition.
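The eigenface matching described above (project onto eigenfaces, then pick the training image at minimum Euclidean distance) can be sketched compactly. A minimal NumPy sketch under stated assumptions; faces are vectorized rows, and names are illustrative, not from the paper. A real deployment would also threshold the minimum distance before raising an alert.

```python
import numpy as np

def eigenface_train(X, n_components):
    """Subtract the mean face, derive eigenfaces via SVD, and
    project the training gallery into the eigenface subspace."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components]                 # rows are eigenfaces
    return mean, W, (X - mean) @ W.T      # projected gallery

def identify(query, mean, W, gallery_proj, labels):
    """Project the query and return the nearest gallery identity."""
    q = W @ (query - mean)
    d = np.linalg.norm(gallery_proj - q, axis=1)
    i = int(np.argmin(d))
    return labels[i], d[i]                # alert iff d[i] is under a threshold
```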
License Plate Recognition using Morphological Operation. Amitava Choudhury
This paper describes an efficient technique for locating and extracting license plates and recognizing each segmented character. The proposed model can be subdivided into four parts: digitization of the image, edge detection, separation of characters, and template matching. In this work, we propose a method based on morphological operations, where different structuring elements (SE) are used to maximally eliminate non-plate regions and enhance the plate region. Character segmentation is done using connected component analysis, and correlation-based template matching is used for character recognition. The system is implemented using MATLAB 7.4.0 and is mainly applicable to Indian license plates.
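The correlation-based template matching mentioned above compares a segmented character against each stored template and keeps the best-scoring label. A minimal NumPy sketch under stated assumptions (equal-size, pre-aligned glyph images); the function names are illustrative, not from the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size glyph images;
    1.0 means a perfect match, values near 0 mean no correlation."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_character(glyph, templates):
    """Return the template label with the highest correlation score."""
    scores = {label: ncc(glyph, t) for label, t in templates.items()}
    return max(scores, key=scores.get)
```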
IRJET- Automatic Identification, Analysis and Investigation of Printed Circui... (IRJET Journal)
This document presents a method for automatically identifying, analyzing, and classifying defects on printed circuit boards (PCBs). The proposed algorithm uses image registration, preprocessing, segmentation, defect detection, and defect classification. It is able to detect and classify 14 different types of defects. The algorithm is robust to variations in rotation, scale, and translation between the test and template images. It takes only 2.528 seconds to inspect and analyze a PCB image. Experimental results demonstrate the suitability of the proposed algorithm for automatic visual inspection of PCBs.
IRJET- Design of an Automated Attendance System using Face Recognition Algorithm (IRJET Journal)
This document describes the design of an automated attendance system using face recognition for Nigerian universities. It aims to provide an efficient alternative to the problematic paper-based attendance system. The proposed system uses single scale Retinex with bi-histogram equalization to enhance face images captured by a webcam. Then, the Viola-Jones algorithm is used to detect faces, and principal component analysis (PCA) is used to recognize faces by comparing them to template images stored in a database. The system was able to achieve over 90% accuracy in tests. If a captured face matches a stored template, a '1' is recorded to mark the student as present. Otherwise, a '0' is recorded to mark them as absent. The percentage
Progression in Large Age-Gap Face Verification (IRJET Journal)
1. The document discusses progression in large age-gap face verification. It summarizes various techniques used for face recognition from conventional methods to deep neural networks.
2. A typical face recognition system includes image acquisition, face detection, normalization, feature extraction, and then matching. The document reviews several feature extraction methods like Eigenfaces, Fisherfaces, and local binary patterns.
3. Recent deep learning approaches like convolutional neural networks have improved face recognition accuracy. One study used CNNs with metric learning and achieved high accuracy on the Labeled Faces in the Wild dataset. Encoding image microstructures and identity factor analysis has also been effective for matching similar faces.
IRJET- A Review on Face Detection and Expression Recognition (IRJET Journal)
This document reviews face detection and expression recognition techniques. It discusses common methods for face detection including knowledge-based, feature-based, template matching and appearance-based. For expression recognition it covers preprocessing, feature extraction using local binary patterns (LBP) and principal component analysis (PCA). LBP represents textures as histograms of local binary patterns. PCA performs dimensionality reduction to extract the most important features. The document also provides examples of implementing a basic face recognition system and compares LBP and PCA methods.
IRJET- Optical Character Recognition using Neural Networks by Classification ... (IRJET Journal)
This document proposes a method for optical character recognition (OCR) using neural networks and character classification based on symmetry. It involves three main phases: 1) A classification phase that categorizes training characters by their symmetry properties (vertical, horizontal, total, none), 2) A training phase that trains separate neural networks on each classified group, and 3) A recognition phase that preprocesses documents, extracts lines and characters, and recognizes characters using the trained neural networks. Experimental results show the method achieves up to 99.2% accuracy on Arial font without punctuation, outperforming traditional OCR methods. However, it requires 14.01 seconds for full network training.
The document describes a face recognition method that uses wavelet features extracted from images. It divides each face image into 12 blocks and applies discrete wavelet transform to each block. It extracts the mean of the horizontal and vertical wavelet coefficients as features. A neural network is trained on these features from training images and used to recognize faces in test images by classifying the extracted wavelet features.
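The block-wavelet features described above (12 blocks, mean horizontal and vertical coefficients per block) can be sketched with a hand-rolled one-level Haar transform. This is a minimal NumPy sketch under stated assumptions: the blocks are taken as horizontal strips, the Haar wavelet stands in for whatever wavelet the paper used, and the sub-band naming follows one common convention; all names are illustrative.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform; returns the four sub-bands
    (approximation LL, horizontal LH, vertical HL, diagonal HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row-pair details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def block_wavelet_features(face, blocks=12):
    """Split the face into horizontal strips and take the mean of the
    horizontal and vertical detail coefficients of each strip."""
    h = face.shape[0] // blocks
    feats = []
    for i in range(blocks):
        _, LH, HL, _ = haar_dwt2(face[i * h:(i + 1) * h, :])
        feats += [LH.mean(), HL.mean()]
    return np.array(feats)
```

The resulting 24-dimensional vector (two means per block) is what the summary's neural network would be trained on.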
IRJET- A Detailed Review of Different Handwriting Recognition Methods (IRJET Journal)
This document provides a detailed review of different methods for handwriting recognition, including incremental, semi-incremental, convolutional neural network, line and word segmentation, part-based, and slope and slant correction methods. It discusses the processes, benefits, and limitations of each approach. The document is intended as a review for researchers studying handwriting recognition techniques.
Face recognition systems are becoming increasingly important for security applications such as surveillance cameras. They use biometric facial features, which are easier to capture from non-collaborating individuals than other biometrics. The document outlines the steps of a face recognition system: acquiring an image, detecting faces, and recognizing faces to identify individuals. It discusses challenges such as illumination and occlusion, and categorizes methods as knowledge-based or appearance-based. The problem is to design a system for a robotics lab that detects and recognizes the frontal faces of at least 50 people under changing lighting, excluding sunglasses. The thesis outline covers the literature review, the proposed system's theory, experiments and results, discussion, and future work.
IRJET- Class Attendance using Face Detection and Recognition with OPENCV (IRJET Journal)
This document describes a system to automate class attendance using face detection and recognition with OpenCV. The system uses the Viola-Jones algorithm for face detection and local binary pattern histograms (LBPH) for face recognition. Detected faces are converted to grayscale images for better accuracy. The system trains on positive images of faces and negative images without faces to build a classifier. It then detects faces in class and recognizes students by matching features against a stored database, updating attendance and notifying administrators. The proposed system aims to reduce the time spent on manual attendance and to increase accuracy by automating the process through computer vision techniques.
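The LBPH descriptor behind the recognition step above assigns each pixel an 8-bit code from comparisons with its neighbours and histograms the codes. A minimal NumPy sketch, simplified relative to OpenCV's LBPH (no radius parameter, no spatial grid); the neighbour ordering is one common convention, and the function name is illustrative.

```python
import numpy as np

def lbp_histogram(gray):
    """8-neighbour local binary pattern codes and their normalized
    256-bin histogram for a grayscale image."""
    H, W = gray.shape
    c = gray[1:-1, 1:-1]                        # interior pixels
    # clockwise neighbours starting at the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit  # set bit if neighbour >= centre
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()
```

Recognition then reduces to comparing histograms (e.g. by chi-square distance) against those stored for each enrolled student.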
IRJET- Vehicle Seat Vacancy Identification using Image Processing Technique (IRJET Journal)
This document summarizes a research paper that proposes a system to identify vehicle seat vacancy using image processing techniques. A webcam installed in a vehicle captures passenger images and sends them to a server via 3G communication. The server then uses face detection algorithms like Viola-Jones, HOG, and CNN to detect and count faces in the images. This allows the system to calculate the vehicle's seat occupancy. The system can also estimate passengers' gender. The proposed system achieves real-time face detection and could help public transportation companies provide better customer service by displaying seat availability information.
IRJET- Multiple Feature Fusion for Facial Expression Recognition in Video: Su... (IRJET Journal)
This document discusses multiple feature fusion techniques for facial expression recognition in videos. It reviews existing literature on using features like Histogram of Oriented Gradients from Three Orthogonal Planes (HOG-TOP), geometric warp features, Convolutional Neural Networks (CNNs), Scale Invariant Feature Transform (SIFT) and combining these features through multiple feature fusion to improve facial expression recognition from video sequences. The document also analyzes techniques like HOG-TOP for extracting dynamic textures from videos and geometric features to capture facial configuration changes caused by expressions.
A novel approach for performance parameter estimation of face recognition bas... (IJMER)
This document presents a novel approach to face recognition based on clustering, shape detection, and corner detection. The approach first clusters face key points and applies shape- and corner-detection methods to find the face boundary and corners. It then performs both face identification and recognition on a large face database. The method achieves lower false acceptance rates, false rejection rates, and equal error rates than previous works, and also reports recognition time.
The document discusses face detection and recognition methods. It covers several topics:
1) Face detection can be categorized into knowledge-based (using facial features, skin color, templates) and image-based methods (AdaBoost, Eigenfaces, neural networks, support vector machines).
2) Important factors for face detection include facial features, skin color modeling, and template matching.
3) Face recognition uses linear/nonlinear projection methods, neural networks for 2D recognition, and 3D geometric measures for 3D recognition.
Facial Expression Recognition Using SVM Classifier (ijeei-iaes)
Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic, required by many applications such as human-computer interaction, computer graphics animation, and automatic facial expression recognition. In recent years, many computer vision techniques have been developed to track or recognize facial activities at three levels. At the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (mouth, eyebrows, etc.), captures detailed face-shape information. At the middle level, facial action recognition, i.e., recognition of the facial action units (AUs) defined in FACS, tries to recognize meaningful facial activities (lid tightener, eyebrow raiser, etc.). At the top level, facial expression analysis attempts to recognize the facial expressions that represent human emotional states. In the proposed algorithm, the eyes and mouth are first detected; their features are extracted using Gabor filters and Local Binary Patterns (LBP), and PCA is used to reduce the dimensionality of the features. Finally, an SVM classifies the expressions and facial action units.
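The Gabor feature extraction mentioned above starts from a bank of oriented kernels, each a sinusoid under a Gaussian envelope. A minimal NumPy sketch, assuming the real part of the filter and FFT-based circular convolution as a stand-in for proper 'same' filtering; parameter defaults are illustrative, not from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, psi=0.0, gamma=0.5):
    """Real part of a Gabor filter: a sinusoid of wavelength lam at
    orientation theta under an elliptical Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def filter_image(img, kernel):
    """'Same'-size filtering via FFT (circular boundary, fine for a sketch)."""
    k = np.zeros_like(img, dtype=float)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # centre at origin
    return np.real(ifft2(fft2(img) * fft2(k)))
```

A bank of such kernels over several orientations and scales, applied around the detected eye and mouth regions, produces the feature maps that LBP and PCA then summarize.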
Enhanced Latent Fingerprint Segmentation through Dictionary Based Approach (Editor IJMTER)
The accuracy of latent fingerprint matching is significantly lower than that of rolled and plain fingerprint matching because of background noise, poor ridge quality, and overlapping structured noise in latent images. In this paper, the proposed algorithm is a dictionary-based approach to automatic segmentation and enhancement, working toward the goal of a "lights-out" latent identification system. A total-variation decomposition model with L1 fidelity regularization removes background noise from the latent fingerprint image. A coarse-to-fine strategy is used to improve robustness and accuracy, and it improves the computational efficiency of the algorithm.
Deep learning methods have improved machine capabilities for face recognition. Three key modules are typically used: 1) A face detector localizes faces in images using region-based or sliding window approaches with DCNNs. 2) A fiducial point detector localizes facial landmarks using DCNN regressors or 3D models. 3) A DCNN extracts features for face recognition and verification by computing similarity scores between representations. Large annotated datasets have enabled DCNNs to learn robust representations for unconstrained images.
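The verification step in the pipeline above reduces to a similarity score between two embedding vectors compared against a tuned threshold. A minimal sketch under stated assumptions: cosine similarity is one common choice of score, and the threshold value is illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score between two face embedding vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(emb1, emb2, threshold=0.5):
    """Same-identity decision: accept when the score clears the threshold,
    which is tuned on a validation set for a target FPR/FNR trade-off."""
    return cosine_similarity(emb1, emb2) >= threshold
```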
Real time multi face detection using deep learning (Reallykul Kuul)
This document proposes a framework for real-time multiple face recognition using deep learning on an embedded GPU system. The framework includes face detection using a CNN, face tracking to reduce processing time, and a state-of-the-art deep CNN for face recognition. Experimental results showed the system can recognize up to 8 faces simultaneously in real-time, with processing times up to 0.23 seconds and a minimum recognition rate of 83.67%.
An Assimilated Face Recognition System with effective Gender Recognition RateIRJET Journal
This document summarizes an assimilated face recognition system that can also perform gender recognition. The system conducts experiments using databases like GENDER-FERET and Cambridge AT&T. For face recognition, it uses the Eigenfaces algorithm to extract features and classify faces. For gender recognition, it uses a trainable COSFIRE filter with Gabor filters to obtain face descriptors, which are classified using an SVM classifier. The experiments achieve a gender recognition rate of over 90%. The paper shows that the gender recognition approach outperforms other methods using handcrafted features and raw pixels.
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters - IJERA Editor
Biometrics recognizes a person more effectively than traditional methods of identification. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features, and the Euclidean distance between the features of database and test face images is used to compute performance parameters. Comparing the two, DTCWT is observed to give better results than the STWT technique.
IRJET- Analysis of Face Recognition using DocFace+ Selfie Matching - IRJET Journal
1. The document proposes a method called DocFace+ to automatically match ID document photos to selfie images in real-time with high accuracy.
2. It uses transfer learning by fine-tuning a base model trained on unconstrained face images on a private ID-selfie dataset. A pair of sibling networks are used to learn unified face representations.
3. Experimental results on an ID-selfie dataset show the proposed DocFace+ system achieves better generalization performance than publicly available general face matchers.
IRJET- Face Recognition of Criminals for Security using Principal Component A... - IRJET Journal
This document presents a face recognition system using principal component analysis to identify criminals at airports. The system is trained on images of known criminals collected from law enforcement agencies. It uses PCA for dimensionality reduction to generate eigenfaces from the training images. During testing, it generates an eigenface from the input image and calculates the Euclidean distance between this eigenface and the eigenfaces of the training images. It identifies the criminal as the one corresponding to the training image with the minimum distance, alerting authorities. The document outlines the methodology, including preprocessing steps like subtracting the mean face, and reviews prior work applying PCA and other algorithms to face recognition.
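The eigenface matching described above reduces to projecting faces onto a PCA basis and picking the training face at minimum Euclidean distance. A minimal sketch with synthetic data (the array sizes and the number of eigenfaces `k` are illustrative, not the paper's settings):

```python
import numpy as np

def train_eigenfaces(faces, k=4):
    """Learn a PCA subspace ("eigenfaces") from flattened training faces.
    faces: (n_samples, n_pixels) array; synthetic data here, not a real database."""
    mean = faces.mean(axis=0)
    centered = faces - mean              # subtract the mean face, as in the paper
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                  # top-k eigenvectors
    weights = centered @ eigenfaces.T    # project training faces into the subspace
    return mean, eigenfaces, weights

def identify(probe, mean, eigenfaces, weights):
    """Project the probe face and return the index of the nearest training
    face by Euclidean distance."""
    w = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Toy run with random "faces" (8 images of 64 pixels each).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(8, 64))
mean, ef, wts = train_eigenfaces(gallery)
assert identify(gallery[3], mean, ef, wts) == 3  # a training face matches itself
```

In a real deployment the minimum distance would also be compared against a threshold before raising an alert, so that unknown faces are not forced onto the nearest criminal.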
License Plate Recognition using Morphological Operation - Amitava Choudhury
This paper describes an efficient technique for locating and extracting a license plate and recognizing each segmented character. The proposed model can be subdivided into four parts: digitization of the image, edge detection, separation of characters, and template matching. In this work, we propose a method based on morphological operations, where different structuring elements (SE) are used to maximally eliminate the non-plate region and enhance the plate region. Character segmentation is done using connected component analysis, and a correlation-based template matching technique is used for character recognition. The system is implemented in MATLAB 7.4.0 and is mainly applicable to Indian license plates.
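The correlation-based template matching used for the character recognition step can be sketched with zero-mean normalized cross-correlation; the 3x3 "glyphs" below are toy stand-ins for segmented plate characters, not the actual templates:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-size patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_char(char_img, templates):
    """Return the label of the template with the highest correlation score."""
    scores = {label: ncc(char_img, t) for label, t in templates.items()}
    return max(scores, key=scores.get)

# Toy 3x3 "glyphs" standing in for segmented plate characters.
T = {"I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
     "L": np.array([[1, 0, 0], [1, 0, 0], [1, 1, 1]], float)}
noisy_I = T["I"] + 0.1 * np.ones((3, 3))  # brightness shift; NCC is invariant to it
assert match_char(noisy_I, T) == "I"
```

Subtracting the mean before correlating is what makes the score robust to uniform brightness changes between the plate image and the stored templates.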
IRJET- Automatic Identification, Analysis and Investigation of Printed Circui... - IRJET Journal
This document presents a method for automatically identifying, analyzing, and classifying defects on printed circuit boards (PCBs). The proposed algorithm uses image registration, preprocessing, segmentation, defect detection, and defect classification. It is able to detect and classify 14 different types of defects. The algorithm is robust to variations in rotation, scale, and translation between the test and template images. It takes only 2.528 seconds to inspect and analyze a PCB image. Experimental results demonstrate the suitability of the proposed algorithm for automatic visual inspection of PCBs.
IRJET- Design of an Automated Attendance System using Face Recognition Algorithm - IRJET Journal
This document describes the design of an automated attendance system using face recognition for Nigerian universities. It aims to provide an efficient alternative to the problematic paper-based attendance system. The proposed system uses single scale Retinex with bi-histogram equalization to enhance face images captured by a webcam. Then, the Viola-Jones algorithm is used to detect faces, and principal component analysis (PCA) is used to recognize faces by comparing them to template images stored in a database. The system was able to achieve over 90% accuracy in tests. If a captured face matches a stored template, a '1' is recorded to mark the student as present. Otherwise, a '0' is recorded to mark them as absent.
Progression in Large Age-Gap Face Verification - IRJET Journal
1. The document discusses progression in large age-gap face verification. It summarizes various techniques used for face recognition from conventional methods to deep neural networks.
2. A typical face recognition system includes image acquisition, face detection, normalization, feature extraction, and then matching. The document reviews several feature extraction methods like Eigenfaces, Fisherfaces, and local binary patterns.
3. Recent deep learning approaches like convolutional neural networks have improved face recognition accuracy. One study used CNNs with metric learning and achieved high accuracy on the Labeled Faces in the Wild dataset. Encoding image microstructures and identity factor analysis has also been effective for matching similar faces.
IRJET- A Review on Face Detection and Expression Recognition - IRJET Journal
This document reviews face detection and expression recognition techniques. It discusses common methods for face detection including knowledge-based, feature-based, template matching and appearance-based. For expression recognition it covers preprocessing, feature extraction using local binary patterns (LBP) and principal component analysis (PCA). LBP represents textures as histograms of local binary patterns. PCA performs dimensionality reduction to extract the most important features. The document also provides examples of implementing a basic face recognition system and compares LBP and PCA methods.
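The LBP texture descriptor compared in the review can be sketched in a few lines; the 3x3 neighbourhood and 256-bin histogram are the standard basic formulation, while the toy input image is purely illustrative:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code comparing its
    8 neighbours against the centre (bit set if neighbour >= centre)."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= centre).astype(np.uint8) << bit)
    return out

def lbp_histogram(gray):
    """256-bin normalized histogram of LBP codes, used as the texture descriptor."""
    codes = lbp_image(gray)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.arange(25, dtype=np.uint8).reshape(5, 5)  # toy gradient "face" patch
h = lbp_histogram(img)
assert abs(h.sum() - 1.0) < 1e-9
```

In practice the face is split into a grid of regions and the per-region histograms are concatenated, which preserves coarse spatial layout on top of the local texture codes.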
IRJET- Optical Character Recognition using Neural Networks by Classification ... - IRJET Journal
This document proposes a method for optical character recognition (OCR) using neural networks and character classification based on symmetry. It involves three main phases: 1) A classification phase that categorizes training characters by their symmetry properties (vertical, horizontal, total, none), 2) A training phase that trains separate neural networks on each classified group, and 3) A recognition phase that preprocesses documents, extracts lines and characters, and recognizes characters using the trained neural networks. Experimental results show the method achieves up to 99.2% accuracy on Arial font without punctuation, outperforming traditional OCR methods. However, it requires 14.01 seconds for full network training.
The document describes a face recognition method that uses wavelet features extracted from images. It divides each face image into 12 blocks and applies discrete wavelet transform to each block. It extracts the mean of the horizontal and vertical wavelet coefficients as features. A neural network is trained on these features from training images and used to recognize faces in test images by classifying the extracted wavelet features.
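The block-wise wavelet features described above can be sketched with a one-level Haar DWT (a stand-in for whatever wavelet family the paper actually used); the 3x4 block grid matches the 12 blocks mentioned:

```python
import numpy as np

def haar_dwt2(block):
    """One-level 2-D Haar DWT, returning (LL, LH, HL, HH) subbands.
    Implemented directly with 2x2 averages/differences; a stand-in for a
    library transform such as pywt.dwt2."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a + b - c - d) / 4   # horizontal detail
    hl = (a - b + c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def block_features(face, rows=3, cols=4):
    """Split the face into rows*cols blocks (12, as in the summary) and keep
    the mean of the horizontal and vertical detail coefficients per block."""
    feats = []
    for r in np.array_split(face, rows, axis=0):
        for blk in np.array_split(r, cols, axis=1):
            _, lh, hl, _ = haar_dwt2(blk)
            feats += [lh.mean(), hl.mean()]
    return np.array(feats)

face = np.random.default_rng(1).normal(size=(24, 32))
f = block_features(face)
assert f.shape == (24,)  # 12 blocks x 2 features each
```

The resulting 24-dimensional vector is what would be fed to the neural network classifier mentioned in the summary.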
IRJET - A Detailed Review of Different Handwriting Recognition Methods - IRJET Journal
This document provides a detailed review of different methods for handwriting recognition, including incremental, semi-incremental, convolutional neural network, line and word segmentation, part-based, and slope and slant correction methods. It discusses the processes, benefits, and limitations of each approach. The document is intended as a review for researchers studying handwriting recognition techniques.
Face recognition systems are becoming increasingly important for security applications such as surveillance cameras. They use biometric facial features, which are easier to capture from non-collaborating individuals than other biometrics. The document outlines the steps of a face recognition system: acquiring an image, detecting faces, and recognizing faces to identify individuals. It discusses challenges such as illumination and occlusion, and categorizes methods as knowledge-based or appearance-based. The problem is to design a system for a robotics lab that detects and recognizes the frontal faces of at least 50 people under changing lighting, excluding faces with sunglasses. The thesis outline covers the literature review, proposed system theory, experiments and results, discussion, and future work.
IRJET- Class Attendance using Face Detection and Recognition with OpenCV - IRJET Journal
This document describes a system to automate class attendance using face detection and recognition with OpenCV. The system uses the Viola-Jones algorithm for face detection and local binary pattern histograms for face recognition. Detected faces are converted to grayscale images for better accuracy. The system trains on positive images of faces and negative images without faces to build a classifier. It then detects faces in class and recognizes students by matching features to a stored database, updating attendance and notifying administrators. The proposed system aims to reduce time spent on manual attendance and increase accuracy by automating the process through computer vision techniques.
IRJET- Vehicle Seat Vacancy Identification using Image Processing Technique - IRJET Journal
This document summarizes a research paper that proposes a system to identify vehicle seat vacancy using image processing techniques. A webcam installed in a vehicle captures passenger images and sends them to a server via 3G communication. The server then uses face detection algorithms like Viola-Jones, HOG, and CNN to detect and count faces in the images. This allows the system to calculate the vehicle's seat occupancy. The system can also estimate passengers' gender. The proposed system achieves real-time face detection and could help public transportation companies provide better customer service by displaying seat availability information.
IRJET- Multiple Feature Fusion for Facial Expression Recognition in Video: Su... - IRJET Journal
This document discusses multiple feature fusion techniques for facial expression recognition in videos. It reviews existing literature on using features like Histogram of Oriented Gradients from Three Orthogonal Planes (HOG-TOP), geometric warp features, Convolutional Neural Networks (CNNs), Scale Invariant Feature Transform (SIFT) and combining these features through multiple feature fusion to improve facial expression recognition from video sequences. The document also analyzes techniques like HOG-TOP for extracting dynamic textures from videos and geometric features to capture facial configuration changes caused by expressions.
A novel approach for performance parameter estimation of face recognition bas... - IJMER
This document presents a novel approach for face recognition based on clustering, shape detection, and corner detection. The approach first clusters face key points, then applies shape and corner detection methods to detect the face boundary and corners. It performs both face identification and recognition on a large face database. The method achieves lower false acceptance rates, false rejection rates, and equal error rates than previous works, and also reports recognition time.
The document discusses face detection and recognition methods. It covers several topics:
1) Face detection can be categorized into knowledge-based (using facial features, skin color, templates) and image-based methods (AdaBoost, Eigenfaces, neural networks, support vector machines).
2) Important factors for face detection include facial features, skin color modeling, and template matching.
3) Face recognition uses linear/nonlinear projection methods, neural networks for 2D recognition, and 3D geometric measures for 3D recognition.
Facial Expression Recognition Using SVM Classifier - ijeei-iaes
Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision, required by many applications such as human-computer interaction, computer graphic animation, and automatic facial expression recognition. In recent years, many computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (i.e., mouth, eyebrows, etc.), captures detailed face shape information. Second, facial action recognition, i.e., recognizing the facial action units (AUs) defined in FACS, tries to recognize meaningful facial activities (e.g., lid tightener, eyebrow raiser). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotional states. In the proposed algorithm, the eyes and mouth are detected first; their features are extracted using Gabor filters and local binary patterns (LBP), and PCA is used to reduce the dimensionality of the features. Finally, an SVM is used to classify expressions and facial action units.
Enhanced Latent Fingerprint Segmentation through Dictionary Based Approach - Editor IJMTER
The accuracy of latent fingerprint matching is significantly lower than that of rolled and plain fingerprint matching due to background noise, poor ridge quality, and overlapping structured noise in latent images. This paper proposes a dictionary-based algorithm for automatic segmentation and enhancement toward the goal of a "lights-out" latent identification system. A total variation decomposition model with L1 fidelity regularization removes background noise from the latent fingerprint image, and a coarse-to-fine strategy improves robustness, accuracy, and the computational efficiency of the algorithm.
Deep learning methods have improved machine capabilities for face recognition. Three key modules are typically used: 1) A face detector localizes faces in images using region-based or sliding window approaches with DCNNs. 2) A fiducial point detector localizes facial landmarks using DCNN regressors or 3D models. 3) A DCNN extracts features for face recognition and verification by computing similarity scores between representations. Large annotated datasets have enabled DCNNs to learn robust representations for unconstrained images.
Real time multi face detection using deep learning - Reallykul Kuul
This document proposes a framework for real-time multiple face recognition using deep learning on an embedded GPU system. The framework includes face detection using a CNN, face tracking to reduce processing time, and a state-of-the-art deep CNN for face recognition. Experimental results showed the system can recognize up to 8 faces simultaneously in real-time, with processing times up to 0.23 seconds and a minimum recognition rate of 83.67%.
IRJET - A Review on Face Recognition using Deep Learning Algorithm - IRJET Journal
This document provides an overview of face recognition using deep learning algorithms. It discusses how deep learning approaches like convolutional neural networks (CNNs) have achieved high accuracy in face recognition tasks compared to earlier methods. CNNs can learn discriminative face features from large datasets during training to generalize to new images, handling variations in pose, illumination and expression. The document reviews popular CNN architectures and training approaches for face recognition. It also discusses other traditional face recognition methods like PCA and LDA, and compares their performance to deep learning methods.
Face recognition using assemble of low frequency of DCT features - journalBEEI
Face recognition is challenging due to variations in facial expression, direction, lighting, and scale. The system requires a suitable algorithm to perform the recognition task while keeping system complexity low. This paper focuses on the development of a new local feature extraction in the frequency domain that reduces the dimension of the feature space. In the proposed method, an assemblage of DCT coefficients is used to extract important features and reduce the feature vector, and PCA further reduces the feature dimension through a linear projection of the original image. The proposed assembly of low-frequency coefficients and the feature reduction method increase discriminant power in a low-dimensional feature space. Classification is performed using the Euclidean distance between the projections of test and training images. The algorithm is implemented on a DSP processor with the same performance as a PC-based system. In experiments on the standard ORL face database, the best performance achieved by this method is 100%, and the execution time to recognize 40 people is 0.3313 seconds on the DSP processor. The proposed method thus offers high recognition accuracy and fast computation when implemented on an embedded platform such as a DSP processor.
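The idea of assembling low-frequency DCT coefficients can be sketched as follows; the image size, the zigzag-style ordering, and the number of kept coefficients are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def low_freq_dct_features(img, keep=8):
    """2-D DCT of the image, keeping only the upper-left (low-frequency)
    coefficients in anti-diagonal (zigzag-style) order -- the coefficients
    that carry most of the facial energy."""
    D_rows = dct_matrix(img.shape[0])
    D_cols = dct_matrix(img.shape[1])
    coeffs = D_rows @ img @ D_cols.T
    # Sort (i, j) indices by anti-diagonal, then truncate to the triangle.
    zigzag = sorted((i + j, i, j) for i in range(keep) for j in range(keep))
    return np.array([coeffs[i, j] for _, i, j in zigzag])[: keep * (keep + 1) // 2]

img = np.random.default_rng(2).normal(size=(16, 16))
f = low_freq_dct_features(img)
assert f.shape == (36,)  # 8*(8+1)//2 low-frequency coefficients kept
```

Truncating to the low-frequency triangle is what shrinks the feature vector before the PCA projection mentioned in the abstract.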
A face recognition system using convolutional feature extraction with linear... - IJECEIAES
Face recognition is an important biometric authentication research area for security purposes in fields such as pattern recognition and image processing. However, face recognition remains a major problem for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, age, and illumination, which degrades recognition accuracy. In the present research, the resolution of image patches is reduced by the max pooling layer of a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to the CNN-based optimization in LCDRC, the distance ratio between classes is maximized and the distance between features within a class is reduced. The results show that CNN-LCDRC achieves 93.10% and 87.60% mean recognition accuracy, where traditional LCDRC achieves 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e., 80% training and 20% testing data).
1. The document discusses various techniques that have been proposed for face detection and attendance systems, including Haar classifiers, improved support vector machines, and local binary patterns algorithms.
2. It reviews several papers that have implemented different methods for face recognition for attendance systems, such as using HOG features and PCA for dimensionality reduction along with SVM classification.
3. The document also summarizes a paper that proposed a context-aware local binary feature learning method for face recognition that exploits contextual information between adjacent image bits.
Virtual Contact Discovery using Facial Recognition - IRJET Journal
The document describes a project that aims to use facial recognition as a means of contact discovery and metadata retrieval. The project seeks to optimize machine learning models for facial detection and verification in order to provide fast and accurate contact matching based on facial encodings. It outlines the objectives, scope, literature review, proposed system architecture and implementation details. The system would take facial landmarks and encodings to compare and rank the top 10 most similar encodings to identify matches from a database. The optimized model aims to reduce latency and improve accuracy for contact matching based on facial scans.
Research Inventy: International Journal of Engineering and Science - inventy
This document summarizes an approach for face recognition using local and global features with Discrete Cosine Transform (DCT). Local features like eyes, nose, and mouth are extracted from face images and DCT is applied to both the local features and the entire face image (global feature). Recognition rates are calculated and compared for local vs global features. When local and global features are combined, DCT gives a relatively high recognition rate while minimizing the false acceptance rate compared to using only global features. Face normalization is also performed to reduce variations from lighting, orientation, and other factors.
Realtime face matching and gender prediction based on deep learning - IJECEIAES
Face analysis is an essential topic in computer vision that deals with human faces for recognition or prediction tasks. The face is one of the easiest ways to distinguish a person's identity. Face recognition is a type of personal identification system that employs a person's personal traits to determine their identity. A human face recognition scheme generally consists of four steps: face detection, alignment, representation, and verification. In this paper, we propose to extract information from the human face for several tasks based on a recent advanced deep learning framework. The proposed approach outperforms the state-of-the-art results.
A hybrid approach for face recognition using a convolutional neural network c... - IAESIJAI
Facial recognition technology has been used in many fields such as security, biometric identification, robotics, video surveillance, health, and commerce due to its ease of implementation and minimal data processing time. However, this technology is affected by variations such as pose, lighting, or occlusion. In this paper, we propose a new approach to improve the accuracy of face recognition in the presence of variation or occlusion by combining feature extraction using histogram of oriented gradients (HOG), scale-invariant feature transform (SIFT), Gabor, and Canny contour detector techniques with a convolutional neural network (CNN) architecture, tested with several combinations of activation function (Softmax and Sigmoid) and training optimizer (Adam, Adamax, RMSprop, and stochastic gradient descent (SGD)). Preprocessing is performed on the two face databases used, ORL and Sheffield; features are then extracted with the mentioned techniques and passed to our CNN architecture. Our simulations show high performance for the SIFT+CNN combination, with an accuracy rate of up to 100% in the presence of variations.
Optimization of deep learning features for age-invariant face recognition - IJECEIAES
This paper presents a methodology for Age-Invariant Face Recognition (AIFR) based on the optimization of deep learning features. The proposed method extracts deep learning features using transfer learning from unprocessed face images. To optimize the extracted features, a Genetic Algorithm (GA) procedure is designed to select the features most relevant to identifying a person from facial images taken at different ages. For classification, K-Nearest Neighbor (KNN) classifiers with different distance metrics are investigated: correlation, Euclidean, cosine, and Manhattan. Experimental results using a Manhattan-distance KNN classifier achieve the best Rank-1 recognition rates of 86.2% and 96% on the standard FGNET and MORPH datasets, respectively. Compared to state-of-the-art methods, the proposed method needs no preprocessing stages, and the experiments show its advantage over other related methods.
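The KNN classification over deep features with a Manhattan metric can be sketched as follows, using a toy 2-D feature space in place of the transfer-learned, GA-selected features from the paper:

```python
import numpy as np

def knn_predict(train_x, train_y, probe, k=3, metric="manhattan"):
    """k-NN classifier over feature vectors. Manhattan (cityblock) distance
    gave the best Rank-1 rates in the summarized paper; Euclidean is shown
    as the alternative metric."""
    if metric == "manhattan":
        d = np.abs(train_x - probe).sum(axis=1)
    else:  # euclidean
        d = np.linalg.norm(train_x - probe, axis=1)
    nearest = train_y[np.argsort(d)[:k]]          # labels of k closest samples
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]              # majority vote

# Two well-separated identity clusters in a toy 2-D feature space.
X = np.array([[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]], float)
y = np.array([0, 0, 0, 1, 1, 1])
assert knn_predict(X, y, np.array([0.2, 0.3])) == 0
assert knn_predict(X, y, np.array([8.5, 8.5])) == 1
```

Swapping the `metric` argument is all it takes to reproduce the paper's comparison across distance functions on the same feature set.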
Research and Development of DSP-Based Face Recognition System for Robotic Reh... - IJCSES Journal
This article describes the development of a DSP-based face recognition system. Building on the background, significance, and current state of face recognition research at home and abroad, it studies face detection, image preprocessing, facial structure feature extraction, facial expression feature extraction, and classification, and realizes a DSP-based face recognition system for robotic rehabilitation nursing beds. The system uses a fixed-point DSP, the TMS320DM642, as its central processing unit, offering strong processing performance, high flexibility, and programmability.
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP - IRJET Journal
The document presents a study on an efficient face recognition method employing support vector machines (SVM) and biomimetic uncorrelated local difference projection (BU-LDP). The study proposes using BU-LDP, which is based on uncorrelated local projection but uses a different neighborhood coefficient calculation approach inspired by human perception. Experimental results on several datasets show that BU-LDP and its kernel variant KBU-LDP outperform state-of-the-art methods for face recognition. Future work will focus on addressing the "one sample problem" and applying the approach to unlabeled data.
In this paper, an attempt has been made to extract texture features from facial images using an improved illumination-invariant feature descriptor. The proposed local ternary pattern based feature extractor, Steady Illumination Local Ternary Pattern (SIcLTP), has been used to extract texture features from the Indian face database. Similarity matching between two extracted feature sets is obtained using the Zero Mean Sum of Squared Differences (ZSSD). The RGB facial images are first converted into the YIQ colour space to reduce the redundancy of the RGB images. The results, analysed using the receiver operating characteristic (ROC) curve, are found to be promising and are finally validated against a standard local binary pattern (LBP) extractor.
Face detection is one of the most suitable applications for image processing and biometric programs. Artificial neural networks have been used in many fields such as image processing, pattern recognition, sales forecasting, customer research, and data validation. Face detection and recognition have become among the most popular biometric techniques over the past few years, yet there is a lack of literature providing an overview of research on artificial-neural-network-based face detection. Therefore, this study reviews facial recognition studies and systems based on various artificial neural network methods and algorithms.
IRJET- Credit Card Authentication using Facial Recognition - IRJET Journal
This document describes a proposed system for credit card authentication using facial recognition. The system aims to address security issues with credit card fraud during online transactions. Currently, credit cards often rely on PIN codes for authentication, but PIN codes can be stolen or forgotten. The proposed system uses facial recognition technology to authenticate users during online credit card payments. When users register their credit card, their photo would be taken and their facial features extracted and stored in a database. During a payment, the system would compare the user's live photo to the stored facial features to verify their identity before approving the transaction. The document outlines the facial recognition process, including face detection, feature extraction using Local Binary Patterns, and face matching.
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx - AravindHari22
1) The document proposes a real-time unconstrained face recognition system using deep convolutional neural networks (DCNN).
2) The system performs face detection, extracts DCNN features, and computes similarity to perform face recognition on images and video frames.
3) It was tested on challenging datasets like CASIA-WebFace, IJB-A, and LFW and was able to achieve accurate recognition with variations in pose, illumination, expression, resolution and occlusion.
A comparative review of various approaches for feature extraction in Face rec... - Vishnupriya T H
This document provides an overview of various approaches for feature extraction in face recognition. It discusses common feature extraction algorithms such as PCA, DCT, LDA, and ICA. PCA is aimed at data compression while ensuring no information loss. DCT transforms images from spatial to frequency domains. LDA maximizes between-class variations and minimizes within-class variations. ICA determines statistically independent variables and minimizes higher-order dependencies. The document reviews several papers comparing the performance of these algorithms individually and in combination for face recognition applications.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M... - sipij
Physiological biometric traits such as face images are used to identify a person effectively. In this paper, we propose compression-based face recognition using transform domain features fused at the matching level. The 2-D images are converted into 1-D vectors using the mean to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features, with the low- and high-frequency coefficients of the DWT concatenated to obtain the final DWT features. Performance parameters are computed by comparing the FFT and DWT features of database and test images using the Euclidean distance (ED), and the FFT and DWT scores are fused at the matching level to obtain better results. The performance of the proposed method is observed to be better than that of existing methods.
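The matching-level fusion of FFT and DWT distances can be sketched as below; the row-mean compression, the one-level Haar DWT, and the equal fusion weight are simplifying assumptions, not the paper's exact settings:

```python
import numpy as np

def fft_features(vec, keep=8):
    """Magnitudes of the first few FFT coefficients of the 1-D signature."""
    return np.abs(np.fft.rfft(vec))[:keep]

def haar_features(vec):
    """One-level 1-D Haar DWT: approximation (low) and detail (high)
    coefficients concatenated, as in the summarized method."""
    a = (vec[0::2] + vec[1::2]) / np.sqrt(2)   # low-frequency half
    d = (vec[0::2] - vec[1::2]) / np.sqrt(2)   # high-frequency half
    return np.concatenate([a, d])

def fused_score(img_a, img_b, w=0.5):
    """Compress each image to a 1-D row-mean vector, compare FFT and DWT
    features by Euclidean distance, and fuse the two scores at matching level."""
    va, vb = img_a.mean(axis=1), img_b.mean(axis=1)
    d_fft = np.linalg.norm(fft_features(va) - fft_features(vb))
    d_dwt = np.linalg.norm(haar_features(va) - haar_features(vb))
    return w * d_fft + (1 - w) * d_dwt

rng = np.random.default_rng(3)
a = rng.normal(size=(16, 16))
assert fused_score(a, a) == 0.0  # identical images fuse to a zero distance
```

Fusing at the matching level (combining distances) rather than at the feature level keeps the two transforms independent, so either branch can be re-weighted without recomputing features.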
Illumination Invariant Face Recognition System using Local Directional Patter... - Editor IJCATR
The document proposes an illumination-robust face recognition system using local directional pattern (LDP) images and two-dimensional principal component analysis (2D-PCA). It extracts LDP images from face images as inputs to 2D-PCA for feature extraction, instead of using LDP for histogram feature extraction like previous works. Experimental results on the Yale B and CMU-PIE databases show the proposed approach achieves higher recognition accuracy than PCA and LBP-based methods, demonstrating its effectiveness in handling illumination variations. The maximum recognition rates are 96.43% for Yale B and 100% for CMU-PIE illumination sets.
Similar to Real Time Face Recognition Based on Face Descriptor and Its Application
Amazon products reviews classification based on machine learning, deep learni... - TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product's manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW) and global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models, support vector machine (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM), and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML, DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
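The TF-IDF features mentioned above can be sketched with a minimal vectorizer; the smoothing scheme shown is one common variant, not necessarily the one used in the paper:

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF vectorizer: term frequency times smoothed inverse
    document frequency, returning one sparse dict of weights per document."""
    df = Counter()
    for doc in docs:
        df.update(set(doc.split()))     # document frequency: one count per doc
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vecs.append({t: (c / total) * math.log((1 + n) / (1 + df[t]) + 1)
                     for t, c in tf.items()})
    return vecs

reviews = ["great product really great", "terrible product broke fast"]
v = tfidf(reviews)
# "great" appears only in the first review, so it outweighs the shared
# word "product" there.
assert v[0]["great"] > v[0]["product"]
```

Libraries such as scikit-learn's `TfidfVectorizer` implement the same idea with additional options (n-grams, normalization), which is what a real pipeline would use.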
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. In this work, Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to show the recommended antenna design. The goal of this study was to get a more extensive transmission capacity, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was to get a higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the supplied antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the recreation uncovered that the transfer speed side-lobe level at phi was much better than those of the earlier works, at -28.8 dB, respectively. Thus, it makes a solid contender for remote innovation and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos.The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often effects the characteristics of either linear or nonlinear systems. Plus, the stabilization process may be deteriorated thus incurring a catastrophic effect to the system performance. As such, this manuscript addresses the concept of matching condition for the systems that are suffering from miss-match uncertainties and exogeneous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in the proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty and the system with miss-matched uncertainty. Lastly, two-dimensional numerical expressions are provided to practice the proposed proposition. The outcome shows that matching condition has the ability to change the system to a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance ver the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Literature Review Basics and Understanding Reference Management.pptx
Real Time Face Recognition Based on Face Descriptor and Its Application
TELKOMNIKA, Vol. 16, No. 2, April 2018, pp. 739–746
ISSN: 1693-6930, accredited A by DIKTI, Decree No: 58/DIKTI/Kep/2013
DOI: 10.12928/telkomnika.v16.i2.7418
Real Time Face Recognition Based on Face Descriptor and Its Application
I Gede Pasek Suta Wijaya*, Ario Yudo Husodo, and I Wayan Agus Arimbawa
Department of Informatics Engineering, Engineering Faculty, Mataram University
Jl. Majapahit 62 Mataram, Lombok, Indonesia
*Corresponding Author, email: gpsutawijaya@unram.ac.id, ario@ti.ftunram.ac.id, arimbawa@unram.ac.id
Abstract
This paper presents a real-time face recognition system based on a face descriptor and its application to door locking. The face descriptor is represented by both local and global information. The local information, which is the dominant frequency content of each sub-face, is extracted by zoned discrete cosine transforms (DCT), while the global information, which is the dominant frequency content and shape information of the whole face, is extracted by DCT and by Hu moments. The face descriptor therefore carries rich information about a face image, which tends to provide good performance for real-time face recognition. To decrease the dimensionality of the face descriptor, predictive linear discriminant analysis (PDLDA) is employed, and face classification is done by kNN. The experimental results show that the proposed real-time face recognition provides good performance, as indicated by an accuracy, FPR, and FNR of 98.30%, 21.99%, and 1.8%, respectively. In addition, it requires only a short computational time (about 1 second).
Keywords: face recognition, real time, LDA, face descriptor, face classification
Copyright © 2018 Universitas Ahmad Dahlan. All rights reserved.
1. Introduction
This paper presents an application of real-time face recognition based on a face descriptor for a door locking system. The face descriptor is the dominant frequency content of each sub-face (local) and of the whole face (global), extracted by zoned DCT, non-zoned DCT, and Hu moments. The main aim of the DCT-coefficient-based face descriptor is to capture rich information about the face image, which can give better performance than the compact features (CF) based method [1] for real-time face recognition. Predictive linear discriminant analysis (PDLDA) is employed to reduce the dimensionality of the descriptor, and the k-nearest neighbor (kNN) classifier is used for verification. The main aim of this work is to obtain face recognition that is robust against lighting variation and can be applied to a security system, i.e., a door locking system, which is an extended version of our previous work [2].
Face recognition has been widely studied [3]; approaches include statistical methods (ICA, PCA, and naive Bayes), global-feature methods, artificial-intelligence methods (e.g., genetic algorithms, artificial neural networks, and SVM), and their variations [1, 2]. The most popular algorithms are based on subspace projection: LDA, eigenfaces (PCA), and their variants [4, 5]. LDA and its variants are popular because of their simple implementation and low computational complexity. In addition, their discriminative power is higher than that of PCA, which makes LDA-based methods perform better than PCA.
Discrete cosine transform (DCT) based face recognition [6] has been reported to perform well compared with other approaches. Both PCA and LDA can be executed directly on images in the JPEG standard format without performing the inverse DCT, because they can work in the DCT domain. DCT-based systems require certain normalization techniques to handle variations in facial geometry and illumination. However, these approaches extracted face features using only block-based DCT. A face recognition method that selects 75% to 100% of the DCT coefficients and sets the high frequencies to zero has been proposed to handle the illumination problem [6]. However, it needs high computational time
Received September 28, 2017; Revised December 21, 2017 ; Accepted January 18, 2018
ISSN: 1693-6930
because inverse DCT transforms and Contrast Limited Adaptive Histogram Equalization (CLAHE)
is mandatory to obtain an illumination invariant face image.
Regarding real-time face recognition algorithms [7, 8], the eigenface (PCA) approach has mostly been the one implemented successfully. However, PCA lacks discriminative power, which limits the accuracy of such systems. In addition, a combination of compact features (CF) and LDA projection has been applied to real-time face recognition [1]. The CF vector was extracted by LBP and zoned DCT, classification was performed by the nearest-neighbor rule, and LDA was employed for dimensionality reduction of the CF vector.
Therefore, this paper proposes an alternative real-time face recognition method using a DCT-coefficient-based face descriptor, which consists of the dominant frequency content extracted by non-zoned discrete cosine transform (DCT), local features extracted by zoned (block-based) DCT, and shape information extracted by Hu moments. The DCT-coefficient-based face descriptor tends to improve on CF-based face recognition because it carries richer information.
2. Proposed Method
In this research, there are two main modules: a face recognition engine and its implementation in a door-locking system. The face recognition engine has three subsystems: face detection, feature extraction, and recognition/verification rules, as shown in Fig. 1(a). The door-locking system consists of the face recognition engine and a solenoid control circuit, as presented in Fig. 1(b).
2.1. Proposed Face Recognition Engine
The mechanism of face recognition and verification can be described as follows:
1. Suppose the training set is given to the recognition engine to estimate the machine parameters and train the engine. The face image descriptors that are
Figure 1. Diagram blocks: (a) face recognition engine [2] and (b) door-locking based on face image.
TELKOMNIKA Vol. 16, No. 2, April 2018 : 739 ∼ 746
extracted during the training process are stored in the database as registered face signatures. Each face image descriptor is extracted using fast zoned and non-zoned DCT and Hu moments; a small subset of the transform coefficients with the greatest magnitudes is then selected. The chosen coefficients are quantized to sharpen the key features, forming what is called the face signature.
2. In the recognition process, the query face signature is extracted with the same technique as in the training process. Next, a similarity score is computed by matching the query face signature against the registered face signatures; the smallest score indicates the best match.
3. In the verification process, kNN is used to find the registered face signature that the query face signature most probably matches. If the query face signature is closest to the registered signatures of class A, the query is verified as class A. kNN is chosen because it has been shown to give good performance (91.5% recognition rate and 2.66 seconds of computational time) for face recognition on small, compact devices (ARM processors) [9].
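As a rough illustration of the verification step, the sketch below implements a kNN majority vote over Euclidean distances between signatures. The `threshold` rejection for unregistered faces is our own assumption for illustration, not a detail given in the paper.

```python
import numpy as np

def knn_verify(query, signatures, labels, k=3, threshold=None):
    """Assign the query face signature to the majority class among its k
    nearest registered signatures (Euclidean distance). If `threshold` is
    given and even the nearest signature is farther than it, the query is
    rejected as unknown (returns None)."""
    dists = np.linalg.norm(signatures - query, axis=1)
    nearest = np.argsort(dists)[:k]
    if threshold is not None and dists[nearest[0]] > threshold:
        return None
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy registered database: two signatures per class.
sigs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = ["A", "A", "B", "B"]
```

A query near the class-A signatures, e.g. `knn_verify(np.array([0.05, 0.05]), sigs, labels, k=3)`, is verified as `"A"`; a far-away query with a threshold set is rejected.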
2.1.1. Face Acquisition
Face images are acquired using a standard USB camera. Next, histogram equalization is applied to reduce the effect of lighting conditions during acquisition. Finally, Haar-like face detection [10], which has been widely examined and provides robust performance among detection algorithms, is employed. The face detection algorithm starts with face localization to define a region of interest (ROI) of the face; the detected face ROI is then confirmed by detecting the two eyes inside it; finally, the confirmed face ROI is cropped and passed to the face recognition engine for further processing. The face detection steps are illustrated in Fig. 2.
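The histogram equalization applied during acquisition can be sketched in pure NumPy; it is the classic CDF remapping (the same operation OpenCV's `equalizeHist` performs). This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image: remap
    intensities through the normalized cumulative histogram so the output
    spreads over the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF value of the darkest populated bin
    span = max(cdf[-1] - cdf_min, 1)   # avoid division by zero on flat images
    lut = np.clip(np.round((cdf - cdf_min) / span * 255), 0, 255).astype(np.uint8)
    return lut[gray]                   # apply the lookup table per pixel
```

For a low-contrast face image whose intensities sit in a narrow band, the output stretches them to cover 0 through 255.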
Based on our real-time experiments, Haar-like face detection provides robust performance against large illumination variations and requires short processing time. In addition, a Difference of Gaussians (DoG) filter can be used for illumination normalization, since it behaves like low-pass filtering. The DoG is fast and efficient because it replaces the computationally intensive Laplacian of Gaussian with a simple subtraction of two Gaussian-blurred images, and the DoG of an image is approximately the same as its Laplacian of Gaussian.
Figure 2. Face detection algorithms: (a) face localization, (b) eyes detection, (c) cropping face
2.1.2. Face Descriptor Extraction
In this paper, the face descriptor extraction process is shown as a block diagram in Fig. 3. Filtering and contrast stretching are also employed to reduce the effect of lighting variation during face capture. In detail, the face descriptor is extracted in the following steps:
1. Perform the local binary pattern (LBP) transform, followed by non-zoned DCT (on the entire image), to obtain the global information of the face image. LBP and its variants have been successfully applied to face recognition [11]. In this case, a small number of coefficients (fewer than 64) are selected as global information; the LBP step makes this global information robust against illumination.
2. Perform zoned DCT (as performed in JPEG compression) to obtain local features of the face image, as shown in Fig. 4. In this case, fewer than four coefficients are selected from
Figure 3. Face descriptor extraction processes

Figure 4. Local features extraction processes
each zone as local features. The local features represent specific information of each sub-face, which is concentrated in a few low-frequency components.
3. Perform shape analysis using Hu moments to obtain shape information of the face image. In this case, only the first four moments are considered, because the values of the fifth to seventh moments are close to zero; that is, no shape information is available in those moments.
4. Finally, combine the global information, local features, and shape information into a rich face descriptor.
In this work, the face descriptor is thus represented by the dominant frequency content of the whole face and of the sub-face images. The local features capture most of the key-point information. Like SIFT features, this face descriptor is information-rich and tends to be lighting invariant, because the lighting variation has already been reduced by filtering and contrast stretching.
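The descriptor pipeline above can be sketched in pure NumPy. This is a minimal illustration under our own assumptions about block size, which coefficients are kept, and their ordering; the authors' exact choices (and the filtering/contrast-stretching step) are not reproduced here.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (the transform used in JPEG).
    k = np.arange(n)[:, None]
    m = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def dct2(block):
    # 2-D DCT of a square block via the separable matrix form.
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def lbp(gray):
    # Basic 3x3 local binary pattern: compare 8 neighbours to the centre.
    g = gray.astype(np.int32)
    h, w = g.shape
    centre = g[1:-1, 1:-1]
    code = np.zeros_like(centre)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= centre).astype(np.int32) << bit
    return code

def hu_moments(img, count=4):
    # First `count` Hu invariants (the paper keeps moments 1-4 only).
    g = img.astype(np.float64)
    y, x = np.mgrid[:g.shape[0], :g.shape[1]]
    m00 = g.sum()
    cx, cy = (x * g).sum() / m00, (y * g).sum() / m00
    mu = lambda p, q: (((x - cx) ** p) * ((y - cy) ** q) * g).sum()
    eta = lambda p, q: mu(p, q) / m00 ** (1 + (p + q) / 2.0)
    phi = [eta(2, 0) + eta(0, 2),
           (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2,
           (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2,
           (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2]
    return np.array(phi[:count])

def face_descriptor(face, block=8):
    # Local part: zoned (block-based) DCT, 4 low-frequency coeffs per zone.
    f = face.astype(np.float64)
    h, w = f.shape
    local = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            c = dct2(f[i:i + block, j:j + block])
            local += [c[0, 0], c[0, 1], c[1, 0], c[1, 1]]
    # Global part: non-zoned DCT of the LBP image, 64 low-frequency coeffs.
    g = dct2(lbp(face).astype(np.float64))
    global_part = g[:8, :8].ravel()
    # Shape part: first four Hu moments of the face itself.
    return np.concatenate([local, global_part, hu_moments(f)])
```

For a 32x32 input this yields 16 zones x 4 local coefficients + 64 global coefficients + 4 Hu moments = 132 elements; the Hu part is translation invariant, which is why it carries shape rather than position information.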
2.1.3. Dimensional Reductions
In this paper, the predictive LDA (PDLDA [1]) algorithm is employed to reduce the face descriptor size. PDLDA is similar to LDA in that it defines the optimum projection matrix, W, by eigen-analysis of the between-class scatter, Sb, and the within-class scatter, Sw [1, 4]. W has to satisfy Eq. (1):
$$J_{\mathrm{LDA}}(W) = \arg\max_{W} \frac{\lvert W^{T} S_b W \rvert}{\lvert W^{T} S_w W \rvert} \qquad (1)$$
This algorithm has been shown to avoid the retraining problem of LDA. It does so by redefining Sb and Sw using the global mean, µa, which is estimated from l sub-samples randomly selected from the given data set.
Finally, the dimensional reduction is done by Eq. (2):

$$y_{i}^{k} = W^{T} x_{i}^{k} \qquad (2)$$

where y_i^k is the projected face descriptor and x_i^k is an input face descriptor. With this projection, the input face descriptor can be reduced by more than 50% of its original size.
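A minimal NumPy sketch of the projection in Eqs. (1) and (2) follows. Note one simplification: PDLDA estimates the global mean from a random sub-sample to avoid retraining, whereas this sketch uses the full-sample mean.

```python
import numpy as np

def lda_projection(X, y, n_components=None):
    """Optimum projection W from eigen-analysis of Sw^{-1} Sb, i.e. the
    maximizer of Eq. (1). Rows of X are face descriptors, y the labels."""
    X = np.asarray(X, dtype=np.float64)
    y = np.asarray(y)
    classes = np.unique(y)
    mu = X.mean(axis=0)                   # global mean (mu_a), full sample here
    d = X.shape[1]
    Sw = np.zeros((d, d))                 # within-class scatter
    Sb = np.zeros((d, d))                 # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Generalized eigenproblem Sb w = lambda Sw w, via pseudo-inverse.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    k = n_components or min(len(classes) - 1, d)
    return vecs[:, order[:k]].real        # columns are the projection axes

# Eq. (2): each descriptor x is reduced with y = W.T @ x (batch: Y = X @ W).
```

With C classes the projection has at most C-1 useful dimensions, which is how the descriptor shrinks well below its original size.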
2.2. Door-Locking System
The door-locking hardware consists of five subsystems: a Raspberry module, an output power-gain circuit, a server, a network switch, and a door solenoid. The Raspberry module controls whether the door solenoid is locked or unlocked, depending on the output status given by the recognition software on the server. The server provides a logic value of 1 (unlocked) or 0 (locked); this value is written to a file that can be accessed over the network through a web server installed on the server. A network switch connects the server and the Raspberry through the computer network.
The Raspberry's initial mode is set to 0, which locks the door solenoid. The Raspberry continuously checks the server's output status through the network. If the server status differs from the current status, the Raspberry runs the program that commands the solenoid to lock or unlock accordingly. The latest status then becomes the new initial status, and the checking process continues.
The Raspberry uses the server's logic output as its input and a Raspberry GPIO (general-purpose input/output) pin as its output. The GPIO output status is used by the door solenoid as a command to lock or unlock the door. Since the solenoid requires a 12 VDC input and the Raspberry GPIO output is 3.3 VDC, a relay is needed to drive the solenoid. The relay requires at least a 5 V input, which is higher than the Raspberry GPIO output (3.3 V), and likewise more current than the GPIO can supply. To drive the relay, the Raspberry therefore needs a simple power-gain circuit, which can be built from a transistor and a few resistors, as shown in Fig. 5.
Figure 5. Circuits of door locking hardware system
3. Experiments and Result Discussions
Both off-line and real-time experiments were carried out to assess the performance of the face recognition engine based on the face descriptor (FD). Four well-known face datasets were chosen for the off-line experiments: ORL [1, 12], Image Media Laboratory Kumamoto University (ITS) [1, 4], India (IND) [4], and Yale B [13]. The ORL dataset has 400 grayscale faces taken from 40 persons; examples of its face variations are presented in Fig. 6(a) [1]. The ITS face database belongs to the Image Media Laboratory of Kumamoto University and contains East Asian, especially Japanese and Chinese, face images. ITS has 90 samples, and each sample has
Figure 6. Examples of face variations of the tested datasets: (a) ORL dataset [12], (b) ITS dataset [4], (c) IND face dataset [2], (d) Yale B [13].
10 to 15 face variations; examples of the face variations of the ITS database are presented in Fig. 6(b) [1]. Third, the India dataset is a color face image dataset with 61 persons (22 female and 39 male). There are eleven pose variations, as presented in Fig. 6(c); some facial expressions, such as smile, disgust, neutral, and laugh, are also included in this dataset [4]. The Yale B dataset is divided into four sub-sets, as shown in Fig. 6(d). In this case, sub-set 1 was chosen for training and the remaining sub-sets for testing.
In addition, the off-line experiments were carried out under the following conditions: first, 50% of the faces in each dataset were randomly selected as training data and the remainder as query images; second, 10-fold cross-validation was applied for performance evaluation; finally, recognition rate and computational time were used as performance indicators.
Figure 7. Off-line performances of our face recognition compared to the baseline (CF-based method [1]) on the tested datasets: (a) recognition rate (%) on ITS, ORL, and IND — ITS: CF 98.31, FD 99.14; ORL: CF 92.75, FD 98.35; IND: CF 82.11, FD 92.66; (b) recognition rate (%) on Yale B — S1 vs. S2: CF 100, FD 100; S1 vs. S3: CF 75.60, FD 89.89; S1 vs. S4: CF 12.55, FD 16.54.
The experimental results (see Fig. 7) show that the FD-based face recognition engine performs better than the baseline, CF-based face recognition [1]. On average, the FD-based engine achieves about a 96.72% recognition rate on the ORL, ITS, and IND datasets (see Fig. 7(a)); in other words, it improves the performance of CF-based face recognition [1] by about 5.66%. This is achieved because our face descriptor carries rich information, combining global information, local features, and shape information. The global and local information is represented by a few low-frequency components of the whole face and the sub-faces. This result is in line with the basic theory of signal processing: most of a signal's information is located in its low-frequency components.
In terms of robustness to lighting variation, the FD-based method also performs better than CF (see Fig. 7(b)). This shows that the FD's rich information is robust to lighting variation, thanks to the filtering and contrast stretching applied before extraction.
Regarding execution time, the FD-based face recognition engine takes, on average, less than 1 second to match a query face descriptor against the registered face descriptors for all tested datasets. This is possible because the face descriptor is represented by only 32 elements, versus the original 128x128-pixel face images. Based on the off-line experimental data, the proposed face recognition engine is a promising electronic key for a door-locking system.
In the real-time experiments, the system was tested using face images with large variability in pose and capture time. In this case, 1002 face images were collected using a Logitech C300 web camera (1.3 MP, 1280 x 1024) from 13 staff members of the Informatics Engineering Dept., Engineering Faculty, Mataram University, over five days. From this dataset, 159 face images captured on the first day (about 11 images per person) were used for training, and 843 faces were prepared for testing. Examples of the face variations are shown in Fig. 8.
The evaluation parameters for the real-time experiments were accuracy, False Positive Rate (FPR), False Negative Rate (FNR), and computational time. The evaluation results affirm that the proposed face recognition engine using the face descriptor has performed properly,
Figure 8. Examples of face image variations for the real-time evaluation (13 staff members, days 1 to 5).
Figure 9. Performance of the real-time experiments: (a) accuracy, FNR, and FPR (%) — accuracy: CF 97.96, FD 98.46; FNR: CF 25.03, FD 17.66; FPR: CF 2.20, FD 1.67; (b) sensitivity and precision (%) — sensitivity: CF 74.97, FD 82.34; precision: CF 73.50, FD 81.45.
which is indicated by more than 98% accuracy and less than 2% FPR and 18% FNR, respectively (see Fig. 9(a)). These performances are achieved because the recognition engine, using the DCT-based face descriptor and sub-space analysis, separates the data well. Compared with CF-based face recognition, our proposed method significantly decreases the FNR, by about 7.37%, while the accuracy and FPR change only slightly, as presented in Fig. 9(a).
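For reference, the evaluation parameters used in this section follow the standard confusion-count definitions, sketched below; the counts in the example are illustrative, not taken from the experiments.

```python
def rates(tp, fp, tn, fn):
    """Standard evaluation parameters from raw confusion counts:
    true/false positives (tp, fp) and true/false negatives (tn, fn)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)              # False Positive Rate
    fnr = fn / (fn + tp)              # False Negative Rate
    sensitivity = tp / (tp + fn)      # a.k.a. recall; equals 1 - FNR
    precision = tp / (tp + fp)
    return accuracy, fpr, fnr, sensitivity, precision
```

This makes the relationship visible in the results: lowering the FNR directly raises sensitivity, which is exactly the pattern in Fig. 9.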
The FD-based face recognition engine also significantly improves the sensitivity and precision of CF-based face recognition, by about 7.37% and 7.95% respectively, as shown in Fig. 9(b). This also affirms that our proposed method can handle the false-negative problem of the baseline method (a correct person being falsely recognized as someone else). Overall, the real-time performance confirms the off-line results: the proposed method improves on the baseline.
The last experiment evaluated the FD-based face recognition engine in the door-locking system, which was used by the staff of the Informatics Engineering Dept., Engineering Faculty, Mataram University, for one week. The door-locking system worked properly, with accuracy, FPR, and FNR of about 98.30%, 21.99%, and 1.8%, respectively. This last result reaffirms that the FD is effective for a real-time face recognition engine.
4. Conclusion and Future Work
The real-time FD-based face recognition engine gives better performance than the CF baseline. In the off-line evaluations, it provides a high recognition rate (on average more than 96%) on all tested datasets, while the real-time experiments give high accuracy (more than 98%) and low false verification rates (about 17.66% false negative rate and 1.67% false positive rate). Regarding computational time, the proposed electronic-key simulator needs less than 1 second for the matching process. In addition, the FD-based face recognition engine also works properly in the door-locking application, which is indicated by 98.30%, 21.99%, and 1.8% of accuracy, FPR, and FNR respectively.
The door-locking system based on face images still has to be evaluated on a larger dataset to assess its robustness against large variability of face images in pose, lighting, and accessories. In addition, the proposed system could be improved by adding illumination compensation, such as Contrast Limited Adaptive Histogram Equalization (CLAHE), to reduce false-negative recognition.
Acknowledgment
We would like to express our great appreciation to the staff of the Informatics Engineering Dept. for their participation in the evaluation of this system. We also extend our gratitude to the Minister of Research and Higher Education of the Republic of Indonesia for research funding under the competitive research grant scheme, 2015-2016.
References
[1] I. G. P. S. Wijaya, A. Y. Husodo, and A. H. Jatmika, “Real time face recognition engine using
compact features for electronics key,” in International Seminar on Intelligent Technology and
Its Applications (ISITIA), Lombok, Indonesia, July 2016.
[2] I. G. P. S. Wijaya, A. Y. Husodo, and I. W. A. Arimbawa, "Real time face recognition using dct coefficients based face descriptor," in International Conference on Informatics and Computing (ICIC 2016), Lombok, Indonesia, October 2016.
[3] R. Chellappa, C. L. Wilson, and S. Sirohey, “Human and machine recognition of faces: a
survey,” Proceedings of the IEEE, vol. 83, no. 5, pp. 705–741, May 1995.
[4] I. G. P. S. Wijaya, K. Uchimura, and G. Koutaki, "Face recognition based on incremental predictive linear discriminant analysis," IEEJ Transactions on Electronics, Information and Systems, vol. 133, no. 1, pp. 74–83, 2013.
[5] J. Zhang and D. Scholten, “A face recognition algorithm based on improved contourlet
transform and principle component analysis,” TELKOMNIKA (Telecommunication Comput-
ing Electronics and Control), vol. 14, no. 2A, pp. 114–119, 2016.
[6] A. Thamizharasi and J. Jayasudha, “An illumination invariant face recognition by selection of
dct coefficients,” International Journal of Image Processing (IJIP), vol. 10, no. 1, p. 14, 2016.
[7] H. H. Lwin, A. S. Khaing, and H. M. Tun, “Automatic door access system using face recog-
nition,” International Journal of Scientific & Technology Research, vol. 4, no. 6, pp. 210–221,
Jun 2016.
[8] M. Baykara and R. Da, “Real time face recognition and tracking system,” in Electronics,
Computer and Computation (ICECCO), 2013 International Conference on, Nov 2013, pp.
159–163.
[9] E. Setiawan and A. Muttaqin, "Implementation of k-nearest neighbors face recognition on low-power processor," TELKOMNIKA (Telecommunication Computing Electronics and Control), vol. 13, no. 3, pp. 949–954, 2015.
[10] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,”
in Proceedings of the conference on Computer Vision and Pattern Recognition, 2001, pp.
511–518.
[11] S. R. Konda, V. Kumar, and V. Krishna, “Face recognition using multi region prominent lbp
representation,” International Journal of Electrical and Computer Engineering, vol. 6, no. 6,
p. 2781, 2016.
[12] F. S. Samaria and A. C. Harter, "Parameterisation of a stochastic model for human face identification," in Proceedings of the Second IEEE Workshop on Applications of Computer Vision. IEEE, 1994, pp. 138–142.
[13] Yale, "The extended Yale face database B," 2001. [Online]. Available: http://vision.ucsd.edu/~iskwak/ExtYaleDatabase/ExtYaleB.html