This document presents a method for classifying different types of dates using computer vision and image analysis techniques. Fifteen visual features were extracted from images of dates, including color means and standard deviations, size, shape, and texture. Multiple classification methods were tested, including k-nearest neighbors, linear discriminant analysis, and neural networks. Top accuracies for classifying between seven different types of dates ranged from 89% to 99%. The automated visual classification system has the potential to improve date sorting efficiency in the agriculture industry compared to manual sorting.
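As a rough illustration of the classification step, a minimal k-nearest-neighbours sketch in plain Python is shown below; the feature values and date-type labels are invented for the example and are not taken from the study.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors (squared Euclidean distance)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-feature vectors (e.g. mean hue, fruit length) for two date types.
train = [[0.10, 3.0], [0.20, 3.2], [0.15, 2.9],   # 'medjool'
         [0.70, 5.0], [0.80, 5.2], [0.75, 4.9]]   # 'deglet'
labels = ['medjool'] * 3 + ['deglet'] * 3
pred = knn_predict(train, labels, [0.72, 5.1], k=3)
```

A real system would use all fifteen extracted features and a held-out test set, but the voting logic is the same.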
To Development Manufacturing and Education using Data Mining: A Review - ijtsrd
In modern manufacturing environments, vast amounts of data are collected in database management systems and data warehouses from all involved areas. Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data: the extraction of information from huge volumes of data through the use of various data mining techniques. Techniques such as clustering and classification help in finding hidden and previously unknown information in a database. Data mining also plays an important role in the educational sector. Educational Data Mining (EDM) is a field of analysis and research in which various data mining tools and techniques are used to optimize applications in the education sector. The paper aims to analyze the enormous data from the education sector and provide solutions and reports for specific aspects of the education sector such as students' performance and placements. Moreover, this paper reviews the literature dealing with knowledge discovery and data mining applications in the broad domain of manufacturing, with special emphasis on the type of functions to be performed on the data. The major data mining functions include characterization and description, association, classification, prediction, clustering and evolution analysis. Aye Pwint Phyu | Khaing Khaing Wai, "To Development Manufacturing and Education using Data Mining: A Review", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27910.pdf | Paper URL: https://www.ijtsrd.com/computer-science/data-miining/27910/to-development-manufacturing-and-education-using-data-mining-a-review/aye-pwint-phyu
Visual and analytical mining of transactions data for production planning f... - ertekg
Download Link > https://ertekprojects.com/gurdal-ertek-publications/blog/visual-and-analytical-mining-of-sales-transaction-data-for-production-planning-and-marketing/
Recent developments in information technology have paved the way for the collection of large amounts of data pertaining to various aspects of an enterprise. The greatest challenge in processing these massive amounts of raw data is managing them effectively, with the ultimate purpose of deriving necessary and meaningful information. This paper illustrates the combination of visual and analytical data mining techniques for the planning of marketing and production activities. The primary phases of the proposed framework consist of filtering, clustering and comparison steps, implemented using interactive pie charts, the K-Means algorithm and parallel coordinate plots, respectively. A prototype decision support system is developed and a sample analysis session is conducted to demonstrate the applicability of the framework.
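The clustering phase can be sketched with a minimal K-Means implementation in plain Python; the transaction-like 2-D points below are invented, and the interactive pie-chart and parallel-coordinate visualizations described in the paper are omitted.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-Means over points given as lists of coordinates."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(v) / len(cl) for v in zip(*cl)]
    return centroids, clusters

# Two well-separated groups of transactions (e.g. quantity vs. revenue).
data = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
        [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]]
centroids, clusters = kmeans(data, k=2)
```

Production code would normally use a library implementation, but this shows the two alternating steps the framework relies on.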
Automated face recognition offers an effective method for identifying individuals. Face images have been used in a number of different applications, including driver's licenses, passports and identification cards. To provide some form of standardization for photographs in these applications, ISO/IEC JTC 1 SC 37 has developed standardized data interchange formats to promote interoperability. Many publicly available face databases are used by the research community to advance the field of face recognition algorithms, amongst other uses. In this paper, we examine how an existing database that has been used extensively in research (FERET) compares with two operational data sets with respect to some of the metrics outlined in the standard ISO/IEC 19794-5. The goals of this research are to provide the community with a comparison of a baseline data set and to compare this baseline to a photographic data set scanned in from mug-shot photographs, as well as a data set of digitally captured photographs. It is hoped that this information will give Face Recognition System (FRS) developers some guidance on the characteristics of operationally collected data sets versus a controlled-collection database.
ZERNIKE-ENTROPY IMAGE SIMILARITY MEASURE BASED ON JOINT HISTOGRAM FOR FACE RE... - AM Publications
Image similarity for face recognition requires tools that are both powerful and stable under challenges such as varying illumination, different environments and complex poses. In this paper, we combine two robust approaches to image similarity and face recognition, Zernike moments and information theory, into one proposed measure, the Zernike-Entropy Image Similarity Measure (Z-EISM). Z-EISM incorporates the concepts of Picard entropy and a modified one-dimensional version of the two-dimensional joint histogram of the two images under test. Four datasets were used to test and compare the proposed measure, showing that Z-EISM performs better than the existing measures.
IRJET - Facial Recognition based Attendance System with LBPH - IRJET Journal
This document presents a facial recognition based attendance system using LBPH (Local Binary Pattern Histograms). It begins with an abstract describing the system which takes student attendance using facial identification from classroom camera images. It then discusses related work in attendance and face recognition systems. The proposed system workflow is described involving face detection, feature extraction using LBPH, template matching, and attendance recording. Experimental results demonstrate the system's ability to detect multiple faces and record attendance accurately in an Excel sheet with date/time. The conclusion discusses how the system reduces human effort for attendance and increases learning time compared to traditional methods.
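The LBPH feature-extraction step rests on Local Binary Patterns; a minimal sketch of the basic 8-neighbour LBP code and its histogram (without the radius and grid refinements of full LBPH) might look like:

```python
def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram for a 2-D
    grayscale image given as a list of lists. The one-pixel border
    is skipped for simplicity."""
    # Neighbour offsets, clockwise from the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# Tiny 4x4 patch: the four interior pixels each yield one LBP code.
patch = [[10, 10, 10, 10],
         [10, 50, 20, 10],
         [10, 10, 10, 90],
         [10, 10, 10, 10]]
hist = lbp_histogram(patch)
```

In a full LBPH pipeline the face is split into a grid of cells and the per-cell histograms are concatenated into the template that is matched against the database.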
Vehicle Driver Age Estimation using Neural Networks - IRJET Journal
This document presents research on developing a convolutional neural network model to estimate the age and gender of a vehicle driver from their facial image. The researchers assembled a large dataset of over 60,000 face images from various sources to train their CNN model. They implemented the model using Caffe and tested it on a Raspberry Pi 3B+ for real-time age and gender detection. After training, the CNN model was able to accurately classify age and gender from input images with an accuracy of 98.1%. The document discusses the CNN architecture, preprocessing steps, and algorithms used to develop this age and gender detection system for vehicle drivers.
This document provides a review of various face detection techniques used in computer vision. It begins with an introduction to face detection, explaining that while easy for humans, face detection is complex for machines. It then discusses several challenges in face detection related to factors like illumination, occlusion, and orientation.
The document reviews several common approaches to face detection, including feature-based methods using skin color, color models like RGB and HSV, and feature analysis. It also discusses image-based methods such as neural networks, Eigenfaces, support vector machines, and principal component analysis. It concludes by noting progress in face detection technologies and their increasing real-world applications, while also pointing to challenges like occlusion that require further research.
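The Eigenfaces/principal component analysis approach mentioned above can be sketched with a small NumPy example; the toy 4-pixel 'images' below are invented for the demonstration.

```python
import numpy as np

def eigenfaces(images, k):
    """Top-k eigenfaces from flattened grayscale images (one image per
    row), via SVD of the mean-centred data matrix."""
    X = np.asarray(images, dtype=float)
    mean_face = X.mean(axis=0)
    centred = X - mean_face
    # Rows of vt are the principal axes ("eigenfaces") of image space.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean_face, vt[:k]

def project(image, mean_face, basis):
    """Coordinates of one image in the eigenface subspace."""
    return basis @ (np.asarray(image, dtype=float) - mean_face)

# Six toy 'images' of 4 pixels each, varying mostly along one axis.
faces = [[0, 0, 1, 1], [0, 0, 2, 2], [0, 0, 3, 3],
         [1, 1, 0, 0], [2, 2, 0, 0], [3, 3, 0, 0]]
mean_face, basis = eigenfaces(faces, k=2)
coords = project(faces[0], mean_face, basis)
```

Recognition then amounts to comparing `coords` vectors, e.g. by nearest neighbour, rather than comparing raw pixels.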
Comparative analysis of augmented datasets performances of age invariant face... - journalBEEI
The popularity of face recognition systems has increased due to their non-invasive method of image acquisition, and they thus enjoy widespread application. Face ageing is one major factor that influences the performance of face recognition algorithms. In this study, the authors present a comparative study of the two most accepted and experimented face ageing datasets (FG-Net and Morph II). These datasets were used to simulate age invariant face recognition (AIFR) models. Four types of noise were added to the two face ageing datasets at the preprocessing stage. The addition of noise at the preprocessing stage served as a data augmentation technique that increased the number of sample images available for deep convolutional neural network (DCNN) experimentation and improved the proposed AIFR model and the trait-ageing feature extraction process. The proposed AIFR models are developed with the pre-trained Inception-ResNet-v2 deep convolutional neural network architecture. On testing and comparing the models, the results revealed that FG-Net is more efficient than Morph II, with a difference of 0.15% in accuracy, 71% in loss, 39% in mean square error (MSE) and -0.63% in mean absolute error (MAE).
This document presents a proposed virtual body measurement system to measure body parameters from images in order to select appropriately sized clothing without needing to be physically present. The system uses HAAR features to recognize body parameters like height, waist, bust from images. It then considers factors like fashion style and clothing psychology to enable tailored clothing alterations. The methodology involves using HAAR classifiers and integral images to detect facial features and then train classifiers to recognize other body measurements. This virtual system aims to reduce time spent on physical fittings while shopping for tailored clothing.
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye - IRJET Journal
This document presents a face detection and recognition system to help visually impaired people identify individuals. The system uses computer vision techniques like convolutional neural networks and cascade classifiers for face detection with high accuracy. It then performs face recognition on pre-trained image datasets to determine a person's identity, as well as their emotion, age and gender. The system was tested on a combined dataset of images and achieved 95.7% accuracy in identifying faces, even when there were many faces present. This person identification tool aims to help the visually impaired better interact with others by audibly providing the name and attributes of detected individuals.
International Journal of Image Processing (IJIP) Volume (1) Issue (2) - CSCJournals
This document summarizes a research paper on a face recognition system that uses a multi-local feature selection approach. The proposed system consists of five stages: face detection, extraction of facial features like eyes, nose and mouth, generation of moments to represent the features, classification of facial features using RBF neural networks, and face identification. The system was tested on over 3000 images from three facial databases and achieved recognition rates over 89%, outperforming global feature-based and single local feature approaches. The technique was also found to be robust to variations in translation, orientation and scaling.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET - Emotionalizer : Face Emotion Detection System - IRJET Journal
This document describes a facial emotion detection system called Emotionalizer. The system uses machine learning to analyze facial expressions in images and detect emotions such as happiness, sadness, anger, fear and disgust. It was developed in Python using techniques like pre-processing, skin color detection, facial feature extraction and a support vector machine classifier. The goal is to build a system that can automatically recognize emotions from faces as accurately as humans. It discusses previous related work on facial recognition and detection and outlines the objectives, methodology and evaluation of the Emotionalizer system.
IRJET- Emotionalizer : Face Emotion Detection System - IRJET Journal
This document describes a face emotion detection system called Emotionalizer. It uses machine learning and facial recognition techniques to detect emotions such as happiness, sadness, anger, fear and disgust based on facial expressions. The system analyzes images of faces and determines the appropriate emotion based on geometric changes in facial features. It was developed in Python using tools like OpenCV for facial detection and recognition. The goal is to build a system that can read emotions from facial expressions similarly to how humans perceive them.
ATTENDANCE BY FACE RECOGNITION USING AI - IRJET Journal
This document describes a proposed face recognition system for automated student attendance. The system would use a camera situated at a school entrance to capture frontal images of students as they enter. It would then use face recognition algorithms to identify each student and automatically record their attendance. Some key advantages of this system include reducing the time spent on manual attendance recording and increasing accuracy by eliminating proxy attendance issues. The proposed system aims to provide a hassle-free automated solution for tracking student attendance using biometric face recognition technologies.
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ... - IJERA Editor
This paper describes a technique for real-time human face detection and tracking for age rank, weight and gender estimation. Face detection involves finding whether there are any faces in a given image and, if any are present, tracking each face and returning the face region with its features. The paper describes a simple and convenient hardware implementation of the face detection method using the Raspberry Pi processor, which is itself a credit-card-sized minicomputer. It presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age evaluation based on face images; the two main components for building an efficient age estimator are facial feature extraction and estimator learning. Extracted features are compared against an input database of face images from different age groups with associated weights, from which a weight category (underweight, normal weight or overweight) is assigned. The article also presents a gender estimation technique that effectively integrates head and mouth motion information with facial appearance using a unified probabilistic framework. Facial appearance as well as head and mouth motion possess potentially relevant discriminatory power, and the integration of different sources of biometric data from video sequences is the key to developing more precise and reliable recognition systems.
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ... - IRJET Journal
This document proposes a system for real-time human face detection, tracking, and estimation of age, weight, and gender from face images using a Raspberry Pi processor. The system uses OpenCV for face detection and extracts facial features to classify age, estimate weight, and determine gender through a probabilistic framework. The system allows for real-time detection of multiple faces with high efficiency even from low-quality images. Evaluation shows the low-cost Raspberry Pi provides fast execution speeds suitable for real-time applications.
Face Annotation using Co-Relation based Matching for Improving Image Mining ... - IRJET Journal
This document discusses face annotation techniques for improving image mining in videos. It begins by introducing the need for better image retrieval with the rise of online sharing. It then discusses challenges with face annotation in videos and existing techniques like content-based image retrieval and search-based face annotation. The document analyzes limitations of these existing techniques, such as semantic gaps with manual tagging, decreased accuracy, and poor generalization with new data. It proposes using correlation-based matching to address problems in face recognition techniques.
IRJET- Library Management System with Facial Biometric Authentication - IRJET Journal
This document proposes a library management system using facial recognition for biometric authentication. It would automate the entry and exit logging of users and reduce wait times. The system involves two phases: 1) enrolling images of authorized users in a dataset during initial setup and 2) verifying users in real-time by comparing their images to those in the stored dataset. If a match is found, the user is granted access, and if not, they are denied entry. The system aims to eliminate manual logging and prevent unauthorized access to library resources.
IRJET- Library Management System with Facial Biometric Authentication - IRJET Journal
1. The document proposes a facial recognition system using OpenCV for biometric authentication in a library management system.
2. The system has two phases: a data generation phase that enrolls user images in a database, and a recognition phase that identifies users by comparing input images to the database.
3. In the recognition phase, preprocessing such as grayscale conversion and histogram equalization is performed before feature extraction using algorithms like LBPH, SIFT, LDA, and PCA to generate faceprints for comparison and verification against the database.
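The histogram-equalization preprocessing mentioned in the recognition phase can be sketched as follows; this is the generic textbook formulation for a flat list of pixel values, not the system's own code.

```python
def equalize(gray, levels=256):
    """Histogram equalization for a flat list of integer grayscale
    values, spreading the cumulative distribution over [0, levels-1]."""
    n = len(gray)
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    # Cumulative distribution function of the intensity histogram.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero bin
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
            for v in gray]

# A low-contrast patch clustered in [100, 103] spreads over [0, 255].
flat = [100, 100, 101, 101, 102, 102, 103, 103]
eq = equalize(flat)
```

Equalizing before feature extraction makes the LBPH/PCA faceprints less sensitive to overall lighting differences between enrollment and verification images.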
A Hybrid Approach to Face Detection And Feature Extraction - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document presents a hybrid approach for face detection and feature extraction. It combines the Viola-Jones face detection framework with a neural network classifier to first classify images as containing a face or not. If a face is detected, Viola-Jones algorithms like integral images and cascading classifiers are used to detect the face features. Edge-based feature maps and feature vectors are also extracted and used as inputs to the neural network classifier and for future facial feature extraction. The proposed approach aims to leverage the strengths of Viola-Jones and neural networks to accurately detect faces and then extract facial features from images.
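The integral-image component of Viola-Jones can be sketched in a few lines; `box_sum` shows why any rectangular feature costs only four table lookups regardless of its size.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] holds the sum of img over the
    rectangle [0..y) x [0..x), with a zero border row and column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom][left:right] in O(1)."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
total = box_sum(ii, 0, 0, 3, 3)   # sum of all pixels
```

Haar-like features are differences of such box sums, which is what makes the cascading classifiers fast enough for detection at every scale and position.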
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree - IRJET Journal
The document proposes a new quadtree-based algorithm for multi-focus image fusion. The algorithm divides the input images into 4 equal blocks using a quadtree structure. It then further divides each block into smaller blocks and detects the focused regions in each block using a focus measure and weighted values. The small blocks are then fused using a modified Laplacian mechanism. The fused image is evaluated using SSIM and ESSIM values, which indicate the proposed algorithm performs better fusion than previous methods.
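A simplified version of the block-selection idea (one quadtree level only, with plain variance standing in for the paper's modified-Laplacian focus measure) might look like:

```python
def variance(block):
    """Variance of a 2-D block, used here as a crude focus measure."""
    vals = [v for row in block for v in row]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def split4(img):
    """Split a square image (even side length) into 4 equal blocks."""
    h = len(img) // 2
    return [[row[:h] for row in img[:h]], [row[h:] for row in img[:h]],
            [row[:h] for row in img[h:]], [row[h:] for row in img[h:]]]

def fuse(img_a, img_b):
    """For each of the 4 blocks, keep the one from whichever input
    image has the higher focus measure there."""
    blocks = [a if variance(a) >= variance(b) else b
              for a, b in zip(split4(img_a), split4(img_b))]
    # Reassemble the four chosen blocks into one image.
    top = [l + r for l, r in zip(blocks[0], blocks[1])]
    bottom = [l + r for l, r in zip(blocks[2], blocks[3])]
    return top + bottom

# Image a is sharp (high variance) on the left, b on the right.
a = [[0, 9, 5, 5], [9, 0, 5, 5], [0, 9, 5, 5], [9, 0, 5, 5]]
b = [[5, 5, 9, 0], [5, 5, 0, 9], [5, 5, 9, 0], [5, 5, 0, 9]]
fused = fuse(a, b)
```

The actual algorithm recurses further down the quadtree and fuses with a modified Laplacian, but the keep-the-sharper-block decision is the core of the method.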
A VISUAL ATTENDANCE SYSTEM USING FACE RECOGNITION - IRJET Journal
This document describes a visual attendance system using face recognition. The system was created to make the traditional paper-based attendance process more efficient and less prone to errors. It uses computer vision and face recognition techniques, including the RetinaFace and Arcnet algorithms, to detect and identify students' faces from video feeds or images taken in the classroom. When taking attendance, the system captures photos of students present and searches its database of student faces to automatically record attendance without disrupting the class. The document discusses the methodology and system structure, including face detection, face recognition modeling, and an overall workflow flowchart. It aims to provide an improved digital solution for tracking attendance at universities and colleges.
IRJET - Design and Development of Android Application for Face Detection and ... - IRJET Journal
This document describes research on developing an Android application for face detection and face recognition. It discusses using techniques like skin segmentation, facial feature extraction, and classification algorithms from the OpenCV library. The application detects faces in images and compares them to a dataset for recognition. It addresses challenges like scale and lighting changes. The architecture involves preprocessing images, extracting Local Binary Patterns features, and matching them to the database for identification. Common mistakes like inability to retrieve detected faces are also outlined.
Face and facial expressions recognition for blind people - IRJET Journal
This document discusses a system for facial recognition and expression recognition to help blind people identify faces and facial expressions. The system uses a Raspberry Pi 3 with a camera to capture faces. It then uses the Viola-Jones algorithm for face detection and PCA for facial recognition, comparing faces to a database of stored images. When a matched face is found, the system speaks the name of the person to the blind user. The goal is to enhance social interactions for the visually impaired by enabling them to recognize faces and expressions.
Data Mining Assignment Sample Online - PDF - Ajeet Singh
A data mining assignment sample may include tasks such as data preprocessing, exploratory data analysis, modeling, and evaluation. For example, students may be asked to clean and preprocess a dataset, perform exploratory data analysis to gain insights into the data, build predictive models using techniques such as classification or regression, and evaluate the performance of the models using metrics such as accuracy or precision.
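The evaluation metrics named above are easy to state directly; a minimal sketch with invented labels:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of everything predicted positive, the fraction that really is."""
    tp = sum(t == positive and p == positive
             for t, p in zip(y_true, y_pred))
    predicted_pos = sum(p == positive for p in y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

# Invented binary labels for illustration.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
acc = accuracy(y_true, y_pred)    # 4 of 6 correct
prec = precision(y_true, y_pred)  # 3 true positives of 4 predicted
```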
Robust face recognition by applying partitioning around medoids over eigen fa... - ijcsa
An unsupervised learning methodology for robust face recognition is proposed to enhance invariance to various changes in the face. Face recognition, despite being the most unobtrusive biometric modality of all, has encountered challenges in achieving high performance in uncontrolled environments owing to frequently occurring, unavoidable variations in the face. These changes may be due to noise, outliers, changing expressions, emotions, pose, illumination, or facial distractions such as makeup, spectacles, and hair growth. Methods for dealing with these variations have been developed in the past with varying success; however, cost and time efficiency play a crucial role in deploying any methodology in the real world. This paper presents a method that integrates Partitioning Around Medoids with Eigenfaces and Fisherfaces to improve the efficiency of face recognition considerably. The resulting system is more resistant to the impact of various changes in the face and performs well in terms of success rate, cost, and time complexity. The methodology can therefore be used in developing highly robust face recognition systems for real-time environments.
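Partitioning Around Medoids itself is simple to sketch. The toy below is not the paper's implementation: it runs the classic swap heuristic on 2-D points standing in for faces projected into eigenspace.

```python
import numpy as np

def pam(points, k, iters=20):
    """Naive Partitioning Around Medoids: swap a medoid for a non-medoid
    whenever the total point-to-nearest-medoid distance drops."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    medoids = list(range(k))              # arbitrary initial medoids

    def cost(ms):
        return dist[:, ms].min(axis=1).sum()

    for _ in range(iters):
        improved = False
        for i in range(k):
            for c in range(n):
                if c in medoids:
                    continue
                trial = medoids.copy()
                trial[i] = c
                if cost(trial) < cost(medoids):
                    medoids, improved = trial, True
        if not improved:
            break
    return medoids, dist[:, medoids].argmin(axis=1)

# Two well-separated blobs standing in for two identities
pts = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 5], [5.1, 5], [5.2, 5]])
medoids, assign = pam(pts, k=2)
print(assign[:3], assign[3:])  # each blob ends up in its own cluster
```

Because medoids are actual data points, the cluster centres stay robust to the outliers the abstract mentions, which is the motivation for using PAM over plain k-means here.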
This document proposes a virtual body measurement system that measures body parameters from images so that appropriately sized clothing can be selected without the customer being physically present. The system uses Haar features to recognize body parameters such as height, waist, and bust from images. It then considers factors such as fashion style and clothing psychology to enable tailored clothing alterations. The methodology uses Haar classifiers and integral images to detect facial features, then trains classifiers to recognize other body measurements. This virtual system aims to reduce the time spent on physical fittings while shopping for tailored clothing.
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye (IRJET Journal)
This document presents a face detection and recognition system to help visually impaired people identify individuals. The system uses computer vision techniques like convolutional neural networks and cascade classifiers for face detection with high accuracy. It then performs face recognition on pre-trained image datasets to determine a person's identity, as well as their emotion, age and gender. The system was tested on a combined dataset of images and achieved 95.7% accuracy in identifying faces, even when there were many faces present. This person identification tool aims to help the visually impaired better interact with others by audibly providing the name and attributes of detected individuals.
International Journal of Image Processing (IJIP) Volume (1) Issue (2) (CSCJournals)
This document summarizes a research paper on a face recognition system that uses a multi-local feature selection approach. The proposed system consists of five stages: face detection, extraction of facial features like eyes, nose and mouth, generation of moments to represent the features, classification of facial features using RBF neural networks, and face identification. The system was tested on over 3000 images from three facial databases and achieved recognition rates over 89%, outperforming global feature-based and single local feature approaches. The technique was also found to be robust to variations in translation, orientation and scaling.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IRJET - Emotionalizer : Face Emotion Detection System (IRJET Journal)
This document describes a facial emotion detection system called Emotionalizer. The system uses machine learning to analyze facial expressions in images and detect emotions such as happiness, sadness, anger, fear, and disgust. It was developed in Python using techniques like pre-processing, skin color detection, facial feature extraction, and a support vector machine classifier. The goal is to build a system that can automatically recognize emotions from faces as accurately as humans. It discusses previous related work on facial recognition and detection and outlines the objectives, methodology, and evaluation of the Emotionalizer system.
IRJET- Emotionalizer : Face Emotion Detection System (IRJET Journal)
This document describes a face emotion detection system called Emotionalizer. It uses machine learning and facial recognition techniques to detect emotions such as happiness, sadness, anger, fear, and disgust based on facial expressions. The system analyzes images of faces and determines the appropriate emotion based on geometric changes in facial features. It was developed in Python using tools like OpenCV for facial detection and recognition. The goal is to build a system that can read emotions from facial expressions similarly to how humans perceive emotions.
ATTENDANCE BY FACE RECOGNITION USING AI (IRJET Journal)
This document describes a proposed face recognition system for automated student attendance. The system would use a camera situated at a school entrance to capture frontal images of students as they enter. It would then use face recognition algorithms to identify each student and automatically record their attendance. Some key advantages of this system include reducing the time spent on manual attendance recording and increasing accuracy by eliminating proxy attendance issues. The proposed system aims to provide a hassle-free automated solution for tracking student attendance using biometric face recognition technologies.
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ... (IJERA Editor)
This paper describes a technique for real-time human face detection and tracking for age rank, weight, and gender estimation. Face detection determines whether there are any faces in a given image and, if so, tracks each face and returns its region together with its features. The paper describes a simple and convenient hardware implementation of the face detection method on a Raspberry Pi, a credit-card-sized minicomputer. It presents a cost-sensitive ordinal hyperplane ranking algorithm for human age estimation from face images; the two main components of an efficient age estimator are facial feature extraction and estimator learning. Extracted features are compared against an input database of face images for different age groups, each annotated with weight, from which a weight category (underweight, normal weight, or overweight) is assigned. The paper also presents a gender estimation technique that effectively integrates head and mouth motion information with facial appearance in a unified probabilistic framework. Facial appearance as well as head and mouth motion carry potentially relevant discriminatory power, and integrating different sources of biometric data from video sequences is the key to developing more precise and reliable recognition systems.
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ... (IRJET Journal)
This document proposes a system for real-time human face detection, tracking, and estimation of age, weight, and gender from face images using a Raspberry Pi processor. The system uses OpenCV for face detection and extracts facial features to classify age, estimate weight, and determine gender through a probabilistic framework. The system allows for real-time detection of multiple faces with high efficiency even from low-quality images. Evaluation shows the low-cost Raspberry Pi provides fast execution speeds suitable for real-time applications.
Face Annotation using Co-Relation based Matching for Improving Image Mining ... (IRJET Journal)
This document discusses face annotation techniques for improving image mining in videos. It begins by introducing the need for better image retrieval with the rise of online sharing. It then discusses challenges with face annotation in videos and existing techniques like content-based image retrieval and search-based face annotation. The document analyzes limitations of these existing techniques, such as semantic gaps with manual tagging, decreased accuracy, and poor generalization with new data. It proposes using correlation-based matching to address problems in face recognition techniques.
IRJET- Library Management System with Facial Biometric Authentication (IRJET Journal)
This document proposes a library management system using facial recognition for biometric authentication. It would automate the entry and exit logging of users and reduce wait times. The system involves two phases: 1) enrolling images of authorized users in a dataset during initial setup and 2) verifying users in real-time by comparing their images to those in the stored dataset. If a match is found, the user is granted access, and if not, they are denied entry. The system aims to eliminate manual logging and prevent unauthorized access to library resources.
IRJET- Library Management System with Facial Biometric Authentication (IRJET Journal)
1. The document proposes a facial recognition system using OpenCV for biometric authentication in a library management system.
2. The system has two phases: a data generation phase that enrolls user images in a database, and a recognition phase that identifies users by comparing input images to the database.
3. In the recognition phase, preprocessing such as grayscale conversion and histogram equalization is performed before feature extraction using algorithms like LBPH, SIFT, LDA, and PCA to generate faceprints for comparison and verification against the database.
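As a rough sketch of the LBP feature step (a textbook 8-neighbour LBP in NumPy, not the system's LBPH code), each pixel is encoded by thresholding its eight neighbours against it, and the codes are pooled into a histogram that serves as the faceprint:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Patterns over a grayscale image,
    pooled into a normalised 256-bin histogram."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # neighbour offsets, ordered clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

faceprint = lbp_histogram((np.arange(64).reshape(8, 8) % 7).astype(np.uint8))
print(faceprint.shape)  # → (256,)
```

Comparing two such faceprints (e.g. by chi-squared or Euclidean distance) then gives the verification score; LBPH additionally computes histograms per grid cell and concatenates them.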
A Hybrid Approach to Face Detection And Feature Extraction (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document presents a hybrid approach for face detection and feature extraction. It combines the Viola-Jones face detection framework with a neural network classifier to first classify images as containing a face or not. If a face is detected, Viola-Jones algorithms like integral images and cascading classifiers are used to detect the face features. Edge-based feature maps and feature vectors are also extracted and used as inputs to the neural network classifier and for future facial feature extraction. The proposed approach aims to leverage the strengths of Viola-Jones and neural networks to accurately detect faces and then extract facial features from images.
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree (IRJET Journal)
The document proposes a new quadtree-based algorithm for multi-focus image fusion. The algorithm divides the input images into 4 equal blocks using a quadtree structure. It then further divides each block into smaller blocks and detects the focused regions in each block using a focus measure and weighted values. The small blocks are then fused using a modified Laplacian mechanism. The fused image is evaluated using SSIM and ESSIM values, which indicate the proposed algorithm performs better fusion than previous methods.
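A minimal version of the quadtree split-and-select idea can be sketched as follows. This is illustrative only: it uses variance of a Laplacian response as the focus measure and picks whole blocks from the sharper source, rather than the weighted modified-Laplacian fusion the paper proposes.

```python
import numpy as np

def sharpness(block):
    """Focus measure: variance of a 4-neighbour Laplacian response."""
    lap = (-4 * block[1:-1, 1:-1] + block[:-2, 1:-1] + block[2:, 1:-1]
           + block[1:-1, :-2] + block[1:-1, 2:])
    return lap.var()

def fuse(a, b, min_size=4):
    """Quadtree fusion: recurse into four sub-blocks, keep the
    better-focused source once blocks reach the minimum size."""
    if min(a.shape) <= min_size:
        return a if sharpness(a) >= sharpness(b) else b
    h, w = a.shape[0] // 2, a.shape[1] // 2
    out = np.empty_like(a)
    for ys in (slice(0, h), slice(h, None)):
        for xs in (slice(0, w), slice(w, None)):
            out[ys, xs] = fuse(a[ys, xs], b[ys, xs], min_size)
    return out

rng = np.random.default_rng(0)
in_focus = rng.normal(size=(16, 16))   # textured stand-in for a sharp region
defocused = np.zeros((16, 16))         # featureless stand-in for a blurred one
print(np.array_equal(fuse(in_focus, defocused), in_focus))  # → True
```

SSIM/ESSIM against an all-in-focus reference would then score the fused result, as in the paper's evaluation.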
A VISUAL ATTENDANCE SYSTEM USING FACE RECOGNITION (IRJET Journal)
This document describes a visual attendance system using face recognition. The system was created to make the traditional paper-based attendance process more efficient and less prone to errors. It uses computer vision and face recognition techniques, including the RetinaFace and Arcnet algorithms, to detect and identify students' faces from video feeds or images taken in the classroom. When taking attendance, the system captures photos of students present and searches its database of student faces to automatically record attendance without disrupting the class. The document discusses the methodology and system structure, including face detection, face recognition modeling, and an overall workflow flowchart. It aims to provide an improved digital solution for tracking attendance at universities and colleges.
Similar to Image-Based Date Fruit Classification
SLAM of Multi-Robot System Considering Its Network Topology (toukaigi)
This document proposes a new solution to the multi-robot simultaneous localization and mapping (SLAM) problem that takes into account the network topology between robots. Previous multi-robot SLAM research has expanded one-robot SLAM algorithms without considering how the relationship between robots changes over time. The proposed approach models the network structure and derives the mathematical formulation for estimating the multi-robot SLAM. It presents motion and observation update equations in an information filter framework that can be implemented in a decentralized way on individual robots. Future work will focus on specific challenges in multi-robot SLAM like map merging.
Reduced Model Adaptive Force Control for Carrying Human Beings with Uncertain... (toukaigi)
This document summarizes a research paper that proposes a new strategy for lifting human bodies into predefined positions and postures using robot arms. The strategy treats the human body as a redundant system and only controls certain "interested" joint states, like the head position and hip angle, to simplify the complex human model. It develops an adaptive controller and estimator to identify the dynamic parameters of a reduced human body model in real-time to account for individual differences between people. Simulations lifting a skeleton model with robot arms verify the efficiency and effectiveness of the proposed approach.
The document describes a new adaptive treadmill control strategy that allows the treadmill speed to be controlled by the user's intended walking speed. It analyzes the center-of-pressure formula and simulation results to identify the ratio of the reaction forces Fy and Fz (denoted Ry,z) as a key index related to intended walking speed. An experiment is conducted in which subjects walk on a treadmill while viewing a virtual-reality shopping scene. Force plate data are used to model the relationship between Ry,z and treadmill velocity V, and least-squares regression is used to calibrate the model. The results show the treadmill speed can be smoothly controlled to match the user's intended speed.
Development of a 3D Interactive Virtual Market System with Adaptive Treadmill... (toukaigi)
The document describes the development of a 3D interactive virtual reality system connected to an adaptive treadmill. The system measures the interaction forces between a user's feet and the treadmill to estimate their intended walking speed. It then adjusts the treadmill belt velocity and 3D display pace to match the user's walking, creating an immersive experience where they can walk through and interact with a virtual market at their own pace. An experiment showed the control results were smooth, validating the overall system.
Adaptive Attitude Control of Redundant Time-Varying Complex Model of Human Bo... (toukaigi)
This paper proposes an adaptive attitude control approach to lift the human body using robots, regardless of individual differences like height and weight. It models the human body as a complex, time-varying, redundant system. The approach treats the human body as having "interested" and "uninterested" joints. It uses robust adaptive control to eliminate the effects of "uninterested joints" and identify human parameters in real-time. This reduces the complex human model to a smaller one with fewer degrees of freedom. The approach is simulated by lifting a human skeleton with two robot arms, verifying its efficiency and effectiveness.
Adaptive Control Scheme with Parameter Adaptation - From Human Motor Control ... (toukaigi)
This document summarizes an adaptive control scheme proposed for humanoid robot locomotion that draws from principles of human motor control. It addresses two common issues: modeling and control. For modeling, it discusses how both human and robot dynamics are simplified by selecting partial variable states, and proves the partial dynamics still satisfies conditions of a physical system. For control, it proposes an adaptive scheme using variable state control and parameter adaptation. Variable state control can tolerate modeling errors, while parameter adaptation can identify the dynamic system in real time. The control scheme is applied to a humanoid robot control case to demonstrate its effectiveness.
Human Factor Affects Eye Movement Pattern During Riding Motorcycle on the Mou... (toukaigi)
This document analyzes eye movement patterns during motorcycle riding on mountains using an eye tracking system. The system recorded eye movements and front views of subjects riding on a mountain route. Analysis found that human factors like traffic, vehicles, and roadside objects influenced fixation points and eye tracking. Specifically, subjects' eyes tracked opposite lane vehicles more when climbing than descending, and fixated mainly at traffic signals, stopped vehicles, close objects, roadside buildings, and vehicles in front. The study aims to understand how human factors impact eye movements during motorcycle riding.
Modeling and Control of a Humanoid Robot Arm with Redundant Biarticular Muscl... (toukaigi)
This document describes research on modeling and controlling a humanoid robot arm actuated by biarticular muscles. Key points:
- A two-link robot arm model is developed based on the parameters of a human arm, with six muscles added as actuators, including two biarticular muscles.
- An adaptive control scheme is proposed to control the robot arm by distributing computed torque commands to the muscle forces. The scheme can tolerate modeling errors through online parameter adaptation.
- The control and adaptation methods were verified in a simulation of the arm performing bend-stretch movements. The adaptation scheme successfully identified changing arm parameters to compensate for them in real-time.
Real-time Estimation of Human’s Intended Walking Speed for Treadmill-style Lo... (toukaigi)
This document discusses estimating a human's intended walking speed using force plates under a treadmill. It first introduces the problem and experimental setup using two force plates under a treadmill. It then describes Experiment 1 which found that a proposed force index, defined as the minimum value of the ratio of forward ground reaction force to total ground reaction force during one gait cycle, has a strong linear correlation with intended walking speed. Experiment 2 showed the coefficients of this linear relationship vary little, ensuring tolerance of individual variations. Finally, a treadmill-style locomotion interface is presented that allows a user to actively control the treadmill speed with their feet based on intended walking speed estimation, providing a promising human-machine interface.
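The calibration in Experiment 1 amounts to a one-variable least-squares fit. The sketch below uses made-up (r, v) pairs; the force-index values and speeds are illustrative, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration data: force index r (minimum ratio of forward to
# total ground reaction force over a gait cycle) vs. intended speed v in m/s
r = np.array([0.05, 0.08, 0.11, 0.14, 0.17])
v = np.array([0.6, 1.0, 1.4, 1.8, 2.2])

# Least-squares fit of the linear model v = a * r + b
a, b = np.polyfit(r, v, 1)

def intended_speed(force_index):
    """Map a measured force index to the estimated intended speed."""
    return a * force_index + b

print(round(intended_speed(0.11), 2))  # → 1.4
```

In the treadmill-style locomotion interface, this per-gait-cycle estimate would then drive the belt-speed controller.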
Car Parking Vacancy Detection and Its Application in 24-Hour Statistical Anal... (toukaigi)
This document proposes a method to detect vacant parking spaces using neural networks and video analysis of a parking lot over 24 hours. It addresses three technical challenges: 1) changing light intensity and shadows during the day require an adaptive method, 2) low light periods in early morning and evening need separate treatment, and 3) nighttime lighting patterns differ from daytime. The proposed method achieved 99.9% accuracy for detecting occupied spots and 97.9% for vacant spots using features extracted from video of a parking lot in Abu Dhabi across 24 hours. The results provide useful statistics on parking utilization and patterns at different times.
Adaptive Biarticular Muscle Force Control for Humanoid Robot Arms (toukaigi)
This document presents a method for adaptive control of humanoid robot arms driven by biarticular muscles. The method uses sliding control to first determine joint torques, then distributes those torques to individual muscle forces using a Jacobian matrix. Internal forces are also used to optimize muscle forces so they remain within predefined limits and work in the middle of their range to avoid fatigue. All dynamic parameters are updated in real-time to account for perturbations. Compared to previous adaptive control methods, this method uses prediction error to accelerate parameter convergence. The results demonstrate the benefits of this adaptive control method.
Cost-Effective Single-Camera Multi-Car Parking Monitoring and Vacancy Detecti... (toukaigi)
This document describes a system for monitoring parking spaces using a single camera. It uses a combination of static image analysis techniques like edge detection and histogram classification on individual frames, as well as dynamic analysis across frames using blob tracking. The static analysis provides estimates of occupancy for each space, while dynamic analysis updates spaces where cars are moving in or out. The system was tested on a parking lot with 56 spaces, achieving over 90% accuracy in reporting empty spots in real-time. It provides a low-cost solution for monitoring large parking areas with a single camera.
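One way to picture the static per-frame analysis is a simple reference comparison. The snippet below is a deliberately crude illustration (a thresholded mean-intensity difference against a known empty-spot appearance), not the paper's edge-detection and histogram classifier.

```python
import numpy as np

def occupied(spot_pixels, empty_reference, threshold=25.0):
    """Flag a spot whose mean intensity deviates strongly from its
    known empty appearance (illustrative stand-in for the classifier)."""
    return bool(abs(spot_pixels.mean() - empty_reference.mean()) > threshold)

empty = np.full((10, 10), 120.0)   # asphalt-like empty reference patch
car = np.full((10, 10), 60.0)      # darker patch where a car is parked
print(occupied(car, empty), occupied(empty + 2.0, empty))  # → True False
```

The dynamic blob-tracking pass described in the summary would then override such per-frame estimates for spots where motion is detected.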
Development of a Novel Conversational Calculator Based on Remote Online Compu... (toukaigi)
This document describes the development of a novel conversational calculator that uses speech recognition and remote online computation. The researchers conducted a Wizard of Oz experiment with 100 people to collect natural language queries about calculations. They used this data to build a language model specific to conversations with a calculator. This improved the speech recognition accuracy compared to general purpose systems. The conversational calculator allows for multi-step calculations, currency conversions, and relies on the Wolfram Alpha computational engine instead of local processing.
To Ask or To Sense? Planning to Integrate Speech and Sensorimotor Acts (toukaigi)
This document describes research into developing a conversational robot that can integrate speech acts and sensorimotor acts when resolving ambiguities. The robot needs to decide whether to ask a clarifying question or perform a sensory action like moving its head to see from a different perspective. The researchers present a planning algorithm that treats speech acts and sensory actions in a common framework by calculating the expected costs and information rewards of different actions. They evaluate the algorithm's performance under various settings and discuss possible extensions.
An adaptive treadmill-style locomotion interface and its application in 3-D i... (toukaigi)
This document presents a study that develops an adaptive treadmill-style locomotion interface. The interface estimates a user's intended walking speed based on measuring ground reaction forces with force plates under each side of a treadmill. Two experiments found the intended speed is linearly correlated with a proposed "force index" calculated from the force data. The interface was applied in a 3D virtual reality market system to allow users to walk through a virtual Japanese-style market at their desired speed. This provides elderly users exercise while shopping virtually.
A Fast Laser Motion Detection and Approaching Behavior Monitoring Method for ... (toukaigi)
1. The document describes a method for a Moving Object Alarm System (MOAS) that uses a laser sensor to detect moving objects, monitor their trajectories and approaching speed, and provide alerts if approaching in a dangerous manner.
2. The method defines a boundary around the monitored area and uses a fan-shaped grid to efficiently detect continuous moving objects. Object association across time is determined by updating a deviation matrix measuring changes in range, angle, and size of detected objects.
3. Outdoor experiments tested passing, approaching, and crossing objects, finding the method effectively detected motion and monitored approaching behavior in real-time.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
can be distinguished by visual characteristics. One
technical problem in visual-based recognition is that
a wide range of certain features sometimes exists
within samples of a specific date type, i.e. large intra-
class variation (Fig. 3). Such variations could result
from natural differences in exposure to water, heat, etc.
On the other hand, several date types are not
distinguishable merely by basic visual measurements,
such as color and size, so further visual detail is
required to separate them.
Figure 3. Safawi Al Madina dates. Two dates of the same type
(Safawi Al Madina) with different sizes and color distributions.
A good amount of research has gone towards image-
based techniques and pattern recognition for fruits
and vegetables. D. Unay et al. have done much
successful work on machine-vision-based apple
grading [7] [8] [9]. However, a large portion of such
research focuses on identifying faults, deformations and
defects. Besides apples, potatoes have also received
attention [10]. Little research has been dedicated to palm
date fruits, and most literature on the subject revolves
around grading and quality classification. Schmilovitch
et al. used near-infrared spectrometry to measure the
'maturity' of dates [11]. Lee used a similar computer
vision-based method (digital reflective near-infrared
imaging) to grade dates [6]. Al-Janobi explored visual-
based classification based on color [12] and texture [13]
features, also for the purpose of grading.
Literature concerning the classification of different
types of dates, on the other hand, is very scarce. Hobani
et al. developed a system for classifying seven different
varieties of dates, with accuracies reaching 99.6% [14].
However, their system was only partially based on
computer vision and image analysis, and made heavy
use of carefully measured physical properties such as
weight, volume, and moisture level. In fact, only
three of their eleven features were based on visual data,
and those related only to the color of the dates. Such
requirements make their system a poor candidate for an
automated industrial classification system, as
measuring any given date would require considerable
time, and probably manual labor as well. Fadel, on the
other hand, presented a classification program based
entirely on visual features, but relied only on color data
(RGB means and standard deviations) [15]. His method
classifies five different types of dates, with accuracies
averaging 80%. The most comprehensive computer-
vision system that deals with dates is probably the one
developed by Ohali, which accounts for size, shape, and
flabbiness, among other things [5]. However, his
method, like most, classifies dates by grade, and
achieves a maximum success rate of 80%. Regarding
the pattern recognition methods used, artificial neural
networks have received much attention for the
classification of fruits [16] [17]. This is also true for
projects concerning dates: Ohali, Fadel, and Hobani et
al., among others, used neural networks for
classification [14] [15] [5].
As we shall see, although our method utilizes solely
image-based features, which are inexpensive and very
fast to acquire, it achieves accuracies comparable to
those that rely heavily on physical features, thus
supporting the design and implementation of computer
vision-based industrial sorting mechanisms. Moreover,
we explore the uses of alternative methods to neural
networks and compare the results achieved.
In this paper, in order to classify the dates, fifteen
visual features were extracted from training images,
including: means and standard deviations of colors, size,
shape descriptors, as well as texture descriptors (entropy
and energy). In order to extract features, images went
through several stages of image processing, including
color threshold masking, color filtering, and region
identification. Then, multiple classification methods
were used, including nearest neighbor, artificial neural
networks and linear discriminant analysis. We will start
by presenting our method in more detail, describing
image preprocessing, the customized features that were
extracted, as well as the classification methods that were
tried, and then we will present detailed results,
followed by a discussion section and our concluding
remarks.
II. FEATURE EXTRACTION
For our project, a total of 140 images of dates were
available, divided evenly between seven common
market date types – Sukkery, Mabroom, Khedri, Safawi
Al Madina, Madina Ajwa, Amber Al Madina, and
Safree. Images were 480 x 640 JPEGs taken from a
fixed position, so that all images had the same
dimension scales. Dates were
placed in different positions, but were always fully
contained in the frame. Although they were placed on a
white background, lighting varied considerably between
different images. Shadows almost always existed around
the dates. In addition, in some cases the white
background did not cover the entire frame, leading to
dark triangles on edges and corners. In order to perform
segmentation of the date regions, an adaptive
thresholding method was implemented to separate the
date from the rest of the image. No uniform threshold
worked well, due to significant differences amongst the
date colors, as well as lighting and shadow issues.
Thresholds that worked well on lighter images often
captured shadows when used with darker images, and
those that worked well for dark images failed to capture
some lighter-colored date parts in lighter images.
Thus, a customized threshold was used for every
image, which was calculated on the basis of the color
distribution. Table 2 shows a sample date image from
each type, as well as the image after background-
removal and the color histograms from which the
thresholds were extracted. The histograms contain two
high-frequency regions (bimodal distribution) – the first
being the date colors, and the second being the light-
grey/white background.
The histograms were filtered in order to make the
graph smooth and continuous, since not all intensity
values were observed, leaving zero readings throughout
the histogram. The filter used was a one-dimensional
finite-impulse response filter. The target threshold was
the color value at the end of the first intense region (the
beginning of the 'plateau'), effectively separating the
pixels belonging to the date from the background.
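As a rough illustration of this idea (not the authors' exact implementation), the smoothing-plus-threshold step could be sketched as follows; the moving-average window size and the plateau criterion are our assumptions:

```python
# Sketch: smooth a color histogram with a moving average (a simple FIR
# filter), then take the threshold at the end of the first intense region.
# Window size and plateau fraction are illustrative assumptions.

def smooth_histogram(hist, window=5):
    """Moving-average FIR filter; fills in zero-count gaps."""
    half = window // 2
    smoothed = []
    for i in range(len(hist)):
        lo, hi = max(0, i - half), min(len(hist), i + half + 1)
        smoothed.append(sum(hist[lo:hi]) / (hi - lo))
    return smoothed

def adaptive_threshold(hist, plateau_frac=0.05):
    """Return the intensity where the first peak falls to a low plateau."""
    s = smooth_histogram(hist)
    peak = max(range(len(s)), key=lambda i: s[i])
    cutoff = plateau_frac * s[peak]
    for i in range(peak, len(s)):
        if s[i] <= cutoff:          # end of the first high-frequency region
            return i
    return len(s) - 1
```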
A total of 15 features, summarized in Table 1, were
chosen to represent each date image, and were
calculated after the date region was extracted from the
image using the adaptive threshold-based segmentation-
derived mask.
A. Color-Related Features
Colors are perhaps the most important features, as
some date types vary considerably in color. Thus, color
alone can often be used to restrict the number of
possible types (Fig. 4).
Figure 4. Color feature. (a) A typical Madina Ajwa date has a
uniform and dark color. (b) Safawi Al Madina dates contain lighter
and darker regions.
Distributions of colors are also quite important.
While some date types have uniform colors, others
contain both light and dark regions, and such differences
could possibly identify certain types. Once the date
region is extracted from the image, several resultant
images are produced. First, red, green and blue
component regions are extracted, and from each the
average corresponding color value is calculated. The
standard deviation of each color component region is
also measured and used as a feature, in order to
represent the distribution of the three colors.
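The six color features just described can be sketched in a few lines; this is an illustrative reimplementation, not the authors' code, and assumes `pixels` is a list of (R, G, B) tuples taken from inside the date mask:

```python
import math

def color_features(pixels):
    """Per-channel mean and standard deviation over masked date pixels."""
    n = len(pixels)
    means, stds = [], []
    for ch in range(3):  # R, G, B
        vals = [p[ch] for p in pixels]
        m = sum(vals) / n
        means.append(m)
        stds.append(math.sqrt(sum((v - m) ** 2 for v in vals) / n))
    return means + stds  # [R, G, B means] followed by [R, G, B std devs]
```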
TABLE I. FEATURES AND CORRESPONDING MEASUREMENT FORMATS

Feature Group            | Feature                          | Measurement Format
-------------------------|----------------------------------|--------------------------------------------------
Color-Related Features   | Red Filter Mean                  | Float from 8-bit Int (0 - 255)
                         | Green Filter Mean                | Float from 8-bit Int (0 - 255)
                         | Blue Filter Mean                 | Float from 8-bit Int (0 - 255)
                         | Red Filter Standard Deviation    | Float from 8-bit Int (0 - 255)
                         | Green Filter Standard Deviation  | Float from 8-bit Int (0 - 255)
                         | Blue Filter Standard Deviation   | Float from 8-bit Int (0 - 255)
Size and Shape Features  | Area                             | Number of pixels
                         | Perimeter                        | Number of pixels
                         | Ellipse Eccentricity             | Ratio between foci separation and major axis length
                         | Major Axis Length                | Distance in terms of pixels
                         | Minor Axis Length                | Distance in terms of pixels
Texture-Related Features | Red Filter Entropy               | Bit
                         | Green Filter Entropy             | Bit
                         | Blue Filter Entropy              | Bit
                         | Energy                           | Sum of squares of Gray Level Co-Occurrence Matrix entries
B. Shape and Size Features
For each date, the ellipse with the same normalized
second central moments was used as a best-fitting
ellipse. Ellipses were chosen as a modeling shape due to
the natural shapes of the dates. From the best-fitting
ellipse, the lengths of the major and minor axes were
calculated and used as features. In addition, the
eccentricity, defined as the ratio between the distance
separating the two foci and the major axis length,
was also used.
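As a worked example of these definitions (with `major_axis` and `minor_axis` as hypothetical inputs from the fitted ellipse), eccentricity follows directly from the semi-axis lengths:

```python
import math

def ellipse_features(major_axis, minor_axis):
    """Major/minor axis lengths plus eccentricity = foci separation / major axis."""
    a, b = major_axis / 2.0, minor_axis / 2.0
    c = math.sqrt(a * a - b * b)   # center-to-focus distance
    eccentricity = c / a           # 0 for a circle, approaches 1 when elongated
    return major_axis, minor_axis, eccentricity
```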
The size of each date is a key feature, as it has great
separating power amongst different date types. Although
dates are not perfectly symmetrical, the area of each date
in the images was used to represent size. Areas were
obtained simply by counting the number of pixels in the
mask, determined by the adaptive threshold. In addition,
the perimeter of the date image segments was also used,
which was simply the number of date-pixels bordering
non-date-pixels.
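A minimal sketch of these two size features, assuming `mask` is a 2D list of 0/1 values produced by the thresholding step (4-connectivity and treating the image border as non-date are our assumptions; the paper does not specify them):

```python
def size_features(mask):
    """Area = number of mask pixels; perimeter = date pixels touching non-date."""
    rows, cols = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            # 4-connected neighbors; out-of-frame counts as non-date
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                    perimeter += 1
                    break
    return area, perimeter
```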
TABLE II. COLOR DISTRIBUTION ANALYSIS

[For each of the seven date types (Amber Al Madina, Mabroom,
Madina Ajwa, Safawi Al Madina, Safree, Sukkery, Khedri), the table
shows the original image with its best-fitting ellipse, the colorscaled
image after background separation via masking, and the color
histogram with a vertical line marking the adaptive threshold.]
C. Texture-Related Features
It was also necessary to take texture and randomness
measures into consideration, as some date types can be
visually separated from others by their smoothness – or
lack thereof. In order to represent texture, two tactics
were used.
First, the gray level co-occurrence matrix was
calculated for each image, using gray-scale versions of
the images. Gray-level co-occurrence matrices indicate
how often pixels of certain intensities adjoin each other.
For its calculation, the gray-scale range was divided into
eight segments, and then each pixel was replaced with
its gray level, ranging between one and eight. A
restricted scale of 20-150 was used, as opposed to the
full scale of 0-255, for added accuracy, as very few date
pixel values existed outside the chosen range. Then, for every
pixel, the gray level of the pixel and the one directly to
its right were observed. Finally, the co-occurrence
matrix was constructed using the numbers of instances
of all possible neighborhoods. Each entry (i, j)
represents the number of instances of a j-pixel being to
the right of an i-pixel (Fig. 5).
Figure 5. Formation of a gray-level co-occurrence matrix from a
segmented matrix. The (2, 3) entry represents the number of
occurrences of (2 3) as horizontal neighbors in the matrix of regions.
The entry (7, 7) is 2, corresponding to two instances of neighboring
7s.
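The construction just described can be sketched as follows; indexing the levels 0-7 (the paper numbers them 1-8) and the clamping rule for out-of-scale values are our assumptions:

```python
def glcm(gray, levels=8, lo=20, hi=150):
    """Count, for every pixel, the quantized level of its right-hand neighbor."""
    counts = [[0] * levels for _ in range(levels)]
    step = (hi - lo) / levels
    def level(v):
        v = min(max(v, lo), hi - 1)   # clamp into the chosen 20-150 scale
        return int((v - lo) / step)   # quantize to 0 .. levels-1
    for row in gray:
        for x in range(len(row) - 1):
            counts[level(row[x])][level(row[x + 1])] += 1
    return counts
```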
The gray-level co-occurrence matrix, thus, gives us
an indication of the smoothness of an image. In an
image with completely uniform rows, for example, all
neighborhoods will be between pixels of the same level,
leading to a diagonal matrix. On the other hand, an
image where no neighboring pixels are similar in color
will produce a matrix with zeros along the diagonal and
positive values elsewhere. From the gray-level co-
occurrence matrix, several single values can be extracted
to represent randomness or texture. Energy – the sum of
the squares of the elements in the matrix – was chosen.
Besides energy, entropies (statistical measures of
randomness) were determined for each of the red,
green, and blue filters, and were used as features.
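Both scalars can be illustrated with short helper functions; normalizing the co-occurrence counts before squaring is our assumption, and `entropy_bits` computes the Shannon entropy (in bits) of a channel's intensity values:

```python
import math

def energy(glcm_counts):
    """Sum of squares of the (normalized) co-occurrence matrix entries."""
    total = sum(sum(row) for row in glcm_counts)
    return sum((v / total) ** 2 for row in glcm_counts for v in row)

def entropy_bits(values):
    """Shannon entropy, in bits, of a list of intensity values."""
    n = len(values)
    freq = {}
    for v in values:
        freq[v] = freq.get(v, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in freq.values())
```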
III. CLASSIFICATION
Three classification methods were used – nearest
neighbor, linear discriminant analysis, and neural
networks. In more detail:
A. Nearest Neighbor
K-Nearest Neighbor was tested, in which each date
of a testing set was assigned to the class of the date with
the closest feature vector in the training set. In order to
measure accuracy, 10-fold validation was implemented:
the set of all images was divided into 10 subsets, each of
equal size. Each subset was chosen randomly, but in a
way that guaranteed balance between date types. Thus,
each subset contained two images of each date type,
for a size of 14 images. In turn, every subset was tested
using the other nine subsets (126 images) as the training
set.
Two types of distance calculation were
implemented. First, the standard Euclidean distance
between any two points (L2), computed via the
Pythagorean theorem, was used. As an alternative
measurement, City Block distances (L1) were also used.
City Block distances are defined as the sum of distances
along each dimension. In other words, the differences in
each of the features are measured independently, and
then all differences are summed. Results differed only
slightly between the two distance calculation methods.
For each method and k-value, the test was repeated
100 times, each with different subsets.
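A minimal k-nearest-neighbor sketch with the two distances described above (an illustrative reimplementation, not the authors' code; feature scaling and the 10-fold loop are omitted for brevity):

```python
import math
from collections import Counter

def l2(a, b):
    """Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l1(a, b):
    """City Block distance: per-feature differences summed independently."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(train, query, k=3, dist=l2):
    """train: list of (feature_vector, label) pairs; majority vote of k nearest."""
    nearest = sorted(train, key=lambda fv_lab: dist(fv_lab[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```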
B. Discriminant Analysis
Classification was also performed with Linear
Discriminant Analysis. In this process, new features,
constructed from linear combinations of the original
features, are found and used to better separate the set of
features into their respective categories. This is done
by modeling the training samples of each date
type as a multivariate normal density. As in
K-Nearest Neighbor, 10-fold validation was
implemented to measure accuracy when generalizing.
Once again, the date subsets were randomly chosen in a
way that guaranteed equal distribution of date types.
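As a rough sketch of the discriminant idea, the following simplification assumes a shared diagonal covariance (full LDA pools the complete covariance matrix), which still yields a linear score per class:

```python
import math

def fit_lda(train):
    """train: list of (feature_vector, label); returns per-class parameters."""
    by_class = {}
    for x, y in train:
        by_class.setdefault(y, []).append(x)
    d = len(train[0][0])
    means = {y: [sum(xs[i] for xs in v) / len(v) for i in range(d)]
             for y, v in by_class.items()}
    var = [0.0] * d                      # pooled per-feature variance
    for y, v in by_class.items():
        for xs in v:
            for i in range(d):
                var[i] += (xs[i] - means[y][i]) ** 2
    var = [max(s / len(train), 1e-9) for s in var]
    priors = {y: len(v) / len(train) for y, v in by_class.items()}
    return means, var, priors

def lda_predict(model, x):
    means, var, priors = model
    def score(y):
        m = means[y]
        # linear in x once the shared quadratic term is dropped
        return sum(x[i] * m[i] / var[i] - m[i] ** 2 / (2 * var[i])
                   for i in range(len(x))) + math.log(priors[y])
    return max(means, key=score)
```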
C. Neural Networks
Classification using artificial neural networks, most
likely the most popular method in the field of vision-
based automated recognition of fruits, was explored as
well. The networks were trained using scaled conjugate
gradient back propagation, with 70% of the data
reserved for training (98 images), 15% for validation,
and 15% for testing (21 each). The validation subset was
used to measure how well the network performs with
new data, after being trained with the training subset.
The neural network was trained repeatedly (with the
same training data) until results on the validation subset
ceased to improve (measured by the mean squared error
on the validation samples), in order to avoid over-fitting.
Then, with training complete, the network was put to the
test with the testing subset. The networks used were
two-layer feed-forward networks, with sigmoid hidden
and output neurons. Training spanned neural networks
with a range of numbers of hidden neurons, and the
optimal net found contained 10 hidden neurons.
The entire process of training, validation, and final
testing was repeated many times with different numbers
of hidden neurons until an adequate total accuracy was
achieved. It must be noted that this method, particularly
with repetition until satisfaction, was computationally
expensive in comparison to the first two methods.
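The retrain-until-validation-stops-improving loop can be illustrated on a deliberately tiny stand-in model (fitting y = w * x by gradient descent) rather than a real two-layer network; the patience, learning rate, and epoch cap are arbitrary choices of ours:

```python
def train_with_early_stopping(train, val, lr=0.01, patience=5, max_epochs=1000):
    """train/val: lists of (x, y) pairs. Keep the weight with best validation MSE."""
    w = 0.0
    best_w, best_mse = w, float("inf")
    bad_epochs = 0
    for _ in range(max_epochs):
        for x, y in train:                       # one gradient step per sample
            w -= lr * 2 * (w * x - y) * x
        mse = sum((w * x - y) ** 2 for x, y in val) / len(val)
        if mse < best_mse:
            best_w, best_mse, bad_epochs = w, mse, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:           # validation stopped improving
                break
    return best_w
```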
IV. RESULTS
A. Method Comparison
The results are summarized in Table 3. After testing
1- through 20-nearest neighbor, 3-Nearest Neighbor
with Euclidean distances emerged as the most accurate
test, with an accuracy rate of 90.3%. City Block
distances were only slightly less informative: with
4-nearest neighbor, they achieved an accuracy of
89.4%.
TABLE III. ACCURACIES USING THE FOUR METHODS

Method       | Nearest Neighbor (City Block) | Nearest Neighbor (Euclidean) | Linear Discriminant Analysis | Artificial Neural Network
-------------|-------------------------------|------------------------------|------------------------------|--------------------------
Top Accuracy | 89%                           | 90%                          | 96%                          | 99%
Linear Discriminant Analysis proved to be quite a
successful way to classify different types of dates. This
can be attributed to the nature of the different types and
the features chosen to represent them: many dates can
be quite definitively identified using a small number of
features, and with 15 features, linear separation proved
productive.
The strong separability of some date categories can
be visualized directly, by viewing inside the feature
space. For example, Fig. 6 plots the major axis length,
mean red value and mean blue value of each date, which
were carefully chosen to maximize visual separation. It
can be seen that several groups form clusters that can
easily be separated from others using these features. As
shown in Fig. 6, Safree dates (top right rectangle),
Madina Ajwa dates (bottom left rectangle), Sukkery
dates (bottom right polygon) and Safawi Al Madina
dates (central polygon) all form exclusive clusters. For
each, the entire population of that type – and none of
any other type – lies in the cluster. It must also be noted
that a fifth date type – Amber Al Madina (top left
rectangle) – would also be exclusive to a certain cluster,
had there not been two Khedri dates with very similar
features. Thus, using only three features one can cluster
and separate many different groups with a large degree
of accuracy. More important is the fact that many
clusters are linearly separable – a positive indication for
a linear discriminant analysis classifier. Also, while Fig.
6 shows only three distinct features, the classifiers have
access to fifteen. With several separating planes visible
in three dimensions, certainly many can exist with
combinations of all fifteen dimensions. It is therefore no
surprise that linear discriminant analysis achieved an
accuracy of 96% with 10-fold validation.
Figure 6. Visualization of date features. The features include major
axis lengths, mean red values and mean blue values.
Classification via artificial neural networks
proved to be the most accurate method, as the best
neural network found achieved a total accuracy of
99.29%, the largest success rate achieved during this
study. It must be noted that it took a great deal of time
and training to arrive at this network: the majority of
trained networks had significantly lower success rates,
but after approximately 20 minutes we arrived at our
selected classifier, whose performance and
generalizability were superior to those of all other
classification methods we tested, as can be seen in
Table 3.
B. Discussion
Artificial Neural Networks emerged with the best
final accuracy, reaching 98.6%. However, linear
discriminant analysis performed nearly as well, with an
accuracy of 96%. The success of LDA can in part be
attributed to the existence of easily computable features
that achieve high separability across a number of
the date categories being classified. Nevertheless,
our experiment confirmed that in terms of absolute
accuracy, neural networks are unmatched. However, one
should not underestimate the time and effort required to
find a network with a minimum set error rate. Results
using nearest neighbor methods are not as accurate as
either of the other two methods, but fall within an
acceptable range.
V. CONCLUSION
Date palms are not only an ancient fruit crop with
historical as well as symbolic significance, but their
dates are also a noticeable dietary component of several
nations in the modern world, a raw material for many
products, and a commodity with a huge worldwide
market and strong economic significance. In this paper,
we have presented a system for classifying dates based
on images alone, without the need for time-consuming
and intricate physical measurements. Our system
consists of a customized feature extraction stage,
followed by classification. An extensive empirical trial
is also presented, exposing a number of results regarding
classifier choice and tuning while guaranteeing
generalizability and, most importantly, illustrating the
real-world effectiveness and direct applicability of our
system. Many extensions of the proposed system are
possible, enabling technology to increase the utilization
as well as the quality of one of the world's most ancient
fruits. Other research involving date classification
has provided far less accurate classification methods
[15], or has required the measurement of physical
attributes [14]. This paper thus shows that dates can be
classified very accurately using computer vision, and
supports further research on the topic, as well as the
implementation of vision-based systems in date
production plants.
REFERENCES
[1] Food and Agriculture Organization of the United Nations, "Food
and Agricultural commodities production 2010".
[2] Anwar_saadat, “Date Output in 2005", Wikipedia, 2007.
http://en.wikipedia.org/wiki/File:2005dattes.png.
[3] A. Zaid, "Date Palm Cultivation," Rome, 2002.
[4] Abu Dhabi Food Control Authority, UAE Dates Named Best
Selling Product in SIAL China 2012, 2012.
[5] Y. A. Ohali, "Computer vision based date fruit grading system:
Design," Journal of King Saud University, vol. 23, pp. 29 - 36,
2011.
[6] D. J. Lee, "Development of a machine vision system for
automatic date grading using digital reflective near-infrared
imaging.," Journal of Food Engineering, vol. 86, pp. 388-398,
2008.
[7] D. Unay and B. Gosselin, "Thresholding-based segmentation
and apple grading by machine vision," in Proc. of European
Signal Processing Conference, 2005.
[8] D. Unay, B. Gosselin, O. Kleynen, V. Leemans, M. Destain and
O. Debeir, "Automatic grading of Bi-colored apples by
multispectral machine vision," Computers and Electronics in
Agriculture, vol. 75, no. 1, pp. 204-212, 2011.
[9] D. Unay, "Artificial neural network-based segmentation and
apple grading by machine vision," in Proc. of IEEE International
Conference on Image Processing, 2005.
[10] F. Pedreschi, D. Mery, F. Mendoza and J. Aguilera,
"Classification of Potato Chips Using Pattern Recognition,"
Journal of Food Science, vol. 69, no. 6, pp. E264 - E270, 2004.
[11] Z. Schmilovitch, A. Hoffman, H. Egozi, R. Ben-Zvi, Z.
Bernstein and V. Alchanatis, "Maturity determination of fresh
dates by near infrared spectrometry," Journal of the Science of
Food and Agriculture, vol. 79, pp. 86-90, 1999.
[12] A. Al-Janobi, "Date inspection by color machine vision.,"
Journal of King Saud University, vol. 12, no. 1, pp. 69-97, 2000.
[13] A. A. Al-Janobi, "Application of co-occurrence matrix method
in grading date fruits," in ASAE Paper, no. 98-3024, 1998.
[14] A. I. Hobani, A. M. Thottam and K. A. Ahmed, "Development
of a neural network classifier for date fruit varieties using some
physical attributes," King Saud University - Agricultural
Research Center, 2003.
[15] M. Fadel, "Date fruits classification using probabilistic neural
networks," Agricultural Engineering International: the CIGR
Ejournal, vol. 9, 2007.
[16] D. Unay, B. Gosselin, "Apple defect detection and quality
classification with MLP-neural networks," in 13th Annual
Workshop on Circuits, Systems and Signal Processing, 2002.
[17] D. Unay and B. Gosselin, "Apple defect segmentation by
artificial neural networks," in Proceedings of BeNeLux
Conference on Artificial Intelligence, 2006.
[18] N. Alavi, "Date grading using rule-based fuzzy inference
system," Journal of Agricultural Technology, vol. 8, no. 4, pp.
1243-1254, 2012.
[19] S. M. Mazloumzadeh, M. Shamsi and H. Nezamabadi-pour,
"Evaluation of general-purpose lifters for the date harvest
industry based on a fuzzy inference system," Computers and
Electronics in Agriculture, vol. 60, pp. 60-66, 2008.