This document discusses the red-eye effect in photographs and algorithms for detecting and correcting it. It begins by explaining the cause of red-eye: flash light reflecting off the blood-vessel-rich interior of the eye makes the pupil appear red in photographs. It then reviews previous work on red-eye detection and correction algorithms. The proposed algorithm uses three main steps: 1) face detection using the Viola-Jones algorithm, 2) red-eye detection within the detected face regions, and 3) red-eye correction using in-painting to remove the red regions and repaint the pupils in a proper circular shape. The algorithm is tested on a large set of photographs to evaluate its effectiveness against conventional algorithms.
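The per-pixel part of step 3 can be sketched as follows. This is a minimal illustration, not the paper's method: the redness criterion (red channel dominating the others) and the replacement tone are assumptions chosen for clarity.

```python
# Sketch of a per-pixel red-eye test and correction. The thresholds and
# the replacement colour are illustrative assumptions, not the paper's
# exact formulation.

def is_red_eye_pixel(r, g, b):
    """Flag a pixel as 'red-eye' when the red channel strongly dominates."""
    return r > 100 and r > 2 * max(g, b)

def correct_pixel(r, g, b):
    """Replace a red-eye pixel with a dark, neutral pupil tone."""
    if is_red_eye_pixel(r, g, b):
        dark = min(g, b)          # keep only the non-red luminance
        return (dark, dark, dark)
    return (r, g, b)

def correct_region(pixels):
    """Apply the correction to a 2-D list of (r, g, b) tuples."""
    return [[correct_pixel(*p) for p in row] for row in pixels]
```

A real pipeline would run this only inside eye regions found in step 2, then in-paint the corrected area into a circular pupil.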
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers (Yen Ho)
This is a key paper: Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers. It reports 100% face detection and 93% feature extraction accuracy for expressionless faces.
This document summarizes a research paper that proposes a new face recognition method capable of recognizing faces with expressions, glasses, and/or rotation. The method uses variance estimation of the red, green, and blue color components to compare extracted faces to those in a database. It also uses Euclidean distance to compare extracted facial features (eyes, nose, mouth) to those in the database. The method is divided into three steps: 1) variance estimation of color components, 2) facial feature extraction based on feature locations, and 3) identifying similar faces by scanning the database. Experimental results showed the method achieved good accuracy, speed, and used simple computations for face recognition.
The document summarizes research on automated face detection and recognition. It discusses common applications of face detection such as webcam tracking and photo tagging. Face recognition can be used for biometrics, mugshot databases, and detecting fake IDs. The document then compares human and computer abilities in face detection/recognition and describes the challenges computers face in representing multidimensional face data. It provides a brief history of the field and covers common approaches to face detection and recognition including eigenfaces, Fisherfaces, neural networks, Gabor wavelets, and active shape models. It also discusses the challenges of 3D and video face recognition, and of comparing face recognition systems.
Real Time Blinking Detection Based on Gabor Filter (Waqas Tariq)
The document proposes a new method for real-time blinking detection based on Gabor filters. It begins by reviewing existing methods and their limitations in dealing with noise, variations in eye shape, and blinking speed. The proposed method uses a Gabor filter to extract the top and bottom arcs of the eye from an image. It then measures the distance between these arcs and compares it to a threshold: a distance below the threshold indicates a closed eye, while a distance above indicates an open eye. The document claims this Gabor filter-based approach is robust to noise, variations in eye shape and blinking speed. It presents experimental results showing the method can accurately detect blinking across different users.
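The decision rule described above, comparing the gap between the two eyelid arcs to a threshold, can be sketched in a few lines. The arc representation (lists of vertical coordinates) and the threshold value are assumptions for illustration; the paper's Gabor-filter arc extraction itself is not reproduced here.

```python
# Minimal sketch of the arc-distance decision rule: a gap below the
# threshold means a closed eye, above means open. The threshold and the
# arc representation are illustrative assumptions.

def eye_state(top_arc_y, bottom_arc_y, threshold=6.0):
    """Classify the eye as open/closed from the mean vertical gap
    between the extracted top and bottom eyelid arcs."""
    gaps = [b - t for t, b in zip(top_arc_y, bottom_arc_y)]
    mean_gap = sum(gaps) / len(gaps)
    return "open" if mean_gap > threshold else "closed"
```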
Facial Expression Recognition Using Local Binary Pattern and Support Vector Machine (AM Publications)
Facial expression analysis is a challenging problem with significant applications in fields such as human-computer interaction and data-driven animation. Deriving an efficient facial representation from raw face images is a crucial step in facial expression recognition. Here, a facial representation based on statistical local features, Local Binary Patterns (LBP), is empirically assessed, and several machine learning techniques are evaluated on various databases. LBP features, which are effective and efficient for facial expression recognition, are widely used by researchers. The present work uses the Cohn-Kanade database, with MATLAB as the programming language. First, the face area is divided into small regions, from which LBP histograms are extracted and concatenated into a single feature vector. This feature vector forms an efficient representation of the face and is useful for measuring similarity between images.
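The region-histogram-concatenation pipeline described above can be sketched as follows. This is a plain-Python illustration of the standard 8-neighbour LBP operator, not the paper's MATLAB implementation; the tiny test images stand in for real face regions.

```python
# Sketch of the LBP pipeline: per-pixel 8-neighbour LBP codes, one
# 256-bin histogram per face region, histograms concatenated into a
# single feature vector.

def lbp_code(img, y, x):
    """8-neighbour LBP code: each neighbour >= centre contributes a bit."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 << i) for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

def feature_vector(regions):
    """Concatenate per-region histograms into one face descriptor."""
    vec = []
    for region in regions:
        vec.extend(lbp_histogram(region))
    return vec
```

Two faces can then be compared by a histogram distance (e.g. chi-square) between their feature vectors.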
Face recognition: A Comparison of Appearance Based Approaches (sadique_ghitm)
Face recognition approaches can be divided into three main categories: direct correlation, eigenfaces, and Fisherfaces. Direct correlation directly compares pixel intensity values between images. Eigenfaces uses principal component analysis to project faces into a face space defined by eigenvectors. Fisherfaces aims to maximize between-class variation while minimizing within-class variation, to better account for differences in lighting and expression. Pre-processing techniques such as color normalization, histogram equalization, and edge detection can improve the accuracy of face recognition systems by reducing the effects of lighting variation. Testing various pre-processing techniques on the different approaches found that the Fisherfaces method combined with SLBC preprocessing achieved the lowest error rate, 17.8%, followed closely by direct correlation with intensity normalization at roughly 18%.
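The eigenfaces step mentioned above, PCA projection into a face space, can be sketched with NumPy. The tiny 4-pixel "images" are illustrative stand-ins for real flattened face photos, and taking `eigh` of the full pixel covariance is a simplification (high-dimensional implementations use the smaller sample-covariance trick).

```python
# Minimal eigenfaces sketch: mean-centre the flattened face images,
# eigen-decompose the pixel covariance, keep the top-k eigenvectors,
# and project faces into that k-dimensional face space.
import numpy as np

def eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) matrix of flattened images.
    Returns the mean face and the top-k principal components."""
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues ascending
    return mean, vecs[:, ::-1][:, :k]     # top-k, largest first

def project(face, mean, components):
    """Coordinates of a face in the k-dimensional face space."""
    return (np.asarray(face, dtype=float) - mean) @ components
```

Recognition then reduces to nearest-neighbour matching of projected coordinates, which is what direct correlation does in the raw pixel space.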
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ... (Editor IJCATR)
In the later stages of amyotrophic lateral sclerosis (ALS), patients cannot control any muscles except their eyes. This paper aims to develop an eye-based assistive system controlled by eye gaze to help ALS patients improve their quality of life. Two main functions are proposed. The first, called HelpCall, detects the user's eye gaze to activate corresponding events; ALS patients can "talk" with other people more easily by looking at specific buttons in the HelpCall system. The second is an eye-controlled browser that lets users browse web pages on the Internet: the interface embeds the IE browser behind several buttons controlled by the user's eye gaze, so ALS patients can access the Internet using only their eyes. The paper discusses the ideas behind the assistive system and then describes its design and implementation in detail.
A study of techniques for facial detection and expression classification (IJCSES Journal)
Automatic recognition of facial expressions is an important component of human-machine interfaces and has attracted considerable research attention since the 1990s. Although humans recognize faces without effort or delay, recognition by a machine is still a challenge; difficulties include high variability in orientation, lighting, scale, facial expression, and occlusion. Applications lie in fields such as user authentication, person identification, video surveillance, information security, and data privacy. Approaches to facial recognition fall into two categories: holistic-based and feature-based. Holistic-based methods treat the image as a single entity without isolating different regions of the face, whereas feature-based methods identify specific points on the face such as the eyes, nose, and mouth. In this paper, facial expression recognition is analyzed with various methods of facial detection, facial feature extraction, and classification.
The document describes an algorithm for eye detection in face images. It begins with face detection using skin color detection in HSV color space. Then it finds the symmetric axis of the extracted face region using gradient orientation histograms to determine the location of the eyes. It further finds the symmetric axis within the eye region to locate the center of the eyes. The algorithm aims to accurately detect the eyes even when the face is rotated, which is important for applications like face recognition and gaze tracking.
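The first stage above, skin-colour detection in HSV space, can be sketched with the standard library's `colorsys`. The hue/saturation bounds below are common illustrative values, not the paper's calibrated thresholds.

```python
# Sketch of an HSV skin-colour test: skin pixels cluster at low
# (reddish) hues with moderate saturation. Thresholds are illustrative
# assumptions, not the paper's values.
import colorsys

def is_skin(r, g, b):
    """Rough skin test on 8-bit RGB values."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    reddish = h <= 50 / 360 or h >= 340 / 360
    return reddish and 0.15 <= s <= 0.9 and v >= 0.2

def skin_mask(image):
    """Binary mask over a 2-D list of (r, g, b) pixels."""
    return [[1 if is_skin(*p) else 0 for p in row] for row in image]
```

The resulting mask gives the face region on which the symmetry-axis search for the eyes would then run.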
This document provides an overview of eye tracking technology. It discusses why eye tracking is used to understand human behavior and thinking. It also describes Tobii eye tracking technology, basic operating principles involving infrared light reflection, and applications in TV, web and street advertising. Eye tracking metrics like fixation time and gaze plots are explained. Advantages of eye tracking include obtaining insight into eye behavior without training, while disadvantages are expense and calibration time. The document concludes eye tracking can help develop video games and assist handicapped users.
1) NETRA is an interactive display that estimates refractive errors and focal range by taking the inverse approach of the Shack-Hartmann wavefront sensor. It uses high-resolution displays and user interaction rather than lasers and sensors.
2) The user interacts with patterns on the display to measure their farthest and nearest focal points, allowing NETRA to determine refractive errors like myopia, hyperopia, and astigmatism, as well as the overall focal range.
3) NETRA has applications in low-cost, portable eye exams that could improve eye care access in developing countries. Over 600 million people lack corrective glasses due to limited resources.
Facial expression recognition based on local binary patterns (ahmad abdelhafeez)
This document summarizes research on facial expression recognition using Local Binary Patterns (LBP) features. The key points discussed are:
1) LBP features are effective and efficient for facial expression recognition compared to other methods like Gabor wavelets.
2) LBP features perform robustly even at low image resolutions, important for real-world applications.
3) Boosting LBP features improves recognition performance over using LBP alone. However, boosted features may not generalize well across datasets.
The paper presents a comprehensive study of LBP features for facial expression recognition and addresses challenges like low-resolution images.
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK (ijiert bestjournal)
Security and authentication of a person is a vital part of any business, and many techniques are used for this purpose. One of them is human face recognition, an effective means of authenticating a person. The benefit of this approach is that it can detect changes in an individual's face pattern to a substantial extent, and the recognition system can tolerate local variations in facial expression. Hence human face recognition can be used as a key factor in crime detection, mainly to identify criminals. There are several approaches to human face recognition, of which image processing, Principal Component Analysis (PCA), and neural networks are included in this project. The system consists of a database of facial patterns for each individual. Characteristic features called "eigenfaces" are extracted from the stored images, and the system is trained on them for subsequent recognition of new images.
This document summarizes a study comparing the ability of seven contact lens designs to reduce higher-order aberrations (HOA) in 16 eyes. An aberrometer was used to measure HOA both without lenses and with each lens design. The study found that Definition HD contact lenses reduced HOA in 14 out of 16 eyes, more than all other lens designs tested, lowering HOA over four times more than the next best competitor. Definition lenses also lowered spherical aberration in 11 out of 16 eyes, more than other lenses, offering a better option for aberration control compared to first generation aspheric lenses.
This report is based on research compiled from books and websites. It covers the history of face recognition, explains how it works both traditionally and technically, and introduces some face recognition software and devices. A face recognition algorithm is also included in the report.
This document discusses thermal infrared face recognition. It begins by introducing thermal face recognition and noting the advantages of using thermal images over visual images, such as insensitivity to illumination changes. It then describes the different infrared spectrums and how thermal face images are generated from human body heat patterns. Some critical observations of thermal imaging are discussed, such as its inability to distinguish identical twins or effects of breathing. The document also covers limitations of thermal imaging and how fusing thermal and visual images can generate more informative data for face recognition.
The Virtual Dimension Center (VDC) Fellbach has compiled the state of the art, the market situation, and the areas of application of the technology field "Eye Tracking" into a whitepaper.
Design of gaussian spatial filter to determine the amount of refraction error... (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
IRJET-Unconstraint Eye Tracking on Mobile Smartphone (IRJET Journal)
This document presents a study on developing an unconstrained eye tracking system using only the front camera of a mobile smartphone. It aims to create a low-cost eye tracking solution. The researchers designed techniques to detect faces, eyes and irises from camera images using Haar cascade classification and circular Hough transform. They tested the system under various conditions like lighting changes, wearing glasses, in dark environments, and while driving. The techniques were able to accurately detect eyes in different scenarios. The system has applications in areas like driving assistance systems and could be integrated into vehicles.
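The iris-localisation step above uses the circular Hough transform; its core voting scheme can be sketched in pure Python. This is a simplified illustration with a fixed, known radius; real implementations (e.g. OpenCV's `HoughCircles`) also search over a range of radii.

```python
# Sketch of the circular Hough transform voting step: every edge point
# votes for all candidate circle centres at the given radius, and the
# most-voted cell wins. Fixing the radius is an illustrative
# simplification.
import math

def hough_circle_center(edge_points, radius, width, height):
    """Return the (x, y) centre cell receiving the most votes."""
    votes = {}
    for (ex, ey) in edge_points:
        for deg in range(0, 360, 5):            # sample candidate angles
            a = ex - radius * math.cos(math.radians(deg))
            b = ey - radius * math.sin(math.radians(deg))
            cell = (round(a), round(b))
            if 0 <= cell[0] < width and 0 <= cell[1] < height:
                votes[cell] = votes.get(cell, 0) + 1
    return max(votes, key=votes.get)
```

In the smartphone pipeline, the edge points would come from an edge detector applied inside the eye regions found by the Haar cascade.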
Face recognition technology uses biometrics to identify individuals based on facial features. The document outlines the history of facial recognition from early systems in the 1960s-1980s to modern implementations. It describes how current systems work by detecting faces, normalizing images, extracting nodal points, creating templates, and matching templates to identify or verify individuals. The technology has grown from using 21 markers to analyzing over 80 nodal points for increased accuracy. Strengths include leveraging existing cameras, but weaknesses include impacts of environment and appearance changes. Applications include security, banking, and daycare pickups.
This document summarizes a research paper on face recognition. It discusses what face recognition is, how it works through face detection and recognition. It describes different approaches to face recognition including feature extraction methods, holistic methods, and hybrid methods. It discusses problems with face recognition related to variations in expressions, makeup, lighting. It provides examples of applications of face recognition technology including access control systems, time attendance tracking, and facial recognition software for online gaming and crime prevention.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/02/eye-tracking-for-the-future-a-presentation-from-parallel-rules/
Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future” tutorial at the September 2020 Embedded Vision Summit.
Eye tracking is an increasingly important technology for applications ranging from augmented and virtual reality head-mounted displays to automotive driver monitoring. In this talk, Milford introduces eye tracking techniques and technical challenges. He also explores camera and computational requirements for eye tracking, and highlights selected use cases and applications.
1. The document proposes a hybrid approach to facial expression recognition that combines appearance features extracted using Local Directional Number descriptors with geometric features based on distances between facial landmark points.
2. The features are classified independently using SVMs and the scores are fused at the decision level using product rule fusion to identify facial expressions in images.
3. Experiments on the CK+ and JAFFE databases show the hybrid approach achieves better recognition rates than using appearance or geometric features individually.
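The decision-level product-rule fusion in step 2 can be sketched as follows. It assumes each classifier outputs normalised per-class scores; the class labels and score values below are hypothetical.

```python
# Sketch of product-rule score fusion: the fused score of a class is the
# product of that class's scores across classifiers, and the class with
# the largest fused score wins.

def product_rule_fusion(score_sets):
    """score_sets: one dict of {class_label: probability} per classifier.
    Returns the winning class label."""
    fused = {}
    for scores in score_sets:
        for label, p in scores.items():
            fused[label] = fused.get(label, 1.0) * p
    return max(fused, key=fused.get)
```

With an appearance-based SVM and a geometry-based SVM each emitting class probabilities, fusing by product rewards classes that both classifiers agree on.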
This document outlines a project that uses face recognition from face motion manifolds. It proposes an information-theoretic approach using Resistor-Average Distance (RAD) as a dissimilarity measure between distributions of face images. A kernel-based algorithm is introduced that allows modeling of complex, nonlinear manifolds while retaining the closed-form RAD expression between normal distributions. Recognition rates of 90-100% can be achieved on databases of 10-100 people by modeling errors in face registration. The algorithm uses kernel PCA to nonlinearly map data and computes RAD on the mapped data as the dissimilarity measure between face image distributions.
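The Resistor-Average Distance above combines the two directed KL divergences the way parallel resistors combine. A sketch for one-dimensional normal distributions, where the KL divergence has a simple closed form, is given below; the paper works with multivariate distributions after kernel PCA, which this illustration does not reproduce.

```python
# Sketch of the Resistor-Average Distance (RAD) for 1-D normals:
# RAD(p, q) = 1 / (1/KL(p||q) + 1/KL(q||p)), i.e. the parallel-resistor
# combination of the two directed KL divergences.
import math

def kl_normal(m0, s0, m1, s1):
    """KL( N(m0, s0^2) || N(m1, s1^2) ), closed form."""
    return math.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def rad(m0, s0, m1, s1):
    """Symmetric RAD dissimilarity between two 1-D normals."""
    d01 = kl_normal(m0, s0, m1, s1)
    d10 = kl_normal(m1, s1, m0, s0)
    if d01 <= 0 or d10 <= 0:      # identical distributions
        return 0.0
    return 1.0 / (1.0 / d01 + 1.0 / d10)
```

Unlike either directed KL divergence alone, RAD is symmetric, which makes it usable as a dissimilarity measure between face-image distributions.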
This document discusses face detection, analysis, and recognition using different techniques. It begins by introducing Matteo Valoriani and Luigi Oliveto. It then discusses doing face analysis at home using OpenCV/EmguCV. It covers using cloud services like Betaface and Microsoft Project Oxford. It also discusses using special cameras like Kinect and RealSense for face analysis. It concludes with discussing common problems and limits of face analysis techniques.
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor... (Editor IJCATR)
In recent times, along with advances and new inventions in science and technology, fraudsters and identity thieves have also become smarter, finding new ways to fool authorization and authentication processes. There is therefore a strong need for efficient face recognition, i.e. computer systems capable of recognizing the faces of authenticated persons. One way to make face recognition efficient is to extract features of faces. This paper compares the relative efficiency of lip extraction and eye extraction features for face recognition in biometric devices, with the aim of establishing which feature extraction method gives better results under various conditions. For the recognition experiments, face images of persons from different sets of the YALE database were used: a dataset of 132 images in total, consisting of 11 persons with 12 face images each.
This document summarizes research on improving the Apriori algorithm for mining association rules from transactional databases. It first provides background on association rule mining and describes the basic Apriori algorithm. The Apriori algorithm finds frequent itemsets by multiple passes over the database but has limitations of increased search space and computational costs as the database size increases. The document then reviews research on variations of the Apriori algorithm that aim to reduce the number of database scans, shrink the candidate sets, and facilitate support counting to improve performance.
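The basic level-wise Apriori procedure described above can be sketched compactly. This is the textbook algorithm, not any of the reviewed variations; the transaction data in the usage test is hypothetical.

```python
# Compact sketch of basic Apriori: at each level, count candidate
# itemsets with one pass over the transactions, keep those meeting the
# minimum support, and join survivors into the next level's candidates.
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets (as frozensets) with their counts."""
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    level = {frozenset([i]) for i in items}   # level 1: single items
    frequent = {}
    k = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(kept)
        k += 1
        # Join step: combine frequent k-itemsets into (k+1)-candidates.
        level = {a | b for a, b in combinations(list(kept), 2)
                 if len(a | b) == k}
    return frequent
```

The limitations the reviewed variations attack are visible here: one full database scan per level, and a candidate set that can grow quickly at the join step.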
This document compares several propagation path loss models - Okumura, Hata, ECC 33, Cost-231, and SUI - by estimating path losses and signal strengths at 950 MHz in urban, suburban, and rural areas. Path losses are estimated using each model and compared to measured practical data from those environments. The results show that the Hata model most closely matches the practical data across all three environments. Therefore, the Hata model is concluded to be the most suitable for predicting signal strength in urban, suburban, and rural areas.
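The Hata model singled out above has a simple closed form; a sketch of its urban variant follows. The constants are the standard Okumura-Hata formulation (valid for 150-1500 MHz, so it covers the 950 MHz case) with the small/medium-city mobile-antenna correction; the antenna heights and distance in the test are illustrative values, not the paper's measurement setup.

```python
# Sketch of the Okumura-Hata urban median path-loss model:
# L = 69.55 + 26.16 log10(f) - 13.82 log10(hb) - a(hm)
#     + (44.9 - 6.55 log10(hb)) log10(d)
# with f in MHz, antenna heights in metres, distance d in km.
import math

def hata_urban_path_loss(f_mhz, h_base_m, h_mobile_m, d_km):
    """Median urban path loss in dB (small/medium-city correction)."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))
```

Received signal strength at a given distance is then the transmit power plus antenna gains minus this loss, which is how the model predictions would be compared against measured data.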
A study of techniques for facial detection and expression classificationIJCSES Journal
Automatic recognition of facial expressions is an important component for human-machine interfaces. It
has lot of attraction in research area since 1990's.Although humans recognize face without effort or
delay, recognition by a machine is still a challenge. Some of its challenges are highly dynamic in their
orientation, lightening, scale, facial expression and occlusion. Applications are in the fields like user
authentication, person identification, video surveillance, information security, data privacy etc. The
various approaches for facial recognition are categorized into two namely holistic based facial
recognition and feature based facial recognition. Holistic based treat the image data as one entity without
isolating different region in the face where as feature based methods identify certain points on the face
such as eyes, nose and mouth etc. In this paper, facial expression recognition is analyzed with various
methods of facial detection,facial feature extraction and classification.
The document describes an algorithm for eye detection in face images. It begins with face detection using skin color detection in HSV color space. Then it finds the symmetric axis of the extracted face region using gradient orientation histograms to determine the location of the eyes. It further finds the symmetric axis within the eye region to locate the center of the eyes. The algorithm aims to accurately detect the eyes even when the face is rotated, which is important for applications like face recognition and gaze tracking.
This document provides an overview of eye tracking technology. It discusses why eye tracking is used to understand human behavior and thinking. It also describes Tobii eye tracking technology, basic operating principles involving infrared light reflection, and applications in TV, web and street advertising. Eye tracking metrics like fixation time and gaze plots are explained. Advantages of eye tracking include obtaining insight into eye behavior without training, while disadvantages are expense and calibration time. The document concludes eye tracking can help develop video games and assist handicapped users.
1) NETRA is an interactive display that estimates refractive errors and focal range by taking the inverse approach of the Shack-Hartmann wavefront sensor. It uses high-resolution displays and user interaction rather than lasers and sensors.
2) The user interacts with patterns on the display to measure their farthest and nearest focal points, allowing NETRA to determine refractive errors like myopia, hyperopia, and astigmatism, as well as the overall focal range.
3) NETRA has applications in low-cost, portable eye exams that could improve eye care access in developing countries. Over 600 million people lack corrective glasses due to limited resources.
Facial expression recognition based on local binary patterns finalahmad abdelhafeez
This document summarizes research on facial expression recognition using Local Binary Patterns (LBP) features. The key points discussed are:
1) LBP features are effective and efficient for facial expression recognition compared to other methods like Gabor wavelets.
2) LBP features perform robustly even at low image resolutions, important for real-world applications.
3) Boosting LBP features improves recognition performance over using LBP alone. However, boosted features may not generalize well across datasets.
The paper presents a comprehensive study of LBP features for facial expression recognition and addresses challenges like low-resolution images.
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORKijiert bestjournal
Security and authentication of a person is a vital part of any business. There are many techniques use d for this purpose. One of technique is human face recognition . Human Face recognition is an effective means of authenticating a person. The benefit of this approa ch is that,it enables us to detect changes in the face pattern of an individual to substantial extent. The recognition s ystem can tolerate local variations in the face exp ression of an individual. Hence Human face recognition can be use d as a key factor in crime detection mainly to iden tify criminals. There are several approaches to Human fa ce recognition of which Image Processing Principal Component Analysis (PCA) and Neural Networks have been includ ed in our project. The system consists of a databas e of a set of facial patterns for each individual. The charact eristic features called �eigenfaces� are extracted from the stored images using which the system is trained for subseq uent recognition of new images.
This document summarizes a study comparing the ability of seven contact lens designs to reduce higher-order aberrations (HOA) in 16 eyes. An aberrometer was used to measure HOA both without lenses and with each lens design. The study found that Definition HD contact lenses reduced HOA in 14 out of 16 eyes, more than all other lens designs tested, lowering HOA over four times more than the next best competitor. Definition lenses also lowered spherical aberration in 11 out of 16 eyes, more than other lenses, offering a better option for aberration control compared to first generation aspheric lenses.
This report is based on research; its content is drawn from books and websites. You can learn about the history of face recognition, how it works in traditional and technical ways, and get an introduction to some face recognition software and devices. A face recognition algorithm is also included in the report.
This document discusses thermal infrared face recognition. It begins by introducing thermal face recognition and noting the advantages of using thermal images over visual images, such as insensitivity to illumination changes. It then describes the different infrared spectrums and how thermal face images are generated from human body heat patterns. Some critical observations of thermal imaging are discussed, such as its inability to distinguish identical twins or effects of breathing. The document also covers limitations of thermal imaging and how fusing thermal and visual images can generate more informative data for face recognition.
The Virtual Dimension Center (VDC) Fellbach has compiled the state of the art as well as the market situation and areas of application of the technology field "Eye Tracking" and put it together in a whitepaper.
Design of gaussian spatial filter to determine the amount of refraction error...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
IRJET-Unconstraint Eye Tracking on Mobile SmartphoneIRJET Journal
This document presents a study on developing an unconstrained eye tracking system using only the front camera of a mobile smartphone. It aims to create a low-cost eye tracking solution. The researchers designed techniques to detect faces, eyes and irises from camera images using Haar cascade classification and circular Hough transform. They tested the system under various conditions like lighting changes, wearing glasses, in dark environments, and while driving. The techniques were able to accurately detect eyes in different scenarios. The system has applications in areas like driving assistance systems and could be integrated into vehicles.
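The circular Hough transform used for iris detection can be sketched in plain NumPy. This toy accumulator (not the paper's implementation, which pairs it with Haar cascade detection on real camera frames) votes each edge point onto candidate circle centres at a fixed radius:

```python
import numpy as np

def hough_circle(edge_points, radius, shape):
    """Vote each edge point onto the circle of candidate centres at the given radius."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (x, y) in edge_points:
        a = np.round(x - radius * np.cos(thetas)).astype(int)
        b = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
        np.add.at(acc, (a[ok], b[ok]), 1)
    # the cell with the most votes is the most likely circle centre
    return np.unravel_index(acc.argmax(), acc.shape)

# synthetic iris edge: a circle of radius 10 centred at (30, 40)
ts = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = [(30 + 10 * np.cos(t), 40 + 10 * np.sin(t)) for t in ts]
center = hough_circle(pts, radius=10, shape=(64, 64))
```

A full detector would also sweep over candidate radii and take edge points from an edge map rather than a synthetic circle.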
Face recognition technology uses biometrics to identify individuals based on facial features. The document outlines the history of facial recognition from early systems in the 1960s-1980s to modern implementations. It describes how current systems work by detecting faces, normalizing images, extracting nodal points, creating templates, and matching templates to identify or verify individuals. The technology has grown from using 21 markers to analyzing over 80 nodal points for increased accuracy. Strengths include leveraging existing cameras, but weaknesses include impacts of environment and appearance changes. Applications include security, banking, and daycare pickups.
This document summarizes a research paper on face recognition. It discusses what face recognition is, how it works through face detection and recognition. It describes different approaches to face recognition including feature extraction methods, holistic methods, and hybrid methods. It discusses problems with face recognition related to variations in expressions, makeup, lighting. It provides examples of applications of face recognition technology including access control systems, time attendance tracking, and facial recognition software for online gaming and crime prevention.
For the full video of this presentation, please visit:
https://www.edge-ai-vision.com/2021/02/eye-tracking-for-the-future-a-presentation-from-parallel-rules/
Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future” tutorial at the September 2020 Embedded Vision Summit.
Eye tracking is an increasingly important technology for applications ranging from augmented and virtual reality head-mounted displays to automotive driver monitoring. In this talk, Milford introduces eye tracking techniques and technical challenges. He also explores camera and computational requirements for eye tracking, and highlights selected use cases and applications.
1. The document proposes a hybrid approach to facial expression recognition that combines appearance features extracted using Local Directional Number descriptors with geometric features based on distances between facial landmark points.
2. The features are classified independently using SVMs and the scores are fused at the decision level using product rule fusion to identify facial expressions in images.
3. Experiments on the CK+ and JAFFE databases show the hybrid approach achieves better recognition rates than using appearance or geometric features individually.
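The product-rule score fusion described in step 2 reduces to an element-wise multiply of the two classifiers' per-class scores. A minimal sketch with made-up probability values (the class names and numbers are illustrative only):

```python
import numpy as np

# hypothetical per-class probability scores from the two independent SVMs
p_appearance = np.array([0.6, 0.3, 0.1])   # e.g. happy, sad, angry
p_geometric  = np.array([0.5, 0.1, 0.4])

fused = p_appearance * p_geometric          # product rule
fused /= fused.sum()                        # renormalise to a distribution
label = int(np.argmax(fused))               # fused decision
```

The product rule rewards classes on which both feature streams agree, which is why it often beats either stream alone.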
This document outlines a project that uses face recognition from face motion manifolds. It proposes an information-theoretic approach using Resistor-Average Distance (RAD) as a dissimilarity measure between distributions of face images. A kernel-based algorithm is introduced that allows modeling of complex, nonlinear manifolds while retaining the closed-form RAD expression between normal distributions. Recognition rates of 90-100% can be achieved on databases of 10-100 people by modeling errors in face registration. The algorithm uses kernel PCA to nonlinearly map data and computes RAD on the mapped data as the dissimilarity measure between face image distributions.
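For intuition, the Resistor-Average Distance between two normal distributions can be computed in closed form from the two KL divergence directions. A one-dimensional sketch (the paper itself works with kernel-mapped multivariate distributions):

```python
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """KL divergence between two 1-D Gaussians, closed form."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def rad_gauss(mu_p, var_p, mu_q, var_q):
    """Resistor-Average Distance: 1/RAD = 1/KL(p||q) + 1/KL(q||p),
    like resistors in parallel -- hence the name."""
    d_pq = kl_gauss(mu_p, var_p, mu_q, var_q)
    d_qp = kl_gauss(mu_q, var_q, mu_p, var_p)
    return (d_pq * d_qp) / (d_pq + d_qp)
```

Unlike either KL direction alone, RAD is symmetric, which makes it usable as a dissimilarity measure between face image distributions.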
This document discusses face detection, analysis, and recognition using different techniques. It begins by introducing Matteo Valoriani and Luigi Oliveto. It then discusses doing face analysis at home using OpenCV/EmguCV. It covers using cloud services like Betaface and Microsoft Project Oxford. It also discusses using special cameras like Kinect and RealSense for face analysis. It concludes with discussing common problems and limits of face analysis techniques.
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor...Editor IJCATR
In recent times, along with advances and new inventions in science and technology, fraudsters and identity thieves have also become smarter, finding new ways to fool authorization and authentication processes. There is therefore a strong need for efficient face recognition processes, or computer systems capable of recognizing the faces of authenticated persons. One way to make face recognition efficient is to extract features of faces. This paper compares the relative efficiency of lip extraction and eye extraction features for face recognition in biometric devices; its contribution is to bring to light which feature extraction method provides better results under various conditions. For the recognition experiments, I used face images of persons from different sets of the YALE database. My dataset contains 132 images in total, comprising 11 persons with 12 face images of each person.
This document summarizes research on improving the Apriori algorithm for mining association rules from transactional databases. It first provides background on association rule mining and describes the basic Apriori algorithm. The Apriori algorithm finds frequent itemsets by multiple passes over the database but has limitations of increased search space and computational costs as the database size increases. The document then reviews research on variations of the Apriori algorithm that aim to reduce the number of database scans, shrink the candidate sets, and facilitate support counting to improve performance.
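The basic Apriori level-wise search described above can be sketched as follows. This is the textbook algorithm, not any of the reviewed variants; the candidate-pruning step is what the variants mainly try to improve:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori: level-wise generation of frequent itemsets with pruning."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    # frequent 1-itemsets
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # join step, then prune any candidate with an infrequent (k-1)-subset
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        candidates = {c for c in candidates
                      if all(frozenset(s) in current for s in combinations(c, k - 1))}
        current = {c for c in candidates if support(c) >= min_support}
        frequent |= current
        k += 1
    return frequent

freq = apriori([{'milk', 'bread'}, {'milk', 'diapers'},
                {'milk', 'bread', 'diapers'}, {'bread'}], min_support=0.5)
```

Each `while` iteration corresponds to one full database pass, which is exactly the cost the surveyed improvements aim to reduce.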
This document compares several propagation path loss models - Okumura, Hata, ECC 33, Cost-231, and SUI - by estimating path losses and signal strengths at 950 MHz in urban, suburban, and rural areas. Path losses are estimated using each model and compared to measured practical data from those environments. The results show that the Hata model most closely matches the practical data across all three environments. Therefore, the Hata model is concluded to be the most suitable for predicting signal strength in urban, suburban, and rural areas.
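The Hata model singled out above is a closed-form empirical formula, valid for roughly 150-1500 MHz (so it covers the 950 MHz measurements). A sketch of its urban and suburban forms with the small/medium-city mobile-antenna correction factor; the parameter names are ours:

```python
import math

def hata_urban(f_mhz, h_base_m, h_mobile_m, d_km):
    """Hata median path loss (dB), urban, small/medium-city correction factor."""
    a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(f_mhz) - 0.8))
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_base_m)
            - a_hm + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km))

def hata_suburban(f_mhz, h_base_m, h_mobile_m, d_km):
    """Suburban correction subtracted from the urban value."""
    return (hata_urban(f_mhz, h_base_m, h_mobile_m, d_km)
            - 2 * (math.log10(f_mhz / 28.0)) ** 2 - 5.4)
```

Note the loss grows with distance through the `(44.9 - 6.55 log h_b) log d` term, and the suburban form is always a few dB below the urban one at the same distance.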
This document discusses integrating two assembly lines, Line A and Line B, based on lean line design concepts. It begins by calculating the customer takt time and planned cycle time for each line. It then assesses the current layout and process flow for both lines. Manual times for each station are measured using MTM/UAS analysis. Improvement opportunities are identified and manual times are recalculated. The lines are then integrated by designing a new layout with the operations of Line B incorporated into Line A to better utilize space and operators. Operator balance charts are created for the new integrated line to distribute work and ensure tasks can be completed within the planned cycle time.
This document discusses feature selection techniques for intrusion detection systems. It begins with background on intrusion detection and challenges related to large datasets. It then describes three common feature selection algorithms: Correlation-based Feature Selection (CFS), Information Gain (IG), and Gain Ratio (GR). The document proposes a fusion model that applies these three algorithms to select features, then uses genetic algorithm and naive Bayes classification to evaluate performance. It conducted experiments on the KDD Cup 99 intrusion detection dataset to compare the proposed fusion method to the individual feature selection algorithms.
This document discusses principal component analysis (PCA) for face recognition. It begins with an introduction to face recognition and PCA. PCA works by calculating eigenvectors from a set of face images, which represent the principal components that account for the most variance in the image data. These eigenvectors are called "eigenfaces" and can be used to reconstruct the face images. The document then discusses how the system is implemented, including preparing a face database, normalizing the training images, calculating the eigenfaces/principal components, projecting the face images into this reduced space, and recognizing faces by calculating distances between projected test images and training images.
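The eigenface computation described above, including the standard trick from the eigenfaces literature of eigendecomposing the small N x N matrix instead of the full pixel covariance, can be sketched on toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "face" dataset: 20 images of 8x8 pixels, flattened to 64-D vectors
faces = rng.normal(size=(20, 64))

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# eigendecompose the small 20x20 matrix A A^T instead of the 64x64
# covariance A^T A, then map eigenvectors back into pixel space with A^T
L = centered @ centered.T
vals, vecs = np.linalg.eigh(L)
order = np.argsort(vals)[::-1]
eigenfaces = centered.T @ vecs[:, order[:5]]        # top 5 components, 64-D each
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)    # normalise columns

# project a face into the 5-D eigenface space and reconstruct it
weights = eigenfaces.T @ centered[0]
recon = mean_face + eigenfaces @ weights
```

Recognition then amounts to comparing the `weights` vectors of a test image and the stored training images, e.g. by Euclidean distance.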
This document summarizes research on improving web performance through integrated web prefetching and caching. It discusses how web prefetching can reduce latency by predicting and fetching web pages before they are requested. An integrated architecture reserves cache space for prefetched pages. By analyzing web logs to build prediction models of frequent paths, it aims to improve performance over caching alone. The tradeoff between reduced latency and potential increased network load is analyzed. Several previous works studying prefetching and caching algorithms individually are reviewed. The goal is a seamless prefetching system that works with existing caching systems.
This document discusses database access pattern protection using a partial shuffle scheme. It proposes a new encryption algorithm called Reverse Encryption Algorithm (REA) that aims to provide security while limiting performance degradation from encryption. It also discusses prior work on Private Information Retrieval (PIR) techniques and their limitations. The key idea of the proposed scheme is to introduce a trusted component that shuffles only a portion of the database periodically, providing privacy assurances similar to PIR but with lower computation costs than a full database shuffle each time.
This document discusses requirement metrics that can be used to measure and improve software quality during the requirements engineering phase of the software development lifecycle. It describes several types of requirement metrics, including size metrics, traceability metrics, completeness metrics, and volatility metrics. These metrics provide insight into factors like the scope and complexity of requirements, consistency between requirement levels, requirements changes over time, and gaps or issues in requirements documentation. Tracking and analyzing these metrics during requirements can help software developers and analysts enhance software quality from the early stages of development.
Abstract: This paper presents a new face-parts information analyzer as a promising model for detecting faces and locating facial features in images. The main objective is to build fully automated human facial measurement systems from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks. The core of face detection here is to detect each part and mark it with a circle or rectangle; detection depends on matching the face against patterns learned through pattern recognition. The study presents a novel and simple model based on a mixture of techniques and algorithms in a shared pool: the Viola-Jones object detection framework combined with geometric and symmetric information about the face parts in the image. Keywords: Face detection, Video frames, Viola-Jones, Skin detection, Skin color classification, Face recognition, Pattern recognition, Skin color.
Title: Face Detection Using Modified Viola Jones Algorithm
Author: Alpika Gupta, Dr. Rajdev Tiwari
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
1) The document presents a new face parts detection algorithm that combines the Viola-Jones object detection framework with geometric information of facial features.
2) It detects faces, then isolates regions of interest for the eyes, nose, and mouth. Eye pupils are located using iris recognition techniques.
3) The algorithm was tested on hundreds of images and showed promising results for automated facial feature detection.
INTEGRATING HEAD POSE TO A 3D MULTITEXTURE APPROACH FOR GAZE DETECTIONijma
This document summarizes a research paper that proposes integrating head pose information with a 3D multi-texture active appearance model (MT-AAM) to improve gaze detection from webcam images. The 3D MT-AAM combines an iris texture model with an eye skin model that has a hole for the iris region. This allows the iris texture to rotate realistically under the skin for different gaze directions. The paper also proposes a multi-objective optimization that applies the 3D MT-AAM to both eyes, weighting the results based on detected head pose to determine which eye is more visible. Experimental results showed this approach outperformed a standard AAM and was comparable to state-of-the-art methods that require manual initialization.
International Journal of Engineering Research and DevelopmentIJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- S...IJERA Editor
For fatigue detection, as in driver fatigue monitoring systems, eye state analysis is one of the important and deciding steps in determining the fatigue of a driver's eyes. In this study, algorithms for face detection, eye detection, and eye state analysis have been studied and presented, and an efficient algorithm for detecting the face and eyes is proposed. First, an efficient face detection method is presented that finds the face area in human images. Then, novel algorithms for detecting the eye region and eye state are introduced; we propose a multi-view based eye state detection to determine the state of the eye. With the help of a skin color model, the algorithm detects face regions in the YCbCr color space. By applying skin segmentation, which separates the skin and non-skin pixels of the image, it detects the face regions under various lighting and noise conditions. The eye regions are then extracted within those detected face regions. The proposed algorithms are fast and robust, as no pattern matching is required.
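The YCbCr skin segmentation step can be illustrated with the commonly cited Cb/Cr threshold ranges (the exact thresholds used in the paper may differ):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 conversion from 8-bit RGB to YCbCr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of likely skin pixels using widely cited Cb/Cr thresholds."""
    ycc = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycc[..., 1], ycc[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Because the thresholds live entirely in the chrominance channels, the mask is largely insensitive to brightness, which is what makes this approach robust to lighting changes.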
An efficient system for real time fatigue detectionAlexander Decker
This document summarizes a research paper that proposes an efficient system for real-time fatigue detection. The system uses computer vision and image processing techniques to measure eye closure count, blinking rate, and yawning to detect user fatigue. Face detection is performed using the Viola-Jones algorithm. Abnormalities in eye and mouth behavior are then analyzed to determine if the user is fatigued. The system aims to detect fatigue early enough to avoid accidents in applications where user attentiveness is critical. It is designed to have low time and space complexity, be low cost, and not significantly impact normal user interactions. The proposed approach and algorithm are described, and example results of fatigue detection are provided.
Driving support systems, such as car navigation systems, are becoming common, and they support the driver in several aspects. A non-intrusive method of detecting fatigue and drowsiness based on eye-blink count and eye-directed instruction control helps the driver prevent collisions caused by drowsy driving. Eye detection and tracking under varying conditions such as illumination, background, face alignment, and facial expression make the problem complex. A neural-network-based algorithm is proposed in this paper to detect the eyes efficiently. In the proposed algorithm, the neural network is first trained to reject non-eye regions, using images with eye features and images with non-eye features; a Gabor filter and Support Vector Machines reduce the dimensionality and classify efficiently. The face is first segmented in the L*a*b color space, then the eyes are detected using HSV and the neural network approach. The algorithm was tested on nearly 100 images of different persons under different conditions, and the results are satisfactory, with a success rate of 98%. The neural network was trained with 50 non-eye images and 50 eye images at different angles using a Gabor filter. This paper is part of the research project "Development of Non-Intrusive System for Real-Time Monitoring and Prediction of Driver Fatigue and Drowsiness", sponsored by the Department of Science & Technology, Govt. of India, New Delhi, at Vignan Institute of Technology and Sciences, Vignan Hills, Hyderabad.
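The Gabor filters used above for feature extraction are Gaussian-modulated sinusoids. A minimal kernel generator (the parameter values are illustrative, not the paper's):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine carrier.
    theta is the orientation, lam the carrier wavelength, gamma the aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t =  x * np.cos(theta) + y * np.sin(theta)   # rotate into filter orientation
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * x_t / lam + psi)

# one horizontal-orientation kernel; a filter bank varies theta and lam
k = gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0)
```

A Gabor feature vector is then obtained by convolving the image with a bank of such kernels at several orientations and scales.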
Eye Gaze Tracking With a Web Camera in a Desktop Environment1crore projects
A State-of-the-art Review on Dielectric fluid in Electric Discharge Machining...IRJET Journal
This document proposes a non-invasive skin lesion analysis system for early detection of malignant melanoma using image processing in MATLAB. The system has two main parts: 1) a sunburn monitoring app to track sun exposure and 2) an automatic image analysis module. The image analysis module segments skin lesions, extracts features like shape, color and texture, and classifies lesions as benign, atypical or skin cancer with over 95% accuracy. It was tested on 200 dermoscopy images from a Portuguese hospital and achieved high classification performance. The proposed system provides an affordable, effective tool for early melanoma detection using a mobile phone platform.
Non-Invasive ABCD Monitoring of Malignant Melanoma Using Image Processing in ...IRJET Journal
This document proposes a non-invasive skin lesion analysis system for early detection of malignant melanoma using image processing in MATLAB. The system has two main parts: 1) a sunburn monitoring app to track sun exposure and 2) an automatic image analysis module. The image analysis module uses dermoscopy images from a hospital database to test segmentation, feature extraction, and classification algorithms. Hair is detected and excluded from images before segmentation. Features like shape, color and texture are extracted and classified using algorithms like k-NN achieving over 95% accuracy for benign, atypical and cancerous lesions. The system aims to provide an affordable, early screening tool for skin cancer detection on mobile devices.
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSINGijiert bestjournal
Face recognition is a computer application technique for automatically identifying or verifying a person from a digital image or a video frame source, by comparing selected facial features from the digital image against a face dataset. It is mainly used in security systems and can be compared to other biometrics such as fingerprint recognition or eye/iris recognition systems. The main limitation of current face recognition systems is that they only detect straight faces looking at the camera. Separate versions of the system could be trained for each head orientation, and the results combined using arbitration methods similar to those presented here. In earlier work, the face had to be in a centered, well-lit position, and any lighting effect would affect the system; similarly, the person's eyes had to be open and without glasses.
Real Time Eye Blinking and Yawning Detectionijtsrd
Detecting eye blinks and yawning is important, for example in systems that monitor the vigilance of a human operator, e.g. for driver drowsiness. Driver fatigue is one of the leading causes of the world's deadliest road accidents. This is especially true in the transport sector, where a driver of heavy vehicles is often subjected to hours of monotonous driving without frequent rest periods, which causes fatigue. It is therefore essential to design a road accident prevention system that can detect the driver's drowsiness, determine the driver's level of carelessness, and warn when imminent danger occurs. In this article, we propose a real-time system that uses eye detection, blinking, and yawning techniques. The system is designed as a non-intrusive real-time monitoring system; the priority is to improve driver safety without being intrusive. In this work, the driver's eye blinks and yawns are detected. If the driver's eyes remain closed for more than a certain time and the driver's mouth is open in a yawn, the driver is said to be fatigued. Ohnmar Win, "Real Time Eye Blinking and Yawning Detection", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd28004.pdf Paper URL: https://www.ijtsrd.com/engineering/electrical-engineering/28004/real-time-eye-blinking-and-yawning-detection/ohnmar-win
IRJET - Emotionalizer : Face Emotion Detection SystemIRJET Journal
This document describes a facial emotion detection system called Emotionalizer. The system uses machine learning to analyze facial expressions in images and detect emotions like happy, sad, angry, fearful and disgust. It was developed in Python using techniques like pre-processing, skin color detection, facial feature extraction and a support vector machine classifier. The goal is to build a system that can automatically recognize emotions from faces as accurately as humans. It discusses previous related work on facial recognition and detection and outlines the objectives, methodology and evaluation of the Emotionalizer system.
IRJET- Emotionalizer : Face Emotion Detection SystemIRJET Journal
This document describes a face emotion detection system called Emotionalizer. It uses machine learning and facial recognition techniques to detect emotions like happy, sad, angry, fearful and disgust based on facial expressions. The system analyzes images of faces and determines the appropriate emotion based on geometric changes in facial features. It was developed in Python using tools like OpenCV for facial detection and recognition. The goal is to build a system that can read emotions from facial expressions similarly to how humans perceive emotions.
This document describes a technique for human iris recognition for biometric identification. It involves 6 major steps: image acquisition, localization, isolation, normalization, feature extraction, and matching. The iris is localized by detecting the pupil and outer iris boundaries using techniques like Canny edge detection and Hough transforms. The iris region is then isolated using masking. It is normalized and represented as a fixed-sized block. Features are extracted using techniques like Gabor filters and Haar wavelets to generate biometric templates. Templates are matched using Hamming distance to identify individuals in applications like border control, computer login, and financial transactions. The iris has properties that make it suitable and accurate for identification compared to other biometrics.
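The final template-matching step, the fractional Hamming distance between masked binary iris codes, can be sketched as follows (the code length, noise level, and all-ones masks are illustrative):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits valid in both masks (e.g. not covered by eyelids)."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 2048).astype(bool)      # enrolled template
noisy = code.copy()
flip = rng.choice(2048, 200, replace=False)       # ~10% bit noise: same eye, new capture
noisy[flip] ^= True
stranger = rng.integers(0, 2, 2048).astype(bool)  # unrelated eye
mask = np.ones(2048, dtype=bool)

same = hamming_distance(code, noisy, mask, mask)      # small: same identity
diff = hamming_distance(code, stranger, mask, mask)   # near 0.5: different identity
```

Codes from different eyes behave like independent coin flips and land near 0.5, so a decision threshold (commonly around 0.32 in the iris literature) cleanly separates the two cases.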
A Literature Review on Iris Segmentation Techniques for Iris Recognition SystemsIOSR Journals
This document reviews various techniques for iris segmentation in iris recognition systems. It discusses 8 techniques: (1) Integrodifferential operator, (2) Hough transform, (3) Masek method, (4) Fuzzy clustering algorithm, (5) Pulling and Pushing method, (6) Eight-neighbor connection based clustering, (7) Segmentation approach based on Fourier spectral density, and (8) Circular Gabor Filter. Each technique achieves some level of segmentation accuracy but also has disadvantages like high computational time, low accuracy, or poor performance on noisy images. The document concludes that a unified framework approach provides the highest overall segmentation accuracy for robustly segmenting iris images.
This document reviews various techniques for iris segmentation in iris recognition systems. It discusses integrodifferential operator and Hough transform approaches, as well as the Masek, fuzzy clustering, and pulling and pushing methods. Each approach has advantages and disadvantages. The Masek method achieves circular iris and pupil localization but has lower accuracy and speed. Fuzzy clustering provides better segmentation for non-cooperative iris recognition but requires an extensive search. The pulling and pushing method aims to develop a more accurate and rapid iris segmentation algorithm.
Electrically small antennas: The art of miniaturizationEditor IJARCET
We are living in the technological era, where we prefer portable devices over unmovable ones. We are isolating ourselves from wires and becoming habituated to the wireless world. What makes a device portable? The physical (mechanical) dimensions of that particular device, but along with this, the electrical dimension of the device is also of great importance. Reducing the physical dimension of an antenna results in a small antenna, but not an electrically small antenna. There are different definitions of an electrically small antenna, but the most commonly used criterion is ka < 1, where k is the wave number, equal to 2π/λ, and a is the radius of the imaginary sphere circumscribing the maximum dimension of the antenna. As present-day electronic devices continue to diminish in size, technocrats have become increasingly focused on electrically small antenna (ESA) designs to reduce the size of the antenna in the overall electronic system. Researchers in many fields, including RF and microwave, biomedical technology, and national intelligence, can benefit from electrically small antennas as long as the performance of the designed ESA meets the system requirements.
This document provides a comparative study of two-way finite automata and Turing machines. Some key points:
- Two-way finite automata are similar to read-only Turing machines in that they have a finite tape that can be read in both directions, but cannot write to the tape.
- Turing machines have an infinite tape that can be read from and written to, allowing them to recognize recursively enumerable languages.
- Both models are examined in their ability to accept the regular language L = {a^n b^m | m, n > 0}.
- The time complexity of a two-way finite automaton for this language is O(n^2) due to making two passes over the input.
This document analyzes and compares the performance of the AODV and DSDV routing protocols in a vehicular ad hoc network (VANET) simulation. Simulations were conducted using NS-2, SUMO, and MOVE simulators for a grid map scenario with varying numbers of nodes. The results show that AODV performed better than DSDV in terms of throughput and packet delivery fraction, while DSDV had lower end-to-end delays. However, neither protocol was found to be fully suitable for the highly dynamic VANET environment. The document concludes that further work is needed to develop improved routing protocols optimized for VANETs.
This document discusses the digital circuit layout problem and approaches to solving it using graph partitioning techniques. It begins by introducing the digital circuit layout problem and how it has become more complex with increasing circuit sizes. It then discusses how the problem can be decomposed into subproblems using graph partitioning to assign geometric coordinates to circuit components. The document reviews several traditional approaches to solve the problem, such as the Kernighan-Lin algorithm, and discusses their limitations for larger circuit sizes. It also discusses more recent approaches using evolutionary algorithms and concludes by analyzing the contributions of various approaches.
This document summarizes various data mining techniques that have been used for intrusion detection systems. It first describes the architecture of a data mining-based IDS, including sensors to collect data, detectors to evaluate the data using detection models, a data warehouse for storage, and a model generator. It then discusses supervised and unsupervised learning approaches that have been applied, including neural networks, support vector machines, K-means clustering, and self-organizing maps. Finally, it reviews several related works applying these techniques and compares their results, finding that combinations of approaches can improve detection rates while reducing false alarms.
This document provides an overview of speech recognition systems and recent progress in the field. It discusses different types of speech recognition including isolated word, connected word, continuous speech, and spontaneous speech. Various techniques used in speech recognition are also summarized, such as simulated evolutionary computation, artificial neural networks, fuzzy logic, Kalman filters, and Hidden Markov Models. The document reviews several papers published between 2004-2012 that studied speech recognition methods including using dynamic spectral subband centroids, Kalman filters, biomimetic computing techniques, noise estimation, and modulation filtering. It concludes that Hidden Markov Models combined with MFCC features provide good recognition results for large vocabulary, speaker-independent, continuous speech recognition.
This document discusses integrating two assembly lines, Line A and Line B, based on lean line design concepts to reduce space and operators. It analyzes the current state of the lines using tools like takt time analysis and MTM/UAS studies. Improvements are identified to eliminate waste, including methods improvements, workplace rearrangement, ergonomic changes, and outsourcing. Paper kaizen is conducted and work elements are retimed. The goal is to integrate the lines to better utilize space and manpower while meeting manufacturing standards.
This document summarizes research on the exposure of microwaves from cellular networks. It describes how microwaves interact with biological systems and discusses measurement techniques and safety standards regarding microwave exposure. While some studies have alleged health hazards from microwaves, independent reviews by health organizations have found no evidence that exposure to microwaves below international safety limits causes harm. The document concludes that with precautions like limiting exposure time and using phones with lower SAR ratings, microwaves from cell phones pose minimal health risks.
This document summarizes a research paper that examines the effect of feature reduction in sentiment analysis of online reviews. It uses principle component analysis to reduce the number of features (product attributes) from a dataset of 500 camera reviews labeled as positive or negative. Two models are developed - one using the original set of 95 product attributes, and one using the reduced set. Support vector machines and naive Bayes classifiers are applied to both models and their performance is evaluated to determine if classification accuracy can be maintained while using fewer features. The results show it is possible to achieve similar accuracy levels with less features, improving computational efficiency.
This document provides a review of multispectral palm image fusion techniques. It begins with an introduction to biometrics and palm print identification. Different palm print images capture different spectral information about the palm. The document then reviews several pixel-level fusion methods for combining multispectral palm images, finding that Curvelet transform performs best at preserving discriminative patterns. It also discusses hardware for capturing multispectral palm images and the process of region of interest extraction and localization. Common fusion methods like wavelet transform and Curvelet transform are also summarized.
This document describes a vehicle theft detection system that uses radio frequency identification (RFID) technology. The system involves embedding an RFID chip in each vehicle that continuously transmits a unique identification signal. When a vehicle is stolen, the owner reports it to the police, who upload the vehicle's information to a central database. Police vehicles are equipped with RFID receivers. If a stolen vehicle passes within range of a receiver, the receiver detects the vehicle's ID signal and displays its details on a tablet. This allows police to quickly identify and recover stolen vehicles. The system aims to make it difficult for thieves to hide a vehicle's identity and allows vehicles to be tracked globally wherever the detection system is implemented.
This document discusses and compares two techniques for image denoising using wavelet transforms: Dual-Tree Complex DWT and Double-Density Dual-Tree Complex DWT. Both techniques decompose an image corrupted by noise using filter banks, apply thresholding to the wavelet coefficients, and reconstruct the image. The Double-Density Dual-Tree Complex DWT yields better denoising results than the Dual-Tree Complex DWT as it produces more directional wavelets and is less sensitive to shifts and noise variance. Experimental results on test images demonstrate that the Double-Density method achieves higher peak signal-to-noise ratios, especially at higher noise levels.
This document compares the k-means and grid density clustering algorithms. It summarizes that grid density clustering determines dense grids based on the densities of neighboring grids, and is able to handle different shaped clusters in multi-density environments. The grid density algorithm does not require distance computation and is not dependent on the number of clusters being known in advance like k-means. The document concludes that grid density clustering is better than k-means clustering as it can handle noise and outliers, find arbitrary shaped clusters, and has lower time complexity.
This document proposes a method for detecting, localizing, and extracting text from videos with complex backgrounds. It involves three main steps:
1. Text detection uses corner metric and Laplacian filtering techniques independently to detect text regions. Corner metric identifies regions with high curvature, while Laplacian filtering highlights intensity discontinuities. The results are combined through multiplication to reduce noise.
2. Text localization then determines the accurate boundaries of detected text strings.
3. Text binarization filters background pixels to extract text pixels for recognition. Thresholding techniques are used to convert localized text regions to binary images.
The method exploits different text properties to detect text using corner metric and Laplacian filtering. Combining the results improves
This document describes the design and implementation of a low power 16-bit arithmetic logic unit (ALU) using clock gating techniques. A variable block length carry skip adder is used in the arithmetic unit to reduce power consumption and improve performance. The ALU uses a clock gating circuit to selectively clock only the active arithmetic or logic unit, reducing dynamic power dissipation from unnecessary clock charging/discharging. The ALU was simulated in VHDL and synthesized for a Xilinx Spartan 3E FPGA, achieving a maximum frequency of 65.19MHz at 1.98mW power dissipation, demonstrating improved performance over a conventional ALU design.
This document describes using particle swarm optimization (PSO) and genetic algorithms (GA) to tune the parameters of a proportional-integral-derivative (PID) controller for an automatic voltage regulator (AVR) system. PSO and GA are used to minimize the objective function by adjusting the PID parameters to achieve optimal step response with minimal overshoot, settling time, and rise time. The results show that PSO provides high-quality solutions within a shorter calculation time than other stochastic methods.
This document discusses implementing trust negotiations in multisession transactions. It proposes a framework that supports voluntary and unexpected interruptions, allowing negotiating parties to complete negotiations despite temporary unavailability of resources. The Trust-x protocol addresses issues related to validity, temporary loss of data, and extended unavailability of one negotiator. It allows a peer to suspend an ongoing negotiation and resume it with another authenticated peer. Negotiation portions and intermediate states can be safely and privately passed among peers to guarantee stability for continued suspended negotiations. An ontology is also proposed to provide formal specification of concepts and relationships, which is essential in complex web service environments for sharing credential information needed to establish trust.
This document discusses and compares various nature-inspired optimization algorithms for resolving the mixed pixel problem in remote sensing imagery, including Biogeography-Based Optimization (BBO), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). It provides an overview of each algorithm, explaining key concepts like migration and mutation in BBO. The document aims to prove that BBO is the best algorithm for resolving the mixed pixel problem by comparing it to other evolutionary algorithms. It also includes figures illustrating concepts like the species model and habitat in BBO.
This document discusses principal component analysis (PCA) for face recognition. It begins with an introduction to face recognition and PCA. PCA works by calculating eigenvectors from a set of face images, which represent the principal components that account for the most variance in the image data. These eigenvectors are called "eigenfaces" and can be used to reconstruct the face images. The document then discusses how the system is implemented, including preparing a face database, normalizing the training images, calculating the eigenfaces/principal components, projecting the face images into this reduced space, and recognizing faces by calculating distances between projected test images and training images.
This document summarizes research on using wireless sensor networks to detect mobile targets. It discusses two optimization problems: 1) maximizing the exposure of the least exposed path within a sensor budget, and 2) minimizing sensor installation costs while ensuring all paths have exposure above a threshold. It proposes using tabu search heuristics to provide near-optimal solutions. The research also addresses extending the models to consider wireless connectivity, heterogeneous sensors, and intrusion detection using a game theory approach. Experimental results show the proposed mobile replica detection scheme can rapidly detect replicas with no false positives or negatives.
Are you looking for a long-lasting solution to your missing tooth?
Dental implants are the most common type of method for replacing the missing tooth. Unlike dentures or bridges, implants are surgically placed in the jawbone. In layman’s terms, a dental implant is similar to the natural root of the tooth. It offers a stable foundation for the artificial tooth giving it the look, feel, and function similar to the natural tooth.
Summer is a time for fun in the sun, but the heat and humidity can also wreak havoc on your skin. From itchy rashes to unwanted pigmentation, several skin conditions become more prevalent during these warmer months.
Travel Clinic Cardiff: Health Advice for International TravelersNX Healthcare
Travel Clinic Cardiff offers comprehensive travel health services, including vaccinations, travel advice, and preventive care for international travelers. Our expert team ensures you are well-prepared and protected for your journey, providing personalized consultations tailored to your destination. Conveniently located in Cardiff, we help you travel with confidence and peace of mind. Visit us: www.nxhealthcare.co.uk
“Psychiatry and the Humanities”: An Innovative Course at the University of Mo...Université de Montréal
“Psychiatry and the Humanities”: An Innovative Course at the University of Montreal Expanding the medical model to embrace the humanities. Link: https://www.psychiatrictimes.com/view/-psychiatry-and-the-humanities-an-innovative-course-at-the-university-of-montreal
Spontaneous Bacterial Peritonitis - Pathogenesis , Clinical Features & Manage...Jim Jacob Roy
In this presentation , SBP ( spontaneous bacterial peritonitis ) , which is a common complication in patients with cirrhosis and ascites is described in detail.
The reference for this presentation is Sleisenger and Fordtran's Gastrointestinal and Liver Disease Textbook ( 11th edition ).
The biomechanics of running involves the study of the mechanical principles underlying running movements. It includes the analysis of the running gait cycle, which consists of the stance phase (foot contact to push-off) and the swing phase (foot lift-off to next contact). Key aspects include kinematics (joint angles and movements, stride length and frequency) and kinetics (forces involved in running, including ground reaction and muscle forces). Understanding these factors helps in improving running performance, optimizing technique, and preventing injuries.
5-hydroxytryptamine or 5-HT or Serotonin is a neurotransmitter that serves a range of roles in the human body. It is sometimes referred to as the happy chemical since it promotes overall well-being and happiness.
It is mostly found in the brain, intestines, and blood platelets.
5-HT is utilised to transport messages between nerve cells, is known to be involved in smooth muscle contraction, and adds to overall well-being and pleasure, among other benefits. 5-HT regulates the body's sleep-wake cycles and internal clock by acting as a precursor to melatonin.
It is hypothesised to regulate hunger, emotions, motor, cognitive, and autonomic processes.
PGx Analysis in VarSeq: A User’s PerspectiveGolden Helix
Since our release of the PGx capabilities in VarSeq, we’ve had a few months to gather some insights from various use cases. Some users approach PGx workflows by means of array genotyping or what seems to be a growing trend of adding the star allele calling to the existing NGS pipeline for whole genome data. Luckily, both approaches are supported with the VarSeq software platform. The genotyping method being used will also dictate what the scope of the tertiary analysis will be. For example, are your PGx reports a standalone pipeline or would your lab’s goal be to handle a dual-purpose workflow and report on PGx + Diagnostic findings.
The purpose of this webcast is to:
Discuss and demonstrate the approaches with array and NGS genotyping methods for star allele calling to prep for downstream analysis.
Following genotyping, explore alternative tertiary workflow concepts in VarSeq to handle PGx reporting.
Moreover, we will include insights users will need to consider when validating their PGx workflow for all possible star alleles and options you have for automating your PGx analysis for large number of samples. Please join us for a session dedicated to the application of star allele genotyping and subsequent PGx workflows in our VarSeq software.
STUDIES IN SUPPORT OF SPECIAL POPULATIONS: GERIATRICS E7shruti jagirdar
Unit 4: MRA 103T Regulatory affairs
This guideline is directed principally toward new Molecular Entities that are
likely to have significant use in the elderly, either because the disease intended
to be treated is characteristically a disease of aging ( e.g., Alzheimer's disease) or
because the population to be treated is known to include substantial numbers of
geriatric patients (e.g., hypertension).
Tele Optometry (kunj'sppt) / Basics of tele optometry.
Volume 2-issue-6-2046-2051
ISSN: 2278-1323
International Journal of Advanced Research in Computer Engineering and Technology (IJARCET)
Volume 2, Issue 6, June 2013
www.ijarcet.org 2046
Abstract—The red-eye effect often appears in photographs taken with flash: flash light passing through the pupil is reflected off the blood vessels at the back of the eye and arrives back at the camera lens, making the pupils appear red in photographs. Many algorithms have been proposed for removing red eyes from digital photographs, and red eyes can also be removed with software available in the market, but most of these tools are manual; the work proposed here is automatic. This work proposes a red-eye removal algorithm using in-painting that is composed of three main parts: face detection, red-eye detection, and red-eye correction. Face regions are detected first to limit the search for red eyes. After that, red-eye regions are removed using in-painting, and the pupils are then painted to a proper circular shape. The proposed algorithm is tested with a large number of photographs exhibiting the red-eye effect, and its effectiveness is compared with that of conventional algorithms.

Index Terms—Red-eye effect, red-eye correction, red-eye detection.
I. INTRODUCTION
What is red eye effect?
The red-eye effect occurs because the light of the flash arrives too quickly for the pupil to close: much of the very bright light from the flash passes into the eye through the pupil, reflects off the fundus at the back of the eyeball, and exits back out through the pupil. The camera records this reflected light. The main cause of the red color is the ample amount of blood in the choroid, which nourishes the back of the eye and is located behind the retina. The blood in the retinal circulation is far less than in the choroid and plays virtually no role. The eye contains several photostable pigments that all absorb in the short-wavelength region and hence contribute somewhat to the red-eye effect. The lens cuts off deep blue and violet light below 430 nm (depending on age), and macular pigment absorbs between 400 and 500 nm, but this pigment is located exclusively in the tiny fovea. Melanin, located in the retinal pigment epithelium (RPE) and the choroid, shows a gradually increasing absorption towards the short wavelengths. But blood is the main determinant of the red color, because it is completely transparent at long wavelengths and abruptly starts absorbing at 600 nm. The amount of red light emerging from the pupil depends on the amount of melanin in the layers behind the retina, and this amount varies strongly between individuals: light-skinned people with blue eyes have relatively little melanin in the fundus and thus show a much stronger red-eye effect than dark-skinned people with brown eyes. Fig. 1 shows light reflecting from the eye.
Fig. 1 Light reflecting from eye.
How to prevent Red eye effect?
When such light is reflected from the eye, the pupil appears red in the photograph, which can make the image unusable. The red-eye effect can be prevented in a number of ways. One is bounce flash, in which the flash head is aimed at a nearby pale-colored surface such as a ceiling or wall, or at a specialist photographic reflector. This both changes the direction of the flash and ensures that only diffused flash light enters the eye. Placing the flash away from the camera's optical axis ensures that the light from the flash hits the eye at an oblique angle: the light enters the eye in a direction away from the optical axis of the camera and is refocused by the eye's lens back along the same axis, so the retina is not visible to the camera and the eyes appear natural. Pictures can also be taken without flash by increasing the ambient lighting or opening the lens aperture, or by using the red-eye reduction capability built into many modern cameras, which precedes the main flash with a series of short, low-power flashes (or a continuous piercing bright light) that triggers the pupil to contract.

Other options are to have the subject look away from the camera lens, to photograph subjects wearing contact lenses with UV filtering, or to increase the lighting in the room so that the subject's pupils are more constricted. Professional photographers prefer to use ambient light or indirect flash, as the red-eye reduction system does not always prevent red eyes, for example when people look away during the pre-flash. In addition, people do not look natural with small pupils, and direct lighting from close to the camera lens is considered to produce unflattering photographs.
Detection and Correction of Red Eye in Digital Photograph
Swati S. Deshmukh, M.E. [IT], Sipna COET, Amravati, Maharashtra
Dr. A. D. Gawande, Sipna COET, Amravati, Maharashtra
Prof. A. B. Deshmukh, Sipna COET, Amravati, Maharashtra
Here, three main steps are used to remove red eye. The first step is to detect faces in the image; face detection is the foremost and most difficult task, and it is done here using the effective Viola-Jones face detection algorithm. Once a face is detected, the eyes are located within the detected face region, and the red eye is then removed using an in-painting method.
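The three steps above can be sketched end to end as follows. This is a minimal illustration, not the paper's implementation: a stub stands in for the Viola-Jones detector, a simple red-dominance threshold stands in for red-eye detection, and a flat dark repaint stands in for in-painting.

```python
import numpy as np

def detect_face(img):
    # Stub for a Viola-Jones detector (in practice, e.g. a Haar
    # cascade); here we simply assume the whole image is the face.
    h, w, _ = img.shape
    return (0, 0, w, h)  # (x, y, width, height)

def redeye_mask(img, box, threshold=1.5):
    # Flag pixels whose red channel strongly dominates green and blue.
    x, y, w, h = box
    region = img[y:y+h, x:x+w].astype(float)
    r, g, b = region[..., 0], region[..., 1], region[..., 2]
    mask = np.zeros(img.shape[:2], dtype=bool)
    mask[y:y+h, x:x+w] = r > threshold * np.maximum(g, b) + 1
    return mask

def correct(img, mask):
    # Crude stand-in for in-painting: repaint flagged pixels a dark
    # pupil color; a real implementation would fill from neighbors.
    out = img.copy()
    out[mask] = (30, 30, 30)
    return out

# Tiny synthetic image: mostly gray, with one bright-red "pupil" pixel.
img = np.full((4, 4, 3), 120, dtype=np.uint8)
img[1, 1] = (220, 40, 40)
mask = redeye_mask(img, detect_face(img))
fixed = correct(img, mask)
```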
II. PREVIOUS WORK

A method to automatically detect and correct red-eye in digital images was presented by M. Gaubatz and R. Ulichney [1] in their work "Automatic red-eye detection and correction." In their approach, faces are first detected with a cascade of multi-scale classifiers. The red-eye pixels are then located with several refining masks computed over the facial region; the masks are created by thresholding per-pixel metrics designed to detect red-eye artifacts. Once the red-eye pixels have been found, the redness is attenuated with a tapered color desaturation. The face detector in their work returned only one false positive non-face, and the red-eye detector found no red-eye in this non-face region. In addition, the red-eye detector produced only one false positive red-eye detection. Most of the artifacts missed by the system occurred in very small faces, which are often in the background. Because the detection algorithm is flexible, performance can be improved with the addition of a metric tailored for smaller images.
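The exact per-pixel metrics used in [1] are not reproduced here; as an illustration only, a commonly used redness measure of this kind can be thresholded to obtain a binary candidate mask:

```python
import numpy as np

def redness(region):
    # Illustrative per-pixel redness metric (not the exact formula of
    # [1]): r^2 / (g^2 + b^2 + 1), large where red dominates.
    region = region.astype(float)
    r, g, b = region[..., 0], region[..., 1], region[..., 2]
    return r**2 / (g**2 + b**2 + 1.0)

def redeye_candidates(region, threshold=2.0):
    # Thresholding the metric yields a binary mask of candidate pixels.
    return redness(region) > threshold

pixels = np.array([[[230, 50, 60],      # strongly red: a candidate
                    [120, 120, 120]]],  # neutral gray: not a candidate
                  dtype=np.uint8)
mask = redeye_candidates(pixels)
```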
R. Schettini, F. Gasparini, and F. Chazli [2], in "A modular procedure for automatic redeye correction in digital photos," used an adaptive color cast algorithm to correct the color photo. This phase not only facilitates the subsequent processing but also improves the overall appearance of the output image. A multi-resolution neural network approach was used to map candidate faces, and the search space was reduced by using information about skin and face distribution. The overall performance of this method could be improved by increasing the efficiency of the face detector and by introducing more geometric constraints.
Huitao Luo, Jonathan Yen, and Dan Tretter [4], in "An Efficient Automatic Redeye Detection and Correction Algorithm," used AdaBoost to simultaneously select features and train the classifier. A new feature set was designed to address the orientation-dependency problem associated with the Haar-like features commonly used in object detection. For each detected redeye, a correction algorithm is applied to perform adaptive desaturation and darkening over the redeye region. In their work, the verification classifiers were trained in two stages: a single-eye verification stage and a pairing verification stage. AdaBoost was used to train the classifiers because of its ability to select relevant features from a large number of object features; this is partially motivated by the face detection work of Viola and Jones [5]. In comparison to that work, their contributions come in three aspects. First, in addition to grayscale features, their detection algorithm utilizes color information by exploring effective color space projection and conversion in designing object features. Second, they designed a set of non-orientation-sensitive features to address the orientation-sensitivity problem associated with the Haar-like features used in Viola and Jones' work. Third, their algorithm uses not only Haar-like rectangle features, but also features of different semantics such as object aspect ratio, percentage of skin-tone pixels, etc.
The redeye removal system proposed in [4] contains two steps: red-eye detection and red-eye correction. The detection step contains three modules: initial candidate detection, single-eye verification, and pairing verification. Among them, initial candidate detection is a fast processing module designed to find all the red oval regions that could possibly be red eyes. The single-eye verification module verifies the redeye candidates using various object features and eliminates many candidate regions corresponding to false alarms. Pairing verification further verifies the remaining redeye candidates by grouping them into pairs.
Jutta Willamowski and Gabriela Csurka [5], in "Probabilistic Automatic Red Eye Detection and Correction," take a contrasting approach: their work does not require face detection, and thus enables the correction of red eyes located on faces that are difficult to detect. A possible drawback is that it detects red-eye candidates that do not correspond to real eyes; however, their method was able to reject, or assign a low probability to, most of these locations, which avoids introducing damaging artifacts. Some existing approaches use learning-based red-eye detection methods, which rely on the availability of a representative training set. In S. Ioffe, "Red eye detection with machine learning" (ICIP) [3], and L. Zhang, Y. Sun, M. Li, and H. Zhang, "Automated Red-Eye Detection and Correction in Digital Photographs" (ICIP 2004), this training set has to be manually collected, and the eyes properly cropped and aligned. Similarly, during detection or verification, test candidate patches or test images have to be tested at varying size and orientation. In the approach of [5], the training set is instead constituted automatically through the initial candidate detection step; it only requires labeling the detected candidates as red eyes or non-red eyes. This has the advantage of "concentrating" the learning step on the differences between true and false positive patches. In contrast to H. Luo, J. Yen, and D. Tretter, "An Efficient Automatic Redeye Detection and Correction Algorithm" (ICPR 2004) [4], they introduced an additional distinction between false positives on faces and false positives on the background. The major advantage of their approach resides in combining probabilistic red-eye detection with soft red-eye correction. Most previous approaches adopt a hard yes/no decision in the end and apply either no correction or the maximal possible correction. In difficult cases, hard approaches make significant mistakes, completely missing certain red eyes or introducing disturbing artifacts on non-red-eye regions, and their correction is often unnatural, e.g. resulting in a remaining reddish ring around the central corrected part of an eye. To obtain a more natural correction, [5] introduces softness by blurring the detected red-eye region.
F. Volken, J. Terrier, and P. Vandewalle [6], in "Automatic red-eye removal based on sclera and skin tone detection," used the basic knowledge that an eye is characterized by its shape and the white color of the sclera. Combining this intuitive approach with the detection of skin around the eye, they obtain a higher success rate than most existing tools. Moreover, their algorithm works for any type of skin tone. Further work is oriented towards improving the overall quality of the correction; it would be interesting to address the problems encountered for people with glasses, and to study more natural correction methods.
R. Ulichney and M. Gaubatz, in "Perceptual-based correction of photo red-eye" [8], presented a brief overview of facial image processing techniques. Given that the original pupil color of a subject is often unrecoverable, a simple chrominance desaturation effectively removes the red hue from the artifact pixels. This is in turn combined with a fully automated procedure designed to minimize intrusive effects associated with pixel re-coloration.
III. PROPOSED WORK
Face detection
A face detection algorithm is used to limit the candidate region for red-eye detection. In this work, Viola and Jones' algorithm is used to detect faces [9]. The features employed by the detection framework universally involve sums of image pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions, which have been used previously in the realm of image-based object detection.
However, since the features used by Viola and Jones each rely on more than one rectangular area, they are generally more complex. The value of any given feature is simply the sum of the pixels within the clear rectangles subtracted from the sum of the pixels within the shaded rectangles. As is to be expected, rectangular features of this sort are rather primitive when compared to alternatives such as steerable filters: although they are sensitive to vertical and horizontal features, their feedback is considerably coarser. However, with the use of an image representation called the integral image, rectangular features can be evaluated in constant time, which gives them a considerable speed advantage over their more sophisticated relatives. Because each rectangular area in a feature is always adjacent to at least one other rectangle, any two-rectangle feature can be computed in six array references, any three-rectangle feature in eight, and any four-rectangle feature in just nine.
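The integral image and the constant-time rectangle sum can be sketched as follows (an illustrative implementation of the standard technique, not code from the paper):

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of all pixels above and to the left of (y, x),
    # inclusive; a zero row and column are prepended so lookups at the
    # rectangle's top-left edge need no special casing.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    # Sum over any rectangle from just four array references,
    # regardless of the rectangle's size.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

def two_rect_feature(ii, top, left, height, width):
    # A two-rectangle Haar-like feature: left half minus right half.
    # Because the halves share an edge, only six of the eight corner
    # lookups are distinct, matching the six-reference count above.
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

img = np.arange(16).reshape(4, 4)  # toy 4x4 "image"
ii = integral_image(img)
```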
Learning algorithm
The speed with which the features can be evaluated does not, however, adequately compensate for their number: in a standard 24x24-pixel sub-window there are a total of 45,396 possible features, and it would be prohibitively expensive to evaluate them all. Thus, the object detection framework employs a variant of the AdaBoost learning algorithm both to select the best features and to train classifiers that use them.
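The selection idea can be illustrated with one AdaBoost-style round over simple threshold ("stump") classifiers on precomputed feature values; the real framework searches tens of thousands of rectangle features and iterates with sample reweighting:

```python
def stump_error(feature_values, labels, weights, threshold):
    # Weighted error of a decision stump that predicts 1 (face) when
    # the feature value exceeds the threshold.
    err = 0.0
    for v, y, w in zip(feature_values, labels, weights):
        pred = 1 if v > threshold else 0
        if pred != y:
            err += w
    return err

def select_best_feature(features, labels, weights):
    # One AdaBoost round: pick the (feature, threshold) pair with the
    # lowest weighted error; boosting would then reweight the samples
    # and repeat to accumulate a strong classifier.
    best = None
    for idx, values in enumerate(features):
        for t in sorted(set(values)):
            err = stump_error(values, labels, weights, t)
            if best is None or err < best[0]:
                best = (err, idx, t)
    return best

# Two candidate features over four samples (labels: 1 = face).
features = [
    [5, 6, 1, 2],   # separates the classes perfectly at threshold 2
    [3, 1, 4, 2],   # uninformative
]
labels = [1, 1, 0, 0]
weights = [0.25] * 4
err, idx, thr = select_best_feature(features, labels, weights)
```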
Cascade architecture
The evaluation of the strong classifiers generated by the learning process can be done quickly, but not quickly enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on those selected samples which pass through the preceding classifiers. If at any stage in the cascade a classifier rejects the sub-window under inspection, no further processing is performed on it and the search continues with the next sub-window. The cascade therefore has the form of a degenerate tree. In the case of faces, the first classifier in the cascade (called the attentional operator) uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%. The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated.
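The early-rejection behavior of the cascade can be sketched as follows (illustrative only; the stage classifiers here are arbitrary placeholders):

```python
def cascade_detect(window, stages):
    # Evaluate the cascade: each stage is a cheap classifier; a single
    # rejection ends processing for this sub-window immediately.
    for stage in stages:
        if not stage(window):
            return False        # rejected early, later stages never run
    return True                 # survived every stage: report a face

# Placeholder stages over a dict of precomputed feature values; the
# first, cheapest stage plays the role of the attentional operator.
stages = [
    lambda w: w["f1"] > 10,          # very cheap, discards most windows
    lambda w: w["f2"] > 5,
    lambda w: w["f1"] + w["f2"] > 20,
]

hits = cascade_detect({"f1": 15, "f2": 8}, stages)
miss = cascade_detect({"f1": 3, "f2": 9}, stages)   # fails stage 1
```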
Eyes are located in a specific area of the face region, by virtue of the face detection algorithm that is used. When the face region is divided into 20 cells, in four rows and five columns, it is observed that the two eyes are mostly placed in the second row. Face detection examples are shown in Fig. 2.

Fig. 2(a) Face Detection example

Fig. 2(b) Face Detection example

Most faces are detected unless the face is occluded by other objects. If face regions are found correctly, the eyes are detected in this specific area, and red eyes are then detected using features of red-eye extracted from the eye region.
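The second-row observation translates directly into a search region computed from a detected face bounding box; the 4x5 cell layout below follows the description above, and the function itself is an illustrative sketch:

```python
def eye_search_region(face_box, rows=4, cols=5):
    # Divide the face box into a rows x cols grid of cells and return
    # the bounding box of the entire second row, where the two eyes
    # are most often found according to the 4x5-grid observation.
    x, y, w, h = face_box
    cell_h = h // rows
    row_index = 1                      # second row (0-based)
    return (x, y + row_index * cell_h, w, cell_h)

# A 100x80 face detected at (20, 10): eyes are searched for in the
# horizontal band covering the second quarter of the face's height.
region = eye_search_region((20, 10, 100, 80))
```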
IV. RED EYE REMOVAL USING INPAINTING
Eyes Extraction
According to observation, the eyes are located in a particular area of the face for every human being, so the face detected by the face detection method is used. Once the face is obtained, as can be observed in Fig. 2 above, the full face is captured by plotting a rectangle over it, and this rectangular face region can be used to extract the eye region.
ISSN: 2278 - 1323
International Journal of Advanced Research in Computer Engineering and Technology (IJARCET)
Volume 2, Issue 6, June 2013
www.ijarcet.org 2050
red-eyes to the total number of red-eyes. If the algorithm
detects a non-red-eye as red-eye, the count of false alarms
(FAs) increases. In the case of red-eye correction, it is
difficult to evaluate the performance quantitatively; thus, we
compare the result images. We compare the performance of
the proposed algorithm with those of commercial software
[17–19]. In the comparison of detection performance, the
method of [17] is excluded since it does not offer automatic
red-eye detection.
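The two measures reported in Table 1 can be sketched as follows. The set-based representation of detections and ground-truth annotations is an assumption for illustration; the definitions (detection rate as the percentage of true red-eyes found, false alarms as spurious detections) follow the text.

```python
# Sketch of the evaluation measures: detection rate (%) and false alarms.

def evaluate(detections, ground_truth):
    true_pos = len(detections & ground_truth)       # correctly found red-eyes
    false_alarms = len(detections - ground_truth)   # non-red-eyes flagged
    rate = 100.0 * true_pos / len(ground_truth)
    return rate, false_alarms

gt = {"eye1", "eye2", "eye3", "eye4"}       # annotated red-eyes
det = {"eye1", "eye2", "eye3", "nose"}      # hypothetical algorithm output
print(evaluate(det, gt))                    # -> (75.0, 1)
```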
Table 1. Comparison of red-eye detection results.

Algorithm        Detection rate (%)   False alarms
Method of [18]   62.8                 23
Method of [19]   91.2                 20
Method of [20]   92.9                 15
Proposed work    93.75                8
REFERENCES
[1] M. Gaubatz and R. Ulichney, “Automatic red-eye
detection and correction,” in Proc. IEEE Int. Conf. Image
Processing, vol. 1, pp. 804–807, Rochester, NY, Sep. 2002.
[2] R. Schettini, F. Gasparini, and F. Chazli, “A modular
procedure for automatic redeye correction in digital photos,”
in Proc. SPIE Conf. Color Imaging: Processing, Hardcopy,
and Application, vol. 5293, pp. 139–147, San Jose, CA, Jan.
2004.
[3] S. Ioffe, “Red eye detection with machine learning,” in
Proc. IEEE Int. Conf. Image Processing, vol. 2, pp. 871–
874, Barcelona, Spain, Sep. 2003.
[4] H. Luo, J. Yen, and D. Tretter, “An efficient automatic
redeye detection and correction algorithm,” in Proc. IEEE
Int. Conf. Pattern Recognition, vol. 2, pp. 883–886,
Cambridge, UK, Aug. 2004.
[5] J. Willamowski and G. Csurka, “Probabilistic automatic
red eye detection and correction,” in Proc. IEEE Int. Conf.
Pattern Recognition, vol. 3, pp. 762–765, Hong Kong,
China, Aug. 2006.
[6] F. Volken, J. Terrier, and P. Vandewalle, “Automatic
red-eye removal based on sclera and skin tone detection,” in
Proc. European Conf. Color in Graphics, Imaging and
Vision, pp. 359–364, Leeds, UK, June 2006.
[7] L. Zhang, Y. Sun, M. Li, and H. Zhang, “Automated red-
eye detection and correction in digital photographs,” in
Proc. IEEE Int. Conf. Image Processing, vol. 4, pp. 2363–
2366, Singapore, Oct. 2004.
[8] R. Ulichney and M. Gaubatz, “Perceptual-based
correction of photo red-eye,” in Proc. Int. Conf. Signal and
Image Processing, pp. 526–531, Honolulu, HI, Aug. 2005.
[9] P. Viola and M. Jones, “Rapid object detection using a
boosted cascade of simple features,” in Proc. IEEE Conf.
Computer Vision and Pattern Recognition, vol. 1, pp. 511–
518, Kauai, HI, Dec. 2001.
[10] A. Criminisi, P. Perez, and K. Toyama, “Region filling
and object removal by exemplar-based image inpainting,”
IEEE Trans. Image Processing, vol. 13, no. 9, pp. 1200–
1212, Sep. 2004.
[11] B. Li, Y. Qi, and X. Shen, “An image inpainting
method,” in Proc. IEEE Int. Conf. Computer Aided Design
and Computer Graphics, vol. 6, pp. 60–66, Hong Kong,
China, Dec. 2005.
[12] J. J. de Dios and N. Garcia, “Face detection based on a
new color space YCgCr,” in Proc. IEEE Int. Conf. Image
Processing, vol. 3, pp. 902–912, Barcelona, Spain, Sep.
2003.
[13] J. J. de Dios and N. Garcia, “Fast face segmentation in
component color space,” in Proc. IEEE Int. Conf. Image
Processing, vol. 1, pp. 191–194, Singapore, Oct. 2004.
[14] R. Jain, R. Kasturi, and B. G. Schunck, Machine
Vision, New York, McGraw-Hill, 1995.
[15] J. E. Richman, K. G. McAndrew, D. Decker, and S. C.
Mullaney, “An evaluation of pupil size standards used by
police officers for detecting drug impairment,” Optometry,
vol. 75, no. 3, pp. 175–182, Mar. 2004.
[16] D. K. Martin and B. A. Holden, “A new method for
measuring the diameter of the in vivo human cornea,” Am. J.
Optometry Physiological Optics, vol. 59, no. 5, pp. 436–
441, May 1982.
[17] Adobe Photoshop CS2, San Jose, CA: Adobe, 2005.
[18] STOIK RedEye AutoFix 3.0, Russia: STOIK Imaging,
2007.
[19] R. Ulichney, M. Gaubatz, and J. M. Van Thong, “Redbot
– a tool for improving red-eye correction,” in Proc.
IS&T/SID Eleventh Color Imaging Conference: Color
Science and Engineering Systems, Technologies,
Applications, Scottsdale, AZ, Nov. 2003.
[20] F. Gasparini and R. Schettini, “Automatic red-eye
removal for digital photography,” in Single-Sensor Imaging:
Methods and Applications For Digital Cameras, R. Lukac,
Ed. CRC, 2008.