This document discusses machine learning techniques for detecting driver drowsiness using facial analysis. Facial movements of subjects playing a driving video game were analyzed using classifiers trained on the Facial Action Coding System to detect 31 facial actions, and head motion was also tracked. Machine learning classifiers such as AdaBoost and logistic regression were able to predict sleep or crash episodes with 98% accuracy based on patterns of facial actions such as blinking, yawning, and head movements. This automated facial analysis approach also revealed new facial behaviors associated with drowsiness.
Development of a Smart Interface for Safety and Protection of Automotives (CSCJournals)
This paper is directed towards the safety and protection of human beings by synchronizing software and hardware modules. Automotive safety sensors are aimed mainly at applications in automobiles: they monitor the safety and protection of the driver and detect abnormalities. These abnormalities are highlighted and alerts are provided to the driver through the combined synchronization of hardware and software.
Driving without a license is a major cause of road accidents and the associated monetary losses. This paper presents a virtual-reality-based driving system that would enhance road safety and vehicle security. The system limits vehicle operation on the basis of two parameters: whether the user has learned to drive, and the category (car or bike) of vehicle for which the driving license is issued. The hardware and software required to improve safety and security are developed. The system is also suited for license testing without bribery, by gathering eye-gaze, electroencephalography, and peripheral physiological data.
Driver drowsiness is a main cause of vehicular accidents. Drowsy driving is a form of impaired driving that continuously affects a person's ability to drive safely; continuous driving without rest for long periods can result in drowsiness and cause accidents. In this study, a collaborative system is built which assists the user and identifies his/her state while driving, in order to improve safety by preventing accidents. Based on grayscale image processing, the position of the driver's face and his/her head movement are analysed. Driver state identification also includes the detection of alcohol consumption with the help of sensors.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Gazing Time Analysis for Drowsiness Assessment Using Eye Gaze Tracker (TELKOMNIKA Journal)
Several investigations have shown that most traffic accidents are due to drowsy driving, and many related works have been conducted to address this issue. One study captured the driver's facial expression to estimate drowsiness; in the present study, measurements of the driver's physiological condition were instead used to predict the drowsiness level. We investigated the relationship between drowsiness and physiological condition by employing an eye gaze signal, using an eye gaze tracker and the Japanese version of the Karolinska sleepiness scale (KSS-J), within a driving simulator environment. The results showed that gazing time has a statistically significant difference in relation to the drowsiness level: alert (1−5), weak drowsiness (6−7), and strong drowsiness (8−9), with P<0.001. Therefore, we suggest that eye gaze has potential for assessing drowsiness under driving conditions.
Driver Drowsiness Monitoring System Using Visual Behavior and Machine Learning (Aasim Ahmed Khan Jawaad)
Drowsy driving is one of the major causes of road accidents and deaths. Hence, detection of driver fatigue and its indication is an active research area. Most conventional methods are vehicle-based, behavioral, or physiological. Some methods are intrusive and distract the driver; others require expensive sensors and data handling. Therefore, in this study a low-cost, real-time driver drowsiness detection system with acceptable accuracy is developed. In the developed system, a webcam records video and the driver's face is detected in each frame using image processing techniques. Facial landmarks on the detected face are located, and the eye aspect ratio, mouth opening ratio, and nose length ratio are computed; depending on their values, drowsiness is detected using an adaptive thresholding scheme. Machine learning algorithms have also been applied in an offline manner. A sensitivity of 95.58% and a specificity of 100% were achieved with Support Vector Machine based classification.
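The eye aspect ratio mentioned above can be computed from six eye-contour landmarks, as in the widely used formulation EAR = (‖p2−p6‖ + ‖p3−p5‖) / (2‖p1−p4‖). A minimal sketch (the landmark coordinates here are synthetic, not taken from the paper's data):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six 2-D landmarks p1..p6
    ordered around the eye contour, as in the common 68-point
    facial-landmark annotation."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

# Open eye: tall relative to its width -> higher EAR
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
# Nearly closed eye: flat contour -> EAR close to 0
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # 1.0
print(eye_aspect_ratio(closed_eye))  # 0.1
```

An adaptive threshold between these two regimes (per-driver calibration) is what separates a blink or closure from an open-eye frame.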
Eye Gesture Analysis for Prevention of Road Accidents (ijsrd.com)
Around the globe, deaths occur daily, many due to road accidents. Research has been conducted intensively to reduce accidents and improve driver assistance systems. The core idea of this paper is a process for effectively enhancing an intelligent driver assistance system, together with a safety system that assesses the driver's viewpoint inside the vehicle. The system uses a CCD camera in the vehicle that observes the driver's face. A template-matching approach compares the driver's eye pattern with a set of stored templates of the driver gazing at various focal points inside the vehicle. The windscreen is further divided into segments, and comparing the driver's eye-gaze pattern with the stored templates determines the driver's viewpoint on the windscreen. For instance, if the driver is detected to be drowsy, with eyelids closed for more than a few seconds, he is alerted automatically.
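The template comparison described above can be sketched with zero-mean normalized cross-correlation; the zone names and random patches below are purely illustrative, not the paper's actual templates:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two
    equal-size grayscale patches; 1.0 means a perfect match."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def best_gaze_zone(eye_patch, templates):
    """Return the stored gaze template most similar to the
    current eye patch (zone names here are hypothetical)."""
    return max(templates, key=lambda name: ncc(eye_patch, templates[name]))

rng = np.random.default_rng(0)
left = rng.random((8, 8))
right = rng.random((8, 8))
templates = {"left-mirror": left, "right-mirror": right}

# A noisy observation of the 'left-mirror' template
observed = left + 0.05 * rng.random((8, 8))
print(best_gaze_zone(observed, templates))  # left-mirror
```

In practice the templates would be captured during a calibration phase with the driver gazing at each windscreen segment in turn.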
Eye Scrutinized Wheel Chair for People Affected with Tetraplegia (ijcsit)
There is a growing requirement for a wheelchair control that is useful for physically disabled persons with tetraplegia. This system involves controlling the wheelchair through the eye movement of the affected person. Statistics suggest that there are 230,000 cases of tetraplegia in India. Our aim is to develop a wheelchair that makes the lives of these people easier and instils confidence in them. A person affected by tetraplegia can still move their eyes to a certain extent, which motivates the development of our system. We propose a device in which a patient seated in the wheelchair, looking straight at a camera permanently fixed in the optics, can move along a track by gazing in that direction. When the direction of gaze changes, the camera signals are passed via a MATLAB script to a microcontroller. Depending on the path of the eye, the microcontroller steers the wheelchair in any direction and stops its movement. If an obstacle is found in front of the wheelchair, a sensor detects it, and the chair stops and moves in the correct direction immediately. The benefit of this system is that a physically disabled person with tetraplegia can easily travel anywhere in any direction.
Implementation of Face and Eye Detection on DM6437 Board Using Simulink Model (Journal BEEI)
A driver assistance system is significant for handling driver drowsiness and avoiding on-road accidents. The aim of this research work is to detect the position of the driver's eye for fatigue estimation. It is not unusual to see vehicles moving around even at night; in such circumstances there is a very high probability that a driver becomes drowsy, which may lead to fatal accidents. Providing a solution to this problem is the motivation for this research, which aims at detecting driver fatigue. The research concentrates on locating the eye region, failing which a warning signal is generated to alert the driver. In this paper, an efficient algorithm is proposed for detecting the location of the eye, which provides invaluable input for driver fatigue detection after the face detection stage. After detecting the eyes, eye tracking over the input video is performed so that the blink rate can be determined.
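Once a per-frame open/closed eye state is available, the blink rate reduces to run-length counting over the frame sequence. A minimal sketch (the minimum-duration filter is an assumption, not a detail from the paper):

```python
def blink_count(eye_open, min_closed_frames=2):
    """Count blinks in a per-frame sequence of eye-open flags.
    A blink is a run of at least `min_closed_frames` consecutive
    closed frames; shorter runs are treated as detection noise."""
    blinks, run = 0, 0
    for is_open in eye_open:
        if not is_open:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # sequence may end mid-blink
        blinks += 1
    return blinks

# 1 = open, 0 = closed; two real blinks and one single-frame glitch
frames = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1]
print(blink_count(frames))  # 2
```

Dividing the count by the window duration gives blinks per minute, which can then be compared against an alert-state baseline.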
Yawning Analysis for Driver Drowsiness Detection (eSAT Journals)
Abstract: Driver fatigue is a main cause of fatal road accidents around the world. In this paper, an efficient driver drowsiness detection system is designed using yawning detection; both eye detection and mouth detection are considered so that road accidents can be avoided. Mouth feature points are identified using the redness property. The driver's face is first detected using the YCbCr method, then face tracking is performed using the Canny edge detector. After that, eye and mouth positions are located using Haar features. Lastly, yawning detection is performed using mouth geometric features. The method is tested on images taken from videos. The proposed system should also alert the driver in case of inattention. Keywords: face detection, face tracking, eye and mouth detection, yawn detection.
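The YCbCr step above typically converts RGB pixels into luma/chroma space and thresholds the chroma channels to isolate skin-coloured regions. A minimal sketch using the BT.601 conversion; the Cb/Cr ranges are a commonly cited heuristic, not values from this paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 RGB -> YCbCr conversion for values in [0, 255]."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-coloured pixels; the Cb/Cr ranges are
    an assumed heuristic, not taken from the surveyed paper."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1])
            & (cr >= cr_range[0]) & (cr <= cr_range[1]))

pixels = np.array([[200, 150, 120],   # skin-like tone
                   [30, 120, 220]])   # blue background
print(skin_mask(pixels))
```

The same Cr channel also carries the "redness" cue the paper uses for mouth feature points, since lips have a higher Cr response than surrounding skin.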
An Eye Gaze Detection Using Low Resolution Web Camera in Desktop Environment (eSAT Journals)
Abstract: The purpose of this paper is to detect the focus of the eye on a monitor, in X and Y coordinates, with the help of a regular low-resolution web camera. Using a regular low-resolution web camera keeps development economical. To achieve this goal, the OpenCV (Open Source Computer Vision) library is used, an open-source library that makes the system very economical. The system is implemented with the Viola-Jones algorithm, which helps maximize accuracy. This system supports human-computer interaction and can help blind people control various systems. Keywords: gaze detection, Viola-Jones algorithm, Haar-like features.
Online Vigilance Analysis Combining Video and Electrooculography Features (Ruofei Du)
http://www.duruofei.com/Research/drowsydriving
In this paper, we propose a novel system to analyze vigilance level by combining both video and electrooculography (EOG) features. The video features extracted from an infrared camera include the percentage of eyelid closure (PERCLOS) and eye blinks, while slow eye movements (SEM) and rapid eye movements (REM) are extracted from the EOG signals. In addition, features such as yawn frequency, body posture, and face orientation are extracted from the video using an Active Shape Model (ASM). The results of our experiments indicate that our approach outperforms existing approaches based on video or EOG alone. Moreover, the prediction offered by our model is in close proximity to the actual error rate of the subject. We believe this method can be widely applied to prevent accidents caused by fatigued driving.
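PERCLOS, used above, is conventionally the fraction of frames in a time window during which the eyelid is at least 80% closed (the "P80" criterion). A minimal sketch over synthetic per-frame closure values:

```python
def perclos(closure, threshold=0.8):
    """PERCLOS: fraction of frames whose eyelid-closure fraction
    is at or above `threshold` (the usual P80 criterion)."""
    if not closure:
        return 0.0
    closed = sum(1 for c in closure if c >= threshold)
    return closed / len(closure)

# Per-frame eyelid closure fractions over a short window
# (0.0 = fully open, 1.0 = fully closed); synthetic values
window = [0.1, 0.2, 0.9, 0.95, 0.85, 0.1, 0.3, 0.9, 0.2, 0.1]
print(perclos(window))  # 0.4
```

In a real pipeline the closure fraction per frame would come from the detected eyelid position; a PERCLOS above a calibrated cut-off over a sliding window is a standard drowsiness flag.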
A cloud-based approach is proposed as a solution for preventing accidents. The system performs face detection and eye detection on images captured using a low-cost USB camera. The driver's head pose is then estimated using the region of interest computed by the Viola-Jones algorithm. The system also contains a heart-rate sensor for detecting biological problems of the driver and an alcohol sensor to detect whether the driver has consumed alcohol. This combined system is used to prevent drink-driving accidents, accidents due to driver inattention, and accidents due to the driver's biomedical problems.
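The decision logic fusing the three channels (camera, heart-rate sensor, alcohol sensor) might look like the sketch below; every threshold here is an illustrative placeholder, not a value from the paper:

```python
def driver_alert(eyes_closed_s, bpm, alcohol_ppm,
                 eye_limit=2.0, bpm_range=(50, 120), alcohol_limit=0.25):
    """Combine the three monitored channels into a list of alert
    reasons. All thresholds are assumed placeholder values."""
    reasons = []
    if eyes_closed_s >= eye_limit:          # camera: prolonged closure
        reasons.append("inattention")
    if not (bpm_range[0] <= bpm <= bpm_range[1]):  # heart-rate sensor
        reasons.append("abnormal heart rate")
    if alcohol_ppm >= alcohol_limit:        # alcohol sensor reading
        reasons.append("alcohol detected")
    return reasons

print(driver_alert(eyes_closed_s=3.1, bpm=75, alcohol_ppm=0.0))
# ['inattention']
```

In the cloud-based design, such a function would run per sensing cycle and the reasons list would be pushed upstream along with the raw readings for logging and alerting.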
Image Processing Based Eye Detection Methods: A Theoretical Review (Journal BEEI)
Lately, many road accidents have been attributed to driver stupor. Statistics reveal that about 32% of the drivers who met with such accidents showed symptoms of tiredness before the mishap, though at varying levels. The purpose of this paper is to revisit the various interventions devised to assist vehicle users and avert unwarranted contingencies on the roads. The paper attempts to encapsulate the body of work initiated so far in this direction. As is evident, there are numerous ways of identifying driver fatigue, namely biotic or physiological gauges, vehicle type, and, more importantly, analysis of the face in terms of its alignment and other attributes.
Automated Driver Fatigue Detection and Road Accident Prevention System: An Intelligent Approach to Solve a Fatal Problem. At least 4,284 people, including 516 women and 539 children, were killed and 9,112 others were injured in 3,472 road accidents across Bangladesh in 2017. Some of those accidents could have been avoided if proper systems had been in place at the time. This project focuses on creating a system based on EEG (electroencephalogram) and ECG (electrocardiogram) signals from the driver, which will alert the driver about drowsiness while driving.
This presentation is on the topic of driver drowsiness detection. We will discuss the techniques used to detect drowsiness and compare some of them. In the end, we conclude and provide some suggestions regarding future work. Thanks.
Detection of Saccadic Eye Movements to Switch Devices for the Disabled (ijsrd.com)
The paper presents an alternate means of communication for persons with severe disabilities. The work aims to detect fast eye movements and switch devices for a disabled person, to satisfy their basic needs with the help of their caretakers.
Driver drowsiness detection is a car safety technology which helps prevent accidents caused by the driver becoming drowsy. Various studies have suggested that around 20% of all road accidents are fatigue-related, and up to 50% on certain roads.
Drowsiness is a critical factor impairing drivers' ability to drive safely. Several approaches to this issue use human-machine interaction to detect the driver's dozing-off state and then alert them with sound or visual cues to keep them awake. These techniques fundamentally measure the driver's physical changes, such as head angle, fatigue level, and eye state, which are indicators of drowsiness, but they are limited in providing accurate and reliable results. Therefore, this project aims to achieve a higher drowsiness detection accuracy by using a highly promising technology, electroencephalography (EEG), which is widely used in medical fields. Besides providing reliable results, the final product would offer customers more convenience through portability, easy deployment, and multi-device compatibility. The project's methodology first establishes the strong correlation between the drowsy state and brainwave frequency. A proposed system and testing plan are then outlined based on the project objectives and available technologies. The final product consists simply of a hat with a small attached electronic package that records brainwaves, and a handheld device placed on the dashboard of the car running an installed app. Finally, the project management section presents in detail the human resources, scheduling, budget plan, and risk analysis showing how the project will be completed in six months.
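The brainwave-frequency correlation mentioned above is usually quantified as power in the classical EEG bands (theta power rises relative to alpha as drowsiness sets in). A minimal FFT-based band-power sketch on a synthetic trace; the signal and band edges are illustrative:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` in the [lo, hi] Hz band,
    estimated from the magnitude-squared FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic 1 s EEG trace: strong 6 Hz theta, weaker 10 Hz alpha
fs = 256
t = np.arange(fs) / fs
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

theta = band_power(eeg, fs, 4, 8)    # drowsiness-related band
alpha = band_power(eeg, fs, 8, 13)
print(theta > alpha)  # True
```

A rising theta/alpha power ratio over sliding windows is one common hand-crafted feature such a hat-mounted recorder could stream to the dashboard app.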
E YE S CRUTINIZED W HEEL C HAIR FOR P EOPLE A FFECTED W ITH T ETRAPLEGIAijcsit
Nowadays the requirement for developing a wheel cha
ir control which is useful for the physically disab
led
person with Tetraplegia. This system involves the c
ontrol of the wheel chair with the eye moment of th
e
affected person. Statistics suggest that there are
230,000 cases of Tetraplegia in India. Our system h
ere is
to develop a wheelchair which make the lives of the
se people easier and instigate confidence to live i
n
them. We know that a person who is affected by Tetr
aplegia can move their eyes alone to a certain exte
nt
which paves the idea for the development of our sys
tem. Here we have proposed the method for a device
where a patient placed on the wheel chair looking
in a straight line at the camera which is permanent
ly
fixed in the optics, is capable to move in a track
by gazing in that way. When we change the directio
n, the
camera signals are given using the mat lab script t
o the microcontroller. Depends on the path of the e
ye,
the microcontroller controls the wheel chair in all
direction and stops the movement. If there is any
obstacle to be found before the wheel chair the sen
sor mind that and it stop and move in right directi
on
immediately. The benefit of this system is too easi
ly travel anywhere in any direction which is handle
d by
physically disabled person with Tetraplegia
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Implementation of face and eye detection on DM6437 board using simulink modeljournalBEEI
Driver Assistance system is significant in drriver drowsiness to avoid on road accidents. The aim of this research work is to detect the position of driver’s eye for fatigue estimation. It is not unusual to see vehicles moving around even during the nights. In such circumstances there will be very high probability that a driver gets drowsy which may lead to fatal accidents. Providing a solution to this problem has become a motivating factor for this research, which aims at detecting driver fatigue. This research concentrates on locating the eye region failing which a warning signal is generated so as to alert the driver. In this paper, an efficient algorithm is proposed for detecting the location of an eye, which forms an invaluable insight for driver fatigue detection after the face detection stage. After detecting the eyes, eye tracking for input videos has to be achieved so that the blink rate of eyes can be determined.
Yawning analysis for driver drowsiness detectioneSAT Journals
Abstract Driver fatigue is the main reason for fatal road accidents around the world. In this paper, an efficient driver’s drowsiness detection system is designed using yawning detection.Here, we consider eye detection and mouth detection. So that road accidents can avoid successfully. Mouth features points are identified using the redness property. Firstly detecting the driver’s face using YCbCr method then face tracking will perform using canny edge detector. After that , eyes and mouth positions by using Haar features. Lastly yawning detection is perform by using mouth geometric features. This method is tested on images from videos. Also proposed system should then alert to the driver in case of inattention. Keywords: Face detection, Face tracking, Eye and Mouth detection, Yawn detection
An eye gaze detection using low resolution web camera in desktop environmenteSAT Journals
Abstract Purpose of this paper is to detect the focus of the eye on monitor in X and Y coordinates with the help of regular low resolution web camera. We are using regular low resolution web camera to get the results of system which benefits in economical way for development. To achieve the systems goal we are using OpenCV (Open Source Computer Vision) library which is open source library which makes system very economical. System is implemented with the help of Viola Jones algorithm which help to maximize the accuracy. This system helps in human-computer interaction. This system helps the blind peoples to control the various systems. Keywords: Gaze Detection, Voila Jones Algorithm, Haar-Like Features.
Online Vigilance Analysis Combining Video and Electrooculography FeaturesRuofei Du
http://www.duruofei.com/Research/drowsydriving
In this paper, we propose a novel system to analyze vigilance level combining both video and Electrooculography (EOG) features. For one thing,
the video features extracted from an infrared camera include percentage of closure (PERCLOS) and eye blinks, slow eye movement (SEM), rapid eye movement (REM) are also extracted from EOG signals. For another, other features like yawn frequency, body posture and face orientation are extracted from the video by using Active Shape Model (ASM). The results of our experiments indicate that our approach outperforms the existing approaches based on either video or EOG merely. In addition, the prediction offered by our model is in close proximity to the actual error rate of the subject. We firmly believe that this method can be widely applied to prevent accidents like fatigued driving in the future.
A cloud based approach is proposed as a solution
for preventing accidents. The system provides face detection and
eye detection from the image captured using a low cost USB
camera. Then driver’s head pose is estimated using the region of
interest computed by Viola-Jones algorithm. The system also
contains a heart rate sensor for detecting the biological problems
of the driver and an alcohol sensor to detect whether the driver
has consumed alcohol or not. This combined system is used to
prevent drink and drive accident, accident due to inattention of
driver and accident due to driver’s biomedical problems.
Image processing based eye detection methods a theoretical reviewjournalBEEI
Lately, many of the road accidents have been attributed to the driver stupor. Statistics revealed that about 32% of the drivers who met with such accidents demonstrated the symptoms of tiredness before the mishap though at varying levels. The purpose of this research paper is to revisit the various interventions that have been devised to provide for assistance to the vehicle users to avert unwarranted contingencies on the roads. The paper tries to make a sincere attempt to encapsulate the body of work that has been initiated so far in this direction. As is evident, there are numerous ways in which one can identify the fatigue of the driver, namely biotic or physiological gauges, vehicle type and more importantly the analysis of the face in terms of its alignment and other attributes.
Automated Driver Fatigue Detection and Road Accident Prevention System: An Intelligent Approach to Solve a Fatal Problem. At least 4,284 people, including 516 women and 539 children, were killed and 9,112 others were injured in 3,472 road accidents across Bangladesh in 2017. Some of those accidents could have been avoided if proper systems were implemented at the time. This project focuses on creating a system based on EEG (Electroencephalogram) and ECG (electrocardiogram) signal from driver which will alert a driver about drowsiness while driving.
This Presentation is on the topic of Driver drowsiness Detection .
In this presentation We will discuss the Techniques used to detect drowsiness and compare some techniques
In the end we conclude and provide some suggestions regarding future work.
Thanks
Detection Of Saccadic Eye Movements to Switch the Devices For Disablesijsrd.com
The paper presents the alternate means of communication to the person with severely disable. Here, the work aimed to detection of fast eye movement and switching the devices for disable person for satisfaction of basic their needs with the help of their care takers.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Driver drowsiness detection is a car safety technology which helps prevent accidents caused by the driver getting drowsy. Various studies have suggested that around 20% of all road accidents are fatigue-related, up to 50% on certain roads
Drowsiness is a critical factor impairing drivers' ability to drive safely. There are several human-machine-interaction approaches to this issue that detect a driver's dozing-off state and then alert them to stay awake with sound or visual signals. These techniques fundamentally measure the driver's physical changes, such as head angle, fatigue level, and eye state, which are indicators of a drowsy state; however, they are limited in providing accurate and reliable results. Therefore, the project aims to achieve a higher drowsiness detection accuracy rate by using electroencephalography (EEG), a promising technology widely used in medical areas. Besides providing reliable results, the final product would bring more convenience for customers through portability, easy deployment, and multi-device compatibility. The project's methodology first shows the strong correlation between the drowsy state and brainwave frequency. A proposed system and testing plan are then presented based on the project objectives and available technologies. The final product simply comprises a hat with a small attached electronic package used to record brainwaves, and a handheld device with an installed app placed on the car's dashboard. Finally, the project management section presents in detail the human resources, scheduling, budget plan, and risk analysis, showing how the project will be completed in six months.
Vision based system for monitoring the loss of attention in automotive driver (Vinay Diddi)
This real-time driver drowsiness detection system alerts the driver when he is drowsy. It consists of a Raspberry Pi and the OpenCV image processing library.
Advanced driver assistance systems (ADAS) are designed to increase car safety and, more generally, road safety. ADAS helps the driver in the driving process and enables safe, relaxed driving. It makes sense to get your new car with driver-assist features if you find it at a reasonable price, as they help you drive easily and safely in everyday use.
This project represents a way of developing an interface to detect driver drowsiness based on continuous eye monitoring and DIP (digital image processing) algorithms. Microsleeps, short periods of sleep lasting 2 to 3 seconds, are a good indicator of fatigue. Thus, by continuously monitoring the driver's eyes with a camera, the sleepy state of the driver can be detected and a timely warning issued.
The aim of the project is to develop advanced hardware for driver safety on the roads using a controller and image processing. The product detects driver drowsiness and gives a warning in the form of an alarm, as well as decreasing the speed of the vehicle. Alongside the drowsiness detection process, distance is continuously monitored by an ultrasonic sensor, which detects obstacles and accordingly warns the driver as well as decreasing the vehicle's speed.
Driver Alertness On Android With Face And Eye Ball Movements (IJRES Journal)
Drowsiness is a big problem in driving, especially during long, continuous drives, and is a main cause of accidents. Most accidents result from drivers ignoring the road and focusing on other things that divert their concentration. This project finds sleepy and inattentive drivers by monitoring them periodically. The main objective is to build the entire system into a smartphone running the Android operating system and make it user-friendly for the driver. Eye movement is the main measure of fatigue: the smartphone camera captures the driver's image, and dynamic decision making is used to determine the driver's fatigue level. When the driver reaches a threshold fatigue level, an alert is triggered to wake the driver and avoid an accident. If the driver ignores the alert and continues drowsy driving, the alert system takes further steps to stop the vehicle; it may find the nearest coffee shop for the driver to take a break, and the map offers other refreshment options as well. The GPS and navigation services of Android phones assist the driver in overcoming drowsiness.
Towards a system for real-time prevention of drowsiness-related accidents (IAESIJAI)
Traffic accidents always result in great human and material losses. One of the main causes of accidents is the human factor, which usually results from driver fatigue or drowsiness. To address this issue, several methods for predicting the driver's state and behavior have been proposed. Some approaches are based on measurements of the driver's behavior, such as head movement, blinking time, and mouth expression, while others are based on physiological measurements that provide information about the internal state of the driver. Several works have used machine learning / deep learning to train models for driver behavior prediction. In this paper, we propose a new deep learning architecture based on residual and feature pyramid networks (FPN) for driver drowsiness detection. The trained model is integrated into a system that aims to prevent drowsiness-related accidents in real time: it can detect drivers' drowsiness in real time and alert the driver in case of danger. Experimental results on benchmark datasets show that our proposed architecture achieves high detection accuracy compared to baseline approaches.
Drive Safe: An Intelligent System for Monitoring Stress and Pain from Drivers... (IJLT EMAS)
Stress and abnormal pain experienced by drivers during driving is one of the major causes of road accidents. Most existing systems focus on drivers being drowsy and on monitoring fatigue. In this paper, an effective intelligent system for monitoring drivers' stress and pain from facial expressions is proposed. A novel method of detecting stress as well as pain from facial expressions is proposed by combining the CK dataset and a pain dataset. Initially, AAM (Active Appearance Model) features are tracked from the face; using these features, the Euclidean distances between the normal face and the emotional face are calculated and normalized. From the normalized values, the facial expression is detected via trained models. The experimental results show that the developed system works very well on simulated data. The proposed system will be implemented on a mobile platform soon and is proposed for Android-based automobiles.
Effective driver distraction warning system incorporating fast image recognit... (IJECEIAES)
Modern cars are equipped with advanced automatic technology featuring various safety measures for car occupants. However, the growing density of vehicles, especially in areas where infrastructure development lags, poses potential dangers, particularly accidents caused by driver subjectivity. These incidents may occur due to driver distraction or the presence of high-risk obstacles on the road. This article presents a comprehensive solution to assist drivers in mitigating these risks. Firstly, the study introduces a novel method to enhance the recognition of a driver's facial features by analyzing benchmarks and the whites of the eyes to assess the distraction level. Secondly, a domain division method is proposed to identify obstacles and lanes in front of the vehicle, enabling the assessment of the danger level. This information is promptly relayed to the driver and relevant individuals, such as the driver's manager or supervisor. An experimental device has also been developed to evaluate the effectiveness of the algorithms, solutions, and processing capabilities of the system.
Real Time Detection System of Driver Fatigue (IJCERT)
The leading cause of vehicle crashes and accidents is driver distraction. With the rapid development of motorization, driver fatigue has become a very serious traffic problem. Reasons for traffic accidents include driving after alcohol consumption, driving at night, driving without taking rest, aging, sleepiness, and fatigue from continuous driving, long working hours, and night shifts. The aim of this project is to reduce the rate of accidents due to the above reasons. This paper presents a method for detecting early signs of fatigue using feature extraction and a Haar classifier, and for delivering information on the driver's whereabouts to emergency contact numbers.
Intelligent fatigue detection and automatic vehicle control system (ijcsit)
This paper describes a method for detecting early signs of fatigue in train drivers. As soon as the train driver shows symptoms of fatigue, a message is immediately transferred to the control room indicating the driver's status. Heart-rate sensors are also added to the system for correct detection of the driver's status in case the driver succumbs to fatigue due to severe medical problems. Fatigue is detected by image processing, comparing frames in the video and using human facial features to estimate fatigue indirectly. The technique also distinguishes the driver's modes while driving the train: awake, drowsy, and asleep. The system is very efficient at detecting fatigue and controlling the train; the train can also be stopped if it crosses a signal that could cause it to collide with another train.
Hybrid Head Tracking for Wheelchair Control Using Haar Cascade Classifier an... (TELKOMNIKA JOURNAL)
Disability may limit someone's ability to move freely, especially when the severity of the disability is high. To help disabled people control their wheelchairs, head movement-based control is preferred due to its reliability. This paper proposes a head-direction detector framework which can be applied to wheelchair control. First, the face and nose are detected in a video frame using a Haar cascade classifier. Then, the detected bounding boxes are used to initialize a Kernelized Correlation Filters tracker. The direction of the head is determined by the relative position of the nose to the face, extracted from the tracker's bounding boxes. Results show that the method effectively detects head direction, indicated by 82% accuracy and a very low detection or tracking failure rate.
REAL TIME DROWSY DRIVER DETECTION USING HAAR CASCADE SAMPLES (cscpconf)
With the growth in population, the occurrence of automobile accidents has also increased. A detailed analysis shows that around half a million accidents occur per year in India alone, and around 60% of these accidents are caused by driver fatigue. Driver fatigue affects driving ability in three areas: a) it impairs coordination, b) it causes longer reaction times, and c) it impairs judgment. In this paper, we provide a real-time monitoring system using image processing and face/eye detection techniques. Further, to ensure real-time computation, Haar cascade samples are used to differentiate between an eye blink and drowsy/fatigue detection.
Driver drowsiness is a grave issue resulting in many road accidents each year. It is not currently possible to evaluate the exact number of sleep-related accidents because of the difficulties in detecting whether fatigue was a factor and in assessing its level. In this paper, the camera is placed beside the rear-view mirror of the car in such a way that the driver's frontal face is in clear view. This camera continuously captures video of the driver's frontal face while driving. The system detects the frontal face in the image and then the eyes, and generates an alert depending on the conditions. The focus is on a system that accurately monitors the open or closed state of the driver's eyes in real time. By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid accidents.
drowsiness or fatigue [2]. Thus incorporating automatic driver fatigue detection
mechanisms into vehicles may help prevent many accidents.
One can use a number of different techniques for analyzing driver exhaustion.
One set of techniques places sensors on standard vehicle components, e.g., steer-
ing wheel, gas pedal, and analyzes the signals sent by these sensors to detect
drowsiness [3]. Such techniques may need to be adapted to the driver, since there
are noticeable differences among drivers in the way they use the gas pedal [4].
A second set of techniques focuses on measurement of physiological signals
such as heart rate, pulse rate, and Electroencephalography (EEG) [5]. It has been
reported by researchers that as the alertness level decreases power of the alpha and
theta bands in the EEG signal increases [6], providing indicators of drowsiness.
However this method has drawbacks in terms of practicality since it requires a
person to wear an EEG cap while driving.
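The alpha/theta band-power cue described above is straightforward to compute from an EEG trace. A minimal numpy sketch on synthetic data; the 10 Hz "alpha" component, sampling rate, and amplitudes are illustrative choices, not values from any cited study:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of signal x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic 10-second EEG-like traces sampled at 256 Hz
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
alert = rng.normal(size=t.size)                    # broadband noise only
drowsy = alert + 3.0 * np.sin(2 * np.pi * 10 * t)  # added 10 Hz alpha rhythm

alpha_alert = band_power(alert, fs, 8, 12)
alpha_drowsy = band_power(drowsy, fs, 8, 12)
print(alpha_drowsy > alpha_alert)  # True: alpha power rises with the added rhythm
```

A deployed system would use a windowed PSD estimate (e.g. Welch's method) rather than a raw periodogram, but the band-power comparison is the same.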
A third set of solutions focuses on computer vision systems that can detect and
recognize the facial motion and appearance changes occurring during drowsiness
[7] [8]. The advantage of computer vision techniques is that they are non-invasive,
and thus are more amenable to use by the general public. There are some significant previous studies on drowsiness detection using computer vision techniques. Most of the published research on computer vision approaches to fatigue detection has focused on the analysis of blinks and head movements. However, the effect of drowsiness on other facial expressions has not been studied thoroughly.
Recently Gu & Ji presented one of the first fatigue studies that incorporates certain
facial expressions other than blinks [9]. Their study feeds action unit information
as an input to a dynamic Bayesian network. The network was trained on subjects
posing a state of fatigue. The video segments were classified into three stages: in-
attention, yawn, or falling asleep. For predicting falling-asleep, head nods, blinks,
nose wrinkles and eyelid tighteners were used.
Previous approaches to drowsiness detection primarily make pre-assumptions
about the relevant behavior, focusing on blink rate, eye closure, and yawning.
Here we employ machine learning methods to datamine actual human behavior
during drowsiness episodes. The objective of this study is to discover what facial
configurations are predictors of fatigue. In other words our aim is to discover the
significant associations between a single or a combination of facial expressions
and fatigue. In this study, facial motion was analyzed automatically from video us-
ing a fully automated facial expression analysis system based on the Facial Action
Coding System (FACS) [10]. In addition to the output of the automatic FACS re-
cognition system we also collected head motion data using an accelerometer
placed on the subject’s head, as well as steering wheel data. Adaboost- and Multinomial Logistic Regression-based classifiers were employed to discover the most significant facial expressions and to detect fatigue.
2. METHODS
2.1 Driving Task
Subjects played a driving video game on a Windows machine using a steering wheel and an open source multi-platform video game (See Figure 1). The Windows version of the video game was maintained such that at random times, a wind
effect was applied that dragged the car to the right or left, forcing the subject to
correct the position of the car. This type of manipulation had been found in the
past to increase fatigue [11]. Driving speed was held constant. Four subjects per-
formed the driving task over a three hour period beginning at midnight. During
this time subjects fell asleep multiple times thus crashing their vehicles. Episodes
in which the car left the road (crash) were recorded. Video of the subjects face
was recorded using a Digital Video camera for the entire 3 hour session.
Fig.1 Driving Simulation Task.
2 Thrustmaster® Ferrari Racing Wheel
3 The Open Racing Car Simulator (TORCS)
Figure 2 shows an example of a subject falling asleep during this task. First we
see the eyes closing and drifting off the center line, followed by an overcorrection.
Then the eyes close again, there is more drift, followed by a crash. We investig-
ated facial behaviors that predicted these episodes of falling asleep and crashing.
The 60 seconds preceding each crash were taken as drowsy episodes. Given a video segment
the machine learning task is to predict whether the segment is coming from an
alert or drowsy episode.
Fig.2 Driving Signals and Eye Opening Signal ( Eye Opening Signal is obtained by invert-
ing the Eye Closure Signal (Action Unit 45). See section 2.3 for a list of facial action coding
signals and a description of the Action Units.)
2.2 Head Movement Measures
Head movement was measured using an accelerometer that has 3 degrees of
freedom. This three-dimensional accelerometer has three one-dimensional accelerometers mounted at right angles, measuring accelerations in the range of -5g to +5g, where g represents earth's gravitational acceleration.
4 Vernier®
2.3 Facial Action Classifiers
The facial action coding system (FACS) [12] is arguably the most widely used
method for coding facial expressions in the behavioral sciences. The system de-
scribes facial expressions in terms of 46 component movements, which roughly
correspond to the individual facial muscle movements. An example is shown in
Figure 3. FACS provides an objective and comprehensive way to analyze expres-
sions into elementary components, analogous to decomposition of speech into
phonemes. Because it is comprehensive, FACS has proven useful for discovering
facial movements that are indicative of cognitive and affective states. In this
chapter we investigate whether there are Action units (AUs) such as chin raises
(AU17), nasolabial furrow deepeners (AU11), outer (AU2) and inner brow raises
(AU1) that are predictive of the levels of drowsiness observed prior to the subjects
falling asleep.
Fig. 3 Example facial action decomposition from the Facial Action Coding System [12].
In previous work we presented a system, named CERT, for fully automated detec-
tion of facial actions from the facial action coding system [10]. The workflow of
the system is summarized in Figure 4. We previously reported detection of 20 facial action units, with a mean of 93% correct detection under controlled posed
conditions, and 75% correct for less controlled spontaneous expressions with head
movements and speech.
For this project we used an improved version of CERT which was retrained on a
larger dataset of spontaneous as well as posed examples. In addition, the system
was trained to detect an additional 11 facial actions for a total of 31 (See Table 1).
The facial action set includes blink (action unit 45), as well as facial actions in-
volved in yawning (action units 26 and 27). The selection of this set of 31 out of
46 total facial actions was based on the availability of labeled training data.
Fig. 4 Overview of fully automated facial action coding system.
The facial action detection system was designed as follows: First faces and eyes
are detected in real time using a system that employs boosting techniques in a gen-
erative framework [13]. The automatically detected faces are aligned based on the
detected eye positions, cropped and scaled to a size of 96 × 96 pixels and then
passed through a bank of Gabor filters. The system employs 72 Gabor filters span-
ning 9 spatial scales and 8 orientations. The outputs of these filters are normalized
and then passed to a standard classifier.
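The Gabor stage can be sketched with plain numpy. This assumes nothing about CERT's internals beyond what the text states (a 9-scale × 8-orientation bank of 72 filters applied to a 96 × 96 face patch); the kernel size, the geometric scale spacing, and the single-magnitude-per-filter summary are illustrative simplifications, since a real system would keep the full filtered images:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a sinusoid at angle theta under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(patch, scales=9, orientations=8):
    """Filter a face patch with a scales x orientations Gabor bank.

    Returns one response magnitude per filter (72 values for the 9x8 bank
    described in the text), normalized to zero mean and unit variance.
    """
    feats = []
    for s in range(scales):
        wavelength = 4 * 1.3**s          # assumed geometric spacing of scales
        for o in range(orientations):
            theta = o * np.pi / orientations
            k = gabor_kernel(15, wavelength, theta, sigma=wavelength / 2)
            # Correlate the kernel with the patch center
            # (a cheap stand-in for full 2-D convolution)
            h, w = k.shape
            ph, pw = patch.shape
            top, left = (ph - h) // 2, (pw - w) // 2
            window = patch[top:top + h, left:left + w]
            feats.append(np.abs((window * k).sum()))
    feats = np.asarray(feats)
    return (feats - feats.mean()) / (feats.std() + 1e-8)

patch = np.random.default_rng(1).random((96, 96))  # stands in for a cropped face
v = gabor_features(patch)
print(v.shape)  # (72,)
```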
Table 1. Full set of action units used for predicting drowsiness
AU Name
1 Inner Brow Raise
2 Outer Brow Raise
4 Brow Lowerer
5 Upper Lid Raise
6 Cheek Raise
7 Lids Tight
8 Lip Toward
9 Nose Wrinkle
10 Upper Lip Raiser
11 Nasolabial Furrow Deepener
12 Lip Corner Puller
13 Sharp Lip Puller
14 Dimpler
15 Lip Corner Depressor
16 Lower Lip Depress
17 Chin Raise
18 Lip Pucker
19 Tongue Show
20 Lip Stretch
22 Lip Funneller
23 Lip Tightener
24 Lip Presser
25 Lips Part
26 Jaw Drop
27 Mouth Stretch
28 Lips Suck
30 Jaw Sideways
32 Bite
38 Nostril Dilate
39 Nostril Compress
45 Blink
For this study we employed support vector machines. One SVM was trained for
each of the 31 facial actions, and it was trained to detect the facial action regard-
less of whether it occurred alone or in combination with other facial actions. The
system output consists of a continuous value which is the distance to the separat-
ing hyperplane for each test frame of video. The system operates at about 6 frames
per second on a dual-processor Mac G5 with a 2.5 GHz clock speed.
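For a linear detector, the per-frame output described here, the signed distance to the SVM's separating hyperplane, is simply (w·x + b)/||w||. A toy sketch with made-up weights (the 2-D "AU detector" below is hypothetical):

```python
import numpy as np

def hyperplane_margin(w, b, x):
    """Signed distance from sample x to the hyperplane w.x + b = 0."""
    w = np.asarray(w, dtype=float)
    return float((np.dot(w, x) + b) / np.linalg.norm(w))

# Hypothetical 2-D linear AU detector: w = (3, 4), b = -5
print(hyperplane_margin([3.0, 4.0], -5.0, [3.0, 4.0]))  # 4.0
```

Positive margins indicate the action is present; the magnitude serves as a continuous intensity-like signal for the downstream drowsiness classifiers.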
Facial expression training data
The training data for the facial action classifiers
came from two posed datasets and one dataset of spontaneous expressions. The fa-
cial expressions in each dataset were FACS coded by certified FACS coders. The
first posed dataset was the Cohn-Kanade DFAT-504 dataset [14]. This dataset
consists of 100 university students who were instructed by an experimenter to per-
form a series of 23 facial displays, including expressions of seven basic emotions.
The second posed dataset consisted of directed facial actions from 24 subjects col-
lected by Ekman and Hager [12]. Subjects were instructed by a FACS expert on
the display of individual facial actions and action combinations, and they practiced
with a mirror. The resulting video was verified for AU content by two certified
FACS coders. The spontaneous expression dataset consisted of a set of 33 subjects
collected by Mark Frank at Rutgers University. These subjects underwent an inter-
view about political opinions on which they felt strongly. Two minutes of each
subject were FACS coded. The total training set consisted of 6000 examples, 2000
from posed databases and 4000 from the spontaneous set.
3. Results
Subject data was partitioned into drowsy (non-alert) and alert states as follows.
The one minute preceding a sleep episode or a crash was identified as a non-alert
state for each of the four subjects. There was an inter-subject mean of 24 non-alert episodes, with a minimum of 9 and a maximum of 35. Fourteen alert segments for
each subject were collected from the first 20 minutes of the driving task. Our ini-
tial analysis focused on drowsiness prediction within-subjects.
3.1 Facial Action Signals
The output of the facial action detector consisted of a continuous value for each
frame which was the distance to the separating hyperplane, i.e., the margin. Histo-
grams for two of the action units in alert and non-alert states are shown in Figure
5. The area under the ROC (A’) was computed for the outputs of each facial ac-
tion detector to see to what degree the alert and non-alert output distributions were
separated. The A’ measure is derived from signal detection theory and character-
izes the discriminative capacity of the signal, independent of decision threshold.
A’ can be interpreted as equivalent to the theoretical maximum percent correct
achievable with the information provided by the system when using a 2-Alternative Forced Choice testing paradigm. Table 2 shows the actions with the highest A’
for each subject. As expected, the blink/eye closure measure was overall the most
discriminative for most subjects. However note that for Subject 2, the outer brow
raise (Action Unit 2) was the most discriminative.
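Under the 2AFC reading given above, A' is the probability that a randomly drawn non-alert output exceeds a randomly drawn alert output, which can be computed directly from the two output distributions. A small sketch with made-up detector margins (not the study's data):

```python
import numpy as np

def a_prime(drowsy_outputs, alert_outputs):
    """Area under the ROC: probability that a random drowsy-frame output
    exceeds a random alert-frame output (ties count half)."""
    d = np.asarray(drowsy_outputs, dtype=float)[:, None]
    a = np.asarray(alert_outputs, dtype=float)[None, :]
    return np.mean(d > a) + 0.5 * np.mean(d == a)

# Toy example: blink-detector margins that are higher in drowsy frames
drowsy_margins = [1.2, 0.8, 1.5, 0.9]
alert_margins = [0.1, -0.3, 0.4, 0.2]
print(a_prime(drowsy_margins, alert_margins))  # 1.0: perfectly separated
```

Because A' compares every drowsy/alert pair, it is threshold-free, which is exactly why it suits the continuous hyperplane-distance outputs.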
3.2 Drowsiness Prediction
The facial action outputs were passed to a classifier for predicting drowsiness
based on the automatically detected facial behavior. Two learning-based classifiers, Adaboost and multinomial logistic regression (MLR), are compared. Within-subject pre-
diction of drowsiness and across-subject (subject independent) prediction of
drowsiness were both tested.
Fig. 5 Histograms for blink and Action Unit 2 in alert and non-alert states. A’ is area under the ROC.
Within subject drowsiness prediction.
For the within-subject prediction, 80% of the alert and non-alert episodes were
used for training and the other 20% were reserved for testing. This resulted in a
mean of 19 non-alert and 11 alert episodes for training, and 5 non-alert and 3 alert
episodes for testing per subject.
The weak learners for the Adaboost classifier consisted of each of the 31 Facial
Action detectors. The classifier was trained to predict alert or non-alert from each
frame of video. There was a mean of 54000 training samples, (19+11)×60×30, and
14400 testing samples, (5 + 3) × 60 × 30, for each subject. On each training itera-
tion, Adaboost selected the facial action detector that minimized prediction error
given the previously selected detectors. Adaboost obtained 92% correct accuracy
for predicting driver drowsiness based on the facial behavior.
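A minimal sketch of this boosting scheme, with each weak learner being one AU-detector output thresholded at zero (its SVM decision boundary). This is discrete AdaBoost on fully synthetic per-frame margins, not the authors' implementation; the informative "AU45-like" column is a made-up stand-in:

```python
import numpy as np

def adaboost_train(X, y, rounds=10):
    """Discrete AdaBoost: each weak learner is one column of X thresholded
    at 0. Returns a list of (feature index, alpha) pairs."""
    n, d = X.shape
    w = np.full(n, 1 / n)
    model = []
    for _ in range(rounds):
        preds = np.sign(X)            # each column: one AU detector's vote
        errs = np.array([(w * (preds[:, j] != y)).sum() for j in range(d)])
        j = int(errs.argmin())        # detector minimizing weighted error
        err = min(max(errs[j], 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * preds[:, j])  # up-weight mistakes
        w /= w.sum()
        model.append((j, alpha))
    return model

def adaboost_predict(model, X):
    score = sum(alpha * np.sign(X[:, j]) for j, alpha in model)
    return np.sign(score)

# Toy data: feature 0 (think of an AU45 blink margin) carries the label signal
rng = np.random.default_rng(2)
y = np.where(rng.random(200) < 0.5, 1.0, -1.0)
X = rng.normal(size=(200, 5))
X[:, 0] = y * 2 + rng.normal(scale=0.5, size=200)  # informative column
model = adaboost_train(X, y)
acc = (adaboost_predict(model, X) == y).mean()
print(acc > 0.9)  # the boosted model separates the toy data well
```

The order in which features are selected (here, column 0 first) plays the same role as the AU ranking the authors extract from their trained ensemble.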
Classification with Adaboost was compared to that using multinomial logistic re-
gression (MLR). Performance with MLR was similar, obtaining 94% correct pre-
diction of drowsy states. The facial actions that were most highly weighted by
MLR also tended to be the facial actions selected by Adaboost. 85% of the top ten
facial actions as weighted by MLR were among the first 10 facial actions to be se-
lected by Adaboost.
Across subject drowsiness prediction.
The ability to predict drowsiness in novel subjects was tested by using a leave-
one-out cross validation procedure. The data for each subject was first normalized
to zero-mean and unit standard deviation before training the classifier. MLR was
trained to predict drowsiness from the AU outputs several ways.
Table 2. The top 5 most discriminant action units for discriminating alert from non-alert states
for each of the four subjects. A’ is area under the ROC curve.
AU Name A’
Subj1 45 Blink .94
17 Chin raise .85
30 Jaw sideways .84
7 Lid tighten .81
39 Nostril compress .79
Subj2 2 Outer brow raise .91
45 Blink .80
17 Chin Raise .76
15 Lip corner depress .76
11 Nasolabial furrow .76
Subj3 45 Blink .86
9 Nose wrinkle .78
25 Lips part .78
1 Inner brow raise .74
20 Lip stretch .73
Subj4 45 Blink .90
4 Brow lower .81
15 Lip corner depress .81
7 Lid tighten .80
39 Nostril compress .74
Table 3 Performance for drowsiness prediction, within subjects. Means and standard deviations
are shown across subjects.
Classifier Percent Correct Hit Rate False Alarm Rate
Adaboost .92 ±.03 .92 ±.01 .06±.1
MLR .94 ±.02 .98 ±.02 .13 ±.02
Performance was evaluated in terms of area under the ROC. As a single frame
may not provide sufficient information, for all of the novel subject analysis, the
MLR output for each feature was summed over a temporal window of 12 seconds
(360 frames) before computing A’. MLR trained on all features obtained an A’
of .90 for predicting drowsiness in novel subjects.
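The 12-second temporal pooling step is a sliding-window sum over per-frame outputs (360 frames at 30 fps). A short sketch using a cumulative-sum trick; the constant toy outputs are illustrative only:

```python
import numpy as np

def windowed_sum(frame_outputs, window=360):
    """Sum per-frame classifier outputs over every sliding window of
    `window` frames (360 frames = 12 s at 30 fps, as used in the text)."""
    c = np.cumsum(np.insert(np.asarray(frame_outputs, dtype=float), 0, 0.0))
    return c[window:] - c[:-window]

outputs = np.ones(720)          # toy per-frame MLR outputs (24 s of video)
w = windowed_sum(outputs)
print(w.shape[0], w[0])         # 361 windows, each summing to 360.0
```

The pooled scores, rather than the raw frame scores, are what A' is computed over, which smooths out single-frame noise.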
Finally, a new MLR classifier was trained by sequential feature selection, starting
with the most discriminative feature (AU 45), and then iteratively adding the next
most discriminative feature given the features already selected. These features are
shown at the bottom of Table 4. Best performance of .98 was obtained with five
features: 45, 2, 19 (tongue show), 26 (jaw drop), and 15. This five feature model
outperformed the MLR trained on all features.
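The greedy procedure can be sketched as follows. As a simplification, the scorer below sums the selected AU columns and measures their ROC area, rather than retraining an MLR at each step as the text describes; all data is synthetic:

```python
import numpy as np

def auc(pos, neg):
    """Area under the ROC for 1-D scores (A' in the text)."""
    p, n = np.asarray(pos)[:, None], np.asarray(neg)[None, :]
    return np.mean(p > n) + 0.5 * np.mean(p == n)

def forward_select(Xd, Xa, k=3):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion best separates drowsy (Xd) from alert (Xa) samples."""
    chosen = []
    for _ in range(k):
        best, best_score = None, -1.0
        for j in range(Xd.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            score = auc(Xd[:, cols].sum(1), Xa[:, cols].sum(1))
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

# Toy data: column 0 separates the classes strongly, column 1 weakly
rng = np.random.default_rng(3)
Xd = rng.normal(size=(100, 4)); Xd[:, 0] += 2.0; Xd[:, 1] += 0.5
Xa = rng.normal(size=(100, 4))
print(forward_select(Xd, Xa, k=2)[0])  # 0: strongest feature chosen first
```

In the study this ordering starts with AU 45 (blink), mirroring the "most discriminative feature first" behavior of the toy example.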
Table 4 Drowsiness detection performance for across subjects, using an MLR classifier with dif-
ferent feature combinations. The weighted features are summed over 12 seconds before comput-
ing A’.
Feature A’
AU45 .9468
AU45,AU2 .9614
AU45,AU2,AU19 .9693
AU45,AU2,AU19,AU26 .9776
AU45,AU2,AU19,AU26,AU15 .9792
all the features .8954
Fig. 6 Performance for drowsiness detection in novel subjects over temporal window sizes.
Red point indicates the performance previously obtained in Table 4 for a temporal window of 12 seconds using the five-feature model.
Effect of Temporal Window Length: We next examined the effect of the size
of the temporal window on performance. The five feature model was employed
for this analysis. The performances shown in Table 4 were obtained using a temporal
window of 12 seconds. Here, the MLR output in the 5 feature model was summed
over windows of N seconds, where N ranged from 0.5 to 60 seconds. Figure 6
shows the area under the ROC for drowsiness detection in novel subjects over
time periods. Performance saturates at about 0.99 as the window size exceeds 30
seconds. In other words, given a 30 second video segment the system can discrim-
inate sleepy versus non-sleepy segments with 0.99 accuracy across subjects.
Action Units Associated with Drowsiness
In order to understand how each action unit is associated with drowsiness, MLR
was trained on each facial action individually. Examination of the A’ for each ac-
tion unit reveals the degree to which each facial movement was able to predict
drowsiness in this study. The A’s for the drowsy and alert states are shown in
Table 5. The five facial actions that were the most predictive of drowsiness by in-
creasing in drowsy states were 45, 2 (outer brow raise), 15 (frown), 17 (chin
raise), and 9 (nose wrinkle). The five actions that were the most predictive of
drowsiness by decreasing in drowsy states were 12 (smile), 7 (lid tighten), 39
(nostril compress), 4 (brow lower), and 26 (jaw drop). The high predictive ability
of the blink/eye closure measure was expected. However the predictability of the
outer brow raise (AU 2) was previously unknown.
Table 5 MLR model for predicting drowsiness across subjects. Predictive performance of each
facial action individually is shown.
More when critically drowsy
AU Name A’
45 Blink/Eye Closure 0.94
2 Outer Brow Raise 0.81
15 Lip Corner Depressor 0.80
17 Chin Raiser 0.79
9 Nose Wrinkle 0.78
30 Jaw Sideways 0.76
20 Lip stretch 0.74
11 Nasolabial Furrow 0.71
14 Dimpler 0.71
1 Inner Brow Raise 0.68
10 Upper Lip Raise 0.67
27 Mouth Stretch 0.66
18 Lip Pucker 0.66
22 Lip funneler 0.64
24 Lip presser 0.64
19 Tongue show 0.61
Less when critically drowsy
AU Name A’
12 Smile 0.87
7 Lid tighten 0.86
39 Nostril Compress 0.79
4 Brow lower 0.79
26 Jaw Drop 0.77
6 Cheek Raise 0.73
38 Nostril Dilate 0.72
23 Lip tighten 0.67
8 Lips toward 0.67
5 Upper lid raise 0.65
16 Upper lip depress 0.64
32 Bite 0.63
We observed during this study that many subjects raised their eyebrows in an at-
tempt to keep their eyes open, and the strong association of the AU 2 detector is
consistent with that observation. Also of note is that action 26, jaw drop, which
occurs during yawning, actually occurred less often in the critical 60 seconds prior
to a crash. This is consistent with the prediction that yawning does not tend to oc-
cur in the final moments before falling asleep.
3.3 Coupling of Behaviors
As preliminary work, we analyzed the coupling between behaviors. Below are our preliminary results, first for the coupling between steering and head motion, and then for the coupling between eye openness and eyebrow raises.
Coupling of steering and head motion.
Observation of the subjects during drowsy and alert states indicated that the subjects' head motion differed substantially when alert versus when about to fall asleep. Surprisingly, head motion increased as the driver became drowsy, with large roll motion coupled with the steering motion. Just before falling asleep, the head would become still.
We also investigated the coupling of the head and arm motions. Correlations
between head motion as measured by the roll dimension of the accelerometer out-
put and the steering wheel motion are shown in Figure 7. For this subject (subject
2), the correlation between head motion and steering increased from 0.27 in the
alert state to 0.65 in the non-alert state. For subject 1, the correlation between head
motion and steering similarly increased from 0.24 in the alert state to 0.43 in the
non-alert state. The other two subjects showed a smaller coupling effect. Future
work includes combining the head motion measures and steering correlations with
the facial movement measures in the predictive model.
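The head-steering coupling measure amounts to a Pearson correlation between the accelerometer roll output and the steering-wheel signal over a 60-second window. A minimal sketch on simulated signals follows; the coupling strengths are arbitrary illustrations, not the subjects' measured values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 60 s of signals sampled at 30 Hz: accelerometer roll output
# and steering-wheel position.
t = np.linspace(0, 60, 60 * 30)
steering = np.sin(2 * np.pi * 0.2 * t)          # slow steering oscillation

# Alert: head roll mostly independent of the steering corrections.
head_alert = 0.2 * steering + rng.normal(0, 1.0, t.size)
# Drowsy: head roll swings along with the steering corrections.
head_drowsy = 0.8 * steering + rng.normal(0, 0.6, t.size)

def coupling(head_roll, wheel):
    # Pearson correlation between the two signals over the window.
    return float(np.corrcoef(head_roll, wheel)[0, 1])

print("alert coupling: ", round(coupling(head_alert, steering), 2))
print("drowsy coupling:", round(coupling(head_drowsy, steering), 2))
```
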
Fig. 7 Head motion (blue/gray) and steering position (red/black) for 60 seconds in an alert
state (left) and 60 seconds prior to a crash (right). Head motion is the output of the roll di-
mension of the accelerometer.
Coupling of eye openness and eyebrow raises.
Our observations indicated that for some of the subjects, the coupling between eyebrow raises and eye openness increased in the drowsy state. In other words, subjects raised their eyebrows in an attempt to keep their eyes open and stay awake.
Fig. 8 Eye openness and eyebrow raises (AU 2) for 10 seconds in an alert state (left) and
10 seconds prior to a crash (right).
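The same correlation measure applies to the eye openness and AU 2 time series over a 10-second window. The sketch below uses simulated detector outputs in which, as in the drowsy state described above, eye openness tracks the eyebrow raises; the signal shapes and coupling strength are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 10 s of detector outputs at 30 fps, both scaled to [0, 1].
n = 10 * 30
brow_raise = np.clip(
    rng.normal(0.3, 0.2, n) + 0.5 * np.sin(np.linspace(0, 6, n)), 0, 1)

# Alert: eye openness varies independently of the eyebrows.
eye_open_alert = np.clip(rng.normal(0.8, 0.1, n), 0, 1)
# Drowsy: the eyes open further whenever the brows pull up.
eye_open_drowsy = np.clip(0.4 + 0.5 * brow_raise + rng.normal(0, 0.05, n), 0, 1)

def coupling(a, b):
    # Pearson correlation between the two signals over the 10 s window.
    return float(np.corrcoef(a, b)[0, 1])

print("alert coupling: ", round(coupling(eye_open_alert, brow_raise), 2))
print("drowsy coupling:", round(coupling(eye_open_drowsy, brow_raise), 2))
```
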
4 Conclusions
This chapter presented a system for automatic detection of driver drowsiness from
video. Previous approaches focused on assumptions about behaviors that might be
predictive of drowsiness. Here, a system for automatically measuring facial ex-
pressions was employed to datamine spontaneous behavior during real drowsiness
episodes. This is the first work to our knowledge to reveal significant associations
between facial expression and fatigue beyond eyeblinks. The project also revealed
a potential association between head roll and driver drowsiness, and the coupling
of head roll with steering motion during drowsiness. Of note is that a behavior often assumed to be predictive of drowsiness, yawning, was in fact a negative predictor of the 60-second window prior to a crash. It appears that in the moments before falling asleep, drivers yawn less often, not more. This highlights the importance of using examples of fatigue and drowsiness conditions in which subjects actually fall asleep.
5 Future Work
In future work, we will incorporate motion capture and EEG facilities into our experimental setup. The motion capture system will enable analysis of upper-torso movements, and the EEG will provide ground truth for drowsiness. The planned experimental setup can be seen in Figure 9.
Fig. 9 Future experimental setup with the EEG and Motion Capture Systems
Acknowledgements This research was supported in part by NSF grants NSF-CNS
0454233, SBE-0542013 and by the European Commission under Grants
FP6-2004-ACC-SSA-2 (SPICE) and MIRG-CT-2006-041919. Any opinions,
findings, and conclusions or recommendations expressed in this material are those
of the author(s) and do not necessarily reflect the views of the National Science
Foundation.