DRIVER DROWSINESS MONITORING
SYSTEM USING VISUAL BEHAVIOR
AND MACHINE LEARNING
By
Mohammed Ahmed Hussain (15081A0521)
Mohammad Asif (15081A0517)
Aasim Ahmed Khan Jawaad (15081A0502)
Under the Supervision of
G. Sridhar
Department of Computer Science and Engineering
Shadan College of Engineering and Technology
ABSTRACT
■ Drowsy driving is one of the major causes of road accidents and deaths.
■ Most conventional methods are either vehicle based, behavioral based or physiological based.
■ A low-cost, real-time driver drowsiness detection system is developed with acceptable accuracy.
■ A webcam records the video and image frames are extracted from it.
■ First the face is detected, and then the facial landmarks.
■ Subsequently the eye aspect ratio, mouth opening ratio and nose length ratio are computed.
■ Drowsiness is detected from these ratios using an adaptive thresholding technique developed for this purpose.
EXISTING SYSTEM
■ The existing driver drowsiness detection system tries to prevent accidents caused by unconsciousness through eye-blink sensing.
■ An eye-blink sensor is fixed in the vehicle; if the driver loses consciousness, a buzzer alerts the driver to prevent an accident.
Disadvantages of Existing System
■ Not Reliable
■ May damage retina
■ Highly expensive
■ Intrusive
■ Not portable
PROPOSED SYSTEM
■ In the proposed system, a low-cost, real-time driver drowsiness detection system is developed with acceptable accuracy.
■ A webcam-based system is used to detect the driver's fatigue from face images using image processing and machine learning techniques.
■ The major parts of the system are
– Data Acquisition: The video is recorded using a webcam and the frames are extracted and processed on a laptop. After extracting the frames, image processing techniques are applied to these 2D images.
– Face Detection: After extracting the frames, the human faces are detected first. Numerous face detection algorithms are available (a sketch of these two steps follows).
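A minimal sketch of these two steps, assuming OpenCV for frame capture and dlib's HOG-based frontal-face detector (the slides do not name a specific detector); the camera index and window handling are illustrative.

```python
# Sketch: grab webcam frames and detect faces (assumed OpenCV + dlib setup).
import cv2
import dlib

detector = dlib.get_frontal_face_detector()      # HOG + linear-SVM face detector

cap = cv2.VideoCapture(0)                        # 0 = default webcam (assumption)
while True:
    ok, frame = cap.read()                       # data acquisition: one 2D frame
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):               # face detection on the frame
        cv2.rectangle(frame, (rect.left(), rect.top()),
                      (rect.right(), rect.bottom()), (0, 255, 0), 2)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press 'q' to stop
        break
cap.release()
cv2.destroyAllWindows()
```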
– Facial Landmarks Points:
■ After detecting the face, the next task is to find the locations of
different facial features like the corners of the eyes and mouth, the tip
of the nose and so on.
■ The boundary points of eyes, mouth and the central line of the nose are
marked and the number of points for eye, mouth and nose are given in
Table I.
TABLE 1
■ The facial landmarks are shown below; the red points are the detected landmarks used for further processing.
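A minimal sketch of the landmark step, assuming dlib's pre-trained 68-point shape predictor and imutils; the model file name and the point groupings follow the standard dlib indexing, not Table I (which is not reproduced here).

```python
# Sketch: locate facial landmarks and group eye/mouth/nose points (assumed dlib model).
import dlib
from imutils import face_utils

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model file

def facial_landmarks(gray_frame):
    """Return eye, mouth and nose landmark coordinates for the first detected face."""
    faces = detector(gray_frame, 0)
    if not faces:
        return None
    pts = face_utils.shape_to_np(predictor(gray_frame, faces[0]))  # (68, 2) array of (x, y)
    return {
        "left_eye":  pts[42:48],   # standard dlib indices (0-based)
        "right_eye": pts[36:42],
        "mouth":     pts[48:68],
        "nose":      pts[27:36],
    }
```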
– Feature Extraction:
■ Eye Aspect Ratio (EAR): From the eye landmark points, the eye aspect ratio is calculated as the ratio of the height to the width of the eye.
■ Mouth Opening Ratio (MOR): The mouth opening ratio is a parameter to detect yawning during drowsiness. Similar to EAR, it is calculated as the ratio of the height to the width of the mouth opening.
■ Nose Length Ratio (NLR): In the normal condition, the nose makes an acute angle with respect to the focal plane of the camera. This angle increases as the head moves vertically up and decreases on moving down.
NLR = nose length (distance between p28 and p25) / average nose length
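A minimal sketch of the three features. The EAR and MOR below use the common height-over-width forms implied by the slide; the exact landmark indices and the averaging used for NLR are assumptions, since the original formula images are not reproduced.

```python
# Sketch: EAR, MOR and NLR from landmark coordinates (assumed point ordering).
from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    # eye: six points p1..p6 around one eye, in dlib order
    a = dist.euclidean(eye[1], eye[5])     # vertical distance p2-p6
    b = dist.euclidean(eye[2], eye[4])     # vertical distance p3-p5
    c = dist.euclidean(eye[0], eye[3])     # horizontal distance p1-p4
    return (a + b) / (2.0 * c)

def mouth_opening_ratio(mouth):
    # mouth: the 20-point mouth slice from the landmark sketch above;
    # height of the inner-lip opening over the mouth width (assumed form)
    height = dist.euclidean(mouth[14], mouth[18])   # top vs bottom inner lip
    width = dist.euclidean(mouth[12], mouth[16])    # left vs right inner corner
    return height / width

def nose_length_ratio(nose_top, nose_tip, average_nose_length):
    # current nose length relative to its average (normal-pose) length
    return dist.euclidean(nose_top, nose_tip) / average_nose_length
```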
– Classification:
■ After computing all three features, the next task is to detect drowsiness in the extracted frames.
■ In the beginning, adaptive thresholding is considered for classification.
■ Later, machine learning algorithms are used to classify the data. The machine learning algorithms used are
– Naïve Bayes Classifier
– Support Vector Machine (SVM)
– Fisher's Linear Discriminant Analysis (FLDA)
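A minimal sketch of the classification step with scikit-learn, assuming a feature matrix X whose rows are frames with columns [EAR, MOR, NLR] and labels y (0 = alert, 1 = drowsy); the split and model settings are illustrative, not taken from the slides.

```python
# Sketch: train and compare the three classifiers named above (assumed data layout).
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_and_compare(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(kernel="linear"),
        "FLDA": LinearDiscriminantAnalysis(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)                         # train on known data
        print(name, "accuracy:", model.score(X_te, y_te))
    return models
```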
Naïve Bayes Classifier
■ Naive Bayes classifiers are a family of classification algorithms based on Bayes' Theorem.
■ The fundamental Naive Bayes assumption is that each feature makes an independent and equal contribution to the outcome.
• Example: Play Tennis
Example continued..
• Learning Phase
Outlook Play=Yes Play=No
Sunny 2/9 3/5
Overcast 4/9 0/5
Rain 3/9 2/5
Temperature Play=Yes Play=No
Hot 2/9 2/5
Mild 4/9 2/5
Cool 3/9 1/5
Humidity Play=Yes Play=No
High 3/9 4/5
Normal 6/9 1/5
Wind Play=Yes Play=No
Strong 3/9 3/5
Weak 6/9 2/5
P(Play=Yes) = 9/14 P(Play=No) = 5/14
Example continued..
• Test Phase
– Given a new instance,
x’=(Outlook=Sunny, Temperature=Cool, Humidity=High, Wind=Strong)
– Look up the learned tables:
P(Outlook=Sunny|Play=Yes) = 2/9, P(Outlook=Sunny|Play=No) = 3/5
P(Temperature=Cool|Play=Yes) = 3/9, P(Temperature=Cool|Play=No) = 1/5
P(Humidity=High|Play=Yes) = 3/9, P(Humidity=High|Play=No) = 4/5
P(Wind=Strong|Play=Yes) = 3/9, P(Wind=Strong|Play=No) = 3/5
P(Play=Yes) = 9/14, P(Play=No) = 5/14
– Apply the MAP rule:
P(Yes|x’) ∝ [P(Sunny|Yes)P(Cool|Yes)P(High|Yes)P(Strong|Yes)]P(Play=Yes) = 0.0053
P(No|x’) ∝ [P(Sunny|No)P(Cool|No)P(High|No)P(Strong|No)]P(Play=No) = 0.0206
Since P(Yes|x’) < P(No|x’), we label x’ as “No”.
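The slide's numbers can be reproduced with a few lines of Python by multiplying the looked-up conditional probabilities with the class priors (values exactly as listed above):

```python
# Sketch: the Play-Tennis MAP computation from the slide, done numerically.
p_yes = (2/9) * (3/9) * (3/9) * (3/9) * (9/14)   # P(x'|Yes) * P(Play=Yes)
p_no  = (3/5) * (1/5) * (4/5) * (3/5) * (5/14)   # P(x'|No)  * P(Play=No)
print(round(p_yes, 4), round(p_no, 4))           # 0.0053 0.0206
print("Play Tennis:", "Yes" if p_yes > p_no else "No")   # -> No
```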
Support Vector Machine (SVM) - Tennis example
(Figure: the tennis data plotted against Humidity and Temperature, with points marked "play tennis" and "do not play tennis".)
The two classes are labelled −1 and +1.
Data: <xi, yi>, i = 1, …, l, where xi ∈ R^d and yi ∈ {−1, +1}.
All hyperplanes in R^d are parameterized by a vector w and a constant b, and can be expressed as w•x + b = 0 (the equation of a hyperplane from algebra).
Our aim is to find a hyperplane, i.e. a classifier f(x) = sign(w•x + b), that correctly classifies our data.
SVM continue…
Definition: define the hyperplane H such that:
xi•w + b ≥ +1 when yi = +1
xi•w + b ≤ −1 when yi = −1
d+ = the shortest distance to the closest positive point
d− = the shortest distance to the closest negative point
The margin of a separating hyperplane is d+ + d−.
H1 and H2 are the planes:
H1: xi•w + b = +1
H2: xi•w + b = −1
The points on the planes H1 and H2 are the support vectors.
SVM continue…
Maximizing the margin:
We want a classifier with as large a margin as possible.
In order to maximize the margin, we need to minimize ||w||, with the condition that there are no data points between H1 and H2:
xi•w + b ≥ +1 when yi = +1
xi•w + b ≤ −1 when yi = −1
These can be combined into yi(xi•w + b) ≥ 1.
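A minimal sketch of the maximum-margin idea with scikit-learn: a linear SVM with a large C (approximating a hard margin), its hyperplane w•x + b = 0, the support vectors lying on H1 and H2, and the margin width 2/||w||. The toy points are illustrative, not the humidity/temperature data above.

```python
# Sketch: linear SVM, separating hyperplane, support vectors and margin (toy data).
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],    # class -1
              [4.0, 4.0], [5.0, 4.5], [4.5, 5.0]])   # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)          # large C ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]

print("hyperplane: w.x + b = 0 with w =", w, ", b =", b)
print("support vectors (points on H1 and H2):", clf.support_vectors_)
print("margin d+ + d- = 2/||w|| =", 2 / np.linalg.norm(w))
```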
Fisher's Linear Discriminant Analysis (FLDA)
■ Determines class membership
– Sensitive / Resistant
■ Finds the directions (projections) in feature space that maximize the ratio of between-class variance to within-class variance
■ Supervised classification
– Use known data to train the model
– Apply the model to a blind set for prediction
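A minimal sketch of FLDA with scikit-learn, following the train-on-known-data / predict-on-blind-set idea above; X, y and the split are placeholders.

```python
# Sketch: Fisher's LDA - fit on labelled data, project onto the discriminant
# directions, and score a held-out "blind" set (assumed data placeholders).
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def flda_demo(X, y):
    X_train, X_blind, y_train, y_blind = train_test_split(
        X, y, test_size=0.3, random_state=0)
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)              # learn the discriminant directions
    projected = lda.transform(X_train)     # at most (n_classes - 1) components (LD1, LD2, ...)
    print("blind-set accuracy:", lda.score(X_blind, y_blind))
    return projected
```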
LDA continue…
SYSTEM CONFIGURATION
Hardware requirements:
– Processor : any up-to-date processor
– RAM : minimum 1 GB
– Hard disk : minimum 100 GB
Software requirements:
– Operating system : Windows family
– Technology : Python 3.6
– IDE : PyCharm
– UML : StarUML
ARCHITECTURE DIAGRAM
DATAFLOW DIAGRAM
CLASS DIAGRAM
Classes: VideoStream (left_eye, right_eye, mouth, nose), webcam (setup/loading, start()), DROWSINESS (Head Bending, Yawning, ALERT()).
USECASE DIAGRAM
Actor: user. Use cases within the System boundary: webcam, loading face landmark predictor, video stream, left_eye, right_eye, mouth, nose and DROWSINESS ALERT, linked by <<include>> relationships.
SEQUENCE DIAGRAM
Lifelines: user, webcam, loading face landmark predictor, video stream, mouth, right_eye, left_eye, nose, DROWSINESS ALERT. The user turns the webcam ON(); the subsequent steps run automatically (Auto()) and the Result() is returned.
COLLABORATION DIAGRAM
STATE CHART DIAGRAM
States: webcam → loading face → video stream → right_eye / left_eye / nose / mouth → DROWSINESS ALERT.
ACTIVITY DIAGRAM
Swimlane: webcam → loading face → video stream → landmark predictor → right_eye / mouth / left_eye / nose → DROWSINESS ALERT.
COMPONENT DIAGRAM
Components: scipy.spatial, imutils.video, imutils, numpy.
DEPLOYMENT DIAGRAM
Nodes: PyCharm, System.
OUTPUT SCREENS
■ Eye Closed
■ Head Bending
■ Yawning
■ Training Dataset
■ Analysis of Machine Learning algorithms
CONCLUSION
■ A low-cost, real-time driver drowsiness monitoring system has been proposed based on visual behavior and machine learning.
■ Visual behavior features such as the eye aspect ratio, mouth opening ratio and nose length ratio are computed from the streaming video captured by a webcam.
■ An adaptive thresholding technique has been developed to detect driver drowsiness in real time.
THANK YOU!!!
