The document describes the design of a Drowsy Driver Detection System. The system uses a camera to monitor the driver's face and eyes in order to detect fatigue. It analyzes the video input to locate the eyes and determines whether they are open or closed. If the eyes remain closed for 5 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal. The system is designed to work in both day and night conditions, using a machine-vision-based approach, with the goal of developing a non-intrusive, real-time fatigue monitoring system.
Driver drowsiness detection is a car-safety technology that helps prevent accidents caused by the driver becoming drowsy. Various studies have suggested that around 20% of all road accidents are fatigue-related, rising to as much as 50% on certain roads.
This presentation is on the topic of driver drowsiness detection. We discuss the techniques used to detect drowsiness and compare several of them. We conclude with some suggestions regarding future work.
4. Abstract
A Drowsy Driver Detection System has been developed using non-intrusive machine-vision concepts. The system uses a small monochrome security camera that points directly toward the driver's face and monitors the driver's eyes in order to detect fatigue. When fatigue is detected, a warning signal is issued to alert the driver. This report describes how the eyes are located and how it is determined whether they are open or closed. The algorithm developed is distinct from those in currently published papers, which was a primary objective of the project. The system uses information obtained from the binary version of the image to find the edges of the face, which narrows the area in which the eyes may exist. Once the face area is found, the eyes are located by computing the horizontal averages in the area. Since eye regions in the face exhibit large intensity changes, the eyes are located by finding the significant intensity changes in the face. Once the eyes are located, the distances between the intensity changes in the eye area determine whether the eyes are open or closed; a large distance corresponds to eye closure. If the eyes are found closed for 5 consecutive frames, the system concludes that the driver is falling asleep and issues a warning signal. The system is also able to detect when the eyes cannot be found, and it works under reasonable lighting conditions.
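The eye-localization steps described above (binarize the frame, then use horizontal averages and intensity changes to find the vertical position of the eyes) can be sketched as follows. This is an illustrative reconstruction, not the report's actual code: the threshold value, function names, and array shapes are assumptions.

```python
import numpy as np

def binarize(gray, threshold=100):
    """Binarize a grayscale frame: 1 where the pixel is brighter than threshold.

    The threshold value here is a placeholder; the report experiments with
    several thresholds (see Figure 7.2).
    """
    return (gray > threshold).astype(np.uint8)

def find_eye_row(gray, face_top, face_bottom):
    """Estimate the vertical position of the eyes inside the face region.

    Each row of the face area is reduced to its horizontal average; because
    eye regions show strong intensity changes, the row where the average
    changes most sharply is taken as the eye candidate.
    """
    rows = gray[face_top:face_bottom].astype(float).mean(axis=1)  # horizontal averages
    changes = np.abs(np.diff(rows))            # change between neighbouring rows
    return face_top + int(np.argmax(changes))  # row index of the sharpest change

# Usage on a synthetic bright frame with a dark "eye" band at row 6:
gray = np.full((12, 8), 200, dtype=np.uint8)
gray[6] = 40
eye_row = find_eye_row(gray, face_top=2, face_bottom=11)
```

In a real system, `face_top` and `face_bottom` would come from the face-edge detection stage; here they are supplied by hand.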
5. List of Figures
Figure 6.1: Case where no retinal reflection is present.
Figure 6.2: Photograph of Drowsy Driver Detection System prototype.
Figure 7.1: Flow chart of the system.
Figure 7.2: Examples of binarization using different thresholds.
Figure 7.3: Face top and width detection.
Figure 7.4: Face edges found after first trial.
Figure 7.5: Binary picture after noise removal.
Figure 7.6: Face edges found after second trial.
Figure 7.7: Labels of top of head, and first two intensity changes.
Figure 7.8: Result of using horizontal averages to find the vertical position of the eye.
Figure 7.9: Comparison of open and closed eye.
Figure 8.1: Results of using Sobel edge detection.
Figure 9.1: Demonstration of first step in the process for finding the eyes – original image.
Figure 9.2: Demonstration of second step – binary image.
Figure 9.3: Demonstration of third step – initial edge detection image.
Figure 9.4: Demonstration of fourth step – removing blobs in binary image.
Figure 9.5: Demonstration of step 5 – second edge detection.
Figure 9.6: Result of final step – finding the left eye using intensity information.
6. Chapter 1
Introduction
Driver fatigue is a significant factor in a large number of vehicle accidents. Recent statistics estimate that annually 1,200 deaths and 76,000 injuries can be attributed to fatigue-related crashes [9].
The development of technologies for detecting or preventing drowsiness at the wheel is a major challenge in the field of accident-avoidance systems. Because of the hazard that drowsiness presents on the road, methods need to be developed to counteract its effects.
The aim of this project is to develop a prototype drowsiness detection system. The focus is placed on designing a system that will accurately monitor the open or closed state of the driver's eyes in real time.
By monitoring the eyes, it is believed that the symptoms of driver fatigue can be detected early enough to avoid a car accident. Detection of fatigue involves analyzing a sequence of images of a face and observing eye movements and blink patterns.
The analysis of face images is a popular research area, with applications such as face recognition, virtual tools, and human-identification security systems. This project focuses on the localization of the eyes, which involves examining the entire image of the face and determining the position of the eyes using a self-developed image-processing algorithm. Once the position of the eyes is located, the system determines whether the eyes are open or closed, and thereby detects fatigue.
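The frame-by-frame decision rule from the abstract (issue a warning once the eyes have been closed for 5 consecutive frames) can be illustrated with a small counter. The class and method names here are hypothetical, chosen only for this sketch.

```python
class DrowsinessMonitor:
    """Raise a warning once the eyes stay closed for N consecutive frames."""

    def __init__(self, frames_to_alarm=5):
        self.frames_to_alarm = frames_to_alarm
        self.closed_count = 0

    def update(self, eyes_closed):
        """Feed one frame's open/closed result; return True when a warning fires."""
        if eyes_closed:
            self.closed_count += 1
        else:
            self.closed_count = 0  # any open-eye frame resets the count
        return self.closed_count >= self.frames_to_alarm

# Usage: four closed frames give no warning; the fifth triggers one.
monitor = DrowsinessMonitor()
alerts = [monitor.update(True) for _ in range(5)]  # → [False]*4 + [True]
```

In the full system, `eyes_closed` would be produced per frame by the eye-state analysis; an "eyes not found" frame would be handled separately rather than fed into this counter.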
7. Chapter 2
System Requirements
The requirements for an effective drowsy driver detection system are as follows:
• A non-intrusive monitoring system that will not distract the driver.
• A real-time monitoring system, to ensure accuracy in detecting drowsiness.
• A system that will work in both daytime and nighttime conditions.
These requirements are, accordingly, the aims of this project. The project will consist of a concept-level system that meets all of the above requirements.
INTRODUCTION TO EMBEDDED SYSTEMS
EMBEDDED SYSTEMS
Introduction:
An embedded system is a system designed to perform a predefined, specified task, and is commonly defined as a combination of both software and hardware. A general-purpose definition of embedded systems is that they are devices used to control, monitor or assist the operation of equipment, machinery or plant. "Embedded" reflects the fact that they are an integral part of the system. At the other extreme, a general-purpose computer may be used to control the operation of a large, complex processing plant, and its presence will be obvious.
All embedded systems include computers or microprocessors, although some of these computers are very simple systems compared with a personal computer.
The very simplest embedded systems are capable of performing only a single function
or set of functions to meet a single predetermined purpose. In more complex systems an
application program that enables the embedded system to be used for a particular purpose in a
specific application determines the functioning of the embedded system. The ability to have
programs means that the same embedded system can be used for a variety of different
purposes. In some cases a microprocessor may be designed in such a way that application
software for a particular purpose can be added to the basic software in a second process, after
which it is not possible to make further changes. The applications software on such processors
is sometimes referred to as firmware.
The simplest devices consist of a single microprocessor (often called a "chip"), which may
itself be packaged with other chips in a hybrid system or Application Specific Integrated Circuit (ASIC).
Its input comes from a detector or sensor and its output goes to a switch or activator which (for
example) may start or stop the operation of a machine or, by operating a valve, may control the flow
of fuel to an engine.
An embedded system is the combination of both software and hardware.
Block diagram of an Embedded System
Software deals with languages such as ALP, C, and VB, while hardware comprises the processor, peripherals, and memory.
Memory: used to store data and addresses.
Peripherals: the external devices connected to the system.
Processor: an IC which is used to perform the system's tasks.
Applications of embedded systems
• Manufacturing and process control
• Construction industry
• Transport
• Buildings and premises
• Domestic service
[Block diagram: Embedded System = Software (ALP, C, VB, etc.) + Hardware (Processor, Peripherals, Memory)]
• Communications
• Office systems and mobile equipment
• Banking, finance and commercial
• Medical diagnostics, monitoring and life support
• Testing, monitoring and diagnostic systems
Processors are classified into four types:
Micro Processor (µp)
Micro controller (µc)
Digital Signal Processor (DSP)
Application Specific Integrated Circuits (ASIC)
Micro Processor (µp):
A silicon chip that contains a CPU. In the world of personal computers, the terms microprocessor and
CPU are used interchangeably. At the heart of all personal computers and most workstations sits a
microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios
to fuel-injection systems for automobiles.
Three basic characteristics differentiate microprocessors:
Instruction set: The set of instructions that the microprocessor can execute.
Bandwidth: The number of bits processed in a single instruction.
Clock speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.
In both cases, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor
that runs at 50MHz is more powerful than a 16-bit microprocessor that runs at 25MHz. In addition to
bandwidth and clock speed, microprocessors are classified as being either RISC (reduced instruction
set computer) or CISC (complex instruction set computer).
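As a worked illustration of the comparison just made, the sketch below multiplies word width by clock rate as a crude figure of merit. This is illustrative only: real throughput also depends on the instruction set, pipelining, and memory system.

```python
# Crude figure of merit for the two example processors in the text:
# bits processed per microsecond, assuming one instruction per cycle.
def crude_throughput(bits, mhz):
    return bits * mhz

a = crude_throughput(32, 50)  # 32-bit microprocessor at 50 MHz
b = crude_throughput(16, 25)  # 16-bit microprocessor at 25 MHz
# By this rough measure the 32-bit part moves four times the bits per
# unit time, matching the text's claim that it is the more powerful CPU.
```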
A microprocessor has three basic elements, as shown above. The ALU performs all arithmetic
computations, such as addition, subtraction and logic operations (AND, OR, etc.). It is controlled by
the Control Unit and receives its data from the Register Array. The Register Array is a set of registers
used for storing data. These registers can be accessed by the ALU very quickly. Some registers have
specific functions - we will deal with these later. The Control Unit controls the entire process. It
provides the timing and a control signal for getting data into and out of the registers and the ALU and
it synchronizes the execution of instructions (we will deal with instruction execution at a later date).
Three Basic Elements of a Microprocessor
Micro Controller (µc):
A microcontroller is a small computer on a single integrated circuit containing a processor
core, memory, and programmable input/output peripherals. Program memory in the form of
NOR flash or OTP ROM is also often included on chip, as well as a typically small amount
of RAM. Microcontrollers are designed for embedded applications, in contrast to the
microprocessors used in personal computers or other general purpose applications.
Block Diagram of Micro Controller (µc)
Digital Signal Processors (DSPs):
A digital signal processor performs scientific and mathematical operations. DSP chips are specialized microprocessors with architectures designed specifically for the types of operations required in digital signal processing. Like a general-
purpose microprocessor, a DSP is a programmable device, with its own native instruction
code. DSP chips are capable of carrying out millions of floating point operations per second,
and like their better-known general-purpose cousins, faster and more powerful versions are
continually being introduced. DSPs can also be embedded within complex "system-on-chip"
devices, often containing both analog and digital circuitry.
Application Specific Integrated Circuit (ASIC)
An ASIC is a combination of digital and analog circuits packed into an IC to achieve a desired control/computation function. An ASIC typically contains:
• CPU cores for computation and control
• Peripherals to control timing-critical functions
• Memories to store data and program
• Analog circuits to provide clocks and to interface to the real world, which is analog in nature
• I/Os to connect to external components like LEDs, memories, monitors etc.
[Block diagram labels: Timer/Counter, serial communication, ROM, ADC, DAC, USART, oscillators; ALU, CU, Memory]
Computer Instruction Set
There are two different types of computer instruction sets:
1. RISC (Reduced Instruction Set Computer) and
2. CISC (Complex Instruction Set computer)
Reduced Instruction Set Computer (RISC)
A RISC (reduced instruction set computer) is a microprocessor designed to execute a smaller number of types of computer instructions so that it can operate at a higher speed (perform more millions of instructions per second, or MIPS). Since each instruction type that a computer must perform requires additional transistors and circuitry, a larger list or set of computer instructions tends to make the microprocessor more complicated and slower in operation.
Besides performance improvement, some advantages of RISC and related design improvements are:
• A new microprocessor can be developed and tested more quickly if one of its aims is to be less complicated.
• Operating system and application programmers who use the microprocessor's instructions will find it easier to develop code with a smaller instruction set.
• The simplicity of RISC allows more freedom to choose how to use the space on a microprocessor.
• Higher-level language compilers produce more efficient code than formerly, because they have always tended to use the smaller set of instructions found in a RISC computer.
RISC characteristics
Simple instruction set: In a RISC machine, the instruction set contains simple, basic instructions from which more complex instructions can be composed.
Same-length instructions: Each instruction is the same length, so that it may be fetched in a single operation.
One machine-cycle instructions: Most instructions complete in one machine cycle, which allows the processor to handle several instructions at the same time. This pipelining is a key technique used to speed up RISC machines.
Complex Instruction Set Computer (CISC)
CISC, which stands for Complex Instruction Set Computer, is a philosophy for designing chips that are
easy to program and which make efficient use of memory. Each instruction in a CISC instruction set
might perform a series of operations inside the processor. This reduces the number of instructions
required to implement a given program, and allows the programmer to learn a small but flexible set
of instructions.
The advantages of CISC
• At the time of their initial development, CISC machines used available technologies to optimize computer performance.
• Microprogramming is as easy as assembly language to implement, and much less expensive than hardwiring a control unit.
• The ease of micro-coding new instructions allowed designers to make CISC machines upwardly compatible: a new computer could run the same programs as earlier computers, because the new computer would contain a superset of the instructions of the earlier computers.
• As each instruction became more capable, fewer instructions could be used to implement a given task. This made more efficient use of the relatively slow main memory.
• Because microprogram instruction sets can be written to match the constructs of high-level languages, the compiler does not have to be as complicated.
The disadvantages of CISC
Still, designers soon realized that the CISC philosophy had its own problems, including:
• Earlier generations of a processor family were generally contained as a subset in every new version, so the instruction set and chip hardware become more complex with each generation of computers.
• So that as many instructions as possible could be stored in memory with the least possible wasted space, individual instructions could be of almost any length. This means that different instructions take different amounts of clock time to execute, slowing down the overall performance of the machine.
• Many specialized instructions aren't used frequently enough to justify their existence: approximately 20% of the available instructions are used in a typical program.
• CISC instructions typically set the condition codes as a side effect of the instruction. Not only does setting the condition codes take time, but programmers have to remember to examine the condition code bits before a subsequent instruction changes them.
2.2 Memory Architecture
There are two different types of memory architectures:
Harvard Architecture
Von-Neumann Architecture
Harvard Architecture
Harvard computers have separate memory areas for program instructions and data, with two or more internal data buses that allow simultaneous access to both instructions and data. The CPU fetches program instructions on the program memory bus.
The Harvard architecture is a computer architecture with physically separate storage and signal
pathways for instructions and data. The term originated from the Harvard Mark I relay-based
computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical
counters. These early machines had limited data storage, entirely contained within the central
processing unit, and provided no access to the instruction storage as data. Programs needed to be
loaded by an operator; the processor could not boot itself.
Harvard Architecture
Modern uses of the Harvard architecture
The principal advantage of the pure Harvard architecture - simultaneous access to more than
one memory system - has been reduced by modified Harvard processors using modern CPU
cache systems. Relatively pure Harvard architecture machines are used mostly in applications
where tradeoffs, such as the cost and power savings from omitting caches, outweigh the
programming penalties from having distinct code and data address spaces.
Digital signal processors (DSPs) generally execute small, highly-optimized audio or video processing
algorithms. They avoid caches because their behavior must be extremely reproducible. The
difficulties of coping with multiple address spaces are of secondary concern to speed of execution. As
a result, some DSPs have multiple data memories in distinct address spaces to facilitate SIMD and
VLIW processing. Texas Instruments TMS320 C55x processors, as one example, have multiple parallel
data busses (two write, three read) and one instruction bus.
Microcontrollers are characterized by having small amounts of program (flash memory) and data
(SRAM) memory, with no cache, and take advantage of the Harvard architecture to speed processing
by concurrent instruction and data access. The separate storage means the program and data
memories can have different bit depths, for example using 16-bit wide instructions and 8-bit wide
data. They also mean that instruction pre-fetch can be performed in parallel with other activities.
Examples include the AVR by Atmel Corp., the PIC by Microchip Technology, Inc., and the ARM Cortex-M3 processor (not all ARM chips have Harvard architecture).
Even in these cases, it is common to have special instructions to access program memory as
data for read-only tables, or for reprogramming.
Von-Neumann Architecture
A computer of this type has a single, common memory space in which both program instructions and data are stored, and a single internal data bus that fetches both instructions and data. The two fetches cannot be performed at the same time.
The Von Neumann architecture is a design model for a stored-program digital computer that uses a
central processing unit (CPU) and a single separate storage structure ("memory") to hold both
instructions and data. It is named after the mathematician and early computer scientist John von
Neumann. Such computers implement a universal Turing machine and have a sequential
architecture.
A stored-program digital computer is one that keeps its programmed instructions, as well as its data,
in read-write, random-access memory (RAM). Stored-program computers were an advance over
the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were
programmed by setting switches and inserting patch leads to route data and to control signals
between various functional units. In the vast majority of modern computers, the same memory is
used for both data and program instructions. The mechanisms for transferring the data and
instructions between the CPU and memory are, however, considerably more complex than the
original von Neumann architecture.
The terms "von Neumann architecture" and "stored-program computer" are generally used
interchangeably, and that usage is followed in this article.
Schematic of the Von-Neumann Architecture.
Basic Difference between Harvard and Von-Neumann Architecture
The primary difference between the Harvard architecture and the Von Neumann architecture is that in the Von Neumann architecture, data and programs are stored in the same memory and managed by the same information-handling system, whereas the Harvard architecture stores data and programs in separate memory devices that are handled by different subsystems.
In a computer using the Von-Neumann architecture without cache, the central processing unit (CPU) can either be reading an instruction or reading/writing data to/from the memory. Both of these operations cannot occur simultaneously, as the data and instructions use the same system bus.
In a computer using the Harvard architecture the CPU can both read an instruction and access data
memory at the same time without cache. This means that a computer with Harvard architecture can
potentially be faster for a given circuit complexity because data access and instruction fetches do not
contend for use of a single memory pathway.
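The cycle-count consequence of this contrast can be sketched numerically. The model below is a deliberate simplification (one bus transaction per cycle, no cache, no pipelining), chosen only to show why the shared bus serializes fetches while separate buses let them overlap.

```python
# Toy memory-cycle model: on a shared bus (Von Neumann) the instruction
# fetch and each data access occupy separate cycles; with separate
# instruction and data buses (Harvard) they can proceed in parallel.
def memory_cycles(n_instructions, data_accesses_per_instr, shared_bus):
    if shared_bus:
        per_instr = 1 + data_accesses_per_instr  # fetch, then each access
    else:
        per_instr = max(1, data_accesses_per_instr)  # fetch overlaps access
    return n_instructions * per_instr

von_neumann = memory_cycles(100, 1, shared_bus=True)   # serialized
harvard = memory_cycles(100, 1, shared_bus=False)      # overlapped
```

For 100 instructions that each touch data once, the shared-bus machine needs twice the memory cycles, which is the "potentially faster for a given circuit complexity" point made above.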
Today, the vast majority of computers are designed and built using the Von Neumann architecture template, primarily because of the dynamic capabilities and efficiencies gained in designing, implementing, and operating one memory system as opposed to two. The Von Neumann architecture may be somewhat slower than the contrasting Harvard architecture for certain specific tasks, but it is much more flexible and allows for many concepts unavailable to the Harvard architecture, such as self-programming, word processing and so on.
Harvard architectures are typically used only in specialized systems or for very specific uses. The architecture is used in specialized digital signal processing (DSP), typically for video and audio processing products. It is also used in many small microcontrollers in electronics applications, such as Advanced RISC Machine (ARM) based products from many vendors.
BLOCK DESCRIPTION
POWER SUPPLY:
The input to the circuit is applied from the regulated power supply. The A.C. input, i.e., 230V from the mains supply, is stepped down by the transformer to 12V and fed to a rectifier. The output obtained from the rectifier is a pulsating D.C. voltage, so in order to get a pure D.C. voltage, the output from the rectifier is fed to a filter to remove any A.C. components present even after rectification. This voltage is then given to a voltage regulator to obtain a pure, constant D.C. voltage.
Fig: Power supply (230V AC, 50Hz → step-down transformer → bridge rectifier → filter → regulator → D.C. output)
Transformer:
Usually, D.C. voltages of 5V, 9V or 12V are required to operate various electronic equipment, but these voltages cannot be obtained directly. The A.C. input available at the mains supply, i.e., 230V, must therefore be brought down to the required voltage level. This is done by a transformer: a step-down transformer is employed to decrease the voltage to the required level.
Rectifier:
The output from the transformer is fed to the rectifier. It converts A.C. into pulsating
D.C. The rectifier may be a half wave or a full wave rectifier. In this project, a bridge rectifier
is used because of its merits like good stability and full wave rectification.
The bridge rectifier is a circuit which converts an A.C. voltage to a D.C. voltage using both half cycles of the input A.C. voltage. The bridge rectifier circuit is shown in the figure. The circuit has four diodes connected to form a bridge. The A.C. input voltage is applied to the diagonally opposite ends of the bridge, and the load resistance is connected between the other two ends.
For the positive half cycle of the input ac voltage, diodes D1 and D3 conduct, whereas
diodes D2 and D4 remain in the OFF state. The conducting diodes will be in series with the
load resistance RL and hence the load current flows through RL.
For the negative half cycle of the input ac voltage, diodes D2 and D4 conduct whereas,
D1 and D3 remain OFF. The conducting diodes D2 and D4 will be in series with the load
resistance RL and hence the current flows through RL in the same direction as in the previous
half cycle. Thus a bi-directional wave is converted into a unidirectional wave.
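The conversion just described amounts to folding both half cycles into one polarity, i.e. the output is approximately the absolute value of the input, minus the drop across the two conducting diodes. The sketch below is a numeric illustration only; the 1.4 V combined diode drop is a typical assumed value, not a measurement from this project.

```python
import math

# Numeric sketch of full-wave (bridge) rectification: v_out ≈ |v_in|,
# less the assumed ~1.4 V drop across the two conducting diodes.
def bridge_rectify(v_in, diode_drop=1.4):
    out = abs(v_in) - diode_drop
    return out if out > 0 else 0.0

# One 20 ms cycle of a 12 V-amplitude, 50 Hz secondary, sampled every 1 ms:
samples = [12 * math.sin(2 * math.pi * 50 * t / 1000) for t in range(20)]
rectified = [bridge_rectify(v) for v in samples]
# Every rectified sample is non-negative: current always flows through
# the load RL in the same direction, as described above.
```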
Filter:
A capacitive filter is used in this project. It removes the ripples from the output of the rectifier and smoothens the D.C. output. The output received from this filter is constant as long as the mains voltage and load are held constant; if either of the two varies, the D.C. voltage received at this point changes. Therefore a regulator is applied at the output stage.
Voltage regulator:
As the name itself implies, it regulates the input applied to it. A voltage regulator is an
electrical regulator designed to automatically maintain a constant voltage level. In this project,
power supplies of 5V and 12V are required. In order to obtain these voltage levels, 7805 and 7812 voltage regulators are used. The first number, 78, represents a positive supply, and the numbers 05 and 12 represent the required output voltage levels.
ATmega328 Microcontroller
The ATmega88 through ATmega328 microcontrollers are said by Atmel
to be the upgrades from the very popular ATmega8. They are pin compatible, but not
functionally compatible. The ATmega328 has 32kB of flash, where the ATmega8 has 8kB.
Other differences are in the timers, additional SRAM and EEPROM, the addition of pin
change interrupts, and a divide by 8 prescaler for the system clock.
The schematic below shows the Atmel ATmega328 circuit as it was built on the test board. The power
supply is common and is shared between all of the microcontrollers on the board. The ATmega328 is
in a minimal circuit. It is using its internal 8 MHz RC oscillator (divided by 8). With the ATmega328 I
needed to both burn a bootloader and download Arduino sketches. The boot loader is programmed
using the ISP programming connector, and the Arduino sketches are uploaded via the 6-pin header.
Be aware that programming the Arduino bootloader into the ATmega88, ATmega168, or ATmega328 microcontroller will change the clock fuses, requiring the addition of an external crystal. The crystal shown on the schematic is only required when the ATmega328 is going to be used as an Arduino, although it may be desired in any real-world application. I typically run them at 16 MHz, but they will run as high as 20 MHz.
Pin Descriptions
VCC: Digital supply voltage.
GND: Ground.
Port B (PB7:0) XTAL1/XTAL2/TOSC1/TOSC2
Port B is an 8-bit bi-directional I/O port with internal pull-up resistors (selected for each bit). The Port
B output buffers have symmetrical drive characteristics with both high sink and source capability. As
inputs, Port B pins that are externally pulled low will source current if the pull-up resistors are
activated. The Port B pins are tri-stated when a reset condition becomes active, even if the clock is
not running.
Depending on the clock selection fuse settings, PB6 can be used as input to the inverting Oscillator
amplifier and input to the internal clock operating circuit.
Depending on the clock selection fuse settings, PB7 can be used as output from the inverting
Oscillator amplifier.
If the Internal Calibrated RC Oscillator is used as the chip clock source, PB7..6 are used as TOSC2..1 inputs for the Asynchronous Timer/Counter2 if the AS2 bit in ASSR is set.
Port C (PC5:0)
Port C is a 7-bit bi-directional I/O port with internal pull-up resistors (selected for each bit). The
PC5..0 output buffers have symmetrical drive characteristics with both high sink and source
capability. As inputs, Port C pins that are externally pulled low will source current if the pull-up
resistors are activated. The Port C pins are tri-stated when a reset condition becomes active, even if
the clock is not running.
PC6/RESET
If the RSTDISBL Fuse is programmed, PC6 is used as an I/O pin. Note that the electrical characteristics
of PC6 differ from those of the other pins of Port C. If the RSTDISBL Fuse is unprogrammed, PC6 is
used as a Reset input. A low level on this pin for longer than the minimum pulse length will generate
a Reset, even if the clock is not running.
Port D (PD7:0)
Port D is an 8-bit bi-directional I/O port with internal pull-up resistors (selected for each bit). The Port
D output buffers have symmetrical drive characteristics with both high sink and source capability. As
inputs, Port D pins that are externally pulled low will source current if the pull-up resistors are
activated. The Port D pins are tri-stated when a reset condition becomes active, even if the clock is
not running.
AVCC
AVCC is the supply voltage pin for the A/D Converter, PC3:0, and ADC7:6. It should be externally
connected to VCC, even if the ADC is not used. If the ADC is used, it should be connected to VCC
through a low-pass filter. Note that PC6..4 use digital supply voltage, VCC.
AREF
AREF is the analog reference pin for the A/D Converter.
ADC7:6 (TQFP and QFN/MLF Package Only)
In the TQFP and QFN/MLF package, ADC7:6 serve as analog inputs to the A/D converter.
These pins are powered from the analog supply and serve as 10-bit ADC channels.
EYE DETECTOR
The population of our country has been increasing rapidly, which indirectly increases vehicle density and leads to many road accidents. The aim of the project is to minimize road accidents, which cause the loss of invaluable human life and other valuable goods. Besides this, provision for the safety of the vehicle is also made to prevent theft. In this fast-moving world, new technologies evolve constantly to improve human lifestyles. There have already been enormous advancements in automobile technology, with more still to come. Because of these technologies, we now enjoy considerable comfort and safety. Yet many accidents still happen nowadays, because of increased vehicle density, rule violations, and carelessness. Here, embedded technology is used to prevent accidents due to drowsy driving.
By using these concepts, we hope that road accidents due to rule violations and carelessness will be minimized. This makes the project one that is needed today, with the added significance of low cost.
At present, the number of drowsy drivers has increased enormously, and so have the deaths they cause. Drowsy driving often goes undetected because other people cannot monitor the driver at all times, so an effective system is needed to check for drowsy drivers.
The sensor circuit is used to detect whether the driver's eyes are closed or not. Drowsy drivers have largely gone unchecked in society, despite existing laws. This leads to severe accidents, such as the one in Delhi in which a car ran over four road dwellers, killing them on the spot. So there is a need to develop an efficient eye detection system.
Working procedure
In our eye detection system, the buzzer circuit is controlled by interfacing a set of sensors, a logic circuit and an ATmega328 microcontroller. The eye-detector sensor unit gives an output according to the condition of the eye, which passes through the logic circuit to the ATmega328 microcontroller. Depending on this output, the ATmega328 microcontroller controls the buzzer.
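The working procedure above can be outlined as a simple polling loop. This is a hedged sketch only: `logic_circuit` assumes the per-eye sensor outputs are combined with an AND gate, and `run_monitor` stands in for the firmware loop on the microcontroller, neither of which is specified in the report.

```python
# Assumed logic-circuit stage: both eye sensors must report "closed"
# (True) before the combined signal reaches the microcontroller.
def logic_circuit(left_closed, right_closed):
    return left_closed and right_closed

# Stand-in for the microcontroller's polling loop: each combined sensor
# reading maps directly to a buzzer state (True = buzzer on).
def run_monitor(sensor_readings):
    return [logic_circuit(left, right) for left, right in sensor_readings]
```

For example, a frame where only one eye sensor reads closed (a partial occlusion) would leave the buzzer off, while both sensors reading closed turns it on.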
Characteristics
* High sensitivity
* Fast response and resume
* Long life and low cost
* Mini Size
TRANSISTOR DRIVER CIRCUIT:
An SPDT relay consists of five pins: two for the magnetic coil, one as the common terminal, and the last two as the normally open pin and the normally closed pin. When current flows through the coil, the coil gets energized. Initially, when the coil is not energized, there is a connection between the common terminal and the normally closed pin. When the coil is energized, this connection breaks and a new connection is established between the common terminal and the normally open pin. Thus, when there is an input from the microcontroller, the buzzer will be switched on. When the relay is on, it can drive the loads connected between the common terminal and the normally open pin. The relay therefore takes 5V from the microcontroller and drives loads which consume high currents, acting as an isolation device.
Digital systems and microcontroller pins lack sufficient current to drive the relay. While the relay's coil needs around 10 mA to be energized, the microcontroller's pin can provide a maximum of 1–2 mA. For this reason, a driver such as a power transistor is placed between the microcontroller and the buzzer.
BUZZER CIRCUITRY
TRANSISTOR DRIVER CIRCUIT:
The transistor used in this project to drive the buzzer is BC547. The features of this transistor are
discussed in the next topics.
FEATURES
· Low current (max. 100 mA)
· Low voltage (max. 65 V).
APPLICATIONS
· General purpose switching and amplification.
DESCRIPTION
NPN transistor in a TO-92 (SOT54) plastic package.
LIMITING VALUES
In accordance with the Absolute Maximum Rating System (IEC 134).
Digital systems and microcontroller pins cannot supply enough current to drive circuits such as buzzer and relay circuits: these circuits need around 10 mA to be energized, while a microcontroller pin can provide at most 1–2 mA. For this reason, a driver such as a power transistor is placed between the microcontroller and the buzzer.
The operation of this circuit is as follows:
The input to the base of the transistor is applied from the microcontroller port pin P1.0.
The transistor will be switched on when the base to emitter voltage is greater than 0.7V (cut-in
voltage). Thus when the voltage applied to the pin P1.0 is high i.e., P1.0=1 (>0.7V), the
transistor will be switched on and thus the buzzer will be activated and produces a loud noise.
When the voltage at the pin P1.0 is low i.e., P1.0=0 (<0.7V) the transistor will be in
off state and the buzzer will be off. Thus the transistor acts like a current driver to operate the
buzzer accordingly.
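The switching behaviour just described reduces to a simple threshold. The sketch below models only that logic; the 0.7 V figure is the cut-in voltage quoted in the text, and the function is a software illustration, not firmware for the actual pin.

```python
# Threshold model of the BC547 driver stage: the transistor conducts
# (and the buzzer sounds) when the base-emitter voltage applied from
# the port pin exceeds the ~0.7 V cut-in voltage.
CUT_IN_VOLTAGE = 0.7  # volts

def buzzer_on(pin_voltage):
    """True if the port-pin voltage switches the driver transistor on."""
    return pin_voltage > CUT_IN_VOLTAGE
```

A logic-high pin (about 5 V) turns the transistor on and the buzzer sounds; a logic-low pin (about 0 V) leaves it off.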
[Circuit diagram: microcontroller pin P1.0 → transistor base; buzzer between Vcc and collector; emitter to ground]
Chapter 3
Report Organization
The documentation for this project consists of 10 chapters. Chapter 4 presents the Literature Review, which serves as an introduction to current research on driver drowsiness detection systems. Chapter 5 discusses the design issues of the project, specifically the issues and concepts behind real-time image processing. Following this, Chapter 6 describes the design of the system, and Chapter 7 describes the algorithm behind the system. Chapter 8 gives additional information on the real-time system and the challenges met. Chapter 9 shows images illustrating the steps taken in localizing the eyes, followed by a discussion of some possible future directions for the project, after which the final conclusions are drawn in Chapter 10.
Chapter 4
Literature Review
4.1 Techniques for Detecting Drowsy Drivers
Possible techniques for detecting drowsiness in drivers can be generally divided into the following categories: sensing of physiological characteristics, sensing of driver operation, sensing of vehicle response, and monitoring the response of the driver.
4.1.1 Monitoring Physiological Characteristics
Among these methods, the most accurate techniques are those based on human physiological phenomena [9]. This approach is implemented in two ways: measuring changes in physiological signals, such as brain waves, heart rate, and eye blinking; and measuring physical changes such as sagging posture, leaning of the driver's head and the open/closed states of the eyes [9]. The first technique, while most accurate, is not realistic, since sensing electrodes would have to be attached directly onto the driver's body, and would hence be annoying and distracting to the driver. In addition, long periods of driving would result in perspiration on the sensors, diminishing their ability to monitor accurately. The second technique is well suited to real-world driving conditions, since it can be made non-intrusive by using optical sensors or video cameras to detect changes.
4.1.2 Other Methods
Driver operation and vehicle behaviour can be monitored through the steering wheel movement, accelerator or brake patterns, vehicle speed, lateral acceleration, and lateral displacement. These too are non-intrusive ways of detecting drowsiness, but are limited by vehicle type and driver conditions. The final technique for detecting drowsiness is
monitoring the response of the driver. This involves periodically requesting the driver to
send a response to the system to indicate alertness. The problem with this technique is that
it will eventually become tiresome and annoying to the driver.
Chapter 5
Design Issues
The most important aspect of implementing a machine vision system is the image
acquisition. Any deficiencies in the acquired images can cause problems with image
analysis and interpretation. Examples of such problems are a lack of detail due to
insufficient contrast or poor positioning of the camera: this can cause the objects to be
unrecognizable, so the purpose of vision cannot be fulfilled.
5.1 Illumination
A correct illumination scheme is crucial to ensuring that the image has enough contrast
to allow the image to be processed correctly. In the case of the drowsy driver
detection system, the light source is placed in such a way that the maximum light being
reflected back is from the face. The driver’s face will be illuminated using a 60W light
source. To prevent the light source from distracting the driver, an 850nm filter is placed
over the source. Since 850nm falls in the infrared region, the illumination cannot be
detected by the human eye, and hence does not distract the driver. Since the algorithm
behind the eye monitoring system is highly dependent on light, the following important
illumination factors must be considered [1]:
1. Different parts of objects are lit differently, because of variations in the angle
of incidence, and hence have different brightness as seen by the camera.
2. Brightness values vary due to the degree of reflectivity of the object.
3. Parts of the background and surrounding objects are in shadow, and can also
affect the brightness values in different regions of the object.
4. Surrounding light sources (such as daylight) can diminish the effect of the
light source on the object.
5.2 Camera Hardware
The next item to be considered in image acquisition is the video camera. Review of several
journal articles reveals that face monitoring systems use an infrared-sensitive camera to
generate the eye images [3],[5],[6],[7]. This is due to the infrared light source used to
illuminate the driver’s face. CCD cameras have a spectral range of 400-1000nm, and peak
at approximately 800nm. The camera used in this system is a Sony CCD black and white
camera. CCD cameras digitize the image from the outset, although in one respect – that
signal amplitude represents light intensity – the image is still analog.
5.3 Frame Grabbers and Capture Hardware
The next stage of any image acquisition system must convert the video signal into a format
which can be processed by a computer. The common solution is a frame grabber board that
attaches to a computer and provides the complete video signal to the computer. The
resulting data is an array of greyscale values, and may then be analysed by a processor to
extract the required features. Two options were investigated when choosing the system’s
capture hardware. The first option is designing a homemade frame grabber, and the second
option is purchasing a commercial frame grabber. The two options are discussed below.
5.3.1 Homemade Frame Grabbers: Using the Parallel Port
Initially, a homemade frame grabber was going to be used. The design used was based
on the ‘Dirt Cheap Frame Grabber (DCFG)’, developed by Michael Day [2].
A detailed circuit description of the DCFG is beyond the scope of this report, but important
observations of this design were made. The DCFG assumes that the video signal will be
NTSC, complying with the RS-170 video standard. Although the DCFG successfully grabs
the video signal and digitizes it, the limitation to this design was that the parallel port could
not transfer the signal fast enough for real-time purposes. The DCFG was tested on two
different computers, a PC (Pentium I – 233MHz) and a laptop (Pentium III – 700MHz).
The laptop parallel port was much slower than the PC’s. Using the PC, which has a parallel
port of 4MHz, it was calculated that it would take approximately a minute to transfer a 480
x 640 pixel image. Because of the slow transfer rate of the parallel port it was concluded
that this type of frame grabber could not be used.
Another option was to build a frame grabber similar to the commercial versions. This
option was not pursued, since the main objective of the project is not to build a frame
grabber. Building a complete frame grabber is a thesis project in itself, and since the
parallel port design could not be used for real-time applications, the final option was
purchasing a commercial frame grabber.
5.3.2 Commercial Frame Grabbers: Euresys Picolo I
The commercial frame grabber chosen for this system is the Euresys Picolo I. The Picolo
frame grabber acquires both colour and monochrome image formats. The Picolo supports
the acquisition and the real-time transfer of full resolution colour images (up to 768 x 576
pixels) and sequences of images to the PC memory (video capture). The board supports
PCI bus mastering; images are transferred to the PC memory using DMA in parallel with
the acquisition and processing. With the commercial board, it is also easier to write
software to process the images for each system's own application. The board comes with
various drivers and libraries for different software (e.g., Borland C, Java). The drivers
allow the capturing and processing of images to be controlled via custom code.
Chapter 6
Design
This chapter aims to present my design of the Drowsy Driver Detection System. Each
design decision will be presented and rationalized, and sufficient detail will be given to
allow the reader to examine each element in its entirety.
6.1 Concept Design
As seen in the various references [3],[5],[6],[7],[8],[9], there are several different
algorithms and methods for eye tracking, and monitoring. Most of them in some way
relate to features of the eye (typically reflections from the eye) within a video image of the
driver.
The original aim of this project was to use the retinal reflection (only) as a means to finding
the eyes on the face, and then using the absence of this reflection as a way of detecting
when the eyes are closed. It was then found that this method might not be the best way
of monitoring the eyes, for two reasons. First, in lower lighting conditions, the amount of
retinal reflection decreases; and second, if the person has small eyes, the reflection may not
show, as seen below in Figure 6.1.
Figure 6.1: Case where no retinal reflection is present.
As the project progressed, the horizontal intensity changes idea from paper [7] was
adopted as the basis. One similarity among all faces is that the eyebrows differ significantly
from the skin in intensity, and that the next significant change in intensity, in the y-direction,
is the eyes. This facial characteristic is central to finding the eyes on the face, which
allows the system to monitor the eyes and detect long periods of eye closure.
Each of the following sections describes the design of the drowsy driver detection system.
6.2 System Configuration
6.2.1 Background and Ambient Light
Because the eye tracking system is based on intensity changes on the face, it is crucial that
the background does not contain any objects with strong intensity changes. Highly reflective
objects behind the driver can be picked up by the camera and consequently mistaken for
the eyes. Since this design is a prototype, a controlled lighting area was set up for testing.
Low surrounding light (ambient light) is also important, since the only significant light
illuminating the face should come from the drowsy driver system. If there is a lot of
ambient light, the effect of the light source diminishes. The testing area included a black
background, and low ambient light (in this case, the ceiling light was physically high, and
hence had low illumination). This setup is somewhat realistic since inside a vehicle, there is
no direct light, and the background is fairly uniform.
6.2.2 Camera
The drowsy driver detection system consists of a CCD camera that takes images of the
driver’s face. This type of drowsiness detection system is based on the use of image
processing technology that will be able to accommodate individual driver differences. The
camera is placed in front of the driver, approximately 30 cm away from the face. The
camera must be positioned such that the following criteria are met:
1. The driver’s face takes up the majority of the image.
2. The driver’s face is approximately in the centre of the image.
The facial image data is in 480x640 pixel format and is stored as an array through
the predefined Picolo driver functions (as described in a later section).
6.2.3 Light Source
For conditions when ambient light is poor (night time), a light source must be present to
compensate. Initially, the construction of an infrared light source using infrared LEDs was
going to be implemented. It was later found that at least 50 LEDs would be needed to
create a source able to illuminate the entire face. To cut down cost, a simple desk light was
used. The desk light alone could not be used to illuminate the face, since its bright light is
blinding when looked at directly. However, light from light
bulbs and even daylight all contain infrared light; using this fact, it was decided that if an
infrared filter was placed over the desk lamp, this would protect the eyes from a strong and
distracting light and provide strong enough light to illuminate the face. A wideband
infrared filter was placed over the desk lamp, and provides an excellent method of
illuminating the face. The spectral plot of the filter is shown in Appendix A.
The prototype of the Drowsy Driver Detection System is shown in Figure 6.2.
Figure 6.2: Photograph of Drowsy Driver Detection System prototype
Chapter 7
Algorithm Development
7.1 System Flowchart
A flowchart of the major functions of The Drowsy Driver Detection System is shown in
Figure 7.1.
7.2 System Process
7.2.1 Eye Detection Function
An explanation is given here of the eye detection procedure.
After inputting a facial image, pre-processing is first performed by binarizing the image.
The top and sides of the face are detected to narrow down the area of where the eyes exist.
Using the sides of the face, the centre of the face is found, which will be used as a
reference when comparing the left and right eyes.
Moving down from the top of the face, horizontal averages (average intensity value for each
y coordinate) of the face area are calculated. Large changes in the averages are used to
define the eye area.
The following sections explain the eye detection procedure in the order of the processing
operations. All images were generated in Matlab using the Image Processing Toolbox.
Binarization
The first step in localizing the eyes is binarizing the picture, i.e., converting the
image to a binary image. Examples of binarized images are shown in Figure 7.2.
a) Threshold value 100. b) Threshold value 150. c) Threshold value 200.
Figure 7.2: Examples of binarization using different thresholds.
A binary image is an image in which each pixel assumes the value of only two discrete
values. In this case the values are 0 and 1, 0 representing black and 1 representing white.
With the binary image it is easy to distinguish objects from the background. The greyscale
image is converting to a binary image via thresholding. The output binary image has values
of 0 (black) for all pixels in the original image with luminance less than level and 1 (white)
for all other pixels. Thresholds are often determined based on surrounding lighting
conditions, and the complexion of the driver. After observing many images of different
faces under various lighting conditions a threshold value of 150 was found to be effective.
The criterion used in choosing the correct threshold was that the binary image of the
driver's face should be mostly white, allowing a few black blobs from the eyes, nose
and/or lips. Figure 7.2 demonstrates the effect of varying the threshold: Figures 7.2a, 7.2b,
and 7.2c use the threshold values 100, 150 and 200, respectively. Figure 7.2b is an
example of an optimum binary image for the eye detection algorithm, in that the
background is uniformly black and the face is primarily white. This allows the edges of
the face to be found, as described in the next section.
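The thresholding step can be sketched as follows (a minimal Python re-sketch; the original processing was done in Matlab, and the function name `binarize` is an assumption):

```python
def binarize(gray, threshold=150):
    """Convert a greyscale image (values 0-255) to a binary image.

    Pixels with luminance below the threshold become 0 (black); all
    others become 1 (white). 150 is the threshold the project found
    effective under its lighting conditions.
    """
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in gray]

# Small illustrative patch: bright skin pixels and dark eye/background pixels.
patch = [[200, 40],
         [160, 149]]
print(binarize(patch))  # [[1, 0], [1, 0]]
```

With a lower threshold more of the face survives as white but background noise appears; with a higher one the face itself breaks into black blobs, matching the behaviour seen across Figures 7.2a-7.2c.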
Face Top and Width Detection
The next step in the eye detection function is determining the top and side of the driver’s
face. This is important since finding the outline of the face narrows down the region in
which the eyes are, which makes it easier (computationally) to localize the position of the
eyes. The first task is to find the top of the face: a starting point on the face is found first,
and the y-coordinate is then decremented until the top of the face is detected.
Assuming that the person's face is approximately in the centre of the image, the initial
starting point used is (100,240). The starting x-coordinate of 100 was chosen to ensure that
the starting point is a black pixel (not on the face). The following algorithm describes how
to find the actual starting point on the face, which will be used to find the top of the face.
1. Starting at (100,240), increment the x-coordinate until a white pixel is found. This
is considered the left side of the face.
2. If the initial white pixel is followed by 25 more white pixels, keep incrementing
x until a black pixel is found.
3. Count the number of black pixels following the pixel found in step 2; if a series of
25 black pixels is found, this is the right side.
4. The new starting x-coordinate value (x1) is the middle point of the left side and
right side.
Figure 7.3 demonstrates the above algorithm.
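The four steps above can be sketched as follows (Python sketch of the search along row y = 240; the helper name and the border handling are assumptions):

```python
def find_face_start_x(binary, y=240, x0=100, run=25):
    """Find the midpoint x1 between the left and right sides of the face
    along row y of a binary image (1 = white, 0 = black)."""
    width = len(binary[0])
    x = x0
    # Step 1: increment x until a white pixel is found (left side of face).
    while x < width and binary[y][x] == 0:
        x += 1
    if x >= width:
        return None
    left = x
    # Step 2: require `run` more white pixels, then continue to a black pixel.
    if any(binary[y][min(left + i, width - 1)] == 0 for i in range(run)):
        return None
    while x < width and binary[y][x] == 1:
        x += 1
    right = x
    # Step 3: require a series of `run` black pixels to confirm the right side.
    if any(binary[y][min(right + i, width - 1)] == 1 for i in range(run)):
        return None
    # Step 4: the new starting x-coordinate x1 is the midpoint of both sides.
    return (left + right) // 2
```

The runs of 25 pixels guard against isolated noise pixels being mistaken for the face boundary.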
Using the new starting point (x1, 240), the top of the head can be found. The following is
the algorithm to find the top of the head:
1. Beginning at the starting point, decrement the y-coordinate (i.e., move up the face).
2. Continue to decrement y until a black pixel is found. If y becomes 0 (the top of the
image is reached), set this as the top of the head.
3. Check whether any white pixels follow the black pixel:
i. If a significant number of white pixels are found, continue to decrement y.
ii. If no white pixels are found, the top of the head is at the point of the
initial black pixel.
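A sketch of this upward scan (Python; the `lookahead` window used to decide whether white pixels "follow" the black pixel is an assumed parameter, as the report does not give an exact count):

```python
def find_head_top(binary, x1, y0=240, lookahead=10):
    """Scan up column x1 of a binary image (1 = white, 0 = black) from
    row y0 until the top of the head is found."""
    y = y0
    while y > 0:
        if binary[y][x1] == 0:
            # Look above the black pixel: if white pixels continue, this is
            # noise inside the face, so keep moving up; otherwise it is the top.
            above = [binary[y - i][x1] for i in range(1, min(lookahead, y) + 1)]
            if sum(above) == 0:
                return y
        y -= 1
    return 0  # reached row 0: top of the image
```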
Starting point: (100, 240)
A: count 25 white pixels from this point
B: count 25 black pixels from this point
x1: middle point of A and B
Figure 7.3: Face top and width detection.
Once the top of the driver’s head is found, the sides of the face can also be found. Below
are the steps used to find the left and right sides of the face.
1. Increment the y-coordinate of the top (found above) by 10. Label
this y1 = top + 10.
2. Find the centre of the face using the following steps:
i. At point (x1, y1), move left until 25 consecutive black pixels are found; this
is the left side (lx).
ii. At point (x1, y1), move right until 25 consecutive black pixels are found;
this is the right side (rx).
iii. The centre of the face (in the x-direction) is: (lx + rx)/2. Label this x2.
3. Starting at the point (x2, y1), find the top of the face again. This will result in a
new y-coordinate, y2.
4. Finally, the edges of the face can be found using the point (x2, y2):
i. Increment the y-coordinate.
ii. Move left by decrementing the x-coordinate; when 5 consecutive black
pixels are found, this is the left side. Add the x-coordinate to an array
labelled 'left_x'.
iii. Move right by incrementing the x-coordinate; when 5 consecutive black
pixels are found, this is the right side. Add the x-coordinate to an array
labelled 'right_x'.
iv. Repeat the above steps 200 times (for 200 different y-coordinates).
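Step 4 can be sketched as follows (Python; the helper names and the exact pixel reported as the edge are assumptions):

```python
def edge_left(row, x_start, run=5):
    """Walk left from x_start until `run` consecutive black pixels are met;
    the first black pixel of that run is taken as the left edge."""
    blacks = 0
    for x in range(x_start, -1, -1):
        blacks = blacks + 1 if row[x] == 0 else 0
        if blacks == run:
            return x + run - 1
    return 0

def edge_right(row, x_start, run=5):
    """Walk right symmetrically to find the right edge."""
    blacks = 0
    for x in range(x_start, len(row)):
        blacks = blacks + 1 if row[x] == 0 else 0
        if blacks == run:
            return x - run + 1
    return len(row) - 1

def trace_face_edges(binary, x2, y2, rows=200, run=5):
    """Record the face boundary (left_x, right_x) for `rows` y-values
    below the top of the face at (x2, y2)."""
    left_x, right_x = [], []
    for y in range(y2 + 1, min(y2 + 1 + rows, len(binary))):
        left_x.append(edge_left(binary[y], x2, run))
        right_x.append(edge_right(binary[y], x2, run))
    return left_x, right_x
```

Requiring 5 consecutive black pixels, rather than a single one, keeps a lone dark pixel inside the face from being mistaken for the boundary.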
The result of the face top and width detection is shown in Figure 7.4; the edges were
marked on the picture as part of the computer simulation.
Figure 7.4: Face edges found after first trial.
As seen in Figure 7.4, the edges of the face are not accurate. Using the edges found in this
initial step would never localize the eyes, since the eyes fall outside the determined
boundary of the face. This is due to the blobs of black pixels on the face, primarily in the
eye area, as seen in Figure 7.2b. To fix this problem, an algorithm to remove the black
blobs was developed.
Removal of Noise
The removal of noise in the binary image is straightforward. Starting at the top,
(x2, y2), move left one pixel at a time by decrementing x, and set each pixel to white (for
200 y values). Repeat the same for the right side of the face. The key is to stop at the left
and right edges of the face; otherwise the information about where the edges of the face
are will be lost. Figure 7.5 shows the binary image after this process.
Figure 7.5: Binary picture after noise removal.
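The blob-removal pass just described can be sketched as follows, assuming the left and right edge arrays from the previous step are available (Python; the function name is hypothetical):

```python
def remove_blobs(binary, y2, left_x, right_x):
    """Force every pixel strictly between the recorded left and right face
    edges to white, for each y-value below the top of the face (y2). This
    wipes out the black blobs (eyes, nose, lips) while keeping the edges
    themselves intact, so a second edge-detection pass stays possible."""
    for i in range(len(left_x)):
        y = y2 + 1 + i
        if y >= len(binary):
            break
        for x in range(left_x[i] + 1, right_x[i]):
            binary[y][x] = 1
    return binary
```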
After removing the black blobs on the face, the edges of the face are found again. As
seen below, this second pass accurately finds the edges of the face.
Figure 7.6: Face edges found after second trial.
Finding Intensity Changes on the Face
The next step in locating the eyes is finding the intensity changes on the face. This is done
using the original image, not the binary image. The first step is to calculate the average
intensity for each y-coordinate; this is called the horizontal average, since the averages
are taken across the horizontal values. The valleys (dips) in the plot of the horizontal
averages indicate intensity changes. When the horizontal averages were initially plotted,
it was found that there were many small valleys which do not represent real intensity
changes, but result from small differences in the averages. To correct this, a smoothing
algorithm was implemented. The smoothing algorithm eliminated any small changes,
resulting in a smoother, cleaner graph.
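The averaging and smoothing can be sketched as follows (Python; the report does not state the exact smoothing used, so a simple moving average with an assumed window of 5 stands in for it):

```python
def horizontal_averages(gray):
    """Average intensity for each y-coordinate (row) of the image."""
    return [sum(row) / len(row) for row in gray]

def smooth(values, window=5):
    """Moving-average smoothing to suppress the small spurious valleys
    that come from tiny differences between adjacent row averages."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```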
After obtaining the horizontal average data, the next step is to find the most significant
valleys, which indicate the eye area. Assuming that the person has a uniform forehead
(i.e., little hair covering the forehead), this is based on the notion that, moving down
from the top of the face, the first intensity change is the eyebrow and the next is the
upper edge of the eye, as shown below.
Figure 7.7: Labels of the top of the head and the first two intensity changes.
The valleys are found by locating a change in slope from negative to positive, and peaks
by a change in slope from positive to negative. The size of a valley is determined by the
distance between the peak and the valley. Once all the valleys are found, they are sorted
by size.
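A sketch of the valley search (Python; the exact peak bookkeeping and tie handling are assumptions):

```python
def find_valleys(values):
    """Locate valleys (slope change negative-to-positive) in a list of
    smoothed horizontal averages and return their indices sorted by valley
    size, largest first. A valley's size is the drop from the preceding
    peak (slope change positive-to-negative) to the valley floor."""
    valleys = []
    last_peak = values[0]
    for i in range(1, len(values) - 1):
        if values[i - 1] < values[i] > values[i + 1]:   # peak
            last_peak = values[i]
        if values[i - 1] > values[i] < values[i + 1]:   # valley
            valleys.append((last_peak - values[i], i))
    return [i for _, i in sorted(valleys, reverse=True)]
```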
Detection of Vertical Eye Position
The largest valley with the lowest y-coordinate is the eyebrow, and the next-largest
valley with the next-lowest y-coordinate is the eye. This is shown in Figures 7.8a
and 7.8b.
a) Graph of horizontal averages of the left side of the face, with the first and second
intensity changes (valleys) labelled.
b) Position of the left eye found from the valleys in a).
Figure 7.8: Result of using horizontal averages to find the vertical position of the eye.
This process is done for the left and right sides of the face separately, and the eye areas
found on each side are then compared to check whether the eyes have been found
correctly. Calculating the left side means taking the averages from the left edge to the
centre of the face, and similarly for the right side. The two sides are done separately
because the horizontal averages are not accurate when the driver's head is tilted. For
example, if the head is tilted to the right, the horizontal average of the eyebrow area will
mix the left eyebrow with, possibly, the right-hand side of the forehead.
7.2.2 Drowsiness Detection Function
Determining the State of the Eyes
The state of the eyes (whether they are open or closed) is determined by the distance
between the first two intensity changes found in the previous step. When the eyes are
closed, the distance between the y-coordinates of the intensity changes is larger than
when the eyes are open. This is shown in Figure 7.9.
Open eye: distance between the two valleys = 32.
Closed eye: distance between the two valleys = 55.
Figure 7.9: Comparison of open and closed eye.
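The distance test can be sketched as follows (Python; the function name is hypothetical, and the 5-80 pixel window is borrowed from the real-time system's comparison against the open-eye reference):

```python
def eye_closed(eyebrow_y, eye_y, open_reference, window=(5, 80)):
    """Judge the eye state from the eyebrow-to-eye valley distance.

    The distance grows when the eye closes (e.g. 32 pixels open vs 55
    closed in Figure 7.9). If the current distance exceeds the open-eye
    reference by an amount within the 5-80 pixel window, the eye is
    judged closed.
    """
    diff = (eye_y - eyebrow_y) - open_reference
    return window[0] <= diff <= window[1]
```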
The limitation to this is if the driver moves their face closer to or further from the camera.
If this occurs, the distances will vary, since the number of pixels the face takes up varies, as
seen below. Because of this limitation, the developed system assumes that the driver's face
stays at almost the same distance from the camera at all times.
Judging Drowsiness
When the eyes are found closed for 5 consecutive frames, the alarm is activated and the
driver is alerted to wake up. A consecutive number of closed frames is required to avoid
counting eye closures due to ordinary blinking. The criterion for judging the alertness level
on the basis of the eye-closure count is based on the results of a previous study [9].
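The frame-counting rule can be sketched as a small state holder (Python; the class name is an assumption):

```python
class DrowsinessMonitor:
    """Counts consecutive closed-eye frames to distinguish drowsiness
    from ordinary blinking; the alarm fires at 5 consecutive frames."""
    LIMIT = 5

    def __init__(self):
        self.closed_frames = 0

    def update(self, eyes_closed):
        """Feed one frame's eye state; returns True when the alarm fires."""
        self.closed_frames = self.closed_frames + 1 if eyes_closed else 0
        return self.closed_frames >= self.LIMIT
```

A blink of one or two frames resets the counter when the eyes reopen, so only sustained closure triggers the alarm.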
Chapter 8
Algorithm Implementation
8.1 Real-time System
The real-time system includes a few more functions when monitoring the driver, in order to
make the system more robust. There is an initialization stage in which, for the first 4
frames, the driver's eyes are assumed to be open, and the distance between the y-coordinates
where the intensity changes occur is set as a reference. After the initialization stage, the
distances calculated are compared with the one found in the initialization stage. If a larger
distance is found (a difference of between 5 and 80 pixels), the eye is determined to be
closed.
Another addition to the real-time system is a comparison of the left and right eyes found.
The left and right eyes are found separately, and their positions are compared. Assuming
that the driver’s head is not at an angle (tilted), the y – coordinates of the left and right eye
should be approximately the same. If they are not, the system determines that the eyes have
not been found and outputs a message indicating that the system is not monitoring the
eyes; it then continues trying to find the eyes. This addition is also useful when the driver
is out of the camera's sight, in which case the system should indicate that no drowsiness
monitoring is taking place. If there are 5 or more consecutive frames in which the eye is
not found, an alarm goes off. This accounts for the case where the driver's head has
dropped down completely, and hence an alarm is needed to alert the driver.
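The initialization stage and the left/right consistency check can be sketched as follows (Python; the function names and the 15-pixel tolerance are assumptions, as the report does not give an exact value):

```python
def initial_reference(distances, init_frames=4):
    """Average eyebrow-to-eye distance over the first frames, during which
    the driver's eyes are assumed to be open."""
    return sum(distances[:init_frames]) / init_frames

def eyes_consistent(left_eye_y, right_eye_y, tolerance=15):
    """Left/right consistency check. With the head upright, both eyes
    should lie at roughly the same y-coordinate; a large disagreement
    means the eyes were not found and the system should report that it
    is not monitoring."""
    return abs(left_eye_y - right_eye_y) <= tolerance
```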
8.2 Challenges
8.2.1 Obtaining the image
The first, and probably most significant challenge faced was transferring the images
obtained from the camera to the computer. Two issues were involved with this challenge:
1) Capturing and transferring in real time; 2) Having access to where the image was being
stored. Initially, the ‘Dirt Cheap Frame Grabber’ was constructed and tested. It was later
found that this hardware (which uses the parallel port) would not be fast enough, and it
would be difficult to write software to process the images in conjunction with the hardware.
8.2.2 Constructing an effective light source
Illuminating the face is an important aspect of the system. Initially, a light source
consisting of 8 IR LEDs and a 9V battery was constructed and mounted onto the camera. It
was soon realized that the source was not strong enough. After reviewing other literature, it
was concluded that approximately 50 LEDs would be needed to build a sufficiently strong
light source. To reduce the cost, a desk lamp and IR filter were used (as described previously).
8.2.3 Determining the correct binarization threshold
Because of varying facial complexions and ambient light, it was very hard to determine the
correct threshold for binarization. The initial thought was to choose a value that would
result in the least amount of black blobs on the face. After observing several binary images
of different people, it was concluded that a single threshold that would result in similar
results for all people is impossible. Histogram equalization was attempted, but resulted in
no progress. Finally it was decided that blobs on the face were acceptable, and an algorithm
to remove such blobs was developed.
8.2.4 Not being able to use edge detection algorithms
After reading many technical papers and image processing textbooks, it was thought that
locating the eyes would be a trivial task if edge detection were used. This notion turned out
to be incorrect. A Sobel edge detection program was written and tested in C. Figure 8.1
shows the result: as can be seen, there are too many edges to find the outline of the face.
Figure 8.1b shows the result for the eye area alone. It was also thought that in order to
determine
whether the eye was open or closed, an image processing algorithm to find circles could be
used. That is, the absence of a circle would mean that the eye is closed. This also failed
since many times the circle representing the iris was broken up. In addition, in order to use
a circle finding algorithm, the radius of the circle must be known, and this varies depending
on the distance of the driver from the camera.
a) Whole image. b) Eye area.
Figure 8.1: Results of using Sobel edge detection.
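The Sobel operator used in this experiment can be re-sketched as follows (the original program was written in C; this Python version computes the standard 3×3 Sobel gradient magnitude):

```python
def sobel_magnitude(gray):
    """Standard 3x3 Sobel gradient magnitude, clipped to 0-255.

    Interior pixels only; the one-pixel border is left at zero.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * gray[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = min(255, int((gx * gx + gy * gy) ** 0.5))
    return out
```

On a real face image this responds to every texture boundary, which is exactly why the resulting edge map (Figure 8.1) was too cluttered to isolate the face outline.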
8.2.5 Case when the driver’s head is tilted
As mentioned previously, if the driver’s head is tilted, calculating the horizontal averages
from the left side of the head to the right side is not accurate. It is not realistic to assume
that the driver’s head will be perfectly straight at all times. To compensate for this, the left
and right horizontal averages are found separately.
8.2.6 Finding the top of the head correctly
Successfully finding the top of the head was a big challenge. Depending on the binarization
of the face, the top of the head could not always be found correctly. The problem was the
black pixels on the face in the binary image: in some instances the nose, eye or lips were
taken to be the top of the head because of their black blobs. Drivers with facial hair were
also often a source of this error. After implementing the noise removal algorithm, this
problem was eliminated.
8.2.7 Finding bounds of the functions
Many bounds are defined in the system algorithm and source code. For example, when
finding the edges of the face, 200 points in the y-direction are used; this represents the
approximate length of the face from the top of the head (for a given distance from the
camera). Another example is counting 25 points to the left and right of the centre point
when finding the sides of the face; this represents the approximate width of the face.
Both of these values, along with other bounds were a challenge to determine. After
examining many images of different faces, and testing different values, these bounds were
determined.
SOFTWARE TOOLS
Installation of required software applications
Installation of AVR Studio
Insert the supplied CD into the CD-ROM drive; the following items will be found on the
CD. Double-click the AVR Studio icon and follow the steps. Click Finish to complete the
installation.
Installation of WinAVR
Insert the supplied CD into the CD-ROM drive; the following items will be found on the
CD. Double-click the WinAVR icon.
Chapter 9
Results and Future Work
9.1 Simulation Results
The system was tested on 15 people and was successful with 12, resulting in 80%
accuracy. Figures 9.1-9.6 below show an example of the step-by-step result of finding
the eyes.
Figure 9.1: Demonstration of first step in the process for finding the eyes – original image.
Figure 9.2: Demonstration of second step – binary image.
Figure 9.3: Demonstration of third step – initial edge detection image.
Notice that initially the edges are not found correctly.
Figure 9.4: Demonstration of the fourth step - removing blobs in binary image.
Figure 9.5: Demonstration of step 5 - second edge detection.
Notice that the second time the edges are found accurately.
Figure 9.6: Result of final step - finding the left eye using intensity information.
9.2 Limitations
With 80% accuracy, the system clearly has limitations. The most significant is that it does
not work for people with very dark skin. This follows from the fact that the core of the
algorithm is based on binarization: for dark-skinned drivers, binarization does not produce
a predominantly white face, so the eyes cannot be localized.
Another limitation is that there cannot be any reflective objects behind the driver; the
more uniform the background, the more robust the system. For testing purposes, a black
sheet was put up behind the subject to eliminate this problem.
For testing, rapid head movement was not allowed. This may be acceptable, since it is
roughly equivalent to simulating a tired driver. For small head movements, the system
rarely loses track of the eyes; when the head was turned too far sideways, there were some
false alarms.
The system has problems when the person is wearing eyeglasses: localizing the eyes is not
a problem, but determining whether the eyes are open or closed is.
9.3 Future Work
Currently there is no adjustment of the camera's zoom or direction during operation.
Future work may be to automatically zoom in on the eyes once they are localized. This
would avoid the trade-off between having a wide field of view in order to locate the eyes,
and a narrow view in order to detect fatigue.
This system only looks at the number of consecutive frames where the eyes are closed.
At that point it may be too late to issue the warning. By studying eye movement patterns,
it is possible to find a method to generate the warning sooner.
Using 3D images is another possibility for finding the eyes. The eyes are the deepest part of
a 3D image, and this may be a more robust way of localizing them.
Adaptive binarization is an addition that can help make the system more robust. This may
also eliminate the need for the noise removal function, cutting down the computations
needed to find the eyes. This will also allow adaptability to changes in ambient light.
The system does not work for dark skinned individuals. This can be corrected by having
an adaptive light source. The adaptive light source would measure the amount of light
being reflected back. If little light is being reflected, the intensity of the light is increased.
Darker-skinned individuals need much more light, so that when the binary image is
constructed, the face is white and the background is black.
Chapter 10
Conclusion
A non-invasive system to localize the eyes and monitor fatigue was developed. Information
about the head and eye positions is obtained through various self-developed image
processing algorithms. During monitoring, the system is able to decide whether the eyes are
open or closed. When the eyes have been closed for too long, a warning signal is issued.
In addition, during monitoring, the system is able to automatically detect any eye localizing
error that might have occurred. In case of this type of error, the system is able to recover
and properly localize the eyes. The following conclusions were made:
• Image processing achieves highly accurate and reliable detection of drowsiness.
• Image processing offers a non-invasive approach to detecting drowsiness, without
annoyance or interference to the driver.
• A drowsiness detection system developed around the principle of image
processing judges the driver’s alertness level on the basis of continuous eye
closures.
All the system requirements described in Chapter 2 were met.
Area of application
Road users have long been known to fall asleep whilst driving. Driving long hours can
induce fatigue causing lack of concentration and occasionally road accidents.
This is even more critical on motorways where traffic travels at higher speeds and
drivers can succumb to motorway hypnosis as the repetitive and somewhat passive
experience of driving on long, wide, straight roads can cause the driver to relax and
lose concentration from the road and traffic around them.
This puts drivers on the road who are legally fit to drive but physically are not. Many of
these drivers are unaware of their fatigue and of the danger they pose to themselves and
other road users. A main application of this system is to protect such people by alerting
them to their fatigue.
REFERENCES
1. www.howstuffworks.com
2. Embedded Systems, Raj Kamal
3. ATmega328 Microcontroller and Embedded Systems, Mazidi
4. Magazines
5. Electronics For You
6. Electrikindia
7. www.google.com
8. www.electronicprojects.co