Lane Departure and Obstacle Detection Algorithm for use in an Automotive Environment

By Diarmaid O Cualain
Supervised by Dr. Martin Glavin
B.E. Electronic Engineering Thesis
March 2006
Declaration of Originality

I hereby declare that this thesis is my original work except where stated.

Signature: ________________________    Date: ______________________
Abstract

Today, one of the largest areas of research and development in the automobile industry is road safety. Many deaths and injuries occur every year on public roads from accidents that technology could have been used to prevent. The latest vehicles sold boast many safety features that have helped to lower the number of accidents on the roads. These include seatbelts, crumple zones, Anti-lock Braking Systems (ABS), air bags, traction control, Electronic Stability Control (ESC), and so on. These technologies have benefited from the large advances made in computer and electronic technology in the past few years to become cheaper, more robust, and more reliable. As such, it is predicted that many more safety technologies will be developed for use in vehicles of the future. Legislation, consumer needs, and other factors will only serve to increase the need for these devices.

For this project, an investigation into one of these safety systems was performed. This project consists of the research and development of an algorithm for an automotive system to detect when the vehicle drifts out of lane, or when the vehicle is within the safe stopping distance of an obstacle in its path. Once one of these situations is detected, a warning is issued to the driver. For input, the system has a single CCD camera module, along with the speed of the vehicle and the wiper setting to calculate the safe stopping distance. The system was able to identify to a satisfactory level when the vehicle drifted out of lane. The obstacle and collision detection section of the algorithm also worked to a certain extent, but issues such as shadows in the images meant that it was only accurate for short distances. However, the main aim of this project was to show that such a concept was possible, and this has been proven to a certain extent.

This report summarises the background, the design, the development, and the testing of the algorithm for this project.
Acknowledgments

I would like to thank the following people for their help and support during the course of this project:

Firstly, I would like to thank my supervisor, Dr. Martin Glavin. He was always there to lend a hand or support when needed.

Second, I would like to thank Ciaran Hughes. If it were not for his answers to my many questions I would have found it very difficult, if not impossible, to reach the final stage that I did with my project.

I would also like to thank the electronic technicians of Nuns Island: Aodh Dalton, Myles Meehan, Martin Burke, and Sean Porter. They helped with any technical difficulties that I encountered with equipment or software over the years.

Lastly, I wish to thank my parents for their help and support over the course of my studies.
Table of Contents

Declaration of Originality
Abstract
Acknowledgments
Table of Contents
Table of Figures
Table of Tables
List of Abbreviations

Chapter 1 Introduction
  1.1 Concept of Project
  1.2 Core Objectives
  1.3 Basic Assumptions
    1.3.1 High Contrast Roads and Lane Markings
    1.3.2 Dark Images
    1.3.3 Image Information
    1.3.4 Image Frame Rate
  1.4 Outline of Report

Chapter 2 Background Research
  2.1 Current Systems
    2.1.1 Citroen "LDWS"
    2.1.2 Mercedes-Benz "Distronic"
    2.1.3 Honda "HiDS"
    2.1.4 Toyota Lexus "AODS"
    2.1.5 Nissan "ICC"
    2.1.6 Volkswagen "ACC"
    2.1.7 BMW "ACC"
    2.1.8 Other Manufacturers' Systems
  2.2 CCD Camera
  2.3 MATLAB
  2.4 Lane Departure & Object Detection Algorithms
  2.5 Summary

Chapter 3 Database of Images
  3.1 Artificial Images
  3.2 "Real World" Images
    3.2.1 Various Road Surfaces and Markings
    3.2.2 Lane Detection Images
    3.2.3 Object Detection Images
  3.3 Summary

Chapter 4 Analysis of Project Components
  4.1 Standard Features of the Road
  4.2 Dividing Algorithm into Modules
  4.3 Analysis of the Lane Detection Module
    4.3.1 Characteristics for Lane Detection
    4.3.2 Lane Detection Assumptions
  4.4 Analysis of the Lane Departure Detection Module
    4.4.1 Characteristics for Lane Departure Detection
    4.4.2 Lane Departure Detection Assumptions

Chapter 5 Lane Detection Module
  5.1 Solutions for Module
  5.2 The Algorithm
    5.2.1 MATLAB Image Matrices
    5.2.2 Horizon Filter
    5.2.3 Colour Filter
    5.2.4 Noise Removal
  5.3 Summary

Chapter 6 Lane Departure Detection Module
  6.1 Solutions for Module
  6.2 The Hough Transform
  6.3 Edge Detection
    6.3.1 Sobel Method
  6.4 Average Angles Algorithm
  6.5 Cluster Angles Algorithm
    6.5.1 Clustering Algorithms
    6.5.2 Implementation of Clustering Algorithm
    6.5.3 Inverse Hough Transform
    6.5.4 Calculation of Lane Departure
  6.6 Summary

Chapter 7 Object Detection
  7.1 Introduction
  7.2 Solutions for Module
    7.2.1 Area of Interest
    7.2.2 Object Detection
  7.3 Summary

Chapter 8 Collision Detection
  8.1 Solutions for Module
  8.2 Safe Stopping Distance Calculator
  8.3 Summary

Chapter 9 Testing
  9.1 Lane Detection and Departure Modules
  9.2 Obstacle and Collision Detection Modules
  9.3 Summary

Chapter 10 Conclusions and Future Work
  10.1 Conclusions
  10.2 Future Work

References
Bibliography
Appendix A: Sample Image Outputs from Testing
Appendix B: Tables and Graphs
Appendix C: CD
Table of Figures

Figure 1.1: Forward Facing Camera
Figure 1.2: Basic Outline of Project System
Figure 2.1: Distronic Radar System
Figure 2.2: Distronic's Internal Electronics
Figure 2.3: Honda's HiDS System
Figure 2.4: Bosch Radar Module Internals
Figure 2.5: General Flowchart for Radar-Based Systems
Figure 3.1: Artificially Generated Test Image
Figure 3.2: Motorway Surface with Road Markings
Figure 3.3: Lane Departure Stand Set-up
Figure 3.4: Object Detection Set-up
Figure 4.1: Road Features
Figure 4.2: Sub-Modules of Algorithm
Figure 4.3: Typical Image Frame from Camera
Figure 4.4: Drifting out of Lane
Figure 5.1: Artificial Image of Road Surface
Figure 5.2: Boundary Detection of Road Image
Figure 5.3: Cartesian and MATLAB Image Space
Figure 5.4: Image with Horizon Removed
Figure 5.5: Output from White Road Marking Filter
Figure 5.6: Output from Yellow Road Marking Filter
Figure 6.1: Early Lane Departure Detection Algorithm
Figure 6.2: Hough Data from 3 Points
Figure 6.3: Hough Space Graph
Figure 6.4: Sobel Edge Detection Masks
Figure 6.5: Canny Edge Detection of Yellow & White Road Markings
Figure 6.6: Hough Transformation of Yellow Lane Markings
Figure 6.7: Hough Transformation of White Lane Markings
Figure 6.8: Hough Peaks of Yellow Lane Markings
Figure 6.9: Hough Peaks of White Lane Markings
Figure 6.10: Sample Output from Hough Peak Detection
Figure 6.11: Issue with Average Angles Algorithm
Figure 6.12: Road Marking Segments
Figure 6.13: Findcluster GUI
Figure 6.14: Cluster Centroids
Figure 6.15: Output from Cluster Angles Algorithm
Figure 7.1: Rectangle Search Method
Figure 7.2: Area of Interest Filter
Figure 7.3: Shadow/Tyre Filter
Figure 7.4: Sobel Edge Detection of Area of Interest
Figure 7.5: Noise Removal
Figure 8.1: Forces on Stopping Vehicle
Figure 9.1: Artificial Test Image for Inverse Hough Transform
Figure 9.2: Artificial Test Image for Clustering Algorithm
Figure 10.1: Inter-Communication Between Vehicles
Table of Tables

Table 5.1: White RGB Values
Table 5.2: Yellow RGB Values
Table 3: Values of Theta and Rho Measured as Left Lane Departure Occurs
Table 4: Values of Theta and Rho as Right Lane Departure Occurs
Table 5: Pixel Height vs. Metres Distance
List of Abbreviations

ABS     Anti-lock Braking System
ACC     Adaptive Cruise Control
CCD     Charge-Coupled Device
ECU     Engine Control Unit
GUI     Graphical User Interface
LED     Light Emitting Diode
MATLAB  Matrix Laboratory
NRA     National Roads Authority of Ireland
RGB     Red Green Blue
Chapter 1 Introduction

1.1 Concept of Project

Road safety is an important issue to any nation in the world today. In Ireland, collisions on our roads cause many fatalities every year. The National Roads Authority of Ireland (NRA) [1] reported that for 2003, 335 people died as a result of collisions on public roads. Governments have long been trying to reduce this number of accidents, and with some success. Better roads and road markings, driver education, and so on have all helped curb the number of fatalities each year. Manufacturers of road vehicles have also been working to reduce the number of accidents. They have used the latest of today's technology to make vehicles that are much safer than their predecessors. Advances in computers, materials, electronics, and other areas have allowed them to decrease the number of accidents that their vehicles are involved in, and to improve the chance of the occupants walking away from an accident without injury. Today, many buyers of new vehicles list safety as one of the highest priorities when choosing a car. Manufacturers have long known this, and use safety as one of their main selling points for their products, as can be seen in most Volvo, Mercedes-Benz, or Renault advertisements.

A new and fast growing area of vehicle safety is collision detection and avoidance. This has only come about lately from the advances made in computer technology, image processing, and electronics, and from the falling cost of the hardware. Companies like Mercedes-Benz utilise a radar system ("Pre-Safe") on the S-Class cars that can detect obstacles in the path of the vehicle [2], and apply the brakes faster than the driver can. This system also provides adaptive cruise control, which allows the car to regulate its speed according to the car in front while under cruise control.

With the falling cost of camera technology, many automobile manufacturers are starting to equip their vehicles with video cameras positioned at various places around the body of the vehicle. This is done in a bid to remove any "blind spots" that the driver may have when driving or reversing. The cameras are also finding applications in other areas of road safety. Honda has developed a system that utilises a camera mounted beside the rear-view mirror to recognise the lane the vehicle is travelling in. It applies this information to the steering to keep the vehicle centred in the lane. They have named this system "LKAS", or "Lane Keeping Assist System". This system was developed in the hope of reducing the number of fatalities caused by cars drifting out of lane. A 1995 British Medical Journal report stated that around 20% of road accidents on roads such as motorways are caused by the driver falling asleep at the wheel [3]. It is these types of accidents that these systems are developed to prevent.

For this project, a system similar in concept to both Mercedes' "Pre-Safe" and Honda's LKAS was developed. The aim of this project was to research and develop an algorithm for detecting when the vehicle drifts out of lane, or when it is within the safe stopping distance of an object ahead of it. Like LKAS, this system is vision based for the lane detection part of the algorithm, using a CCD camera [Figure 1.1]. For the object detection part, a different approach to Mercedes' was taken. Instead of using a short wave radar sensor, the camera was again used to calculate the distance to the object to determine whether it was within the safe stopping distance. This was done to see if such a system was possible, because any final implementation would be cheaper to produce.

Figure 1.1: Forward Facing Camera

1.2 Core Objectives

The objective of this project is the development of the lane departure and obstacle detection algorithm. This can be broken down into two main modules:

1. Lane departure detection module
2. Obstacle detection module
The task of the lane departure detection module in the algorithm is to analyse the frames of video captured by the CCD camera to evaluate the lane that the vehicle is travelling in. It calculates from this whether the vehicle is drifting out of lane, and if so, warns the driver. The obstacle detection module analyses the frames of video captured, and processes them to find objects in the path of the vehicle. It determines the distance of these objects from the vehicle, and investigates whether they are within the safe stopping distance of the vehicle. It calculates the safe stopping distance based on the vehicle's speed and the wiper setting. The wiper setting is used to determine if it is raining, so that a longer stopping distance can be taken into account for the wet road surface. If it is found that the object is within the safe stopping distance of the vehicle, a warning is issued to the driver [Figure 1.2]. A sketch of this calculation is given below.

Figure 1.2: Basic Outline of Project System
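As an illustration of the stopping-distance check described above, the following is a minimal MATLAB sketch. The friction coefficients, reaction time, and function name are illustrative assumptions, not values or code taken from the project itself (the project's own treatment is given in Chapter 8).

function warn = checkStoppingDistance(speedKmh, wipersOn, obstacleDistM)
% Minimal sketch of the safe stopping distance check (assumed values).
% speedKmh      - vehicle speed from the speedometer, in km/h
% wipersOn      - true if the wipers are active (taken as "raining")
% obstacleDistM - distance to the detected obstacle, in metres

v = speedKmh / 3.6;          % convert speed to m/s
tReaction = 1.5;             % assumed driver reaction time, seconds
if wipersOn
    mu = 0.4;                % assumed tyre/road friction, wet surface
else
    mu = 0.7;                % assumed tyre/road friction, dry surface
end
g = 9.81;                    % gravitational acceleration, m/s^2

% Reaction distance plus braking distance; the braking term comes
% from equating kinetic energy to the work done by friction.
safeStopDist = v * tReaction + v^2 / (2 * mu * g);

warn = obstacleDistM <= safeStopDist;   % warn if the obstacle is too close
end

For example, at 100 km/h with the wipers on, this sketch gives a safe stopping distance of roughly 140 m, so an obstacle detected at 60 m would trigger the warning.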
1.3 Basic Assumptions

Some basic assumptions were made to aid the understanding and the speed of development of the project. These assumptions were carefully made so as not to compromise the robustness of the algorithm. More specific assumptions in relation to smaller areas of the project are given in [Chapter 4.3] and [Chapter 4.4]. The basic assumptions are as follows:

1.3.1 High Contrast Roads and Lane Markings

The system is to be developed to work on tarmac or concrete roads. These roads will have high contrast road markings showing the lane that the vehicle is to travel in. The system does not have to be developed to function on dirt tracks or on cross-country tracks. This assumption was made because it makes analysis of the problem a lot simpler and easier to manage. Also, the biggest use of the lane departure algorithm would be on motorways, where 20% of accidents are caused by the driver falling asleep at the wheel [4].

1.3.2 Dark Images

The system is designed to work during the daytime. Therefore, any images to be analysed by the algorithm will be relatively bright and high contrast, as mentioned in [Chapter 1.3.1]. This assumption was made to allow the development of the algorithm as a concept, without having to spend time trying to develop methods to process useful information from dark images.

1.3.3 Image Information

The information needed by the algorithm to detect both the vehicle drifting out of lane and objects in the path of the vehicle can be found from each frame individually from the camera. This means that the algorithm will not need to keep in memory any information from the previous frames. This assumption has the advantage of allowing the algorithm to respond more quickly when an error is found (i.e. a warning will be output within 1/25th of a second, instead of having to examine several frames before reaching an answer: n frames needed to calculate an answer = n/25 seconds taken). It also means that the problem can be made simpler and easier to comprehend.

The disadvantage of this assumption is that information is lost from one frame to another. For example, when a car drifts over a lane marking, the frames taken by the camera show the road marking drifting from one side of the image to the other.

1.3.4 Image Frame Rate

The camera outputs images at approximately 25 frames per second, the standard frame rate for PAL video. However, since this system analyses each frame independently, the frame rate does not have a large influence on its performance.

Most of these assumptions were made to simplify the problem so that a better understanding could be gained of what needed to be done. This allowed most of the work to be put into solving the concept of the problem, without having to spend too much time on "tweaking" various aspects of the algorithm so that it could work under all conditions and inputs. Chapter 10.2 investigates various methods and ideas for improving the algorithm by removing some of these assumptions. These could be implemented in a future version of the algorithm to improve robustness even further.

1.4 Outline of Report

This thesis is divided into chapters, each dealing with a different aspect of the project. Each chapter has a short introduction explaining its subject, and a summary. The following is a short overview of each of the chapters:

Chapter 2: Outlines some of the research made on the project at the beginning. More research was made as the project developed, as new areas needed to be investigated. This research is summarised in the various chapters.

Chapter 3: Summarises how a database of images was sourced and collected for testing during, and after, the development of the algorithm.
Chapter 4: Gives a brief summary of how the algorithm was broken down into modules to aid in the understanding and development of the system.

Chapter 5: Reviews the work done in the development of the lane detection module of the algorithm, the problems encountered, and the solutions devised.

Chapter 6: Outlines the work done on the lane departure detection module.

Chapter 7: Gives a summary of the research and development work done on the obstacle detection part of the algorithm.

Chapter 8: Gives a brief summary of how a solution was devised for calculating the safe stopping distance of the vehicle, and how this was implemented in the final algorithm.

Chapter 9: Reviews the testing done on the various modules, and on the final algorithm.

Chapter 10: Concludes the project, and outlines possible areas where future work could be done.

Appendices: Contain a sample of the test image outputs and some of the graphs and tables used.
Chapter 2 Background Research

This chapter details the research made into the subject matter of the project. It gives a brief description of some of the various technologies used in current systems on road vehicles. It also lists some of the more advanced technologies that may have an application in these systems.

2.1 Current Systems

Similar systems already implemented in consumer vehicles vary from the relatively simple, like the system by Citroen, to the complex, as in Mercedes' system. Many other automobile manufacturers have similar systems on the market or in development. Since the technology is still in the early stages of implementation, many of these systems are rather expensive. Because of this, manufacturers have only utilised these systems in their higher-end vehicle models. This is the case with nearly any new automotive technology when it comes onto the consumer market for the first time. Previously, satellite navigation, air bags, seatbelt pre-tensioners, and various other technologies were only available to consumers who bought the highest-end model of vehicle. Now that these technologies have matured somewhat, the cost has dropped considerably, and nearly every manufacturer has them as standard in the most basic of models, or as an affordable extra. Once these technologies start to become mass-produced, competition drives down the cost considerably. Therefore, it is reasonable to assume that "driver assistance" systems will be available in most consumer vehicles in only a few years' time.

2.1.1 Citroen "LDWS"

Citroen's LDWS (Lane Departure Warning System) is one of the simplest lane detection systems on the market today. It was developed to help curb the number of accidents caused by drivers falling asleep at the wheel on motorways. It consists of six sensors fitted on the underside of the front bumper. Each sensor has an infrared LED and a detection cell. The cells detect when the vehicle drifts out of lane by variations in the infrared light reflected off the road markings. This is then sent to the ECU, which processes the information and sends a warning to the driver through vibrating motors built into the driver's seat. If the driver drifts to the left, the left motor vibrates; for the right, the motor on the right vibrates.

The benefits of this approach are that it is relatively cheap and robust, and not much computation has to be done. Other road markings, such as directional arrows, can also be detected without having to resort to a reprogram or redesign. The negative aspect is that the undersides of vehicles are notoriously harsh environments for electronic sensors. The sensors have to cope with flying gravel, dirt, vibration, water, salt, and so forth. This system is available on the Citroen C5, C6, and other models [5].

2.1.2 Mercedes-Benz "Distronic"

Mercedes-Benz (owned by Daimler Chrysler) was one of the first car manufacturers to introduce a system to determine the distance from the vehicle to an object ahead, in 1999 [Figure 2.1]. Their system is radar based, with the radar module located behind the Mercedes badge on the front grille. The technology was developed in partnership by Automotive Distance Control [6] (formerly ITT), Daimler-Benz's Temic, and the optics company Leica. Leica's expertise was mostly in infrared devices, while ITT's was in radar-based technology. Temic was mostly concerned with braking issues.

Figure 2.1: Distronic Radar System

This stand-alone module contains the transmitter, receiver, and electronics needed. This is unlike the other manufacturers, who use the ECU to perform the processing. The unit contains three transmitters and receivers [Figure 2.2] [7]. The newest system, the Distronic Plus, works as follows. A 24 GHz radar sweep is sent out at 80 degrees; this causes a reflection back if there is an object inside 30 metres. This is followed by a 77 GHz, 9 degree radar sweep that can detect objects up to 150 metres away [8]. Reflected waves undergo the Doppler shift, causing them to change frequency. The reflected waves are picked up by the receivers, which process the data; the system then calculates the relative speed between the vehicles. As the processing has to be done at very high speed, and with a large amount of data, signal processors are used.

Figure 2.2: Distronic's Internal Electronics

Distronic is an adaptive cruise control system (ACC) in that it can control the amount of acceleration and braking used, to keep the vehicle a set distance behind the vehicle in front. It can work up to speeds of 125 mph. The newer version, available on the 2006 version of the Mercedes S-Class, can fully apply the brakes in an emergency. This is to prevent the driver from driving into the back of another vehicle if he or she falls asleep, has limited visibility because of the weather, etc. If the conditions are calculated to be too hazardous for the system to act on its own, an audible warning is issued to the driver so that they can judge the situation themselves and act accordingly.

A big issue with this system is its price. Mercedes-Benz has not released separate price information for the module, but it is assumed to be in the few thousand euros. However, prices for this technology are falling rapidly, and as more automobile manufacturers use such systems, the cost can only fall further.

2.1.3 Honda "HiDS"

Honda has also been researching methods to improve safety and driver comfort using lane departure sensing and obstacle detection. HiDS (Honda Intelligent Driver Support) is a system developed by Honda similar to Distronic, with some additions and changes. Like the Distronic system, it uses a radar-based approach for identifying the location of objects in the vehicle's path, called IHCC (Intelligent Highway Cruise Control). It also has a CCD camera module located near the rear-view mirror for identifying the lane that the vehicle is travelling in. This module is called LKAS (Lane Keeping Assist System). The umbrella name for this system is HiDS.

Figure 2.3: Honda's HiDS System

The LKAS system works as follows. A passive CCD camera situated just left of the driver's rear-view mirror analyses images taken of the road surface and markings. When it detects that the vehicle is drifting out of lane without the indicator being activated, it applies 80% of the torque needed to the steering to keep the vehicle centred in the lane; the driver applies the other 20%. The torque applied increases as the vehicle drifts nearer to the road markings. To deter the driver from letting the vehicle drive itself, the system automatically stops when their hands are removed from the steering wheel for more than three seconds. The system works for speeds between 65 and 100 km/h, and for road curves with a radius larger than 230 m [9].

The IHCC system is very similar to Mercedes' Distronic system. It uses a radar module to detect the relative speed between the vehicle and the vehicle in front. From this it can adjust the vehicle's speed to keep within a safe distance of the vehicle ahead. As well as this, it has an extra input over the Distronic system: an accelerometer to measure the yaw of the vehicle. This is used to prevent a common occurrence with the radar-based approach where, when turning, the radar system picks up a vehicle in a neighbouring lane instead of the vehicle directly ahead [10].

Another feature of this system is that the driver's seatbelt tightens if they follow too close to the vehicle in front. If the system detects an imminent collision, the seatbelt is tightened securely to aid the driver when the crash occurs.

2.1.4 Toyota Lexus "AODS"

AODS (Advanced Obstacle Detection System) is a system developed by Toyota for the Lexus LS460. This system consists of a radar obstacle detection system on the front of the vehicle, and a stereo camera module located just above the rear-view mirror. The radar system behaves in a similar way to the Distronic system in that it locates obstacles in the path of the vehicle and acts accordingly. The disadvantage of these radar systems is that they cannot detect obstacles such as animals or pedestrians, as the waves just pass through them. Therefore, the visible spectrum is used by the stereo cameras to detect and triangulate the distance to these objects. Along with the radar system, if an imminent collision is detected, the system sends an audible warning to the driver. If the collision is inevitable, the seatbelt is tightened, similar to Honda's HiDS system, and the ABS brakes are applied. This system can work day or night [11].
Stereo cameras have an obvious cost disadvantage over single-vision camera systems. Therefore, for this project, a single-vision approach for object detection was investigated.

2.1.5 Nissan "ICC"

Nissan has also been researching similar areas of safety, but instead of developing a radar-based approach for obstacle detection like the other manufacturers, they use an infrared laser system. They call this system ICC, or "Intelligent Cruise Control". The benefits of this system are that it can detect pedestrians and animals without having to resort to separate cameras, as seen in Toyota's method. The vehicle also adapts its driving speed according to the speed of the vehicle in front using the laser. The disadvantage is that, unlike radar, it cannot easily peer through to the vehicle in front in foggy or rainy conditions, as was also the case with Toyota's earlier system [12].

2.1.6 Volkswagen "ACC"

Like most manufacturers', Volkswagen's version of Adaptive Cruise Control (ACC) is radar based. It is available as an optional extra on the Phaeton, and on the 2006 Passat. The module was co-developed as a project by the French aerospace company Thales and TRW, at a cost of 80 million euros. It is sold under the trade name of "Autocruise".

Like Distronic, the radar system is 77 GHz. The circuitry is based on MMIC (Monolithic Microwave Integrated Circuit) technology to detect the reflected waves. The device can record the position and relative speed of multiple vehicles ahead. Since it is radar based, it does not need the clear optical path needed by infrared systems. This means that it is not as affected by fog or heavy rain as these infrared devices are.

2.1.7 BMW "ACC"

BMW, not to be left behind, has an Adaptive Cruise Control (ACC) system on the 3, 5, and 7 Series models. The system was developed by Bosch [Figure 2.4], and like most other systems on the market, it uses 77 GHz radar. Four overlapping radar beams scan up to 200 m ahead of the vehicle. Instead of having a separate yaw-sensing accelerometer similar to the system employed by Honda, Bosch instead receives this data from the ESP (Electronic Stability Program). This is used to inform the radar which vehicle it needs to assess the speed and distance of when turning on a motorway. The device can work at speeds between 30 and 180 km/h, and can successfully detect objects up to 120 m away [13].

Figure 2.4: Bosch Radar Module Internals

2.1.8 Other Manufacturers' Systems

Ford, Jaguar, Cadillac, Audi, and many other car companies have similar systems to the ones above, or some variation of the methods given. Most utilise a radar-based approach to detect objects in front. A figure showing the general radar flowchart for these devices is given in Figure 2.5 [14].

Figure 2.5: General Flowchart for Radar-Based Systems
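As a rough illustration of the kind of processing cycle Figure 2.5 describes, the MATLAB sketch below shows one generic step of a radar-based adaptive cruise controller: compare the measured range against a desired following gap and produce a single accelerate/brake command. The function name, gains, and two-second gap are all invented for illustration; no manufacturer's actual control law is reproduced here.

function cmd = accStep(rangeM, relSpeedMs, ownSpeedMs)
% One cycle of a generic radar ACC controller (illustrative sketch only).
% rangeM     - measured distance to the vehicle ahead, metres
% relSpeedMs - relative speed (negative when closing in), m/s
% ownSpeedMs - own vehicle speed, m/s

gapTimeS   = 2.0;                   % assumed two-second following gap
desiredGap = ownSpeedMs * gapTimeS; % desired gap in metres
gapError   = rangeM - desiredGap;   % positive: more room than needed

% Simple proportional command: negative = brake, positive = accelerate.
kGap   = 0.05;                      % assumed gain on gap error
kSpeed = 0.50;                      % assumed gain on closing speed
cmd = kGap * gapError + kSpeed * relSpeedMs;
end

For example, accStep(25, -3, 30) returns -3.25: the gap is far shorter than the desired 60 m and the vehicle is closing in, so the command is firmly on the braking side.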
As most of these systems use 77 GHz microwave radar, there are some inherent disadvantages in detecting certain objects. As mentioned in Chapter 2.1.4, objects such as animals or pedestrians cannot be detected. Lower frequencies could be used to scan for these objects, such as an infrared laser, but these suffer from needing a clear optical field of view; fog, rain, and similar conditions can adversely affect their performance. Using the visible spectrum, as in Toyota's method, also suffers from this issue. Using both radar and the visible spectrum does away with many of these disadvantages: when environmental conditions mean that the visible cameras cannot be used, more of the radar data can be used, and vice versa. Road markings are designed to be most effective in the visible spectrum, as they need to be seen by the human eye. Therefore, there are many advantages to using a CCD camera for detection of obstacles and lane markings.

2.2 CCD Camera

CCD cameras are a mature technology, first developed at Bell Labs in 1969. Fairchild Semiconductor first manufactured the devices for commercial use in 1974 [15]. The devices work as follows: a lens projects the image onto a capacitor array. Each capacitor accumulates an electric charge proportional to the light intensity, through the photoelectric effect. Once the array has been exposed to the image, a control circuit instructs each "pixel" (capacitor) to transfer its charge to its neighbour. The final capacitor in the series sends its charge to an amplifier circuit that converts this charge into a voltage. The process is repeated, converting the entire array into a signal of varying voltages. The signals can be sampled, digitised, and sent to another device for processing or for keeping in memory.

An interesting point to note is that CCD modules are typically as sensitive to the infrared spectrum as they are to the visible spectrum. This has meant that many manufacturers place an infrared filter over the array so that only the visible spectrum is passed through. However, many electronics enthusiasts have removed these to take pictures using the visible and infrared spectrum. This shows how infrared CCD cameras have experienced the same speed of technological development as visible spectrum CCD cameras, since they are basically the same technology. The cost per module has also fallen in tandem. This has led to them being used by car manufacturers on their vehicles.

Colour images are formed by having a "Bayer" mask over the array that separates out the colours received into two green pixels, one red, and one blue. This results in the luminance of the image being collected at each pixel, but the colour resolution falls as a result of the four array pixels being used per pixel in the image.

2.3 MATLAB

MATLAB is a programming environment ideal for scientific computations that require a large use of arrays or graphical analysis of data [16]. The syntax of the programming code is very similar to C, and it is also very forgiving of errors made by the programmer. It is an interpreted language, meaning that no compiler is needed, and scripts are saved as ".m" files. Another important note is that array indices begin with 1, compared to 0 in Java or C.

One of the most powerful aspects of MATLAB is that many commonly used functions are already built in to the program. For example, array-sorting algorithms, Hough transforms, and so on are quickly and easily implemented because of this. This makes MATLAB a very useful environment for testing out approaches to solving problems before committing them to C, Java, or other programming languages. As the main aim of this project was to see if the concept of a forward facing camera could be used for lane departure and collision detection, MATLAB was used for this reason.

2.4 Lane Departure & Object Detection Algorithms

Much research was made into lane departure and object detection algorithms for this project. Many of the lane detection algorithms found were for self-driving vehicles, not for lane departure detection as was needed by this project. However, some useful information was found from these papers and websites for the project.

Obstacle and collision detection algorithms were more difficult to find information on. It was found that not as many papers or websites dealt with this subject as compared to lane detection. Similar to the problems faced with the lane departure algorithms, many of these were fairly complex and beyond the scope of this project.
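To illustrate the point made in Chapter 2.3 about built-in functions, the snippet below applies the Image Processing Toolbox's hough, houghpeaks, and houghlines functions to an edge map in a handful of lines. The image file name is a placeholder; this is a generic example, not code from the project itself.

% Minimal example of MATLAB's built-in Hough transform functions
% ('road.png' is a placeholder image name).
I  = rgb2gray(imread('road.png'));   % load and convert to greyscale
BW = edge(I, 'sobel');               % binary edge map (Sobel method)

[H, theta, rho] = hough(BW);         % accumulate votes in Hough space
peaks = houghpeaks(H, 4);            % pick the 4 strongest lines
lines = houghlines(BW, theta, rho, peaks);  % back to image-space segments

% Note the 1-based indexing: H(1,1) is the first accumulator cell.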
The details of the research made for the lane departure and collision detection parts of the project are outlined in their respective chapters.

2.5 Summary

From this chapter we have seen a selection of the research that was performed into the various technologies needed for this project. However, as the project progressed, it was realised that research had to be made into different areas; where this was done, it is outlined in the corresponding chapters. After the basic literature search was done, a database of images had to be set up. The work behind this is described in the next chapter.

Chapter 3 Database of Images

To begin developing the code for the project, a database of images needed to be gathered for analysis and testing. This chapter outlines the work undertaken in acquiring the various images for the database.

3.1 Artificial Images

For early development of the algorithms, a simplified version of the "real world" images needed to be generated. These were made artificially by using Microsoft Visio, Microsoft Paint, and Jasc Paint Shop Pro. A sample image can be seen in [Figure 3.1].

Figure 3.1: Artificially Generated Test Image
3.2 "Real World" Images

3.2.1 Various Road Surfaces and Markings

Since the system was to work on images grabbed from a colour video camera mounted on a vehicle, a sample of these images needed to be added to the database. It was necessary to have images of roads and motorways, since this is the environment that the system needed to function in. Dirt tracks, graded roads, or earth were not needed, as these were beyond the scope of this project (see Chapter 1.3.1). Preferably, the images needed to be taken at approximately 1 to 1.5 metres height off the surface of the road. They also needed to have the horizon roughly on the centreline of the image. So, a search for images of asphalt, tar and chip, and concrete road surfaces, with various road markings, was undertaken.

Searching began on the Internet, with limited success. It was found that most images were "artistic" in nature, i.e. taken at dusk or dawn, or black and white, sepia, and so on; not so useful for this project. Because of this, it was necessary to take the pictures as fieldwork in various locations around the area.

For taking the images, a Sony Mavica CD500 and a Sony W800 were used. The pictures were taken at a standard height of 1 m off the surface of the road. The majority of the different images of road surfaces and road markings were taken inside a moving car. It was found that the dashboard was just below 1 m off the surface of the road, so the process of taking images inside the vehicle was a simple enough affair [Figure 3.2]. To take a picture, the horizon was centred in the middle of the image, and the image was taken facing directly ahead. It was found that the Sony Mavica camera took the highest quality images; therefore, this was used to take the remainder of the pictures. Also, since most of the images needed to be taken in a similar fashion to each other, zoom was at its widest, flash was off, auto brightness was on, and so on.
Figure 3.2: Motorway Surface with Road Markings

3.2.2 Lane Detection Images

Later on in the project, it was found that more accurate images of the lane markings were needed to calibrate the lane detection module of the system. A quiet road with good solid road markings was found in a local industrial estate. A stand was made from timber to sit the camera on; this ensured that the pictures were taken at a standard height of 1 m. For the base of the stand, an approximate width of a vehicle was needed. The widths of a Renault Laguna and an Opel Corsa were measured and the average width calculated. The base was cut to this size: 1.675 m. The reason for this was so that, as the camera on the stand was moved from left to right in the lane of the road, it was known when the lane was crossed. A measurement from the centre of the stand to the centre of the road was taken for each image. The pictures were taken at 200 mm intervals. The results are given in Appendix B. The set-up is shown in [Figure 3.3].
Figure 3.3: Lane Departure Stand Set-up

3.2.3 Object Detection Images

For the object detection part of the module, a number of pictures were taken from a car as it followed behind another vehicle. However, later on in the project, it was found that images where the distance between the camera and the vehicle was known were needed. To do this, the same stretch of road as was used in Chapter 3.2.2 was revisited. For this set-up, the same stand was used again. Pictures were taken at 2 m intervals behind a vehicle, as shown in [Figure 3.4].

Figure 3.4: Object Detection Set-up
However, after analysis of the images, it was found that they were not as accurate as was required. Therefore, a different set-up was needed. A long tape was laid down in the centre of the lane for a distance of 30 m. Strips of paper were laid down horizontally at 2 m intervals along the length of the tape. The paper was kept down with sections of household cabling and loose stones. A picture was then taken along the length of the tape with the horizon in the centre of the image. This was done so that an accurate height in pixels, from the base of the image to each strip of paper, could be obtained. The reason for this is explained in Chapter 7.2.2, and a sketch of the resulting pixel-to-distance lookup is given at the end of this chapter.

3.3 Summary

Overall, just over 150 images were added to the database. These images were to prove invaluable in the development, calibration, and testing of the system. In the chapters ahead, an explanation will be given of the reasons why some of the images were taken. Now that the database was built, work could begin on the analysis of the system's components. This is outlined in Chapter 4.
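The measurements described in Chapter 3.2.3 pair a pixel height in the image with a known distance along the road. A minimal MATLAB sketch of how such a lookup might be used is shown below; the table values here are invented placeholders, not the measurements recorded in Appendix B.

% Mapping pixel height (from the image base) to road distance by
% interpolating calibration measurements. The numbers below are
% placeholders, not the project's measured values (see Appendix B).
pixelHeights = [40 90 130 160 185 205];   % pixels from base of image
distancesM   = [ 4  8  12  16  20  24];   % metres along the road

% Estimate the distance to an object whose base sits 120 pixels up.
objPixelHeight = 120;
objDistanceM = interp1(pixelHeights, distancesM, objPixelHeight);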
Chapter 4 Analysis of Project Components

Before work could begin on developing the algorithm, it needed to be divided into logical sub-modules. This was done to help understand the system in the problem domain, and to aid in the programming of the code. These modules were then studied separately to determine how to go about designing them. This consisted of looking into which inputs they would need to function, how they would be sub-divided again in a divide and conquer approach, what outputs they would have, and so on. This chapter describes how the main system was divided into these modules and analysed before committing them to code.

4.1 Standard Features of the Road

Before the rest of the project analysis can be discussed, the features of the road surface need to be pointed out. These can be seen in [Figure 4.1].

Figure 4.1: Road Features
4.2 Dividing Algorithm into Modules

As mentioned in [Chapter 1.2], the main algorithm could be separated out into two main sections: the lane detection and lane departure detection module, and the obstacle detection module. This can be seen in [Figure 1.2]. After further analysis, it was seen that these could be broken down again into four separate sub-modules [Figure 4.2].

Figure 4.2: Sub-Modules of Algorithm

Work could then begin on each of the modules.

4.3 Analysis of the Lane Detection Module

4.3.1 Characteristics for Lane Detection

The sole input to the lane detection module is the current image frame grabbed by the CCD camera [Figure 4.3]. Before an algorithm for this module was attempted, there needed to be a definitive idea of what the algorithm needed to look for in the input. So images of the road surfaces were analysed, assumptions made, and the characteristics of the road features broken down. The following is a list of these characteristics.
a) The road surface is normally dark in colour.
b) The middle road line markings are normally white in colour. These lines can be a continuous single line, continuous double lines, broken lines, or variations of these. Even when they are continuous, to the camera they can often appear discontinuous from wear, surface water, etc., and so should be treated as such.
c) Side road markings are normally yellow in colour and discontinuous.
d) "Cats eyes" reflectors are normally the same colour as the lines that they sit on, and are positioned at approximately one metre intervals.
e) The road surface is normally trapezoidal in shape when viewed by the camera, or approximately trapezoidal when the road is turning.
f) Road lines appear approximately straight at relatively short distances in front of the car, even when the road is turning.
g) The horizon of the road in the image is approximately half way down the image.

Many other characteristics for lane detection from the images existed, but only the ones that were thought to be of most use and easiest to detect were scrutinised. A sketch of how the colour characteristics might be exploited is given below.

Figure 4.3: Typical Image Frame from Camera
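As an illustration of characteristics (b) and (c), a simple RGB threshold can separate candidate white and yellow marking pixels from the dark road surface. This is a minimal MATLAB sketch; the threshold values are illustrative guesses, not the calibrated values the project records in Tables 5.1 and 5.2.

% Rough colour filter for white and yellow road markings (threshold
% values are illustrative only). I is an RGB image, e.g.
% I = imread('frame.png');
R = I(:,:,1); G = I(:,:,2); B = I(:,:,3);

% White markings: all three channels bright and similar.
whiteMask = R > 180 & G > 180 & B > 180;

% Yellow markings: red and green bright, blue noticeably lower.
yellowMask = R > 150 & G > 120 & B < 100;

markings = whiteMask | yellowMask;   % candidate lane-marking pixels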
4.3.2 Lane Detection Assumptions
It was clear from the characteristics in chapter 4.3.1 that some assumptions could be made to simplify the development of the code without compromising its robustness.

1. All road markings are either yellow or white in colour.
2. The horizon vanishing point is always on the horizontal centre axis of the image.
3. There are always road markings on the right-hand side of the vehicle in the image (or on the left-hand side for countries that drive on the right of the road). These lines are always white in colour. (In some European countries, temporary road markings for the centre of the road can be yellow, red, or blue. However, this was ignored for this project.)
4. Each road line segment, continuous or discontinuous, whether on the left or the right, or in a nearby lane (on motorways), is in line with the others.

Taking these assumptions into account, the characteristics were re-examined, and the most promising for use in the lane detection module were used in the algorithm.

4.4 Analysis of the Lane Departure Detection Module
4.4.1 Characteristics for Lane Departure Detection
When a vehicle drifts out of lane, a forward facing camera can pick up on some noticeable characteristics of this occurring. These are as follows:

1. The lane markings for the side of the lane that the vehicle is drifting over pass from one side of the image to the other. This means that when the vehicle is over one of the lane markings, the lane markings are in the centre of the image. This can be clearly seen in [Figure 4.4].
2. The angle of the lane markings to the horizontal increases from an acute angle to an obtuse one as the vehicle drifts over them.

The final algorithm must be able to recognise at least one of these characteristics for it to function properly.
Figure 4.4: Drifting out of Lane

4.4.2 Lane Departure Detection Assumptions
To help simplify understanding of the lane departure detection problem, a few assumptions were made with regard to the image that the module needed to analyse. Some of these assumptions were removed later on in the project to help make the algorithm more robust in real world situations. These assumptions are listed below:

1. There is always a road line to the right of the vehicle. On motorways and rural roads, this is nearly always the case. In countries where driving is on the right, this is usually not the case; however, it is relatively simple to change the algorithm for right-hand drive applications.
2. The road line to the right of the vehicle is always white. This is nearly always the case where assumption (1) is true.

After these assumptions were teased out, work could begin on the development of the modules.
Chapter 5 Lane Detection Module

Logically, the lane detection module was the best one to begin work on. This module was to supply data to the lane departure detection module, and so was needed for testing when work began on that module. At the time, it was also thought that it would be needed by the obstacle detection module. It was believed that the "region of interest", i.e. the determined lane that the vehicle was travelling in, would have been needed for it to detect obstacles in. However, this proved not to be the case, as explained in chapter 7.2.1. To help complete this module, various ideas were tried and tested until a suitable one was found. This chapter summarises the work done in completing the lane detection module.

5.1 Solutions for module
To find a solution for this module, many papers and website articles were researched. Some were found to be helpful, but most were found to be too complex for this project. There were many papers written on lane detection algorithms for automotive applications, but these were studies into algorithms for vehicles that could drive themselves. For this project, all that was needed was a warning system for when the vehicle had drifted out of lane. Most of the algorithms researched detected the lane that the vehicle was travelling in for a large distance ahead. Some involved transformations of the image so that it appeared as a "top" view, with the lane detection then performed on this. Most involved rather complex algorithms, such as the "B-Snake Model" [17], which were beyond the scope of this project. Nevertheless, some useful information was found in these papers. This included the issues posed by shadows on the road surface, rain water hiding road markings, and the way branches in the road and other road features confused the algorithms. Some basic attempts at solving these issues were undertaken in the project, with some success.

One of the earlier ideas was to have a line superimposed at a set angle from the bottom right-hand corner of the image. This could then be moved pixel by pixel to the left. After every step, a histogram could be taken along the line [Figure 5.1]. If the yellow or white value of the histogram reaches a certain level (i.e. the line is directly on the middle road markings in the image), the algorithm will know it has found the
right-hand side marking of the lane. This could then be used to calculate the left-hand side of the lane using the standard width of a lane. This was found to be overly complicated because of all the iterations involved, and probably too slow in a real-world situation, where approx. 25 fps (frames per second) would need to be processed.

Figure 5.1: Artificial Image of Road Surface

Various other methods were thought up, but most were overly complicated, difficult to implement, unreliable, or some combination of these. A better approach for lane detection was needed.

One useful document found was a MATLAB 7.0 demonstration algorithm that could differentiate the lane markings on the road from the road surface [18]. Compared to other documents researched, this was relatively simple and easy to understand. More importantly, it had code written in MATLAB. Since this was the first module of the program to be developed, it gave a good idea of how such an algorithm could be developed in MATLAB. The algorithm works as follows. The image frame is saved in MATLAB in a 480 x 640 matrix, where 480 x 640 is the image height by width. The greyscale image was then brightened and converted into binary. This was done by selecting a threshold value and comparing the greyscale value of each pixel against it. If it was found to be bigger, the pixel was given a value of 1; if it was smaller, it was given a value of 0. Noise was removed from the image, and then a boundary scan was performed [Figure 5.2]. Any boundaries that were not long and thin were removed, and the lane markings were left.

At first, it appeared that after some small changes this algorithm could be used for the module. But after more study, it was seen that no useful data as such could be parsed from the final image for use in the other modules. For example, in chapter 4.4.1, certain characteristics for lane departure needed to be identified from the data. These could be either the angle of the lane markings or the
general position of the lane markings. Neither was explicitly identified by the algorithm. Also, the algorithm was not very robust, as a change in brightness did not result in a change in the binary threshold value, resulting in more boundaries being identified. This increased the risk of a boundary being incorrectly identified as a lane marking. Furthermore, by converting the image from RGB (as output by the camera) to greyscale, some valuable colour information was lost. This is important, as lane markings are of a specific colour: yellow or white. A new algorithm needed to be developed that solved these issues. Nevertheless, some parts of this algorithm were to prove useful in the final algorithm.

Figure 5.2: Boundary Detection of Road Image

5.2 The Algorithm
What was needed was an algorithm that could exploit this colour data and not just dispose of it by converting it directly to greyscale. And so a filter needed to be developed that would only pass the yellow and white spectrum of the road markings. To do this, some research was done into MATLAB image matrices.
5.2.1 MATLAB Image Matrices
In MATLAB, the basic data structure is the array. This is also true of images stored in MATLAB. Images can be thought of as made up of "pixels", or dots, in the image. Image dimensions can vary; for example, VGA (the standard resolution of many early monitors and cameras) is 640 pixels wide by 480 pixels tall. When such an image is stored as an array in MATLAB, it corresponds to a two-dimensional array of 640 columns by 480 rows. This allows powerful image processing work to be done using MATLAB. It is important to note that the first element in a MATLAB image matrix (i.e. (1, 1)) is the top left most pixel in the image. In Cartesian co-ordinates, this would be the bottom left pixel. This can be seen in [Figure 5.3]. For the remainder of this report, whenever references are given to co-ordinates in images, this will refer to the MATLAB image space co-ordinates.

Figure 5.3: Cartesian and MATLAB Image Space

RGB colour image arrays in MATLAB contain an extra dimension to store the extra colour information. This dimension is 3 elements wide. Each element corresponds to the red, green, and blue colour information of the image. Thus each pixel can vary in colour from (0, 0, 0) (black) to (255, 255, 255) (white).

5.2.2 Horizon filter
To begin with, the image was checked to see if its dimensions were 480 by 640. If this was not so, it was resized to these dimensions. As mentioned in chapter 4.3.1, characteristic (g), the horizon in the image sits on the horizontal centre line of the image. Since only the image information below this line is needed, the data above this
line could be removed. Later on in the project, it was also found that this image area often contained image data that confused the algorithm. Therefore, the image information above the horizontal centre line was removed using a filter. An artificial 480 by 640 binary image was made with 0's where the image data of the original image was to be removed, and 1's where it was to be left unchanged. A loop was performed over each pixel in the original image. On each iteration, the value of the corresponding pixel in the binary image was checked to see if it was a 1. If it was, the RGB value of the pixel was left unchanged. If it was not, it was changed to (0, 0, 0). The output can be seen in [Figure 5.4]. The original image can be seen in [Figure 4.3].

Figure 5.4: Image with Horizon Removed

5.2.3 Colour filter
To separate the road marking data from the rest of the image data, a colour filter needed to be used. An RGB window needed to be created that would compare the RGB value of every pixel in the image. If the RGB value was found to be inside the window limits, the pixel would be given a value of 1. If not, it was given a 0. To do this, the pixel values of the sections of the image that contained the road markings were analysed. Using a plot of the image in MATLAB, the RGB values for random pixels in these sections were added to a Microsoft Excel sheet, and the max and min values were found. These values were used for the window limits in the filter. The values are
shown in [Table 5.1] and [Table 5.2]. The final window size was increased by 10 on each side to allow some of the outlying pixels to pass. A separate filter was created for each of white and yellow.

Average white road markings RGB value

Pixel     Red   Green   Blue
1         182   200     174
2         193   209     183
3         191   205     179
4         189   201     177
5         198   208     181
6         181   193     169
7         191   203     181
8         223   232     211
9         194   209     180
10        193   205     181
min Val   181   193     174
max Val   223   232     211

Table 5.1: White RGB Values

Average yellow road markings RGB value

Pixel     Red   Green   Blue
1         169   170     94
2         174   173     90
3         176   173     96
4         184   183     101
5         181   179     105
6         176   174     100
7         181   177     106
8         181   182     124
9         179   173     115
10        181   181     107
min Val   169   170     94
max Val   184   183     124

Table 5.2: Yellow RGB Values

The image output can be seen in [Figure 5.5] and [Figure 5.6].
Figure 5.5: Output From White Road Marking Filter

Figure 5.6: Output From Yellow Road Marking Filter

After testing the filters on a number of images, it was found that when the image was darker than normal, the RGB values of the white and yellow road markings dropped below the window of the filter. There was a similar problem for brighter than normal images. Various methods were proposed for solving this problem:
1. Increase the window size. This would be the easiest to implement, and was done early on in the project. However, it was not robust to changing conditions, and many other pixels that were not road markings passed through. Therefore, a different solution was needed.
2. Use a feedback loop: when not enough pixels are found, lower the RGB limits of the window. There were two issues identified with this approach. Firstly, the RGB values of yellow do not decrease linearly as they become darker. Therefore, the window could pass down the RGB scale without finding the number of pixels needed for the threshold to exit the feedback loop. Secondly, the road marking pixels that were picked up on the first iteration of the feedback loop would be lost on later iterations as the window moved down the RGB scale.
3. Use a feedback loop: when not enough pixels are found, increase the window size until enough are found. It was found that this feedback loop worked best on the image database.

A loop iteration limit of 50 was set for white, and 20 for yellow. This was done in case there were no road markings in the image. A pixel count threshold of 2000 pixels was chosen for both filters.

5.2.4 Noise Removal
Spurious pixels that were passed by the filters had to be removed from the binary image before lane departure detection could be done. It was seen that these pixels were normally part of small groupings, or without any neighbouring pixels at all. Also, any pixels that were part of road markings were in large groupings of more than 10 or so pixels. Therefore, what was needed was an algorithm that could remove the smaller groupings of pixels but leave the others untouched. From studying the MATLAB 7.0 demonstration algorithm [chapter 5.1], it was realised that MATLAB has an inbuilt function called "bwareaopen" that does this. From testing, it was found that the algorithm worked best when a minimum limit of 4 pixels per group for yellow, and 10 for white, was used to remove the noise.
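Putting sections 5.2.2 to 5.2.4 together, a minimal sketch of the white filter might look like the listing below. This is a reconstruction rather than the project code: the file name is hypothetical, and the window-widening step of 5 per iteration is an assumed value, while the window limits, thresholds, and group sizes are those quoted above.

% Minimal sketch of the white road-marking filter (assumed step size of 5).
img = imread('road.jpg');            % hypothetical test image
img = imresize(img, [480 640]);

loVal = [181 193 174] - 10;          % min RGB from Table 5.1, widened by 10
hiVal = [223 232 211] + 10;          % max RGB from Table 5.1, widened by 10
pixelThresh = 2000;                  % pixel count threshold from the text
maxIter = 50;                        % iteration limit for white

for k = 1:maxIter
    bw = true(480, 640);
    for c = 1:3                      % pass a pixel only if all three
        chan = img(:,:,c);           % channels fall inside the window
        bw = bw & chan >= loVal(c) & chan <= hiVal(c);
    end
    bw(1:240, :) = false;            % horizon filter: discard the upper half
    if nnz(bw) >= pixelThresh
        break;                       % enough road-marking pixels found
    end
    loVal = loVal - 5;               % otherwise widen the window and
    hiVal = hiVal + 5;               % try again (feedback loop 3 above)
end

bw = bwareaopen(bw, 10);             % remove groups of fewer than 10 pixels
imshow(bw);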
5.3 Summary
We have seen how research and development work was carried out on the lane detection module. Once this module was found to be working to a satisfactory level, work could begin on using its output to detect when lane departure occurs. The next chapter explains how this was done.
Chapter 6 Lane Departure Detection Module

After work was finished on the lane detection module, and testing had proved that it was working to a satisfactory level, work began on developing the lane departure module. This was found to be one of the most difficult and time-consuming areas of the project. At first, a relatively simple approach was devised that was thought to be suitable for the application. Unfortunately, after testing, a major issue was found. A different approach was needed, and so work restarted on the module. The final algorithm devised was tested using the image database, and found to work under most conditions. This chapter summarises how this algorithm was devised and developed into the final code of the system.

6.1 Solutions for module
Various solutions were derived and analysed for this module. One early idea was to check a certain area in the image for road markings, as seen in [Figure 6.1]. When the number of pixels in this area reaches a certain threshold, a warning is issued to the driver informing them that they have drifted out of lane. This algorithm makes use of the first characteristic of lane departure as identified in chapter 4.4.1.

Figure 6.1: Early Lane Departure Detection Algorithm
There are a small number of issues with this method. One is that if there are any other road markings, such as a rumble strip, "children crossing" text, large amounts of image noise, or any other markings in the centre of the lane, the algorithm will return a false positive. This could be solved by checking a few frames of images and seeing if they also return a positive. This would show whether the positive was caused by lane markings, as other markings will not feature in as many frames (i.e. the vehicle has travelled past them). Unfortunately, this would introduce a lag into the algorithm until the answer is calculated.

Another approach was to develop an algorithm that worked using the second characteristic identified in chapter 4.4.1. This stated that the angle of the road markings changes as the vehicle drifts out of lane. To put this into practice, an algorithm needed to be written that could measure the angle of the lane markings on the road. From chapter 4.3.1, characteristics (b) and (f), it was recognised that road markings were often broken, or had to be treated as if they were broken. It was also learnt that lane markings could be approximated as a straight line at short distances, even when the road was turning. Therefore, what was needed was a method to interpolate the lane markings, find their average angle, and test this angle against a threshold to see when the vehicle had drifted out of lane.

6.2 The Hough Transform
The Hough transform is an image processing technique for feature extraction [19]. It is most commonly used for the detection of lines in an image, but can also be used to detect arbitrary shapes, for example circles, ellipses, and so on. For this project, it was used for its more common purpose. The underlying principle of the Hough transform is that every point in the image has an infinite number of lines passing through it, each at a different angle. The purpose of the transform is to identify the lines that pass through the most points in the image, i.e. the lines that most closely match the features in the image. To do this, a representation of the line is needed that will allow meaningful comparison. A second line is drawn from the origin to meet the line at right angles. The angle that this second line makes at the origin is recorded, as is the distance from the origin to the point where
the two perpendicular lines meet. These values are known as "theta" (θ) and "rho" (ρ). An example of this using three points is shown in [Figure 6.2].

Figure 6.2: Hough Data from 3 Points

When the rho value is plotted against theta for one of these arbitrary points, a sinusoidal curve is created. When the rho and theta values for the other points found in the image are plotted on the same graph, it is found that the curves overlap in certain areas. This can be seen in [Figure 6.3]. It can be seen that the curves intersect at the pink point. Since this point can be transformed back to the original image using its rho and theta values, we can find the line that passes through the three points, as shown in [Figure 6.2].

Figure 6.3: Hough Space Graph
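To make the theta-rho representation concrete, the short MATLAB fragment below traces the sinusoidal curve generated by a single image point using the normal parameterisation ρ = x·cos(θ) + y·sin(θ). The point co-ordinates are arbitrary; the fragment is illustrative only and is not part of the project code.

% Illustrative only: the sinusoid traced in Hough space by one image point.
x = 100; y = 50;                      % an arbitrary image point
theta = -90:0.5:89.5;                 % angle range in degrees
rho = x*cosd(theta) + y*sind(theta);  % one such curve exists per image point
plot(theta, rho);
xlabel('theta (degrees)'); ylabel('rho (pixels)');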
For implementation on an image, more often than not the Hough transform is performed after edge detection has been done. In this project, this is done so that the Hough transform can separate out the straight edges of the lane markings from the other image data.

6.3 Edge Detection
Edge detection is another useful image processing technique, used to distinguish the boundary between two dissimilar regions in an image. Edge detection requires relatively little computing power, and many different edge detection algorithms have been developed. Sobel and Canny are examples of these. Each is sensitive to different types of edges. The various methods can be separated into two main groups: Laplacian and gradient. The gradient method works by finding discontinuities in the image, i.e. the maxima and minima of the first derivative of the image. For the Laplacian method, a search for zero crossings is performed on the second derivative of the image. Canny, one of the methods used in this project, is a gradient-based method. Edges in images are, by their nature, a large jump in intensity from one pixel to the next. Unfortunately, the same is true of noise in an image. Therefore, before edge detection in an image can take place, noise removal must be done. This can be done by "blurring" the image: averaging out the pixel intensities on a localised scale. This is what is implemented in the Canny algorithm. Most edge detection is carried out on binary or greyscale images.

6.3.1 Sobel Method
The Sobel method [20], one of the simpler methods, is also used in this project. This works as follows. Two 3x3 masks are created that are each passed over every pixel in the image. One mask is used to calculate the edge gradient in the y direction (rows), the other in the x direction (columns). Each neighbouring pixel found around a point is given a value corresponding to the ones shown in [Figure 6.4]. The values are then added together, giving Gx and Gy for each pixel. The magnitude of these gradients can then be found by:

|G| = √(Gx² + Gy²)
Figure 6.4: Sobel Edge Detection Masks

If a threshold value is chosen for the gradient, the horizontal (Gy) and vertical (Gx) edges can be found.

6.4 Average Angles Algorithm
It was believed that performing edge detection on the image, followed by the Hough transformation, would yield a good method for finding the angles of the road markings in the image. In MATLAB, a function called "edge.m" was used to find the edges in the image, followed by "hough.m" to calculate the Hough transformation. The images after each stage are shown in [Figure 6.5] to [Figure 6.7].

Figure 6.5: Canny Edge Detection of Yellow & White Road Markings
Figure 6.6: Hough Transformation of Yellow Lane Markings

Figure 6.7: Hough Transformation of White Lane Markings
The MATLAB function "houghpeaks.m" was then used to find the peaks in the Hough space. These are the points to which the straight lines in the original image were transformed. The threshold value to distinguish the Hough peaks from the other points was found during testing by trial and error. A value of 0.6 multiplied by the highest value in the Hough matrix was chosen for the white road markings, and 0.3 for the yellow. These peaks are plotted in the Hough transform space in [Figure 6.8] and [Figure 6.9].

Figure 6.8: Hough Peaks of Yellow Lane Markings
Figure 6.9: Hough Peaks of White Lane Markings

We can clearly see from these figures where the transforms of the lane markings occur. It is interesting to note in [Figure 6.8] how the two lane markings on the sides of the road are clustered together in the Hough space. The importance of this will become apparent later in this chapter.

After these peaks were found, they were plotted onto the original image to give a good indication of where the straight lines occurred, and to provide feedback so that various parameters (for example, the Hough peaks threshold) could be modified and the results scrutinised. A sample output is shown in [Figure 6.10].
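In outline, this whole stage reduces to a handful of Image Processing Toolbox calls. The sketch below is a simplified reconstruction rather than the project code verbatim: bwWhite stands for the binary output of the white colour filter, and the peak count of 15 is an assumed value.

% Simplified sketch of the edge detection / Hough / peak detection stage.
bwEdges = edge(bwWhite, 'canny');                    % Canny edge detection
[H, theta, rho] = hough(bwEdges);                    % Hough transform
P = houghpeaks(H, 15, 'Threshold', 0.6*max(H(:)));   % 0.6 factor used for white
lines = houghlines(bwEdges, theta, rho, P);          % peaks back to segments

imshow(bwWhite); hold on;
for k = 1:numel(lines)                               % overlay detected segments
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2);
end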
Figure 6.10: Sample Output from Hough Peak Detection

A method was now needed to find the angles of each of these line segments, and then find the average angle. This was done by finding the co-ordinates of each end of the line segments. Then, using the equation below, the slope of the line was found:

m = (y2 − y1) / (x2 − x1)

The angles of the lines were then found by taking the inverse tangent of m. A check needed to be performed to ensure that there was no divide by zero when x1 and x2 were the same. When this was found, the angle was set to −90 degrees or +90 degrees, depending on which y value was higher. The yellow road marking angles were separated into those that were less than 90 degrees (the line segments found on the right) and those bigger than 90 degrees (the line segments on the left). The average angle was then found for each group. The average angle was also found for the white road marking segments.

To detect when the vehicle had left the lane, a maximum and minimum threshold for each of the road markings was set. These values were found from trial
and error through testing. A minimum of 35 degrees and a maximum of 58 degrees were chosen for the line on the right of the vehicle, and values of −35 degrees and −58 degrees for the left of the vehicle respectively. To avoid confusion, and to remove some bugs, these values were changed to radians in later versions of the algorithm.

After testing the algorithm on various images from the database, it was found that the algorithm worked to a satisfactory level. It could easily distinguish when the vehicle had left its lane in most of the images. However, after a period of testing, it was realised that the algorithm had one major flaw. On motorways, where there was more than one white road line on either the left or the right of the vehicle, the algorithm failed to function correctly. It could not identify when the vehicle had drifted out of lane. Since motorways were one of the main environments that this algorithm was specified to work in, this was a serious issue.

After reading through the code and looking at some of the flow charts, it was realised that this problem lay in the angle averaging section of the lane detection module. Under normal circumstances, the algorithm finds the average angle of the sections of white lines found on the right of the vehicle. However, when two road lines appear on the right of the vehicle (i.e. the markings of the neighbouring lane), it calculates the average of these two lines. This results in an angle that is not the angle of the road line of the lane that the vehicle is travelling in, but the average of the two lines in the neighbouring lane. This can be seen in [Figure 6.11].
Figure 6.11: Issue with Average Angles Algorithm

6.5 Cluster Angles Algorithm
As mentioned in chapter 6.4, there was a major bug in the average angles algorithm. A method was needed that did not suffer from this problem, so that the system could be used on motorways as part of its application. Various modifications to the average angles algorithm were proposed, but none was very robust or easy to implement. One method suggested was to check segments of the image, as shown in [Figure 6.12]. If road markings were found in a segment, the angle could be recorded, and lane departure detection could then be performed using the angle threshold values as before. There were some problems with this method. Firstly, the vanishing point on the horizon was not always in the centre. Therefore, the centre of the segments would have to move left and right accordingly. This would be difficult to implement. Another problem was that for this algorithm, a gain in accuracy in the angle of the lines detected resulted in it losing some of its
robustness. Increasing accuracy meant that the arc of each segment had to be reduced. This also meant that there was a higher probability that some of the road marking would lie outside the arc, resulting in valuable information being lost. Some other methods were proposed, but none was satisfactory for the module. Therefore, work began on developing a new algorithm from scratch to solve this issue.

Figure 6.12: Road Marking Segments

After a short period of time, it was realised that in the average angles algorithm, the Hough transform space had patterns related to the location of the road markings in the image. It was recognised that the Hough peaks (the lines in the image) seemed to occur in clusters in the Hough space. This was mentioned briefly in chapter 6.4, and the clustering can be seen in [Figure 6.8] and [Figure 6.9]. The clustering occurs because each road marking in a road line is approximately at the same angle as the others (characteristic (f), chapter 4.3.1) and at the same perpendicular distance to the origin. Therefore, the same number of clusters appears in the Hough space as road lines found in the image.

A method needed to be devised to find the centre of each Hough peak cluster, and then to transform this point back into the spatial domain where it could be
plotted and analysed. Some research was then done on different clustering algorithms to find one that could be suitable for this application.

6.5.1 Clustering Algorithms
Much research has been done in mathematics on clustering algorithms in the past few decades. Clustering algorithms have found many applications, from marketing, to biology, to insurance, and so on. The goal of clustering algorithms is to find "the intrinsic grouping in a set of unlabeled data" [21]. There are a few main types of clustering algorithms: K-means clustering, Fuzzy C-means, hierarchical clustering, mixture of Gaussians, and so on.

For this project, a subtractive clustering algorithm [22] was used. This algorithm assumes that each point is a potential cluster centre, and calculates the likelihood that it is by analysing the density of the neighbouring data points. It does this by first selecting the most likely point as a cluster centre. Then it removes the surrounding data points in the vicinity, as determined by the "radii" value (see chapter 6.5.2). It repeats these two steps until all the data points are within the "radii" vicinity. This algorithm was chosen for a few reasons. One is that it does not have to be explicitly told the number of clusters that it needs to find. Instead, various other parameters are chosen that determine the number of clusters to be found.

6.5.2 Implementation of Clustering Algorithm
MATLAB has a function called "subclust" that can perform the subtractive clustering needed for this project. Before it could be implemented in the algorithm, some study needed to be done on its parameter values. These are as follows:

1. xBounds: the cluster area size. This is the dimension size of the area that is to be searched for clusters. In the project, this was set to the largest angle theta (θ) that could be found, which is 90 degrees, and the largest perpendicular distance rho (ρ) to a data point. This dimension was calculated by:

dim = √(imageWidth² + imageHeight²)
2. radii: the distance in the two dimensions that determines the influence a point has over another in finding the centre of the cluster. If this is small, a large number of clusters, each with a small number of data points, is found, and vice versa.
3. squashFactor: this is multiplied against the radii value to determine which data points in the vicinity of a cluster centre are considered part of that cluster. It lowers the potential for outlying points to be counted as part of the cluster.
4. acceptRatio: this sets the potential, relative to that of the first cluster centre, above which another data point can be accepted as a cluster centre.
5. rejectRatio: similar to the accept ratio, except that this sets the potential below which a data point is rejected as a cluster centre.

Most of these values were found by trial and error using a MATLAB GUI called "findcluster" [Figure 6.13]. A .dat file was generated from the rho and theta values of the Hough peak data points and imported into the GUI. The various parameters were then changed until a satisfactory output was achieved.

Figure 6.13: Findcluster GUI
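As a rough illustration, the fragment below shows how the Hough peaks might be passed to subclust using its legacy four-argument form. Every numeric parameter here is a placeholder rather than the project's calibrated value, and peaks is assumed to be an N-by-2 array of [theta rho] values taken from the houghpeaks output.

% Hedged sketch: clustering Hough peaks with subclust (Fuzzy Logic Toolbox).
dim = sqrt(640^2 + 480^2);            % largest possible rho for a 640x480 frame
xBounds = [-90 -dim; 90 dim];         % search bounds for theta and rho
radii = [0.3 0.3];                    % influence radius in each dimension
options = [1.25 0.5 0.15 0];          % [squashFactor acceptRatio rejectRatio verbose]

centres = subclust(peaks, radii, xBounds, options);  % one row per cluster centre
plot(peaks(:,1), peaks(:,2), 'x', centres(:,1), centres(:,2), 'o');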
After the workings of the clustering algorithm were understood, and its parameters calculated for the data points in this application, work began on implementing it in the module. The Hough peak values were added into an array and then input into the function. The algorithm was tested with images from the database of different road environments to see how it managed. The cluster centroids found can be seen as blue circles in [Figure 6.14].

Figure 6.14: Cluster Centroids

6.5.3 Inverse Hough Transform
After these points were found, they needed to be transformed back to the spatial domain so that they could be analysed and understood easily. This would allow us to visually see the lines superimposed onto the original road image to check that they were correct. Mapping these points back to the spatial domain would yield lines corresponding to the average angle, and average position, of each road line found. To perform this inverse Hough transform, a number of steps were taken.
Firstly, the point where the road line met the line perpendicular to it that passed through the origin was calculated. This was found using the equations:

x1 = ρ·cos(θ)
y1 = ρ·sin(θ)

After this point was found, the line perpendicular to the road line could be calculated. This was found by calculating its slope using the equation:

m = tan(θ)

The line was then found by using the equation:

y − y1 = m(x − x1)

This gave the line of length ρ that is perpendicular to the road line found. Finding the road line was then only a matter of plotting a line at right angles to this line, passing through the point (x1, y1). These lines can be seen in [Figure 6.15]. The red line corresponds to the perpendicular line. The yellow lines are the lines generated from the cluster centroids in the yellow road marking Hough space. The white line is generated from the cluster centroids found in the white Hough space. Arbitrary values were substituted in for x in each line equation so that they could be plotted. The steps outlined above could have been combined into one for implementation in the algorithm, but to aid understanding and to help with error checking, they were kept separate.
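A minimal sketch of this mapping follows, condensing the two steps above into one by using the fact that the road line has slope −1/tan(θ) and passes through (x1, y1). The variables thetaDeg and rho are assumed to come from one cluster centroid, and the sketch assumes thetaDeg is not zero.

% Minimal sketch of the inverse Hough mapping for one cluster centroid.
x1 = rho * cosd(thetaDeg);        % foot of the perpendicular from the origin
y1 = rho * sind(thetaDeg);
m = -1 / tand(thetaDeg);          % road line is at right angles to the
                                  % perpendicular, whose slope is tan(theta)
x = 0:639;                        % arbitrary x values for plotting
plot(x, y1 + m*(x - x1));         % y - y1 = m(x - x1), rearranged for plotting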
Figure 6.15: Output from Cluster Angles Algorithm

6.5.4 Calculation of Lane Departure
Once it was seen that the cluster angles algorithm worked to a satisfactory level, a warning algorithm had to be written for when the vehicle drifted out of the lane. From chapter 6.5.2, we have seen how the clustering algorithm can give the average angle of the road lines and their perpendicular distance to the origin. From this, it was recognised that a threshold value could be set on the angle (theta) of the cluster centroids. This would allow the algorithm to detect when the car has drifted out of lane using characteristic (2) in chapter 4.4.1. Also, rho, the perpendicular distance of the line to the origin, could be used to detect lane departure. This follows characteristic (1), also in chapter 4.4.1. Threshold values then needed to be found to compare against the theta and rho values returned by the algorithm. These were found by running the algorithm on the test images of lane departure cited in chapter 3.2.2. The values returned as the lane departure occurred were used as threshold values in the algorithm.
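In outline, the departure test itself reduces to a pair of comparisons. The thresholds below are placeholders (the earlier average-angles values are reused purely for illustration); the real values were measured from the test images, and whiteTheta is assumed to be the theta of the right-hand white line's cluster centroid.

% Illustrative departure test; thresholds are placeholder values.
thetaMin = 35; thetaMax = 58;                       % degrees
if whiteTheta < thetaMin || whiteTheta > thetaMax
    disp('Warning: lane departure detected');
end
% A similar comparison could be made against the centroid's rho value.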
Only the white line found on the right-hand side of the vehicle was used for the departure detection. This was to help simplify the code, and follows from the assumption stated in chapter 4.3.2. However, the other road lines could easily be analysed in future work on the module.

Testing was then performed on the module to see if it functioned correctly. The warning output was printed to screen when it detected lane departure. A sample of output images can be seen in Appendix A.

6.6 Summary
We have seen how the module for lane departure detection was designed and developed for use in this algorithm. It was one of the most difficult and time-consuming modules to develop for this project, but the final results were satisfactory. In the next chapter, an outline of the work that was done on the third module, the object detection module, will be presented.
Chapter 7 Object Detection

7.1 Introduction
The object detection module of the algorithm was found to be easier to develop than the other modules, as a better understanding of the MATLAB programming language, various image-processing techniques, and other skills had been gained over the course of the project. This chapter deals with the work done on the module, with the outcome of the testing given in the conclusion.

7.2 Solutions for module
Before solutions for the module could be determined, the module was broken down into sub-components to help in understanding the problem. It was quickly seen that the object detection module could be broken into two main sub-modules:

1. Area of interest determination. This is the area in the image that is to be searched by the object detection algorithm, i.e. the path that the vehicle is travelling in.
2. Object detection in the area of interest. This is the algorithm that performs a search of the area of interest for objects.

7.2.1 Area of interest
At first, it was believed that the best approach for determining the area of interest was to use the output from the lane detection algorithm. The triangular area enclosed by the two detected road lines could be searched by the object detection algorithm. The object detection algorithm could approximately search this area by setting a search area that was a tall, thin rectangle extending up to the top of the triangle [Figure 7.1]. The corner points of the rectangle could be found by choosing a set width for the rectangle and using the equations of the road lines found. On the second iteration, a slightly wider rectangle could be the search area, and so on, until most of the area was covered. One issue with this method is that it relied on the correct road lines being identified by the lane detection and lane departure modules. If a line bordering the triangular area was not found, it would not function correctly. However, it was later recognised that this approach was incorrect.
Figure 7.1: Rectangle Search Method

After further study, it was realised that the area of interest was not the lane that the vehicle was travelling in, but the area ahead into which the vehicle is travelling. This was to prove very helpful, as the complexity of the algorithm could be reduced. It also helped improve the robustness of the algorithm by not relying on the correct road lines being identified by the lane detection and departure modules. Therefore, the search area for the algorithm was chosen to be triangular in shape, with the base points at both bottom corners of the image frame, and the top point at the centre of the horizon line. This was chosen because it is the area that the vehicle is going to travel in. The area was separated from the other image data by means of a mask, similar to the horizon filter used in chapter 5.2.2. A sample output for an image is shown in [Figure 7.2].
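A compact way to build such a triangular mask is MATLAB's poly2mask function. The sketch below assumes a 480-by-640 frame with the horizon on the horizontal centre line, and is a reconstruction rather than the project code; img stands for the input frame.

% Sketch of the triangular area-of-interest mask for a 480x640 frame.
xPts = [1 320 640];                  % base corners and horizon centre point
yPts = [480 240 480];                % image co-ordinates (row 1 is the top)
mask = poly2mask(xPts, yPts, 480, 640);

masked = img;                        % zero every pixel outside the triangle
for c = 1:3
    chan = masked(:,:,c);
    chan(~mask) = 0;
    masked(:,:,c) = chan;
end
imshow(masked);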
Figure 7.2: Area of Interest Filter

7.2.2 Object Detection
One process suggested for the object detection algorithm was to have an algorithm that could distinctly recognise a vehicle ahead, whether this was a van, a car, or a truck. A certain characteristic of these objects could be found and then tested to see if it was in the area of interest. If so, the distance to it could then be calculated in the collision detection algorithm. One characteristic identified was the red colour of the rear lights. These are normally a set distance apart, red in colour, and easy to separate from the background image information using a colour filter. The distance between them could then be used to calculate the distance from the camera. However, most, but not all, vehicles have red rear lights. A similar method was proposed using the number plate. The area of interest could be scanned for an object resembling the shape and colour of a number plate. The advantage of this approach is that all road vehicles must have a number plate. The disadvantage is that number plates would be more difficult to separate from the background image information, and their distance from the camera would be more difficult to calculate. Since the length of the number plate is so small in relation to the distance between the rear lights, the distance to the number plate could not be measured as accurately. The distance from the base of the image up to the section where the number plate was
found could also be used to measure the distance from the vehicle, but this too was discounted. This is because many vehicles, for example Alfa Romeos, some vans, 4x4s, etc., have number plates mounted higher off the road surface than regular road vehicles.

Another method that was investigated was finding the shadow or the dark colour of the tyres of the object. Once this was found, an approximate distance could be calculated from its height in the image. A sample output is shown in [Figure 7.3]. Unfortunately, the algorithm did not function as well when the brightness of the image changed. This resulted in the dark RGB colours of the tyres and shadow drifting out of the window of the filter and not being recognised by the distance calculator.

Figure 7.3: Shadow/Tyre filter

The best method thought of was to find objects in the area of interest by searching for horizontal lines. If a vehicle were in the area of interest, the horizontal nature of the rear bumper would indicate to the algorithm that there was an object present. Also, since most vehicles have bumpers at approximately the same distance off the road, this could be used to calculate the distance from the camera. A solution would be to find the distance from the base of the image to the horizontal line found, and use a look-up table to find how far away it is.
Therefore, this method was used for developing the algorithm that would be used by this module.

As mentioned in chapter 6.3, there are numerous edge detection algorithms, each sensitive to different types of edges. One such difference is whether an algorithm can be used to detect horizontal, vertical, or both types of edges. For this module, a horizontal edge detection algorithm was needed. After some research, it was decided to use the Sobel horizontal edge detection algorithm in this module. The Sobel edge detection output is shown in [Figure 7.4].

Figure 7.4: Sobel Edge Detection of Area of Interest

Once this was found, the noise needed to be removed. The MATLAB function "bwareaopen" was used to do this, similar to the method used in chapter 5.2.4. Groups of fewer than 15 pixels were removed. The output is shown in [Figure 7.5].
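The fragment below sketches this stage together with the row scan and look-up table described in the following paragraphs. It is a hedged reconstruction: masked is the area-of-interest image from 7.2.1, and the look-up table entries are placeholders, since the project's table was built from the calibration pictures of chapter 3.2.3.

% Hedged sketch of the bumper search (placeholder look-up table values).
bwH = edge(rgb2gray(masked), 'sobel', [], 'horizontal'); % horizontal edges only
bwH = bwareaopen(bwH, 15);            % drop groups of fewer than 15 pixels

objRow = 0;
for r = 480:-1:241                    % scan from the image base up to the horizon
    count = sum(bwH(r,:)) - 2;        % less the two triangle-boundary pixels
    if count >= 20                    % a 20-pixel-wide line: call it a bumper
        objRow = r;
        break;
    end
end

pixelHeight = 480 - objRow;           % height of the line above the image base
heights   = [20 40 60 80 100];        % placeholder look-up table entries
distances = [ 4  8 12 16  20];        % corresponding distances in metres
objDist = interp1(heights, distances, pixelHeight, 'nearest', 'extrap');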
Figure 7.5: Noise Removal

Once the noise was removed, each pixel row was scanned and the number of pixels found was totalled. The scan started at the highest y value (the base of the image) and moved up to y/2 (the horizon line in the image). Two pixels were subtracted from these totals because of the extra two caused by the edge detection picking up the edges of the triangular search area. This was then used to find the first straight line encountered that was over a threshold of 20 pixels wide (i.e. the bumper). Once this was established, the pixel height from the base of the image to this row was used in a look-up table. The look-up table had a range of pixel heights together with their corresponding distances from the camera. These values were found using the data gathered from the pictures taken in chapter 3.2.3. Thus the distance from the vehicle to the object was calculated for the module.

During testing, it was noted that the algorithm did not function properly if there were horizontal road markings (i.e. the narrow side of a road marking) or large amounts of noise near the base of the image. This was caused by the pixel count reaching the 20-pixel threshold before the edge of the object was reached. Because the search area was wider near the base, more noise was detected there, causing a higher pixel count than normal. To work around this, a pixel count weight calculation was proposed, where the pixels found near the horizon would have a higher weight than the pixels found near the base. Unfortunately, after further testing, this
was found to be a more difficult problem to solve. If the object in the area of interest were a tractor-trailer, for example, the pixel count would be the same for the top edge of the trailer as for the bottom edge, the bumper. Because of the weighted pixel count system, the top edge would be used in the distance look-up table to determine the distance between the vehicle and the trailer. Unfortunately, this would result in a larger calculated distance than there should have been.

7.3 Summary
No satisfactory solution to this problem could be determined in the time allocated to this module. However, the module did function correctly for short distances. In the next chapter, the process of developing an algorithm for calculating the safe stopping distance will be explored, along with how its result was used to discover whether the vehicle was within the safe stopping distance of the objects found.
Chapter 8 Collision Detection

Once the distance to the nearest object could be determined, work could begin on developing the collision detection module. This could determine if the vehicle was within the safe stopping distance of the object detected. The safe stopping distance was calculated from the speed of the vehicle, whether it was raining or not, and other variables. This chapter outlines the work done in developing this module.

8.1 Solutions for module
The first solution established for this module was relatively straightforward. The speed of the vehicle could be used in a look-up table to determine the safe stopping distance as set out in the Irish Rules of the Road [23]. This stopping distance is the sum of the distance travelled during the driver's reaction time and the distance travelled as the vehicle is stopping. Newer vehicles with higher performance tyres have a shorter stopping distance, but this was ignored for this module, as these would not be the legal stopping distances. Also, since the vehicle ahead cannot stop immediately either, its own stopping distance could in principle be taken into account, as the object ahead is slowing at a similar rate. This too was ignored, for a number of reasons: the vehicle ahead could suffer a head-on collision, which would result in a near-immediate stop, and a horizontally travelling object, say a vehicle passing through a crossroads, would also have to be treated as having no stopping distance.

The effect of rain is also outlined in the Rules of the Road booklet. This states that the distances should be doubled when the road surface is wet. The algorithm could account for this by multiplying the calculated stopping distance by a factor of two, or the wiper setting on the vehicle could be used as a multiplier on the stopping distance.

8.2 Safe Stopping Distance Calculator
After some research, the physics behind calculating the stopping distance of a vehicle was found on "csgnetwork.com" [24]. This stated that the minimum stopping distance of the vehicle is determined by the driver reaction time and the coefficient of friction between the tyres and the road. This friction force must do enough work to reduce the kinetic energy of the moving vehicle to zero [Figure 8.1]. This can be written as:

Work_friction = μ·m·g·d = ½·m·v0²

where μ is the coefficient of friction, m is the mass of the vehicle, g is the acceleration due to gravity, d is the distance travelled, and v0 is the velocity of the vehicle. Rearranging the equation yields:

d = v0² / (2·μ·g)

This is the stopping distance of the vehicle independent of driver reaction time. Multiplying the driver reaction time by the vehicle speed gives the distance travelled during the reaction; adding the two together gives the total stopping distance.

Figure 8.1: Forces on Stopping Vehicle
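A minimal sketch of this calculation follows. The speed, reaction time, and wiper-to-μ mapping below are illustrative assumptions rather than the project's fitted values, and objDist is assumed to come from the object detection module.

% Illustrative stopping-distance check: d = v0^2/(2*mu*g) plus reaction distance.
v = 100/3.6;                      % vehicle speed: 100 km/h in m/s (assumed)
tReact = 1.5;                     % assumed driver reaction time in seconds
g = 9.81;                         % acceleration due to gravity
muTable = [0.7 0.4 0.35 0.3];     % assumed mu per wiper setting: off, slow, normal, fast
wiper = 0;                        % wiper setting: 0 = off

mu = muTable(wiper + 1);
dStop = v^2/(2*mu*g) + v*tReact;  % braking distance plus reaction distance

if objDist < dStop                % objDist from the object detection module
    disp('Warning: inside the safe stopping distance');
end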
The different wiper settings would give a good indication of how damp the road surface is. The wiper setting was used in a look-up table to determine μ, the friction coefficient. These values were found by comparing the stopping distances calculated by the equation above with those found in the Rules of the Road booklet. The equations and look-up tables were then implemented in an algorithm to calculate the full stopping distance.

Checking to see if the driver was within the safe stopping distance of the object was then only a matter of seeing whether the object distance was smaller than the calculated safe stopping distance. A warning was printed to the screen if this was found to be the case.

8.3 Summary
We have seen in this chapter how the safe stopping distance of the vehicle was determined. This could then be used to complete the object and collision detection modules for the total system. Testing could now begin on the total system to prove its effectiveness.
Chapter 9 Testing

Testing was one of the most important aspects of this project. To test whether the various modules worked effectively, a large database of images was needed. These were sourced as summarised in Chapter 3. Much of the testing that was done on each module was outlined in its own chapter. This chapter summarises some of that testing, as well as the overall testing that was performed on the system to determine its effectiveness.

9.1 Lane Detection and Departure Modules
After each module was completed, it was tested using either a selection of images from the database or the entire database. For the lane detection algorithm, many of the artificial images were used. Once it was found that the module could perform to a satisfactory level with these images, it was tested on the "real world" image database.

The lane departure module required much of the overall testing work performed on the project. This was a result of its complexity, and also because many different methods had to be tried until a suitable one was found. Artificial test images had to be created that could be used to test the inverse Hough transform. These were not designed to be similar to the real-world images. An example is shown in [Figure 9.1]. Other artificial test images were generated for testing the clustering algorithms. One such image after testing is shown in [Figure 9.2].

To test the two modules on all the images in the database, the MATLAB code needed to be modified. This was done by writing some MATLAB code that automatically scanned the image database directory for files with a ".jpg" extension, and saved the plotted output in a separate directory as ".tiff". Because of the large number of images in the database (~150), this was left to run unattended. The results were then viewed to determine if the modules were working correctly.
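In outline, this unattended batch run could look like the fragment below. It is a sketch, not the project script: runLaneAlgorithm is a hypothetical wrapper around the lane detection and departure modules, and the folder names are assumptions.

% Sketch of the unattended batch test over the image database.
files = dir(fullfile('database', '*.jpg'));
for k = 1:numel(files)
    img = imread(fullfile('database', files(k).name));
    runLaneAlgorithm(img);                     % plots its output to a figure
    [pathstr, base] = fileparts(files(k).name);
    saveas(gcf, fullfile('results', base), 'tiff');
end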
9.2 Obstacle and Collision Detection Modules
Testing of these modules was also done while the code was being developed. As mentioned earlier, many different methods had to be tested to see if they could be used in the final algorithm. An example of one of these methods is summarised in chapter 7.2.2: the tyre and dark shadow detection filter. No artificial images needed to be designed for these modules, as many of the "real-world" images could be used directly. Each module was tested as it was developed. For the obstacle detection algorithm, the pixel height returned was compared to the pixel height measured in MS Paint and the MATLAB plot screen. To test the collision detection algorithm, various vehicle speeds and wiper settings were input into the module. The output values were compared to those stated in the Rules of the Road booklet. As also outlined earlier, testing of the final two modules showed that they could only successfully detect obstacles at short distances. For future work, a better approach could be researched to solve this issue.

Figure 9.1: Artificial Test Image For Inverse Hough Transform
Figure 9.2: Artificial Test Image For Clustering Algorithm

9.3 Summary
For the testing of the lane detection and lane departure detection modules, the results were satisfactory. For most of the images, the algorithm worked successfully, identifying when the vehicle was drifting out of lane and when the vehicle was in lane. Testing of the collision detection algorithm also showed that it could perform successfully with various speeds and wiper settings. However, it was found that the obstacle detection module could only successfully detect objects at short distances, and was not as accurate as hoped at larger distances. This was caused by the method employed by the algorithm for the obstacle detection.
Chapter 10 Conclusions and Future Work

10.1 Conclusions

The main aim of this project was to determine if a single forward-facing camera in a vehicle could be used to determine when it was drifting out of lane, or when it was within the safe stopping distance of an object ahead. A large amount of time was spent researching similar technologies to see if such a system was possible and, if so, how it might be implemented. An investigation was then performed by attempting to develop an algorithm that could work in this environment.

Research into similar systems had shown that detecting lane departure was possible using a forward-facing camera; this was demonstrated by Honda's HiDS system (chapter 2.1.3). Obstacle detection at short range was shown to be feasible using infrared stereo cameras by Toyota's AODS (chapter 2.1.4).

This project has shown that lane detection using a single forward-facing camera is also possible. This could prove valuable in automotive safety applications where the driver is not paying attention to the road, falling asleep, etc. The collision detection algorithm worked very well at calculating the safe stopping distance of the vehicle. Unfortunately, the object detection algorithm was found not to perform as well as expected, being able to measure the distance to an object accurately only within approximately 20 m. However, it showed that a mono-vision-based approach (as compared to the radar-based approach of other manufacturers) could function to some degree at detecting objects in the path of the vehicle. A vision-based approach also has the benefit of being able to detect biological objects that radar passes through, e.g. animals or pedestrians.

10.2 Future Work

Much work could be done to increase the robustness of this algorithm and to improve its overall performance. To start with, implementation of the algorithm on an FPGA or DSP system would be needed before it could be used in practice in a vehicle.
Some changes could also be made to the algorithms employed by each module. A better algorithm could be developed for the obstacle detection module that searches the road ahead: when a boundary is found that does not have the same features as the road surface or road markings (e.g. a different colour, shape, or texture), a warning could be issued to the driver. The lane departure detection module could be improved by detecting departure not only across the white road marking on the right, but also across the road markings on the left and in neighbouring lanes; this would be relatively easy to implement. The lane detection algorithm could be improved for different lighting conditions by designing a better feedback loop for the colour-to-binary conversion; some details about this were already mentioned in chapter 5.2.3. A better moving window could be designed that follows the RGB colour drift of the road markings as the brightness of the image reduces (a sketch of this idea is given below). This might also allow the algorithm to work in night-time conditions. Discovering a method for removing shadows from the images would also improve both the obstacle detection module and the lane detection module.

Looking at the larger picture, research into other vision-based technologies for the system, e.g. infrared, could yield better performance by detecting the body heat signature of animals or pedestrians in the path of the vehicle. Combining the vision-based approach with radar, for example, could give the best of both worlds: biological object detection, operation in foggy or rainy conditions, the long-range resolution and accuracy of radar, and so on. The detection of animals or pedestrians could be used in conjunction with pedestrian safety features on the vehicle, such as the "Active Bonnet System" on the Citroen C5, to increase their effectiveness. Communication between vehicles could increase the effectiveness of the algorithm by combining the data received by neighbouring vehicles, as shown in [Figure 10.1] [25]; some research has already been done on this by Mercedes. Using a mono camera to estimate the time to collision with an object could also be investigated. This could use local motion field measurements [26] to determine whether and when a collision is about to occur.
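The adaptive colour window suggested above could be sketched as follows. This is an illustration of the idea only, not code from the project; the function name and parameter values are assumptions.

```matlab
function [mask, ref] = marking_window(frame, ref, tol, alpha)
% frame: RGB image (m-by-n-by-3); ref: 1-by-3 RGB reference for the markings.
% Classify pixels within "tol" of the current RGB reference as road marking,
% then let the reference drift towards the mean colour of those pixels so
% the window follows the markings as overall image brightness changes.
d = double(frame);
mask = abs(d(:,:,1) - ref(1)) < tol & ...
       abs(d(:,:,2) - ref(2)) < tol & ...
       abs(d(:,:,3) - ref(3)) < tol;
if any(mask(:))                          % update only if markings were found
    for c = 1:3
        ch = d(:,:,c);
        ref(c) = (1 - alpha) * ref(c) + alpha * mean(ch(mask));
    end
end
end
```

Called once per frame with, say, ref = [200 200 200], tol = 40 and alpha = 0.1 to begin with, the reference would darken gradually with the scene at dusk, rather than a fixed threshold losing the markings altogether.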
All in all, this project reached its main aim of proving that the concept of a vision-based system for lane departure detection and obstacle detection is possible. Yet there are many areas of this project that could be researched and developed further in future work.

Figure 10.1: Intercommunication Between Vehicles
References

[1] National Roads Authority of Ireland: "Road Collision Facts Ireland 2003", http://www.nra.ie/PublicationsResources/DownloadableDocumentation/RoadSafety/file,1405,en.PDF, p. 4
[2] Mercedes Benz: "Pre Safe", http://www.safetyresearch.net/crash.htm
[3] Watanabe Laboratory team: "AMIGO", http://www.igvc.org/deploy/design/reports/dr44.pdf
[4] Road Safety Statistics, http://www.thinkroadsafety.gov.uk/statistics.htm
[5] Citroen: Lane Departure Warning System, http://www.citroen.com/CWW/en-US/TECHNOLOGIES/SECURITY/AFIL/
[6] FindArticles: Adaptive Cruise Control Hits the Roads, http://www.findarticles.com/p/articles/mi_m3012/is_1998_Oct_1/ai_53179685
[7] Mercedes Benz: Distronic System, http://www.daimlerchrysler.com/dccom/0,,0-5-73307-1-73597-1-0-0-73603-0-0-8-7155-0-0-0-0-0-0-0,00.html
[8] Mercedes Benz: Distronic System, http://www.emercedesbenz.com/Feb06/09PricingInfoOfMercedesS600.html
[9] Honda: HiDS, http://world.honda.com/factbook/auto/motorshow/200310/10.html
[10] Channel 4: Honda HiDS, http://www.channel4.com/4car/feature/features-2005/honda-lkas/honda-lkas-2.html
[3] Telegraph: Automatic Cruise Control, http://www.telegraph.co.uk/motoring/main.jhtml?view=DETAILS&grid=&xml=/motoring/2005/08/20/mfsleep20.xml
[11] PR Newswire: Lexus AODS, http://sev.prnewswire.com/auto/20060303/LAF01603032006-1.html
[12] AutoChannel: Lexus LS430 First Impressions, http://www.theautochannel.com/news/writers/lhill/01ls430/01ls430.html
[13] Worldcarfans: BMW ACC, http://www.worldcarfans.com/news.cfm/NewsID/2030805.001/country/gcf/bmw/bmw-acc-active-cruise-control
[14] EETimes: Adaptive Cruise Control Takes to the Highway, http://www.eetimes.com/story/OEG19981020S0007
[15] Wikipedia: CCD, http://en.wikipedia.org/wiki/Ccd
[16] The Institute for Systems Research: MATLAB Overview, http://www.isr.umd.edu/~adomaiti/MATLABtutorial/
[17] Yue Wang, Eam Khwang Teoh, Dinggang Shen: "Lane Detection and Tracking Using B-Snake", http://www.sciencedirect.com/science/article/B6V09-4B85832-1/2/323870bba997d4631763d9b275ed316c
[18] MATLAB Central: Introduction to MATLAB 7, http://www.mathworks.com/matlabcentral/fileexchange/loadFile.do?objectId=6528&objectType=FILE
[19] Wikipedia: Hough Transform, http://en.wikipedia.org/wiki/Hough_transform
[20] Bill Green: Edge Detection Tutorial, http://www.pages.drexel.edu/~weg22/edge.html
[21] Unknown: A Tutorial on Clustering Algorithms, http://www.elet.polimi.it/upload/matteucc/Clustering/tutorial_html/
[22] Chiu, S.: "Fuzzy Model Identification Based on Cluster Estimation", Journal of Intelligent & Fuzzy Systems, Vol. 2, No. 3, Sept. 1994.
[23] "Rules of the Road", Irish Department of the Environment.
[24] CSGnetwork: Stopping Distance Calculator, http://www.csgnetwork.com/stopdistcalc.html
[25] Unknown: Vehicle Safety, www.it.ipv6tf.org/minutes/CRF-ActivitiesApplicationsC2CCommunication.pdf
[26] Keith Price: Bibliography, http://iris.usc.edu/Vision-Notes/bibliography/optic-f751.html
[27] I-Car: Adaptive Cruise Control, http://www.i-car.com/html_pages/about_icar/current_events_news/advantage/advantage_online_archives/2004/021604.html
Bibliography

"Rules of the Road", Department of the Environment.
Hough Transform, http://en.wikipedia.org/wiki/Hough_transform
Appendix A: Sample Image Outputs from Testing
Appendix B: Tables and Graphs

Drifting to left: values for white line

Pic. Num.   Pic. Name   Dist. From Left (cm)   Theta (deg.)   Rho    Notes
1           DSC01196    200                    -58            -63
2           DSC01197    180                    -60            -63
3           DSC01198    160                    -59            -40
4           DSC01199    140                    -64            -91
5           DSC01200    120                    -66            -108
6           DSC01201    100                    -68            -116
7           DSC01202    80                     -70            -131   Lane departed
8           DSC01203    60                     -70            -131
9           DSC01204    40                     -70            -120
10          DSC01205    20                     -73            -159
11          DSC01206    0                      -74            -160   Centre of vehicle on lane marking

Table 3: Values of Theta and Rho measured as left lane departure occurs

Drifting to right

Pic. Num.   Pic. Name   Dist. From Right (cm)   Theta (deg.)   Rho    Notes
1           DSC01210    180                     -61            -65
2           DSC01211    160                     -58            -54
3           DSC01212    140                     -55            -27
4           DSC01213    120                     -50            9
5           DSC01214    100                     -45            52
6           DSC01215    80                      -39            103
7           DSC01216    60                      -39            96     Lane departed
8           DSC01217    40                      -32            148
9           DSC01218    20                      -23            216
10          DSC01219    0                       -18            261
11          DSC01220    -20                     -1             323    Centre of vehicle on lane marking
12          DSC01221    -40                     11             366
13          DSC01222    -60                     21             382

Notes: Width of lane is 380 cm. Width of vehicle is 1.675 m.

Table 4: Values of Theta and Rho as right lane departure occurs
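The tabulated values suggest how the Hough-line angle could drive the departure flag. The sketch below simply reads thresholds off Tables 3 and 4; the decision rule actually used by the module may differ.

```matlab
function departed = lane_departed(theta_deg)
% In-lane driving kept theta roughly between -68 and -40 degrees. Table 3
% flags left departure at about theta = -70 deg, and Table 4 flags right
% departure at about theta = -39 deg (thresholds are approximate).
departed = (theta_deg <= -70) || (theta_deg >= -39);
end
```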
Pic. Num.   Pic. Name   Dist. from vehicle (m)   Pixel Height
1           DSC01232    2                        388
2           DSC01233    4                        310
3           DSC01234    6                        317
4           DSC01235    8                        310
5           DSC01236    10                       269
6           DSC01237    12                       275
7           DSC01238    14                       249
8           DSC01239    16                       282
9           DSC01240    18                       266
10          DSC01241    20                       268
11          DSC01242    22                       263
12          DSC01243    24                       262
13          DSC01244    26                       257
14          DSC01245    28                       263
15          DSC01246    30                       255

Table 5: Pixel Height vs. Distance in Metres

[Graph: "PxlH vs m" — pixel height in image (pixels) plotted against distance from vehicle (m)]
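One way to use this calibration data is to invert it, estimating the distance to an obstacle from its measured pixel height. The sketch below does this with a simple interpolation over Table 5; the project's own mapping may have used a fitted curve instead.

```matlab
% Calibration data transcribed from Table 5.
dist_m  = 2:2:30;                                 % distances from vehicle (m)
pixel_h = [388 310 317 310 269 275 249 282 266 268 263 262 257 263 255];

% The measurements are noisy and contain repeated heights, so keep only
% unique pixel-height samples before interpolating.
[ph_u, idx] = unique(pixel_h);                    % sorted unique heights
d_u = dist_m(idx);                                % matching distances
est = interp1(ph_u, d_u, 290, 'linear', 'extrap') % e.g. distance for a 290 px height
```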
Appendix C: CD

The CD contains the full and final algorithm for lane departure and obstacle detection. The two separate algorithms, the lane departure detection algorithm and the obstacle detection algorithm, are also included. Along with these are the final test image results and previous versions of the algorithms (see back cover).