Lane departure and obstacle detection algorithm for use in an automotive environment

Lane Departure and Obstacle Detection Algorithm for use in an Automotive Environment

By Diarmaid O Cualain
Supervised by Dr. Martin Glavin
B.E. Electronic Engineering Thesis
March 2005
Final Year Thesis, March 2006
Diarmaid O Cualain

Declaration of Originality

I hereby declare that this thesis is my original work except where stated.

Signature: ________________________ Date: ______________________
Abstract

Today, one of the largest areas of research and development in the automobile industry is road safety. Many deaths and injuries occur every year on public roads from accidents that technology could have been used to prevent. The latest vehicles sold boast many safety features that have helped to lower the number of accidents on the roads. These include seatbelts, crumple zones, anti-lock braking systems (ABS), air bags, traction control, electronic stability control (ESC), and more. These technologies have benefited from the large advances made in computer and electronic technology in the past few years to become cheaper, more robust, and more reliable. As such, it is predicted that many more safety technologies will be developed for use in vehicles of the future. Legislation, consumer needs, and other factors will only serve to increase the need for these devices.

For this project, an investigation into one of these safety systems was performed. This project consists of the research and development of an algorithm for an automotive system to detect when the vehicle drifts out of lane, or when the vehicle is within the safe stopping distance of an obstacle in its path. Once one of these situations is detected, a warning is issued to the driver. For input, the system has a single CCD camera module, along with the speed of the vehicle and the wiper setting to calculate the safe stopping distance. The system was able to identify to a satisfying level when the vehicle drifted out of lane. The obstacle and collision detection section of the algorithm also worked to a certain extent, but issues such as shadows in the images meant that it was only accurate over short distances. However, the main aim of this project was to show that such a concept is possible, and this has been proven to a certain extent. This report summarises the background, design, development, and testing of the algorithm for this project.
Acknowledgments

I would like to thank the following people for their help and support during the course of this project.

Firstly, I would like to thank my supervisor, Dr. Martin Glavin. He was always there to lend a hand or support when needed.

Second, I would like to thank Ciaran Hughes. If it were not for his answers to my many questions, I would have found it very difficult, if not impossible, to reach the final stage that I did with my project.

I would also like to thank the electronic technicians of Nuns Island, Aodh Dalton, Myles Meehan, Martin Burke, and Sean Porter. They helped with any technical difficulties that I encountered with equipment or software over the years.

Lastly, I wish to thank my parents for their help and support over the course of my studies.
Table of Contents

Declaration of Originality
Abstract
Acknowledgments
Table of Contents
Table of Figures
Table of Tables
List of Abbreviations
Chapter 1 Introduction
  1.1 Concept of Project
  1.2 Core Objectives
  1.3 Basic Assumptions
    1.3.1 High Contrast Roads and Lane Markings
    1.3.2 Dark Images
    1.3.3 Image Information
    1.3.4 Image Frame Rate
  1.4 Outline of Report
Chapter 2 Background Research
  2.1 Current Systems
    2.1.1 Citroen "LDWS"
    2.1.2 Mercedes-Benz "Distronic"
    2.1.3 Honda "HiDS"
    2.1.4 Toyota Lexus "AODS"
    2.1.5 Nissan "ICC"
    2.1.6 Volkswagen "ACC"
    2.1.7 BMW "ACC"
    2.1.8 Other Manufacturers' Systems
  2.2 CCD Camera
  2.3 MATLAB
  2.4 Lane Departure & Object Detection Algorithms
  2.5 Summary
Chapter 3 Database of Images
  3.1 Artificial Images
  3.2 "Real World" Images
    3.2.1 Various Road Surfaces and Markings
    3.2.2 Lane Detection Images
    3.2.3 Object Detection Images
  3.3 Summary
Chapter 4 Analysis of Project Components
  4.1 Standard Features of the Road
  4.2 Dividing the Algorithm into Modules
  4.3 Analysis of the Lane Detection Module
    4.3.1 Characteristics for Lane Detection
    4.3.2 Lane Detection Assumptions
  4.4 Analysis of the Lane Departure Detection Module
    4.4.1 Characteristics for Lane Departure Detection
    4.4.2 Lane Departure Detection Assumptions
Chapter 5 Lane Detection Module
  5.1 Solutions for Module
  5.2 The Algorithm
    5.2.1 MATLAB Image Matrices
    5.2.2 Horizon Filter
    5.2.3 Colour Filter
    5.2.4 Noise Removal
  5.3 Summary
Chapter 6 Lane Departure Detection Module
  6.1 Solutions for Module
  6.2 The Hough Transform
  6.3 Edge Detection
    6.3.1 Sobel Method
  6.4 Average Angles Algorithm
  6.5 Cluster Angles Algorithm
    6.5.1 Clustering Algorithms
    6.5.2 Implementation of Clustering Algorithm
    6.5.3 Inverse Hough Transform
    6.5.4 Calculation of Lane Departure
  6.6 Summary
Chapter 7 Object Detection
  7.1 Introduction
  7.2 Solutions for Module
    7.2.1 Area of Interest
    7.2.2 Object Detection
  7.3 Summary
Chapter 8 Collision Detection
  8.1 Solutions for Module
  8.2 Safe Stopping Distance Calculator
  8.3 Summary
Chapter 9 Testing
  9.1 Lane Detection and Departure Modules
  9.2 Obstacle and Collision Detection Modules
  9.3 Summary
Chapter 10 Conclusions and Future Work
  10.1 Conclusions
  10.2 Future Work
References
Bibliography
Appendix A: Sample Image Outputs from Testing
Appendix B: Tables and Graphs
Appendix C: CD
Table of Figures

Figure 1.1: Forward Facing Camera
Figure 1.2: Basic Outline of Project System
Figure 2.1: Distronic Radar System
Figure 2.2: Distronic's Internal Electronics
Figure 2.3: Honda's HiDS System
Figure 2.4: Bosch Radar Module Internals
Figure 2.5: General Flowchart for Radar-Based Systems
Figure 3.1: Artificially Generated Test Image
Figure 3.2: Motorway Surface with Road Markings
Figure 3.3: Lane Departure Stand Set-up
Figure 3.4: Object Detection Set-up
Figure 4.1: Road Features
Figure 4.2: Sub-Modules of Algorithm
Figure 4.3: Typical Image Frame from Camera
Figure 4.4: Drifting out of Lane
Figure 5.1: Artificial Image of Road Surface
Figure 5.2: Boundary Detection of Road Image
Figure 5.3: Cartesian and MATLAB Image Space
Figure 5.4: Image with Horizon Removed
Figure 5.5: Output from White Road Marking Filter
Figure 5.6: Output from Yellow Road Marking Filter
Figure 6.1: Early Lane Departure Detection Algorithm
Figure 6.2: Hough Data from 3 Points
Figure 6.3: Hough Space Graph
Figure 6.4: Sobel Edge Detection Masks
Figure 6.5: Canny Edge Detection of Yellow & White Road Markings
Figure 6.6: Hough Transformation of Yellow Lane Markings
Figure 6.7: Hough Transformation of White Lane Markings
Figure 6.8: Hough Peaks of Yellow Lane Markings
Figure 6.9: Hough Peaks of White Lane Markings
Figure 6.10: Sample Output from Hough Peak Detection
Figure 6.11: Issue with Average Angles Algorithm
Figure 6.12: Road Marking Segments
Figure 6.13: Findcluster GUI
Figure 6.14: Cluster Centroids
Figure 6.15: Output from Cluster Angles Algorithm
Figure 7.1: Rectangle Search Method
Figure 7.2: Area of Interest Filter
Figure 7.3: Shadow/Tyre Filter
Figure 7.4: Sobel Edge Detection of Area of Interest
Figure 7.5: Noise Removal
Figure 8.1: Forces on Stopping Vehicle
Figure 9.1: Artificial Test Image for Inverse Hough Transform
Figure 9.2: Artificial Test Image for Clustering Algorithm
Figure 10.1: Inter-Communication Between Vehicles
Table of Tables

Table 5.1: White RGB Values
Table 5.2: Yellow RGB Values
Table 3: Values of Theta and Rho Measured as Left Lane Departure Occurs
Table 4: Values of Theta and Rho as Right Lane Departure Occurs
Table 5: Pixel Height vs. Metres Distance
List of Abbreviations

CCD     Charge-Coupled Device
MATLAB  Matrix Laboratory
RGB     Red Green Blue
NRA     National Roads Authority of Ireland
ECU     Engine Control Unit
LED     Light Emitting Diode
ABS     Anti-lock Braking System
ACC     Adaptive Cruise Control
GUI     Graphical User Interface
Chapter 1 Introduction

1.1 Concept of Project

Road safety is an important issue for every nation in the world today. In Ireland, collisions on our roads cause many fatalities every year. The National Roads Authority of Ireland (NRA) [1] reported that, in 2003, 335 people died as a result of collisions on public roads. Governments have long been trying to reduce the number of accidents, with some success. Better roads and road markings, driver education, and so on have all helped curb the number of fatalities each year. Manufacturers of road vehicles have also been working to reduce the number of accidents. They have used the latest of today's technology to make vehicles that are much safer than their predecessors. Advances in computers, materials, electronics, and other areas have allowed them to decrease the number of accidents that their vehicles are involved in, and to improve the chance of the occupants walking away from an accident without injury. Today, many buyers of new vehicles list safety as one of their highest priorities when choosing a car. Manufacturers have long known this, and use safety as one of their main selling points, as can be seen in most Volvo, Mercedes-Benz, or Renault advertisements.

A new and fast-growing area of vehicle safety is collision detection and avoidance. This has only come about recently through advances in computer technology, image processing, and electronics, and the falling cost of hardware. Companies like Mercedes-Benz fit a radar system ("Pre-safe") on their S-Class cars that can detect obstacles in the path of the vehicle [2] and apply the brakes faster than the driver can. It also uses this system to provide adaptive cruise control.
This allows the car to regulate its speed according to the car in front while under cruise control.

With the falling cost of camera technology, many automobile manufacturers are starting to equip their vehicles with video cameras positioned at various places around the body of the vehicle. This is done in a bid to remove any "blind spots" the driver may have when driving or reversing. The cameras are also finding applications in other areas of road safety. Honda has developed a system that uses a camera mounted beside the rear-view mirror to recognise the lane the vehicle is travelling in, and applies this information to the steering to keep the vehicle centred in the lane. Honda has named this system "LKAS", or "Lane Keeping Assist System". It was developed in the hope of reducing the number of fatalities caused by cars drifting out of lane. A 1995 British Medical Journal report stated that around 20% of road accidents on roads such as motorways are caused by the driver falling asleep at the wheel [3]. It is these types of accidents that such systems are developed to prevent.

For this project, a system similar in concept to both Mercedes' "Pre-safe" and Honda's LKAS was developed. The aim of this project was to research and develop an algorithm for detecting when the vehicle drifts out of lane, or when it is within the safe stopping distance of an object ahead of it. Like LKAS, the lane detection part of this system is vision based, using a CCD camera [Figure 1.1]. For the object detection part, a different approach from Mercedes' was taken: instead of using a short-wave radar sensor, the camera was again used to calculate the distance to the object and determine whether it was within the safe stopping distance. This was done to see if such a system was possible, since any final implementation would be cheaper to produce.

Figure 1.1: Forward Facing Camera

1.2 Core Objectives

The objective of this project is the development of the lane departure and obstacle detection algorithm. This can be broken down into two main modules:

1. Lane departure detection module
2. Obstacle detection module
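The lane departure module developed in Chapter 6 locates straight lane markings with the Hough transform, in which each edge pixel votes for every line (parameterised by rho and theta) that could pass through it. The project itself was implemented in MATLAB and its code is not reproduced here; the following Python sketch illustrates only the underlying voting technique, with function name, bin sizes, and input format chosen for illustration.

```python
import math

def hough_lines(edge_pixels, height, width, theta_step_deg=1.0):
    """Accumulate Hough votes: each edge pixel (x, y) votes for every
    line rho = x*cos(theta) + y*sin(theta) that could pass through it."""
    diag = int(math.ceil(math.hypot(height, width)))   # largest possible |rho|
    thetas = [math.radians(-90.0 + i * theta_step_deg)
              for i in range(int(180.0 / theta_step_deg))]
    # Accumulator indexed as acc[rho + diag][theta index].
    acc = [[0] * len(thetas) for _ in range(2 * diag)]
    for (x, y) in edge_pixels:
        for t_idx, theta in enumerate(thetas):
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            acc[rho + diag][t_idx] += 1
    return acc, diag, thetas
```

A peak in the accumulator then gives the (rho, theta) of a dominant line, which is how the lane-marking angle is recovered in the cluster angles algorithm of Chapter 6.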
The task of the lane departure detection module is to analyse the frames of video captured by the CCD camera to evaluate the lane that the vehicle is travelling in. From this, it calculates whether the vehicle is drifting out of lane and, if so, warns the driver. The obstacle detection module analyses the captured video frames and processes them to find objects in the path of the vehicle. It determines the distance of these objects from the vehicle and checks whether they are within the safe stopping distance. It calculates the safe stopping distance from the vehicle's speed and the wiper setting. The wiper setting is used to determine if it is raining, so that the longer stopping distance required on a wet road surface can be taken into account. If an object is found to be within the safe stopping distance of the vehicle, a warning is issued to the driver [Figure 1.2].

Figure 1.2: Basic Outline of Project System
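The safe-stopping-distance logic just described (a distance derived from vehicle speed, lengthened when the wipers indicate a wet road) can be sketched as below. This is an illustrative Python sketch only: the reaction time and friction coefficients are generic textbook values, not the calibrated figures used in the thesis (Chapter 8 derives the actual calculation from the forces on a stopping vehicle).

```python
def safe_stopping_distance(speed_mps, wipers_on):
    """Total stopping distance = reaction distance + braking distance.

    Assumed illustrative values: 1.5 s driver reaction time, and
    tyre-road friction of 0.7 (dry) or 0.4 (wet, wipers on).
    """
    reaction_time_s = 1.5                 # typical driver reaction time
    g = 9.81                              # gravitational acceleration, m/s^2
    mu = 0.4 if wipers_on else 0.7        # wet vs. dry friction coefficient
    reaction_dist = speed_mps * reaction_time_s
    braking_dist = speed_mps ** 2 / (2.0 * mu * g)   # v^2 / (2*mu*g)
    return reaction_dist + braking_dist
```

With these assumed values, a vehicle at 30 m/s needs roughly 110 m to stop on a dry road and noticeably more in the wet, which is why the wiper setting feeds into the warning threshold.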
1.3 Basic Assumptions

Some basic assumptions were made to aid the understanding and the speed of development of the project. These assumptions were carefully made so as not to compromise the robustness of the algorithm. More specific assumptions relating to smaller areas of the project are given in [Chapter 4.3] and [Chapter 4.4]. The basic assumptions are as follows:

1.3.1 High Contrast Roads and Lane Markings

The system is to be developed to work on tarmac or concrete roads. These roads will have high-contrast road markings showing the lane that the vehicle is to travel in. The system does not have to function on dirt tracks or cross-country tracks. This assumption was made because it makes analysis of the problem much simpler and easier to manage. Also, the biggest use of the lane departure algorithm would be on motorways, where 20% of accidents are caused by the driver falling asleep at the wheel [4].

1.3.2 Dark Images

The system is designed to work during the daytime. Therefore, any images to be analysed by the algorithm will be relatively bright and high contrast, as mentioned in [Chapter 1.3.1]. This assumption was made to allow the development of the algorithm as a concept, without having to spend time developing methods to extract useful information from dark images.

1.3.3 Image Information

The information needed by the algorithm to detect both the vehicle drifting out of lane and objects in the path of the vehicle can be found from each camera frame individually. This means that the algorithm does not need to keep in memory any information from previous frames.
This assumption has the advantage of allowing the algorithm to respond more quickly when a problem is found: a warning can be output within 1/25th of a second, instead of having to examine several frames before reaching an answer (if n frames are needed to calculate an answer, n/25 seconds are taken). It also makes the problem simpler and easier to comprehend. The disadvantage of this assumption is that information is lost from one frame to the next. For example, when a car drifts over a lane marking, the frames taken by the camera show the road marking drifting from one side of the image to the other.

1.3.4 Image Frame Rate

The camera outputs images at approximately 25 frames per second, the standard frame rate for PAL video. However, since this system analyses each frame independently, the frame rate does not have a large influence on its performance.

Most of these assumptions were made to simplify the problem so that a better understanding could be gained of what needed to be done. This allowed most of the work to go into solving the concept of the problem, without having to spend too much time "tweaking" various aspects of the algorithm so that it could work under all conditions and inputs. Chapter 10.2 investigates various methods and ideas for improving the algorithm by removing some of these assumptions. These could be implemented in a future version of the algorithm to improve robustness even further.

1.4 Outline of Report

This thesis is divided into chapters, each dealing with a different aspect of the project. Each chapter has a short introduction explaining its subject, and a summary. The following is a short overview of each of the chapters:

Chapter 2: Outlines some of the research carried out at the beginning of the project. More research was done as the project developed and new areas needed to be investigated; this research is summarised in the relevant chapters.

Chapter 3: Summarises how a database of images was sourced and collected for testing during, and after, the development of the algorithm.
Chapter 4: Gives a brief summary of how the algorithm was broken down into modules to aid the understanding and development of the system.

Chapter 5: Reviews the work done in the development of the lane detection module of the algorithm, the problems encountered, and the solutions devised.

Chapter 6: Outlines the work done on the lane departure detection module.

Chapter 7: Gives a summary of the research and development work done on the obstacle detection part of the algorithm.

Chapter 8: Gives a brief summary of how a solution was devised to calculate the safe stopping distance of the vehicle, and how this was implemented in the final algorithm.

Chapter 9: Reviews the testing done on the various modules and on the final algorithm.

Chapter 10: Concludes the project and outlines possible areas for future work.

Appendices: Contain a sample of the test image outputs and some of the graphs and tables used.
Chapter 2 Background Research

This chapter details the research made into the subject matter of the project. It gives a brief description of some of the technologies used in current road vehicle systems, and lists some of the more advanced technologies that may have an application in these systems.

2.1 Current Systems

Similar systems already implemented in consumer vehicles vary from the relatively simple, like Citroen's system, to the complex, as in Mercedes' system. Many other automobile manufacturers have similar systems on the market or in development. Since the technology is still in the early stages of implementation, many of these systems are rather expensive. Because of this, manufacturers have only fitted them to their higher-end vehicle models. This is the case with nearly any new automotive technology when it first comes onto the consumer market. Previously, satellite navigation, air bags, seatbelt pre-tensioners, and various other technologies were only available to consumers who bought the highest-end model of vehicle. Now that these technologies have matured somewhat, the cost has dropped considerably, and nearly every manufacturer offers them as standard in the most basic models or as an affordable extra. Once these technologies start to be mass-produced, competition drives down the cost considerably. Therefore, it is reasonable to assume that "driver assistance" systems will be available in most consumer vehicles in only a few years' time.

2.1.1 Citroen "LDWS"

Citroen's LDWS (Lane Departure Warning System) is one of the simplest lane detection systems on the market today. It was developed to help curb the number of accidents caused by drivers falling asleep at the wheel on motorways. It consists of six sensors fitted on the underside of the front bumper. Each sensor has an infrared LED and a detection cell.
The cells detect when the vehicle drifts out of lane from variations in the infrared light reflected off the road markings. This information is sent to the ECU, which processes it and warns the driver through vibrating
motors built into the driver's seat. If the driver drifts to the left, the left motor vibrates; if to the right, the right motor vibrates.

The benefits of this approach are that it is relatively cheap and robust, and little computation is required. Other road markings, such as directional arrows, can also be detected without a reprogram or redesign. The drawback is that the underside of a vehicle is a notoriously harsh environment for electronic sensors: they must cope with flying gravel, dirt, vibration, water, salt and so forth. This system is available on the Citroen C5, C6, and other models. [5]

2.1.2 Mercedes-Benz "Distronic"

Mercedes-Benz (owned by DaimlerChrysler) was one of the first car manufacturers to introduce a system, in 1999, to determine the distance from the vehicle to an object ahead [Figure 2.1]. Their system is radar based, with the radar module located behind the Mercedes badge on the front grille. The technology was developed in partnership by Automotive Distance Control [6] (formerly ITT), Daimler-Benz's Temic, and the optics company Leica. Leica's expertise was mostly in infrared devices, while ITT's was in radar-based technology; Temic was mostly concerned with braking issues.

Figure 2.1: Distronic Radar System

This stand-alone module contains the transmitter, receiver, and electronics needed. This is unlike the other manufacturers, who use the ECU to perform the
processing. The unit contains three transmitters and receivers [Figure 2.2] [7]. The newest system, Distronic Plus, works as follows. A 24 GHz radar sweep is sent out over an 80-degree arc; this produces a reflection if there is an object within 30 metres. This is followed by a 77 GHz, 9-degree radar sweep that can detect objects up to 150 metres away [8]. Reflected waves undergo a Doppler shift, causing them to change frequency. The receivers pick up the reflected waves, and the electronics process the data to calculate the relative speed between the vehicles. As the processing has to be done at very high speed, on a large amount of data, signal processors are used.

Figure 2.2: Distronic's Internal Electronics

Distronic is an adaptive cruise control (ACC) system in that it can control the amount of acceleration and braking used to keep the vehicle a set distance behind the vehicle in front. It can work at speeds of up to 125 mph. The newer version, available on the 2006 Mercedes S-Class, can fully apply the brakes in an emergency. This is to prevent the driver from driving into the back of another vehicle if he or she falls asleep, has limited visibility because of the weather, and so on. If the conditions are calculated to be too hazardous for the system to judge on its own, an audible
warning is issued to the driver so that they can judge the situation themselves and act accordingly.

A big issue with this system is its price. Mercedes-Benz has not released separate price information for the module, but it is assumed to be a few thousand euro. However, prices for this technology are falling rapidly, and as more automobile manufacturers adopt such systems, the cost can only fall further.

2.1.3 Honda "HiDS"

Honda has also been researching methods to improve safety and driver comfort using lane departure sensing and obstacle detection. HiDS (Honda Intelligent Driver Support) is a system developed by Honda similar to Distronic, with some additions and changes. Like the Distronic system, it uses a radar-based approach for identifying the location of objects in the vehicle's path, called IHCC (Intelligent Highway Cruise Control). It also has a CCD camera module located near the rear-view mirror for identifying the lane that the vehicle is travelling in. This module is called LKAS (Lane Keeping Assist System). The umbrella name for the combined system is HiDS.

Figure 2.3: Honda's HiDS System
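The radar ranging used by Distronic, and by Honda's IHCC below, rests on the Doppler relation mentioned above: a reflection from a closing target returns shifted up in frequency by twice the target's relative speed divided by the carrier wavelength. As a rough illustration (a Python sketch, not production DSP code; the 77 GHz carrier is the figure quoted above, everything else is an assumption):

```python
# Illustrative sketch of Doppler-based relative speed estimation for an
# automotive radar.  The factor of 2 arises because the wave travels out to
# the target and back again.

C = 299_792_458.0   # speed of light, m/s
F_CARRIER = 77e9    # 77 GHz long-range carrier, as quoted for Distronic

def relative_speed(doppler_shift_hz, carrier_hz=F_CARRIER):
    """Relative (closing) speed in m/s from a measured Doppler shift.

    A positive shift means the target is approaching.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

def doppler_shift(rel_speed_ms, carrier_hz=F_CARRIER):
    """Inverse relation: Doppler shift produced by a given closing speed."""
    return 2.0 * rel_speed_ms * carrier_hz / C

# A target closing at 10 m/s (36 km/h) shifts a 77 GHz return by about 5.1 kHz.
shift = doppler_shift(10.0)
```

In practice the shift is extracted from the received spectrum by the signal processors mentioned above; the closed-form relation is only the final step.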
The LKAS system works as follows. A passive CCD camera situated just left of the driver's rear-view mirror analyses images taken of the road surface and markings. When it detects that the vehicle is drifting out of lane without the indicator being activated, it applies 80% of the torque needed to the steering to keep the vehicle centred in the lane; the driver applies the other 20%. The torque applied increases as the vehicle drifts nearer to the road markings. To deter the driver from letting the vehicle drive itself, the system automatically disengages when the driver's hands are removed from the steering wheel for more than three seconds. The system works at speeds between 65 and 100 km/h, and for road curves with a radius larger than 230 m [9].

The IHCC system is very similar to Mercedes' Distronic system. It uses a radar module to detect the relative speed between the vehicle and the vehicle in front, and from this it can adjust the vehicle's speed to keep a safe distance. As well as this, it has an extra input that the Distronic system lacks: an accelerometer to measure the yaw of the vehicle. This is used to prevent a common problem with the radar-based approach, where, when turning, the radar picks up a vehicle in a neighbouring lane instead of the vehicle directly ahead [10].

Another feature of this system is that the driver's seatbelt tightens if they follow too close to the vehicle in front. If the system detects an imminent collision, the seatbelt is tightened securely to help protect the driver when the crash occurs.

2.1.4 Toyota Lexus "AODS"

AODS (Advanced Obstacle Detection System) is a system developed by Toyota for the Lexus LS460. This system consists of a radar obstacle detection system on the front of the vehicle, and a stereo camera module located just above the rear-view mirror.
The radar system behaves in a similar way to the Distronic system, in that it locates obstacles in the path of the vehicle and acts accordingly. The disadvantage of these radar systems is that they cannot reliably detect obstacles such as animals or pedestrians, as the waves largely pass through them. Therefore, the visible spectrum is used by the stereo cameras to detect these objects and triangulate the distance to them. Along with the radar system, if an imminent collision is detected, the system sends an audible warning to the driver. If the collision is inevitable, the seatbelt is tightened, similar to Honda's HiDS system, and the ABS brakes are applied. This system can work day or night [11].
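The stereo triangulation that AODS adds follows the standard relation: depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels between the object's position in the left and right images. A minimal sketch, with a hypothetical focal length and baseline rather than Toyota's actual figures:

```python
# Sketch of stereo depth from disparity: Z = f * B / d.  The focal length
# and baseline below are illustrative guesses for a windscreen-mounted rig;
# a real system would obtain them from camera calibration.

def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.30):
    """Distance in metres to an object matched across the stereo pair."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or stereo matching failed")
    return focal_px * baseline_m / disparity_px

# The nearer the object, the larger the disparity:
near = stereo_depth(70.0)   # 3.0 m
far = stereo_depth(7.0)     # 30.0 m
```

The inverse relationship between disparity and depth is why stereo range accuracy degrades with distance, whereas radar accuracy does not.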
Stereo cameras have an obvious cost disadvantage over single-camera vision systems. Therefore, for this project, a single-camera approach to object detection was investigated.

2.1.5 Nissan "ICC"

Nissan has also been researching similar areas of safety, but instead of developing a radar-based approach to obstacle detection like the other manufacturers, they use an infrared laser system, which they call ICC or "Intelligent Cruise Control". The benefit of this system is that it can detect pedestrians and animals without having to resort to separate cameras, as in Toyota's method. The vehicle also adapts its driving speed to the speed of the vehicle in front using the laser. The disadvantage is that, unlike radar, it cannot easily see through to the vehicle in front in foggy or rainy conditions, as was also the case with Toyota's earlier system [12].

2.1.6 Volkswagen "ACC"

Like most manufacturers, Volkswagen's version of Adaptive Cruise Control (ACC) is radar based. It is available as an optional extra on the Phaeton and on the 2006 Passat. The module was co-developed by the French aerospace company Thales and TRW, at a cost of 80 million euro. It is sold under the trade name "Autocruise".

Like Distronic, the radar system operates at 77 GHz. The circuitry is based on MMIC (Monolithic Microwave Integrated Circuit) technology to detect the reflected waves. The device can record the position and relative speed of multiple vehicles ahead. Since it is radar based, it does not need the clear optical path required by infrared systems, meaning it is not as affected by fog or heavy rain as infrared devices are.

2.1.7 BMW "ACC"

BMW, not to be left behind, have an Adaptive Cruise Control (ACC) system on the 3, 5 and 7 Series models. The system was developed by Bosch [Figure 2.4], and
like most other systems on the market, it uses 77 GHz radar. Four overlapping radar beams scan up to 200 m ahead of the vehicle. Instead of having a separate yaw-sensing accelerometer, as in the system employed by Honda, Bosch receives this data from the ESP (Electronic Stability Program). This is used to inform the radar which vehicle it needs to assess the speed and distance of when turning on a motorway. The device works at speeds between 30 and 180 km/h, and can successfully detect objects up to 120 m away [13].

Figure 2.4: Bosch Radar Module Internals

2.1.8 Other Manufacturers' Systems

Ford, Jaguar, Cadillac, Audi and many other car companies have systems similar to the ones above, or some variation of the methods given. Most utilise a radar-based approach to detect objects in front. A general flowchart for these radar-based devices is given in Figure 2.5 [14].

Figure 2.5: General Flowchart for Radar-Based Systems
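The decision stage at the end of such a flowchart can be caricatured as a simple rule on range and relative speed. The sketch below is purely illustrative; production systems track many targets, filter heavily, and blend throttle and brake continuously, and the gap threshold here is an arbitrary assumption:

```python
# Caricature of the decision stage of a generic radar ACC flowchart: given
# one target's range and closing speed, choose a coarse action.  Thresholds
# are illustrative assumptions, not any manufacturer's values.

def acc_decision(range_m, rel_speed_ms, set_gap_m=40.0):
    """Return 'brake', 'coast', or 'accelerate' for a single radar target.

    rel_speed_ms > 0 means the gap to the target is closing.
    """
    closing = rel_speed_ms > 0
    if range_m < set_gap_m and closing:
        return "brake"
    if range_m < set_gap_m:
        return "coast"
    return "accelerate"

actions = [acc_decision(25, 3), acc_decision(25, -1), acc_decision(80, 0)]
# -> ['brake', 'coast', 'accelerate']
```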
As most of these systems use 77 GHz microwave radar, there are some inherent disadvantages in detecting certain objects. As mentioned in chapter 2.1.4, objects such as animals or pedestrians cannot be reliably detected. Other wavelengths could be used to scan for these objects, such as an infrared laser, but these suffer from needing a clear optical field of view: fog, rain and similar conditions can adversely affect their performance. Using the visible spectrum, as in Toyota's method, also suffers from this issue. Using both radar and the visible spectrum does away with many of these disadvantages: when environmental conditions mean that the visible cameras cannot be used, more of the radar data can be relied upon, and vice versa. Road markings are designed to be most effective in the visible spectrum, as they need to be seen by the human eye. Therefore, there are many advantages to using a CCD camera for the detection of obstacles and lane markings.

2.2 CCD Camera

CCD cameras are a mature technology, first developed at Bell Labs in 1969. Fairchild Semiconductor first manufactured the devices for commercial use in 1974 [15]. The devices work as follows: a lens projects the image onto a capacitor array. Each capacitor accumulates an electric charge proportional to the light intensity, via the photoelectric effect. Once the array has been exposed to the image, a control circuit instructs each "pixel" (capacitor) to transfer its charge to its neighbour. The final capacitor in the series sends its charge to an amplifier circuit that converts the charge into a voltage. The process is repeated, converting the entire array into a signal of varying voltages.
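The charge-shifting ("bucket brigade") read-out just described can be modelled in a few lines. This is a toy sketch, not a device model; the conversion gain of 1 V per unit charge is an arbitrary assumption:

```python
# Toy model of CCD read-out: on every clock, each charge moves one cell
# towards the output, and the cell that falls off the end is converted to a
# voltage by the output amplifier (assumed gain: 1 V per unit charge).

def ccd_readout(charges):
    """Shift a whole row out, one cell per clock, returning the voltages."""
    row = list(charges)
    voltages = []
    while row:
        voltages.append(row.pop())  # last capacitor feeds the amplifier
        # the remaining charges have each moved one cell closer to the output
    return voltages

# Pixels emerge in reverse order of their position along the row:
v = ccd_readout([0.1, 0.5, 0.9])  # -> [0.9, 0.5, 0.1]
```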
The signals can then be sampled, digitised, and sent to another device for processing or storage in memory.

An interesting point to note is that CCD modules are typically as sensitive to the infrared spectrum as they are to the visible spectrum. Because of this, many manufacturers place an infrared filter over the array so that only the visible spectrum is passed through; many electronics enthusiasts have removed these filters to take pictures using both the visible and infrared spectrum. This shows that infrared CCD cameras have experienced the same speed of technological development as visible-spectrum CCD cameras, since they are basically the same technology. The cost per
module has also fallen in tandem. This has led to their use by car manufacturers on their vehicles.

Colour images are formed by placing a "Bayer" mask over the array, which separates the light received at each group of four pixels into two green samples, one red, and one blue. The luminance of the image is collected at every pixel, but the colour resolution falls as a result of four array pixels being used per colour pixel in the image.

2.3 MATLAB

MATLAB is a programming environment ideal for scientific computations that require heavy use of arrays or graphical analysis of data [16]. The syntax of the programming code is very similar to C, and is also very forgiving of errors made by the programmer. It is an interpreted language, meaning that no compiler is needed, and scripts are saved as ".m" files. Another important note is that array indices begin at 1, compared to 0 in Java or C.

One of the most powerful aspects of MATLAB is that many commonly used functions are already built into the program. For example, array-sorting algorithms, Hough transforms and so on are quickly and easily implemented because of this. This makes MATLAB a very useful environment for testing approaches to solving problems before committing them to C, Java, or other programming languages. As the main aim of this project was to see whether a forward-facing camera could be used for lane departure and collision detection, MATLAB was used for this reason.

2.4 Lane Departure & Object Detection Algorithms

Much research was done into lane departure and object detection algorithms for this project. Many of the lane detection algorithms found were for self-driving vehicles, not for lane departure detection as was needed by this project.
However, some useful information was found in these papers and websites for the project.

Information on obstacle and collision detection algorithms was more difficult to find: not as many papers or websites dealt with this subject as compared with lane detection. Similar to the problems faced with the lane departure algorithms, many of these were fairly complex and beyond the scope of this project.
The details of the research made for the lane departure and collision detection parts of the project are outlined in their respective chapters.

2.5 Summary

This chapter has presented a selection of the research that was performed into the various technologies needed for this project. However, as the project progressed, it was realised that research had to be made into further areas; where this was done, it is outlined in the corresponding chapters. After the basic literature search was done, a database of images had to be set up. The work behind this is described in the next chapter.

Chapter 3 Database of Images

To begin developing the code for the project, a database of images needed to be gathered for analysis and testing. This chapter outlines the work undertaken in acquiring the various images for the database.

3.1 Artificial Images

For early development of the algorithms, a simplified version of the "real world" images needed to be generated. These were made artificially using Microsoft Visio, Microsoft Paint, and Jasc Paint Shop Pro. A sample image can be seen in [Figure 3.1].

Figure 3.1: Artificially Generated Test Image
3.2 "Real World" Images

3.2.1 Various Road Surfaces and Markings

Since the system was to work on images grabbed from a colour video camera mounted on a vehicle, a sample of such images needed to be added to the database. It was necessary to have images of roads and motorways, since this is the environment in which the system needed to function. Dirt tracks, graded roads, or bare earth were not needed, as these were beyond the scope of this project (see Chapter 1.3.1). Preferably, the images needed to be taken at approximately 1 to 1.5 metres above the surface of the road, with the horizon roughly on the centreline of the image. So, a search for images of asphalt, tar-and-chip, and concrete road surfaces, with various road markings, was undertaken.

Searching began on the Internet, with limited success. It was found that most images were "artistic" in nature, i.e. taken at dusk or dawn, or in black and white, sepia, and so on; not so useful for this project. Because of this, it was necessary to take the pictures as fieldwork at various locations around the area.

For taking the images, a Sony Mavica CD500 and a Sony W800 were used. The pictures were taken at a standard height of 1 m above the surface of the road. The majority of the images of road surfaces and road markings were taken from inside a moving car. It was found that the dashboard was just below 1 m off the surface of the road, so taking images from inside the vehicle was a simple enough affair [Figure 3.2]. To take a picture, the horizon was centred in the middle of the frame, and the image was taken facing directly ahead. It was found that the Sony Mavica took the highest-quality images, so it was used to take the remainder of the pictures.
Also, since the images needed to be taken in a similar fashion to each other, the zoom was at its widest, the flash was off, auto brightness was on, and so on.
Figure 3.2: Motorway Surface with Road Markings

3.2.2 Lane Detection Images

Later in the project, it was found that more accurate images of the lane markings were needed to calibrate the lane detection module of the system. A quiet road with good, solid road markings was found in a local industrial estate. A stand was made from timber to sit the camera on, ensuring that the pictures were taken at a standard height of 1 m. For the base of the stand, the approximate width of a vehicle was needed: the widths of a Renault Laguna and an Opel Corsa were measured, the average calculated, and the base cut to this size, 1.675 m. The reason for this was so that, as the camera on the stand was moved from left to right across the lane, it was known when the lane boundary was crossed. A measurement from the centre of the stand to the centre of the road was taken for each image. The pictures were taken at 200 mm intervals. The results are given in Appendix B. The set-up is shown in [Figure 3.3].
Figure 3.3: Lane Departure Stand Set-up

3.2.3 Object Detection Images

For the object detection part of the module, a number of pictures were taken from a car as it followed behind another vehicle. However, later in the project, it was found that images in which the distance between the camera and the vehicle was known were needed. To do this, the same stretch of road as was used in chapter 3.2.2 was revisited. For this set-up, the same stand was used again. Pictures were taken at 2 m intervals behind a vehicle, as shown in [Figure 3.4].

Figure 3.4: Object Detection Set-up
However, after analysis of the images, it was found that they were not as accurate as required. Therefore, a different set-up was needed. A long tape was laid down in the centre of the lane for a distance of 30 m. Strips of paper were laid down horizontally at 2 m intervals along the length of the tape, held down with sections of household cabling and loose stones. A picture was then taken along the length of the tape with the horizon in the centre of the image. This was done so that an accurate height in pixels, from the base of the image to each strip of paper, could be obtained. The reason for this is explained in chapter 7.2.2.

3.3 Summary

Overall, just over 150 images were added to the database. These images were to prove invaluable in the development, calibration, and testing of the system. In the chapters ahead, an explanation will be given of the reasons why some of the images were taken. Now that the database was built, work could begin on the analysis of the system's components. This is outlined in Chapter 4.
Chapter 4 Analysis of Project Components

Before work could begin on developing the algorithm, it needed to be divided into logical sub-modules. This was done to help understand the system in the problem domain, and to aid in the programming of the code. These modules were then studied separately to determine how to go about designing them. This consisted of looking into which inputs they would need to function, how they would be sub-divided again in a divide-and-conquer approach, what outputs they would have, and so on. This chapter describes how the main system was divided into these modules and analysed before committing them to code.

4.1 Standard Features of the Road

Before the rest of the project analysis can be discussed, the features of the road surface need to be pointed out. These can be seen in [Figure 4.1].

Figure 4.1: Road Features
4.2 Dividing the Algorithm into Modules

As mentioned in [Chapter 1.2], the main algorithm could be separated into two main sections: the lane detection and lane departure detection module, and the obstacle detection module. This can be seen in [Figure 1.2]. After further analysis, it was seen that these could be broken down again into four separate sub-modules [Figure 4.2].

Figure 4.2: Sub-modules of the Algorithm

Work could then begin on each of the modules.

4.3 Analysis of the Lane Detection Module

4.3.1 Characteristics for Lane Detection

The sole input to the lane detection module is the current image frame grabbed by the CCD camera [Figure 4.3]. Before an algorithm for this module was attempted, there needed to be a definitive idea of what the algorithm needed to look for in the input. So images of the road surfaces were analysed, assumptions made, and the characteristics of the road features broken down. The following is a list of these characteristics.
a) The road surface is normally dark in colour.
b) The middle road line markings are normally white. These lines can be a continuous single line, continuous double lines, broken lines, or variations of these. Even when they are continuous, they can often appear discontinuous to the camera from wear, surface water, etc., and so should be treated as such.
c) Side road markings are normally yellow in colour and discontinuous.
d) "Cats' eyes" reflectors are normally the same colour as the lines that they sit on, and are positioned at approximately one-metre intervals.
e) The road surface is normally trapezoidal in shape when viewed by the camera, or approximately trapezoidal when the road is turning.
f) Road lines appear approximately straight at relatively short distances in front of the car, even when the road is turning.
g) The horizon of the road in the image is approximately half way down the image.

Many other characteristics for lane detection existed in the images, but only the ones that were thought to be of most use and easiest to detect were scrutinised.

Figure 4.3: Typical Image Frame from Camera
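Characteristics (b) and (c) suggest a simple per-pixel colour test: white markings have high, roughly equal red, green and blue values, while yellow markings have high red and green but noticeably less blue. The sketch below illustrates the idea only; its thresholds are guesses, not the calibrated values used in the project:

```python
# Rough per-pixel colour test motivated by characteristics (b) and (c)
# above.  All threshold values are illustrative assumptions.

WHITE_MIN = 180      # all three channels at least this bright
YELLOW_RG_MIN = 150  # red and green at least this bright
YELLOW_B_GAP = 60    # blue at least this far below the red/green average

def classify_pixel(r, g, b):
    """Return 'white', 'yellow', or 'road' for one RGB pixel (0-255)."""
    if r >= WHITE_MIN and g >= WHITE_MIN and b >= WHITE_MIN:
        return "white"
    if r >= YELLOW_RG_MIN and g >= YELLOW_RG_MIN and (r + g) / 2 - b >= YELLOW_B_GAP:
        return "yellow"
    return "road"

examples = [(230, 228, 225), (210, 190, 90), (70, 72, 75)]
labels = [classify_pixel(*p) for p in examples]  # white, yellow, road
```

In practice a fixed threshold is fragile under changing brightness, a point returned to in chapter 5.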
4.3.2 Lane Detection Assumptions

It was clear from the characteristics in chapter 4.3.1 that some assumptions could be made to simplify the development of the code without compromising its robustness.

1. All road markings are either yellow or white in colour.
2. The horizon vanishing point is always on the horizontal centre axis of the image.
3. There are always road markings on the right-hand side of the vehicle in the image (or on the left-hand side for countries that drive on the right of the road). These lines are always white in colour. (In some European countries, temporary road markings for the centre of the road can be yellow, red or blue; however, this is ignored for this project.)
4. The road line segments, continuous or discontinuous, whether on the left or the right, or in a nearby lane (on motorways), are aligned with one another.

Taking these assumptions into account, the characteristics were re-examined, and the most promising for use in the lane detection module were used in the algorithm.

4.4 Analysis of the Lane Departure Detection Module

4.4.1 Characteristics for Lane Departure Detection

When a vehicle drifts out of lane, a forward-facing camera can pick up on some noticeable characteristics of this occurring. These are as follows:

1. The lane markings for the side of the lane that the vehicle is drifting over pass from one side of the image to the other. This means that when the vehicle is over one of the lane markings, the markings are in the centre of the image. This can be clearly seen in [Figure 4.4].
2. The angle of the lane markings to the horizontal increases from an acute angle to an obtuse one as the vehicle drifts over them.

The final algorithm must be able to recognise at least one of these characteristics for it to function properly.
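The two characteristics above can be combined into a simple departure test once a lane line has been detected and summarised by where it meets the bottom of the frame and its angle to the horizontal. The sketch below is hypothetical: the 640-pixel width matches the frames used in the project, but the band and angle thresholds are assumptions, not the project's calibrated values:

```python
# Sketch of a lane-departure test based on the two characteristics above.
# A detected lane line is summarised by the column where it crosses the
# bottom of the image and its angle to the horizontal, in degrees.

IMAGE_WIDTH = 640

def departure_warning(base_column, angle_deg, centre_band=80):
    """True if the right-hand lane line indicates the car is crossing it.

    Characteristic 1: the line's base drifts into the centre of the image.
    Characteristic 2: its angle approaches 90 degrees (acute to obtuse) as
    the car passes over it.
    """
    in_centre = abs(base_column - IMAGE_WIDTH / 2) < centre_band
    near_vertical = abs(angle_deg - 90.0) < 15.0
    return in_centre and near_vertical

# Normal driving: line low-right at a shallow angle, so no warning.
ok = departure_warning(base_column=560, angle_deg=40.0)      # False
# Drifting: line near the image centre and nearly vertical, so warn.
drift = departure_warning(base_column=330, angle_deg=85.0)   # True
```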
Figure 4.4: Drifting out of Lane

4.4.2 Lane Departure Detection Assumptions

To help simplify understanding of the lane departure detection problem, a few assumptions were made with regard to the image that the module needed to analyse. Some of these assumptions were removed later in the project to help make the algorithm more robust in real-world situations. These assumptions are listed below:

1. There is always a road line to the right of the vehicle. On motorways and rural roads, this is nearly always the case. In countries where driving is on the right-hand side of the road, this is generally not the case; however, it is relatively simple to adapt the algorithm for right-hand-drive applications.
2. The road line to the right of the vehicle is always white. This is nearly always the case where assumption (1) is true.

After these assumptions were teased out, work could begin on the development of the modules.
Chapter 5 Lane Detection Module

Logically, the lane detection module was the best to begin work on. This module was to supply data to the lane departure detection module, and so was needed for testing when work began on that module. At the time, it was also thought that it would be needed by the obstacle detection module: it was believed that the "region of interest", i.e. the detected lane that the vehicle was travelling in, would be needed for obstacle detection. However, this proved not to be the case, as explained in chapter 7.2.1. To complete this module, various ideas were tried and tested until a suitable one was found. This chapter summarises the work done in completing the lane detection module.

5.1 Solutions for the Module

In finding a solution for this module, many papers and website articles were researched. Some were found to be helpful, but most were too complex for this project. There were many papers written on lane detection algorithms for automotive applications, but these were studies into algorithms for vehicles that could drive themselves. For this project, all that was needed was a warning system for when the vehicle had drifted out of lane. Most of the algorithms researched detected the lane that the vehicle was travelling in for a large distance ahead. Some involved transformations of the image so that it appeared as a "top" view, with the lane detection then performed on this. Most involved rather complex algorithms, such as the "B-Snake" model [17], which was beyond the scope of this project. Nevertheless, some useful information was found in these papers, including the issues posed by shadows on the road surface, rainwater hiding road markings, and how branches in the roads and other road features confused the algorithms.
Some basic attempts at solving these issues were undertaken in the project, with some success.

One of the earlier ideas was to superimpose a line at a set angle from the bottom right-hand corner of the image. This line could then be moved pixel by pixel to the left, and after every step a histogram could be taken along the line [Figure 5.1]. If the yellow or white value of the histogram reached a certain level (i.e. the line was directly on the middle road markings in the image), the algorithm would know it had found the
right-hand side marking of the lane. This could then be used to calculate the left-hand side of the lane, given the standard width of a lane. This approach was found to be overly complicated, and, with all the iterations involved, probably too slow in a real-world situation, where approximately 25 fps (frames per second) would have to be processed.

Figure 5.1: Artificial Image of Road Surface

Various other methods were thought up, but most were overly complicated, difficult to implement, unreliable, or some combination of these. A better approach to lane detection was needed.

One useful document found was a MATLAB 7.0 demonstration algorithm that could differentiate the lane markings on the road from the road surface [18]. Compared with the other documents researched, this was relatively simple and easy to understand. More importantly, it had code written in MATLAB. Since this was the first module of the program to be developed, it gave a good idea of how such an algorithm could be developed in MATLAB. The algorithm works as follows. The image frame is stored in MATLAB as a 480 x 640 matrix (rows by columns, i.e. image height by width). The greyscale image is brightened and converted to binary: a threshold value is selected, and the greyscale value of each pixel is compared against it. If the pixel value is larger, it is given a value of 1; if smaller, a value of 0. Noise is removed from the image, and then a boundary scan is performed [Figure 5.2]. Any boundaries that are not long and thin are removed, leaving the lane markings.

At first, it appeared that, after some small changes, this algorithm could be used for the module. But after more study, it was seen that no useful data as such could be parsed from the final image for use in the other modules.
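For reference, the threshold-and-filter stages of that demonstration can be sketched as follows. This is an illustrative Python reimplementation of the idea, not the MATLAB demo code itself, and the threshold and elongation values are assumptions:

```python
# Sketch of the demonstration algorithm's core: threshold a greyscale image
# to binary, find connected regions, and keep only those whose bounding box
# is "long and thin", as lane markings are.  Values are illustrative.

def find_markings(image, threshold=128, min_elongation=3.0):
    """image: list of rows of greyscale values (0-255).

    Returns bounding boxes (top, left, bottom, right) of bright regions at
    least `min_elongation` times longer than they are wide.
    """
    h, w = len(image), len(image[0])
    binary = [[1 if px > threshold else 0 for px in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if binary[r][c] and not seen[r][c]:
                # flood-fill one connected region, tracking its bounding box
                stack, box = [(r, c)], [r, c, r, c]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    box = [min(box[0], y), min(box[1], x),
                           max(box[2], y), max(box[3], x)]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                height = box[2] - box[0] + 1
                width = box[3] - box[1] + 1
                if max(height, width) >= min_elongation * min(height, width):
                    boxes.append(tuple(box))
    return boxes

# A bright 1x5 streak qualifies as a marking; an isolated bright dot does not.
img = [[0] * 8 for _ in range(5)]
for x in range(1, 6):
    img[1][x] = 200          # long thin "marking"
img[3][6] = 200              # blob of noise
boxes = find_markings(img)   # -> [(1, 1, 1, 5)]
```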
In particular, chapter 4.4.1 identified certain characteristics of lane departure that needed to be extracted from the data: either the angle of the lane markings, or the
general position of the lane markings. Neither was explicitly identified by the algorithm. Also, the algorithm was not very robust: a change in brightness did not produce a change in the binary threshold value, resulting in more boundaries being identified. This increased the risk of a boundary being incorrectly identified as a lane marking. Furthermore, by converting the image from RGB (as output by the camera) to greyscale, some valuable colour information was lost. This is important, as lane markings are of specific colours: yellow or white. A new algorithm needed to be developed to solve these issues. Nevertheless, some parts of this algorithm were to prove useful in the final algorithm.

Figure 5.2: Boundary Detection of Road Image

5.2 The Algorithm

What was needed was an algorithm that could exploit this colour data rather than dispose of it by converting directly to greyscale. A filter therefore needed to be developed that would pass only the yellow and white spectrum of the road markings. To do this, some research was done into MATLAB image matrices.
5.2.1 MATLAB Image Matrices

In MATLAB, the basic data structure is the array. This is also true of images stored in MATLAB. Images can be thought of as made up of "pixels", or dots. These can vary in number; for example, VGA (the standard pixel dimensions of many early monitors and cameras) is 640 pixels wide by 480 tall. When such an image is stored as an array in MATLAB, it corresponds to a two-dimensional array of 640 columns by 480 rows. This allows powerful image processing work to be done using MATLAB. It is important to note that the first element in a MATLAB image matrix, (1, 1), is the top-left pixel in the image; in Cartesian co-ordinates, this would be the bottom-left pixel. This can be seen in [Figure 5.3]. For the remainder of this report, whenever co-ordinates in images are given, they refer to the MATLAB image space co-ordinates.

Figure 5.3: Cartesian and MATLAB Image Space

RGB colour image arrays in MATLAB contain an extra dimension to store the colour information. This dimension is 3 elements wide, the elements corresponding to the red, green, and blue colour components of the image. Thus each pixel can vary in colour from (0, 0, 0) (black) to (255, 255, 255) (white).

5.2.2 Horizon filter

To begin with, the image was checked to see if its dimensions were 480 by 640; if not, it was resized to these dimensions. As mentioned in chapter 4.3.1, assumption (g), the horizon in the image sits on the horizontal centre line of the image. Since only the image information below this line is needed, the data above this
line could be removed. Later in the project, it was also found that this image area often contained data that confused the algorithm. Therefore, the image information above the horizontal centre line was removed using a filter. An artificial 480 by 640 binary image was made, with 0s where the image data of the original image was to be removed and 1s where it was to be left unchanged. A loop was performed over each pixel in the original image. On each iteration, the corresponding pixel in the binary image was checked: if it was a 1, the RGB value of the pixel was left unchanged; if not, it was changed to (0, 0, 0). The output can be seen in [Figure 5.4]; the original image can be seen in [Figure 4.3].

Figure 5.4: Image with Horizon Removed

5.2.3 Colour filter

To separate the road marking data from the rest of the image data, a colour filter was needed. An RGB window was created against which the RGB value of every pixel in the image would be compared: if the RGB value fell inside the window limits, the pixel was given a value of 1; if not, a 0. To set the window, the pixel values of the sections of the image that contained the road markings were analysed. Using a plot of the image in MATLAB, the RGB values of random pixels in these sections were added to a Microsoft Excel sheet and the maximum and minimum values were found. These values were used as the window limits in the filter. The values are
shown in [Table 5.1] and [Table 5.2]. The final window was widened by 10 on each side to allow some of the outlying pixels to pass. Separate filters were created for white and yellow.

Average white road markings RGB value

Pixel     Red   Green   Blue
1         182   200     174
2         193   209     183
3         191   205     179
4         189   201     177
5         198   208     181
6         181   193     169
7         191   203     181
8         223   232     211
9         194   209     180
10        193   205     181
min Val   181   193     174
max Val   223   232     211

Table 5.1: White RGB Values

Average yellow road markings RGB value

Pixel     Red   Green   Blue
1         169   170     94
2         174   173     90
3         176   173     96
4         184   183     101
5         181   179     105
6         176   174     100
7         181   177     106
8         181   182     124
9         179   173     115
10        181   181     107
min Val   169   170     94
max Val   184   183     124

Table 5.2: Yellow RGB Values

The image output can be seen in [Figure 5.5] and [Figure 5.6].
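For illustration, the window comparison described above can be sketched in Python with NumPy (a sketch, not the MATLAB code used in the project; the limits below are the Table 5.1 white values, widened by 10 per side as in the text):

```python
import numpy as np

# Window limits from Table 5.1 (white road markings), widened by 10 on each side.
WHITE_MIN = np.array([181, 193, 174]) - 10
WHITE_MAX = np.array([223, 232, 211]) + 10

def colour_window_filter(image, lo, hi):
    """Binary mask: 1 where all three RGB channels fall inside [lo, hi]."""
    inside = (image >= lo) & (image <= hi)
    return inside.all(axis=2).astype(np.uint8)

# Tiny example: one pixel with typical white-marking values, the rest black.
frame = np.zeros((4, 4, 3), dtype=int)
frame[1, 1] = (190, 200, 180)
mask = colour_window_filter(frame, WHITE_MIN, WHITE_MAX)
```

Only the pixel whose red, green, and blue values all lie inside the window survives into the binary output.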
Figure 5.5: Output From White Road Marking Filter

Figure 5.6: Output From Yellow Road Marking Filter

After testing the filters on a number of images, it was found that when the image was darker than normal, the RGB values of the white and yellow road markings dropped below the window of the filter. There was a similar problem for brighter-than-normal images. Various methods were proposed for solving this problem:
1. Increase the window size. This would be the easiest to implement, and was done early in the project. However, it was not robust to changing conditions, and many pixels that were not road markings passed through, so a different solution was needed.

2. Use a feedback loop: when not enough pixels are found, lower the RGB limits of the window. Two issues were identified with this approach. Firstly, the RGB values of yellow do not decrease linearly as it becomes darker, so the window could travel down the RGB scale without ever finding the number of pixels needed to exit the feedback loop. Secondly, the road marking pixels picked up on the first iteration of the feedback loop would be lost on later iterations as the window moved down the RGB scale.

3. Use a feedback loop: when not enough pixels are found, increase the window size until enough are found. This feedback loop was found to work best on the image database.

A loop iteration limit of 50 was set for white, and 20 for yellow, in case there were no road markings in the image. A pixel count threshold of 2000 pixels was chosen for both filters.

5.2.4 Noise Removal

Spurious pixels passed by the filters had to be removed from the binary image before lane departure detection could be performed. These pixels normally appeared in small groupings, or with no neighbouring pixels at all, whereas pixels belonging to road markings appeared in large groupings of more than 10 or so pixels. What was needed, therefore, was an algorithm that could remove the smaller groupings of pixels but leave the others untouched. From studying the MATLAB 7.0 demonstration algorithm [chapter 5.1], it was realised that MATLAB has an inbuilt function called "bwareaopen" that does exactly this.
From testing, it was found that the algorithm worked best when a minimum limit of 4 pixels per group for yellow, and 10 for white, was used to remove the noise.
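The effect of bwareaopen can be sketched in plain Python/NumPy (an illustrative stand-in using 4-connectivity and a flood fill, not the toolbox implementation):

```python
import numpy as np

def area_open(binary, min_size):
    """Remove connected groups of 1-pixels smaller than min_size,
    a minimal stand-in for MATLAB's bwareaopen (4-connectivity)."""
    out = binary.copy()
    seen = np.zeros(binary.shape, dtype=bool)
    rows, cols = binary.shape
    for r in range(rows):
        for c in range(cols):
            if binary[r, c] and not seen[r, c]:
                # Flood-fill to collect one connected component.
                stack, comp = [(r, c)], []
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                # Erase the component if it is too small to be a marking.
                if len(comp) < min_size:
                    for y, x in comp:
                        out[y, x] = 0
    return out

# An isolated noise pixel is removed; a 5-pixel marking fragment survives.
speck = np.zeros((6, 6), dtype=np.uint8)
speck[0, 0] = 1
speck[3, 1:6] = 1
cleaned = area_open(speck, min_size=4)
```

With min_size set to 4 (the yellow limit quoted above), the lone pixel disappears while the larger group is left untouched.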
5.3 Summary

We have seen how research was carried out and work done on the development of the lane detection module. Once this module was found to be working to a satisfactory level, work could begin on using its output to detect when lane departure occurs. The next chapter explains how this was done.
Chapter 6 Lane Departure Detection Module

After work was finished on the lane detection module, and testing had shown that it was working to a satisfactory level, work began on developing the lane departure module. This proved to be one of the most difficult and time-consuming areas of the project. At first, a relatively simple approach was devised that was thought to be suitable for the application. Unfortunately, testing revealed a major issue, so a different approach was needed and work on the module restarted. The final algorithm was tested using the image database and found to work under most conditions. This chapter summarises how the algorithm was devised and developed into the final code of the system.

6.1 Solutions for the module

Various solutions were derived and analysed for this module. One early idea was to check a certain area of the image for road markings, as seen in [Figure 6.1]. When the number of marking pixels in this area reaches a certain threshold, a warning is issued informing the driver that they have drifted out of lane. This algorithm makes use of the first characteristic of lane departure identified in chapter 4.4.1.

Figure 6.1: Early Lane Departure Detection Algorithm
  46. 46. March 2006 Page 36_____________________________________________________________________Final Year Thesis Diarmaid O CualainThere are a small number of issues with this method. One is that if there areany other road markings, such as a rumble strip, “children crossing” text, largeamounts of image noise, or any other markings in the centre of the lane, the algorithmwill return a false positive. This could be solved by checking a few frames of imagesand seeing if they also return a positive. This would show that it would have beencaused by lane markings, as other markings will not feature in as many frames (i.e.,the vehicle has travelled past them). Unfortunately, this will introduce a lag in thealgorithm until the answer is calculated.Another approach was to develop an algorithm that worked using the secondcharacteristic identified in chapter 4.4.1. This stated that the angle of the roadmarkings changed as the vehicle drifts out of lane. To put this into practice, analgorithm was needed to be written that could measure the angle of the lane markingson the road. From chapter 4.3.1, characteristic (b) and (f), It was recognised that roadmarkings were often broken, or had to be treated as if they were broken. It was alsolearnt that lane markings could be approximated to be in a straight line at shortdistances even when the road was turning. Therefore, what was needed was a methodto interpolate the lane markings, find their average angle, and test this angle against athreshold to see when the vehicle had drifted out of lane.6.2 The Hough TransformThe Hough transform is a image processing technique for feature extraction[19]. It is more commonly used for detection of lines in an image, but can also be usedto detect any arbitrary shapes, for example circles, ellipses, and so on. For this project,it was used for its more common purpose. 
The underlying principle of the Hough transform is that every point in the image has an infinite number of lines passing through it, each at a different angle. The purpose of the transform is to identify the lines that pass through the most points in the image, i.e. the lines that most closely match the features in the image. To do this, a representation of the line is needed that allows meaningful comparison. A second line is drawn from the origin to meet the line at right angles. The angle that this second line makes at the origin is recorded, as is the distance from the origin to the point where
the two perpendicular lines meet. These values are known as "theta" (θ) and "rho" (ρ). An example using three points is shown in [Figure 6.2].

Figure 6.2: Hough Data from 3 Points

When the rho value of one of these arbitrary points is plotted against theta, a sinusoidal curve is created. When the rho and theta values of the other points found in the image are plotted on the same graph, the curves are found to overlap in certain areas. This can be seen in [Figure 6.3], where the curves intersect at the pink point. Since this point can be transformed back to the original image using its rho and theta values, we can find the line that passes through all three points, as shown in [Figure 6.2].

Figure 6.3: Hough Space Graph
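The rho-theta relationship described above can be sketched numerically (a Python/NumPy illustration of the standard normal parameterisation rho = x cos(theta) + y sin(theta), not the thesis code): each point traces a sinusoid in Hough space, and collinear points share one (theta, rho) value.

```python
import numpy as np

def hough_curve(x, y, thetas):
    """rho = x*cos(theta) + y*sin(theta): the sinusoid a single
    image point traces out in (theta, rho) Hough space."""
    return x * np.cos(thetas) + y * np.sin(thetas)

# Three collinear points on the line y = x.
pts = [(1, 1), (2, 2), (3, 3)]
thetas = np.deg2rad(np.arange(-90.0, 90.0))
curves = np.array([hough_curve(x, y, pts_theta) for x, y in pts
                   for pts_theta in [thetas]])

# All three sinusoids intersect at the (theta, rho) of the shared line:
# for y = x, that is theta = -45 degrees, rho = 0.
idx = int(np.argmin(np.abs(thetas - np.deg2rad(-45.0))))
```

At the index corresponding to -45 degrees, every curve gives rho = 0, which is exactly the peak a Hough accumulator would detect for this line.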
In practice, the Hough transform is more often than not performed after edge detection. In this project, edge detection is performed first so that the Hough transform can separate the straight edges of the lane markings from the other image data.

6.3 Edge Detection

Edge detection is another useful image processing technique, used to distinguish the boundary between two dissimilar regions in an image. It requires relatively little computing power, and many edge detection algorithms have been developed; Sobel and Canny are two examples, each sensitive to different types of edge. The various methods can be separated into two main groups: gradient and Laplacian. The gradient methods work by finding discontinuities in the image, i.e. the maxima and minima of the first derivative of the image. The Laplacian methods instead search for zero crossings in the second derivative of the image. Canny, one of the methods used in this project, is a gradient-based detector. Edges in images are, by their nature, a large jump in intensity from one pixel to the next. Unfortunately, the same is true of noise in an image, so before edge detection can take place, noise removal must be done. This can be achieved by "blurring" the image, i.e. averaging out the pixel intensities on a localised scale, and this is what is implemented in the Canny algorithm. Most edge detection is carried out on binary or greyscale images.

6.3.1 Sobel Method

The Sobel method [20], one of the simpler methods, is also used in this project. It works as follows. Two 3x3 masks are created, each of which is passed over every pixel in the image. One mask is used to calculate the edge gradient in the y direction (rows), the other in the x direction (columns). Each neighbouring pixel found around that point is weighted by the corresponding value shown in [Figure 6.4].
The values are then added together, giving Gx and Gy for each pixel. The magnitude of these gradients can then be found by: G = √(Gx² + Gy²).
Figure 6.4: Sobel Edge Detection Masks

If a threshold value is applied to the gradient, the horizontal (Gy) and vertical (Gx) edges can be found.

6.4 Average Angles Algorithm

It was believed that performing edge detection on the image, followed by the Hough transform, would yield a good method for finding the angles of the road markings in the image. In MATLAB, a function called "edge.m" was used to find the edges in the image, followed by "hough.m" to calculate the Hough transform. The images after each stage are shown in [Figure 6.5] to [Figure 6.7].

Figure 6.5: Canny Edge Detection of Yellow & White Road Markings
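The Sobel computation described in chapter 6.3.1 can be sketched as follows (a Python/NumPy illustration under the standard Sobel masks, not the project's MATLAB code):

```python
import numpy as np

# The two 3x3 Sobel masks (as in Figure 6.4): one responds to vertical
# edges (gradient along x / columns), its transpose to horizontal edges.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_gradients(img):
    """Apply both masks at every interior pixel, summing the weighted
    neighbours to get Gx and Gy, then the magnitude sqrt(Gx^2 + Gy^2)."""
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = img[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = (patch * SOBEL_X).sum()
            gy[r, c] = (patch * SOBEL_Y).sum()
    return gx, gy, np.hypot(gx, gy)

# A vertical step edge: strong Gx, zero Gy along the boundary.
step = np.zeros((5, 5))
step[:, 3:] = 255
gx, gy, mag = sobel_gradients(step)
```

On the step image, the x-mask fires strongly at the boundary column while the y-mask cancels to zero, which is how the two masks separate vertical from horizontal edges.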
Figure 6.6: Hough Transformation of Yellow Lane Markings

Figure 6.7: Hough Transformation of White Lane Markings
The MATLAB function "houghpeaks.m" was then used to find the peaks in the Hough space. These are the points to which the straight lines in the original image were transformed. The threshold value used to distinguish the Hough peaks from the other points was found by trial and error during testing: a value of 0.6 times the highest value in the Hough matrix was chosen for the white road markings, and 0.3 for the yellow. These peaks are plotted in the Hough transform space in [Figure 6.8] and [Figure 6.9].

Figure 6.8: Hough Peaks of Yellow Lane Markings
Figure 6.9: Hough Peaks of White Lane Markings

We can clearly see from these figures where the transforms of the lane markings occur. It is interesting to note in [Figure 6.8] how the two lane markings on the sides of the road are clustered together in the Hough space; the importance of this will become apparent later in this chapter.

After these peaks were found, they were plotted onto the original image to give a good indication of where the straight lines occurred, and to provide feedback so that various parameters (for example, the Hough peaks threshold) could be modified and the results scrutinised. A sample output is shown in [Figure 6.10].
Figure 6.10: Sample Output from Hough Peak Detection

A method was now needed to find the angle of each of these line segments, and from them the average angle. This was done by finding the co-ordinates of each end of a line segment; the slope of the line was then m = (y2 - y1) / (x2 - x1), and the angle of the line was found by taking the arctangent of m. A check needed to be performed to avoid a divide by zero when x1 and x2 were equal; in that case the angle was set to -90 degrees or +90 degrees, depending on which y value was higher. The yellow road marking angles were separated into those less than 90 degrees (the line segments found on the right) and those greater than 90 degrees (the segments on the left), and the average angle was found for each group. The average angle was also found for the white road marking segments.

To detect when the vehicle had left the lane, a maximum and minimum threshold for each of the road markings was set. These values were found from trial
and error through testing. A minimum of 35 degrees and a maximum of 58 degrees were chosen for the line on the right of the vehicle, and -35 degrees and -58 degrees respectively for the left. To avoid confusion, and to remove some bugs, these values were converted to radians in later versions of the algorithm.

After testing the algorithm on various images from the database, it was found to work to a satisfactory level: it could easily distinguish when the vehicle had left its lane in most of the images. However, after a period of testing, it was realised that the algorithm had one major failure. On motorways, where there was more than one white road line on either the left or the right of the vehicle, the algorithm failed to function correctly: it could not identify when the vehicle had drifted out of lane. Since motorways were one of the main environments in which this algorithm was specified to work, this was a serious issue.

After reading through the code and studying the flow charts, it was realised that the problem lay in the angle-averaging section of the lane detection module. Under normal circumstances, the algorithm finds the average angle of the sections of white line found to the right of the vehicle. However, when two road lines appear on the right of the vehicle (i.e. the markings of the neighbouring lane), it calculates the average of these two lines. This results in an angle that is not the angle of the road line of the lane the vehicle is travelling in, but the average of the two lines of the neighbouring lane. This can be seen in [Figure 6.11].
Figure 6.11: Issue with Average Angles Algorithm

6.5 Cluster Angles Algorithm

As mentioned in chapter 6.4, there was a major bug in the average angles algorithm. A method was needed that did not suffer from this problem, so that the system could be used on motorways as part of its application. Various modifications to the average angles algorithm were proposed, but none were robust or easy to implement. One suggestion was to check segments of the image, as shown in [Figure 6.12]: if road markings were found in a segment, their angle could be recorded and lane departure detection performed using the angle threshold values as before. There were several problems with this method. Firstly, the vanishing point on the horizon was not always in the centre, so the centre of the segments would have to move left and right accordingly; this would be difficult to implement. Another problem was that, for this algorithm, a gain in accuracy in the angle of the lines detected resulted in it losing some of its
robustness. Increasing accuracy meant reducing the arc of each segment, which in turn meant a higher probability that some of the road marking would lie outside the arc, losing valuable information.

Some other methods were proposed, but none were satisfactory for the module. Therefore, work began on developing a new algorithm from scratch to solve this issue.

Figure 6.12: Road Marking Segments

After a short period of time, it was realised that in the average angles algorithm, the Hough transform space showed patterns related to the location of the road markings in the image: the Hough peaks (the lines in the image) seemed to occur in clusters in the Hough space. This was mentioned briefly in chapter 6.4, and the clustering can be seen in [Figure 6.8] and [Figure 6.9]. It occurs because each road marking in a road line is at approximately the same angle as the others (characteristic (f), chapter 4.3.1) and at approximately the same perpendicular distance from the origin. Therefore, the same number of clusters appears in the Hough space as road lines found in the image.

A method needed to be devised to find the centre of each Hough peak cluster, and then to transform this point back into the spatial domain where it could be
plotted and analysed. Some research was then done on different clustering algorithms to find one suitable for this application.

6.5.1 Clustering Algorithms

Much research has been done in mathematics on clustering algorithms in the past few decades, and they have found many applications, from marketing to biology to insurance. The goal of clustering algorithms is to find "the intrinsic grouping in a set of unlabeled data" [21]. There are a few main types of clustering algorithm: K-means clustering, fuzzy C-means, hierarchical clustering, mixtures of Gaussians, and so on.

For this project, a subtractive clustering algorithm [22] was used. This algorithm assumes that each data point is a potential cluster centre and calculates the likelihood that it is one by analysing the density of the neighbouring data points. It first selects the most likely point as a cluster centre, then removes the surrounding data points in the vicinity determined by the "radii" value (see chapter 6.5.2). It repeats these two steps until all the data points are within the "radii" vicinity of a centre. This algorithm was chosen for a few reasons; one is that it does not have to be told explicitly the number of clusters it needs to find. Instead, various other parameters are chosen that determine the number of clusters found.

6.5.2 Implementation of the Clustering Algorithm

MATLAB has a function called "subclust" that can perform the subtractive clustering needed for this project. Before it could be implemented in the algorithm, some study needed to be done on its parameter values. These are as follows:

1. xBounds: the cluster area size, i.e. the dimensions of the area to be searched for clusters.
In this project, this was set to the largest angle theta (θ) that could be found, which is 90 degrees, and the largest perpendicular distance rho (ρ) to a data point. This distance was calculated by: dim = √(imageWidth² + imageHeight²)
2. radii: the distance in the two dimensions that determines the influence one point has over another in finding the centre of a cluster. If this is small, a large number of clusters, each with a small number of data points, is found, and vice versa.

3. squashFactor: multiplied by the radii value to determine the neighbourhood of an accepted cluster centre; this lowers the potential for outlying points to be counted as part of the cluster.

4. acceptRatio: sets the potential, as a fraction of that of the first cluster centre, above which a data point is accepted as a cluster centre.

5. rejectRatio: similarly, sets the potential, as a fraction of that of the first cluster centre, below which a data point is rejected as a cluster centre.

Most of these values were found by trial and error using a MATLAB GUI called "findcluster" [Figure 6.13]. A .dat file was generated from the rho and theta values of the Hough peak data points and imported into the GUI. The various parameters were then changed until a satisfactory output was achieved.

Figure 6.13: Findcluster GUI
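The subtractive clustering procedure described above can be sketched as follows (a simplified Python illustration in the spirit of subclust; the parameter names mirror the text, but the implementation details are assumptions, not the thesis code):

```python
import numpy as np

def subtractive_clustering(points, radii=2.0, squash=1.25, reject=0.15):
    """Minimal sketch of subtractive clustering.

    Every point is scored by the density of its neighbours within
    'radii'. The densest point becomes a cluster centre, the potential
    around it is subtracted away (over a squash * radii neighbourhood),
    and the process repeats until the best remaining potential falls
    below reject * (potential of the first centre).
    """
    pts = np.asarray(points, dtype=float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    alpha = 4.0 / radii ** 2
    beta = 4.0 / (squash * radii) ** 2
    potential = np.exp(-alpha * d2).sum(axis=1)
    first = potential.max()
    centres = []
    while len(centres) < len(pts):
        i = int(potential.argmax())
        if centres and potential[i] < reject * first:
            break
        centres.append(pts[i])
        # Squash the potential around the accepted centre.
        potential = potential - potential[i] * np.exp(-beta * d2[i])
    return np.array(centres)

# Two well-separated groups of Hough-peak-like points -> two centroids.
data = [(0, 0), (0.1, 0), (0, 0.1), (10, 10), (10.1, 10), (10, 10.1)]
centres = subtractive_clustering(data)
```

Note how the number of clusters is never specified directly: it falls out of the radii and reject parameters, which is the property that made this family of algorithms attractive for the Hough peak data.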
Once the workings of the clustering algorithm were understood, and its parameters calculated for the data points in this application, work began on implementing it in the module. The Hough peak values were placed in an array and input to the function. The algorithm was tested with images of different road environments from the database to see how it managed. The cluster centroids found can be seen as blue circles in [Figure 6.14].

Figure 6.14: Cluster Centroids

6.5.3 Inverse Hough Transform

After these centroid points were found, they needed to be transformed back to the spatial domain so that they could be analysed and understood easily. This would allow the lines to be superimposed onto the original road image to check visually that they were working correctly. Mapping these points back to the spatial domain yields lines corresponding to the average angle, and average position, of each road line found. To perform this inverse Hough transform, a number of steps were taken.
Firstly, the point where the road line meets the line perpendicular to it that passes through the origin was calculated, using the equations: x1 = ρ cos θ, y1 = ρ sin θ. After this point was found, the line perpendicular to the road line could be calculated. Its slope is given by m = tan θ, and the line itself by y - y1 = m(x - x1). This gives the line of length ρ perpendicular to the road line. Finding the road line was then only a matter of plotting a line at right angles to this one, passing through the point (x1, y1). These lines can be seen in [Figure 6.15]: the red line is the perpendicular line, the yellow lines are generated from the cluster centroids in the yellow road marking Hough space, and the white line from the centroids found in the white Hough space. Arbitrary values were substituted for x in each line equation so that the lines could be plotted. The steps outlined above could have been combined into one for implementation in the algorithm, but to aid understanding and help with error checking, they were kept separate.
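The steps above can be sketched as a single function (a Python illustration of the geometry, not the thesis code; the function name is ours):

```python
import numpy as np

def line_from_rho_theta(rho, theta):
    """Map a cluster centroid (rho, theta) back to the spatial domain.

    (x1, y1) = (rho*cos(theta), rho*sin(theta)) is the foot of the
    perpendicular from the origin; the road line passes through it at
    right angles to that perpendicular (whose slope is tan(theta)).
    Returns (x1, y1, m, c) with the road line as y = m*x + c
    (m and c are None for a vertical road line, sin(theta) == 0).
    """
    x1 = rho * np.cos(theta)
    y1 = rho * np.sin(theta)
    if np.isclose(np.sin(theta), 0.0):
        return x1, y1, None, None   # vertical line x = x1
    m = -np.cos(theta) / np.sin(theta)   # at right angles to tan(theta)
    c = y1 - m * x1
    return x1, y1, m, c

# Example: theta = 90 degrees, rho = 5 gives the horizontal line y = 5.
x1, y1, m, c = line_from_rho_theta(5.0, np.pi / 2)
```

Every point (x, y) on the returned line satisfies the normal form x cos θ + y sin θ = ρ, so the mapping is consistent with the forward transform in chapter 6.2.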
Figure 6.15: Output from Cluster Angles Algorithm

6.5.4 Calculation of Lane Departure

Once it was seen that the cluster angles algorithm worked to a satisfactory level, a warning algorithm had to be written for when the vehicle drifted out of the lane. From chapter 6.5.2, we have seen how the clustering algorithm gives the average angle of each road line and its perpendicular distance from the origin. From this, it was recognised that a threshold could be set on the angle (theta) of the cluster centroids; this allows the algorithm to detect when the car has drifted out of lane, following characteristic (2) in chapter 4.3.1. Rho, the perpendicular distance of the line from the origin, could also be used to detect lane departure, following characteristic (1), also in chapter 4.3.1. Threshold values then needed to be found to compare against the theta and rho values returned by the algorithm. These were found by running the algorithm on the test images of lane departure cited in chapter 3.2.2; the values returned as lane departure occurred were used as the threshold values in the algorithm.
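The resulting departure test reduces to a window check on theta and rho (a sketch; the window values below are illustrative placeholders, not the thresholds measured from the thesis's test images):

```python
def departed_lane(theta_deg, rho,
                  theta_window=(35.0, 58.0),
                  rho_window=(150.0, 450.0)):
    """Warn when the tracked line's cluster centroid leaves the
    normal-driving window in either theta or rho. The window values
    here are placeholders for the thresholds derived from the
    lane-departure test images."""
    t_lo, t_hi = theta_window
    r_lo, r_hi = rho_window
    return not (t_lo <= theta_deg <= t_hi and r_lo <= rho <= r_hi)
```

A centroid inside both windows means normal driving; leaving either window (the angle tilting, or the line's perpendicular distance shifting) raises the warning.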
Only the white line found on the right-hand side of the vehicle was used for departure detection. This was to help simplify the code, and follows from the assumption stated in chapter 4.3.2. However, the other road lines could easily be analysed in future work on the module.

Testing was then performed on the module to see if it functioned correctly. The warning output was printed to screen when lane departure was detected. A sample of output images is shown in Appendix A.

6.6 Summary

We have seen how the module for lane departure detection was designed and developed for use in this algorithm. It was one of the most difficult and time-consuming modules to develop for this project, but the final results were satisfactory. In the next chapter, an outline of the work done on the third module, the object detection module, is presented.
