Implementation of a lane-tracking system for autonomous driving using Kalman filter

This project was developed for a Digital Control class. It consists of a system that identifies and tracks lane marks in video acquired from a webcam. A notable aspect is how the Kalman filter is used in this context to make lane detection computationally feasible in the small amount of time between two subsequent video frames.


  1. A VISION-BASED LANE TRACKING SYSTEM FOR AUTONOMOUS DRIVING USING KALMAN FILTER
     Department of Information Engineering, University of Pisa
     A. Biondi, F. Corucci, 2011
     Project for Digital Control class
     Prof. A. Balestrino, Prof. A. Landi
  2. Aim of the project
     - Build a vision-based lane tracking system using a Kalman filter
     - Test the Kalman filter's effectiveness in a practical, noisy scenario
     - Gain experience with some computer vision algorithms and techniques
  3. Why is lane tracking useful?
     Camera view -> lane tracking -> steering control to follow the road.
     Vision-based lane tracking is commonly used to build autonomous cars capable of following a road without human intervention.
  4. Video acquisition
     In order to collect a dataset to work with, we placed a netbook on a hand-cart and recorded a video from its webcam while following a realistic circuit as closely as possible.
  5. Video acquisition
     [Figure: RGB video frame grabbed from the webcam]
  6. Image preprocessing - Overview
     Some preprocessing of the RGB video frame is needed in order to perform feature discrimination:
     RGB frame -> grayscale -> equalization -> filtering -> binarization -> edge detection
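A minimal sketch of this preprocessing chain, written here in Python with OpenCV purely for illustration (the project itself targeted MATLAB; function choices and threshold values are assumptions, not the authors' code):

```python
import cv2

def preprocess(frame_bgr):
    """Illustrative preprocessing chain; thresholds are assumptions,
    since the slides do not give numeric values."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    eq = cv2.equalizeHist(gray)                         # contrast equalization
    # Saturate the (darker) floor to black with a high threshold,
    # keeping the bright lane markers; result is a 0/255 binary image.
    _, binary = cv2.threshold(eq, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(binary, 50, 150)                  # edge detection
    return binary, edges
```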
  7. Grayscale conversion
     [Figure: grayscale video frame, with its color distribution showing two clusters: floor and lane markers]
  8. Enhancing color separation
     This was achieved using the decorrelation stretch method, commonly used for feature discrimination in images (correlated images do not usually have sharp contours).
     [Figure: frame with enhanced color separation and the resulting color distribution, with floor and lane markers now well separated]
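A rough sketch of a decorrelation stretch (the slides name the technique but not an implementation; this NumPy version, including the small regularization constant, is an assumption):

```python
import numpy as np

def decorr_stretch(img):
    """Decorrelation stretch sketch: rotate the color channels onto the
    eigenvectors of their covariance, equalize the variances, rotate back,
    and restore the original per-channel standard deviations.
    Input: float RGB image of shape (H, W, 3)."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).astype(np.float64)
    mean = flat.mean(axis=0)
    cov = np.cov(flat - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)           # cov = V diag(e) V^T
    whiten = eigvec @ np.diag(1.0 / np.sqrt(eigval + 1e-12)) @ eigvec.T
    T = np.diag(np.sqrt(np.diag(cov))) @ whiten    # re-apply original stds
    out = (flat - mean) @ T.T + mean
    return out.reshape(h, w, c)
```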
  9. Filtering floor components
     This was achieved by enhancing the image contrast through color saturation (with appropriate thresholds): the floor is saturated to black.
     [Figure: image after filtering and the resulting color distribution (floor vs. lane markers)]
  10. Image binarization
     The image is now a matrix of 0s and 1s.
     [Figure: binarized image]
  11. Edge detection
     Once the interesting features (i.e. lane marks) are enhanced, we detect edges in the video frame in order to identify the lane-mark contours to be followed.
     The Canny algorithm (a derivative filter) was used for this purpose.
  12. Edge detection - Canny
     The linear contours are now clearly identifiable.
     [Figure: frame after applying Canny]
  13. Line detection
     We are now able to extract the analytic equations of the lane-mark borders.
     At the first frame we have no idea of where the lane marks could be, so we need to perform a global scan.
     We used the Hough transform in order to detect lines in this situation.
  14. Outline of the Hough transform
     - It allows discriminating features described by an analytical equation.
     - It has been generalized in order to detect arbitrarily complex shapes.
     - The problem of searching for a feature is translated into a maximum search problem.
     [Figure: example of Hough detection on a complex shape]
  15. More on the Hough transform
     The equation that ties the curve parameters a_i to the coordinates (x, y) looks like:
     f((x, y), a_1, ..., a_n) = 0
     - Every point (x_i, y_i) of the image space generates a hyper-curve in the parameter space.
     - A point in the parameter space uniquely identifies a curve in the image space.
     - N points of the image space generate N curves in the parameter space, whose common intersection identifies the curve on which they possibly lie.
  16. More on the Hough transform
     In our case the curves in the parameter space are still lines:
     y = m*x + q  ->  f((x, y), m, q) = 0
     Given a point (x_i, y_i), the parametric equation is q = y_i - m*x_i.
     - Every intersection in the parameter space is interpreted as a vote for the corresponding curve.
     - A curve that collects many votes identifies (with a certain confidence) a relevant feature.
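As a toy illustration of this voting scheme in the same (m, q) parameterization the slides use (real implementations usually prefer the (rho, theta) form, which also handles vertical lines; the ranges and resolutions below are arbitrary assumptions):

```python
import numpy as np

def hough_lines_mq(edges, m_range=(-2.0, 2.0), n_m=200, n_q=400):
    """Toy Hough voting in (m, q) space for a binary edge image."""
    ys, xs = np.nonzero(edges)
    ms = np.linspace(m_range[0], m_range[1], n_m)
    q_min = -m_range[1] * edges.shape[1]          # crude intercept bounds
    q_max = edges.shape[0] - m_range[0] * edges.shape[1]
    acc = np.zeros((n_m, n_q), dtype=np.int32)
    for x, y in zip(xs, ys):
        qs = y - ms * x                           # q = y - m*x, for every m
        cols = ((qs - q_min) / (q_max - q_min) * (n_q - 1)).astype(int)
        ok = (cols >= 0) & (cols < n_q)
        acc[np.nonzero(ok)[0], cols[ok]] += 1     # one vote per (m, q) cell
    i, j = np.unravel_index(acc.argmax(), acc.shape)  # strongest peak
    return ms[i], q_min + j * (q_max - q_min) / (n_q - 1)
```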
  17. Line detection
     [Figure: the Hough transform parameter space, with peaks identifying lines, and the detected lines superimposed on the original RGB frame]
  18. Inverse perspective transform
     - Working in the perspective space is not convenient.
     - A common way to avoid it is to perform an inverse perspective transformation (-> bird's eye view).
     - The resulting «virtual» top view is much more convenient for measuring the distances and the angles we need.
     - Various auxiliary operations in our application are performed in this space.
  19. Inverse perspective transform
     [Figure: the I -> W map]
  20. Inverse perspective transform
     [Figure: perspective view (I space) and the resulting bird's eye view, a «virtual» top view (W space)]
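With a calibrated camera this I -> W map reduces to a homography; a minimal sketch with OpenCV (the four point correspondences below are made-up placeholders, in practice they come from calibration):

```python
import cv2
import numpy as np

# Hypothetical pixel corners of a road patch in the perspective image (I)
# and their target positions in the bird's-eye view (W).
src = np.float32([[220, 300], [420, 300], [640, 480], [0, 480]])
dst = np.float32([[180, 0], [460, 0], [460, 480], [180, 480]])

M = cv2.getPerspectiveTransform(src, dst)            # I -> W homography
bird = lambda frame: cv2.warpPerspective(frame, M, (640, 480))
```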
  21. So, where's Kalman?
     The problem with this approach is that performing a Hough transform on the whole image is computationally heavy: at a reasonable video frame rate for a driving application, there is no time to perform Hough between two subsequent frames.
     We can exploit Kalman to drastically reduce the search area in every frame, and thus detect lane markers in a computationally efficient way.
  22. Kalman - the intuition
     - Once we have an idea of where the lane marks are at the first frame, a Kalman filter is used to predict where the lane marks will be in the next video frame.
     - The system then evolves step by step, frame after frame, through Kalman predictions, for as long as this is possible (i.e. as long as Kalman is able to stay locked on the lane mark).
     - Every Kalman prediction identifies a very thin region of the next frame, in which we can perform a local search in a very efficient way (preprocessing, binarization, pixel fit).
     - If something goes wrong and we are not able to identify the lane mark in the predicted band, a global Hough scan is performed.
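A sketch of the local search inside the predicted band (the band half-width and the minimum pixel count are illustrative assumptions):

```python
import numpy as np

def fit_in_band(binary, m_pred, q_pred, half_width=10):
    """Fit a line only to lane-marker pixels inside the thin band around
    the Kalman-predicted line y = m_pred*x + q_pred. Returns the fitted
    (m, q), or None if too few pixels support the fit (which triggers the
    global Hough fallback)."""
    ys, xs = np.nonzero(binary)
    dist = np.abs(ys - (m_pred * xs + q_pred))   # vertical distance to prediction
    inside = dist < half_width
    if inside.sum() < 30:                        # not enough support: fit fails
        return None
    m, q = np.polyfit(xs[inside], ys[inside], 1) # least-squares line fit
    return m, q
```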
  23. Algorithm overview
     1st frame:
     - Hough detection
     - Kalman initialization
     - Kalman prediction for frame no. 2
     k-th frame:
     - Fit inside the Kalman-predicted band
     - If the fit fails -> perform Hough detection and re-initialize Kalman
     - Kalman prediction for frame no. k+1
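Putting the pieces together, the per-frame loop could look roughly like this (preprocess, hough_lines_mq, and fit_in_band are the sketches above; kalman_predict and kalman_update are sketched after the model slides below — all of it illustrative, not the authors' code):

```python
import numpy as np

def track(frames):
    """Sketch of the per-frame logic from the slide above."""
    A = S = None                                  # state [m, q] and covariance
    u = np.zeros(2)                               # no steering input here
    for frame in frames:
        binary, edges = preprocess(frame)
        if A is None:                             # 1st frame, or lock lost
            A = np.array(hough_lines_mq(edges))   # global Hough scan
            S = np.eye(2)                         # Kalman initialization
        A_pred, S_pred = kalman_predict(A, S, u)  # prediction for this frame
        fit = fit_in_band(binary, A_pred[0], A_pred[1])
        if fit is None:                           # fit failed inside the band
            A = None                              # force Hough re-detection
            continue
        A, S = kalman_update(A_pred, S_pred, np.array(fit))
```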
  24. Model details
     A_{k+1} = F*A_k + G*u_k + w_k
     y_k = H*A_k + v_k
     - A = state vector = [m, q]^T, where m and q are the coefficients of a line expressed as y = m*x + q. A linear model was sufficient for our short-range view.
     - F = autonomous dynamics: models the evolution of the state when no input is applied. In order to simplify the model, given the low velocity and the high frame rate, we have taken F = I.
     - u = input vector, modeling the vehicle steering (here simulated).
     - G = maps the input onto the state.
     - w_k = process noise ~ N(0, P).
     - v_k = measurement noise (fit error due to pixel discretization) ~ N(0, Q).
     - y_k = output (for us, output = state).
     - H = maps the state onto the output (for us, H = I).
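In code, the model above amounts to a handful of constant matrices (the noise covariance values below are placeholders; the slides only define P and Q symbolically):

```python
import numpy as np

# State A = [m, q]^T: slope and intercept of the tracked lane-mark line.
F = np.eye(2)             # autonomous dynamics: identity, as on the slide
H = np.eye(2)             # output = state
G = np.eye(2)             # input map; placeholder, slides give no explicit G
P = np.diag([1e-4, 1.0])  # process-noise covariance (slides' P); assumed values
Q = np.diag([1e-3, 4.0])  # measurement-noise covariance (slides' Q); assumed
```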
  25. Kalman algorithm
     - P_k = covariance matrix of the process noise w
     - Q_k = covariance matrix of the measurement noise v
     - K_k = Kalman gain
     Predicted state (here S denotes the state-estimate covariance):
     A_{k+1|k} = F*A_{k|k} + G*u_k
     S_{k+1|k} = F*S_{k|k}*F^T + P_k
     Estimated state:
     K_k = S_{k|k-1}*H^T * (H*S_{k|k-1}*H^T + Q_k)^{-1}
     A_{k|k} = A_{k|k-1} + K_k*(y_k - H*A_{k|k-1})
     S_{k|k} = (I - K_k*H)*S_{k|k-1}
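The same two steps as a sketch in code, using the matrices from the previous slide (S is the state-estimate covariance):

```python
import numpy as np

def kalman_predict(A, S, u):
    """Prediction step: propagate state and covariance one frame ahead."""
    A_pred = F @ A + G @ u
    S_pred = F @ S @ F.T + P        # P = process-noise covariance here
    return A_pred, S_pred

def kalman_update(A_pred, S_pred, y):
    """Correction step: blend the predicted state with the measured fit y."""
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + Q)  # Kalman gain
    A_est = A_pred + K @ (y - H @ A_pred)
    S_est = (np.eye(2) - K @ H) @ S_pred
    return A_est, S_est
```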
  26. Kalman-based detection
     [Figure: left and right lane-marks. Legend: green region = Kalman-predicted band; blue line = fitted line; white line = lane-mark contour (real pixels)]
  27. Experimental results
     <video simulation>
     As shown in the simulation, Kalman is able to track lane-marks without problems, even:
     - in the presence of sudden, anomalous car movements
     - with a simplified linear model (the curve is tracked well!)
     - with a simplified model that does not consider the autonomous dynamics of the system
     Hough recalculation is triggered only when the left lane-mark disappears from the camera field: a polling mode is entered in this situation, and the lane-mark is locked on again when it returns into the camera field.
  28. Future work
     - Use the tracking information to make a vehicle autonomously follow the circuit (a simple PID can be used to control the steering; see the sketch after this list)
     - Simulation
     - Implementation:
       - mounting a netbook running MATLAB on a toy car equipped with a camera
       - a DSP + microcontroller based implementation
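A minimal PID sketch for the steering idea above (gains, time step, and the choice of error signal, e.g. the lateral offset of the lane center in the bird's-eye view, are all assumptions):

```python
class PID:
    """Minimal PID controller sketch for steering."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt            # accumulate integral term
        deriv = (error - self.prev) / self.dt       # finite-difference derivative
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```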
  29. Bibliography
     - "A massively parallel approach to real-time vision-based road markings detection", Alberto Broggi, University of Parma. http://www.ce.unipr.it/people/broggi/publications/detroit.pdf
     - "Lane detection and Kalman-based linear-parabolic lane tracking", Lim, Seng, Ang, Chin, The University of Nottingham Malaysia Campus. Published at IHMSC '09 (Intelligent Human-Machine Systems and Cybernetics).
     - "La trasformata di Hough" ("The Hough transform"), Padova University, computer vision course 07/08. http://vision.unipv.it/corsi/VisioneArtificiale-ls/lucidi/VA-06.pdf
  30. Thank you!
