Preprint · September 2022
DOI: 10.13140/RG.2.2.14924.69768
A computer vision-based lane detection
approach for an autonomous vehicle using the
image hough transformation and the edge
features ⋆
Md. Abdullah Al Noman1,2, Md. Faishal Rahaman2, Zhai Li∗1,2, Samrat Ray3, and Chengping Wang1,2

1 National Engineering Laboratory for Electric Vehicles, Beijing Institute of Technology, Beijing, 100811, China.
2 School of Mechanical Engineering, Beijing Institute of Technology, Beijing, 100811, China.
3 Sunstone Calcutta Institute of Engineering and Management, Kolkata, India.
Abstract. Lane detection systems play a critical role in ensuring safe
and secure driving by alerting the driver of lane departures. Lane de-
tection may also save passengers’ lives if they go off the road owing to
driver distraction. The article presents a three-step approach for detecting
lanes in high-speed video pictures in real time, invariant to lighting.
The first phase performs the necessary preprocessing: noise reduction,
RGB to grey-scale conversion, and binarization of the input picture.
Then, a polygon area in front of the vehicle is picked as the zone of
interest to accelerate processing. Finally, the edge detection technique is
used to acquire the image’s edges in the area of interest, and the Hough
transform is used to identify lanes on both sides of the vehicle. The
suggested approach was implemented using the IROADS database as a
data source. The recommended method is effective in various daylight
circumstances, including sunny, snowy, and rainy days, as well as inside
tunnels. The proposed approach processes frame on average in 28 mil-
liseconds and have a detection accuracy of 96.78 per cent, as shown by
implementation results. This article aims to provide a simple technique
for identifying road lines on high-speed video pictures utilizing the edge
feature.
Keywords: Lane Detection · Hough Transformation · Edge Detection ·
Autonomous Vehicles · Computer Vision.
1 Introduction
Each year, many accidents occur worldwide as a result of the expanding number
of automobiles, resulting in significant financial and human losses [1]. Human
⋆ Corresponding Author: Dr. Zhai Li, National Engineering Laboratory for
Electric Vehicles, Beijing Institute of Technology, Beijing, China. Contact:
Zhaili26@bit.edu.cn
error is the primary cause of many accidents, including weariness, sleepiness, lack
of focus, and ignorance of road conditions [1]. Automobile manufacturers have
made strenuous attempts in recent years to include driver assistance technologies
into their vehicles to aid drivers in controlling and steering the vehicle [2]. Driver
assistance technologies are becoming more prevalent as a means of boosting the
security of premium automobiles [3]. Fig. 1 shows the perception sensor
locations and the corresponding ADAS functions they support in autonomous vehicles.
Fig. 1. Perception sensor location and corresponding ADAS function support for au-
tonomous vehicles.
Some of the driver assistance systems include road diversion warning systems,
accident warning systems, lane departure detection systems, junction and traffic
light detection systems, and objects detection systems in front of the vehicle [4].
Among the planned driver aid systems, the lane detection system is critical for
avoiding vehicle deviations and road accidents. Smart vehicles [2] are automobiles
equipped with driving aid technology. The importance of road line detection in
the realm of intelligent vehicles is substantial, since it has several applications
in autonomous vehicles [2]. Intelligent automobiles receive data from road lines
and direct the vehicle between them [3].
In recent years along with Autonomous Vehicles, computer vision has been
introduced in several places, such as fraud detection [5], forensic system [6],
home security systems [7], accident findings [8], IoT devices [9], and so on.
Lane detection refers to the process of determining the limits of the lines in a
picture of a road without knowledge of the road lines’ locations [2]. The lane
detecting system is meant to provide a prompt reaction and warning to the
driver in the case of a diversion from the primary path, allowing for increased
vehicle control. Lane detection has been touted as a critical component of safe
driving technology. By accurately recognizing the car’s location and traveling
within the road lines, this technology prevents the vehicle from diverting.
There are various methods for obtaining road lines in lane detecting sys-
tems. These include the placement of magnetic indications on highways, as well
as the use of sensors, high-precision GPS, and image processing [10]. The most
precise method of determining road lines is to place magnetic markers through-
out the road that are detectable by automobile sensors [10]. Another method
of determining a vehicle’s present location is to calculate the worldwide change
in position relative to road lines [10], which is also somewhat costly. The most
accessible and cost-effective method of identifying road lines is by image process-
ing, which extracts road lines from an image taken with a video camera [10].
Because video pictures provide useful information about their surroundings [3],
they are critical for lane detection. In-car cameras mounted behind the
windshield are utilized in lane detecting systems.
For instance, Bertozzi (1998) used the GOLD system, one of the most widely
used lane detecting systems, which is based on road imagery through a camera
mounted on a vehicle [3]. The path is determined using a method called model
matching. Additionally, the car’s front end is located via stereo image processing.
The perspective impact of the picture is erased in this manner, allowing for the
application of the pattern matching methodology [11]. The image’s straight lines
are then retrieved by employing a horizontal edge detector to look for horizontal
patterns of dark-light-dark illumination. Then, pieces that are adjacent or have
the same direction are blended to reduce the likelihood of inaccuracy due to
noise or obstruction [12]. Finally, lines that correspond to a certain road model
are chosen.
This article presents a computer vision-based approach for effectively
distinguishing lanes in any surrounding environment. The first stage is neces-
sary preprocessing, such as noise reduction, RGB to grey-scale conversion, and
binarizing the input image. The region of interest will be selected as a polygon
region to speed up the processing speed in front of the vehicle. Finally, applying
the edge detection approach will help acquire the image’s edges in the region
of interest and the Hough transform to identify lanes on the input image. The
remainder of the paper is organized as follows: we discuss existing similar
algorithms in Section 2, and the method of the suggested
lane-detecting approach is shown and explained in Section 3. The experimental
findings and discussion are shown in Section 4 using input and output data. The
paper comes to a conclusion with Section 5.
2 Related Works
In [13], the authors apply the Hough transform and Euclidean distance in their
investigation. To begin, the picture is converted from RGB to YCbCr space.
Since the human visual system is more sensitive to light, the Y component of
the picture is employed to identify the lines. To increase the system’s speed and
accuracy, the bottom portion of the picture is chosen as the region of interest.
Equalization of histograms is employed to improve the contrast between the road
surface and the road lines in the region of interest, and an Otsu thresholding
binary picture is created [13]. The area of interest is then split into two sub-areas,
with the Canny edge detection and Hough transform techniques employed to
independently identify the left and right lines of the road for each sub-region [13].
The author of [14] provides a technique for recognizing road lines that is
robust to changes in illumination. This approach begins by extracting the image's
edges using Canny edge detection, then obtaining the image’s lines using the
Hough transform and calculating the collision position of the discovered lines.
The vanishing point is chosen to be the centre of the district with the most votes,
and the region of interest is chosen to be the bottom area of the vanishing point
in the picture. To ensure that the suggested method is robust to fluctuations in
brightness, the road’s yellow and white lines are calculated independently. The
binary picture is then formed by assigning the binary value 1 to the regions next
to the road lines and 0 to the remaining image areas. The areas of the lines are
then noted, and the centres of each area are determined in the binary picture
using the linked component clustering algorithm. Additionally, each region’s an-
gle and point of contact with the y-axis are determined. Areas with the same
angle and junction are joined to make an area, and the picture denotes the left,
and right lines of the road [14].
According to [15], the input picture is first taken out of perspective to align
the road lines [15]. The colour picture is then transformed to greyscale, and
the grey image’s edges are retrieved using an edge recognition algorithm [9].
The morphological expansion operator is used to eliminate small picture noises.
Then, using the Hough transform, the road lines are recognized and compared
to the previous picture frame lines. Lines are maintained if they fit; otherwise,
the following Hough transform lines are examined.
To ease the process of identifying lines, [16] has used a bird’s eye perspective
to capture the road picture in front of the automobile, which makes the road
lines parallel. To identify the picture’s edges, two one-dimensional Gaussian and
Laplace filters are utilized [16], and the binary image is generated using the
Otsu threshold and computed using the Hough transform of the image lines.
Following that, the major road lines are created using a series of horizontal lines
and their intersections with the picture lines are detected using the least mean
square method [16].
The research [17] reported a robust Hough domain lane detecting system. The
proposed approach employs road pictures to statistically calculate the predicted
location and deviation of the road width. Additionally, a mask is built in the
Hough domain to constrain the road width, and vehicle location as required [17].
The authors of [18] provided a novel strategy for preprocessing and identifying
the region of interest in their research. The input frame is read first, and then the
input picture is preprocessed; the area of interest is then picked, and the road
lines are detected. Pre-processing involves converting the picture to HSV space
and extracting the image’s white characteristics. The picture is then smoothed,
and the noise effect is reduced using a Gaussian filter. Following that, the picture
is transformed to binary using threshold processing. The region of interest is then
determined to be the bottom half of the picture. The edge detection technique
is used to extract the image’s edges, and the Hough transform is used to identify
the road’s major lines. Finally, real-time detection and tracking of road lines are
accomplished using the extended Kalman filter [18].
In [19] researchers developed a unique technique for lane recognition with
a high degree of accuracy. The three-layer input picture is reduced to a single
layer, and the sharpness is increased. The zone of interest in this research is
determined by the lowest safe distance from the vehicle ahead. To provide a
smooth lane fitting, the Hough Transform is utilized.
The paper [20] presents a multi-stage Hough space computation for a lane
detecting challenge. A fast Hough Transform was devised to extract and cat-
egorize line segments from pictures. Kalman filtering was used to smooth the
probabilistic Hough space between frames in order to minimize noise caused by
occlusion, vehicle movement, and classification mistakes. The filtered probabilis-
tic Hough space was used to filter out line segments with low probability values
(threshold set to 0.7) and retain those with high probability values.
The paper [21] defined a Region of Interest (ROI) to ensure a dependable
real-time system. The Hough Transform is used to extract boundary information
and to identify the margins of road lane markers. Otsu’s threshold is used to
improve preprocessing results, compensate for illumination issues, and to gather
gradient information. The suggested technique enables precise lane monitoring
and gathers reliable vehicle orientation data [21]; similar data are also reported
in [22].
3 Methodology
This article uses video pictures obtained from the road by a camera positioned in-
side the automobile to determine road lines in real time. The camera is mounted
inside the vehicle behind the windscreen and almost in the center to offer a view
of the road.
The suggested approach for determining road lines involves three steps: pre-
processing, area selection, and road line determination. Pre-processing steps in-
clude noise reduction, RGB to greyscale conversion, and input picture binariza-
tion. Then, a polygonal region in front of the automobile is picked as the area of
interest due to the presence of road lines in that area. The third stage uses the
Canny edge detection technique to identify the image’s edges inside the region
of interest, and the Hough transform to identify the road’s major lines. Fig. 2
depicts the block diagram of the proposed technique.
3.1 Preprocessing
The noise in the picture acquired by the camera positioned inside the automobile
is eliminated at this step using a Gaussian filter [23]. Fig. 4 shows the smoothed
image after the Gaussian filter [23] has been applied to the input image of
Fig. 3.
To minimize algorithm computations, the input picture is transformed from
RGB colour space to greyscale; Fig. 5 depicts the input picture in grey mode.
Then, using the Dong method [22], the picture is converted from grey space
(Fig. 5) to binary space (Fig. 6).
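The three preprocessing steps above can be sketched in pure NumPy. This is an illustrative sketch, not the authors' code: the paper does not publish its routines, and the fixed threshold in `binarize` stands in for the Dong method [22] used in the paper.

```python
import numpy as np

def to_grayscale(rgb):
    # RGB -> grey with ITU-R BT.601 luminance weights.
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def gaussian_blur(img, ksize=5, sigma=1.0):
    # Separable Gaussian filter: one 1-D kernel swept along rows, then columns.
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def binarize(img, thresh=128):
    # Fixed-threshold binarization; the paper uses the Dong method [22] instead.
    return (img > thresh).astype(np.uint8)
```

The separable form of the Gaussian filter keeps the cost linear in kernel size, which matters given the 28 ms per-frame budget reported later.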
Fig. 2. Simplified block diagram of proposed technique.
3.2 Region of Interest
The road lines are visible in front of and on both sides of the automobile in the
photos received from the road surface. The region of interest in this article is
a dynamic polygonal section of the picture in front of the automobile that was
chosen using a trapezoidal mask. The information about the image’s vanishing
point is utilized to construct the trapezoidal mask. A trapezoid is formed in
the bottom portion of the vanishing point that encompasses the region directly
in front of the automobile. Fig. 7 illustrates a mask created for the input
picture sample in which the portion of the image corresponding to the region of
interest is one, and the remainder is zero. By applying this mask to the input
binary picture, the preferred region is picked, containing the road lines just in
front of the automobile. The output binary picture in Fig. 8 results from applying
the designed mask to the input binary image.
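A trapezoidal mask of the kind described above can be built directly; a NumPy sketch follows, assuming a trapezoid with a horizontal top edge below the vanishing point (the corner parameters are illustrative, not the paper's values).

```python
import numpy as np

def trapezoid_mask(h, w, top_y, top_x0, top_x1, bot_x0, bot_x1):
    # Binary ROI mask: zero above top_y; between top_y and the bottom row,
    # the left/right bounds are linearly interpolated from the top edge
    # (top_x0..top_x1) to the bottom edge (bot_x0..bot_x1).
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(top_y, h):
        t = (y - top_y) / max(h - 1 - top_y, 1)
        left = int(round(top_x0 + t * (bot_x0 - top_x0)))
        right = int(round(top_x1 + t * (bot_x1 - top_x1)))
        mask[y, left:right + 1] = 1
    return mask

def apply_roi(binary_img, mask):
    # Element-wise product keeps only the region directly in front of the car.
    return binary_img * mask
```

Restricting all later edge and Hough computations to this region is what lets the method stay within its real-time budget.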
Fig. 3. An example of the input image.
Fig. 4. Input image after noise removing.
3.3 Determination of Road Lines
Image edges are a very helpful and effective feature for detecting objects
in photographs. The image's edge is the point at which the brightness
abruptly shifts. As long as the quantity of light on the road is consistent and
differs from the brightness of the lines, the border between the lines and the road
(edge) may be determined. Numerous algorithms are available in this sector,
including those developed by Sobel, Canny, Roberts, and Prewitt. The Canny
edge detection technique is used in this article [24]. Six steps are required to
identify the edge using the Canny edge detection technique. The first step is to
filter out the noise in the original picture, which is accomplished using a Gaussian
filter [23] and a basic mask. The second stage is to locate strong edges at any
location by using the gradient amplitude. Two masks are applied to the picture
for this purpose, the gradient amplitude in the x and y directions is computed,
and the edge strength is determined using (1).
G = |Gx| + |Gy| (1)
where G represents the edge strength, Gx represents the gradient amplitude
in the x direction, and Gy represents the gradient amplitude in the y direction
in (2).
Gx = | -1  0  +1 |        Gy = | +1  +2  +1 |
     | -2  0  +2 |             |  0   0   0 |
     | -1  0  +1 |             | -1  -2  -1 |        (2)
Fig. 5. Input image converted into gray image.
Fig. 6. Gray image converted into a binary image.
The third stage establishes the orientation of the picture edges generated via
the use of Canny masks (3).
θ = arctan(Gy / Gx) (3)
Where θ denotes the edge direction, Gx denotes the gradient along the x axis,
and Gy denotes the gradient along the y axis.
The fourth step matches the directions determined in the previous step to
one of four angles of 0, 45, 90, or 135 degrees. The fifth step is to suppress non-
maximum edges by checking the direction and removing those that cannot be
recognized. In the sixth stage, the Canny algorithm is utilized with the hysteresis
approach, which defines an upper and a lower threshold. Any pixel with a
gradient greater than or equal to the upper threshold is accepted, any pixel
with a gradient below the lower threshold is rejected, and pixels in between
are kept only if they connect to an accepted edge pixel. We extract the
picture's edges by applying the Canny edge detection algorithm [24] to
the binary image. The outcome of applying the edge finder algorithm on the
previous stage picture is seen in Fig. 9.
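The gradient-strength and direction steps, eqs. (1)–(3) with the Sobel masks of eq. (2), can be written out explicitly; this NumPy sketch uses the common |Gx| + |Gy| approximation of the edge strength and is illustrative rather than the paper's implementation.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def convolve2d(img, kernel):
    # Minimal 'valid' 2-D correlation, sufficient for the 3x3 Sobel masks.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gradient(img):
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    strength = np.abs(gx) + np.abs(gy)   # eq. (1), |Gx| + |Gy| approximation
    direction = np.arctan2(gy, gx)       # eq. (3)
    return strength, direction
```

A vertical brightness step produces a strong horizontal gradient (large Gx, zero Gy), i.e. a direction of 0, which the fourth Canny step would quantize to the 0-degree bin.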
The Hough transform is used to identify the major lines of the road in the
following. Hough transform is a method for extracting features from digital im-
ages that is used in digital image processing, machine vision, and image analysis.
The objective of this conversion is to discover the many forms included inside
the picture. Initially, the Hough transform was used to recognize visual lines,
but it was later extended to identify diverse forms such as circles and ovals. The
Hough transform is employed in this technique to determine and indicate the
Fig. 7. An example of the input image Mask to determine the region of interest.
Fig. 8. Binary image after applying the ROI mask.
Fig. 9. Apply the Canny algorithm into the binary image.
locations of road lines. Then, as illustrated in Fig. 10, the nearest lines on both
sides to the automobile are deemed road lines on the left and right.
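A minimal Hough-transform line detector in the (ρ, θ) parameterization (ρ = x·cos θ + y·sin θ) might look as follows. This is an illustrative sketch; a practical real-time system would use an optimized library routine rather than this plain accumulator loop.

```python
import numpy as np

def hough_lines(edges, n_theta=180, peak_count=2):
    # Each edge pixel votes for every (rho, theta) line passing through it;
    # accumulator peaks correspond to the dominant lines (lane candidates).
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    top = np.argsort(acc.ravel())[::-1][:peak_count]
    rho_idx, theta_idx = np.unravel_index(top, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_idx, theta_idx)]
```

With `peak_count=2`, the two strongest peaks play the role of the left and right lane candidates; the paper then keeps the line nearest the vehicle on each side.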
4 Results and Discussions
The IROADS database [25, 26] is used to evaluate the performance of the pro-
posed algorithm. The database consists of 7 collections with a total of 4700
images. It covers almost all possible road conditions, including daylight,
night darkness, excessive sunlight, tunnel light, day and night rainfall, and snowy
roads. The dimension of each frame in this database is 360x640. Given that this
paper examines the different lighting conditions per day, four sets of data from
the IROADS database, including daytime driving in sunny, rainy, snowy, and
in-tunnel conditions, are reviewed. The average detection rate for 2828 frames
Fig. 10. Simplified output of the input image.
from the IROADS database in various daylight circumstances is 96.78 percent,
as shown in Table 1.
Table 1. Accuracy of the proposed method under different weather conditions.

IROAD database     Total frames   Incorrect frames   Identification rate (%)   Frames per second
IROAD Daylight          903              32                  96.40                   35.2
IROAD Rainy Day        1049               7                  99.38                   35.6
IROAD Snow Day          569              31                  94.59                   36.2
IROAD Tunnel            307              21                  93.12                   35.8
Total                  2828              91                  96.78                   35.7
Fig. 11 illustrates the results of the proposed algorithm's road line
identification in various driving circumstances throughout the day.
The suggested method produces one of the following two outcomes on each
frame of the input picture.
1. Accurate detection, in which the lines on both sides of the vehicle are ap-
propriately detected.
2. Incorrect detection, in which the lines on both sides of the automobile or
   on just one side are not accurately recognised. The suggested algorithm's
   recognition rate is computed from the labels created beforehand using

Detection Rate = (Number of correctly detected frames / Total number of frames) × 100    (4)
The suggested approach is implemented in Python on a laptop with a 2.40
GHz CPU and 8 GB RAM, and it takes an average of 28 milliseconds to analyze
each frame. As a result, the suggested method accurately recognizes road lines
in a timely manner. To evaluate the suggested algorithm’s performance, it was
compared to many methods already published. Table 2 compares the findings
obtained using Kortli’s approach [14], Suto’s method [27], Tian’s method [28],
and this study’s method using the IROADS dataset [26]. According to the find-
ings of this experiment, the suggested technique outperforms the current method
Fig. 11. Output of proposed technique in different environmental scenarios.
in every category except the Tunnel scenario. This is mostly due to the fact that
the line indicators are muted in this case owing to the tunnel lights and hence
are not recognized for the suggested technique. However, if earlier frames’ data
is utilised, the outcomes may be enhanced.
Table 2. Comparative result analyses of the proposed method with some existing techniques.

Methods                  IROAD Daylight   IROAD Rainy Day   IROAD Snow Day   IROAD Tunnel   Total
Kortli's Method [14]          95.79            88.27             86.64           93.51      90.81
Suto's Method [27]            96.12            93.42             92.44           94.14      94.17
Tian's Method [28]            96.23            94.75             93.67           94.46      94.99
The proposed method           96.40            99.38             94.59           93.12      96.78
The findings demonstrate the suggested algorithm’s usefulness in identifying
lanes under a variety of scenarios. The suggested approach does not yet include
tracking, since lanes are detected individually in each picture without the use
of temporal information. Although the suggested method for recognizing road
lines works well in about 97% of situations, it may fail in the remaining cases
due to faded road lines, occlusion of the lines by adjacent vehicles, and shifts
in the driver's direction.
5 Conclusions
Lane detection refers to the process of determining the position of lane borders
in a road picture without having previous information about their location. Lane
detection is intended to notify the driver about departures from the main lane,
allowing for improved vehicle control on the road. The purpose of this article is to
provide a simple approach for identifying road lines on high-speed video pictures
by using the edge feature. The suggested technique begins with the required pre-
processing, which includes noise reduction, picture conversion to greyscale, and
finally binary mode. The ideal region was then found to be a polygonal area just
in front of the vehicle. Finally, the Canny edge detection technique and Hough
transform were used to identify road lines.
The suggested algorithm’s performance was evaluated using the IROADS
database, with each frame having a size of 360x640. The research examines four
datasets from the IROADS database, covering daylight driving in sunny, rainy,
and snowy circumstances, as well as inside tunnels. The suggested technique
executes on average in 28 milliseconds per frame and achieves a detection rate
of 96.78 per cent.
References
1. Jonathan Horgan, Ciarán Hughes, John McDonald, and Senthil Yogamani. Vision-
based driver assistance systems: Survey, taxonomy and advances. In 2015 IEEE
18th International Conference on Intelligent Transportation Systems, pages 2032–
2039. IEEE, 2015.
2. Feng You, Ronghui Zhang, Lingshu Zhong, Haiwei Wang, and Jianmin Xu. Lane
detection algorithm for night-time digital image based on distribution feature of
boundary pixels. Journal of the Optical Society of Korea, 17(2):188–199, Apr 2013.
3. Jingwei Cao, Chuanxue Song, Shixin Song, Feng Xiao, and Silun Peng. Lane
detection algorithm for intelligent vehicles in complex road conditions and dynamic
environments. Sensors, 19(14), 2019.
4. Sang-Il Oh and Hang-Bong Kang. Object detection and classification by decision-
level fusion for intelligent vehicle systems. Sensors, 17(1), 2017.
5. Srishti Sahni, Anmol Mittal, Farzil Kidwai, Ajay Tiwari, and Kanak Khandelwal.
Insurance fraud identification using computer vision and iot: A study of field fires.
Procedia Computer Science, 173:56–63, 2020. International Conference on Smart
Sustainable Intelligent Computing and Applications under ICITETM2020.
6. Owen Mayer and Matthew C. Stamm. Learned forensic source similarity for un-
known camera models. In 2018 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), pages 2012–2016. IEEE, 2018.
7. Md Faishal Rahaman, Md Abdullah Al Noman, Muhammad Liakat Ali, and Mah-
fuzur Rahman. Design and implementation of a face recognition based door access
security system using raspberry pi. International Research Journal of Engineering
and Technology (IRJET), 8:1705–1709, 2021.
8. JoonOh Seo, SangUk Han, SangHyun Lee, and Hyoungkwan Kim. Computer vision
techniques for construction safety and health monitoring. Advanced Engineering
Informatics, 29(2):239–251, 2015. Infrastructure Computer Vision.
9. Mahfuzur Rahman, Md. Faishal Rahaman, Md. Moazzem Hossan Munna, Kelvin
Jian Aun Ooi, Khairun Nisa’ Minhad, Mohammad Arif Sobhan Bhuiyan, and
Mahdi H. Miraz. Integrated cmos active low-pass filter for iot rfid transceiver.
In Mahdi H. Miraz, Garfield Southall, Maaruf Ali, Andrew Ware, and Safeeullah
Soomro, editors, Emerging Technologies in Computing, pages 73–84, Cham, 2021.
Springer International Publishing.
10. Dhananjay Singh and Madhusudan Singh. Internet of vehicles for smart and safe
driving. In 2015 International Conference on Connected Vehicles and Expo (IC-
CVE), pages 328–329. IEEE, 2015.
11. M. Bertozzi and A. Broggi. Gold: a parallel real-time stereo vision system for
generic obstacle and lane detection. IEEE Transactions on Image Processing,
7(1):62–81, 1998.
12. Sahil Samantaray, Rushikesh Deotale, and Chiranji Lal Chowdhary. Lane detection
using sliding window for intelligent ground vehicle challenge. In Jennifer S. Raj,
Abdullah M. Iliyasu, Robert Bestak, and Zubair A. Baig, editors, Innovative Data
Communication Technologies and Application, pages 871–881, Singapore, 2021.
Springer Singapore.
13. Prajakta R Yelwande and Aditi Jahagirdar. Real-time robust lane detection and
warning system using hough transform method. International Journal of Engi-
neering Research Technology (IJERT), 8(8):385–392, 2019.
14. Yassin Kortli, Mehrez Marzougui, Belgacem Bouallegue, J. Subash Chandra Bose,
Paul Rodrigues, and Mohamed Atri. A novel illumination-invariant lane detection
system. In 2017 2nd International Conference on Anti-Cyber Crimes (ICACC),
pages 166–171. IEEE, 2017.
15. Yu Chen, Yuanlong Yu, and Ting Li. A vision based traffic accident detection
method using extreme learning machine. In 2016 International Conference on
Advanced Robotics and Mechatronics (ICARM), pages 567–572. IEEE, 2016.
16. Swapnil Waykole, Nirajan Shiwakoti, and Peter Stasinopoulos. Review on lane
detection and tracking algorithms of advanced driver assistance system. Sustain-
ability, 13(20), 2021.
17. Umar Ozgunalp and Sertan Kaymak. Lane detection by estimating and using re-
stricted search space in hough domain. Procedia Computer Science, 120:148–155,
2017. 9th International Conference on Theory and Application of Soft Comput-
ing, Computing with Words and Perception, ICSCCW 2017, 22-23 August 2017,
Budapest, Hungary.
18. Malik Haris and Adam Glowacz. Lane line detection based on object feature
distillation. Electronics, 10(9), 2021.
19. Hongyi Du et al. Lane line detection and vehicle identification using monocular
camera based on matlab. Academic Journal of Computing & Information Science,
4(2), 2021.
20. Yi Sun, Jian Li, and Zhenping Sun. Multi-stage hough space calculation for lane
markings detection via imu and vision fusion. Sensors, 19(10), 2019.
21. Qiao Huang and Jinlong Liu. Practical limitations of lane detection algorithm
based on hough transform in challenging scenarios. International Journal of Ad-
vanced Robotic Systems, 18(2):1–13, 2021.
22. Yue Dong, Jintao Xiong, Liangchao Li, and Jianyu Yang. Robust lane detection
and tracking for lane departure warning. In 2012 International Conference on
Computational Problem-Solving (ICCP), pages 461–464. IEEE, 2012.
23. Houjie Xu and Huifang Li. Study on a robust approach of lane departure warning
algorithm. In 2010 2nd International Conference on Signal Processing Systems,
volume 2, pages 201–204. IEEE, 2010.
24. Weibin Rong, Zhanjing Li, Wei Zhang, and Lining Sun. An improved canny edge
detection algorithm. In 2014 IEEE International Conference on Mechatronics and
Automation, pages 577–582. IEEE, 2014.
25. Mahdi Rezaei and Mutsuhiro Terauchi. Vehicle detection based on multi-feature
clues and dempster-shafer fusion theory. In Image and Video Technology, pages
60–72. Springer, 2014.
26. Mahdi Rezaei and Mutsuhiro Terauchi. IROADS dataset (intercity roads and adverse
driving scenarios). Enpeda Image Sequence Analysis Test Site-EISATS, 2014.
27. Jozsef Suto. Real-time lane line tracking algorithm to mini vehicles. Transport and
Telecommunication, 22(4):461–470, 2021.
28. Jingyi Tian, Zhiqiang Wang, and Qing Zhu. An improved lane boundaries de-
tection based on dynamic roi. In 2017 IEEE 9th International Conference on
Communication Software and Networks (ICCSN), pages 1212–1217. IEEE, 2017.
View publication stats
View publication stats

ture. Then, a polygon area in front of the vehicle is picked as the zone of interest to accelerate processing. Finally, the edge detection technique is used to acquire the image's edges in the area of interest, and the Hough transform is used to identify lanes on both sides of the vehicle. The suggested approach was implemented using the IROADS database as a data source. It remains effective in various daylight circumstances, including sunny, snowy, and rainy days, as well as inside tunnels. The proposed approach processes a frame in 28 milliseconds on average and has a detection accuracy of 96.78 per cent, as shown by the implementation results. This article aims to provide a simple technique for identifying road lines in high-speed video pictures utilizing the edge feature.

Keywords: Lane Detection · Hough Transformation · Edge Detection · Autonomous Vehicles · Computer Vision.
⋆ Corresponding Author: Dr. Zhai Li, National Engineering Laboratory for Electric Vehicles, Beijing Institute of Technology, Beijing, China. Contact: Zhaili26@bit.edu.cn

1 Introduction

Each year, many accidents occur worldwide as a result of the expanding number of automobiles, resulting in significant financial and human losses [1]. Human
error is the primary cause of many accidents, owing to weariness, sleepiness, lack of focus, and ignorance of road conditions [1]. Automobile manufacturers have made strenuous attempts in recent years to incorporate driver assistance technologies into their vehicles to aid drivers in controlling and steering the vehicle [2]. Driver assistance technologies are becoming more prevalent as a means of boosting the security of premium automobiles [3]. Fig. 1 shows the perception sensor locations and the corresponding ADAS functions they support in autonomous vehicles.

Fig. 1. Perception sensor location and corresponding ADAS function support for autonomous vehicles.

Driver assistance systems include road diversion warning systems, accident warning systems, lane departure detection systems, junction and traffic light detection systems, and systems that detect objects in front of the vehicle [4]. Among these, the lane detection system is critical for avoiding vehicle deviations and road accidents. Smart vehicles [2] are automobiles equipped with driving aid technology. Road line detection is of substantial importance in the realm of intelligent vehicles, since it has several applications in autonomous driving [2]. Intelligent automobiles receive data from road lines and steer the vehicle between them [3]. In recent years, alongside autonomous vehicles, computer vision has been introduced in several other areas, such as fraud detection [5], forensic systems [6], home security systems [7], accident analysis [8], and IoT devices [9]. Lane detection refers to the process of determining the limits of the lines in a picture of a road without prior knowledge of the road lines' locations [2]. The lane detecting system is meant to provide a prompt reaction and warning to the driver in the case of a deviation from the primary path, allowing for increased vehicle control.
Lane detection has been touted as a critical component of safe driving technology. By accurately recognizing the car's location and keeping it traveling within the road lines, this technology prevents the vehicle from diverting.

There are various methods for obtaining road lines in lane detecting systems. These include the placement of magnetic indicators on highways, as well as the use of sensors, high-precision GPS, and image processing [10]. The most precise method of determining road lines is to place magnetic markers throughout the road that are detectable by automobile sensors [10]. Another method
of determining a vehicle's present location is to calculate the global change in position relative to the road lines [10], which is also somewhat costly. The most accessible and cost-effective method of identifying road lines is image processing, which extracts road lines from an image taken with a video camera [10]. Because video pictures provide useful information about their surroundings [3], they are critical for lane detection. In-car cameras mounted behind the windshield are utilized in lane detecting systems. For instance, Bertozzi (1998) used the GOLD system, one of the most widely used lane detecting systems, which is based on road imagery from a camera mounted on a vehicle [3]. The path is determined using a method called model matching. Additionally, the car's front end is located via stereo image processing. The perspective effect of the picture is removed in this manner, allowing for the application of the pattern matching methodology [11]. The image's straight lines are then retrieved by employing a horizontal edge detector to look for horizontal patterns of dark-light-dark illumination. Then, pieces that are adjacent or have the same direction are blended to reduce the likelihood of inaccuracy due to noise or obstruction [12]. Finally, lines that correspond to a certain road model are chosen.

This experiment presents a computer vision-based approach for effectively distinguishing lanes in any surrounding environment. The first stage is the necessary preprocessing, such as noise reduction, RGB to grey-scale conversion, and binarizing the input image. The region of interest is then selected as a polygonal region in front of the vehicle to speed up processing. Finally, the edge detection approach is applied to acquire the image's edges in the region of interest, and the Hough transform is used to identify lanes on the input image.
The remainder of the paper is organized as follows: existing similar algorithms are discussed in Section 2, and the suggested lane-detecting approach is explained in Section 3. The experimental findings and discussion are presented in Section 4 using input and output data. Section 5 concludes the paper.

2 Related Works

In [13], the authors apply the Hough transform and Euclidean distance. To begin, the picture is converted from RGB to YCbCr space. Since the human visual system is more sensitive to luminance, the Y component of the picture is employed to identify the lines. To increase the system's speed and accuracy, the bottom portion of the picture is chosen as the region of interest. Histogram equalization is employed to improve the contrast between the road surface and the road lines in the region of interest, and a binary picture is created by Otsu thresholding [13]. The area of interest is then split into two sub-areas, with the Canny edge detection and Hough transform techniques employed to independently identify the left and right lines of the road for each sub-region [13].

The author of [14] provides a technique for recognizing road lines that is robust to changes in illumination. This approach begins by extracting the image's
edges using Canny edge detection, then obtaining the image's lines using the Hough transform and calculating the intersection point of the discovered lines. The vanishing point is chosen as the centre of the region with the most votes, and the region of interest is chosen as the area below the vanishing point in the picture. To ensure that the suggested method is robust to fluctuations in brightness, the road's yellow and white lines are calculated independently. The binary picture is then formed by assigning the value 1 to the regions next to the road lines and 0 to the remaining image areas. The areas of the lines are then noted, and the centre of each area is determined in the binary picture using the connected component clustering algorithm. Additionally, each region's angle and point of intersection with the y-axis are determined. Areas with the same angle and intersection are joined into one area, and the picture denotes the left and right lines of the road [14].

According to [15], the input picture is first transformed out of perspective to align the road lines [15]. The colour picture is then converted to greyscale, and the grey image's edges are retrieved using an edge recognition algorithm [9]. The morphological dilation operator is used to eliminate small picture noises. Then, using the Hough transform, the road lines are recognized and compared to the lines of the previous picture frame. Lines are maintained if they fit; otherwise, the following Hough transform lines are examined.

To ease the process of identifying lines, [16] uses a bird's-eye perspective to capture the road picture in front of the automobile, which makes the road lines parallel. To identify the picture's edges, two one-dimensional Gaussian and Laplace filters are utilized [16], and the binary image is generated using the Otsu threshold and computed using the Hough transform of the image lines.
Following that, the major road lines are created using a series of horizontal lines, and their intersections with the picture lines are detected using the least mean squares method [16].

The research in [17] reported a robust Hough-domain lane detecting system. The proposed approach employs road pictures to statistically calculate the predicted location and deviation of the road width. Additionally, a mask is built in the Hough domain to constrain the road width and vehicle location as required [17].

The authors of [18] provided a novel strategy for preprocessing and identifying the region of interest. The input frame is read first, and then the input picture is preprocessed; the area of interest is then picked, and the road lines are detected. Preprocessing involves converting the picture to HSV space and extracting the image's white characteristics. The picture is then smoothed, and the noise effect is reduced using a Gaussian filter. Following that, the picture is transformed to binary using threshold processing. The region of interest is then determined to be the bottom half of the picture. The edge detection technique is used to extract the image's edges, and the Hough transform is used to identify the road's major lines. Finally, real-time detection and tracking of road lines are accomplished using the extended Kalman filter [18].

In [19], researchers developed a unique technique for lane recognition with a high degree of accuracy. The three-layer input picture is reduced to a single
layer, and the sharpness is increased. The zone of interest in this research is determined by the lowest safe distance from the vehicle ahead. To provide a smooth lane fitting, the Hough transform is utilized.

The paper [20] presents a multi-stage Hough space computation for the lane detecting challenge. A fast Hough transform was devised to extract and categorize line segments from pictures. Kalman filtering was used to smooth the probabilistic Hough space between frames in order to minimize noise caused by occlusion, vehicle movement, and classification mistakes. The filtered probabilistic Hough space was used to filter out line segments with low probability values (threshold set to 0.7) and retain those with high probability values.

The paper [21] defined a Region of Interest (ROI) to ensure a dependable real-time system. The Hough transform is used to extract boundary information and to identify the margins of road lane markers. Otsu's threshold is used to improve preprocessing results, compensate for illumination issues, and gather gradient information. The suggested technique enables precise lane monitoring and gathers reliable vehicle orientation data [21]; the authors of [22] also used this data in their research.

3 Methodology

This article uses video pictures obtained from the road by a camera positioned inside the automobile to determine road lines in real time. The camera is mounted inside the vehicle behind the windscreen, almost in the centre, to offer a view of the road.

The suggested approach for determining road lines involves three steps: preprocessing, area selection, and road line determination. Preprocessing steps include noise reduction, RGB to greyscale conversion, and input picture binarization. Then, a polygonal region in front of the automobile is picked as the area of interest due to the presence of road lines in that area.
The third stage uses the Canny edge detection technique to identify the image's edges inside the region of interest, and the Hough transform to identify the road's major lines. Fig. 2 depicts the block diagram of the proposed technique.

3.1 Preprocessing

The noise in the picture acquired by the camera positioned inside the automobile is eliminated at this step using a Gaussian filter [23]. Fig. 4 shows the smoothed image after the Gaussian filter [23] has been applied to the input image in Fig. 3. To minimize algorithm computations, the input picture is transformed from RGB colour space to greyscale. Fig. 5 depicts the input picture in grey mode. Then, using the Dong method [22], the input picture is converted from grey space (Fig. 5) to binary space (Fig. 6).
Fig. 2. Simplified block diagram of the proposed technique.

3.2 Region of Interest

The road lines are visible in front of and on both sides of the automobile in the photos received from the road surface. The region of interest in this article is a dynamic polygonal section of the picture in front of the automobile, chosen using a trapezoidal mask. Information about the image's vanishing point is utilized to construct the trapezoidal mask: a trapezoid is formed below the vanishing point that encompasses the region directly in front of the automobile. Fig. 7 illustrates a mask created for the input picture sample, in which the portion of the image corresponding to the region of interest is one and the remainder is zero. By applying this mask to the input binary picture, the preferred region is picked, containing the road lines just in front of the automobile. The output binary picture in Fig. 8 results from applying the designed mask to the input binary image.
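A minimal NumPy sketch of such a trapezoidal mask follows. The corner coordinates and the tiny frame size are illustrative assumptions; in the paper the trapezoid is derived from the vanishing point of the actual frame:

```python
import numpy as np

def trapezoid_mask(h, w, top_y, top_l, top_r, bot_l, bot_r):
    """Binary mask that is 1 inside a trapezoid spanning rows top_y..h-1.

    The left/right borders are interpolated linearly from (top_l, top_r)
    at row top_y to (bot_l, bot_r) at the bottom row, mimicking the
    region in front of the vehicle below the vanishing point.
    """
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(top_y, h):
        t = (y - top_y) / max(h - 1 - top_y, 1)   # 0 at the top, 1 at the bottom
        left = int(round(top_l + t * (bot_l - top_l)))
        right = int(round(top_r + t * (bot_r - top_r)))
        mask[y, left:right + 1] = 1
    return mask

binary_image = np.ones((6, 10), dtype=np.uint8)   # stand-in for the binarized frame
mask = trapezoid_mask(6, 10, top_y=2, top_l=4, top_r=5, bot_l=0, bot_r=9)
roi = binary_image * mask                         # zero outside the region of interest
print(mask.sum(), roi.sum())  # -> 24 24
```

Multiplying the binary frame by the mask, as in the last line, reproduces the Fig. 7 to Fig. 8 step: everything outside the trapezoid becomes zero.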
Fig. 3. An example of the input image.

Fig. 4. Input image after noise removal.

3.3 Determination of Road Lines

Picture edges are a very helpful and effective feature for detecting objects in photographs. An edge is a point at which the image brightness shifts abruptly. As long as the quantity of light on the road is consistent and differs from the brightness of the lines, the border between the lines and the road (the edge) may be determined. Numerous algorithms are available in this field, including those developed by Sobel, Canny, Roberts, and Prewitt. The Canny edge detection technique is used in this article [24].

Six steps are required to identify the edges using the Canny technique. The first step is to filter out the noise in the original picture, which is accomplished using a Gaussian filter [23] and a basic mask. The second step is to locate strong edges at each location by using the gradient amplitude. Two masks are applied to the picture for this purpose, the gradient amplitudes in the x and y directions are computed, and the edge strength is determined using (1):

G = |Gx| + |Gy| (1)

where G represents the edge strength, Gx represents the gradient amplitude in the x direction, and Gy represents the gradient amplitude in the y direction, computed with the masks in (2):

Gx = | -1  0  +1 |        Gy = | +1  +2  +1 |
     | -2  0  +2 |             |  0   0   0 |
     | -1  0  +1 |             | -1  -2  -1 |        (2)
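Equations (1) and (2) can be sketched directly with the two kernels above (NumPy only; the step-edge test image is an assumption, and `arctan2` is used in place of plain `arctan` so the direction is well defined when Gx is zero):

```python
import numpy as np

# The two gradient masks of equation (2).
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]])

def convolve3(img, k):
    """Valid 3x3 correlation, enough to demonstrate the gradient step."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * k)
    return out

# A vertical step edge: dark on the left, bright on the right.
img = np.tile(np.array([0., 0., 255., 255., 255.]), (5, 1))
gx = convolve3(img, SOBEL_X)
gy = convolve3(img, SOBEL_Y)
g = np.abs(gx) + np.abs(gy)              # edge strength, equation (1)
theta = np.degrees(np.arctan2(gy, gx))   # edge direction, cf. equation (3)
print(g[1], theta[1])
```

For this step edge the strength peaks in the two columns straddling the transition, and the direction is 0 degrees everywhere, i.e. a vertical edge with a purely horizontal gradient.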
Fig. 5. Input image converted into a grey image.

Fig. 6. Grey image converted into a binary image.

The third step establishes the orientation of the picture edges generated by the Canny masks, using (3):

θ = arctan(Gy / Gx) (3)

where θ denotes the edge direction, Gx denotes the gradient along the x axis, and Gy denotes the gradient along the y axis.

The fourth step matches the directions determined in the previous step to one of four angles: 0, 45, 90, or 135 degrees. The fifth step is to suppress non-maximum edges by checking the direction and removing pixels that are not local maxima along it. In the sixth step, the Canny algorithm applies the hysteresis approach, which defines an upper and a lower threshold value: any pixel with a gradient larger than or equal to the upper threshold is accepted, any pixel with a gradient below the lower threshold is rejected, and pixels in between are kept only if they connect to an accepted edge. We extract the picture's edges by applying the Canny edge detection algorithm [24] to the binary image. The outcome of applying the edge finder to the previous stage's picture is seen in Fig. 9.

The Hough transform is then used to identify the major lines of the road. The Hough transform is a feature extraction method used in digital image processing, machine vision, and image analysis. Its objective is to discover the various shapes contained in the picture. Initially, the Hough transform was used to recognize lines, but it was later extended to identify other shapes such as circles and ellipses. The Hough transform is employed in this technique to determine and indicate the
Fig. 7. An example of the input-image mask used to determine the region of interest. Fig. 8. The binary image after applying the ROI mask. Fig. 9. Result of applying the Canny algorithm to the binary image. locations of road lines. Then, as illustrated in Fig. 10, the lines nearest the automobile on each side are taken as the left and right road lines. 4 Results and Discussions The IROADS database [25, 26] is used to evaluate the performance of the proposed algorithm. The database consists of seven collections with a total of 4700 frames and covers almost all common road conditions, including daylight, night darkness, excessive sunlight, tunnel light, day and night rainfall, and snowy roads. Each frame in this database has a resolution of 360×640 pixels. Since this paper examines different daytime lighting conditions, four sets of data from the IROADS database are reviewed: daytime driving in sunny, rainy, snowy, and in-tunnel conditions. The average detection rate for 2828 frames
Fig. 10. Simplified output of the input image. from the IROADS database in these daylight circumstances is 96.78 percent, as shown in Table 1.

Table 1. Accuracy of the proposed method under different weather conditions.

| IROAD database  | Total frames | Incorrectly detected frames | Identification rate (%) | Frames per second |
| IROAD Daylight  | 903          | 32                          | 96.40                   | 35.2              |
| IROAD Rainy Day | 1049         | 7                           | 99.38                   | 35.6              |
| IROAD Snow Day  | 569          | 31                          | 94.59                   | 36.2              |
| IROAD Tunnel    | 307          | 21                          | 93.12                   | 35.8              |
| Total           | 2828         | 91                          | 96.78                   | 35.7              |

Fig. 11 illustrates the proposed algorithm's road-line identification results in various driving circumstances throughout the day. The proposed method produces one of two outcomes on each frame of the input video. 1. Correct detection, in which the lines on both sides of the vehicle are appropriately detected. 2. Incorrect detection, in which the lines on both sides of the vehicle, or on just one side, are not accurately recognised. The algorithm's recognition rate is computed against the previously created labels using (4):

Detection Rate = (Number of identified frames / Total number of frames) × 100    (4)

The proposed approach is implemented in Python on a laptop with a 2.40 GHz CPU and 8 GB of RAM, and it takes an average of 28 milliseconds to process each frame. The proposed method therefore recognizes road lines accurately and in a timely manner. To evaluate its performance, it was compared with several previously published methods. Table 2 compares the results obtained with Kortli's method [14], Suto's method [27], Tian's method [28], and this study's method on the IROADS dataset [26]. According to the findings of this experiment, the proposed technique outperforms the existing methods
Fig. 11. Output of the proposed technique in different environmental scenarios. in every category except the Tunnel scenario. This is mostly because the lane markers are washed out by the tunnel lights in that case and hence are not recognized by the proposed technique; however, the outcomes could be improved by using information from earlier frames.

Table 2. Comparative result analysis of the proposed method against some existing techniques.

| Methods              | IROAD Daylight | IROAD Rainy Day | IROAD Snow Day | IROAD Tunnel | Total |
| Kortli's method [14] | 95.79          | 88.27           | 86.64          | 93.51        | 90.81 |
| Suto's method [27]   | 96.12          | 93.42           | 92.44          | 94.14        | 94.17 |
| Tian's method [28]   | 96.23          | 94.75           | 93.67          | 94.46        | 94.99 |
| The proposed method  | 96.40          | 99.38           | 94.59          | 93.12        | 96.78 |

The findings demonstrate the proposed algorithm's usefulness in identifying lanes under a variety of scenarios. The approach does not yet include tracking, since lanes are detected independently in each picture without the use of temporal information. Although the proposed method for recognizing road lines works well in about 97% of situations, it may fail in the remaining 3% owing to faded road lines, road lines covered by vehicles alongside, and shifts in the driver's direction. 5 Conclusions Lane detection refers to the process of determining the position of lane boundaries in a road picture without prior information about their location. Lane
detection is intended to notify the driver about departures from the main lane, allowing for improved vehicle control on the road. The purpose of this article is to provide a simple approach for identifying road lines in high-speed video pictures by using the edge feature. The proposed technique begins with the required preprocessing, which includes noise reduction, conversion of the picture to greyscale, and finally binarization. A region of interest was then defined as a polygonal area directly in front of the vehicle. Finally, the Canny edge detection technique and the Hough transform were used to identify the road lines. The algorithm's performance was evaluated on the IROADS database, in which each frame has a size of 360×640. The study examines four datasets from the IROADS database, covering daytime driving in sunny, rainy, and snowy conditions, as well as inside tunnels. The proposed technique executes in 28 milliseconds per frame on average and achieves a detection rate of 96.78 percent.

References

1. Jonathan Horgan, Ciarán Hughes, John McDonald, and Senthil Yogamani. Vision-based driver assistance systems: Survey, taxonomy and advances. In 2015 IEEE 18th International Conference on Intelligent Transportation Systems, pages 2032–2039. IEEE, 2015.
2. Feng You, Ronghui Zhang, Lingshu Zhong, Haiwei Wang, and Jianmin Xu. Lane detection algorithm for night-time digital image based on distribution feature of boundary pixels. Journal of the Optical Society of Korea, 17(2):188–199, Apr 2013.
3. Jingwei Cao, Chuanxue Song, Shixin Song, Feng Xiao, and Silun Peng. Lane detection algorithm for intelligent vehicles in complex road conditions and dynamic environments. Sensors, 19(14), 2019.
4. Sang-Il Oh and Hang-Bong Kang. Object detection and classification by decision-level fusion for intelligent vehicle systems. Sensors, 17(1), 2017.
5. Srishti Sahni, Anmol Mittal, Farzil Kidwai, Ajay Tiwari, and Kanak Khandelwal.
Insurance fraud identification using computer vision and IoT: A study of field fires. Procedia Computer Science, 173:56–63, 2020. International Conference on Smart Sustainable Intelligent Computing and Applications under ICITETM2020.
6. Owen Mayer and Matthew C. Stamm. Learned forensic source similarity for unknown camera models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2012–2016. IEEE, 2018.
7. Md Faishal Rahaman, Md Abdullah Al Noman, Muhammad Liakat Ali, and Mahfuzur Rahman. Design and implementation of a face recognition based door access security system using Raspberry Pi. International Research Journal of Engineering and Technology (IRJET), 8:1705–1709, 2021.
8. JoonOh Seo, SangUk Han, SangHyun Lee, and Hyoungkwan Kim. Computer vision techniques for construction safety and health monitoring. Advanced Engineering Informatics, 29(2):239–251, 2015. Infrastructure Computer Vision.
9. Mahfuzur Rahman, Md. Faishal Rahaman, Md. Moazzem Hossan Munna, Kelvin Jian Aun Ooi, Khairun Nisa' Minhad, Mohammad Arif Sobhan Bhuiyan, and Mahdi H. Miraz. Integrated CMOS active low-pass filter for IoT RFID transceiver. In Mahdi H. Miraz, Garfield Southall, Maaruf Ali, Andrew Ware, and Safeeullah Soomro, editors, Emerging Technologies in Computing, pages 73–84, Cham, 2021. Springer International Publishing.
10. Dhananjay Singh and Madhusudan Singh. Internet of vehicles for smart and safe driving. In 2015 International Conference on Connected Vehicles and Expo (ICCVE), pages 328–329. IEEE, 2015.
11. M. Bertozzi and A. Broggi. GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Transactions on Image Processing, 7(1):62–81, 1998.
12. Sahil Samantaray, Rushikesh Deotale, and Chiranji Lal Chowdhary. Lane detection using sliding window for intelligent ground vehicle challenge. In Jennifer S. Raj, Abdullah M. Iliyasu, Robert Bestak, and Zubair A. Baig, editors, Innovative Data Communication Technologies and Application, pages 871–881, Singapore, 2021. Springer Singapore.
13. Prajakta R Yelwande and Aditi Jahagirdar. Real-time robust lane detection and warning system using Hough transform method. International Journal of Engineering Research & Technology (IJERT), 8(8):385–392, 2019.
14. Yassin Kortli, Mehrez Marzougui, Belgacem Bouallegue, J. Subash Chandra Bose, Paul Rodrigues, and Mohamed Atri. A novel illumination-invariant lane detection system. In 2017 2nd International Conference on Anti-Cyber Crimes (ICACC), pages 166–171. IEEE, 2017.
15. Yu Chen, Yuanlong Yu, and Ting Li. A vision based traffic accident detection method using extreme learning machine. In 2016 International Conference on Advanced Robotics and Mechatronics (ICARM), pages 567–572. IEEE, 2016.
16. Swapnil Waykole, Nirajan Shiwakoti, and Peter Stasinopoulos. Review on lane detection and tracking algorithms of advanced driver assistance system. Sustainability, 13(20), 2021.
17. Umar Ozgunalp and Sertan Kaymak. Lane detection by estimating and using restricted search space in Hough domain. Procedia Computer Science, 120:148–155, 2017. 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception, ICSCCW 2017, 22-23 August 2017, Budapest, Hungary.
18.
Malik Haris and Adam Glowacz. Lane line detection based on object feature distillation. Electronics, 10(9), 2021.
19. Hongyi Du et al. Lane line detection and vehicle identification using monocular camera based on MATLAB. Academic Journal of Computing & Information Science, 4(2), 2021.
20. Yi Sun, Jian Li, and Zhenping Sun. Multi-stage Hough space calculation for lane markings detection via IMU and vision fusion. Sensors, 19(10), 2019.
21. Qiao Huang and Jinlong Liu. Practical limitations of lane detection algorithm based on Hough transform in challenging scenarios. International Journal of Advanced Robotic Systems, 18(2):1–13, 2021.
22. Yue Dong, Jintao Xiong, Liangchao Li, and Jianyu Yang. Robust lane detection and tracking for lane departure warning. In 2012 International Conference on Computational Problem-Solving (ICCP), pages 461–464. IEEE, 2012.
23. Houjie Xu and Huifang Li. Study on a robust approach of lane departure warning algrithm. In 2010 2nd International Conference on Signal Processing Systems, volume 2, pages 201–204. IEEE, 2010.
24. Weibin Rong, Zhanjing Li, Wei Zhang, and Lining Sun. An improved Canny edge detection algorithm. In 2014 IEEE International Conference on Mechatronics and Automation, pages 577–582. IEEE, 2014.
25. Mahdi Rezaei and Mutsuhiro Terauchi. Vehicle detection based on multi-feature clues and Dempster-Shafer fusion theory. In Image and Video Technology, pages 60–72. Springer, 2014.
26. Mahdi Rezaei and Mutsuhiro Terauchi. IROADS dataset (intercity roads and adverse driving scenarios). Enpeda Image Sequence Analysis Test Site (EISATS), 2014.
27. Jozsef Suto. Real-time lane line tracking algorithm to mini vehicles. Transport and Telecommunication, 22(4):461–470, 2021.
28. Jingyi Tian, Zhiqiang Wang, and Qing Zhu. An improved lane boundaries detection based on dynamic ROI. In 2017 IEEE 9th International Conference on Communication Software and Networks (ICCSN), pages 1212–1217. IEEE, 2017.