International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 6, December 2022, pp. 6284~6292
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i6.pp6284-6292
Journal homepage: http://ijece.iaescore.com
Visual and light detection and ranging-based simultaneous
localization and mapping for self-driving cars
El Farnane Abdelhafid, Youssefi My Abdelkader, Mouhsen Ahmed,
Dakir Rachid, El Ihyaoui Abdelilah
Engineering Laboratory, Industrial Management and Innovation, Faculty of Sciences and Technologies,
Hassan First University of Settat, Settat, Morocco
Article history: Received Jul 6, 2021; Revised Jun 10, 2022; Accepted Jul 5, 2022

ABSTRACT
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as the absence of satellite signal (tunnels and caves), that restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle's position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR) based SLAM refers to the use of cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car's trajectory using LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results based on LIDAR scans indicate that SLAM is convenient and helpful for localization and mapping.
Keywords: Camera; Light detection and ranging; Localization; Mapping; Self-driving cars; Simultaneous localization and mapping
This is an open access article under the CC BY-SA license.
Corresponding Author:
El Farnane Abdelhafid
Engineering Laboratory, Industrial Management and Innovation, Faculty of Sciences and Technologies,
Hassan First University
Settat City, Morocco
Email: a.elfarnane@uhp.ac.ma
1. INTRODUCTION
Self-driving cars are regarded as the next big thing, benefiting from intelligent transportation systems, smart cities, the internet of things (IoT), and new technologies such as machine learning. At present, the automotive industry contributes substantially to innovation and economic growth. Navigation for autonomous driving vehicles is a very active research area [1], [2]. In order to ensure the efficiency and safety of traffic systems that rely on driving strategies [3], researchers around the world are concentrating their efforts on the advancement of this field [4]. In this context, the defense advanced research projects agency (DARPA) has regularly organized grand challenges for autonomous vehicles [5].
One of the most important criteria for autonomous navigation is accurate localization of the car itself. The global navigation satellite system (GNSS) is the most widely used localization system, providing absolute positioning with high accuracy [6]. However, special conditions, such as multi-path effects and latency, restrict its use [7].
Only by representing the environment using different kinds of sensors can robots navigate in these conditions. The process of representing the environment, or constructing a map, while simultaneously estimating the position of a car is called simultaneous localization and mapping (SLAM) [8]. SLAM is a solution for several applications, including self-driving cars [9], mobile robots [10], unmanned aerial vehicles (UAV)
[11], and autonomous underwater vehicles [12]. Many SLAM techniques for self-driving cars are discussed in the literature [13]. Liu and Miura [14] presented a visual dynamic SLAM algorithm based on ORB-SLAM for real-time tracking and mapping; results show that the algorithm is efficient and accurate in dynamic indoor environments. Wang et al. [15] proposed a robust multi-LIDAR framework for self-localization and mapping problems; according to the authors, the suggested solution produced better results than single-LIDAR approaches. In this paper, after surveying and discussing camera-based and light detection and ranging (LIDAR) based SLAM approaches, we focus on SLAM using LIDAR sensors in order to create a map of an environment and locate the pose of a self-driving car.
The remainder of this paper is structured as follows: section 2 discusses sensors for perception and localization, and presents LIDAR- and camera-based localization techniques. Section 3 shows the results obtained by the implementation of the SLAM algorithm. Finally, section 4 concludes by discussing future orientations and remaining challenges.
2. METHOD
2.1. Sensor-based localization
Localization and mapping are the most important requirements for self-driving cars. The car needs to know where it is, as well as its environment, at all times and in all conditions. The car must be able to localize itself and generate a map of its surroundings, called a "local map". A map obtained by the global positioning system (GPS) is insufficient because the environment is extremely dynamic. As a result, creating a local map and integrating it with the global map is required for more accurate navigation. Several sensors used for localization are presented below, as well as a synthesis of camera- and LIDAR-based techniques.
2.1.1. Sensors of perception
In self-driving cars, perception frameworks detect and interpret the surrounding environment based on different types of sensors. Sensors are broadly classified into two types, depending on what property they record. They are exteroceptive if they record an environmental property; conversely, if they record a property of the ego vehicle, they are proprioceptive. The classification and characteristics of sensors used in perception are shown in Table 1.
Table 1. Sensors of perception

| Sensor | Role | Characteristics | Type |
|---|---|---|---|
| Camera | Essential for correctly perceiving the environment | Resolution; field of view; dynamic range | Exteroceptive |
| LIDAR | Detailed 3D scene geometry | Number of beams; points per second; rotation rate; field of view | Exteroceptive |
| Radar | Object detection and relative speed estimation | Range; field of view; accuracy | Exteroceptive |
| Ultrasonic | Short-range, all-weather distance measurement; unaffected by lighting | Range; field of view; cost | Exteroceptive |
| GNSS/IMU | Measurement of ego vehicle states | | Proprioceptive |
| Odometry | Tracks wheel rotation to calculate overall speed and orientation | | Proprioceptive |
2.1.2. Camera-based localization
There are several localization techniques in the literature that use only cameras [16], [17] or integrate a camera with other sensors [18] to improve performance. Suhr et al. [19] proposed, for urban environments, using GPS/IMU for global positioning and a camera for recognizing road markers as well as lane markers to find both lateral and longitudinal position. The authors noted that this technique gives, in one of five experiments, average lateral errors of 0.49 m, longitudinal errors of 0.95 m, and a Euclidean error of 1.18 m. Li et al. [20] suggested, also for urban environments, a vision-based localization approach using only cameras. Their hybrid method combines a topological map to estimate a global position with a metric map to find a fine localization. The authors noted that this low-cost approach gives a mean positioning error of 0.75 m.
2.1.3. LIDAR-based localization
LIDAR is another important sensor that can improve localization performance for self-driving cars. It scans the surrounding environment and generates multiple points to build a 3D map using light detection and ranging. LIDAR is known to offer precise and robust measurements of the environment. Because it is expensive compared to other sensors, LIDAR is often employed only to build the map, while a camera is used to localize the vehicle.
Wolcott and Eustice [21] suggested a generic probabilistic localization method based on a 3D LIDAR scanner. This technique models the world as a fast and exact multiresolution map of a mixture of Gaussians. The Gaussian mixture maps were evaluated for localization on two autonomous vehicles, in adverse weather conditions, and resulted in longitudinal and lateral RMS errors below 0.1 m and 0.13 m, respectively. Hata and Wolf [22] proposed, for the urban environment, a method based on a multilayer LIDAR to estimate the vehicle's localization using a map of the environment created by the LIDAR sensor. The authors noted that this approach gives longitudinal, lateral, and angular errors of 0.1395 m, 0.204 m, and 0.019 rad, respectively.
2.1.4. Summary
Camera-based localization solutions provide excellent performance. Their biggest drawback is error caused by low-textured environments and sensitivity to lighting changes. In addition, image processing requires a high level of computational complexity. LIDAR, on the other hand, is known for its ease of use and accuracy. To take advantage of the benefits of each sensor, several techniques use sensor fusion, as we will see in the next section. Table 2 summarizes camera- and LIDAR-based localization.
Table 2. Summary of camera and LIDAR based localization

| Approach | Study | Accuracy |
|---|---|---|
| Camera-based localization | [19] | Longitudinal error 0.95 m; lateral error 0.49 m; Euclidean error 1.18 m |
| Camera-based localization | [20] | Positioning error of 0.75 m |
| LIDAR-based localization | [21] | Longitudinal error 0.1 m; lateral error 0.13 m |
| LIDAR-based localization | [22] | Longitudinal error 0.1395 m; lateral error 0.204 m; angular error 0.019 rad |
2.2. Vision and LIDAR based SLAM
2.2.1. Principle of SLAM
SLAM is a technique for simultaneously creating a map online and locating a car on it. It is common practice in self-driving cars and works effectively in both indoor and outdoor environments [23]. However, it remains less efficient than localization on a prebuilt map under challenging environmental conditions, such as high speeds and large numbers of dynamic vehicles [24].
2.2.2. Probabilistic modeling of the SLAM problem
In order to model the SLAM problem, we consider a vehicle moving through an unknown environment, as shown in Figure 1. The idea is to use a graph to model the SLAM problem, which includes the positions xt of the robot as well as the map landmarks mi. Lines connecting positions represent the trajectory of the robot, and arrows represent the distances to landmarks. The position data is obtained by proprioceptive sensors mounted on the robot, while landmarks are extracted from the environment using exteroceptive sensors such as LIDAR.

Figure 1. Basic idea of SLAM
At time $t$ we define the following quantities: $x_t$ is the state vector that represents the vehicle; $u_t$ is the control vector applied at $t-1$ to drive the vehicle to $x_t$; $m_i$ is the vector representing the $i$-th landmark; $z_{t,i}$ is the observation of the $i$-th landmark. Assuming that the duration between two successive positions is constant and equal to $T$, the instant $t$ becomes $kT$. To simplify calculations, we will use the index $k$ for $kT$. Then, from time $0$ to $kT$, the following sets are defined:
- $X_{0:k} = \{x_0, x_1, \ldots, x_k\}$: set of vehicle locations;
- $U_{0:k} = \{u_0, u_1, \ldots, u_k\}$: set of control inputs;
- $Z_{0:k} = \{z_0, z_1, \ldots, z_k\}$: set of observations;
- $M = \{m_1, m_2, \ldots, m_n\}$: set of landmarks, i.e., the map.
a) Modeling of localization
The localization problem consists of calculating a vehicle's position in a given environment using environmental data such as the observation and command history [25]. This problem is represented by the estimation of the probability distribution

$$P(x_k \mid Z_{0:k}, U_{0:k}, M)$$

This formulation, which represents global localization, uses all of the data from the history of observations and commands. It gives an accurate estimate of the position, but it substantially complicates the calculations. To reduce the computational complexity, we estimate the position at time $kT$ using only data from time $(k-1)T$. This is local localization, represented by

$$P(x_k \mid z_{k-1}, u_{k-1}, x_{k-1}, M)$$

Implementing this model greatly reduces the algorithm's complexity, but accuracy degrades as a result.
b) Modeling of mapping
The process of building a map of the environment using sensor data and the history of actual robot poses is known as mapping. Mathematically, the mapping problem can be modeled as

$$P(M \mid Z_{0:k}, X_{0:k})$$

To create a good map, the robot's positions must be exact and accurate.
c) Modeling of SLAM
The probabilistic modeling of SLAM requires determining the following probability at each time step:

$$P(x_k, M \mid Z_{0:k}, U_{0:k}, x_0)$$

This calculation is divided into two parts by (1) and (2), describing the position update and the observation update, respectively.
𝑃(π‘₯π‘˜, 𝑀|𝑍0:π‘˜βˆ’1, π‘ˆ0:π‘˜, π‘₯0) = ∫[ 𝑃(π‘₯π‘˜|π‘₯π‘˜βˆ’1, π‘’π‘˜) βˆ— 𝑃(π‘₯π‘˜βˆ’1, 𝑀|𝑍0:π‘˜βˆ’1, π‘ˆ0:π‘˜βˆ’1, π‘₯0)]𝑑π‘₯π‘˜βˆ’1 (1)
𝑃(π‘₯π‘˜, 𝑀|𝑍0:π‘˜, π‘ˆ0:π‘˜, π‘₯0) =
𝑃(π‘§π‘˜|π‘₯π‘˜, 𝑀)βˆ—π‘ƒ(π‘₯π‘˜, 𝑀|𝑍0:π‘˜βˆ’1, π‘ˆ0:π‘˜, π‘₯0)
𝑃(π‘§π‘˜|𝑍0:π‘˜βˆ’1, π‘ˆ0:π‘˜)
(2)
P(xk|xkβˆ’1, uk): represent vehicle’s motion model (transition model). P(zk|xk, M): describes observation
model. Ξ· =
1
P(zk|Z0:kβˆ’1, U0:k)
: a normalization constant that depends on transition and observation models.
Finally, we can model SLAM problem by (3).
𝑃(π‘₯π‘˜, 𝑀|𝑍0:π‘˜, π‘ˆ0:π‘˜, π‘₯0) = πœ‚ βˆ— 𝑃(π‘§π‘˜|π‘₯π‘˜, 𝑀) βˆ— ∫[ 𝑃(π‘₯π‘˜|π‘₯π‘˜βˆ’1, π‘’π‘˜) βˆ—
𝑃(π‘₯π‘˜βˆ’1, 𝑀|𝑍0:π‘˜βˆ’1, π‘ˆ0:π‘˜βˆ’1, π‘₯0)]𝑑π‘₯π‘˜βˆ’1 (3)
2.2.3. Resolution of the SLAM problem
The SLAM problem is regarded as a critical component of self-driving cars. Many problems, however, continue to prevent the use of SLAM algorithms in vehicles that should be able to travel hundreds of kilometers in a variety of conditions [26]. The main methods for solving this problem are based on the process shown in Figure 2.
Figure 2. Block diagram of SLAM process
2.2.4. Visual SLAM
Visual simultaneous localization and mapping (V-SLAM) is the problem of establishing the location of a vehicle in an area while also building a representation of the explored region, using images as the only source of external knowledge. V-SLAM extracts features from the observed images for localization in dynamic and complex environments. Owing to the computer vision techniques employed, visual SLAM is an active area of research. Visual SLAM (or vision-based SLAM) refers to the use of cameras as the only exteroceptive sensor; when visual SLAM systems are complemented with data from proprioceptive sensors, the result is called visual-inertial SLAM. The localization process consists of two stages: global localization using a topological map and local localization using a metric map. Li et al. [20] presented an approach for vision-based localization using a hybrid environment model, which combines a topological map with a metric map to represent the environment. The nodes of the topological map are described by a holistic image descriptor, while interest point descriptors define natural landmarks on the metric map. This technique, tested in an urban environment, contributes to the development of a precise (mean error of 74.54 cm and error deviation of 91.43 cm) and effective localization method.
2.2.5. Introspective vision for SLAM (IV-SLAM)
V-SLAM algorithms assume that feature extraction errors are independent and identically distributed, which is not always true. This hypothesis lowers the tracking quality of V-SLAM algorithms, especially when the captured images include difficult conditions. To address such challenges, the authors in [27] presented introspective vision for SLAM (IV-SLAM). In this approach, the noise process of the errors from visual features is explicitly modeled as context dependent. In comparison to V-SLAM, results show that IV-SLAM is able to accurately predict sources of error in input images and decreases tracking error.
2.2.6. LIDAR based SLAM
Because of its simplicity and precision, 3D mapping with LIDAR is commonly used to solve the SLAM problem [28]. LIDAR can, in fact, achieve low-drift motion estimation while maintaining a manageable computational complexity [29]. The study in [30] claims that distortions in the collected data are caused by moving at high speed, an effect that most studies overlook even though it carries information about the vehicle's displacement. The idea is to use velocimetry and then analyze the measurement distortion.
2.2.7. Visual-LIDAR fusion-based SLAM
To further improve the robustness of SLAM, for example with respect to aggressive movements and the absence of visual features, researchers are focusing on vision-LIDAR approaches, i.e., techniques that combine LIDAR and stereo cameras. The approach proposed by Seo and Chou [31] creates a LIDAR map and a visual map with map points in different modalities, then uses them together to optimize odometry residuals so that the LIDAR map and the operating environment have global coherence. The work proposed in [32] presented a combination of a low-cost LIDAR sensor and a vision sensor. A specific cost function that considers both scan and feature constraints is proposed to perform graph optimization, and in order to speed up loop detection, a specific model with visual features is used to create a 2.5D map.
3. RESULTS AND DISCUSSION
3.1. Configuration
This section shows results produced by the implementation of a SLAM algorithm in order to build a map of the environment and the robot's trajectory using LIDAR scans. MATLAB was used to generate these simulation results, based on a data set of laser scans acquired from a real mobile robot that is available in MATLAB.
3.2. Results
The algorithm aggregates and connects LIDAR scans as the robot moves from a start point to an arrival location. Figure 3 shows the pose graph linking the first 15 scans. Figure 4 shows the final built map (magenta) and the trajectory of the robot (green).

Figure 3. Pose graph for initial 15 scans
Figure 4. Final built map and robot's trajectory
By comparing collected scans, the robot identifies previously visited positions and can create loop closures along its itinerary. The loop closure data is used to update the map of the environment and correct the trajectory of the robot. The identification of the first loop closure can be observed in Figure 5 (red). Figure 6 shows two loop closures that were detected automatically during the robot's displacement. After constructing and optimizing the pose graph, the algorithm operates on this data to produce an occupancy map that describes the area. The results obtained are presented in Figures 7 and 8.
Figure 5. Detection of first loop closure
Figure 6. Two loop closures in final built map
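A sketch of the processing loop behind these results, continuing the assumed lidarSLAM setup above: each scan is added to the pose graph, loop closures are detected and the graph optimized internally, and the optimized scans and poses are then used to build the occupancy map.

```matlab
% Feed scans to the SLAM object; addScan performs scan matching,
% loop closure detection, and pose graph optimization internally.
for i = 1:numel(scans)
    [isAccepted, loopClosureInfo, optimInfo] = addScan(slamAlg, scans{i});
end
show(slamAlg);                           % pose graph, aligned scans, loop closures

% Build the occupancy map from the optimized scans and poses.
[optScans, optPoses] = scansAndPoses(slamAlg.PoseGraph);
map = buildMap(optScans, optPoses, mapResolution, maxLidarRange);
figure; show(map); title('Occupancy map');
```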
3.3. Summary
In this section we presented the results of a MATLAB simulation using a SLAM algorithm. We started with the pose graph for the first 15 scans, and then we showed the first loop closure. All of the scans are combined to create the final map and robot trajectory shown in Figure 4. Using the optimized scans and poses, we also built an occupancy map.
Figure 7. Occupancy map
Figure 8. Loop closure in occupancy map
4. CONCLUSION
In this paper, we introduced the concept of localization, which is an essential component of self-driving cars. We illustrated state-of-the-art techniques for locating vehicles using a camera and LIDAR, with particular attention to their accuracy. We then discussed the SLAM method, which can give the position of a car while simultaneously building a map of the environment. Several of these approaches, namely visual SLAM, IV-SLAM, LIDAR SLAM, and visual-LIDAR fusion SLAM, were discussed. We illustrated the accuracy and robustness of SLAM solutions in the last section by implementing a LIDAR SLAM algorithm in MATLAB and showing the results. In the future, we will try a hybrid implementation of the SLAM algorithm using camera and LIDAR fusion. We will also study several approaches to improving the algorithm's performance.
REFERENCES
[1] J. Lopez, P. Sanchez-Vilarino, R. Sanz, and E. Paz, β€œEfficient local navigation approach for autonomous driving vehicles,” IEEE
Access, vol. 9, pp. 79776–79792, 2021, doi: 10.1109/ACCESS.2021.3084807.
[2] M. Raj and V. G. Narendra, β€œDeep neural network approach for navigation of autonomous vehicles,” in 2021 6th International
Conference for Convergence in Technology (I2CT), Apr. 2021, pp. 1–4, doi: 10.1109/I2CT51068.2021.9418189.
[3] C. Zhao, L. Li, X. Pei, Z. Li, F.-Y. Wang, and X. Wu, β€œA comparative study of state-of-the-art driving strategies for autonomous
vehicles,” Accident Analysis and Prevention, vol. 150, Feb. 2021, doi: 10.1016/j.aap.2020.105937.
[4] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, β€œA survey of autonomous driving: common practices and emerging
technologies,” IEEE Access, vol. 8, pp. 58443–58469, 2020, doi: 10.1109/ACCESS.2020.2983149.
[5] M. Buehler, K. Iagnemma, and S. Singh, Eds., The DARPA urban challenge, vol. 56. Berlin, Heidelberg: Springer Berlin
Heidelberg, 2009, doi: 10.1007/978-3-642-03991-1.
[6] S. Miura, L.-T. Hsu, F. Chen, and S. Kamijo, β€œGPS error correction with pseudorange evaluation using three-dimensional maps,”
IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3104–3115, Dec. 2015, doi:
10.1109/TITS.2015.2432122.
[7] Y. Lin et al., β€œAutonomous aerial navigation using monocular visual-inertial fusion,” Journal of Field Robotics, vol. 35, no. 1,
pp. 23–51, Jan. 2018, doi: 10.1002/rob.21732.
[8] H. Durrant-Whyte and T. Bailey, β€œSimultaneous localization and mapping: part I,” IEEE Robotics and Automation Magazine,
vol. 13, no. 2, pp. 99–110, Jun. 2006, doi: 10.1109/MRA.2006.1638022.
[9] G. Bresson, Z. Alsayed, L. Yu, and S. Glaser, β€œSimultaneous localization and mapping: a survey of current trends in autonomous
driving,” IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194–220, Sep. 2017, doi: 10.1109/TIV.2017.2749181.
[10] Y. Jang, C. Oh, Y. Lee, and H. J. Kim, β€œMultirobot collaborative monocular SLAM utilizing rendezvous,” IEEE Transactions on
Robotics, vol. 37, no. 5, pp. 1469–1486, Oct. 2021, doi: 10.1109/TRO.2021.3058502.
[11] Y. Luo, Y. Li, Z. Li, and F. Shuang, β€œMS-SLAM: motion state decision of keyframes for UAV-based vision localization,” IEEE
Access, vol. 9, pp. 67667–67679, 2021, doi: 10.1109/ACCESS.2021.3077591.
[12] F. Jalal and F. Nasir, β€œUnderwater navigation, localization and path planning for autonomous vehicles: a review,” in 2021
International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Jan. 2021, pp. 817–828, doi:
10.1109/IBCAST51254.2021.9393315.
[13] A. Singandhupe and H. M. La, "A review of SLAM techniques and security in autonomous driving," in 2019 Third IEEE International Conference on Robotic Computing (IRC), Feb. 2019, pp. 602–607, doi: 10.1109/IRC.2019.00122.
[14] Y. Liu and J. Miura, β€œRDS-SLAM: real-time dynamic SLAM using semantic segmentation methods,” IEEE Access, vol. 9,
pp. 23772–23785, 2021, doi: 10.1109/ACCESS.2021.3050617.
[15] Y. Wang, Y. Lou, Y. Zhang, W. Song, F. Huang, and Z. Tu, β€œA robust framework for simultaneous localization and mapping with
multiple non-repetitive scanning lidars,” Remote Sensing, vol. 13, no. 10, May 2021, doi: 10.3390/rs13102015.
[16] H. Hatimi, M. Fakir, M. Chabi, and M. Najimi, β€œNew approach for detecting and tracking a moving object,” International Journal
of Electrical and Computer Engineering (IJECE), vol. 8, no. 5, pp. 3296–3303, Oct. 2018, doi: 10.11591/ijece.v8i5.pp3296-3303.
[17] C. G. Pachon-Suescun, C. J. Enciso-Aragon, and R. Jimenez-Moreno, β€œRobotic navigation algorithm with machine vision,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 2, pp. 1308–1316, Apr. 2020, doi:
10.11591/ijece.v10i2.pp1308-1316.
[18] A. Abozeid, H. Farouk, and S. Mashali, β€œDepth-DensePose: an efficient densely connected deep learning model for camera-based
localization,” International Journal of Electrical and Computer Engineering (IJECE), vol. 12, no. 3, pp. 2792–2801, Jun. 2022,
doi: 10.11591/ijece.v12i3.pp2792-2801.
[19] J. K. Suhr, J. Jang, D. Min, and H. G. Jung, β€œSensor fusion-based low-cost vehicle localization system for complex urban
environments,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 5, pp. 1078–1086, May 2017, doi:
10.1109/TITS.2016.2595618.
[20] C. Li, B. Dai, and T. Wu, β€œVision-based precision vehicle localization in urban environments,” in 2013 Chinese Automation
Congress, Nov. 2013, pp. 599–604, doi: 10.1109/CAC.2013.6775806.
[21] R. W. Wolcott and R. M. Eustice, β€œRobust LIDAR localization using multiresolution Gaussian mixture maps for autonomous
driving,” The International Journal of Robotics Research, vol. 36, no. 3, pp. 292–319, Mar. 2017, doi:
10.1177/0278364917696568.
[22] A. Y. Hata and D. F. Wolf, β€œFeature detection for vehicle localization in urban environments using a multilayer LIDAR,” IEEE
Transactions on Intelligent Transportation Systems, vol. 17, no. 2, pp. 420–429, Feb. 2016, doi: 10.1109/TITS.2015.2477817.
[23] Q. Li, J. P. Queralta, T. N. Gia, Z. Zou, and T. Westerlund, β€œMulti-sensor fusion for navigation and mapping in autonomous
vehicles: accurate localization in urban environments,” Unmanned Systems, vol. 8, no. 3, pp. 229–237, Jul. 2020, doi:
10.1142/S2301385020500168.
[24] X. Li, S. Du, G. Li, and H. Li, β€œIntegrate point-cloud segmentation with 3D LiDAR scan-matching for mobile robot localization
and mapping,” Sensors, vol. 20, no. 1, Dec. 2019, doi: 10.3390/s20010237.
[25] A. S. Aguiar, F. N. dos Santos, J. B. Cunha, H. Sobreira, and A. J. Sousa, β€œLocalization and mapping for robots in agriculture and
forestry: a survey,” Robotics, vol. 9, no. 4, Nov. 2020, doi: 10.3390/robotics9040097.
[26] C. Debeunne and D. Vivet, β€œA review of visual-LiDAR fusion based simultaneous localization and mapping,” Sensors, vol. 20,
no. 7, Apr. 2020, doi: 10.3390/s20072068.
[27] S. Rabiee and J. Biswas, "IV-SLAM: introspective vision for simultaneous localization and mapping," arXiv:2008.02760 [cs], Nov. 2020, Accessed: Nov. 12, 2021. [Online]. Available: http://arxiv.org/abs/2008.02760
[28] Q. Zou, Q. Sun, L. Chen, B. Nie, and Q. Li, β€œA comparative analysis of LiDAR SLAM-based indoor navigation for autonomous
vehicles,” IEEE Transactions on Intelligent Transportation Systems, pp. 1–15, 2021, doi: 10.1109/TITS.2021.3063477.
[29] J. Zhang and S. Singh, β€œLow-drift and real-time lidar odometry and mapping,” Autonomous Robots, vol. 41, no. 2, pp. 401–416,
Feb. 2017, doi: 10.1007/s10514-016-9548-2.
[30] D. Vivet, β€œExtracting proprioceptive information by analyzing rotating range sensors induced distortion,” in 2019 27th European
Signal Processing Conference (EUSIPCO), Sep. 2019, pp. 1–5, doi: 10.23919/EUSIPCO.2019.8903009.
[31] Y. Seo and C.-C. Chou, β€œA tight coupling of vision-lidar measurements for an effective odometry,” in 2019 IEEE Intelligent
Vehicles Symposium (IV), Jun. 2019, pp. 1118–1123, doi: 10.1109/IVS.2019.8814164.
[32] G. Jiang, L. Yin, S. Jin, C. Tian, X. Ma, and Y. Ou, β€œA simultaneous localization and mapping (SLAM) framework for 2.5D map
building based on low-cost LiDAR and vision fusion,” Applied Sciences, vol. 9, no. 10, May 2019, doi: 10.3390/app9102105.
BIOGRAPHIES OF AUTHORS
El Farnane Abdelhafid received his Master's degree in electrical engineering from the Higher Normal School of Technical Studies (ENSET) at Mohammed V University, Rabat, Morocco, in 2019. Currently, he is pursuing his Ph.D. degree at the Faculty of Sciences and Technologies at Hassan First University, Settat, Morocco. His research interests include embedded systems, self-driving cars, and robot control. He can be contacted at email: a.elfarnane@uhp.ac.ma.
Youssefi My Abdelkader received his engineering degree in telecommunications from the National Institute of Telecommunications (INPT), Rabat, Morocco, in 2003, and his Ph.D. degree from the Mohammadia School of Engineers (EMI) in 2015. He is now a teacher at Hassan First University, Settat, Morocco. His research interests include embedded systems, self-driving cars, massive MIMO, wireless communications, and the internet of things. Prof. Youssefi has published more than 20 papers in peer-reviewed journals and refereed conference proceedings. He can be contacted at email: ab.youssefi@gmail.com.
Mouhsen Ahmed received his Ph.D. degree in electronics from the University of Bordeaux, France, in 1992, and is currently a Professor in the Electrical Engineering Department, Faculty of Sciences and Technologies, Hassan First University, Settat, Morocco. His research interests focus on embedded systems, wireless communications, and information technology. Prof. Mouhsen has published more than 50 papers in peer-reviewed journals and refereed conference proceedings. He can be contacted at email: mouhsen.ahmed@gmail.com.
Dakir Rachid was born on January 6, 1982. He received his Ph.D. in networks and telecommunications from the FST of Hassan First University, Morocco. He is currently a Professor at Ibn Zohr University, Polydisciplinary Faculty of Ouarzazate, Morocco, and is involved in prototyping active and passive microwave electronic circuits, systems, networks, and telecommunications. He can be contacted at email: dakir_fsts@hotmail.com.
El Ihyaoui Abdelilah is an ICT engineer who graduated in 2013 from the National Institute of Telecommunications (INPT), Rabat, Morocco. He is a Ph.D. student at the Faculty of Sciences and Technologies at Hassan First University, Settat, Morocco. His research interests are computer science, software architecture, and computer security. He can be contacted at email: abdelilah.elihyawi@gmail.com.
young call girls in Rajiv ChowkπŸ” 9953056974 πŸ” Delhi escort Serviceyoung call girls in Rajiv ChowkπŸ” 9953056974 πŸ” Delhi escort Service
young call girls in Rajiv ChowkπŸ” 9953056974 πŸ” Delhi escort Service
Β 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
Β 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Β 
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
Β 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
Β 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Β 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
Β 

Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars

  • 1. International Journal of Electrical and Computer Engineering (IJECE) Vol. 12, No. 6, December 2022, pp. 6284~6292 ISSN: 2088-8708, DOI: 10.11591/ijece.v12i6.pp6284-6292  6284 Journal homepage: http://ijece.iaescore.com Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars El Farnane Abdelhafid, Youssefi My Abdelkader, Mouhsen Ahmed, Dakir Rachid, El Ihyaoui Abdelilah Engineering Laboratory, Industrial Management and Innovation, Faculty of Sciences and Technics, Hassan First University of Settat, Settat, Morocco Article Info ABSTRACT Article history: Received Jul 6, 2021 Revised Jun 10, 2022 Accepted Jul 5, 2022 In recent years, there has been a strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While global navigation satellite system (GNSS) can be used to locate vehicles, it has some limitations, such as satellite signal absence (tunnels and caves), which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) are an excellent solution for identifying a vehicle’s position while at the same time constructing a representation of the environment. SLAM-based visual and light detection and ranging (LIDAR) refer to using cameras and LIDAR as source of external information. This paper presents an implementation of SLAM algorithm for building a map of environment and obtaining car’s trajectory using LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches has also been provided and discussed. Simulation results referred to LIDAR scans indicate that SLAM is convenient and helpful in localization and mapping. Keywords: Camera Light detection and ranging Localization Mapping Self-driving cars Simultaneous localization and mapping This is an open access article under the CC BY-SA license. Corresponding Author: El Farnane Abdelhafid Engineering Laboratory, Industrial Management and Innovation, Faculty of Sciences and Technologies, Hassan First University Settat City, Morocco Email: a.elfarnane@uhp.ac.ma 1. INTRODUCTION Self-driving cars are regarded as the next big thing, benefiting from intelligent transportation systems, smart cities, internet of things (IoT), and new technologies such as machine learning. At present, automotive industry has contributed towards innovation and economic growth. Navigation for autonomous driving vehicles is a very active research area [1], [2]. In order to ensure efficiency and safety of traffic systems that rely on driving strategies [3], researchers around the world are concentrating their efforts on advancement of this subject [4]. In this context, defense advanced research projects agency (DARPA) has been regularly organizing a grand challenge of autonomous [5]. One of the most important criteria for autonomous navigation is an accurate localization of the car itself. Global navigation satellite system (GNSS) is the most used localization system, providing absolute positioning with high accuracy [6]. However, in special conditions such as multi-path effect and latency, restrict its use [7]. Only by representing environment using different kinds of sensors would robots be able to navigate in these conditions. The process of representing environment or constructing a map and estimating position of a car simultaneously called simultaneous localization and mapping (SLAM) [8]. 
SLAM is a solution for several applications, including self-driving cars [9], mobile robots [10], unmanned aerial vehicles (UAV)
[11], and autonomous underwater vehicles [12]. Many SLAM techniques for self-driving cars are discussed in the literature [13]. Liu and Miura [14] presented an ORB-SLAM-based visual dynamic SLAM algorithm for real-time tracking and mapping; in dynamic indoor environments, their results show that the algorithm is efficient and accurate. Wang et al. [15] proposed a robust multi-LIDAR framework for self-localization and mapping; according to the authors, the suggested solution produced better results than single-LIDAR approaches. In this paper, after surveying and discussing camera- and light detection and ranging (LIDAR)-based SLAM approaches, we focus on SLAM using LIDAR sensors in order to create a map of an environment and locate the pose of a self-driving car.

The remainder of this paper is structured as follows: section 2 discusses sensors for perception and localization, and presents LIDAR- and camera-based localization techniques. Section 3 shows the results obtained by implementing a SLAM algorithm. Finally, section 4 concludes by discussing future orientations and remaining challenges.

2. METHOD
2.1. Sensor-based localization
Localization and mapping are the most important requirements for self-driving cars. The car needs to know where it is, as well as its environment, at all times and in any conditions. It must be able to localize itself and generate a map of its surroundings called a "local map". A map obtained by the global positioning system (GPS) is insufficient because the environment is extremely dynamic. As a result, creating a local map and integrating it with the global map is required for more accurate navigation. Several sensors used for localization are presented below, as well as a synthesis of camera- and LIDAR-based techniques.

2.1.1. Sensors of perception
In self-driving cars, perception frameworks detect and interpret the surrounding environment using different sorts of sensors. Sensors are broadly classified into two types, depending on what property they record: they are exteroceptive if they record an environmental property, and proprioceptive if they record a property of the ego vehicle. The classification and characteristics of the sensors used in perception are shown in Table 1.

Table 1. Sensors of perception
− Camera (exteroceptive): essential for correctly perceiving the environment. Characteristics: resolution, field of view, dynamic range.
− LIDAR (exteroceptive): detailed 3D scene geometry. Characteristics: number of beams, points per second, rotation rate, field of view.
− Radar (exteroceptive): object detection and relative speed estimation. Characteristics: range, field of view, accuracy.
− Ultrasonic (exteroceptive): short-range, all-weather distance measurement, unaffected by lighting. Characteristics: range, field of view, cost.
− GNSS/IMU (proprioceptive): measures ego-vehicle states.
− Odometry (proprioceptive): tracks the wheels and calculates overall speed and orientation.

2.1.2. Camera-based localization
Several localization techniques presented in the literature use only cameras [16], [17] or integrate cameras with other data sources [18] to improve performance. Suhr et al. [19] proposed, in urban environments, using GPS/IMU for global positioning and a camera for recognizing road markers as well as lane markers to find both lateral and longitudinal positioning.
The authors noted that this technique gives, in one of five experiments, lateral errors of 0.49 m, longitudinal errors of 0.95 m, and a Euclidean error of 1.18 m on average. Li et al. [20] suggested, also in urban environments, a vision-based localization approach using only cameras. Their hybrid method combines a topological map to estimate a global position with a metric map to find a fine localization. The authors noted that this low-cost approach gives mean positioning errors of 0.75 m.

2.1.3. LIDAR-based localization
LIDAR is another important sensor able to improve localization performance for self-driving cars. It scans the surrounding environment and generates multiple points to build a 3D map using light detection and ranging. LIDAR is known to offer precise and robust measurements of the environment. Because it is expensive compared to other sensors, LIDAR is often employed only to build the map, while a camera is utilized to localize the vehicle.
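To illustrate how LIDAR scans can provide relative localization, the short MATLAB sketch below aligns two successive scans with the Navigation Toolbox function matchScans (normal distributions transform scan matching). This is a minimal sketch, not a routine prescribed by the paper: the corridor-like ranges and the applied offset are made-up illustrative values.

% Minimal sketch: estimating relative vehicle motion from two LIDAR scans.
% Assumes the MATLAB Navigation Toolbox; the scan data below is synthetic.
angles = linspace(-pi/2, pi/2, 181);          % beam angles (rad)
ranges = 5 + 0.5*cos(4*angles);               % made-up corridor-like ranges (m)
refScan = lidarScan(ranges, angles);          % reference scan at pose A

offset = [0.40 0.05 deg2rad(3)];              % assumed motion [dx dy dtheta]
currScan = transformScan(refScan, offset);    % simulate the scan at pose B

% matchScans estimates the pose that aligns currScan back onto refScan
% (i.e., the inverse of the offset applied above, up to matching error).
estPose = matchScans(currScan, refScan);
fprintf('Estimated alignment: [%.2f m, %.2f m, %.1f deg]\n', ...
        estPose(1), estPose(2), rad2deg(estPose(3)));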
Wolcott and Eustice [21] suggested a generic probabilistic localization method based on a 3D LIDAR scanner. This technique models the world as a fast and exact multiresolution map of a mixture of Gaussians. Gaussian mixture maps for localization were evaluated on two autonomous vehicles, in adverse weather conditions, and resulted in longitudinal and lateral RMS errors below 0.1 m and 0.13 m, respectively. Hata and Wolf [22] proposed, in urban environments, a method based on a multilayer LIDAR to estimate the vehicle localization using a map of the environment created by the LIDAR sensor. The authors noted that this approach gives longitudinal, lateral, and angular errors of 0.1395 m, 0.204 m, and 0.019 rad, respectively.

2.1.4. Summary
Camera-based localization solutions provide excellent performance. Their biggest drawback is that errors arise in low-textured environments and from sensitivity to light changes; in addition, image processing requires a high level of computational complexity. LIDAR, on the other hand, is known for its ease of use and accuracy. To take advantage of the benefits of each sensor, several techniques use sensor fusion, as we will see in the next section. Table 2 summarizes camera- and LIDAR-based localization.

Table 2. Summary of camera and LIDAR based localization
− Camera-based localization: [19] longitudinal error 0.95 m, lateral error 0.49 m, Euclidean error 1.18 m; [20] positioning error of 0.75 m.
− LIDAR-based localization: [21] longitudinal error 0.1 m, lateral error 0.13 m; [22] longitudinal error 0.1395 m, lateral error 0.204 m, angular error 0.019 rad.

2.2. Vision and LIDAR based SLAM
2.2.1. Principle of SLAM
SLAM is a technique for simultaneously creating an online map and locating a car on it. It is common practice in self-driving cars and performs well in both indoor and outdoor environments [23]. It nevertheless remains less efficient than localization on a prebuilt map, owing to demanding environmental conditions such as high speeds and large numbers of dynamic vehicles [24].

2.2.2. Probabilistic modeling of the SLAM problem
In order to model the SLAM problem, we consider a vehicle moving through an unknown environment, as shown in Figure 1. The idea is to model the SLAM problem with a graph that includes the positions xt of the robot as well as the map landmarks mi. Lines connecting positions represent the trajectory of the robot, and arrows represent the distances to landmarks. The position data is obtained by proprioceptive sensors mounted on the robot, while landmarks are extracted from the environment using exteroceptive sensors such as LIDAR.

Figure 1. Basic idea of SLAM
At time t we define the following quantities: xt is the state vector that represents the vehicle; ut is the control vector applied at t − 1 to drive the vehicle to xt; mi is the vector representing the ith landmark; zt,i is the observation of the ith landmark. Assuming that the duration between two successive positions is constant and equal to T, the instant t becomes kT. To simplify the calculations, we write the index kT as k. Then, from time 0 to kT, the following sets are defined:
− X0:k = {x0, x1, …, xk}: set of vehicle locations;
− U0:k = {u0, u1, …, uk}: set of control inputs;
− Z0:k = {z0, z1, …, zk}: set of observations;
− M = {m1, m2, …, mk}: set of landmarks forming the map.

a) Modeling of localization
The localization problem consists of calculating a vehicle's position in a given environment using environmental data such as the observation and command history [25]. This problem is represented by the estimation of the probability distribution

P(xk | Z0:k, U0:k, M)

This formulation, which represents global localization, uses all of the data from the history of observations and commands. It gives an accurate estimation of the position, but it substantially complicates the calculations. To reduce the computational complexity, we can estimate the position at time kT using only data from time (k − 1)T. This is local localization, represented by

P(xk | zk−1, uk−1, xk−1, M)

Implementing this model greatly reduces the algorithm's complexity, but accuracy degrades as a result.

b) Modeling of mapping
Mapping is the process of building an environment map using sensor data and the history of actual robot positions. Mathematically, the mapping problem can be modeled as

P(mk | Z0:k, X0:k)

To create a good map, the robot's positions must be exact and accurate.

c) Modeling of SLAM
Probabilistic modeling of SLAM requires determining the following probability distribution at each time step:

P(xk, M | Z0:k, U0:k, x0)

This calculation is divided into two parts, (1) and (2), describing the position update and the observation update, respectively:

P(xk, M | Z0:k−1, U0:k, x0) = ∫ P(xk | xk−1, uk) P(xk−1, M | Z0:k−1, U0:k−1, x0) dxk−1   (1)

P(xk, M | Z0:k, U0:k, x0) = P(zk | xk, M) P(xk, M | Z0:k−1, U0:k, x0) / P(zk | Z0:k−1, U0:k)   (2)

where P(xk | xk−1, uk) represents the vehicle's motion model (transition model), P(zk | xk, M) describes the observation model, and η = 1 / P(zk | Z0:k−1, U0:k) is a normalization constant that depends on the transition and observation models. Finally, we can model the SLAM problem by (3):

P(xk, M | Z0:k, U0:k, x0) = η P(zk | xk, M) ∫ P(xk | xk−1, uk) P(xk−1, M | Z0:k−1, U0:k−1, x0) dxk−1   (3)

2.2.3. Resolution of the SLAM problem
The SLAM problem is regarded as a critical component of self-driving cars. Many problems, however, continue to prevent the use of SLAM algorithms in vehicles that should be able to travel hundreds of kilometers in a variety of conditions [26]. The main methods for solving this problem are based on the process shown in Figure 2.
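To make the recursion in (1) and (2) concrete, the following MATLAB sketch runs one prediction/correction step of a histogram (grid) Bayes filter for a robot on a small 1-D grid. It is a minimal sketch under simplifying assumptions: the map M is held fixed and known (only the pose is tracked), and the map layout, motion noise, and sensor rates are made-up illustrative values.

% Minimal sketch of the Bayesian recursion in Eqs. (1)-(2): a 1-D
% histogram filter over N grid cells, with the map M assumed known.
N = 10;
belief = ones(1, N) / N;              % prior belief P(x_{k-1}): uniform

% Motion model P(xk | xk-1, uk): command "move one cell right", with noise.
pMove = 0.8; pStay = 0.2;
predicted = zeros(1, N);
for i = 1:N                           % Eq. (1): sum over all xk-1
    predicted(i) = predicted(i) + pStay * belief(i);
    j = mod(i, N) + 1;                % wrap around the grid
    predicted(j) = predicted(j) + pMove * belief(i);
end

% Observation model P(zk | xk, M): cells containing a landmark (made-up map).
landmark = logical([0 1 0 0 1 0 0 1 0 0]);
z = true;                             % sensor reports "landmark seen"
pHit = 0.9; pFalse = 0.1;             % detection / false-alarm rates
if z
    likelihood = pHit * landmark + pFalse * (~landmark);
else
    likelihood = (1 - pHit) * landmark + (1 - pFalse) * (~landmark);
end

% Eq. (2): multiply by the likelihood and renormalize (the constant eta).
posterior = likelihood .* predicted;
posterior = posterior / sum(posterior);
disp(posterior)                       % updated belief P(xk | zk, uk, M)

Repeating these two steps at every time instant k is exactly the recursion summarized by (3); practical SLAM solvers replace the grid with Gaussian (EKF), particle, or graph representations.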
Figure 2. Block diagram of the SLAM process

2.2.4. Visual SLAM
Visual simultaneous localization and mapping (V-SLAM) is the problem of establishing the location of a vehicle in an area while also building a representation of the explored region using images as the only source of external knowledge. V-SLAM extracts features from the observed images for localization in dynamic and complex environments, and owing to the computer-vision techniques employed, it is an active area of research. Visual SLAM (or vision-based SLAM) refers to the use of cameras as the only exteroceptive sensor; when a visual SLAM system is complemented with data from proprioceptive sensors, it is called visual-inertial SLAM. The localization process consists of two stages: global localization using a topological map and local localization using a metric map. Li et al. [20] presented an approach for vision-based localization using a hybrid environment model, which combines a topological map with a metric map to represent the environment. Nodes of the topological map are described by a holistic image descriptor, while interest-point descriptors define natural landmarks on the metric map. This technique, tested in an urban environment, contributes to the development of a precise (mean error of 74.54 cm and error deviation of 91.43 cm) and effective localization method.

2.2.5. Introspective vision for SLAM (IV-SLAM)
V-SLAM algorithms assume that feature-extraction errors are independent and identically distributed, which is not always true. This hypothesis lowers the tracking quality of V-SLAM algorithms, especially when the captured images include difficult conditions. To address such challenges, the authors in [27] presented introspective vision for SLAM (IV-SLAM), in which the noise process of visual-feature errors is explicitly modeled as context dependent. In comparison to V-SLAM, results show that IV-SLAM is able to accurately predict sources of error in input images and decreases tracking error.

2.2.6. LIDAR based SLAM
Because of its simplicity and precision, 3D mapping with LIDAR is commonly used to solve the SLAM problem [28]. LIDAR can, in fact, achieve low-drift motion estimation while maintaining a manageable computational complexity [29]. The study in [30] claims that distortions in the collected data are caused by moving at high speed, an effect that most studies overlook even though it carries information about the vehicle's displacement; the idea is to perform velocimetry by analyzing the measurement distortion.

2.2.7. Visual-LIDAR fusion-based SLAM
To further improve the robustness of SLAM with respect to aggressive movements and the absence of visual features, researchers are focusing on vision-LIDAR approaches, that is, techniques that combine LIDAR and stereo cameras. The approach proposed by Seo and Chou [31] creates a LIDAR map and a visual map with map points in different modalities, then uses them together to optimize odometry residuals such that the LIDAR map and the operating environment have global coherence. The work proposed in [32] combined a low-cost LIDAR sensor with a vision sensor: a specific cost function that considers both scan and feature constraints is proposed to perform graph optimization, and, in order to speed up loop detection, a specific model with visual features is used to create a 2.5D map. A minimal pose-graph sketch of this kind of graph optimization is given below.
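As a concrete illustration of the graph-optimization step mentioned above (and of the pose graphs used in section 3), the following MATLAB sketch builds a tiny 2-D pose graph with the Navigation Toolbox poseGraph object and corrects accumulated drift with one loop closure. The odometry values and information matrix are made-up illustrative numbers, not those of the cited works.

% Minimal pose-graph sketch: four odometry edges around a square,
% then one loop closure back to the first node. Values are illustrative.
pg = poseGraph;                       % empty 2-D pose graph
info = [1 0 0 1 0 1];                 % compact information matrix [xx xy xt yy yt tt]

% Odometry edges (relative pose [dx dy dtheta]) with a small heading drift.
addRelativePose(pg, [2 0 pi/2 + 0.05], info);
addRelativePose(pg, [2 0 pi/2 + 0.05], info);
addRelativePose(pg, [2 0 pi/2 + 0.05], info);
addRelativePose(pg, [2 0 pi/2 + 0.05], info);

% Loop closure: node 5 is re-observed at the starting pose (node 1).
addRelativePose(pg, [0 0 0], info, 5, 1);

pgOpt = optimizePoseGraph(pg);        % redistributes the drift over the graph
disp(nodeEstimates(pgOpt))            % corrected poses [x y theta] per node

Without the loop-closure edge the fourth corner drifts away from the origin; with it, the optimizer spreads the 0.05 rad error over all edges, which is precisely the trajectory correction visible in the results of section 3.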
3. RESULTS AND DISCUSSION
3.1. Configuration
This section shows results produced by implementing a SLAM algorithm in order to build a map of the environment and a robot trajectory using LIDAR scans. MATLAB was used to generate these simulation results, using a data set of laser scans acquired from a real mobile robot that is available in MATLAB.

3.2. Results
The algorithm begins to aggregate and connect LIDAR scans as the robot moves from a start point to an arrival location. Figure 3 shows the pose graph linking the first 15 scans, and Figure 4 shows the final built map (magenta) and the trajectory of the robot (green).

Figure 3. Pose graph for the initial 15 scans
Figure 4. Final built map and robot's trajectory

By comparing collected scans, the robot identifies previously visited positions and may create loop closures along its itinerary. The loop-closure data is used to update the map of the environment and correct the trajectory of the robot. The identification of the first loop closure can be observed in Figure 5 (red). Figure 6 shows two loop closures that were detected automatically during the robot's displacement. After constructing and optimizing the pose graph, the algorithm uses this data to produce an occupancy map that describes the area. The results obtained are presented in Figures 7 and 8.

Figure 5. Detection of the first loop closure
Figure 6. Two loop closures in the final built map
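The paper does not reproduce its simulation code, but a pipeline matching this description is typically written with the lidarSLAM object from MATLAB's Navigation Toolbox. The sketch below assumes that toolbox and the offlineSlamData.mat example data set (assumed to load a cell array named scans); the resolution, range, and loop-closure thresholds are illustrative settings, not necessarily the exact values used here.

% Minimal sketch of the described pipeline: build a pose graph from
% LIDAR scans, detect loop closures, and derive an occupancy map.
load('offlineSlamData.mat');              % assumed: provides cell array 'scans'

maxLidarRange = 8;                        % meters (illustrative)
mapResolution = 20;                       % cells per meter (illustrative)
slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold = 210;       % match-score threshold (illustrative)
slamAlg.LoopClosureSearchRadius = 8;      % loop search radius, m (illustrative)

for i = 1:numel(scans)                    % incremental scan matching
    addScan(slamAlg, scans{i});           % extends the pose graph; closes loops
end
show(slamAlg);                            % scans and corrected trajectory

% Occupancy map from the optimized scans and poses (cf. Figures 7 and 8).
[optScans, optPoses] = scansAndPoses(slamAlg);
map = buildMap(optScans, optPoses, mapResolution, maxLidarRange);
figure; show(map); title('Occupancy map');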
3.3. Summary
In this section we presented the results of a MATLAB simulation of the SLAM algorithm. We started with the pose graph for the first 15 scans, and then we showed the first loop closure. All of the scans are combined to create the final map and robot trajectory shown in Figure 6. Using the optimized scans and poses, we also built an occupancy map.

Figure 7. Occupancy map
Figure 8. Loop closure in the occupancy map

4. CONCLUSION
In this paper, we introduced the concept of localization, which is an essential component of self-driving cars. We illustrated state-of-the-art techniques for locating vehicles using cameras and LIDAR, paying particular attention to their accuracy. We then discussed the SLAM method, which can give the position of a car while simultaneously building a map of the environment; approaches such as visual SLAM, IV-SLAM, LIDAR SLAM, and visual-LIDAR fusion SLAM were discussed. In the last section, we illustrated the accuracy and robustness of SLAM solutions by implementing a LIDAR SLAM algorithm in MATLAB and showing the results. In the future, we will try a hybridized implementation of the SLAM algorithm using camera and LIDAR fusion, and we will study several approaches to improving the algorithm's performance.

REFERENCES
[1] J. Lopez, P. Sanchez-Vilarino, R. Sanz, and E. Paz, "Efficient local navigation approach for autonomous driving vehicles," IEEE Access, vol. 9, pp. 79776-79792, 2021, doi: 10.1109/ACCESS.2021.3084807.
[2] M. Raj and V. G. Narendra, "Deep neural network approach for navigation of autonomous vehicles," in 2021 6th International Conference for Convergence in Technology (I2CT), Apr. 2021, pp. 1-4, doi: 10.1109/I2CT51068.2021.9418189.
[3] C. Zhao, L. Li, X. Pei, Z. Li, F.-Y. Wang, and X. Wu, "A comparative study of state-of-the-art driving strategies for autonomous vehicles," Accident Analysis and Prevention, vol. 150, Feb. 2021, doi: 10.1016/j.aap.2020.105937.
[4] E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, "A survey of autonomous driving: common practices and emerging technologies," IEEE Access, vol. 8, pp. 58443-58469, 2020, doi: 10.1109/ACCESS.2020.2983149.
[5] M. Buehler, K. Iagnemma, and S. Singh, Eds., The DARPA urban challenge, vol. 56. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, doi: 10.1007/978-3-642-03991-1.
[6] S. Miura, L.-T. Hsu, F. Chen, and S. Kamijo, "GPS error correction with pseudorange evaluation using three-dimensional maps," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 6, pp. 3104-3115, Dec. 2015, doi: 10.1109/TITS.2015.2432122.
[7] Y. Lin et al., "Autonomous aerial navigation using monocular visual-inertial fusion," Journal of Field Robotics, vol. 35, no. 1, pp. 23-51, Jan. 2018, doi: 10.1002/rob.21732.
[8] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I," IEEE Robotics and Automation Magazine, vol. 13, no. 2, pp. 99-110, Jun. 2006, doi: 10.1109/MRA.2006.1638022.
[9] G. Bresson, Z. Alsayed, L. Yu, and S. Glaser, "Simultaneous localization and mapping: a survey of current trends in autonomous driving," IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194-220, Sep. 2017, doi: 10.1109/TIV.2017.2749181.
[10] Y. Jang, C. Oh, Y. Lee, and H. J. Kim, "Multirobot collaborative monocular SLAM utilizing rendezvous," IEEE Transactions on Robotics, vol. 37, no. 5, pp. 1469-1486, Oct. 2021, doi: 10.1109/TRO.2021.3058502.
[11] Y. Luo, Y. Li, Z. Li, and F. Shuang, "MS-SLAM: motion state decision of keyframes for UAV-based vision localization," IEEE Access, vol. 9, pp. 67667-67679, 2021, doi: 10.1109/ACCESS.2021.3077591.
[12] F. Jalal and F. Nasir, "Underwater navigation, localization and path planning for autonomous vehicles: a review," in 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), Jan. 2021, pp. 817-828, doi: 10.1109/IBCAST51254.2021.9393315.
[13] A. Singandhupe and H. M. La, "A review of SLAM techniques and security in autonomous driving," in 2019 Third IEEE International Conference on Robotic Computing (IRC), Feb. 2019, pp. 602-607, doi: 10.1109/IRC.2019.00122.
[14] Y. Liu and J. Miura, "RDS-SLAM: real-time dynamic SLAM using semantic segmentation methods," IEEE Access, vol. 9, pp. 23772-23785, 2021, doi: 10.1109/ACCESS.2021.3050617.
[15] Y. Wang, Y. Lou, Y. Zhang, W. Song, F. Huang, and Z. Tu, "A robust framework for simultaneous localization and mapping with multiple non-repetitive scanning lidars," Remote Sensing, vol. 13, no. 10, May 2021, doi: 10.3390/rs13102015.
[16] H. Hatimi, M. Fakir, M. Chabi, and M. Najimi, "New approach for detecting and tracking a moving object," International Journal of Electrical and Computer Engineering (IJECE), vol. 8, no. 5, pp. 3296-3303, Oct. 2018, doi: 10.11591/ijece.v8i5.pp3296-3303.
[17] C. G. Pachon-Suescun, C. J. Enciso-Aragon, and R. Jimenez-Moreno, "Robotic navigation algorithm with machine vision," International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 2, pp. 1308-1316, Apr. 2020, doi: 10.11591/ijece.v10i2.pp1308-1316.
[18] A. Abozeid, H. Farouk, and S. Mashali, "Depth-DensePose: an efficient densely connected deep learning model for camera-based localization," International Journal of Electrical and Computer Engineering (IJECE), vol. 12, no. 3, pp. 2792-2801, Jun. 2022, doi: 10.11591/ijece.v12i3.pp2792-2801.
[19] J. K. Suhr, J. Jang, D. Min, and H. G. Jung, "Sensor fusion-based low-cost vehicle localization system for complex urban environments," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 5, pp. 1078-1086, May 2017, doi: 10.1109/TITS.2016.2595618.
[20] C. Li, B. Dai, and T. Wu, "Vision-based precision vehicle localization in urban environments," in 2013 Chinese Automation Congress, Nov. 2013, pp. 599-604, doi: 10.1109/CAC.2013.6775806.
[21] R. W. Wolcott and R. M. Eustice, "Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving," The International Journal of Robotics Research, vol. 36, no. 3, pp. 292-319, Mar. 2017, doi: 10.1177/0278364917696568.
[22] A. Y. Hata and D. F. Wolf, "Feature detection for vehicle localization in urban environments using a multilayer LIDAR," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 2, pp. 420-429, Feb. 2016, doi: 10.1109/TITS.2015.2477817.
[23] Q. Li, J. P. Queralta, T. N. Gia, Z. Zou, and T. Westerlund, "Multi-sensor fusion for navigation and mapping in autonomous vehicles: accurate localization in urban environments," Unmanned Systems, vol. 8, no. 3, pp. 229-237, Jul. 2020, doi: 10.1142/S2301385020500168.
[24] X. Li, S. Du, G. Li, and H. Li, "Integrate point-cloud segmentation with 3D LiDAR scan-matching for mobile robot localization and mapping," Sensors, vol. 20, no. 1, Dec. 2019, doi: 10.3390/s20010237.
[25] A. S. Aguiar, F. N. dos Santos, J. B. Cunha, H. Sobreira, and A. J. Sousa, "Localization and mapping for robots in agriculture and forestry: a survey," Robotics, vol. 9, no. 4, Nov. 2020, doi: 10.3390/robotics9040097.
[26] C. Debeunne and D. Vivet, "A review of visual-LiDAR fusion based simultaneous localization and mapping," Sensors, vol. 20, no. 7, Apr. 2020, doi: 10.3390/s20072068.
[27] S. Rabiee and J. Biswas, "IV-SLAM: introspective vision for simultaneous localization and mapping," arXiv:2008.02760 [cs], Nov. 2020, accessed: Nov. 12, 2021. [Online]. Available: http://arxiv.org/abs/2008.02760
[28] Q. Zou, Q. Sun, L. Chen, B. Nie, and Q. Li, "A comparative analysis of LiDAR SLAM-based indoor navigation for autonomous vehicles," IEEE Transactions on Intelligent Transportation Systems, pp. 1-15, 2021, doi: 10.1109/TITS.2021.3063477.
[29] J. Zhang and S. Singh, "Low-drift and real-time lidar odometry and mapping," Autonomous Robots, vol. 41, no. 2, pp. 401-416, Feb. 2017, doi: 10.1007/s10514-016-9548-2.
[30] D. Vivet, "Extracting proprioceptive information by analyzing rotating range sensors induced distortion," in 2019 27th European Signal Processing Conference (EUSIPCO), Sep. 2019, pp. 1-5, doi: 10.23919/EUSIPCO.2019.8903009.
[31] Y. Seo and C.-C. Chou, "A tight coupling of vision-lidar measurements for an effective odometry," in 2019 IEEE Intelligent Vehicles Symposium (IV), Jun. 2019, pp. 1118-1123, doi: 10.1109/IVS.2019.8814164.
[32] G. Jiang, L. Yin, S. Jin, C. Tian, X. Ma, and Y. Ou, "A simultaneous localization and mapping (SLAM) framework for 2.5D map building based on low-cost LiDAR and vision fusion," Applied Sciences, vol. 9, no. 10, May 2019, doi: 10.3390/app9102105.

BIOGRAPHIES OF AUTHORS

El Farnane Abdelhafid received his Master's degree in electrical engineering from the Higher Normal School of Technical Studies (ENSET) at Mohammed V University, Rabat, Morocco, in 2019. He is currently pursuing his Ph.D. degree at the Faculty of Sciences and Technologies, Hassan I University, Settat, Morocco. His research interests include embedded systems, self-driving cars, and robot control. He can be contacted at email: a.elfarnane@uhp.ac.ma.

Youssefi My Abdelkader received his engineering degree in telecommunications from the National Institute of Telecommunications (INPT), Rabat, Morocco, in 2003, and his Ph.D. degree from the Mohammadia School of Engineers (EMI) in 2015. He is now a teacher at Hassan I University, Settat, Morocco. His research interests include embedded systems, self-driving cars, massive MIMO, wireless communications, and the internet of things. Prof. Youssefi has published more than 20 papers in peer-reviewed journals and refereed conference proceedings. He can be contacted at email: ab.youssefi@gmail.com.
Mouhsen Ahmed received his Ph.D. degree in electronics from the University of Bordeaux, France, in 1992. He is currently a professor in the Electrical Engineering Department, Faculty of Sciences and Technologies, Hassan I University, Settat, Morocco. His research interests focus on embedded systems, wireless communications, and information technology. Prof. Mouhsen has published more than 50 papers in peer-reviewed journals and refereed conference proceedings. He can be contacted at email: mouhsen.ahmed@gmail.com.

Dakir Rachid was born on January 6, 1982. He received his Ph.D. in networks and telecommunications from the FST, Hassan I University, Morocco. He is currently a professor at Ibn Zohr University, FP of Ouarzazate, Morocco, where his work involves prototyping active and passive microwave electronic circuits, systems, networks, and telecommunications. He can be contacted at email: dakir_fsts@hotmail.com.

El Ihyaoui Abdelilah is an ICT engineer who graduated in 2013 from the National Institute of Telecommunications (INPT), Rabat, Morocco. He is a Ph.D. student at the Faculty of Sciences and Technologies, Hassan I University, Settat, Morocco. His research interests are computer science, software architecture, and computer security. He can be contacted at email: abdelilah.elihyawi@gmail.com.