ANNA UNIVERSITY
ROBUST MULTISENSOR FRAMEWORK FOR
MOBILE ROBOT NAVIGATION
IN GNSS-DENIED ENVIRONMENTS
J. Steffi Keran Rani
(Reg. No. 2015225022)
Under the Supervision of
Dr. M. Deivamani
ABSTRACT
 The proposed work presents an algorithm that detects and avoids both static and
dynamic obstacles lying in the path of the mobile robot, whether indoor or outdoor.
 The proposed system allows the obstacle-detection range to be adjusted on demand.
 The proposed system presents an RRT (Rapidly-exploring Random Trees)-based path
planner to find the shortest path to the goal.
 In this context, the presented work introduces an efficient, reliable and scalable
multi-sensor fusion system for autonomous mobile robot navigation in GNSS (Global
Navigation Satellite System)-denied environments.
4-Jun-21
J. STEFFI KERAN RANI 2015225022
LITERATURE SURVEY
LITERATURE SURVEY – PATH PLANNING
LITERATURE SURVEY – RRT ALGORITHM
 Inability to identify obstructions larger than half of the image size (e.g., walls
and doors).
 Poor angular resolution and specular reflections.
 Higher memory requirements and non-optimality.
 Jagged, non-continuous paths towards the goal.
 Increased computational load and runtime.
 Precise only for known or stable environments.
 Large update and iteration times.
 Reduced reliability over long periods and increased particle degradation.
PROBLEMS IDENTIFIED IN EXISTING SYSTEMS
PROBLEM STATEMENT
PHASE 1:
 The proposed work aims to overcome the landmark-ambiguity problems encountered during
robot navigation.
 The fused outcome of various sensory data ensures accurate self-initialization and
localization of the robot by reducing erroneous estimates.
PHASE 2:
 The objective of the work is to implement novel vision-based obstacle
detection in dynamic environments.
 The framework uses the resultant model from Phase 1 to accurately
integrate the obstacle locations into the environment model.
 The shortest, collision-free and smooth path is calculated using the Rapidly-exploring
Random Tree (RRT) algorithm.
 The overall architecture aims to optimize the multi-sensor system for
localization and navigation of robots.
PROPOSED SYSTEM
PHASE 2 – HIGH LEVEL ARCHITECTURE
 The proposed work presents a novel path-planning technique for mobile robot navigation in
environments with static and dynamic obstacles.
 The proposed system employs Rapidly-exploring Random Trees (RRT) as its basic algorithm.
 The system employs vision-based obstacle detection, followed by the computation of an optimal,
feasible and collision-free path to the destination.
 The building blocks of the Phase-2 project are:
1. Environment perception
2. Obstacle recognition
3. Environment modelling
4. Integration of the obstacles and the model
5. Optimized path planning
6. Trajectory smoothing and filtering
PROPOSED SYSTEM
DELIVERABLES
PHASE 1
 Sensor fusion of camera and odometry data
 Accurate pose estimation
 Accurate localization in known environments
 Landmark-based navigation
 Reduced error rate
 Performance analysis
PHASE 2
 Obstacle detection
 Collision avoidance
 Optimized path planning
 Trajectory filtering and smoothing
 Performance comparison with other state-of-the-art SLAM algorithms
 Metrics computation
 Hardware implementation of obstacle avoidance
DATASET
NAME: Cognitive navigation dataset
SCENARIO: Democritus University of Thrace
URL: http://robotics.pme.duth.gr/kostavelis/Dataset.html#10
ROBOT: MAGGIE (Mobile Autonomous riGGed Indoors Exploratory) robot
 Obstacle detection plays an important role for mobile
robots that operate in highly diverse environments.
 The proposed system employs vision-based obstacle
detection that can detect the presence of both static
and moving obstacles.
 Vision-based obstacle detection is superior to range-based
sensing, which suffers from specular reflections and poor
angular resolution.
 The proposed work involves detecting objects that differ in
appearance from the ground and identifying them as obstacles.
 Finally, if there are connected components in the resultant
image, they indicate the presence of an obstacle.
MODULE 1 : COMPUTER VISION-BASED OBSTACLE DETECTION
ALGORITHM
 This algorithm aims to detect static and dynamic obstacles in both open and
confined environments.
Algorithm: Computer Vision-based Obstacle Detection
Input : Camera image I, reference background image W
Output: State S of the obstacle flag
1 Read the images I and W
2 Resize the images to a standard size s
3 Convert I and W to grayscale
4 Rb ← removeBackground( I, W )
5 RbTh ← otsuThreshold( Rb )
6 RoI ← bwAreaOpen( RbTh )
7 Label ← bwLabel( RoI )
8 if Label ≥ 1 then
9 Set the flag F
10 return State S of the flag F
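A minimal NumPy sketch of the detection pipeline above; `otsu_threshold`, `label_components` and `detect_obstacle` are illustrative stand-ins for MATLAB's `graythresh`, `bwlabel` and `bwareaopen`, and the `min_area` value is an assumption, not a parameter from the proposed system.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the 8-bit threshold that maximizes
    between-class variance of the intensity histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_components(mask):
    """4-connected component labelling (stand-in for bwlabel)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    n = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        n += 1
        stack = [(i, j)]
        while stack:                      # iterative flood fill
            y, x = stack.pop()
            if labels[y, x]:
                continue
            labels[y, x] = n
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not labels[ny, nx]:
                    stack.append((ny, nx))
    return labels, n

def detect_obstacle(image, background, min_area=4):
    """Difference, Otsu threshold, small-region removal, labelling;
    the flag is set when any sufficiently large component remains."""
    diff = np.abs(image.astype(int) - background.astype(int)).astype(np.uint8)
    mask = diff > otsu_threshold(diff)
    labels, n = label_components(mask)
    areas = [(labels == k).sum() for k in range(1, n + 1)]
    return any(a >= min_area for a in areas)   # obstacle flag F
```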
MODULE 1 : COMPUTER VISION-BASED OBSTACLE DETECTION
EXPERIMENTAL RESULTS
MODULE 1 : COMPUTER VISION-BASED OBSTACLE DETECTION
[Result frames 1-7: outdoor scenario]
MODULE 1 : COMPUTER VISION-BASED OBSTACLE DETECTION
[Result frames 1-7: indoor scenario]
MODULE 1 : COMPUTER VISION-BASED OBSTACLE DETECTION
[Result frames 1-7: indoor scenario (continued)]
MODULE 2 – PATH PLANNING AND OBSTACLE AVOIDANCE
 The proposed system employs the Rapidly-exploring Random Tree (RRT) algorithm as the principal
technique behind the robot's path planning. The two-step path-planning procedure is summarized as follows.
• Step 1: Environment perception and modelling, usually using a grid map (with occupancy
probabilities).
• Step 2: A path-planning algorithm is employed to find the best path according to the cost function,
achieving both time efficiency and cost minimization.
 An RRT is iteratively expanded by applying control inputs that drive the system slightly toward random points,
as opposed to requiring point-to-point convergence, as in the probabilistic roadmap approach.
 RRT-based path planning has many advantages:
 Environmental coverage
 2D or 3D search space
 Obstacle avoidance
 Auto-connect to goal
 Path smoothing
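Step 1's grid map, and the collision queries Step 2 relies on, can be sketched as follows; the grid contents, occupancy threshold and sampling step are illustrative assumptions rather than values from the proposed system.

```python
import numpy as np

def segment_collides(grid, p0, p1, occ_thresh=0.5, step=0.25):
    """Check a straight segment p0->p1 ((x, y) points) against an occupancy
    grid by dense sampling; cells above occ_thresh count as obstacles."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(int(np.linalg.norm(p1 - p0) / step), 1)
    for t in np.linspace(0.0, 1.0, n + 1):
        x, y = p0 + t * (p1 - p0)
        i, j = int(y), int(x)          # row index = y, column index = x
        if not (0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]):
            return True                # leaving the map counts as a collision
        if grid[i, j] > occ_thresh:
            return True
    return False

# Occupancy probabilities: a 10x10 free map with one occupied wall column.
grid = np.zeros((10, 10))
grid[:, 5] = 0.9
```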
MODULE 2 : RAPIDLY EXPLORING RANDOM TREE (RRT) PATH PLANNER
RRT-BASED PATH PLANNING
[Flowchart stages: Path → Collision Detection → Collided Branch Deletion → Deformation → Resampling & Extension → Node Deletion → Smoothing]
RRT EXTENSION PROCEDURE
RRT-BASED PATH PLANNING
rrtPathPlanning.m
 RRT grows a randomized tree during search. It terminates once a state close to
the goal is expanded.
RRT* Planner
Algorithm: RRT Planner
Input : TreeMax, SeedsPerAxis, wallCount
Output: RRT graph Graph
1 Check the inputs
2 Initial plotting and environment setup
3 Continue the search while the number of steps is less than TreeMax
4 Generate a new point
5 Find the nearest neighbour and connect to it
6 Draw the output
RRT* Generation
Algorithm: RRT
Input : Node Start, K, Node Goal, System Sys, Environment Env, Δt
Output: RRT graph Graph
1 Graph.init(Start)
2 while Graph.size() is less than threshold K
3 Node rand = rand() // Random_State()
4 Node near = Graph.nearest(rand, Graph) // Nearest_neighbour()
5 try
6 Node new = Sys.propagate(near, rand) // New_State()
7 Graph.addNode(new)
8 Graph.addEdge(near, new)
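The growth loop above can be sketched in Python as follows; the 10×10 workspace bounds, step size, goal tolerance and seed are illustrative assumptions, and `Sys.propagate` is replaced by a simple straight-line steer.

```python
import math
import random

def rrt(start, goal, is_free, k=500, step=0.5, goal_tol=0.5, seed=1):
    """Grow a tree of up to k nodes in a 10x10 workspace; parents[i] holds
    the index of node i's parent, mirroring addNode/addEdge above."""
    rng = random.Random(seed)
    nodes, parents = [start], [None]
    for _ in range(k):
        rand = (rng.uniform(0, 10), rng.uniform(0, 10))     # Random_State()
        near_i = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], rand))
        near = nodes[near_i]                                # Nearest_neighbour()
        d = math.dist(near, rand)
        if d == 0.0:
            continue
        # New_State: steer at most `step` from the nearest node toward the sample
        scale = min(step, d) / d
        new = (near[0] + scale * (rand[0] - near[0]),
               near[1] + scale * (rand[1] - near[1]))
        if not is_free(new):
            continue
        nodes.append(new)
        parents.append(near_i)
        if math.dist(new, goal) < goal_tol:   # a state close to the goal expanded
            path, i = [], len(nodes) - 1      # trace the path back to the root
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None
```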
RRT* Search Process
Algorithm: RRT
Input : Search space S, limit K, initial state Qinit, goal Qgoal
Output: RRT graph Graph
Begin
  tree ← Qinit
  while not goalReached() do
    if p < random() then
      Qrand ← sampleSpace(S)
      Qnear ← findNearest(tree, Qrand, S)
      Qnew ← join(Qnear, Qrand, K, S)
      Qneargoal ← Qnew
    else
      Qneargoal ← findNearest(tree, Qgoal, S)
    Qnewgoal ← join(Qneargoal, Qgoal, K, S)
    addNode(tree, Qneargoal, Qnewgoal)
  solution ← traceBack(tree, Qgoal)
  return solution
end
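The sampling rule in the loop above, exploring at random most of the time but occasionally aiming straight for the goal, can be sketched as follows; the bias probability `p` and the workspace bounds are illustrative assumptions.

```python
import random

def sample_with_goal_bias(goal, bounds, p=0.1, rng=random):
    """With probability p return the goal itself, pulling the tree toward it;
    otherwise sample the search space S uniformly."""
    if rng.random() < p:
        return goal
    (xmin, xmax), (ymin, ymax) = bounds
    return (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
```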
MODULE 2 : RAPIDLY EXPLORING RANDOM TREE* (RRT*) PATH PLANNER
DATASET 1
MODULE 2 : RAPIDLY EXPLORING RANDOM TREE* (RRT*) PATH PLANNER
DATASET 2
MODULE 2 : RAPIDLY EXPLORING RANDOM TREE* (RRT*) PATH PLANNER
DATASET 3
MODULE 2 : RAPIDLY EXPLORING RANDOM TREE* (RRT*) PATH PLANNER
DATASET 4
HARDWARE IMPLEMENTATION – OBSTACLE AVOIDANCE
PHASE 2 – HIGH LEVEL ARCHITECTURE
ROBOT PLATFORM FOR HARDWARE IMPLEMENTATION
NAME: X80SV WiRobot
SENSORS USED: 3 ultrasonic range sensors and 7 infrared sensors
SONAR DATA COLLECTION WITH TIMESTAMPS
Zones: SAFE ZONE, ALERT ZONE, DANGER ZONE
S1: left sensor | S2: middle sensor | S3: right sensor
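A sketch of how each timestamped sonar reading could be mapped to the three zones; the centimetre thresholds are illustrative assumptions, since the slide does not give the actual zone boundaries used on the robot.

```python
# Illustrative zone boundaries (in centimetres) - assumed, not from the slides.
SAFE_CM, ALERT_CM = 80, 40

def classify_zone(range_cm):
    """Map one sonar reading to the SAFE / ALERT / DANGER zones."""
    if range_cm >= SAFE_CM:
        return "SAFE"
    if range_cm >= ALERT_CM:
        return "ALERT"
    return "DANGER"

def read_sensors(s1, s2, s3, timestamp):
    """Tag the left/middle/right sonar readings (S1-S3) with a timestamp."""
    return {"t": timestamp,
            "S1": classify_zone(s1),
            "S2": classify_zone(s2),
            "S3": classify_zone(s3)}
```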
INFRARED SENSOR DATA COLLECTION WITH TIMESTAMPS
IR1: front-left sensor | IR2: front-middle sensor | IR3: front-middle sensor
IR4: front-right sensor | IR5: right sensor | IR6: rear sensor | IR7: left sensor
OBSTACLE DETECTION AND AVOIDANCE
STATIC AND DYNAMIC OBSTACLE AVOIDANCE
 Move backwards when the obstacle is in front
 Turn left
 Turn left to sense the obstacle and move forward when there is no obstacle
 Avoid the obstacle and move forward
 Avoid walls and move backwards
 Turn right
 Multiple-obstacle avoidance
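The behaviours listed above can be sketched as a small reactive rule using the sonar zone labels from the earlier slide; the exact priorities and commands on the robot are assumptions for illustration.

```python
def avoid(front, left, right):
    """Pick a motion command from the front/left/right sonar zone labels
    (SAFE / ALERT / DANGER)."""
    if front == "DANGER":
        return "MOVE_BACKWARD"              # obstacle directly ahead
    if front == "ALERT":
        # steer toward whichever side reads as freer
        return "TURN_LEFT" if left == "SAFE" else "TURN_RIGHT"
    return "MOVE_FORWARD"                   # path is clear
```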
WORKPLAN
WORKPLAN – PHASE 2
REFERENCES
1. Mohammed Atia, Shifei Liu, Heba Nematallah, Tashfeen B. Karamat and Aboelmagd Noureldin, “Integrated
Indoor Navigation System for Ground Vehicles With Automatic 3-D Alignment and Position
Initialization”, IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1279-1292, Apr. 2015.
2. Marwah Almasri, Khaled Elleithy and Abrar Alajlan, “Sensor Fusion Based Model for Collision Free Mobile
Robot Navigation”, Sensors, vol. 16, no. 24, pp. 1-24, Oct. 2015.
3. David Fleer and Ralf Möller, “Comparing holistic and feature-based visual methods for estimating the
relative pose of mobile robots”, Robotics and Autonomous Systems, Elsevier, vol. 89, pp. 51-74, Dec. 2016.
4. Zhiwen Xian, Xiaofeng He, Junxiang Lian, Xiaoping Hu and Lilian Zhang, “A bionic autonomous navigation
system by using polarization navigation sensor and stereo camera”, Autonomous Robots, Springer, pp. 1-12,
July 2016.
5. J. Du, C. Mouser and W. Sheng, “Design and Evaluation of a Teleoperated Robotic 3-D Mapping System
using an RGB-D Sensor”, IEEE Transactions on Systems, Man and Cybernetics, vol. 46, no. 5, pp. 718-724,
May 2016.
6. S. Lowry and M. Milford, “Supervised and Unsupervised Linear Learning Techniques for Visual Place
Recognition in Changing Environments”, IEEE Transactions on Robotics, vol. 32, no. 3, pp. 600-613, Jun. 2016.
7. R. Yudanto and F. Petre, “Sensor Fusion for Indoor Navigation and Tracking of Automated Guided
Vehicles”, International Conference on Indoor Positioning and Indoor Navigation (IPIN), Canada, Oct. 13-16,
2016, pp. 13-16.
8. Liang Zhang, Peiyi Shen, Guangming Zhu, Wei Wei and Houbing Song, “A Fast Robot Identification and
Mapping Algorithm Based on Kinect Sensor”, Sensors, vol. 15, pp. 19937-19967, Aug. 2015.
9. R. Luo and C. Lai, “Multisensor fusion-based concurrent environment mapping and moving object
detection for intelligent service robotics”, IEEE Transactions on Industrial Electronics, vol. 61, no. 8,
pp. 4043-4051, Aug. 2014.
10. Huizhong Zhou, Danping Zou, Ling Pei, Rendong Ying, Peilin Liu and Wenxian Yu, “StructSLAM: Visual
SLAM with building structure lines”, IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1364-1375,
Apr. 2015.
11. Yassen Dobrev, Sergio Flores and Martin Vossiek, “Multi-Modal Sensor Fusion for Indoor Mobile Robot
Pose Estimation”, 2016 IEEE/ION Position, Location and Navigation Symposium (PLANS), pp. 553-556, 2016.
12. F. Zhang, D. Clarke and A. Knoll, “Vehicle detection based on LiDAR and camera fusion”, 17th IEEE
International Conference on Intelligent Transportation Systems, China, pp. 1620-1625, 2014.
13. X. Zhang, A. Rad and Y. Wong, “Sensor fusion of monocular cameras and laser rangefinders for line-based
simultaneous localization and mapping (SLAM) tasks in autonomous mobile robots”, Sensors, vol. 12, no. 1,
pp. 429-452, 2012.
14. O. Wijk and H. Christensen, “Triangulation Based Fusion of Sonar Data with Application in Robot Pose Tracking”,
IEEE Transactions on Robotics and Automation, vol. 16, no. 6, pp. 740-752, Dec. 2000.
15. Chunming Yan, Jun Luo, Huayan Pu, Shaorong Xie and Jason Gu, “A Navigation System Based on Vision and Motion
Fusion Information Using Two UKFs”, Proceedings of the 2015 IEEE International Conference on Information and
Automation, Lijiang, China, pp. 174-178, 2015.
16. Oscar De Silva, George K. I. Mann and Raymond G. Gosine, “An Ultrasonic and Vision-Based Relative Positioning Sensor
for Multi-robot Localization”, IEEE Sensors Journal, vol. 15, no. 3, pp. 1716-1726, 2016.
17. Anthony Spears, Ayanna M. Howard, Michael West and Thomas Collin, “Acoustic Sonar and Video Sensor Fusion for
Landmark Detection in an Under-Ice Environment”, 2015.
18. Chinchu Chandrasenan, Nafeesa T. A., Reshma Rajan and Vijayakumar K., “Multisensor Data Fusion Based Autonomous
Mobile Robot With Manipulator For Target Detection”, International Journal of Research in Engineering and Technology,
vol. 3, no. 1, pp. 75-81, Mar. 2014.
19. Lijun Wei, Cindy Cappelle and Yassine Ruichek, “Camera/Laser/GPS Fusion Method for Vehicle Positioning Under
Extended NIS-Based Sensor Validation”, IEEE Transactions on Instrumentation and Measurement, vol. 62, no. 11,
pp. 3110-3122, Nov. 2013.
20. Nuno L. Ferreira, Micael S. Couceiro, André Araújo and Rui P. Rocha, “Multi-sensor fusion and classification with mobile
robots for situation awareness in urban search and rescue using ROS”, IEEE International Symposium on Safety, Security,
and Rescue Robotics, 2013.