Int. Journal of Electrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426
NITTTR, Chandigarh EDIT -2015 184
Design of Image Segmentation Algorithm for
Autonomous Vehicle Navigation
using Raspberry Pi
1Ankur S. Tandale, 2Kapil K. Jajulwar
1M.Tech Student, 2Research Scholar
1,2Department of Communication Engineering, G.H. Raisoni College of Engineering, Nagpur
1ankurtandale@gmail.com, 2kapil.jajulwar@raisoni.net
Abstract—In the past few years, autonomous vehicles have gained importance due to their widespread civilian and military applications. The on-board camera of an autonomous vehicle captures images that need to be processed in real time using an image segmentation algorithm. On-board processing of video frames in real time is a challenging task, as it involves extracting information and performing the operations required for navigation.
This paper proposes an approach for vision-based autonomous vehicle navigation in an indoor environment using the designed image segmentation algorithm. The vision-based navigation is implemented using the Raspberry Pi camera module on a Raspberry Pi Model B+. The image segmentation algorithm is built from smoothing, thresholding, morphological operations, and edge detection. Reference images of directions along the path are detected by the vehicle, which accordingly moves right or left or stops at the destination. The vehicle thus finds the path from source to destination using reference directions: it captures the video, segments it frame by frame, finds the edges in each segmented frame, and moves accordingly. The Raspberry Pi also transmits the captured video and segmented results over Wi-Fi to a remote system for monitoring. The vehicle is additionally capable of detecting obstacles in its path using ultrasonic sensors.
Index Terms—Autonomous Vehicle, Graphical User Interface (GUI), Raspberry Pi, Segmentation, Ultrasonic Sensor
I. INTRODUCTION
In recent years, autonomous vehicles have gained importance due to their widespread applications in military, civilian, and industrial fields. Autonomous vehicle navigation requires the ability to determine the vehicle's own position and to find a path from source to destination; navigation essentially comprises self-localisation and path finding. Vehicle navigation has long been a fundamental goal in both robotics and computer vision research. While the problem is largely solved for robots equipped with active range-finding devices, for a variety of reasons the task remains challenging for vehicles equipped only with vision sensors. On-board computing using computer vision is one of the most demanding areas of robotics: the need for autonomy in indoor navigation systems demands high computational power in the form of image processing capability. The simultaneous localisation and mapping (SLAM) algorithm performs self-localisation and maps the environment using a predefined indoor area and vision-based input; it involves complex computations and geometry to find the path, and the obstacles in it, in order to map the environment.
Fig. 1. Prototype of Autonomous Vehicle Moving in Right Direction
In vision-based autonomous vehicle navigation, segmentation of the captured frame is the fundamental image processing step. Segmentation is the process of grouping the pixels of an image according to the information needed for further processing; various segmentation techniques exist, based on regions, edges, textures, and intensities. As a vehicle proceeds with navigation using on-board processing, the need for powerful computational units poses a problem; secondly, the cost of the system hardware, though it has dropped in recent years, is still a limitation in robotics [1]. Robots therefore require powerful and fast processors to perform on-board processing of images. In the last few years the growing demand for autonomous vehicles and robots has brought us a range of ARM-architecture computational devices, such as the Raspberry Pi or the even more powerful quad-core ODROID-U2, that can perform on-board real-time image segmentation.
The proposed work uses a Raspberry Pi for real-time processing and a camera connected to the Raspberry Pi for vision. The prototype of the autonomous vehicle is implemented as shown in figure 1. It carries an on-board Raspberry Pi, a Microsoft LifeCam, an ultrasonic sensor, a power supply, and DC motors. The captured real-time video is processed such that each frame is first segmented and its edges are found, depending upon which the
vehicle moves right, left, or at certain angles. The complete segmentation task is performed in real time by the Raspberry Pi on board the vehicle. The video captured with the Raspberry Pi camera is also transmitted over WiFi to a remote computer.
II. RELATED WORK
Navigation can be achieved by designing a proper image segmentation algorithm. In [2], stereo vision is applied to small water vehicles using low-cost computers, yielding autonomous vehicles capable of following other vehicles or boats in water. The system uses two stereo-vision cameras connected to a Raspberry Pi for real-time image processing with the open computer vision libraries (OpenCV). This autonomous vehicle performs yaw and speed control, line tracking, and obstacle detection, and is capable of identifying and following targets at a distance of over 5 meters. In [3], an image segmentation algorithm is used for the real-time image processing demanded by micro air vehicle (MAV) navigation; there, the segmentation is implemented on an FPGA for fast on-board processing. Such systems find wide use in military applications and for surveillance of structures like roads and rivers [4].
A real-time autonomous visual navigation system is presented in [5], using approaches such as region segmentation to model road appearance and road detection to compute road shape. Monocular cameras along with proximity sensors are used to detect roads. Two algorithms are designed, and their outputs are combined using a Kalman filter to produce a robust estimate of the road, which drives the control policy for autonomous navigation in indoor and outdoor environments. Image matching is another approach to navigation, often used in unmanned aerial vehicles (UAVs) as in [6]. Images in the infrared range, acquired with CCD sensors, can also be used for navigation in both day and night [7].
III. BLOCK DIAGRAM OF PROPOSED SYSTEM
The proposed work aims to design a segmentation algorithm for autonomous vehicles on the Raspberry Pi, to help find obstacles and navigate in an unknown environment. The block diagram in figure 2 shows the proposed system for Raspberry Pi camera-feedback-based mobile robot navigation. Navigation is provided by running the designed segmentation algorithm on images captured through the camera on board the vehicle.
• Camera: The camera is connected to the Raspberry Pi and acquires video at 24 fps, from which each frame is taken as input for further processing.
• Filter: The filter removes noise from the acquired image so that the necessary information in the image is not lost.
TABLE I
FEATURES OF RASPBERRY-PI MODEL B+
Features Raspberry-Pi Model B+
CPU 700 MHz ARM11 core
Memory 512 MB RAM (shared with GPU)
On-board Ethernet 10/100
Memory Storage uSD card slot, 8/16 GB
Power Ratings 700 mA to 1.8 A, 5 V DC
USB Ports 4
Video Outputs HDMI
Operating Systems Raspbian OS, Debian OS
• Processing Unit: The processing unit is where the image segmentation is performed: gradients are computed and edges are tracked. From the edges it is possible to identify the reference image, and the vehicle moves accordingly. The ultrasonic sensor also feeds this unit, so that the distance between an obstacle and the autonomous vehicle is known; if an obstacle is near, the vehicle stops and moves in another direction to avoid it. All of this processing is performed on the minicomputer, the Raspberry Pi, which runs the image segmentation algorithm.
• Display Unit (GUI): The display unit is where the captured video and segmented output are displayed on the remote screen via the WiFi adapter.
• Feedback: The segmented output is continuously monitored for gradients and edges and is fed back to the Raspberry Pi, along with the sensor output, to check for obstacles continuously.
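The Camera, Filter, Processing Unit, Display, and Feedback blocks above form one closed decision loop per frame. As an illustration only, the loop logic can be sketched as below; the function names, the 20 cm safety margin, and the motor commands are assumptions for this sketch, not taken from the paper, and all camera/GPIO I/O is stubbed out (the paper's implementation is in C++ with OpenCV).

```python
# Hypothetical sketch of the per-frame control loop in the block diagram.
# All names and thresholds are illustrative assumptions; hardware I/O is stubbed.
OBSTACLE_LIMIT_CM = 20  # assumed safety distance before backing up

def control_step(frame, obstacle_cm, match_action):
    """Decide one motor command from a frame and an ultrasonic reading.

    `match_action` is a callable that segments the frame and returns one of
    'go', 'left', 'right', 'stop', or None when no reference image matches.
    """
    if obstacle_cm < OBSTACLE_LIMIT_CM:
        # Obstacle near: back up first, as the Processing Unit description says.
        return "reverse"
    action = match_action(frame)
    if action is None:
        return "forward"  # keep moving until a reference image is found
    return action

if __name__ == "__main__":
    # Stub matcher standing in for the segmentation + edge-matching stage:
    stub = lambda frame: "left" if frame == "arrow_left" else None
    print(control_step("arrow_left", 100, stub))  # -> left
    print(control_step("plain_wall", 10, stub))   # -> reverse
```

The feedback path in the diagram corresponds to calling `control_step` again on the next frame with a fresh sensor reading.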
IV. INTRODUCTION TO RASPBERRY PI
The image segmentation algorithm is implemented in C++ on the Raspberry Pi board using the Open Computer Vision library (OpenCV) [8]. The Raspberry Pi is a small single-board computer built around an ARM processor and well suited to real-time operation. It runs the Raspbian operating system, a Linux environment. Officially launched in February 2012, the Raspberry Pi personal computer took the world by storm, selling out the 10,000 available units immediately.
It is an inexpensive, credit-card-sized exposed circuit board: a fully programmable PC running the free open-source Linux operating system. The Raspberry Pi can connect to the Internet, can be plugged into a TV, and costs very little. Originally created to spark schoolchildren's interest in computers, due to the variety of features mentioned in Table I the Raspberry
Fig. 2. Block Diagram of Proposed System
Int. Journal of Electrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426
NITTTR, Chandigarh EDIT -2015 186
V. PROPOSED METHODOLOGY FOR AUTONOMOUS VEHICLE NAVIGATION
Fig. 3. Raspberry Pi Model B+ Setup
Pi has caught the attention of home hobbyists, entrepreneurs, and educators worldwide; estimates put sales at around 1 million units as of February 2013. Figure 3 shows the Raspberry Pi Model B+ setup with a monitor and Ethernet connected to it. Qt Creator, a cross-platform C++/JavaScript integrated development environment, is used as the GUI application development framework. The program is built in Qt Creator and compiled from the Linux terminal.
Fig. 4. Example of Autonomous Vehicle Navigation in Indoor
Room Environment
Fig. 5. Reference Directions Symbol
The ability to navigate in one's environment is important for a fully autonomous vehicle (AV) system, and one critical task in navigation is to recognize and stay on the path. The Raspberry Pi is connected to a Microsoft LifeCam capturing at 24 frames per second. The image processing algorithm makes use of the OpenCV libraries. The captured video is processed frame by frame using the designed segmentation algorithm.
A. Image Segmentation Algorithm
The image segmentation algorithm is designed using smoothing, thresholding, morphological operations, edge detection, and tracking. The vehicle continuously tracks the reference directions to move from source to destination. Once a reference direction arrow is detected, the Raspberry Pi processes the captured frame using the algorithm. The indoor room environment used with the designed algorithm is shown in figure 4. The arrows are reference direction marks stuck on the wall at ground level. A wall is detected as an obstacle, and the autonomous vehicle then moves backwards and checks for a reference direction. The reference directions shown in figure 5, namely GO, left arrow, right arrow, and STOP, are used as mentioned above. When the camera detects a reference image, the following steps are performed:
1) The captured video is processed frame by frame.
2) The frame is converted to grayscale to limit the computational requirements.
3) Smoothing: The image is blurred to remove noise.
4) Finding Gradients: Edges are marked where the image has large gradient magnitudes, and only local maxima are marked as edges.
5) Double Thresholding: Potential edges are then determined by thresholding.
6) Edge Tracking by Hysteresis: Final edges are determined by suppressing all edges that are not connected to strong edges.
The detected edges should be as close as possible to the real edges so that the reference direction can be determined and the vehicle moved accordingly. The GO reference image denotes start; for a right arrow the vehicle moves in the right direction, and so on.
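Steps 2-5 above can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's C++/OpenCV implementation: it omits non-maximum suppression and hysteresis (steps 4 and 6 in full), and the kernel sizes and the two thresholds are assumed values.

```python
# Minimal NumPy sketch of grayscale smoothing, gradient computation, and
# double thresholding. Thresholds and kernels are illustrative assumptions.
import numpy as np

def convolve2d(img, kernel):
    """Valid-mode 2-D correlation, sufficient for small symmetric kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def segment_edges(gray, low=50.0, high=150.0):
    """Return a label map: 0 = no edge, 1 = weak edge, 2 = strong edge."""
    # Step 3: smooth with a 3x3 Gaussian-like kernel to suppress noise.
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    smooth = convolve2d(gray.astype(float), g)
    # Step 4: Sobel gradients; edge strength is the gradient magnitude.
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sy = sx.T
    mag = np.hypot(convolve2d(smooth, sx), convolve2d(smooth, sy))
    # Step 5: double threshold into strong / weak / suppressed pixels.
    return np.where(mag >= high, 2, np.where(mag >= low, 1, 0))

if __name__ == "__main__":
    # A vertical step edge: left half dark, right half bright.
    img = np.zeros((8, 8))
    img[:, 4:] = 255.0
    print(np.unique(segment_edges(img)))  # the step boundary yields strong edges
```

In the full Canny pipeline, weak edges (label 1) would then be kept only when connected to strong ones (label 2), which is step 6.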
B. Vision Based Navigation
Vision-based navigation can be achieved either by reference-object color recognition or, as implemented here, by using reference direction images. Figure 6 shows the flow chart of the proposed methodology for autonomous vehicle navigation. When the system starts, all libraries used in processing are first initialized. When the Raspberry Pi on board the vehicle starts, the camera captures the GO image frame and the vehicle starts: the GO reference image is processed using the image segmentation algorithm, and the output, a thresholded Canny image, is matched against the edges of the database GO image, whose action is defined in the system. Once the edges match, the vehicle moves forward and keeps searching for the next reference image on the wall, performing the corresponding action each time, in order to reach the final destination. When it finally approaches the STOP reference image (at the destination), it keeps processing the frames, and when any frame's
Fig. 6. Flow Chart of Proposed System
edges match the edges of the database STOP image, the vehicle stops at the final destination. In this way the autonomous vehicle navigates to its destination.
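The matching step above compares the segmented frame's edges against database reference images, but the paper does not state its matching criterion. One simple way such a comparison could be sketched is an intersection-over-union (IoU) score between binary edge maps; the metric, the acceptance threshold, and the toy edge maps below are all assumptions for illustration.

```python
# Hypothetical edge-map matching against a database of reference images.
# IoU as the similarity metric and the 0.6 threshold are assumed, not from
# the paper.
import numpy as np

def iou(edges_a, edges_b):
    """Overlap score between two boolean edge maps of the same shape."""
    a, b = edges_a.astype(bool), edges_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(a, b).sum() / union

def match_reference(frame_edges, database, threshold=0.6):
    """Return the action whose reference edge map best matches the frame,
    or None when nothing matches well enough."""
    best_action, best_score = None, threshold
    for action, ref_edges in database.items():
        score = iou(frame_edges, ref_edges)
        if score >= best_score:
            best_action, best_score = action, score
    return best_action

if __name__ == "__main__":
    left = np.eye(5, dtype=bool)              # stand-in "left arrow" edge map
    stop = np.flipud(np.eye(5)).astype(bool)  # stand-in "STOP" edge map
    db = {"left": left, "stop": stop}
    print(match_reference(left, db))  # -> left
```

Returning None here corresponds to the vehicle continuing forward until a reference image is found, as described above.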
C. Obstacle Detection
Ultrasonic sensors are used with the Raspberry Pi to detect obstacles in the vehicle's path. The sensors are mounted on the vehicle and interfaced with the Raspberry Pi. They measure the distance between an obstacle and the vehicle and pass it to the Raspberry Pi, which processes the input: the vehicle moves backwards and then turns left to avoid collision with the obstacle. The detection range of the ultrasonic sensors is 2 cm to 450 cm.
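The quoted 2 cm to 450 cm range matches HC-SR04-class sensors, which report distance through the width of an echo pulse: the sound travels to the obstacle and back, so distance is half the round trip. Assuming such a sensor (the paper does not name the model), the arithmetic can be sketched as follows, with the GPIO timing code stubbed out:

```python
# Sketch of the ultrasonic distance computation. An HC-SR04-class sensor and
# the 20 cm safety margin are assumptions; only the arithmetic is shown.
SPEED_OF_SOUND_CM_S = 34300  # about 343 m/s in air at 20 degrees C

def echo_to_distance_cm(pulse_seconds):
    """Convert an echo pulse width to distance in cm.

    The pulse covers the round trip to the obstacle and back, so the
    one-way distance is half of (pulse duration x speed of sound).
    """
    return pulse_seconds * SPEED_OF_SOUND_CM_S / 2.0

def obstacle_near(pulse_seconds, limit_cm=20.0):
    """True when the measured obstacle is inside the assumed safety margin."""
    return echo_to_distance_cm(pulse_seconds) < limit_cm

if __name__ == "__main__":
    # A 1 ms echo corresponds to roughly 17.15 cm:
    print(round(echo_to_distance_cm(0.001), 2))  # -> 17.15
```

The result of `obstacle_near` is what triggers the back-up-and-turn behaviour described above.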
VI. RESULTS
The image segmentation results for the Right and STOP reference images captured in real time are shown in figures 7 and 8. The segmented outputs determine the movement of the vehicle in the corresponding directions. The autonomous vehicle is capable of moving in an indoor environment and detecting obstacles. The vehicle moves slowly because low-RPM DC motors (10 RPM) are used, a consequence of the high computation time required by the image segmentation algorithm for each frame. The Raspberry Pi also displays the captured video and segmented output on a remote desktop over the WiFi network.
Fig. 7. Segmented output of Right Reference Image
Fig. 8. Segmented output of STOP Reference Image
VII. CONCLUSION
Autonomous vehicle navigation using reference direction images on the wall (at ground level) has been implemented with the designed image segmentation algorithm. In the implementation, the vehicle is affected by rough indoor surfaces; on smooth surfaces it moves properly in the desired direction and reaches the final destination using the reference images. The speed of the vehicle could be increased with higher-RPM motors, but the per-frame computation of the segmentation algorithm makes it difficult to obtain the desired results at higher speeds. In future work, autonomous vehicle navigation could use color object recognition in the indoor environment: different colored objects would be recognised by computing HSV values and segmenting the colored object, with each color assigned a defined movement in the system. The indoor environment could also be mapped from the video frames on the remote desktop using MATLAB.
REFERENCES
[1] C. K. Chang, C. Siagian, and L. Itti, “Mobile robot monocular
vision navigation based on road region and boundary estimation,”
in Proceedings of IEEE/RSJ International Conference on
Intelligent Robots and Systems, Vilamoura, Algarve, Portugal,
October 2012, pp. 1043–1050.
[2] R. Neves and A. C. Matos, “Raspberry pi based stereo vision
for small size asvs,” in IEEE International Conference.
[3] Shankardas, D. Bharat, A. I. Rasheed, and V. K. Reddy,
“Design and asic implementation of image segmentation
algorithm for autonomous mav navigation,” in Proceedings of
2013 IEEE Second International Conference of Image
Information Processing (ICIIP-2013), 2013, pp.352–357.
[4] S. Rathinam, P. Almeida, Z. Kim, and S. Jackson, “Autonomous
searching and tracking of a river using an uav,” in Proceedings of
American Control Conference, New York City,USA, July 2007,
pp. 359–364.
[5] L. F. Posada, K. K. Narayanan, F. Hoffmann, and T. Bertram,
“Floor seg- mentation of omnidirectional images for mobile robot
visual navigation,” in IEEE,RSJ International conference on
Intelligent Robots and Systems, Taipei,Taiwan, October 2010, pp.
804–809.
[6] Z. Zhang, B. Sun, K. Sun, and W. Tang, “A new image
matching algorithm based on multi-scale segmentation applied for
uav navigation,” in IEEE, 2010.
[7] A. Lenskiy and J.-S. Lee, “Terrain images segmentation in
infra-red spectrum for autonomous robot navigation,” in
IEEE, IFOST 2010 Proceedings, 2010.
[8] G. Bradski and A. Kaehler, Learning OpenCV. O'Reilly Media Inc., 2008.

Design of Image Segmentation Algorithm for Autonomous Vehicle Navigationusing Raspberry Pi

  • 1.
    Int. Journal ofElectrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426 NITTTR, Chandigarh EDIT -2015 184 Design of Image Segmentation Algorithm for Autonomous Vehicle Navigation using Raspberry Pi 1 Ankur S. Tandale, 2 Kapil K. Jajulwar 1 M.Tech Student, 2 Research scholar 1,2 Department of Communication Engineering, G.H.Raisoni College of Engineering,Nagpur 1 ankurtandale@gmail.com,2 kapil.jajulwar@raisoni.net Abstract—In the past few years Autonomous vehicles have gained importance due to its widespread applications in the field of civilian and military applications. On-board camera on autonomous vehicles captures the images which need to be processed in real time using the image segmentation algorithm. On board processing of video(frames)in real time is a big challenging task as it involves extracting the information and performing the required operations for navigation. This paper proposes an approach for vision based autonomous vehicle navigation in indoor environment using the designed image segmentation algorithm. The vision based navigation is applied to autonomous vehicle and it is implemented using the Raspberry Pi camera module on Raspberry Pi Model-B+ with the designed image segmentation algorithm. The image segmentation algorithm has been built using smoothing,thresholding, morpho- logical operations, and edge detection. The reference images of directions in the path are detected by the vehicle and accordingly it moves in right or left directions or stops at destination. The vehicle finds the path from source to destination using reference directions. It first captures the video,segments the video(frame by frame), finds the edges in the segmented frame and moves accordingly. The Raspberry Pi also transmits the capture video and segmented results using the Wi-Fi to the remote system for monitoring. 
The autonomous vehicle is also capable of finding obstacle in its path and the detection is done using the ultrasonic sensors. Index Terms—Autonomous Vehicle, Graphical User Inter- face(GUI), Raspberry Pi, Segmentation, Ultrasonic Sensor I. INTRODUCTION In the recent years, Autonomous vehicles have gained importance due to its widespread applications in various fields such as Military, Civilian, industrial etc. Autonomous vehicle navigation has the ability to determine its ow position and finding the path from source to destination. Navigation mainly defines the self localisation and finding the destination path. Vehicle navigation has long been a fundamental goal in both robotics and computer vision research. While the problem is largely solved for robots equipped with active range- finding devices, for a variety of reasons, the task still remains challeng- ing for vehicles equipped only with vision sensors. On-board computing using the computer vision is the most demanding areas of robotics. The need for autonomy in vehicles in indoor based navigation systems demands high computational power in the form of image processing capabilities. The Simultaneous localisation and mapping(SLAM) algorithm performs the self Fig. 1. Prototype of Autonomous vehicle Moving in Right direction localisation and maps the environment using the predefined indoor environment area and the vision based form. It involves complex computations and geometry to find the path and obstacles in the path to map the environment. In vision based autonomous vehicle navigation ,segmenta- tion of the captured frame is the fundamental step in image processing. Segmentation is the process of grouping pixels of an image depending on the information needed for further processing. Various segmentation techniques are present based on the region,edges,textures and intensities. 
As vehicles pro- ceeds with navigation using on- board processing it possess a problem to the use of powerful computational units; secondly cost of the system hardware, though having dropped in recent years, is still a limitation in robotics [1]. Therefore, robots requires powerful and fast processing speed to perform on board processing of images. In the last few years the demand for autonomous vehicles and robots has increased which have brought us a range of ARM architecture computational devices such as the Raspberry Pi or the even more powerful Quad- Core ODROID-U2 and these devices can perform on board real time image segmentation. The proposed work uses a Raspberry Pi for real time processing and a camera connected to the raspberry pi for providing the vision. The prototype of the autonomous vehicle is implemented as shown in figure 1. It is having onboard Raspberry Pi, Microsoft Lifecam, Ultrasonic sensor, power supply and DC motors etc. The captured real time video is processed such that it is first segmented and the edges are found depending upon which the
  • 2.
    Int. Journal ofElectrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426 185 NITTTR, Chandigarh EDIT-2015 vehicle moves in right, left or in certain angles. The complete task of segmentation is done using Raspberry Pi on board the vehicle in real time. The captured video using the Raspberry Pi camera is also transmitted using the WiFi to the remote computer. II. RELATED WORK Navigation can be done by designing proper Image segmen- tation algorithm. In literature [2], the stereo vision applied to small water vehicles using the low cost computers, which can drive autonomous vehicles capable of following other vehicle or boats in water is developed. The system uses 2 stereovision cameras which are connected to raspberry-pi for real time im- age processing using open computer vision libraries(OpenCV). This autonomous vehicle performs control of yaw and speed, line tracking and detecting obstacles. This system is capable of identifying and following targets in a distance of over 5 meters. In literature [3], the image segmentation algorithm is used for real time image processing as it is demanded by micro air vehicle(MAV) for navigation. Here, the image segmentation is implemented on FPGA for on board fast processing. The system finds vast application in military applications and for surveillance of structures like roads and rivers [4]. Real time autonomous visual navigation system is presented in [5] using approaches like region segmentation to find the road appearance and road detection to compute road shape. Monocular cameras along with proximity sensors are used to detect roads. Two algorithm are designed and there outputs are combined using a Kalman filter to produce a robust estimation of road which is used as a control policy for autonomous navigation in indoor and outdoor environments. Image matching is another approach for navigation and it is often used in unmanned aerial vehicle (UAVs) as used in [6]. 
The images can also be used in infrared range using CCD sensors for the purpose of navigation in day and night time. [7]. III. BLOCK DIAGRAM OF PROPOSED SYSTEM The proposed title aims to design the Segmentation al- gorithm for autonomous vehicles on Raspberry Pi to help find obstacles and navigate the vehicles in an unknown environment. The below block diagram in figure 2. shows the proposed system for raspberry-pi Camera Feedback for Navigation based mobile robot navigation. The navigation is provided by designing the segmentation algorithm using images captured through camera on board the vehicles. • Camera: Camera is connected to the Raspberry pi and it acquires the video(24fps) from which the frame is taken as input and it is further processed. • Filter: The filter removes the noise from the acquired image so that the necessary information in image is not lost. TABLE I FEATURES OF RASPBERRY-PI MODEL B+ Features Raspberry-Pi Model B+ CPU 700MHz-ARM 11-S core Memory 512MB RAM(shared with GPU) On board Ethernet 10/100 Memory Storage uSD Card Slot 8/16GB Power Ratings 700mA-1.8mA, 5V DC USB Ports 4 Video Outputs HDMI Operating Systems Raspbian OS, Debian OS Processing Unit: The processing unit is where the image segmentation is performed such that the gradient and edge tracking is done. From the edges it is possible to determine the reference image and so the vehicle moves accordingly. The ultrasonic sensor also gives the input to this unit so that the distance between the obstacle and autonomous vehicle is known and if the obstacle is near then the vehicle stops and starts moving in other direction to overcome it. All this processing is performed using the minicomputer called as Raspberry-Pi. The image segmentation algorithm is processed using the raspberry- pi. • Display Unit (GUI): The display unit is where the cap- tured video and segmented output is displayed using the WiFi sdapter on the remote screen. 
• Feedback: The segmented output is continuously moni- tored to find the gradient and edges and it is given as feedback to Raspberry Pi along with the sensor output to check the obstacle continuously. IV. INTRODUCTION TO RASPBERRY PI The design of image segmentation algorithm is done using C++ on Raspberry Pi board using the Open Computer Vision (OpenCV) [8]. The Raspberry pi is a handheld computer on the board consists of ARM processor and best suitable for real time operation. It runs on raspbian operating system which has the Linux environment. Officially launched in February 2012, the Raspberry Pi personal computer took the world by storm, selling out the 10,000 available units immediately. It is an inexpensive credit card sized exposed circuit board, a fully programmable PC running the free open source Linux operating system. The Raspberry Pi can connect to the Inter- net; can be plugged into a TV, and costs very less. Originally created to spark school childrens interest in computers, due to the variety of features mentioned in Table I, the Raspberry Fig. 2. Block Diagram of Proposed System
  • 3.
    Int. Journal ofElectrical & Electronics Engg. Vol. 2, Spl. Issue 1 (2015) e-ISSN: 1694-2310 | p-ISSN: 1694-2426 NITTTR, Chandigarh EDIT -2015 186 V. PROPOSED METHODOLOGY FOR AUTONOMOUS 4) Finding Gradients: Only local maxima are marked as VEHICLE NAVIGATION edges and they are marked where the image has large Fig. 3. Raspberry pi Model-B+ Setup Pi has caught the attention of home hobbyist, entrepreneurs, and educators worldwide. Estimates shows the sales figures around 1 million units as of February 2013.The figure 3. shows the Raspberry Pi model B+ setup with monitor and Ethernet connected to it. Qt creator is used for Qt GUI application development framework. Qt creator is a cross platform C++,javascript integrated development environment. The program is build on Qt creator and it is compiled using the Linux terminal. Fig. 4. Example of Autonomous Vehicle Navigation in Indoor Room Environment Fig. 5. Reference Directions Symbol Ability to navigate in ones environment is important for a fully autonomous vehicle (AV) system. One critical task in navigation is to be able to recognize and stay on the path.The Raspberry Pi is connected with Microsoft LifeCam and the number of frames per second is 24fps. The image processing algorithm makes use of OpenCV libraries. The Video captured is first processed using the designed segmentation algorithm and the processing of input is done frame by frame. A. Image Segmentation Algorithm The image segmentation algorithm is designed using smoothing, thresholding,morphological operations, edge de- tection and tracking.The vehicle continuously tracks the refer- ence direction to move the vehicle from source to destination. Once the reference direction arrow is detected the Raspberry Pi processes the captured by using the algorithm. The indoor room environment with designed algorithm for Autonomous Vehicle navigation is shown in figure 4. The arrows are reference direction marks pasted or stuck on the wall at ground level. 
The wall is detected as obstacle and the Autonomous vehicle moves in backwards and checks for reference direction. The reference directions shown in figure 5. such as GO , left arrow, right arrow and STOP are used as mentioned above. When the camera detects the reference images it performs following: 1) The video captured is processed frame by frame. 2) The frame is converted into gray scale to limit the computational requirements. 3) Smoothing: The image is then blurred to remove the noise. magnitudes . 5) Double Thresholding: Potential edges are then deter- mined by thresholding. 6) Edge Tracking By Hystersis: Final edges are determined by suppression of all edges that are not connected to certain edges. The detected edges should be as close as possible to real edges to determine the reference direction and move vehicle accordingly. The GO reference image denotes start, for Right arrow, the vehicle moves in right direction and so on. B. Vision Based Navigation: The vision based navigation can be done by reference object color recognition and the other method which is implemented here using the reference direction images. The figure 6. shows the flow chart of the proposed methodology for autonomous vehicle navigation. When the system is started all the libraries which are used in processing will be first initialized. When the Raspberry Pi onboard the vehicle starts, the camera on board the vehicle captures the GO image frame and the vehicle starts. This is done by capturing the GO reference image and then it is processed using the image segmentation algorithm, and the output is the threshold canny image which is then matched with edges in image and defined action with input database image of GO. Once the edges are matched, the vehicle starts moving forward until it finds the second reference image in the same direction to reach final destination. 
The vehicle then keeps moving and searching for the second reference image on wall for the next action in order to reach the final destination. When it finally captures the STOP(at destination) reference image it keeps on processing the frames and when any frame
edges match the database STOP image edges, the vehicle stops at the final destination. In this way, the autonomous vehicle navigates to reach its destination.

Fig. 6. Flow Chart of Proposed System

C. Obstacle Detection

Ultrasonic sensors are used with the Raspberry Pi to detect obstacles in the vehicle's path. The sensors are mounted on the vehicle and interfaced with the Raspberry Pi. They measure the distance between the obstacle and the vehicle and give the output to the Raspberry Pi, which processes the readings; the vehicle then moves backward and turns left to avoid a collision with the obstacle. The detection range of the ultrasonic sensors is 2 cm to 450 cm.

VI. RESULTS

The image segmentation results for the Right and STOP reference images captured in real time are shown in Fig. 7 and Fig. 8. The segmented outputs determine the movement of the vehicle in the corresponding directions. The autonomous vehicle is capable of moving in an indoor environment and detecting obstacles. The vehicle moves slowly because low-RPM (10 RPM) DC motors are used, owing to the high computation time required by the image segmentation algorithm for each frame. The Raspberry Pi also displays the captured video and the segmented output on a remote desktop over the Wi-Fi network.

Fig. 7. Segmented output of Right Reference Image

Fig. 8. Segmented output of STOP Reference Image

VII. CONCLUSION

Autonomous vehicle navigation using reference direction images on the wall (at ground level) has been implemented using the designed image segmentation algorithm. In the implementation, the vehicle is affected by rough indoor surfaces. On smooth surfaces, the vehicle moves properly in the desired direction and reaches the final destination using the reference images.
The speed of the vehicle can be increased by using higher-RPM motors, but the computation (the segmentation algorithm) performed on each frame makes it difficult to obtain the desired results with high-RPM motors. In the future, autonomous vehicle navigation can be performed using color object recognition in the indoor environment. Objects of different colors will be recognised by calculating their HSV values and then segmenting the color object, such that every color has a certain movement defined in the system. Also, the indoor environment can be mapped by using the video frames on the remote desktop with MATLAB.