Introduction
This project was completed as part of my curriculum at the University of South Australia for the award of the Master of Advanced Manufacturing Technology. The research was conducted during the final two semesters of my master's degree, i.e. the third and fourth semesters (Feb 2010 to Oct 2010). The research project was titled "Development of a global vision system for swarm robots".
Aims and objectives
This research dealt with the use of a global vision system to find the position, angle, direction, linear velocity and angular velocity of three robots. It represented a step towards the development of fully functional swarm robots at the University of South Australia.
The expected outcomes were an ID tag that allowed easy calculation of robot orientation and supported robust tracking of the individual robots, while also being able to accommodate a large number of robots. Another outcome was the identification of an appropriate colour space allowing easy image segmentation, isolating the regions of interest (the foreground pixels). Finally, the algorithm developed should calculate the required parameters accurately and robustly while consuming a small amount of processing time per frame.
Methodology
ID tag design
A variety of ID tags have been developed over the years for swarm robots. The ID tag developed during this research is shown in figure 1.1 below. Here the region marked 1 represented the key colour patch. This patch had the same colour for all three robots, and the coordinates of its centre gave the position of the robot in the image. The region marked 0 represented the non-key colour patch. This patch had a different colour for each of the robots and was used for tracking the individual robots and for angle calculation. The region marked 2 represented a background colour that gave patches 0 and 1 higher contrast so that they were more easily detectable.
Figure 1.1 The ID tag developed
The colours used for the ID tags are shown in figure 1.2
Figure 1.2 Colour used for ID tag
This design of the ID tag represents one of the simplest patterns used for robot pose estimation. The reasons for selecting this specific pattern were visual simplicity, ease of calculating orientation and straightforward calculations for tracking individual robots. However, a major flaw of this design is the number of robots that can be accommodated: including more robots would mean adding more colours for the non-key colour patches, which would increase the computational burden of the algorithm significantly.
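The tag scheme above can be sketched as data: a shared key colour plus one unique non-key colour per robot. The names and colours below are illustrative (taken from the three robots described later), not code from the thesis; the helper makes the scaling limitation explicit.

```python
# Illustrative sketch (not from the thesis): each robot's ID tag is the
# shared key colour plus a robot-specific non-key colour.
KEY_COLOUR = "red"  # assumption: the shared centre-patch colour

ROBOT_TAGS = {
    "magenta_robot": (KEY_COLOUR, "magenta"),
    "yellow_robot": (KEY_COLOUR, "yellow"),
    "green_robot": (KEY_COLOUR, "green"),
}

def colours_needed(num_robots: int) -> int:
    """Distinct colours required: one shared key colour plus one
    non-key colour per robot -- the flaw noted above, since every
    added robot adds a colour the segmentation must distinguish."""
    return 1 + num_robots
```

For the three robots used here this is four colours; ten robots would already require eleven well-separated colours.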
Selection of colour space
Each frame captured showed different pixel values due to the changing lighting conditions. Therefore a colour space needed to be selected that was robust against changes in lighting conditions. The colour space used in this case was HSV. This colour space allows the algorithm to differentiate between colours based on their hue and saturation values. The main reason for selecting this colour space is that, unlike the RGB colour space where all the colour planes are correlated (i.e. a change in illumination affects all three planes), HSV encodes the colour information in two planes, hue and saturation, and the brightness information in the third (value) plane. Another method used to minimize the effects of variations in illumination is normalization of the RGB colour space. The purpose of implementing both was to compare the two methods and find which approach is better suited for use in swarm robotics.
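Both colour representations can be sketched with the standard library alone; this is an illustrative example, not the thesis code, and the per-pixel functions below are my own naming. It shows the property being exploited: scaling the brightness of a pixel leaves hue, saturation and the normalized-RGB chromaticity (approximately) unchanged.

```python
import colorsys

def rgb_to_hsv(r: int, g: int, b: int):
    """Convert an 8-bit RGB pixel to HSV (each component in 0..1).
    Hue and saturation carry the colour; V carries the brightness."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def normalize_rgb(r: int, g: int, b: int):
    """Normalized RGB (chromaticity): each channel divided by the
    channel sum, which cancels uniform brightness changes."""
    s = r + g + b
    if s == 0:
        return (0.0, 0.0, 0.0)
    return (r / s, g / s, b / s)

# A pixel and the same pixel at half the illumination, e.g.
# (200, 40, 40) vs (100, 20, 20), share the same hue, saturation
# and normalized-RGB values; only V differs.
```

In either space, segmenting a colour patch then reduces to thresholding the illumination-invariant components.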
Tracking of individual robots
A very important task was keeping track of the individual robots. Many different measures have been devised, based either on colour information or on specific geometric properties. The tracking measure used in this thesis belongs to the latter category: the Euclidean distance. It can be defined as the ordinary straight-line distance between two points, the distance one would measure with a ruler, and can be calculated using the Pythagorean theorem. This distance is shown in figure 1.3
Figure 1.3 Euclidean distance
The formula is given as

Euclidean distance = √((x2 − x1)² + (y2 − y1)²)

where (x1, y1) and (x2, y2) are the centre coordinates of the two patches.
Using the formula defined above, the algorithm calculates the Euclidean distance between one of the non-key colour patches and each of the three key colour patches, and selects the key colour patch with the lowest Euclidean distance. This procedure is repeated for all three non-key colour patches. It exploits the fact that the distance between the two patches on a tag is always less than 40 pixels for all three patterns. This approach allows the algorithm to track the positions and angles of the three patterns, which in turn makes it possible to find the linear and angular velocities.
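The nearest-key-patch matching described above can be sketched as follows. This is an illustrative reimplementation, not the thesis code; the function names and the dictionary layout are assumptions, while the 40-pixel bound comes from the text.

```python
import math

def euclidean(p, q):
    """Ordinary straight-line distance between two (x, y) points."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def match_patches(non_key_centres, key_centres, max_dist=40.0):
    """For each non-key patch centre, pick the nearest key patch centre.
    Because the two patches on one tag are always within ~40 px, the
    nearest key patch belongs to the same robot; key patches farther
    than max_dist are rejected as belonging to other tags."""
    pairs = {}
    for colour, nk in non_key_centres.items():
        nearest = min(key_centres, key=lambda k: euclidean(nk, k))
        if euclidean(nk, nearest) <= max_dist:
            pairs[colour] = nearest
    return pairs
```

With, say, a magenta patch at (100, 100) and key patches at (110, 105) and (310, 60), the first key patch (about 11 px away) is matched and the second ignored.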
The algorithm
A frame used in the experiment is shown in figure 1.4
Figure 1.4 A picture used in the experiment
The centre circle in this case is red, whereas the non-key colour patch circles are magenta, yellow and green. The algorithm starts by acquiring the RGB frame. As is evident from figure 1.4, there are three colour IDs; for convenience these are called the magenta robot, the yellow robot and the green robot. The acquired image is then converted to HSV. This is followed by applying a threshold and a morphological opening to separate out the centre red circles shown in figure 1.4. It should be noted that at this point it is not known which centre circle belongs to which robot. For now the algorithm stores the x and y coordinates of the three centre circles. During tracking the algorithm is able to differentiate between them based on the Euclidean distance, which in this case is the distance between the two circles on a robot, e.g. the magenta circle on the magenta robot and its respective centre circle. The next step is applying multiple thresholds to find the centre coordinates of the magenta, yellow and green circles. The algorithm then finds the Euclidean distance for each robot and is able to track the robots efficiently. Having differentiated between the three robots, the algorithm next calculates the angle for each robot. The angle is found from the slope of the line connecting the centres of the key colour patch and the non-key colour patch; e.g. for the magenta robot the line runs from the centre of the red circle to the centre of the magenta circle. This is followed by finding the linear and angular velocities. To find these, the positions and angles obtained for each frame are stored separately. The linear and angular velocities of a robot between two frames are found by taking a simple difference between the centre coordinates and angle values obtained for the two frames and dividing by the time between the frames. Finally, the direction is calculated from the robot's angle: for example, if the angle is 90 degrees the robot is heading north, and if it is 270 degrees the robot is heading south.
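The angle, velocity and direction steps can be sketched as below. This is an illustrative version rather than the thesis code: it uses atan2 instead of a raw slope (which would fail for vertical lines), and the mapping of image coordinates to compass directions is an assumption.

```python
import math

def heading_deg(key_centre, non_key_centre):
    """Angle of the line from the key patch to the non-key patch,
    in [0, 360) degrees. atan2 handles all quadrants, unlike a slope."""
    dx = non_key_centre[0] - key_centre[0]
    dy = non_key_centre[1] - key_centre[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def velocities(pos1, ang1, pos2, ang2, dt):
    """Linear speed (pixels/s) and angular velocity (deg/s) as simple
    frame-to-frame differences divided by the inter-frame time dt."""
    speed = math.hypot(pos2[0] - pos1[0], pos2[1] - pos1[1]) / dt
    dang = (ang2 - ang1 + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return speed, dang / dt

def direction(angle_deg):
    """Coarse compass direction from the heading, e.g. 90 -> north,
    270 -> south, matching the convention in the text."""
    names = ["east", "north", "west", "south"]
    return names[int(((angle_deg + 45.0) % 360.0) // 90.0)]
```

For example, a robot whose patch centres are vertically aligned (heading 90 degrees) is reported as going north, and one that moves 5 pixels in 0.5 s has a linear speed of 10 pixels/s.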
The other approach taken was normalizing the RGB colour planes to find the required parameters. The procedure for both approaches is the same; the only difference is the colour space used. The two approaches were compared in terms of processing time, post-processing required and the ability to detect secondary colours. It was concluded that the algorithm employing the HSV colour space was the better of the two.
Results
The project was able to meet all the expected outcomes. However, the processing time per frame was higher than expected, and attempts are currently being made to reduce it. The vision algorithm I developed will be combined with a PhD student's obstacle-avoidance algorithm to improve its overall efficiency and robustness.
Thesis summary