COIN SORTER ROBOT
ME547-Winter 2015
Christina Chen (20400887)
Fiona Khor (20392500)
Clement Kwok (20338174)
Jarret Sun (20358567)
Abstract
Using the A255 system available in the University of Waterloo Robotics lab, a ROS program was designed to allow a robot arm and an attached camera system to sort coins of different values to pre-destined locations.
The program detects coins as circles in the camera’s field of view, then extracts the location and size of each coin in order to move the coins according to their value. At the end of the program, the number of each coin sorted and their total face value are output as the final result.
1.0 Introduction
This report describes the process and presents the results of the coin-sorter robot programmed as part of the ME 547 course requirement.
A ROS program is written, involving both camera and robot arm functions, to perform the task of sorting coins based on their face value. The coins sorted are Canadian currency, namely toonies ($2.00), loonies ($1.00), quarters ($0.25), dimes ($0.10) and nickels ($0.05).
The goal of the project is to have the robot system sort coins into their pre-destined locations, count the number of each coin, and output the total face value after all the coins have been sorted.
2.0 System Details
The project uses the A255 robot system, consisting of a robot arm, a C500C controller, and a communication cable connecting to the teach pendant and workstation. A top-mounted camera captures continuous frames of the workspace. Safety features include a light curtain and plexiglass enclosures surrounding the robots.
The software for this project is written in C++ using OpenCV and ROS, running in a Linux environment.
The robot and camera systems are fixed in the workspace, so any frame transformation from the camera sensor to the end effector is constant for the duration that the program runs.
One limitation of the setup is that two robot arms share the same workspace; during testing, care must therefore be taken so that the two robots are not in danger of crashing into each other.
3.0 Problem Definition
The goal is to sort coins scattered in the camera’s field of view, based on their value, using the robot arm.
The coins are scattered randomly, so the program must be able to detect them each time it runs. The program must also be able to differentiate between coin types in order to select the correct location to sort each coin to.
4.0 Theory
The program takes the following steps to perform its function:
1. Analyze the image taken by the camera to detect coins. This step is performed using HoughCircles, one of the OpenCV shape detection functions. The input fed to the function can be either a still image taken from the camera or a continuous video feed.
2. Sort the coin locations. This step is performed because the robot may accidentally change the positions of other coins if the coins are handled in a random order. The coin furthest to the right in the robot’s base frame is sorted first, since the sorting destinations are all on the right. Sorting strategically from the right therefore avoids interference with the remaining coins.
3. Frame transformations. Since the sensor outputs locations in the pixel coordinates of the image or video, a frame transformation is performed to relay the real-world coordinates of each coin to the robot arm. The transformation matrix of the sensor frame with respect to robot B’s frame is used to determine the locations of the objects with respect to the robot’s frame.
4. Sort the coins. Using a cup as its end effector, the robot sweeps each coin to its pre-destined location based on the coin’s radius. As mentioned in the sorting step, the robot sweeps the coin furthest to the right first. This is further explained in Section 5.3.
5. Output the total number and value of the coins. Results are output at the end of the program; the face value of each coin and the total face value are calculated.
5.0 Implementation
5.1 Coin Detection
To detect coins in the camera’s field of view, an image is first captured with VLC player. The program then reads the image saved in the source folder and passes it to the function that detects the coins. The function used to detect circles (coins) in the image is HoughCircles in OpenCV [1]. The function’s arguments, such as the upper and lower thresholds for edge and centre detection and the minimum and maximum radii to be detected, are tuned to ensure that no “ghost circles” appear, such as the ones shown in Figure 1 below.
The values used for the demonstration are shown below. The arguments are, in order: the source image, the vector that stores the centre coordinates and radius of each detected circle, the detection method, the inverse ratio of the accumulator resolution, the minimum distance between circle centres, the upper threshold for edge detection, the threshold for centre detection, the minimum radius, and the maximum radius.
HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1,
             src_gray.rows / 20, 135, 18, 4, 10);
If the thresholds for edge and centre detection are set too high, for example 200 and 100 respectively, no coins are detected. If the values are set too low, such as 100 and 10, the letters printed on the sheet of paper next to the workspace are detected as coins alongside the actual coins present in the workspace. Working values of 135 and 18 were found by trial and error to be the best thresholds for coin detection.
To further prevent false circles from appearing, the minimum and maximum radii are specified as 4 and 10 pixels respectively, which covers the range of coins to be sorted.
Figure 1: Ghost circles
The HoughCircles function outputs the centre location of each coin and its radius in a vector named circles. This vector is then passed to the main program, where the coins are sorted based on their x-axis values (from right to left as shown in Figure 1).
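A minimal, self-contained sketch of this detection step is given below, assuming OpenCV 2.4 as cited in [1]. The snapshot file name and the GaussianBlur smoothing step are illustrative additions (the report does not state whether smoothing was applied); the HoughCircles parameters are the ones listed above.

#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

int main()
{
    // Read the snapshot saved by VLC and convert it to grayscale for HoughCircles.
    cv::Mat src = cv::imread("workspace_snapshot.png");   // hypothetical file name
    if (src.empty()) return 1;
    cv::Mat src_gray;
    cv::cvtColor(src, src_gray, CV_BGR2GRAY);
    cv::GaussianBlur(src_gray, src_gray, cv::Size(9, 9), 2, 2);  // smoothing reduces spurious circles

    // Same parameters as the demonstration call: dp = 1, minDist = rows/20,
    // edge threshold = 135, centre threshold = 18, radius range 4-10 px.
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1,
                     src_gray.rows / 20, 135, 18, 4, 10);

    // Each entry holds (x, y, radius) in pixel coordinates.
    for (size_t i = 0; i < circles.size(); ++i)
        std::printf("coin %d: x = %.1f, y = %.1f, r = %.1f px\n",
                    (int)i, circles[i][0], circles[i][1], circles[i][2]);
    return 0;
}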
5.2 Frame Transformation
Using the variables given in the vision lab [2], the focal length of the camera is f = 4.3 mm and the projection distance of the coins from the camera, w, is 994.3 mm [2]. The height and width of a pixel, px and py, are both 0.0081 mm [2]. From robot B’s reference frame to the camera’s reference frame, a rotation of 180° about the x axis and 90° about the z axis is applied, together with an (x, y, z) translation of (480, 20, 990) mm [2]. The resulting transformation matrix, which describes the location of the camera frame with respect to robot B’s reference frame, is given in [2].
The transformation matrix that describes the location of the sensor frame with respect to the camera frame is likewise given in [2].
Multiplying the two matrices together gives the transformation matrix that describes the location of the sensor frame with respect to robot B’s reference frame [2].
Once this transformation matrix has been determined, the location of an object with respect to the sensor frame can be determined from the projection equations below [2].
The image taken is 640 x 480 pixels, and the object distances in pixels returned by the HoughCircles function are measured from the bottom left corner of the picture. The distances are therefore translated to originate from the centre of the picture by subtracting 320 pixels from the x distance and 240 pixels from the y distance. To convert the distances from pixels to millimetres, both the x and y distances are multiplied by -0.0081 mm, accounting for the fact that the x and y directions point in the negative i and j directions, as shown in Figure 2.
Figure 2: Sensor coordinate frame and dimensions of
image from center [2]
By translating the origin of the image to its centre and converting the distances from pixels to millimetres, the location of each object with respect to the sensor frame can be determined. Only the x and y locations of the coins are considered, as the HoughCircles function only outputs the x and y locations of the coins along with their radii.
p_sx = u = ((4.3 mm − 994.3 mm) · p_x) / 4.3 mm

p_sy = v = ((4.3 mm − 994.3 mm) · p_y) / 4.3 mm
Once the p_s values are determined, the locations of the coins with respect to robot B’s reference frame are found by multiplying by the transformation matrix T_S^B [2]:

p_B = T_S^B · p_s
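The conversion described above can be summarised in a short sketch. It follows the report’s steps (centre the pixel coordinates, scale by −0.0081 mm, apply the (f − w)/f projection factor, then multiply by T_S^B), but the pixel coordinates are made-up example values and the identity matrix is only a placeholder for the actual T_S^B from [2], whose entries are not reproduced here.

#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    const double f  = 4.3;      // focal length, mm [2]
    const double w  = 994.3;    // projection distance from camera to workspace, mm [2]
    const double px = 0.0081;   // pixel width/height, mm [2]

    // Example pixel coordinates from HoughCircles (hypothetical values).
    double u_pix = 412.0, v_pix = 175.0;

    // Translate the origin to the image centre (640 x 480 image) and convert to mm.
    // The negative sign accounts for the sensor axes pointing in the -i / -j directions.
    // (The final run used 370 and 220 instead of 320 and 240 due to a camera offset, Section 6.)
    double xs = (u_pix - 320.0) * -px;
    double ys = (v_pix - 240.0) * -px;

    // Apply the (f - w)/f projection factor to form p_s as a homogeneous vector.
    cv::Mat ps = (cv::Mat_<double>(4, 1) << xs * (f - w) / f,
                                            ys * (f - w) / f,
                                            0.0, 1.0);

    // Placeholder for the sensor-to-robot-B transform; replace with the matrix from [2].
    cv::Mat T_SB = cv::Mat::eye(4, 4, CV_64F);

    cv::Mat pB = T_SB * ps;   // coin location in robot B's reference frame
    std::printf("pB = (%.1f, %.1f, %.1f) mm\n",
                pB.at<double>(0), pB.at<double>(1), pB.at<double>(2));
    return 0;
}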
In ROS, the moveRobot and goInZ functions from Lab 1 are then used to move the robot to the coins’ locations and sort them according to their radii. [3]
5.3 Robot Movement
The end effector of the robot is a paper cup, as shown in Figure 3 below. With a paper cup, the robot can easily cover a coin and slide it over to its destination.
Figure 3: End effector of Robot
Using the VLC media player, a snapshot of the entire workspace, with the coins spread across it, is taken from the real-time video. The image is then placed in the serialConnectA255 folder and the ROS code is executed. The HoughCircles function then uses the image to determine the locations of the coins in pixels as well as their radii.
The code reads the coin locations into a 2D array and sorts the array in a loop so that the coins furthest to the right are handled first, as shown in Figure 4 below (a minimal sketch of this ordering follows the figure). The rightmost coins are sorted first because the robot moves the coins to their destinations on the right of the workspace; this way, the robot does not sweep coins in a random order and knock into, or shift, the other coins in the process. If the locations of other coins were changed, the snapshot taken at the start would no longer correspond to the locations of the coins in the workspace.
Figure 4: Details of sorted array concept
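Below is a minimal sketch of the right-to-left ordering. The report sorts a 2D array inside a loop; std::sort with a comparison on the image x coordinate, used here, is an equivalent shorthand. Whether the rightmost coin has the largest or smallest image x depends on the camera orientation, so the descending comparison and the sample coordinates are assumptions.

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Order coins so that the rightmost one (assumed largest image x, i.e. the coin closest
// to the sorting destinations) is handled first, as illustrated in Figure 4.
static bool rightmostFirst(const cv::Vec3f &a, const cv::Vec3f &b)
{
    return a[0] > b[0];   // compare image x coordinates
}

int main()
{
    std::vector<cv::Vec3f> coins;                    // (x, y, radius) from HoughCircles
    coins.push_back(cv::Vec3f(120.f, 200.f, 5.f));   // hypothetical detections
    coins.push_back(cv::Vec3f(430.f, 180.f, 8.f));
    coins.push_back(cv::Vec3f(305.f, 260.f, 7.f));

    std::sort(coins.begin(), coins.end(), rightmostFirst);
    // coins[0] is now the rightmost coin and will be swept first.
    return 0;
}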
Figure 5: Robot B's coordinate frame in workspace [4]
The destinations for the toonies, loonies, quarters, dimes and nickels are set at -300 mm along the y-axis of the robot’s reference frame, as seen in Figure 5, and at 450 mm, 350 mm, 250 mm, 150 mm and 50 mm respectively along the x-axis of the robot’s reference frame (a small mapping sketch is given below).
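As a compact reference, the destination mapping above can be written as a small lookup. The helper below takes the coin name directly and is illustrative rather than the report’s actual code; the positions are the ones stated in this section.

#include <cstdio>
#include <string>

// Destination positions from Section 5.3: every coin is swept to y = -300 mm,
// and the x position depends on the coin type.
static double destinationX(const std::string &coin)
{
    if (coin == "toonie")  return 450.0;
    if (coin == "loonie")  return 350.0;
    if (coin == "quarter") return 250.0;
    if (coin == "dime")    return 150.0;
    return 50.0;                       // nickel
}

int main()
{
    const char *coins[] = { "toonie", "loonie", "quarter", "dime", "nickel" };
    for (int i = 0; i < 5; ++i)
        std::printf("%-8s -> (x, y) = (%.0f, -300) mm\n", coins[i], destinationX(coins[i]));
    return 0;
}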
Once the array has been sorted, its entries are passed to the moveRobot function used in Lab 1 [3], with the knowledge that the first x and y values are the location of the rightmost coin. The coin locations are translated so that the origin lies at the centre of the image, with the correct signs, and the transformation into robot B’s reference frame is applied. The array then contains the locations of the coins with respect to robot B’s reference frame.
The robot first moves to 20 cm above the first coin location it reads from the array. It then moves down by 8 cm using the goInZ function from Lab 1 so that the cup touches the ground and covers the coin. The robot sweeps the coin to its specific location depending on its radius; for example, if the radius is more than 6.8 pixels, the coin is recognised as a toonie and the robot moves it to the set toonie location. The robot arm then moves up by 8 cm and returns to the ready position before sorting the next coin it reads from the sorted array. A sketch of this sequence is shown below.
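The moveRobot and goInZ signatures below are assumptions standing in for the Lab 1 functions [3], whose real interfaces are not given in the report; the stubs only print the commanded motion, and the coin and ready-position coordinates are illustrative.

#include <cstdio>

// Stand-ins for the Lab 1 motion functions [3]; signatures are assumed, bodies just log.
static void moveRobot(double x_mm, double y_mm, double z_mm)
{
    std::printf("moveRobot -> (%.0f, %.0f, %.0f) mm\n", x_mm, y_mm, z_mm);
}

static void goInZ(double dz_mm)
{
    std::printf("goInZ     -> %.0f mm\n", dz_mm);
}

int main()
{
    // Hypothetical coin already expressed in robot B's frame; radius 8.1 px > 6.8 px -> toonie.
    double coinX = 180.0, coinY = -50.0;

    moveRobot(coinX, coinY, 200.0);   // move to 20 cm above the coin
    goInZ(-80.0);                     // lower 8 cm so the cup covers the coin
    moveRobot(450.0, -300.0, 120.0);  // sweep, at the lowered height, to the toonie lane (Section 5.3)
    goInZ(80.0);                      // lift the cup off the coin
    moveRobot(0.0, 0.0, 300.0);       // return toward the ready position (illustrative coordinates)
    return 0;
}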
Once the robot has sorted all the coins, the face value of each coin and the total face value are output in ROS. The robot then goes back to its ready position.
6.0 Results
The A255 robot system was programmed to determine where coins are placed in a specified workspace and to move the coins, from the rightmost to the leftmost, into specific positions based on their face value. Illustrated in Figure 6 is the still image provided to the robot for locating the coins. Using this image, the robot outputs the image seen in Figure 7, highlighting the coins and determining their values based on size.
Figure 6: Still Image for Coin Location
Figure 7: Robot Locating Coins
Once the robot has determined the locations of the coins, the robot arm moves to the rightmost coin and begins sorting it to its specified location. As mentioned in the implementation section, the coin locations were transformed to originate from the centre of the image. However, due to an offset in the camera, instead of subtracting 320 pixels in the x direction and 240 pixels in the y direction to centre the origin, 370 pixels and 220 pixels were subtracted instead. Figure 8 shows the first coin being covered by the end effector in preparation for sorting. After each coin has been sorted, the arm returns to the homed ready location, as seen in Figure 9.
Figure 8: Robot Arm Covering First Coin for Sorting
Figure 9: Robot Arm Return to Ready
The separate locations for the coins can be seen in Figure 10, where the robot has finished sorting the coins and has returned to the ready position. The toonie is located 300 mm from the robot’s base in the y direction, while the quarter is located 200 mm from the robot’s base in the y direction. At the end of the program, the number of each coin sorted and the total face value of all coins are displayed. The final results can be seen in Figure 11, where 1 toonie and 1 quarter were moved, totalling $2.25.
Figure 10: Final Location of Sorted Coins
Figure 11: Robot Output of Face Values
The robot produced the expected results. It was designed to sort the rightmost coin first, which it was able to identify and sort properly. Before sorting each coin, the robot correctly determined the coin’s type and location. At the end, the robot had moved all coins in the workspace to the specified sorting locations and output the correct total face value of the coins sorted.
6.1 Problems Faced
While the robot is capable of sorting all coin values, the final results only use toonies and quarters. As identified earlier, the HoughCircles function had the problem of “ghost circles”, requiring higher detection thresholds. When using HoughCircles to detect small coins such as dimes, the camera would detect additional circles from shadows or dirt on the workspace, and the robot would move to these locations and effectively sort nothing. Similarly sized coins were also reported as the same size by the camera; for example, loonies and toonies would be sorted to the same location despite being of different sizes. Because the camera could not reliably distinguish between similarly sized coins, loonies, dimes and nickels were left out of the final demonstration.
With the detection thresholds raised to avoid ghost circles, the effective radius resolution of the camera is 6.8 pixels: the radii of dimes, nickels and quarters all measure below 6.8 pixels, whereas the radii of toonies and loonies measure above it. The camera therefore sees dimes, nickels and quarters as one type of coin, and toonies and loonies as another.
The final demonstration was therefore completed using only toonies and quarters. No “ghost circles” were observed, and the difference in size between the toonies and quarters was large enough for the HoughCircles function to differentiate. The two coins are sorted using an if statement with the 6.8-pixel limit, as shown in Figure 12 (a small sketch follows the figure).
Figure 12: Sorting toonies and quarters
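A minimal sketch of this radius check, together with the coin count and face-value tally printed at the end of the run, is given below. The measured radii and variable names are illustrative, not the values from the demonstration.

#include <cstdio>
#include <vector>

int main()
{
    // Radii (in pixels) reported by HoughCircles for a run (hypothetical values).
    std::vector<double> radii;
    radii.push_back(8.4);
    radii.push_back(6.1);

    int toonies = 0, quarters = 0;
    for (size_t i = 0; i < radii.size(); ++i)
    {
        if (radii[i] > 6.8)   // radius above 6.8 px -> toonie
            ++toonies;
        else                  // otherwise treated as a quarter in the final run
            ++quarters;
    }

    double total = 2.00 * toonies + 0.25 * quarters;
    std::printf("Toonies: %d  Quarters: %d  Total: $%.2f\n", toonies, quarters, total);
    return 0;
}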
7.0 Recommendations and Future Work
It is recommended to replace the camera with a higher-resolution camera, so as to avoid the detection of false coins and to increase the accuracy of face-value determination.
A smaller gripping end effector would allow the coins to be picked up instead of swept under a cup, which would allow the coins to be scattered more closely to each other.
Currently, the program reads coins from a still image taken at the beginning. If a video stream were used instead, it would provide feedback, allowing the program to look for new coins and run continuously (a sketch of such a loop follows).
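A rough sketch of that continuous mode is shown below: frames are grabbed from a cv::VideoCapture stream instead of a single VLC snapshot, using the same HoughCircles parameters as Section 5.1. The device index, loop structure and exit key are assumptions, not part of the current program.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);            // assumed camera device index
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray;
    std::vector<cv::Vec3f> circles;
    while (true)
    {
        if (!cap.read(frame)) break;    // grab the next frame from the stream
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 1,
                         gray.rows / 20, 135, 18, 4, 10);
        if (!circles.empty())
        {
            // New coins detected: hand the list to the existing sorting routine here.
        }
        cv::imshow("workspace", frame);
        if (cv::waitKey(30) == 27) break;   // press Esc in the preview window to stop
    }
    return 0;
}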
References
[1] OpenCV Dev Team (2015). Hough Circle Transform. OpenCV 2.4.11.0 documentation. http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html. Retrieved March 2015.
[2] Guler, S. (2015). Implementation of the Vision Part (Lab 1 Part 2). Waterloo: University of Waterloo.
[3] Ahuja, S. (2013). main.cpp. Waterloo, Ontario, Canada.
[4] Fidan, B., & Guler, S. (2015). ME547: Robot Manipulators: Kinematics, Dynamics, and Control. Waterloo: Mechanical and Mechatronics Engineering, University of Waterloo.