This document describes a project to design a ball-catching manipulator using a 2-degree-of-freedom (DOF) robotic arm and webcam vision. The project involves selecting a 2 DOF manipulator, developing the inverse kinematics, generating trajectories to catch the ball using projectile motion equations, detecting the ball in webcam footage using image processing, and transferring coordinate frames. Future work includes implementing the simulation on a Dynamixel robotic arm and using a Kinect for 3D detection and depth sensing.
1. Grad Students:
Gaurav Shah, Student ID: 1001152161
Ravikant Kattamudi, Student ID: 1001148933
Instructors:
Dr. Dan O. Popa
Dr. Indika Wijaysinghe
Ball catching manipulator using a webcam and projectile motion
Dept. of Electrical Engineering – University of Texas at Arlington
5325 – Robotics, Spring 2015
2. Introduction:
• This project implements a ball-catching manipulator in 2D space with 2 degrees of freedom.
• It demands a tight interplay of skills in mechanics, control, planning and visual sensing.
• The manipulator must reach the necessary point in space, and in time.
• Ball catching has been used for almost 20 years as a challenging benchmark system for developing and testing key robotics technologies.
3. Steps and Divisions of Project
1. Selection of manipulator – 2 DOF
2. Inverse kinematics approach in simulation
3. Trajectory generation
4. Estimation of projectile motion – selection of a proper mathematical algorithm
5. Transferring coordinate frames
6. Passing future coordinates to inverse kinematics
7. Webcam-based thresholding of the ball (image processing)
8. Visual servoing (future work)
6. Selection of Manipulator
• For a simple simulation we selected a 2 DOF manipulator for our project.
• Moving a robotic arm from one point to another requires the values of the joint variables.
• Using forward and inverse kinematics, the movement of the arm is computed and used to display the pose of the robotic arm.
• Once the input is obtained (i.e. a catching point on the trajectory, derived from the visual data), the arm positions itself at that point to catch the ball.
7. Inverse Kinematics approach
• First we define the set of coordinates at which the end effector is to be positioned.
• We then apply inverse kinematics to find the joint variables of the arm.
• Inverse kinematics generally admits multiple solutions (a robot can reach a point in space in various ways), whereas forward kinematics gives a unique pose for given joint angles.
• Selecting a single solution branch for the joint angles removes the ambiguity in choosing a configuration.
8. Inverse Kinematics approach
DH Table:

Link | a  | alpha | d | theta
  1  | l1 |   0   | 0 | theta1
  2  | l2 |   0   | 0 | theta2

l1, l2 = lengths of the arm links
theta1, theta2 = joint variables; workspace: [180°, 180°]
The last column of the transformation matrix gives the position of the end effector, and inverse kinematics gives the joint variables.
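The closed-form inverse kinematics for this 2-link planar arm can be sketched as follows. This is a minimal Python version (the project itself works in MATLAB); the elbow-down solution branch and the function names are assumptions for illustration:

```python
import math

def ik_2dof(x, y, l1, l2):
    """Inverse kinematics for a planar 2-link arm.

    Returns (theta1, theta2) in radians for the elbow-down solution,
    or None if (x, y) is outside the reachable workspace.
    """
    r2 = x * x + y * y
    # Law of cosines for the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        return None  # target unreachable
    theta2 = math.acos(c2)  # elbow-down branch
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def fk_2dof(theta1, theta2, l1, l2):
    """Forward kinematics: end-effector position from joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Running the forward kinematics on the returned angles recovers the target point, which makes the branch choice easy to verify in simulation.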
9. Ball Dynamics: Trajectory Interpretation
• Moving the manipulator arm from some initial position {Ta} to some final desired position {Tb}, passing through via points.
• A trajectory is the time history of position, velocity and acceleration.
• This is an important step for path planning and estimation of future points to accurately catch the ball in 2D space.
• Our requirement: predict the catch point BEFORE THE BALL REACHES THE WORKSPACE.
"A parabolic trajectory is predicted with a recursive least squares algorithm."
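The recursive least squares fit of a parabola to the incoming ball positions can be sketched like this. A minimal Python/NumPy version (the project itself uses MATLAB); the class name, the initial covariance value, and the forgetting factor are illustrative assumptions:

```python
import numpy as np

class RLSParabola:
    """Recursive least-squares fit of y = a + b*t + c*t**2.

    Each observed (t, y) sample updates the coefficient estimate
    without refitting the whole history.
    """
    def __init__(self, forgetting=1.0):
        self.w = np.zeros(3)      # coefficients [a, b, c]
        self.P = np.eye(3) * 1e6  # large initial covariance
        self.lam = forgetting

    def update(self, t, y):
        phi = np.array([1.0, t, t * t])     # regressor vector
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)  # RLS gain
        self.w += k * (y - phi @ self.w)    # correct the estimate
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.w

    def predict(self, t):
        return self.w[0] + self.w[1] * t + self.w[2] * t * t
```

After a handful of detections the predicted curve can be evaluated at any future time, which is what lets the catch point be chosen before the ball reaches the workspace.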
10. Projectile Motion
• To model this system, we first feed in manual initial data – velocity, height and angle.
• This verifies the estimation method before we supply actual data from the Windows webcam.
• Terms involved:
-- Time of flight
-- Range
-- Impact velocity
-- Projectile motion equation
11. Equations for Projectile Motion
Time of flight (launch speed u, angle θ, release height h):
t = ( u·sin θ + √( u²·sin²θ + 2gh ) ) / g
Trajectory equation:
y = y0 + u·sin(θ)·t − (1/2)·g·t²
Impact velocity:
v = √( u² + 2gh )
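The three quantities above can be computed directly; a minimal Python sketch (the project itself uses MATLAB), assuming g = 9.81 m/s² and angles in radians:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def time_of_flight(u, theta, h):
    """Flight time for launch speed u, angle theta, release height h."""
    uy = u * math.sin(theta)
    return (uy + math.sqrt(uy * uy + 2.0 * G * h)) / G

def height_at(t, u, theta, y0):
    """Vertical position at time t (the trajectory equation)."""
    return y0 + u * math.sin(theta) * t - 0.5 * G * t * t

def impact_speed(u, h):
    """Speed at ground impact, from energy conservation."""
    return math.sqrt(u * u + 2.0 * G * h)
```

With the example values used later in the slides (h = 5 m, u = 8 m/s, θ = 45°), the time of flight comes out at roughly 1.7 s, consistent with the 1.9 s ballpark quoted in the timing calculations.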
12. Timing Calculations
Basic logical steps for time-based dynamics. Terms:
• Time of flight (launch point to landing point, end to end)
• Time to predict the ball path (measured via the MATLAB timestamp)
• Maximum time for the end effector to reach the catch point on the trajectory
• Time actually taken (considering air drag, error, stability and Dynamixel factors)
Example:
• Time of flight: 1.9 s
• Prediction time: < 0.5 s
• Max time for end effector: < 1.4 s
• Actual time: depends on the practical setup
13. Projectile motion for height = 5 m, initial velocity = 8 m/s, throwing angle = 45°
[Plot: prediction of the object path with least-squares approximation]
14. • Real-time data are computed from visual tracking, not from manual input.
• Path interpretation is carried out by taking the first few points of the ball path and reconstructing the whole trajectory (in a real environment, air drag must also be taken into consideration).
• The initial velocity is estimated from the distance travelled between the first two detected points and the time difference between the two snapshots.
• This way, the optimal catch point is derived and the manipulator reaches it before the ball does.
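The two-snapshot velocity estimate described above can be sketched in a few lines. A minimal Python version (the project itself uses MATLAB), assuming positions already converted to metres and a drag-free ballistic model:

```python
G = 9.81  # m/s^2

def initial_velocity(p0, p1, dt):
    """Estimate ball velocity (vx, vy) from two consecutive detections
    p0 and p1, taken dt seconds apart."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return vx, vy

def predict(p1, v, t):
    """Ballistic position t seconds after the second detection (no drag)."""
    x = p1[0] + v[0] * t
    y = p1[1] + v[1] * t - 0.5 * G * t * t
    return x, y
```

In practice more than two frames would be averaged (or fed to the recursive least squares fit) to reduce the effect of centroid noise on the velocity estimate.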
15. Object detection using image processing toolbox
Goal: to visually track a red ball thrown in a region and obtain its X-Y coordinates for the manipulator.
• After a literature survey, we focused on colour rather than edges and shapes.
• We also worked with the RoboRealm robotics animation software, which helped calibrate the methods needed to detect the red ball, such as:
• Subtracting the grayscale data from the original RGB image to isolate red. This highlights all red objects in the frame, so we used median filtering followed by thresholding.
• A threshold value of 0.17 proved most accurate for detecting red regions above 400 pixels.
16. Object detection Algorithm
1. Extracting frames
2. Extracting colour components
3. RGB to grayscale conversion
4. Noise elimination
5. Elimination of small objects
6. Background subtraction
MATLAB implementation: captured 20 frames at 30 Hz, with an unlimited number of frames per trigger.
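The colour-subtraction and thresholding step can be sketched as follows. A minimal Python/NumPy version (the project itself uses the MATLAB toolbox); the median-filtering stage is omitted for brevity, and the luminance weights are the standard ITU-R BT.601 values:

```python
import numpy as np

def detect_red(rgb, threshold=0.17, min_pixels=400):
    """Red-object mask: subtract the grayscale image from the R channel,
    then threshold at the 0.17 level found empirically on the webcam feed.

    rgb: float array of shape (H, W, 3), values in [0, 1].
    Returns a boolean mask, or None if the detected blob is too small.
    """
    gray = rgb @ np.array([0.2989, 0.5870, 0.1140])  # luminance
    redness = rgb[..., 0] - gray                      # red minus gray
    mask = redness > threshold
    if mask.sum() < min_pixels:
        return None  # reject small or noisy detections
    return mask
```

The 400-pixel minimum implements the "elimination of small objects" step in the list above: isolated red speckles never reach the size of the ball's blob.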
17. Getting X-Y Co-ordinates
Goal: centroid analysis.
• Identify the bounding box (contour) of the desired object.
• The area is the pixel count inside the tracked bounding box.
• The centroid is the geometric centre of the bounding box (computed with an inbuilt function); the centroid gives the X-Y coordinates.
• The data are stored in an array, which is the input for trajectory planning.
For example, for 10 frames, 10 X-Y coordinates in total are stored in an array and displayed in the command window. This data depends on the frame count and the centroid.
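The centroid computation from a binary detection mask can be sketched directly. A minimal Python/NumPy stand-in for the inbuilt MATLAB function (regionprops-style), using the mean of the mask's pixel coordinates:

```python
import numpy as np

def centroid_and_area(mask):
    """Centroid (x, y) and pixel area of a boolean detection mask.

    Returns None when the mask is empty.
    """
    ys, xs = np.nonzero(mask)  # row (y) and column (x) indices
    if xs.size == 0:
        return None
    return (xs.mean(), ys.mean()), xs.size
```

Appending one such (x, y) pair per frame builds the coordinate array that feeds the trajectory planner.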
18. Visual servoing based on the Jacobian
As an example case, when 2D image data are used to move a 3D manipulator with 2 webcams:
Let u be the image coordinates of the end effector. The angle of the i-th joint is denoted θi, and the set of these angles is θ = [θ1, θ2, ···]ᵀ. Then:
u̇ = J θ̇
where J is the Jacobian between image feature velocities and manipulator joint velocities.
The control input that matches the data from both cameras is:
θ̇ = λ Jᵀ (J Jᵀ)⁻¹ (u_d − u)
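One servoing update from this control law can be sketched in a few lines. A minimal Python/NumPy version, assuming J has full row rank so that JJᵀ is invertible; the gain λ = 0.1 default is illustrative:

```python
import numpy as np

def servo_step(J, u, u_d, lam=0.1):
    """One visual-servoing update: joint velocity that drives the image
    feature u toward the desired feature u_d through the image Jacobian J.

    Implements theta_dot = lam * J^T (J J^T)^-1 (u_d - u).
    """
    Jpinv = J.T @ np.linalg.inv(J @ J.T)  # right pseudo-inverse of J
    return lam * Jpinv @ (u_d - u)
```

The right pseudo-inverse applies when there are more joints than image features; it picks the minimum-norm joint velocity that produces the desired image-plane motion.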
19. Motion control
• A computed-torque feedforward controller combined with a decentralized PD controller is used:
-- Feedforward for motion control
-- Feedback from the robot system
• Assumptions for simplicity:
-- No air drag
-- Rotational ball velocity = 0
Ref: Koichiro DEGUCHI, Hironari SAKURAI, and Shun's paper
22. Alternative methods
• Optimizing prediction with recursive least squares filters or Kalman filters. (These are dedicated estimators that minimise error in the equations of projectile-motion dynamics, and they handle linear as well as nonlinear terms of motion.)
• Training the manipulator:
1. The machine uses its cameras to predict the path of a flying object by imitating repeated human actions, and grabs it in less than five hundredths of a second.
2. Model learning and inference through ANFIS (adaptive neuro-fuzzy inference system).
25. Servo motors in the actual manipulator
Dynamixel AX-12
• The AX-12A robot servo has the ability to track its speed, temperature, shaft position, voltage, and load.
• The control algorithm allows you to control the speed and strength of the motor's response.
• Operating voltage: 12 V
• Resolution: 0.29°
• Controller: USB to Dynamixel (easily operated with LabVIEW)
26. Limitations of our work
• Because of time limitations we omitted air drag calculation and obstacle avoidance in trajectory prediction.
• Our focus was mainly on object tracking, trajectory planning and the dynamics of motion for inverse kinematics.
• Forces and torques are yet to be calculated once optimal models have been derived.
• This is a 2-dimensional, 2 DOF system, which does not give results in 3D (as a Kinect depth sensor would).
• A feedforward loop is needed to better estimate the trajectories.
"We ended our project with an experimental simulation and a discussion of future work."
27. Applications
• Entertainment: circus acts, ball juggling
• Sports
• Pick-and-place machines in SMT manufacturing
• Sorting of objects
• Industrial applications: parts insertion in mass production
28. Conclusion
The learning outcomes of this project work are a full understanding of:
- Kinematics
- Trajectory generation
- Path estimation
- Image processing for visual servoing
- The Robotics Toolbox
- Basic procedures for n-DOF manipulators: workspace, joint configuration, and applying the dynamics equations
29. Future work
• Implementation of this simulation on the LabVIEW-controlled Dynamixel arm available in the NGS lab
• Instead of 2D detection, detect the ball with a Microsoft Kinect, using its depth sensor for distance measurement and more accurate dynamics
• Use of force/torque sensors in the end effector
• Use of more precise algorithms for trajectory estimation
• Trying a predictive controller or a fuzzy controller