SciTech2016
1. Adaptive Control of a Camera Projection System using Vision-Based Feedback
Chwen-Kai Liao, Matt J. Bender, Andrew J. Kurdila, Steve Southward
Virginia Tech
2. Motivation
• Over the past few years, researchers have been projecting images onto 3D objects (projection mapping) to make the imagery more vivid.
• Instead of keeping the projector static, which restricts the projection area, we make the projector mobile by mounting it on a robot arm and applying vision-based feedback via a camera.
• The projection system should compensate for the projector mounting error, and its stability should be invariant to different types of cameras.
3. Outline
• Problem statement
• Robot kinematics
• Uncertainties in our system
• Controller: align to a stationary object in camera pixel space
• Controller: track a moving object in camera pixel space
• Simulation: align to a stationary object in camera pixel space
4. Problem statement
• The camera and projector are rigidly fixed on the robot end effector
• The camera pixel coordinate frame is our task space
• The task space variable is s; the desired task space variable is s* (stacked feature-point pixel coordinates)
• The task space tracking error is e_s = s - s*
(Figure: camera pixel coordinates)
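The error definition on this slide can be sketched as a short computation; the stacking order of the two feature points and the numeric values below are hypothetical:

```python
import numpy as np

# Stacked pixel coordinates of two feature points: s = [xi1, eta1, xi2, eta2]
s = np.array([420.0, 610.0, 455.0, 640.0])       # measured from the camera
s_star = np.array([400.0, 600.0, 450.0, 650.0])  # desired, static in pixel space

# Task-space tracking error e_s = s - s*
e_s = s - s_star  # per-coordinate pixel error in the xi and eta directions
```

Driving every entry of e_s to zero is the control objective for the stationary-target case.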
5. Problem statement
• When the desired feature points are static in camera pixel space, apply feedback torque on the robot joints so the feature points align with them.
• When the desired feature points move in pixel space, apply feedback torque on the robot joints so the feature points track them.
7. Uncertainties: in camera mounting, we consider kinematic uncertainty only
1. Camera extrinsic uncertainty: assuming no displacement error, the misalignment between the actual camera pose and the ideally mounted camera pose is described by the twist angle alpha
2. Camera intrinsic uncertainty: the camera pixel coordinate scaling factors su and sv describe the image distortion
8. Uncertainty case study
1. Case 1: only the camera intrinsic parameters su and sv are uncertain
2. Case 2: only the camera mounting twist angle alpha is uncertain
3. Case 3: both the intrinsic parameters su, sv and the mounting twist angle alpha are uncertain
9. Proposed adaptive control solution for desired static feature points
• Robot equation of motion
• Robot kinematic Jacobian and its regressor form, lumping all uncertain elements (su, sv, alpha) into a parameter vector theta_k
• Parameter update law
• Feedback law
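The structure of the update and feedback laws (whose exact forms come from the talk's Lyapunov analysis and did not survive this transcript) can be sketched as follows; the gain matrices Gamma, Kp, Kd and the regressor Y are hypothetical placeholders:

```python
import numpy as np

def adapt_step(theta_hat, Y, e_s, Gamma, dt):
    """Gradient-type parameter update for the lumped uncertain vector
    theta_k = [su, sv, alpha]: theta_hat_dot = -Gamma @ Y.T @ e_s.
    (Structural sketch only; not the talk's exact law.)"""
    return theta_hat - dt * (Gamma @ Y.T @ e_s)

def feedback_torque(J_hat, e_s, qdot, Kp, Kd):
    """Jacobian-transpose feedback with joint damping:
    tau = -J_hat.T @ Kp @ e_s - Kd @ qdot."""
    return -J_hat.T @ (Kp @ e_s) - Kd @ qdot

# Toy dimensions: 3 joints, 4 pixel coordinates, 3 uncertain parameters
theta_hat = np.array([500.0, 500.0, 0.0])  # initial estimates of su, sv, alpha
Y = np.zeros((4, 3))
e_s = np.ones(4)
theta_next = adapt_step(theta_hat, Y, e_s, Gamma=0.1 * np.eye(3), dt=0.01)
tau = feedback_torque(np.ones((4, 3)), e_s, np.zeros(3), np.eye(4), np.eye(3))
```

The point of the regressor form is that the unknown parameters enter linearly through Y, which is what makes a gradient-style update law tractable.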
10. Proposed adaptive control solution for desired static feature points: block diagram, with the actual robot model as a black box
11. Proposed adaptive control solution for mobile desired feature points
• Robot equation of motion
• Robot kinematic Jacobian and its regressor form
• Reference signal (all of its terms are measurable)
• Parameter update law
• Feedback law
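The role of the reference signal (mapping the desired pixel-space trajectory and the current error into joint space) can be sketched as below; the specific pseudoinverse form with gain Lam is a common construction in adaptive visual tracking and is an assumption here, since the slide's equations did not survive extraction:

```python
import numpy as np

def reference_velocity(J_hat, s_star_dot, e_s, Lam):
    """Joint-space reference velocity built from the desired feature-point
    velocity s_star_dot and the pixel error e_s:
        qdot_r = pinv(J_hat) @ (s_star_dot - Lam @ e_s)
    All quantities on the right-hand side are measurable, as the talk notes."""
    return np.linalg.pinv(J_hat) @ (s_star_dot - Lam @ e_s)

# Toy example: identity Jacobian for a 3-joint arm, 3 task coordinates
qdot_r = reference_velocity(np.eye(3),
                            np.array([1.0, 0.0, 0.0]),   # desired trajectory rate
                            np.array([0.0, 0.5, 0.0]),   # current pixel error
                            2.0 * np.eye(3))             # error gain Lam
```

Feedforward of s_star_dot is what distinguishes the tracking controller from the static-alignment one.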
12. Proposed adaptive control solution for mobile desired feature points: block diagram, with the actual robot model as a black box
14. Case 1: Intrinsic parameters uncertain
• The twist angle alpha is known in this case
• True camera intrinsic parameters: su = 300, sv = 350
• Initial estimates of the intrinsic parameters: su = sv = 500
(Figure: camera pixel coordinates showing the initial and desired feature-point locations)
15. Case 1 closed-loop simulation results
(Figures: joint torques in N-m, joint angles q1-q3 in degrees, and pixel errors vs. time over 0 to 0.2 s; the pixel errors converge to zero and the peak torque on each joint stays below 20 N-m)
16. Case 2: Extrinsic parameter uncertain
• True twist angle: alpha = 2.3 degrees
• The camera intrinsic parameters are known
• Initial estimate of the twist angle: alpha = 0 degrees
(Figure: camera pixel coordinates showing the initial and desired feature-point locations)
17. Case 2 closed-loop simulation results
(Figures: joint torques in N-m, joint angles q1-q3 in degrees, and pixel errors vs. time over 0 to 10 s; the pixel errors go to zero and each joint torque stays below 40 N-m)
18. Case 3: Intrinsic and extrinsic parameters uncertain
• Initial estimate of the twist angle: alpha = 0 degrees; true twist angle: alpha = 2.3 degrees
• Initial estimates of the intrinsic parameters: su = sv = 500; true values: su = 300, sv = 350
(Figure: camera pixel coordinates showing the initial and desired feature-point locations)
19. Case 3 closed-loop simulation results
(Figures: joint torques in N-m, joint angles q1-q3 in degrees, and pixel errors vs. time over 0 to 4 s; the pixel errors chatter more than in Cases 1 and 2 but still converge to zero, with joint torques below 40 N-m)
20. Conclusion
• The controller compensates for the uncertainty in the system and drives the projected image to the desired location in camera pixel space.
• The controller meets the requirement that the mounting error and camera pixel scaling factors do not affect system stability.
• Future work includes simulating the tracking of moving desired feature points in camera pixel space, and implementing the controller on the robotic hardware described herein.
Editor's Notes
Over the past few years, people have been working on 3D projection mapping to make projected images more vivid.
Instead of keeping the projector static, we make it mobile by mounting it on a robot arm and, in addition, apply vision-based feedback by installing a camera.
As requirements, the projection system should compensate for the projector mounting error, and its stability should be invariant to different types of cameras.
Done
In this presentation, we will go through the following sections: ……
Done
In our design, the camera and projector are rigidly fixed together on a robot end effector. Whether the target is stationary or moving, we select two corners of the projected image as its feature points.
The camera pixel space is set to be our task space. Because the feature points are in the camera's view, we take their camera pixel coordinates as the task space variables, stack them into a vector, and denote it "s".
We also define a vector s* of the desired feature-point pixel coordinates, which indicates where we want the feature points to be in pixel space.
Subtracting s* from s gives the pixel error in the xi and eta directions for each feature point.
Done
Here we separate the control problem into two categories: in one, the target is static in camera pixel space; in the other, the target moves along a time-varying trajectory in camera pixel space.
In both cases, we require the controller to generate feedback torques on the robot joints so that the pixel error between the feature points and the desired feature points goes to zero.
For the stationary case, we define a desired projected-image position s* that is static in camera pixel space. The controller must generate feedback torque on each robot joint so that the projected-image position s in camera pixel space aligns with the desired position s*.
For the moving-target case, the desired pixel-space position of the projected image is a time-varying trajectory. The controller should generate feedback torque on each robot joint so that the projected image tracks this trajectory in camera pixel coordinates.
We use an ABB IRB 120 in our design, with the camera and projector rigidly fixed together at the end effector of the robot arm. Because we drive only the last three robot joints, the picture on the left highlights those joints in grey. In our study, the ideally mounted camera pose is denoted C~ and the actual camera pose is denoted C.
On the right side of the slide, a pin-hole camera is used as the model for the camera mounted on the end effector. The pin-hole camera model has two image planes: the retinal coordinates and the pixel coordinates. The image in pixel coordinates is what we actually get from the camera, and it is distorted; to describe the distortion, we use an image scaling factor, su and sv, in each pixel-coordinate direction.
To describe the actual camera pose, a rotation matrix gives the orientation between the perfectly mounted camera C~ and the actual camera C, and a displacement vector from C~ to C gives the offset between them.
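The pin-hole mapping with per-axis pixel scaling can be sketched as below; the principal point (u0, v0) and the focal length f are assumptions added to make the example self-contained:

```python
import numpy as np

def project_pinhole(p_cam, f, su, sv, u0, v0):
    """Pin-hole projection of a 3-D point in the camera frame to pixel
    coordinates, with per-axis scaling factors su, sv (the intrinsic
    parameters discussed in the talk). u0, v0 are the principal point."""
    x, y, z = p_cam
    u = su * f * x / z + u0  # xi pixel coordinate
    v = sv * f * y / z + v0  # eta pixel coordinate
    return np.array([u, v])

# Point 2 m in front of the camera, offset in x and y
px = project_pinhole((1.0, 2.0, 2.0), f=1.0, su=300.0, sv=350.0, u0=0.0, v0=0.0)
```

Because su and sv multiply the projection directly, an error in either one rescales the whole pixel image, which is exactly the intrinsic uncertainty Cases 1 and 3 study.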
Done
In our system, we consider only kinematic uncertainty. As the yellow regions show, there are two types of uncertainty: camera extrinsic uncertainty and camera intrinsic uncertainty.
For the extrinsic uncertainty, we assume there is no displacement error between the ideally mounted and actual camera positions; in our study, the uncertainty lies in the misalignment between C~ and C, which means the orientation of the camera is uncertain.
For the intrinsic uncertainty, we use the pixel scaling factors su and sv to describe the uncertain image distortion, which can vary between different types of camera.
Done
In our simulation, we present three case studies:
Only the camera intrinsic parameters, the pixel-coordinate scaling factors, are uncertain.
Only the camera extrinsic parameter, the mounting twist angle alpha, is uncertain.
In the last case, both the camera intrinsic and extrinsic parameters are uncertain.
Done
We define static desired feature points in camera pixel space and then drive the robot arm so that the feature points of the projected image align with the desired locations.
We use the conventional robot equation of motion and write the robot kinematic Jacobian in regressor form, where theta_k is the vector containing all uncertain terms su, sv, and alpha.
From the Lyapunov stability analysis, we construct the parameter update law and the computed-torque feedback law, which together render the system stable.
Done
In block-diagram form, the orange part is the actual robot model, which we do not know in advance.
The system input is s*, the static desired feature-point location in camera pixel coordinates. We measure the current feature-point pixel coordinates s from the camera and subtract s* from s to get the pixel error e_s. Feeding this error to the controller, the controller computes the torque required at each joint to drive the pixel error to zero.
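The loop this block diagram describes can be sketched in Python; measure_pixels, apply_torque, and the toy proportional controller are hypothetical stand-ins for the camera measurement, the black-box robot, and the adaptive law:

```python
import numpy as np

def control_loop(s_star, measure_pixels, apply_torque, controller, steps):
    """Closed loop from the block diagram: measure s, form e_s = s - s*,
    let the controller compute joint torques, apply them, and repeat."""
    for _ in range(steps):
        s = measure_pixels()      # current feature points from the camera
        e_s = s - s_star          # pixel error
        tau = controller(e_s)     # controller computes the required torque
        apply_torque(tau)         # drive the (black-box) robot
    return e_s

# Toy closed loop: a fake "robot" whose pixels move in proportion to torque
state = {"s": np.array([10.0, -10.0])}
loop_err = control_loop(
    s_star=np.zeros(2),
    measure_pixels=lambda: state["s"],
    apply_torque=lambda tau: state.__setitem__("s", state["s"] + tau),
    controller=lambda e: -0.5 * e,  # simple proportional law for the sketch
    steps=20)
```

Even this toy loop shows the essential feedback structure: only pixel-space quantities are measured, and all corrections enter through joint torques.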
Done
After finishing the controller for static desired feature points, we propose an adaptive controller for moving desired feature points.
The robot equation of motion and kinematic regressor form are the same as in the stationary-target scenario, but the Lyapunov stability analysis now requires a reference signal to achieve system stability. Note that all terms in the reference signal are measurable.
By applying the parameter update law and feedback law, the system is proven stable.
Done
In the block diagram for the moving-target scenario, the target now moves along a previously known trajectory in camera pixel coordinates, denoted by the time-dependent s*.
We still measure the projected-image feature points in camera pixel coordinates; subtracting s* from s gives the pixel error e_s, and the controller generates the required torque based on it.
Done
In the simulation section, we apply different combinations of uncertainties with a static target in camera pixel coordinates.
As the figure shows, we require the projected image, the yellow square, to rotate 45 degrees; the initial range from the camera to the wall is set to 2 meters.
When only the camera pixel-coordinate scaling factors su and sv are uncertain, the estimates for both are 500, while the true values are su = 300 and sv = 350. The right side of the figure shows the trajectory in camera pixel space: the projected image rotates to match the desired configuration.
In the pixel error plot, there are four data sets, because two feature points give four camera pixel coordinate values in total.
As the results show, the pixel errors are driven to zero, and the maximum torque on each joint is no larger than 20 N-m.
For Case 2, where the camera mounting twist angle is uncertain, we estimate the twist angle as 0 degrees, while the true angle is 2.3 degrees. The right side of the figure shows the system trajectory in camera pixel coordinates.
As the results show, the pixel errors go to zero and the applied torque on each joint is less than 40 N-m.
In Case 3, uncertainty in both the camera scaling factors and the mounting error is posed. The initial estimate of the twist angle is 0 degrees and both scaling factors are 500, while the true twist angle is 2.3 degrees, su = 300, and sv = 350.
We find that the camera extrinsic uncertainty may affect the system more than the camera intrinsic uncertainty, because the pixel error plot in Case 3 looks very similar to that of Case 2.
In the pixel error figure, the errors chatter more noticeably than in Cases 1 and 2, but the torque on each joint still remains below 40 N-m.