Uncalibrated Image-Based Robotic Visual Servoing (knowdiff.net)


Visiting Lecturer Program (140)

Speaker: Azad Shademan
Ph.D. candidate
Department of Computing Science
University of Alberta, Canada

Title: Uncalibrated Image-Based Robotic Visual Servoing

Local Host: Ms. Nasim Pouraryan

Time: Wednesday, November 5, 2008, 12:30-2:00 pm
Location: Faculty of Electrical and Computer Engineering, University of Tehran, Tehran


Design of versatile vision-based robotic systems demands a solution with little or no dependence on system parameters. The problem of real-time vision-based control of robots has been long studied as robotic visual servoing. Most provably stable solutions to this problem require calibrated kinematic and camera models, because in a precisely calibrated system one can model the visual-motor function analytically. The uncalibrated approach has received limited attention mainly because the stability analysis is not as straightforward as that of calibrated image-based architecture. In an uncalibrated system the visual-motor function is not known, but partial derivative information (Jacobian) can be learned by tracking visual measurements during motion. In this talk, we study the uncalibrated image-based visual servoing and present different Jacobian learning methods.


Slide 1: Uncalibrated Image-based Control of Robots
Azad Shademan, PhD Candidate
Computing Science, University of Alberta, Edmonton, Alberta, Canada
[email_address]
Slide 2: Vision-Based Control
[Figure: current (A) and desired (B) feature points in the left and right images]
Slide 3: Vision-Based Control
[Figure: desired feature points (B) in the left and right images]
Slide 4: Where is the camera located?
• Eye-to-Hand (e.g., hand/eye coordination)
• Eye-in-Hand
Slide 5: Vision-Based Control
• Feedback from a visual sensor (camera) to control a robot; also called visual servoing
• Visual servoing is the task of minimizing a visually specified objective by giving appropriate control commands to a robot
• Why is it difficult? Images are 2D, but the robot workspace is 3D: 2D data must be related to 3D geometry
Slide 6: Visual Servo Control Law
• Position-Based: robust, real-time pose estimation plus the robot's world-space (Cartesian) controller
• Image-Based: desired image features as seen from the camera; control law based entirely on image features
• Hybrid: depth information is added to the image data to increase stability
Slide 7: Position-Based
• Robust, real-time relative pose estimation
• An Extended Kalman Filter (EKF) solves the nonlinear relative pose equations
• Cons:
  • The EKF is not an optimal estimator for nonlinear systems
  • Performance and convergence of the pose estimates are highly sensitive to the EKF parameters
Slide 8: Position-Based
[Figure: control loop comparing the desired pose with the estimated pose]
Slide 9: Position-Based
[Equations: state variable (including yaw, pitch, roll) with process and measurement noise]
• The measurement equation (perspective projection) is nonlinear and must be linearized
Slide 10: EKF Block Diagram
[Figure: EKF loop with initial/previous state x_{k-1,k-1} and a priori error covariance P_{k-1,k-1}; process noise covariance Q_{k-1}; state prediction x_{k,k-1}; error covariance prediction P_{k,k-1}; linearization C_k; measurement z_k with noise covariance R_k; Kalman gain K; state update x_{k,k}; error covariance update P_{k,k}]
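The predict/update loop in the diagram above can be sketched in a few lines. This is a generic EKF step, not the speaker's implementation; the function names and the choice of a linear process model (matrix `F`) with a nonlinear measurement model (`h`, linearized by `H`) are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF iteration: predict with process model f (Jacobian F),
    then correct with the linearized measurement model h (Jacobian H)."""
    # State and error-covariance prediction
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Linearize the measurement equation at the predicted state
    Hk = H(x_pred)
    # Kalman gain from the innovation covariance
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    # State and error-covariance update
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```

In pose estimation the measurement model `h` would be the perspective projection, whose linearization is the block labelled C_k in the diagram.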
Slide 11: Image-Based
[Figure: control loop comparing the desired image feature with the extracted image feature]
Slide 12: Visual-Motor Equation
• Image features x = [x_1 … x_4], joint angles q = [q_1 … q_6]
• The visual-motor equation maps joint space to image space; its Jacobian is important for motion control
Slide 13: Visual-Motor Jacobian
• The visual-motor Jacobian relates joint-space velocity to image-space velocity: dx/dt = J(q) dq/dt
[Figure: feature motion from A to B in both images]
Slide 14: Image-Based Control Law
• Measure the error in image space
• Calculate/estimate the inverse Jacobian
• Update the joint values
Slide 15: Image-Based Control Law
[Figure: control loop comparing the desired image feature with the extracted image feature]
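The three steps of the image-based control law can be sketched as one update. This is a minimal illustration using a damped pseudoinverse of the Jacobian; the gain value and function name are assumptions, not the speaker's code.

```python
import numpy as np

def ibvs_step(q, x, x_des, J, lam=0.5):
    """One image-based visual servoing update:
    1) measure the error in image space,
    2) apply the (pseudo)inverse of the Jacobian,
    3) update the joint values."""
    e = x - x_des                        # image-space error
    dq = -lam * np.linalg.pinv(J) @ e    # joint correction
    return q + dq
```

With a reasonable Jacobian estimate, iterating this step drives the image-space error toward zero.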
Slide 16: Jacobian Calculation
• Analytic form available if the model is known (known model: calibrated)
• Must be estimated if the model is not known (unknown model: uncalibrated)
Slide 17: Calibrated: Interaction Matrix
• Analytic form depends on depth estimates
• Camera/robot transform required
• No flexibility
[Equation: image-feature velocity = interaction matrix × camera velocity]
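For a single normalized image point, the classic analytic interaction matrix is well known; the sketch below shows why the slide's objections apply: the depth Z appears explicitly, so it must come from calibration or estimation.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating its image velocity to the camera twist
    (v_x, v_y, v_z, w_x, w_y, w_z)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y,   -x * y,       -x],
    ])
```

Stacking one such 2×6 block per feature point gives the full interaction matrix used by calibrated image-based schemes.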
Slides 18-20: Uncalibrated: Visual-Motor Jacobian
• A naive method: orthogonal projections
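One reading of the "orthogonal projections" method is finite differencing: perturb each joint along an orthogonal test direction and record the image-feature change, one Jacobian column at a time. This sketch assumes that interpretation; the function name and step size are illustrative.

```python
import numpy as np

def estimate_jacobian_fd(f, q, eps=1e-3):
    """Estimate the visual-motor Jacobian of f at q by small
    orthogonal test motions: one perturbation per joint gives
    one column of the Jacobian."""
    q = np.asarray(q, dtype=float)
    x0 = np.asarray(f(q))
    J = np.zeros((len(x0), len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q))
        dq[i] = eps
        J[:, i] = (np.asarray(f(q + dq)) - x0) / eps
    return J
```

The obvious drawback, consistent with the slide's "naive" label, is that the test motions are extra robot movements unrelated to the servoing task.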
Slide 21: Uncalibrated: Visual-Motor Jacobian
• A popular local estimator: the recursive secant method (Broyden update)
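The rank-1 Broyden secant update can be written in one line: correct the current estimate so that it reproduces the most recently observed motion. The function name and the small-motion guard are illustrative.

```python
import numpy as np

def broyden_update(J, dq, dx):
    """Rank-1 secant (Broyden) update of the visual-motor Jacobian:
    J_new = J + (dx - J dq) dq^T / (dq^T dq),
    so that J_new @ dq == dx for the observed motion (dq, dx)."""
    denom = dq @ dq
    if denom < 1e-12:
        return J  # motion too small to be informative
    return J + np.outer(dx - J @ dq, dq) / denom
```

Because each update uses only the latest motion, the estimate is inherently local, which is the limitation the global methods later in the talk address.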
Slide 22: Calibrated vs. Uncalibrated
Calibrated:
• Model derived analytically
• Global asymptotic stability (+)
• Optimal planning is possible (+)
• A lot of prior knowledge of the model required (-)
Uncalibrated:
• Relaxed model assumptions
• Traditionally local methods: no global planning (-); difficult to show that the asymptotic stability condition is ensured (-)
• The main problem of traditional methods is their locality
• Global model estimation (research result): optimal trajectory planning (+), global stability guarantee (+)
Slide 23: Synopsis of Global Visual Servoing
• Model estimation (uncalibrated): visual-motor kinematics model
• Global model: extending linear estimation (visual-motor Jacobian) to nonlinear estimation
• Our contributions:
  • K-NN regression-based estimation
  • Locally least-squares estimation
Slide 24: Local vs. Global
Local (key idea: use only the previous estimate to update the Jacobian):
• RLS with forgetting factor: Hosoda and Asada '94
• Rank-1 Broyden update: Jägersand et al. '97
• Exploratory motion: Sutanto et al. '98
• Quasi-Newton Jacobian estimation of a moving object: Piepmeier et al. '04
Global (key idea: use all of the interaction history to estimate the Jacobian):
• Globally stable controller design
• Optimal path planning
• Local methods offer neither
Slide 25: K-NN Regression-Based Method
[Figure: feature value x_1 at a query configuration estimated from the 3 nearest neighbours in joint space (q_1, q_2)]
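The figure suggests predicting the image features at a query joint configuration from the nearest stored samples. A minimal sketch of that idea, assuming uniform weights over the k neighbours (the talk may use a different weighting):

```python
import numpy as np

def knn_predict(Q, X, q, k=3):
    """K-NN regression sketch of the visual-motor function:
    Q is an (N, n) memory of joint configurations, X the (N, m)
    matrix of image features observed there. Predict the features
    at q as the mean over the k nearest joint-space samples."""
    d = np.linalg.norm(Q - q, axis=1)
    idx = np.argsort(d)[:k]
    return X[idx].mean(axis=0)
```

Because each prediction is a plain average, the estimate is biased wherever the visual-motor function curves within the neighbourhood, consistent with the bias issue noted in the conclusions.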
Slide 26: Locally Least-Squares Method
[Figure: x_1 at a query configuration estimated by a least-squares fit over the (X, q) samples in K-neighbour(q)]
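A locally least-squares estimate can fit a local affine model over the k nearest samples and read the Jacobian off the linear part. This is a sketch under that assumption; the memory layout and neighbourhood size are illustrative, not the speaker's exact formulation.

```python
import numpy as np

def lls_jacobian(Q, X, q, k=8):
    """Locally least-squares Jacobian estimate at q.
    Q: (N, n) memory of joint configurations, X: (N, m) features
    observed there. Fit x ~ x0 + J (q_i - q) over the k nearest
    neighbours of q and return the (m, n) Jacobian estimate J."""
    d = np.linalg.norm(Q - q, axis=1)
    idx = np.argsort(d)[:k]
    dQ = Q[idx] - q                            # joint offsets, (k, n)
    A = np.hstack([np.ones((k, 1)), dQ])       # affine design matrix
    coef, *_ = np.linalg.lstsq(A, X[idx], rcond=None)
    return coef[1:].T                          # drop intercept row
```

Unlike the Broyden update, this uses many stored interactions at once, which is the "global memory" ingredient behind the talk's estimation results.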
Slide 27: Eye-to-Hand Experiments
• PUMA 560
• Stereo vision
• Features: projections of the end-effector position on the two image planes (4-dimensional)
• 3 DOF used for control
Slide 28: Measuring the Estimation Error
Slide 29: Global Estimation Error
Slide 30: Visual Task Specification
• Image features: geometric primitives (points, lines, etc.), higher-order image moments, shape parameters, …
• Visual tasks: point-to-point (point alignment), point-to-line (collinearity), point-to-plane (coplanarity), …
Slide 31: Eye-in-Hand
Slides 32-35: Eye-in-Hand Experiments
Slide 36: Mean-Squared Error
[Figure: MSE plots for Task 1 and Task 2]
Slide 37: Task Errors
Slide 38: Conclusions
• Reviewed position-based and image-based visual servoing schemes
• Presented two global methods to learn the visual-motor function
• K-NN suffers from bias in the local estimates
• LLS (global) works better than K-NN (global) and local updates
• The Jacobian of more complex visual tasks can also be learned with the LLS method
[email_address]
Slide 39: Thank you!
Slide 40: Visual Ambiguity: Single Camera
Slide 41: Visual Ambiguity: Stereo