1. A Hybrid Controller for Vision-Based
Navigation of Autonomous Vehicles in
Urban Environments
Sushil Kumar (16EC65R16)
M.Tech, VIPES
IIT Kharagpur
2. Contents
• Introduction
• Related works
• VS+IDWA
• Control design
• Experimental results
• Conclusion and future works
• References
3. Introduction
• Visual servoing (VS)
• Dynamic window approach (DWA)
• Image-based dynamic window approach (IDWA)
• In the last few decades, the number of autonomous robotic automobiles has increased
• These vehicles use high-cost sensors, some of them impractical for final
commercial cars, limiting the target customers
• GPS-based control:
subject to GPS signal loss
• Sensor-based control:
exteroceptive sensor data (sonar, radar, and vision systems)
• Visual servoing: road lane following
• Dynamic window technique: obstacle avoidance
• Image-based dynamic window technique
• VS+IDWA
4. Related works
1. Visual servoing (VS): road-lane-following robots. VS can be divided into two main approaches:
• Image-based (IBVS)
• Position/pose-based (PBVS)
2. Dynamic Window Approach (DWA): avoids obstacles while prioritizing the final
orientation of the velocity vector field
3. Image-based Dynamic Window Approach (IDWA)
5. VS+IDWA
• Reactive obstacle avoidance with as little VS error as possible.
• The VS equations were integrated into the DWA, compounding a new
IDWA.
• This combination results in a hybrid controller (VS+IDWA).
• It embeds the VS control methodology for road lane following into the
Dynamic Window Approach for obstacle avoidance.
• It takes advantage of both the VS (as deliberative) and the IDWA (as reactive)
controllers.
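The deliberative/reactive split above can be sketched as follows. This is only an illustrative hard-switch approximation, not the paper's actual integration (the paper embeds the VS equations inside the DWA objective itself); `vs_command`, `path_is_blocked`, and `idwa_command` are hypothetical placeholder functions.

```python
def vs_command(features):
    # Hypothetical proportional lane-centering law (illustrative only)
    x_err, theta_err = features
    return 2.0, -0.5 * x_err - 0.3 * theta_err

def path_is_blocked(v, omega, obstacles, braking_dist=5.0):
    # Hypothetical test: any obstacle closer than braking_dist blocks the path
    return any(d < braking_dist for d in obstacles)

def idwa_command(features, obstacles):
    # Hypothetical reactive fallback: slow down and steer away
    return 0.5, 0.4

def hybrid_control(features, obstacles):
    """Deliberative VS command when the lane ahead is free,
    reactive IDWA command otherwise."""
    v, omega = vs_command(features)
    if path_is_blocked(v, omega, obstacles):
        v, omega = idwa_command(features, obstacles)
    return v, omega
```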
6. Robot Definitions
The vehicle control is performed by the input set defined as
U = [v1, v2]T
where
v1 = linear velocity of the front wheels of the vehicle
v2 = steering velocity
7.
Robot movement: q˙ = [x˙r y˙r Θ˙]T
Ackermann's approximation for the steering angle Φ is given by:
Θ = vehicle orientation angle, Θ ∈ [−π, π]
Φ = steering angle, Φ ∈ [−Φmax, Φmax]
The linear velocity then follows as
v = v1 cos(Φ)
and the angular velocity as
Θ˙ = v1 sin(Φ)/r1 = ω,
where r1 is the distance between the axles (Fig. 1). So the robot velocities can be written as
ur = [v ω]T
Fig. 1[1]. Kinematic model diagram of a front-wheel drive car centered in the
body frame {R}. The pinhole camera frame is represented in {C}.
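The kinematic relations above can be sketched directly; the default axle distance `r1` below is a placeholder value, not the paper's vehicle.

```python
import math

def ackermann_velocities(v1, phi, r1=2.5):
    """Map front-wheel speed v1 and steering angle phi to ur = [v, omega]^T.
    r1 (front-to-rear axle distance) is an assumed placeholder value."""
    v = v1 * math.cos(phi)           # linear velocity of the body frame
    omega = v1 * math.sin(phi) / r1  # angular velocity (yaw rate)
    return v, omega
```

With zero steering the yaw rate vanishes, as expected for a straight-driving car.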
8. Features Description
The optical center position is given by
(xc, yc, zc) = (tx, ty, tz)
Due to the camera's tilt offset ρ and the planar surface constraint, the normalized perspective projection is obtained.
The mapping between {I} and {C} is calculated from the intrinsic parameters, the focal lengths (fx, fy) and image center (cx, cy), and the extrinsic parameters ρ and tz.
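A minimal sketch of the {C} → {I} mapping using the intrinsic parameters named above; the calibration values are placeholders, not the paper's camera.

```python
def camera_to_image(xc, yc, zc, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Normalized perspective projection followed by the intrinsic mapping.
    fx, fy, cx, cy are assumed placeholder calibration values."""
    xn, yn = xc / zc, yc / zc  # normalized perspective coordinates
    u = fx * xn + cx           # image column (pixels)
    v = fy * yn + cy           # image row (pixels)
    return u, v
```

A point on the optical axis projects to the image center (cx, cy), regardless of depth.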
9. Control Design
a) Deliberative control: VS
Fig. 2[1]. Image frame {I} representation for the camera frame {C}.
The road lane center P (in red) is related to the boundaries δ1 and
δ2 (in yellow). Its tangent Γ (in blue), at the point D and angle
offset Θ from Γ to the axis −Y, defines the image features X, Y, and
Θ.
Image features set: s = [X Y Θ]T
Θ = angular offset
A diffeomorphism relates the image feature velocities to the Cartesian
velocities:
image feature velocities s˙ = [X˙ Y˙ Θ˙]T
robot velocities ur = [v ω]T
camera frame velocity uc = [vc,x vc,y vc,z ωc,x ωc,y ωc,z]T
10.
The interaction matrix Ls(X, Y, Θ) relates the path image features with
uc, so the image feature velocities are given as
s˙ = Ls uc
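Given s˙ = Ls uc, a classical IBVS control law drives the feature error e = s − s* exponentially to zero via uc = −λ Ls⁺ e. The sketch below is the generic law only; the paper derives Ls(X, Y, Θ) analytically for the road-lane features, whereas here Ls is passed in as an argument.

```python
import numpy as np

def vs_control(s, s_star, Ls, lam=0.5):
    """Generic IBVS law uc = -lam * pinv(Ls) @ (s - s*).
    Ls is the interaction matrix; lam is the control gain."""
    e = np.asarray(s, dtype=float) - np.asarray(s_star, dtype=float)
    return -lam * np.linalg.pinv(np.asarray(Ls, dtype=float)) @ e
```

Larger λ gives faster error decay, matching the λ sweep shown later in the simulation results.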
11.
b) Reactive control: IDWA
DWA(v, ω) = α·heading(v, ω) + β·dist(v, ω) + γ·velocity(v)
α, β, γ = parametric gains used in the DWA
heading: based on the final orientation of the robot with respect to the goal position
in the world
dist: prioritizes movements over the areas free of obstacles (with the
highest distance to collision)
velocity: focuses on the desired linear velocity setpoint
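The weighted-sum objective above can be sketched as a search over a sampled velocity window. The sampling grid and the heading/velocity normalizations below are illustrative stand-ins, not the paper's exact terms; `clearance_fn` is a hypothetical callback returning the obstacle clearance of a candidate trajectory.

```python
import math

def dwa_best_command(goal_angle, clearance_fn, v_des=2.0,
                     alpha=1.0, beta=1.0, gamma=1.0):
    """Score each sampled (v, omega) pair with the weighted sum
    alpha*heading + beta*dist + gamma*velocity and return the best."""
    best, best_score = None, -math.inf
    for v in (0.5, 1.0, 1.5, 2.0):                   # sampled linear velocities
        for omega in (-0.5, -0.25, 0.0, 0.25, 0.5):  # sampled angular velocities
            heading = 1.0 - abs(goal_angle - omega) / math.pi  # goal alignment
            dist = clearance_fn(v, omega)            # distance to collision
            velocity = 1.0 - abs(v_des - v) / v_des  # closeness to setpoint
            score = alpha * heading + beta * dist + gamma * velocity
            if score > best_score:
                best, best_score = (v, omega), score
    return best
```

With a uniformly free space, the search simply picks the fastest command aligned with the goal.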
13. Simulation Experiments
Fig. 4[1]. Simulation result for the VS controller performing the road lane following with λ = 0.3 (a), λ = 0.5 (b), and λ = 0.7 (c).
14.
Fig. 5[1]. Influence of the gains associated with the IDWA when performing a reactive obstacle avoidance: heading in (a), dist in (b), and velocity in (c). Finally, all the gains were enabled in (d), with α = β = γ = 1.0.
15.
Fig. 6[1]. Tuning process for the gains associated with the IDWA, where: (a) α = 0.1, β = 1.0, and γ = 1.0; (b) α = 0.1, β = 1.0, and γ = 3.0; and (c) α = 0.1, β = 2.0, and γ = 3.0.
16. Real Car-Like Robot Experiments
Road Following With Collision Avoidance:
Fig. 7[1]. Road center following with collision avoidance experiment (III) applying the VS+IDWA, where (a) presents some detected image features in sequence during the experiment, with the obstacle indicated by the red arrow blocking the road; (b) and (c) show the evolution of the X, Y, and Θ errors. The vehicle control inputs calculated by the VS and VS+IDWA are in (d) and (e).
17. Conclusion and future works
• This study presented a new hybrid controller for vision-based local navigation of
car-like robots in urban environments.
• Based on these benefits, future studies will integrate this controller into a global
navigation methodology.
18. References
[1]. D. Alves de Lima and A. Corrêa Victorino, "A Hybrid Controller for Vision-Based
Navigation of Autonomous Vehicles in Urban Environments," IEEE Transactions on
Intelligent Transportation Systems, vol. 17, no. 8, pp. 2310–2323, Aug. 2016.
[2]. A. Cherubini, F. Chaumette, and G. Oriolo, "An image-based visual servoing scheme
for following paths with nonholonomic mobile robots," in Proc. IEEE Int. Conf. Control,
Autom., Robot. Vis., pp. 108–113, Dec. 2008.
[3]. D. Fox, W. Burgard, and S. Thrun, "The dynamic window approach to collision
avoidance," IEEE Robot. Autom. Mag., vol. 4, no. 1, pp. 23–33, Mar. 1997.