AI Robotics 
Concepts 
Chapter 25
Content 
• Tasks 
• Effectors 
• Sensors 
• Agent Architectures 
• Actions in continuous space
Robot 
• Definition from the “Robot Institute of America” 
(a definition that also fits a conveyor belt) 
A robot is a programmable, multifunctional manipulator 
designed to move material, parts, tools, or specific 
devices through variable programmed motions for the 
performance of a variety of tasks. 
Russell & Norvig's definition (excludes softbots) 
A robot is an active, artificial agent whose 
environment is the physical world.
What is 'Artificial Intelligence'? 
• What is 'Artificial Intelligence' (Jim Hendler, CNN, 1999)? 
– What computers cannot do yet 
• E.g. NLP started as an AI field; once it succeeded (an 
objective mathematical model was found), it became a field of its own. 
• Getting out of AI (conventional wisdom): 
– Artificial Neural Networks are the 2nd best solution to any problem 
– Genetic Algorithms are the 3rd best solution to any problem 
– The (non-AI) mathematical solution, once found, is the best 
solution of any problem. 
• AI is about generalization to unseen and unexpected 
situations. Once the situation is explicitly taken into 
account, we can no longer speak of AI.
The real world as an environment 
• Inaccessible (sensors do not tell all) 
– need to maintain a model of the world 
• Nondeterministic (wheels slip, parts break) 
– need to deal with uncertainty 
• Nonepisodic (effects of an action change over 
time) 
– need to handle sequential decision problems 
• Dynamic 
– need to know when to plan and when to react 
• Continuous 
– cannot enumerate possible actions
Tasks 
• Manufacturing and material handling 
– not autonomous: manufacturing, handling 
– simple machines are the best solution 
• need accuracy, power, shapes put in standard cradles 
• Assistant robots (with autonomy) 
– Mobile robots (mobots): couriers, under-water spies 
• Hazardous environments 
– toxic or deep sea environment 
• Telepresence and Virtual Reality 
– teleoperated robots for bomb scares in New York 
• Augmentation of human abilities 
– artificial fingers, eyes, exoskeletons
Components 
• Rigid Links 
• Rigid Joints 
• End Effectors (screw-drivers) 
– connected to “actuators” (convert electricity to 
movement) 
• an actuator = a degree of freedom 
• uses: locomotion and manipulation
DOF 
• Holonomic Robots (same # effective and 
controllable DOF) 
– a car has 3 effective but 2 controllable DOF 
• robotic arms are holonomic 
• mobile robots usually are not (making them holonomic requires more complex mechanics) 
Stanford manipulator 6DOF 
5 revolute + 1 prismatic joint 
non-holonomic 4-wheeled vehicle
Distance sensors 
laser scanner & range scan 
• Sensors 
– force, torque, tactile, range (sonar), image, 
odometry & shaft decoders, inertial, GPS
Locomotion 
• Types: 
– statically stable (wheels, slow legs) 
– dynamically stable (legs, hopping) 
• less practical for most robots (expensive) 
4-legged robot (DFKI Bremen) 
“BigDog” (Marc Raibert)
Perception 
• Dynamic belief network 
– Update the belief net 
– filtering equations combine 
• transition model / motion model 
• sensor model
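
For reference (not spelled out on the slide), the filtering update that combines these two models is the standard Bayes filter recursion; writing the belief over the robot's state as Bel(x_t), with motion model P(x_t | x_{t-1}, a_{t-1}) and sensor model P(z_t | x_t):

```latex
Bel(x_t) \;=\; \eta \, P(z_t \mid x_t) \int P(x_t \mid x_{t-1}, a_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}
```

where η is a normalizing constant. MCL and the Kalman filter on the following slides are two approximations of this recursion.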
Localization 
• Problems 
– Tracking 
– Global localization 
– Kidnapped-robot problem (the belief is suddenly wrong) 
• Techniques 
– Probabilistic Prediction 
– Landmarks Models or Range Scan Models
Localization 
• Monte Carlo Localization (MCL, particle filtering) 
– starts with a uniform distribution of particles 
– computes/updates their probability (weights) 
– generates a new set of samples based on the updated 
distribution 
• Kalman filters 
– need local linearization, e.g., using a Taylor expansion 
– uncertainty grows until landmarks are spotted 
– uncertainty about landmark identity ⇒ data association problem
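
A minimal sketch of the MCL loop just listed, for a hypothetical robot on a 1-D corridor (the corridor length, noise levels, and sensor model below are invented for illustration): particles are moved with the motion model, weighted by the sensor model, and resampled.

```python
import random, math

def mcl_step(particles, control, measurement, world_length=10.0,
             motion_noise=0.1, sensor_noise=0.5):
    """One Monte Carlo Localization update on a 1-D corridor.
    particles: hypothesized positions; control: commanded displacement;
    measurement: noisy distance to the end of the corridor."""
    # 1. Motion update: move every particle, adding motion noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # 2. Weight by the sensor model: how well does each particle explain the reading?
    def likelihood(p):
        expected = world_length - p            # expected range reading at position p
        err = measurement - expected
        return math.exp(-err ** 2 / (2 * sensor_noise ** 2))

    weights = [likelihood(p) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

    # 3. Resample: draw a new particle set proportional to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

# Global uncertainty at the start: particles spread uniformly over the corridor.
particles = [random.uniform(0, 10) for _ in range(500)]
particles = mcl_step(particles, control=1.0, measurement=6.9)
```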
Monte Carlo Localization
MCL Run 
initial global uncertainty 
bimodal uncertainty (due to symmetry) 
unimodal uncertainty
Mapping 
• Simultaneous localization and mapping 
SLAM 
– similar to the localization (filtering) problem 
• has the map in the posterior 
• Often uses Kalman filters + Landmarks
Usual Mapping 
• Other approaches are less probabilistic 
and more ad-hoc, and they represent a 
trend!
Planning to Move 
• Two types of motion: 
– point-to-point (free motion) 
– compliant motion (touching a wall, a screw) 
• Two representations of planning problems 
– configuration space 
– workspace 
• Planning algorithms 
– cell decomposition 
– skeletonization.
Configuration Space 
• The Workspace is the 3D world around. 
– typically a position is needed for each joint. 
• The Configuration Space is the n-dimensional space of 
possible robot configurations (one dimension per DOF). 
– sometimes smaller than the workspace representation, as it 
already integrates the linkage constraints. 
• occupied space vs. free space 
• Sometimes both spaces are required: 
– Direct kinematics (configuration → workspace) 
– Inverse kinematics (workspace → configuration)
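
To make the workspace/configuration-space distinction concrete, here is a small sketch for a hypothetical planar 2-link arm (the link lengths are arbitrary): direct kinematics maps joint angles to the end-effector position, and inverse kinematics maps a target position back to joint angles.

```python
import math

L1, L2 = 1.0, 0.8   # assumed link lengths of a planar 2-link arm

def direct_kinematics(theta1, theta2):
    """Configuration (joint angles) -> workspace (end-effector x, y)."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y):
    """Workspace (x, y) -> one configuration (the elbow-down solution), if reachable."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        return None                          # target outside the reachable workspace
    theta2 = math.acos(c2)                   # the other branch is -theta2 (elbow-up)
    k1, k2 = L1 + L2 * math.cos(theta2), L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

print(direct_kinematics(*inverse_kinematics(1.2, 0.5)))   # ~ (1.2, 0.5)
```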
Workspace vs Configuration Space 
Workspace of a robot with 2 DOF: 
a box with a flat obstacle hanging from the ceiling 
Configuration space of the same robot: 
white regions are free of collisions; 
the dot marks the current configuration
Robot configurations in workspace and configuration space.
Cell decomposition methods 
• Continuous space is hard to work with 
• Discretization uses cell decomposition 
– Grid 
• Easy to scan with dynamic programming 
• Difficult to handle “gray” (partially occupied) boxes → 
– incompleteness (if treated as occupied) 
– unsoundness (if treated as free) 
• May need further subdivision → exponential in the number of dimensions 
• Difficult to prove that a box lies entirely in free space 
– Quad-trees 
– Exact cell decomposition (complex shapes) 
– Potential Field → optimization via value iteration
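
As a sketch of the grid-based option above, value iteration over a toy occupancy grid computes the cost-to-go of every free cell; following decreasing values gives a path (the grid, connectivity, and unit step costs below are illustrative):

```python
# 0 = free cell, 1 = occupied cell: a toy decomposition of the configuration space.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
goal = (2, 3)
INF = float("inf")

# Value iteration: V(goal) = 0, V(cell) = 1 + min over free 4-neighbours.
V = [[INF] * len(grid[0]) for _ in grid]
V[goal[0]][goal[1]] = 0.0
changed = True
while changed:
    changed = False
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 1 or (r, c) == goal:
                continue
            best = INF
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    best = min(best, 1.0 + V[nr][nc])
            if best < V[r][c]:
                V[r][c] = best
                changed = True

print(V[0][0])   # cost-to-go from the top-left cell; descending V yields the path
```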
Same path in workspace coordinates 
Robot bends elbow to avoid collision. 
Value function and path 
for discrete cell approximation 
of the configuration space
Potential 
A repelling potential field pushes 
robot away from obstacles 
Path found by minimizing path length 
and the potential
Skeletonization methods 
• Skeleton – a lower dimensional representation of 
the configuration space 
– Voronoi graph (points equally distant from two 
obstacles) 
• Reduces the dimension of the path planning. 
• Difficult to compute in the configuration space 
• Difficult for many dimensions 
• Leads to large/inefficient detours. 
– Probabilistic roadmap 
• Random generation of many points in the configuration 
space (discarding occupied ones) 
• Lines between close points, if they are in free space. 
– Distribution of points may be based on need and promise 
– Scales best
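
A compact sketch of the probabilistic roadmap just described (uniform sampling only; the collision tests and the obstacle are toy stand-ins, and the query phase is left as a comment):

```python
import random, math

def build_prm(n_samples, is_free, segment_free, k=5, bounds=(0.0, 1.0)):
    """Probabilistic roadmap over a 2-D configuration space.
    is_free(q): True if configuration q is collision-free.
    segment_free(a, b): True if the straight segment a-b stays in free space."""
    # 1. Sample random configurations, discarding the occupied ones.
    nodes = []
    while len(nodes) < n_samples:
        q = (random.uniform(*bounds), random.uniform(*bounds))
        if is_free(q):
            nodes.append(q)
    # 2. Connect each node to its k nearest neighbours when the segment is free.
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        nearest = sorted(range(len(nodes)), key=lambda j: math.dist(q, nodes[j]))[1:k + 1]
        for j in nearest:
            if segment_free(q, nodes[j]):
                edges[i].append(j)
                edges[j].append(i)
    return nodes, edges   # query phase: add start/goal nodes and run graph search (e.g., A*)

# Toy free-space test: a single circular obstacle inside the unit square.
obstacle, radius = (0.5, 0.5), 0.2
free = lambda q: math.dist(q, obstacle) > radius
seg_free = lambda a, b: all(free(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
                            for t in [i / 10 for i in range(11)])
nodes, edges = build_prm(100, free, seg_free)
```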
Voronoi 
The Voronoi graph is the set of points 
equidistant to two or more obstacles 
in configuration space 
Probabilistic roadmap, 400 randomly 
chosen points in free space
Planning Uncertain Movements 
• Errors 
– from the stochastic model 
– From the approximation (particle filtering) algorithms 
• Common simplification: 
– Assume the “most likely state” 
– “Works” when uncertainty is limited 
• Solution 
– (with observable states) Markov Decision Processes 
• Returns optimal policy (action in each state) 
– Called “Navigation function” 
– (with partially observable states) POMDPs 
• Creates policies based on distributions of state probabilities 
– Not practical
Robust Methods 
• Assumes a bounded amount of uncertainty 
– an interval of possible states 
– the plan must work for every state in that interval 
– “Conformant planning” (Chap. 12) is the extreme case: a plan 
that works regardless of the state 
• Fine Motion Planning (FMP) 
– Assumes motion in close proximity to static objects 
– Relies on sensor feedback, because the required motion is too 
fine for the robot's positioning accuracy 
• Uses “guarded motions” (a guard is a predicate over sensor readings 
stating when to end the motion), combined with “compliant motions”, 
which slide along an obstacle surface when needed 
– Constructing an FMP uses: the configuration space, a cone of 
uncertainty around each commanded direction, and the sensors 
• Generates a set of steps guaranteeing success in the worst case 
• May not be the most efficient.
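
A minimal illustration of a guarded motion as described above; the robot interface (step(), sense()) and the guard predicates are hypothetical, not from the slides:

```python
def guarded_move(robot, velocity, guard, max_steps=1000):
    """Move with a commanded velocity until the guard predicate fires
    (e.g., contact force exceeds a threshold), then stop."""
    for _ in range(max_steps):
        reading = robot.sense()
        if guard(reading):          # guard satisfied: terminate this motion
            return reading
        robot.step(velocity)
    raise RuntimeError("guard never triggered")

# Usage sketch for a peg-in-hole style plan (cf. the Fine Motion Plan figure):
#   guarded_move(robot, v_down,  lambda s: s.contact)     # descend until contact with the surface
#   guarded_move(robot, v_slide, lambda s: s.over_hole)   # compliant slide along the surface
```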
Fine Motion Planning 
2-dimensional environment, velocity uncertainty 
cone and envelope of possible robot movements.
Fine Motion Plan 
First motion command: no matter the error, 
we know that the final 
configuration will be to the left of the hole. 
Second motion command: even with 
error, the robot slides into the hole.
Moving 
• Computing forces and inertia 
– Dynamic state = position + velocities 
– kinematic state and velocity are related by differential equations 
• Controller: keeps the robot on track 
– Reference controller → keeps the robot on a reference 
path 
– Optimizing a cost function → 
• Optimal controllers (e.g., policies of 
Markov Decision Processes, MDPs)
Control 
a) proportional control with gain factor 1. 
b) proportional control with gain 0.1 
c) proportional derivative control: gain 0.3 for proportional and 0.8 for derivative 
Robot tries to follow the path shown in gray
Controllers 
• Control that is proportional to 
displacement: P-controllers 
– a_t = K_P (y(t) − x_t) 
• y(t): desired location 
• K_P: gain parameter 
• Assume small perturbations 
– Stable controller: bounded error y(t) − x_t. 
• E.g. P-controllers 
– Strictly Stable controller: can return to 
reference path
P(I)D-Controllers 
• Proportional Derivative controllers: 
– a_t = K_P (y(t) − x_t) + K_D d(y(t) − x_t)/dt 
• Proportional Integral Derivative (PID) 
Controllers: 
– a_t = K_P (y(t) − x_t) + K_I ∫(y(t) − x_t) dt + K_D d(y(t) − x_t)/dt 
• The integral term corrects systematic errors
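
A minimal discrete-time implementation of the formulas above, with the integral approximated by a running sum and the derivative by a finite difference (gains and time step are arbitrary):

```python
class PID:
    """a_t = K_P*e_t + K_I*sum(e)*dt + K_D*(e_t - e_prev)/dt, where e_t = y(t) - x_t."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def control(self, desired, actual):
        error = desired - actual
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# P control alone (ki = kd = 0) can oscillate for large gains; the derivative term damps
# the oscillation, and the integral term removes steady-state (systematic) error.
controller = PID(kp=0.3, ki=0.05, kd=0.8, dt=0.1)
action = controller.control(desired=1.0, actual=0.2)
```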
Potential for Control 
The robot ascends a potential field composed of repelling 
forces exerted by the obstacles and an attracting force toward the goal configuration. 
Left: successful path. Right: stuck in a local optimum. 
The field has many local minima. 
It does not take velocities into account.
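
A sketch of the potential-field idea on this slide: one attractive term toward the goal plus a repulsive term per obstacle, followed by the negative gradient (the sign convention here puts the goal at the minimum; all constants are invented). Note that the descent can stall in the local minima mentioned above.

```python
import math

def potential(q, goal, obstacles, k_att=1.0, k_rep=0.5, influence=1.0):
    """Attractive potential toward the goal plus repulsive potentials near obstacles."""
    u = 0.5 * k_att * math.dist(q, goal) ** 2
    for obs in obstacles:
        d = math.dist(q, obs)
        if d < influence:                    # an obstacle only repels within its influence radius
            u += 0.5 * k_rep * (1.0 / max(d, 1e-6) - 1.0 / influence) ** 2
    return u

def gradient_step(q, goal, obstacles, step=0.01, eps=1e-4):
    """One numerical gradient-descent step on the potential."""
    gx = (potential((q[0] + eps, q[1]), goal, obstacles) -
          potential((q[0] - eps, q[1]), goal, obstacles)) / (2 * eps)
    gy = (potential((q[0], q[1] + eps), goal, obstacles) -
          potential((q[0], q[1] - eps), goal, obstacles)) / (2 * eps)
    return (q[0] - step * gx, q[1] - step * gy)

q, goal, obstacles = (0.0, 0.0), (2.0, 2.0), [(1.0, 1.1)]
for _ in range(2000):
    q = gradient_step(q, goal, obstacles)    # q drifts toward the goal, skirting the obstacle
```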
Reactive Control 
• Reflex agent design = reactive control 
– used when there are too many DOFs to model tractably, or too few sensors 
Genghis, a hexapod robot. 
– Simple rules: move 3 legs at a time, each with a simple controller 
• Emergent behavior (no explicit environment model) 
An augmented finite state machine (AFSM) for the control of a single leg. If the leg 
is stuck, it is lifted increasingly higher.
Augmented Finite State Machines 
• Finite States augmented with timers 
• Timers enable state changes after 
preprogrammed periods of time 
• AFSM lets behaviors override each other: 
– a suppression signal overrides the normal 
input signal 
– an inhibition signal causes the output to be 
completely inhibited
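
A toy AFSM in the spirit of the leg controller: a timer forces state changes after a preprogrammed period, a suppression signal overrides the normal input, and a "stuck" input raises the lift height each retry (the states, periods, and increments are invented for illustration):

```python
import time

class LegAFSM:
    """Tiny augmented finite state machine for one leg of a walking robot."""
    def __init__(self, swing_period=0.5):
        self.state = "down"
        self.swing_period = swing_period
        self.timer_start = time.monotonic()
        self.lift_height = 1.0

    def update(self, stuck, suppress=False):
        if suppress:                          # suppression signal overrides the normal input
            return "hold"
        elapsed = time.monotonic() - self.timer_start
        if self.state == "down" and elapsed > self.swing_period:
            self.state, self.timer_start = "up", time.monotonic()   # timer-driven transition
        elif self.state == "up":
            if stuck:
                self.lift_height += 0.5       # if the leg is stuck, lift it increasingly higher
            elif elapsed > self.swing_period:
                self.state, self.timer_start = "down", time.monotonic()
        return f"{self.state} (lift {self.lift_height})"

leg = LegAFSM()
print(leg.update(stuck=False))
```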
Robotic Software Architectures 
• Software Architecture = methodology for 
structuring algorithms 
– Combining reactive with deliberative control 
• Reactive ctrl is sensor driven, for low level decisions 
• Leads to hybrid architectures. 
• Reactive architectures 
– Subsumption architecture (1986) 
• Each layer’s goal subsumes that of underlying layers. 
– bottom-up design 
– avoid objects → wander → explore (from the bottom layer up) 
• Composable augmented finite state machines (AFSM, see 
hexapod). 
– Limitations: assumes good raw sensor data; tied to a single task; hard to understand and modify
Three-layer architecture 
• Most popular hybrid architecture 
– Reactive layer (low-level control, milliseconds) 
– Executive layer: 
• Selects which reactive control to invoke 
• Follows waypoints proposed by the deliberative layer 
– Deliberative layer (planning, minutes/cycle) 
• Other possible layers 
– User Interface Layer 
– Inter-robot interface
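
A schematic of how the three layers could be wired together; the function names, rates, and data passed between layers are assumptions for illustration, not a standard API:

```python
import time

def deliberative_layer(world_model):
    """Slow planner: returns a list of waypoints (runs on the order of seconds to minutes)."""
    return [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def executive_layer(plan, state):
    """Picks the current waypoint and selects which reactive behavior to invoke."""
    waypoint = plan[0] if plan else state
    return ("go_to", waypoint)

def reactive_layer(command, sensors):
    """Fast, sensor-driven low-level control (millisecond loop), e.g. obstacle reflexes."""
    if sensors.get("obstacle_close"):
        return "stop"
    return f"steer toward {command[1]}"

plan, state = deliberative_layer(world_model={}), (0.0, 0.0)
for _ in range(3):                            # the loop runs at the reactive layer's rate
    command = executive_layer(plan, state)
    action = reactive_layer(command, sensors={"obstacle_close": False})
    time.sleep(0.001)
```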
Summary 
• Chapters to read 
– 11.4 GraphPlan 
– 12. Planning and Acting in Real World 
– 14. Bayesian Nets 
– 15. (Filtering, HMM, Kalman filters, DBN) 
– 17 (Decision) 17.1 to 17.4
