  • A hierarchical control system takes the shape of a tree in which each node operates independently: performing tasks assigned by its superior node, commanding tasks of its subordinate nodes, sending abstracted sensations to its superior node, and receiving sensations from its subordinate nodes. Leaf nodes are sensors or actuators.
  • Upper nodes run on longer timescales because their cycle time equals the computation performed by their lower nodes plus the computation they themselves perform.
    1. Hierarchical Control Systems<br />A Seminar on Artificial Intelligence<br />1<br />
    2. What is HCS?<br />A form of control system in which a set of devices and governing software is arranged in a hierarchical tree.<br />2<br />
    3. Some Notable Features of HCS…<br /><ul><li>This human-built system has complex behavior and is often represented as a hierarchy.
    4. It is organized to divide decision-making responsibility.
    5. Each element of the hierarchy is a linked node in the tree.
    6. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes.
    7. Sensations and command results flow up the tree from subordinate to superior nodes.
    8. Nodes may also exchange messages with their siblings.</li></ul>3<br />
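The command-down / result-up flow above can be sketched as a minimal tree of nodes. The class, node names, and task strings below are purely illustrative and not taken from any particular HCS implementation:

```python
class Node:
    """One element of a hierarchical control system."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def command(self, task):
        """Commands flow down: delegate sub-tasks to subordinate nodes."""
        if not self.children:            # leaf nodes are sensors or actuators
            return [f"{self.name} executed {task}"]
        results = []
        for child in self.children:
            results.extend(child.command(f"{task}/{child.name}"))
        return self.report(results)

    def report(self, results):
        """Results flow up, abstracted at each level before being passed on."""
        return [f"{self.name}: {len(results)} subtask(s) done"]

root = Node("mission")
drive = Node("drive", parent=root)
Node("steering", parent=drive)
Node("throttle", parent=drive)
print(root.command("reach-checkpoint"))   # → ['mission: 1 subtask(s) done']
```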
    9. Distinguishing features of an HCS, related to its layers:<br /><ul><li>Each higher layer of the tree operates with a longer interval of planning and execution time than its immediately lower layer.
    10. Higher layers have relaxed time constraints and are capable of reasoning from an abstract world model.
    11. Lower layers form hybrid intelligent systems, combining methods and techniques from several AI subfields.
    12. They perform local tasks and goals, as planned by the higher layers.</li></ul>4<br />
    13. Application<br />Manufacturing, robotics and vehicles<br /><ul><li>Used for creating autonomous robots; motion planning therefore becomes extremely important.
    14. DARPA and NIST sponsor such research to develop applications for military purposes.</li></ul>Artificial Intelligence<br /><ul><li>For building a generic architecture for “behavior-based robotics”.
    15. A way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers.</li></ul>5<br />
    16. <ul><li>The Defense Advanced Research Projects Agency (DARPA), earlier known as ARPA, is the most prominent research organization of the United States Department of Defense.
    17. In November 2007, DARPA held the “Urban Challenge”, a prize competition for driverless vehicles.
    18. The winning entry, “Tartan Racing”, employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modeling, and mechatronics.</li></ul>6<br />
    19. Urban Challenge<br /><ul><li>The Urban Challenge required designers to build vehicles able to obey all traffic laws while detecting and avoiding other robots on the course.
    20. This is a particular challenge for vehicle software, as vehicles must make "intelligent" decisions in real time based on the actions of other vehicles.
    21. The competition was open to teams and organizations from all around the world.</li></ul>7<br />
    22. Rules:<br /><ul><li>Vehicles must be entirely autonomous, using only the information they detect with their sensors and public signals such as GPS.
    23. DARPA will provide the route network 24 hours before the race starts.
    24. Vehicles will complete the route by driving between specified checkpoints.
    25. Vehicles may “stop and stare” for at most 10 seconds.
    26. Vehicles must operate in rain and fog, with GPS blocked.
    27. Vehicles must avoid collisions with other vehicles and with objects such as carts, bicycles, traffic barrels, and objects in the environment such as utility poles.
    28. Vehicles must be able to operate in parking areas and perform U-turns as required by the situation.</li></ul>8<br />
    29. <ul><li>The winner of the DARPA Urban Challenge was Tartan Racing, a team from Carnegie Mellon University, Pennsylvania.
    30. Their vehicle, “Boss”, was a Chevy Tahoe with over 500,000 lines of code to autonomously navigate in town and in traffic.
    31. Tartan Racing technology enabled Boss to:
    32. Follow rules of the road
    33. Detect and track other vehicles at long ranges
    34. Find a spot and park in a parking lot
    35. Obey intersection precedence rules
    36. Follow vehicles at a safe distance
    37. React to dynamic conditions like blocked roads or broken-down vehicles</li></ul>9<br />
    38. Boss, the Tartan Racing robot, is built on a Chevrolet Tahoe chassis. It incorporates a variety of lidar, radar and visual sensors to safely navigate urban environments.<br />10<br />
    39. Tartan Racing employed a layered hierarchical control system:<br /><ul><li>Mission Planning
    40. Behavior Generation
    41. Motion Planning
    42. Planning in Lanes
    43. Planning in Zones
    44. Perception & World Modeling
    45. Moving Obstacle Fusion
    46. Moving Obstacle Tracking
    47. Static Obstacle Detection
    48. Road Shape Feature Detectors
    49. Mechatronics
    50. Vehicle Automation
    51. Power Electronics
    52. Integration and Testing</li></ul>11<br />
    53. The Tartan Racing architecture is decomposed into five broad areas: Mission Planning, Motion Planning, Behavior Generation, Perception and World Modeling, and Mechatronics.<br />12<br />
    54. Mission Planning<br />Objective<br /><ul><li>To determine an efficient route through the urban network of roads.
    55. Its component computes:
    56. The cost (as a function of time and risk) of all possible routes, given its knowledge of the road network.
    57. The next checkpoint that the vehicle must achieve.
    58. The optimal path to the next checkpoint.
    59. It compares routes based on prior knowledge of:
    60. congestion or blockages
    61. construction
    62. the legal speed limit</li></ul>13<br />
    63. Algorithm<br />To generate mission plans, the data provided in the Route Network Definition File (RNDF) is used:<br /><ul><li>To create a graph that encodes the connectivity of the environment.
    64. Checkpoints become nodes, and the routes between them become directional edges.
    65. Costs are assigned to edges based on various factors:
    66. the expected time to traverse the edge
    67. the length of the edge
    68. the complexity of the local environment</li></ul>14<br />
    69. <ul><li>The cost graph is searched to compute a minimum-cost path from each position in the graph to the desired goal position.
    70. Reason: this allows the navigation system to behave correctly if the vehicle is unable to execute the original plan perfectly.
    71. For example, if the vehicle misses a checkpoint, it can immediately extract the current best path from its current position.
    72. As the vehicle navigates, the mission planner keeps updating its graph to incorporate newly observed information.</li></ul>15<br />
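Searching the cost graph backwards from the goal yields the best cost (and hence the best path) from every node at once, which is what makes recovery after a missed checkpoint cheap. A minimal sketch, with a made-up road network and edge costs:

```python
import heapq

def costs_to_goal(graph, goal):
    """Dijkstra from the goal over reversed edges: cost-to-goal for every node."""
    reverse = {}
    for u, edges in graph.items():
        for v, w in edges.items():
            reverse.setdefault(v, {})[u] = w
    dist = {goal: 0.0}
    queue = [(0.0, goal)]
    while queue:
        d, v = heapq.heappop(queue)
        if d > dist.get(v, float("inf")):
            continue                     # stale queue entry
        for u, w in reverse.get(v, {}).items():
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(queue, (d + w, u))
    return dist

# Hypothetical road network: edge weights mix travel time and risk.
roads = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}
print(costs_to_goal(roads, "D"))   # best cost from every node to D
```

Because every node carries its cost-to-goal, a vehicle that drifts off the planned route simply reads off the best continuation from wherever it actually is.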
    73. Behavior Generation<br />Objective<br /><ul><li>Formulates a problem definition for the Motion Planning component to solve, based on strategic information provided by the Mission Planning component.
    74. Implemented as a state machine that decomposes the mission task into a set of top-level behaviors and their simpler sub-behaviors.
    75. Top-level behaviors:
    76. Drive-down-road
    77. Handle-intersection
    78. Achieve-zone-pose</li></ul>16<br />
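As a first approximation, the state machine can be reduced to a dispatcher over driving contexts. The flags and transition conditions below are invented for illustration; the real Tartan Racing logic is far richer:

```python
def select_behavior(context):
    """Pick a top-level behavior from the current driving context.

    `context` is a hypothetical dict of perception/mission flags."""
    if context.get("in_zone"):
        return "achieve-zone-pose"
    if context.get("approaching_intersection"):
        return "handle-intersection"
    return "drive-down-road"        # default: normal on-road driving

print(select_behavior({"approaching_intersection": True}))  # handle-intersection
print(select_behavior({"in_zone": True}))                   # achieve-zone-pose
print(select_behavior({}))                                  # drive-down-road
```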
    79. Algorithm<br /><ul><li>Based on the concept of identifying driving contexts, or behaviors.</li></ul>Drive-Down-Road Behavior<br /><ul><li>Responsible for on-road driving.
    80. The primary sub-behavior is Driving in Lanes (which includes a distance-keeping behavior in the presence of a lead vehicle).
    81. The lane selector makes lane-change decisions to achieve specified checkpoints based on:</li></ul>timely progress in the current lane,<br />and the necessity of being in the correct lane.<br /><ul><li>The distance-keeping behavior aims to:</li></ul>zero the difference between our vehicle’s velocity and the lead vehicle’s velocity,<br />and zero the difference between the desired and actual inter-vehicle gap.<br />17<br />
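The two error terms of the distance-keeping behavior suggest a simple proportional law. The gains below are assumed for illustration; this is a sketch, not the team's actual controller:

```python
def distance_keeping_accel(gap, desired_gap, v_self, v_lead,
                           k_gap=0.3, k_vel=0.8):
    """Commanded acceleration that drives both errors toward zero:
    the gap error (actual minus desired) and the velocity error
    (lead minus self). Gains k_gap and k_vel are illustrative."""
    gap_error = gap - desired_gap
    vel_error = v_lead - v_self
    return k_gap * gap_error + k_vel * vel_error

# Too close and closing fast -> negative command (brake).
print(distance_keeping_accel(gap=15.0, desired_gap=30.0,
                             v_self=15.0, v_lead=10.0))   # → -8.5
```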
    82. Handle Intersection Behavior<br /><ul><li>Invoked when the vehicle is at an intersection.
    83. Establishes a polygonal zone around the intersection and tracks all vehicles within that zone.
    84. Vehicles that arrived at a stop line before our vehicle are given higher precedence.
    85. Once our vehicle determines that it is its turn to move, it checks that the intersection is clear of obstacles and of vehicles (those completing their exit or disobeying precedence rules).
    86. It then traverses the intersection in a “virtual lane” created by connecting the exit point of the current lane to the entry point of the goal lane.</li></ul>18<br />
    87. 19<br />
    88. Achieve-Zone-Pose Behavior<br /><ul><li>Specifies a desired position for the motion planner to achieve.
    89. Invoked when the system needs to traverse a zone and park the vehicle.
    90. Also invoked when the vehicle has to find its way out of jammed traffic, or from an off-road position to a position where the Drive-Down-Road behavior can resume.</li></ul>20<br />
    91. Motion Planning<br />Objective<br /><ul><li>Responsible for executing segments of the route.
    92. This typically involves:
    93. either driving down a lane when on roads (Structured Driving),
    94. or navigating through obstacle fields to a desired goal position when in zones (Unstructured Driving).</li></ul>21<br />
    95. Planning in Lanes<br /><ul><li>Driving down road lanes relies on the perception sub-system to provide an indication of the current lane boundaries.
    96. From this information a curve representing the centerline of the current lane is computed. This is the nominal path the vehicle should follow.
    97. A local motion planner generates trajectories to a set of local goals in order to:
    98. robustly follow the current lane, and
    99. avoid static and dynamic obstacles.</li></ul>22<br />
    100. <ul><li>The trajectory generation algorithm was developed by Howard and Kelly.
    101. It is used to compute dynamically feasible trajectories to these local goals.
    102. It predicts where the vehicle will end up after following some specified control trajectory.
    103. The control trajectory is then optimized to minimize the error between the forward-simulated vehicle position and the current desired goal position.
    104. The resulting trajectories are evaluated against:
    105. both static and dynamic obstacles in the environment,
    106. their distance from the centerline path,
    107. their smoothness, and various other metrics.</li></ul>23<br />
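The evaluation step can be sketched as a weighted sum of penalty terms over each candidate trajectory. The weights, penalty forms, and candidate trajectories below are invented:

```python
def score_trajectory(points, centerline_y, obstacles, w_offset=1.0, w_smooth=5.0):
    """Lower is better. Returns None (infeasible) if any point hits an obstacle.

    points: list of (x, y) samples; centerline_y: lane-centre y coordinate;
    obstacles: set of rounded (x, y) cells. Penalties for offset from the
    centerline and for lateral change (a crude smoothness proxy) are
    illustrative stand-ins for the real metrics."""
    for x, y in points:
        if (round(x), round(y)) in obstacles:
            return None
    offset = sum(abs(y - centerline_y) for _, y in points)
    smooth = sum(abs(points[i + 1][1] - points[i][1])
                 for i in range(len(points) - 1))
    return w_offset * offset + w_smooth * smooth

obstacles = {(2, 0)}
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]   # blocked by the obstacle
swerve   = [(0, 0), (1, 1), (2, 1), (3, 0)]   # feasible but off-centre
print(score_trajectory(straight, 0.0, obstacles))   # → None
print(score_trajectory(swerve, 0.0, obstacles))     # → 12.0
```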
    108. Planning in Zones<br /><ul><li>Driving in zones is different from driving in lanes:
    109. a road lane provides a preferred position for the vehicle (the centerline);
    110. there are no driving lanes in a parking lot, so the movement of the vehicle is less constrained;
    111. in zones, very specific goal positions are to be reached.
    112. A lattice planner is used that searches over vehicle position and orientation to plan the path towards the goal position.
    113. Again, the trajectory generator is used to find various paths toward the goal.
    114. The planner searches in a backwards direction, from the goal pose towards the vehicle pose,
    115. and generates feasible, high-fidelity maneuvers that are collision-free with respect to the static obstacles observed in the environment.</li></ul>24<br />
    116. <ul><li>To efficiently generate complex plans over large, obstacle-laden environments, the planner relies on the “Anytime D*” search algorithm.
    117. It quickly generates an initial, suboptimal plan for the vehicle,
    118. then improves the quality of this solution while deliberation time allows.
    119. When new information about the obstacles is received, the algorithm efficiently repairs its existing solution to account for it.
    120. This repair is possible because the search is performed in a backward direction.
    121. The result is nearly computation-free replanning when the vehicle deviates from its path due to tracking errors.
    122. The lattice planner is flexible enough to be used in a large variety of cases, for example:
    123. when navigating congested intersections,
    124. to perform U-turns,
    125. or to get the vehicle back on track.</li></ul>25<br />
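Anytime D* combines an anytime search with incremental (D* Lite-style) plan repair; the repair machinery is beyond a short sketch, but the anytime idea (quickly return a suboptimal plan by inflating the heuristic, then refine with less inflation while deliberation time allows) can be illustrated with repeated weighted A* on a small grid. The grid and inflation values are invented:

```python
import heapq

def weighted_astar(grid, start, goal, eps):
    """A* with the heuristic inflated by eps; solution cost <= eps * optimal."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(eps * h(start), 0, start)]
    g = {start: 0}
    while open_set:
        _, cost, p = heapq.heappop(open_set)
        if p == goal:
            return cost
        if cost > g.get(p, float("inf")):
            continue                     # stale entry
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in grid and grid[q] == 0 and cost + 1 < g.get(q, float("inf")):
                g[q] = cost + 1
                heapq.heappush(open_set, (cost + 1 + eps * h(q), cost + 1, q))
    return None

# 5x5 grid with a wall (1 = obstacle) forcing a detour.
grid = {(x, y): 0 for x in range(5) for y in range(5)}
for y in range(4):
    grid[(2, y)] = 1
fast = weighted_astar(grid, (0, 0), (4, 0), eps=3.0)   # quick, possibly suboptimal
best = weighted_astar(grid, (0, 0), (4, 0), eps=1.0)   # refined answer
print(fast, best)
```

A true Anytime D* implementation would additionally reuse the search tree between these passes and repair it when edge costs change, rather than searching from scratch.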
    126. Perception & World Modeling<br />Objective<br /><ul><li>Interprets information from various sensors and fuses the multiple streams together to provide a composite picture of the world to the rest of the system.
    127. Responsible for three critical functions:
    128. the detection and tracking of moving obstacles,
    129. the detection of static obstacles, and
    130. estimating the shape of the road.</li></ul>26<br />
    131. Moving Obstacle Fusion<br /><ul><li>Moving obstacles are tracked using several lidars and radars.
    132. The objective is to fuse the different information gathered by different sensors into a coherent whole.
    133. Sensors take measurements of the objects they track.
    134. A global list of tracked objects is maintained by the fusion layer.
    135. Any unfamiliar obstacle that is tracked is added as a new entry.</li></ul>27<br />
    136. Moving Obstacle Tracking<br /><ul><li>The fusion layer provides the global list of tracked objects.
    137. Mobile obstacles are tracked using an Extended Kalman Filter to predict and update the state and uncertainty estimate of each tracked object.</li></ul>28<br />
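The actual system uses an Extended Kalman Filter over a nonlinear motion model; the predict/update cycle is the same as in the linear case. A 1-D constant-velocity Kalman filter with hand-rolled 2x2 algebra (the noise values q and r are assumed) makes that cycle concrete:

```python
def kf_step(x, P, z, dt=0.1, q=0.01, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x = [position, velocity]; P = 2x2 covariance (list of lists);
    z = measured position. q and r are assumed process/measurement noise."""
    # Predict: x <- F x with F = [[1, dt], [0, 1]]; P <- F P F^T + Q.
    x = [x[0] + dt * x[1], x[1]]
    P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
          P[0][1] + dt * P[1][1]],
         [P[1][0] + dt * P[1][1], P[1][1] + q]]
    # Update with a position measurement: H = [1, 0].
    S = P[0][0] + r                         # innovation covariance
    K = [P[0][0] / S, P[1][0] / S]          # Kalman gain
    y = z - x[0]                            # innovation
    x = [x[0] + K[0] * y, x[1] + K[1] * y]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return x, P

# Track an object moving at 2 m/s from noiseless position measurements:
# the filter recovers the velocity it never measures directly.
x, P = [0.0, 0.0], [[10.0, 0.0], [0.0, 10.0]]
for k in range(1, 100):
    x, P = kf_step(x, P, z=2.0 * 0.1 * k)
print(round(x[1], 2))   # velocity estimate approaches 2.0
```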
    138. Static Obstacle Detection<br /><ul><li>This algorithm uses downward-looking lasers mounted on the robot to evaluate the terrain around the vehicle.
    139. Based on this information, it generates a cost map representing the “traversability” of the terrain by comparing pairs of laser points.
    140. Regions whose cost is high enough are categorized as “fatal”.</li></ul>29<br />
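In the simplest case, comparing pairs of laser points amounts to thresholding the slope between neighbouring height samples. The spacing, threshold, and cost assignment below are invented:

```python
def classify_terrain(heights, spacing=0.25, fatal_slope=0.5):
    """Label each gap between adjacent laser height samples.

    heights: terrain heights (m) at fixed horizontal spacing (m).
    Slopes above fatal_slope (an assumed threshold) mark a cell "fatal";
    everything else gets a cost proportional to the slope."""
    cells = []
    for a, b in zip(heights, heights[1:]):
        slope = abs(b - a) / spacing
        cells.append("fatal" if slope > fatal_slope else round(slope, 2))
    return cells

# Flat road, then a 0.2 m curb between two samples.
print(classify_terrain([0.0, 0.0, 0.01, 0.21, 0.21]))
# → [0.0, 0.04, 'fatal', 0.0]
```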
    141. Road Shape Feature Detectors<br /><ul><li>Although an RNDF (Route Network Definition File) and aerial images of the area (used to make an initial estimate of the road shape) are provided, actual road features cannot be identified until the vehicle is on the road.
    142. Online road-shape detection is performed using range and intensity data from downward-looking short-range lidars.
    143. A change in intensity is indicative of painted road lines.
    144. Relative changes from flat surfaces to raised curbs, or drops to soft shoulders, are detected by looking for the appropriate geometric features.
    145. The algorithms used include Haar wavelet transforms, heuristic edge detection with adaptive thresholding, and dynamic programming methods.</li></ul>30<br />
    146. Mechatronics for Autonomous Urban Driving<br />Objective<br /><ul><li>The electrical and mechanical components provide a way for the algorithms to interact with the world.
    147. A Chevrolet Tahoe chassis was selected due to:
    148. its integrated electronic interfaces,
    149. its high roof, giving a good sensor perspective for detecting other vehicles, and
    150. its plentiful interior room for both developers and additional electronics.
    151. The mechatronics alterations include systems for vehicle automation, auxiliary power, computing, and sensor mounting.</li></ul>31<br />
    152. Vehicle Automation<br /><ul><li>Steering and brake/throttle are actuated by closed-loop control of motors acting on the steering column and on the brake and gas pedals.
    153. Secondary driving controls (turn signals, transmission shifting, etc.) are actuated by a system from Electronic Mobility Controls (EMC).
    154. A higher-level controller is responsible for vehicle functions such as velocity control and curvature control.</li></ul>32<br />
    155. <ul><li>For curvature control, the control loop is feed-forward,
    156. taking in feedback for steady-state error correction.
    157. The speed control structure includes different controllers for throttle and brake, with a switch.
    158. The switch is required because:
    159. the throttle and brake performances of a vehicle are different
    160. (the brake system responds much faster to deceleration commands than the throttle system does to acceleration commands),
    161. and the actuator for throttle and brake is a single motor system.</li></ul>33<br />
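The switch between the two controllers can be sketched as follows; the gains, deadband, and asymmetric scaling are assumed values, not Tartan Racing's:

```python
def speed_command(v_desired, v_actual, k_throttle=0.5, k_brake=0.2, deadband=0.2):
    """Return ("throttle" | "brake" | "coast", magnitude in [0, 1]).

    Separate gains reflect that the brake responds faster than the
    throttle; the deadband prevents chattering between the two actuators.
    All values are illustrative."""
    error = v_desired - v_actual
    if error > deadband:
        return ("throttle", min(1.0, k_throttle * error))
    if error < -deadband:
        return ("brake", min(1.0, k_brake * -error))
    return ("coast", 0.0)

print(speed_command(10.0, 6.0))    # → ('throttle', 1.0)
print(speed_command(10.0, 15.0))   # → ('brake', 1.0)
```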
    162. Power Electronics<br /><ul><li>The Tartan Racing robots utilize auxiliary power generation to support the sensors and computing.
    163. The auxiliary power system is a high-voltage generator driven by the engine via a secondary serpentine belt.
    164. This generator provides up to 6 kilowatts of power, depending on engine load.
    165. An Integrated Control System is used for this:
    166. it contains power converters to convert the high-voltage power into both AC and DC,
    167. and it manages the power output, voltage levels, and temperatures of the generator and converters.</li></ul>34<br />
    168. Integration and Testing<br /><ul><li>Since a spiral development process is used, solid functionality is tested at the end of each spiral.
    169. This involves incremental testing of each component, and of the system as a whole, as new functions are added or capabilities mature.
    170. The system is subjected to 5 tiers of testing:
    171. Unit testing
    172. Subsystem testing with simulated inputs
    173. System testing in pure simulation
    174. System testing on the vehicle with simulation in the loop
    175. System testing on the vehicle with live traffic</li></ul>35<br />
    176. Unit Testing<br /><ul><li>Unit tests are small, simple software tests written by the developer while writing a particular class or function.
    177. The objective is to verify that the class method or function being tested produces the correct state changes or output for the provided set of inputs.</li></ul>36<br />
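In that spirit, a unit test for a hypothetical gap-error helper (not an actual Tartan Racing function) might look like:

```python
def gap_error(gap, desired_gap):
    """Hypothetical helper under test: signed error between actual
    and desired inter-vehicle gap."""
    return gap - desired_gap

def test_gap_error():
    # Verify correct output for a small set of representative inputs.
    assert gap_error(10.0, 30.0) == -20.0   # too close -> negative error
    assert gap_error(30.0, 30.0) == 0.0     # exact gap -> zero error
    assert gap_error(45.0, 30.0) == 15.0    # too far -> positive error

test_gap_error()
print("all unit tests passed")
```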
    178. Subsystem Testing with Simulated Inputs<br /><ul><li>A particular subsystem or individual process (task) is run in isolation while its interfaces are “simulated” by playing back data.
    179. The process produces output in a debug tool for analysis.
    180. This makes integration with the rest of the system much easier, by finding bugs earlier in the process.</li></ul>37<br />
    181. System Testing in Pure Simulation<br /><ul><li>The entire system can be executed on multiple computers while the various on-vehicle interfaces are simulated.
    182. These interfaces are simulated by playing back previously logged data such as lidar data, cost maps, vehicle state, or any other interface.
    183. Analysis is then done on the basis of the output behavior.</li></ul>38<br />
    184. System Testing on the Vehicle with Simulation in the Loop<br /><ul><li>The system is executed on the robot, but live traffic is not used.
    185. Instead, simulation is used to present simulated moving objects to the real system.
    186. This allows the robot to “see” and react to this virtual traffic without the risk of testing in the presence of other vehicles.</li></ul>System Testing on the Vehicle with Live Traffic<br /><ul><li>The full-fledged system is executed in the presence of real traffic.</li></ul>39<br />
    187. Summary<br /><ul><li>This multi-modal approach partitions the urban driving problem into tractable behavioral modes (road driving, intersection handling and parking).
    188. It is a snapshot of technology under rapid development; significant improvements are ongoing in all aspects of overall system performance.</li></ul>40<br />
    189. THANK YOU<br />41<br />
