A hierarchical control system takes the shape of a tree in which each node operates independently, performing tasks assigned by its superior node, commanding tasks of its subordinate nodes, sending abstracted sensations to its superior node, and receiving sensations from its subordinate nodes. Leaf nodes are sensors or actuators.
Higher nodes thus operate on longer time scales, because the cycle time of an upper node equals the computation time of the processes in its lower nodes plus the computation the upper node itself performs.
Hierarchical Control Systems<br />A Seminar on Artificial Intelligence<br />1<br />
What is HCS?<br />A form of control system in which a set of devices and governing software is arranged in a hierarchical tree.<br />2<br />
Some Notable Features of HCS…<br /><ul><li>These human-built systems exhibit complex behavior and are often represented as a hierarchy.
The hierarchy is organized to divide decision-making responsibility.
Each element of the hierarchy is a linked node in the tree.
Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes.
Sensations and command results flow up the tree from subordinate to superior nodes.
Nodes may also exchange messages with their siblings.</li></ul>3<br />
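The flow described above — commands down the tree, abstracted sensations up — can be sketched as follows. This is a minimal illustration; the class, node, and task names are assumptions, not taken from any real HCS implementation.

```python
# Minimal sketch of a hierarchical control node.
# Commands flow down the tree; abstracted sensations flow back up.
class Node:
    def __init__(self, name):
        self.name = name
        self.subordinates = []

    def add_subordinate(self, node):
        self.subordinates.append(node)

    def command(self, task):
        """Decompose a task, push sub-tasks down to subordinates,
        and return an abstracted sensation to the superior."""
        results = [sub.command(f"{task}/{sub.name}")
                   for sub in self.subordinates]
        if not results:
            # Leaf nodes (sensors/actuators) report their own sensation.
            return f"{self.name}: done {task}"
        # Interior nodes abstract subordinate sensations before passing up.
        return f"{self.name}: {len(results)} sub-task(s) of '{task}' complete"

root = Node("mission")
drive = Node("drive")
root.add_subordinate(drive)
drive.add_subordinate(Node("steering"))
drive.add_subordinate(Node("throttle"))
print(root.command("reach-checkpoint"))
```

The superior node never sees the leaf-level detail, only the abstracted summary from its immediate subordinate — the property the bullets above describe.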
Distinguishing features of an HCS, related to its layers:<br /><ul><li>Each higher layer of the tree operates with a longer interval of planning and execution time than its immediately lower layer.
Higher layers have relaxed time constraints and can reason from a more abstract world model.
Lower layers form hybrid intelligent systems, which combine methods and techniques from several AI subfields.
They perform local tasks and goals as planned by the higher layers.</li></ul>4<br />
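The layered timing described above can be sketched as a schedule in which each layer re-plans at a longer period than the layer below it. The layer names match the deck; the period values are illustrative assumptions.

```python
# Illustrative layer schedule: higher layers re-plan less often.
# Periods are in milliseconds and are assumed values, not from the source.
layers = [
    ("mission planning", 10_000),
    ("behavior generation", 1_000),
    ("motion planning", 100),
]

def due_layers(t_ms, layers):
    """Return the layers that should re-plan at simulated time t_ms."""
    return [name for name, period in layers if t_ms % period == 0]

print(due_layers(10_000, layers))  # every layer re-plans
print(due_layers(100, layers))     # only the fastest layer re-plans
```

At most ticks only the lowest, fastest layer fires; the mission layer wakes up rarely and reasons over the abstracted state the lower layers have produced in the meantime.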
Application<br />Manufacturing, robotics and vehicles<br /><ul><li>Used for creating autonomous robots, where motion planning becomes extremely important.
DARPA and NIST sponsor such research to develop applications for military purposes.</li></ul>Artificial Intelligence<br /><ul><li>For building a generic architecture for “behavior based robotics”
A way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers.</li></ul>5<br />
<ul><li>The Defense Advanced Research Projects Agency (DARPA), earlier known as ARPA, is the most prominent research organization of the United States Department of Defense.
In November 2007, DARPA held “the Urban Challenge”, which was a prize competition for driverless vehicles.
The winning entry, “Tartan Racing”, employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modeling, and mechatronics.</li></ul>6<br />
Urban Challenge<br /><ul><li>The Urban Challenge required designers to build vehicles able to obey all traffic laws while detecting and avoiding the other robots on the course.
This is a particular challenge for vehicle software, as vehicles must make "intelligent" decisions in real time based on the actions of other vehicles.
The competition was open to teams and organizations from all around the world.</li></ul>7<br />
Rules:<br /><ul><li>Vehicles must be entirely autonomous, using only the information they detect with their sensors and public signals such as GPS.
DARPA will provide the route network 24 hours before the race starts.
Vehicles will complete the route by driving between specified checkpoints.
Vehicles may “stop and stare” for at most 10 seconds.
Vehicles must operate in rain and fog, with GPS blocked.
Vehicles must avoid collision with vehicles and other objects such as carts, bicycles, traffic barrels, and objects in the environment such as utility poles.
Vehicles must be able to operate in parking areas and perform U-turns as required by the situation. </li></ul>8<br />
<ul><li>The winner of the DARPA Urban Challenge was Tartan Racing, a team from Carnegie Mellon University, Pennsylvania.
Their vehicle “Boss” was a Chevy Tahoe with over 500,000 lines of code to autonomously navigate in town and in traffic.
Algorithm<br /><ul><li>Based on the concept of identifying driving contexts, or behaviors.</li></ul>Drive-Down-Road Behavior<br /><ul><li>Responsible for on-road driving.
The primary sub-behavior is Driving In Lanes, which includes distance-keeping behavior in the presence of a lead vehicle.
The lane selector makes lane-change decisions to achieve specified checkpoints, based on</li></ul>Timely progress in the current lane<br />The necessity of being in the correct lane<br /><ul><li>Distance-keeping behavior aims to</li></ul>Zero the difference between our vehicle’s velocity and the lead vehicle’s velocity<br />Zero the difference between the desired and actual inter-vehicle gap<br />17<br />
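The two distance-keeping goals — zeroing the velocity difference and zeroing the gap error — can be captured by a simple proportional control law. This is a hedged sketch of the idea only; the gain values and function name are assumptions, not Boss's actual controller.

```python
# Sketch of distance keeping: drive both the velocity difference and
# the gap error toward zero. Gains k_gap and k_vel are assumed values.
def distance_keeping_accel(v_ego, v_lead, gap, desired_gap,
                           k_gap=0.2, k_vel=0.6):
    """Commanded acceleration (m/s^2) for following a lead vehicle."""
    gap_error = gap - desired_gap   # positive -> we are too far back
    vel_error = v_lead - v_ego      # positive -> the lead is pulling away
    return k_gap * gap_error + k_vel * vel_error

# Too close and closing fast: the law commands braking (negative value).
a = distance_keeping_accel(v_ego=15.0, v_lead=12.0, gap=20.0, desired_gap=30.0)
print(a)
```

When both errors are zero the commanded acceleration is zero, which is exactly the steady state the two bullets above describe.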
Handle Intersection Behavior<br /><ul><li>Invoked when the vehicle is at an intersection.
Establishes a polygonal zone around the intersection and tracks all vehicles within that zone.
Vehicles that arrived at the stop line before our vehicle are given higher precedence.
Once our vehicle determines that it is its turn to move, it checks that the intersection is clear of obstacles and of vehicles that are still completing their exit or are disobeying precedence rules.
It then traverses the intersection in a “virtual lane” created by connecting the exit point of the current lane to the entry point of the goal lane.</li></ul>18<br />
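The precedence rule above — wait until every vehicle that reached a stop line before ours has cleared the zone — reduces to a small check. A minimal sketch, with assumed data shapes (the real system tracks full vehicle states, not bare timestamps):

```python
# Sketch of stop-line precedence: vehicles that arrived at the
# intersection before ours have the right of way.
def may_proceed(our_arrival, others):
    """others: list of (arrival_time, has_cleared) pairs for vehicles
    tracked inside the intersection's polygonal zone."""
    return all(cleared for arrival, cleared in others
               if arrival < our_arrival)

# One earlier arrival still inside the intersection -> wait.
print(may_proceed(5.0, [(3.0, False), (7.0, False)]))  # False
# The earlier arrival has cleared -> it is our turn.
print(may_proceed(5.0, [(3.0, True), (7.0, False)]))   # True
```

Vehicles that arrived after ours are ignored by the check, matching the precedence ordering described in the bullets.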
Perception & World Modeling<br />Objective<br /><ul><li>Interprets information from various sensors and fuses the multiple streams together to provide a composite picture of the world to the rest of the system.
Responsible for addressing three critical functions:
the detection and tracking of moving obstacles,
the detection of static obstacles,
and estimating the shape of the road.</li></ul>26<br />
Moving Obstacle Fusion<br /><ul><li>Moving obstacles are tracked using several lidars and radars.
The objective is to fuse the information gathered by the different sensors into a coherent picture.
Each sensor measures the objects it tracks.
The fusion layer maintains a global list of tracked objects.
Any unfamiliar obstacle is added to the list as a new entry.</li></ul>27<br />
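The fusion layer's bookkeeping can be sketched as follows. This is a deliberately simplified assumption-laden illustration (1-D positions, nearest-neighbour gating); the real fusion layer works in far richer state spaces.

```python
# Sketch of the fusion layer's global track list. A measurement that
# falls within GATE of an existing track updates it; anything else is
# an unfamiliar obstacle and becomes a new entry.
GATE = 2.0  # max association distance in metres (assumed value)

def fuse(global_tracks, measurements):
    """Associate sensor measurements with the global track list."""
    for m in measurements:
        match = min(global_tracks, key=lambda t: abs(t["pos"] - m),
                    default=None)
        if match is not None and abs(match["pos"] - m) <= GATE:
            match["pos"] = m                  # familiar track: update it
        else:
            global_tracks.append({"pos": m})  # unfamiliar: new entry
    return global_tracks

tracks = [{"pos": 10.0}]
fuse(tracks, [10.5, 50.0])  # 10.5 updates the track, 50.0 is new
print(len(tracks))          # 2
```

The same global list is what the tracking stage on the next slide consumes.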
Moving Obstacle Tracking<br /><ul><li>Fusion layer provides the global list of tracked objects.
Mobile obstacles are tracked using an extended Kalman filter to predict and update the state estimate and its uncertainty for each tracked object.</li></ul>28<br />
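The slide mentions an extended Kalman filter; the sketch below shows only the scalar linear predict/update cycle at its core, with all noise values assumed, to illustrate the "predict, then correct with a measurement" loop.

```python
# Scalar Kalman-style predict/update step (constant-state model).
# q (process noise) and r (measurement noise) are assumed values.
def kf_step(x, p, z, q=0.1, r=1.0):
    """x: state estimate, p: its variance, z: new measurement."""
    p = p + q                # predict: uncertainty grows by process noise
    k = p / (p + r)          # Kalman gain: how much to trust z
    x = x + k * (z - x)      # update: blend prediction and measurement
    p = (1 - k) * p          # uncertainty shrinks after the update
    return x, p

x, p = 0.0, 10.0             # poor initial guess, high uncertainty
for z in [1.0, 1.1, 0.9, 1.0]:
    x, p = kf_step(x, p, z)
print(round(x, 2))           # estimate has converged toward ~1.0
```

The extended variant the team used applies the same cycle to nonlinear motion models by linearizing them at each step; the structure of predict and update is unchanged.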
Static Obstacle Detection<br /><ul><li>This algorithm uses downward-looking lasers mounted on the robot to evaluate the terrain around the vehicle.
Based on this information, it generates a cost map representing the “traversability” of the terrain by comparing pairs of laser points.
Based on cost, the least traversable regions are categorized as “fatal”.</li></ul>29<br />
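The comparison of laser-point pairs can be illustrated with a toy classifier: the height difference between nearby points determines a cell's cost category. The thresholds and category names beyond "fatal" are assumptions for illustration.

```python
# Toy traversability classifier: the height difference (in metres)
# between a pair of nearby laser points labels a terrain cell.
# Thresholds are assumed values, not from the Tartan Racing cost map.
def classify(height_diff):
    if height_diff < 0.05:
        return "traversable"   # flat road surface
    if height_diff < 0.25:
        return "high-cost"     # e.g. a low curb: passable but penalized
    return "fatal"             # e.g. a wall or another vehicle

print(classify(0.01))   # traversable
print(classify(0.40))   # fatal
```

A planner consuming this cost map simply never routes through "fatal" cells and prefers low-cost ones elsewhere.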
Road Shape Feature Detectors<br /><ul><li>Although an RNDF (route network definition file) and aerial images of the area are provided and used to make an initial estimate of road shape, actual road features cannot be identified until the vehicle is on the road.
Online road-shape detection uses range and intensity data from downward-looking short-range lidars.
An intensity change is indicative of painted road lines.
Relative changes from flat surfaces to raised curbs, or drops to soft shoulders, are detected by looking for the appropriate geometric features.
Algorithms used include Haar wavelet transforms, heuristic edge detection with adaptive thresholding, and dynamic-programming methods.</li></ul>30<br />
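The "intensity change indicates a painted line" idea can be sketched with plain adaptive thresholding (one of the listed techniques); the real detectors also use Haar wavelets and dynamic programming, which this toy example does not attempt. The scan values are made up.

```python
# Toy lane-line detector on a 1-D lidar intensity scan: flag indices
# where the intensity jump exceeds k times the scan's mean jump
# (an adaptive threshold, recomputed per scan).
def line_indices(intensity, k=2.0):
    jumps = [abs(b - a) for a, b in zip(intensity, intensity[1:])]
    threshold = k * sum(jumps) / len(jumps)
    return [i for i, j in enumerate(jumps) if j > threshold]

# Asphalt returns low intensity; paint reflects brightly, so the edges
# of a painted stripe show up as large jumps.
scan = [10, 11, 10, 12, 90, 91, 12, 10]
print(line_indices(scan))   # the two edges of the bright stripe
```

Because the threshold adapts to each scan's own statistics, the detector keeps working as overall surface reflectivity changes — the motivation for adaptive thresholding in the slide.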
Mechatronics for Autonomous Urban Driving<br />Objective<br /><ul><li>The electrical and mechanical components provide a way for algorithms to interact with the world.
A Chevrolet Tahoe was selected as the chassis.
System testing on the vehicle with simulation in the loop
System testing on the vehicle with live traffic</li></ul>35<br />
Unit Testing<br /><ul><li>Unit tests are small, simple software tests written by the developer while writing a particular class or function.
The objective is to verify that the class method or function being tested produces the correct state changes or output for the provided set of inputs.</li></ul>36<br />
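A minimal example in the spirit described above. The function under test is hypothetical, not from the Boss codebase; the point is only that a unit test pins a function's output for a chosen set of inputs.

```python
# A hypothetical function a developer might unit-test while writing it.
def clamp_speed(v, limit):
    """Clip a commanded speed to the legal limit, never below zero."""
    return max(0.0, min(v, limit))

# Unit tests: verify correct output for each chosen input.
assert clamp_speed(10.0, 15.0) == 10.0   # within the limit: unchanged
assert clamp_speed(20.0, 15.0) == 15.0   # over the limit: clipped
assert clamp_speed(-3.0, 15.0) == 0.0    # never command reverse
print("all unit tests passed")
```

Tests like these run in seconds on a developer's machine, which is why they sit at the bottom of the testing pyramid the next slides build up.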
Subsystem testing with simulated inputs<br /><ul><li>A particular subsystem or individual process (task) is run in isolation while its interfaces are “simulated” by playing back data.
The process’s output is captured in a debug tool for analysis.
Finding bugs earlier in this way makes later integration with the rest of the system much easier.</li></ul>37<br />
System testing in pure simulation<br /><ul><li>The entire system can be executed on multiple computers while the various on-vehicle interfaces are simulated.
These interfaces are simulated by playing back previously logged data such as lidar data, cost maps, vehicle state, or any other interface.
The system’s behavior is then analyzed on the basis of this output.</li></ul>38<br />
System Testing on Vehicle with Simulation In the Loop<br /><ul><li>The system is executed on the robot, but live traffic is not used.
Instead, simulation is used to present simulated moving objects to the real system.
This allows the robot to “see” and react to this virtual traffic without the risk of testing in the presence of other vehicles.</li></ul>System Testing on Vehicle with Live Traffic <br /><ul><li>The full-fledged system is executed in the presence of real traffic.</li></ul>39<br />
Summary<br /><ul><li>This multi-modal approach partitions the urban driving problem into tractable behavioral modes (road driving, intersection handling and parking).
It is a snapshot of technology under rapid development. Significant improvements are ongoing in all aspects of overall system performance.</li></ul>40<br />