PhD Thesis - Coordination of Multiple Robotic Agents for Disaster and Emergency Response


  1. INSTITUTO TECNOLÓGICO Y DE ESTUDIOS SUPERIORES DE MONTERREY
     CAMPUS MONTERREY
     SCHOOL OF ENGINEERING AND INFORMATION TECHNOLOGIES
     GRADUATE PROGRAMS
     DOCTOR OF PHILOSOPHY IN INFORMATION TECHNOLOGIES AND COMMUNICATIONS
     MAJOR IN INTELLIGENT SYSTEMS
     Dissertation: Coordination of Multiple Robotic Agents for Disaster and Emergency Response
     By Jesús Salvador Cepeda Barrera
     DECEMBER 2012
  2. Coordination of Multiple Robotic Agents for Disaster and Emergency Response
     A dissertation presented by Jesús Salvador Cepeda Barrera
     Submitted to the Graduate Programs in Engineering and Information Technologies in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, Major in Intelligent Systems.
     Thesis Committee:
     Dr. Rogelio Soto - Tecnológico de Monterrey
     Dr. Luiz Chaimowicz - Universidade Federal de Minas Gerais
     Dr. José Luis Gordillo - Tecnológico de Monterrey
     Dr. Leonardo Garrido - Tecnológico de Monterrey
     Dr. Ernesto Rodríguez - Tecnológico de Monterrey
     Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Monterrey
     December 2012
  3. Instituto Tecnológico y de Estudios Superiores de Monterrey, Campus Monterrey
     School of Engineering and Information Technologies, Graduate Program
     The committee members hereby certify that they have read the dissertation presented by Jesús Salvador Cepeda Barrera and that it is fully adequate in scope and quality as a partial fulfillment of the requirements for the degree of Doctor of Philosophy in Information Technologies and Communications, with a major in Intelligent Systems.
     Dissertation Committee:
     Dr. Rogelio Soto - Advisor
     Dr. Luiz Chaimowicz - External Co-Advisor, Universidade Federal de Minas Gerais
     Dr. José Luis Gordillo - Committee Member
     Dr. Leonardo Garrido - Committee Member
     Dr. Ernesto Rodríguez - Committee Member
     Dr. César Vargas - Director of the Doctoral Program in Information Technologies and Communications
  4. Copyright Declaration
     I hereby declare that I wrote this dissertation entirely by myself and that it exclusively describes my own research.
     Jesús Salvador Cepeda Barrera
     Monterrey, N.L., México
     December 2012
     © 2012 by Jesús Salvador Cepeda Barrera. All Rights Reserved.
  5. Dedication
     I dedicate this work to everyone who gave me the opportunity and trusted that this time would be worth it: a time that required not only hard work and new experiences, but also constant support, patience, and encouragement through the most difficult periods.
     To my father, for his endless sacrifice to convince me to think big and to make the road, with all its difficulties, worthwhile. To him, for enduring the student's economy to this very day and for always trusting that the best is yet to come. To you, Dad, for your love and wise guidance that allow me to reach wherever I set out to go.
     To my mother, for her unequaled embrace that always opens new paths when there seems to be no way forward. To her, for the lap where strength and the motivation to try again are reborn. To you, Mom, for the love that always gives me the confidence to keep going, knowing there is someone who will accompany me forever.
     To my sister, for showing me, without intending to, that preparation is never too much, that life can become as complicated as one allows, and that there is therefore a need to keep becoming more. To you, an example of struggle and rebelliousness.
     To my technologist uncles, who never stopped investing in me or believing in me. To you, without whom reaching this moment would not have been possible. With money, tools, and constant trust, you always gave me the motivation and faith to set an example and to bet with my greatest effort.
     To my grandfather, who always wanted an engineer and now got a doctor. I dedicate to him this work, which without his knowledge and his company in the workshop would never have had the integrity that characterizes it. To you, for teaching me that engineering is not a decision, but a conviction.
     Finally, to the woman whose existence is guidance and a divine voice. To you, who know how to say and do what is needed. To you, who complement me like yin and yang, like sun and moon, like dark skin and curly hair. To you, my beautiful wife, for your constant love that never allowed sadness, not even in the worst moments. I dedicate this work for your firm willingness to leave everything behind to live and learn things you never imagined, and for your lively eagerness to travel the world at my side. To you, princess, for trusting in me and accompanying me through every one of these pages.
  6. Acknowledgements
     "If the observer were intelligent (and extraterrestrial observers are always presumed to be intelligent) he would conclude that the earth is inhabited by a few very large organisms whose individual parts are subordinate to a central directing force. He might not be able to find any central brain or other controlling unit, but human biologists have the same difficulty when they try to analyse an ant hill. The individual ants are not impressive objects; in fact they are rather stupid, even for insects; but the colony as a whole behaves with striking intelligence." – Jonathan Norton Leonard
     I want to express my deepest gratitude to all of you who helped me not be an individual ant: advisors, peers, friends, and the robotics gurus, who will doubtfully read this but who surely deserve my gratitude, because without them this work would not even have been possible.
     Thanks, Prof. Rogelio Soto, for your constant confidence in my ideas and for supporting and guiding all my developments during this dissertation. Thanks for the opportunity you gave me to work with you and to develop what I like the most, something I did not even know existed.
     Thanks, Prof. José L. Gordillo, for the hard times you gave me and for sharing your knowledge. I really appreciate both; you definitely made me a more well-rounded professional.
     Thanks, Prof. Luiz Chaimowicz, for opening the research doors from the very first day. Thanks for believing in my developments and letting me live a little of the amazing Brazilian experience. Thanks for your constant guidance even when we are more than 8000 km apart. Thanks for giving me my very first experiences with real robots and for making me understand that it is Skynet, and not the Terminator, that we should fear.
     Thanks, eRobots friends and colleagues, for not only sharing your knowledge and experiences with me but also for validating my own. Thanks for your constant support and company when nobody else would still be working. Thanks for your words when I needed them the most; you really are a fundamental part of this work.
     Thanks, Prof. Mario Montenegro and the Verlabians, for the most accurate and well-guided knowledge I have ever received about mobile robotics. Thanks for giving me the chance to be part of your team. Thanks for letting me learn from you and for keeping me as your Mexican friend even though I worked with Windows.
     Thanks, God and Life, for giving me this opportunity.
  7. Coordination of Multiple Robotic Agents for Disaster and Emergency Response
     by Jesús Salvador Cepeda Barrera

     Abstract

     In recent years, the use of Multi-Robot Systems (MRS) has become popular in several application domains. The main reason for using MRS is that they are a convenient solution in terms of cost, performance, efficiency, reliability, and reduced human exposure. Accordingly, existing robots and implementation domains are growing in number and complexity, making coordination and cooperation fundamental topics in robotics research.

     Developing a team of cooperative autonomous mobile robots has thus been one of the most challenging goals in artificial intelligence. Research has witnessed a large body of significant advances in the control of single mobile robots, dramatically improving the feasibility and suitability of MRS. These scientific contributions have also created the need to couple such advances, leading researchers to the challenging task of developing multi-robot coordination infrastructures.

     Moreover, considering all possible environments where robots interact, disaster scenarios are among the most challenging. These scenarios have no specific structure and are highly dynamic, uncertain, and inherently hostile. They involve devastating effects on wildlife, biodiversity, agriculture, urban areas, human health, and the economy, placing them among the most serious social issues for the intellectual community.

     Following these concerns and challenges, this dissertation addresses the problem of how to coordinate and control multiple robots so as to achieve cooperative behavior for assisting in disaster and emergency response. The essential motivation resides in the possibilities that an MRS offers for disaster response, including improved performance in sensing and action while speeding up operations through parallelism. It also represents an opportunity to empower responders' abilities and efficiency during the critical 72 golden hours, which are essential for increasing the survival rate and preventing larger damage.

     Therefore, we first modularize the urban search and rescue (USAR) mission, leveraging local perceptions and decomposing the mission into robotic tasks. We then develop a behavior-based control architecture for coordinating mobile robots, enhancing the most relevant control characteristics reported in the literature. Furthermore, we implement a hybrid infrastructure to ensure robustness for USAR mission accomplishment with current technology, which is best suited for simple, fast, reactive control. These single- and multi-robot architectures were designed under the service-oriented paradigm, thus promoting reusability, scalability, and extensibility.

     Finally, we study the emergence of rescue robotic team behaviors and their applicability in real disasters. By implementing distributed autonomous behaviors, we observed the opportunity to add adaptivity features so as to autonomously learn additional behaviors and possibly increase performance towards cognitive systems.
  8. List of Figures
     1.1 Number of survivors and casualties in the Kobe earthquake in 1995. Image from [267].
     1.2 Percentage of survival chances according to when the victim is located. Based on [69].
     1.3 70 years of autonomous control levels. Edited from [44].
     1.4 Mobile robot control scheme. Image from [255].
     1.5 Minsky's interpretation of behaviors. Image from [188].
     1.6 Classic and new artificial intelligence approaches. Edited from [255].
     1.7 Behavior in robotics control. Image from [138].
     1.8 Coordination methods for behavior-based control. Edited from [11].
     1.9 Group architecture overview.
     1.10 Service-oriented group architecture.
     2.1 Major challenges for networked robots. Image from [150].
     2.2 Typical USAR scenario. Image from [267].
     2.3 Real pictures from the WTC Tower 2: a) a rescue robot within the white box navigating in the rubble; b) robot's-eye view with three sets of victim remains. Image edited from [194] and [193].
     2.4 Typical problems with rescue robots. Image from [268].
     2.5 Template-based information system for disaster response. Image based on [156, 56].
     2.6 Examples of templates for disaster response. Image based on [156, 56].
     2.7 Task force in rescue infrastructure. Image from [14].
     2.8 Rescue Communicator, R-Comm: a) long version, b) short version. Image from [14].
     2.9 Handy terminal and RFID tag. Image from [14].
     2.10 Database for Rescue Management System, DaRuMa. Edited from [210].
     2.11 RoboCup Rescue concept. Image from [270].
     2.12 USARSim robot models. Edited from [284, 67].
     2.13 USARSim disaster snapshot. Edited from [18, 17].
     2.14 Sensor readings comparison. Top: simulation; bottom: reality. Image from [67].
     2.15 Control architecture for rescue robot systems. Image from [3].
     2.16 Coordinated exploration using costs and utilities. Frontier assignment considering: a) only costs; b) costs and utilities; c) resulting paths for three robots. Edited from [58].
     2.17 Supervisor sketch for MRS patrolling. Image from [168].
     2.18 Algorithm for determining occupancy grids. Image from [33].
     2.19 Multi-robot generated maps in RoboCup Rescue 2007. Image from [225].
     2.20 Behavioral mapping idea. Image from [164].
     2.21 3D mapping using USARSim. Left: Kurt3D and its simulated counterpart. Right: 3D color-coded map. Edited from [20].
     2.22 Face recognition in USARSim. Left: successful recognition. Right: false positive. Image from [20].
     2.23 Human pedestrian vision-based detection procedure. Image from [90].
     2.24 Human pedestrian vision-based detection procedure. Image from hal.inria.fr/inria-00496980/en/.
     2.25 Human behavior vision-based recognition. Edited from [207].
     2.26 Visual path following procedure. Edited from [103].
     2.27 Visual path following tests in 3D terrain. Edited from [103].
     2.28 START algorithm. Victims are sorted into Minor, Delayed, Immediate, and Expectant, based on the assessment of mobility, respiration, perfusion, and mental status. Image from [80].
     2.29 Safety, security and rescue robotics teleoperation stages. Image from [36].
     2.30 Interface for multi-robot rescue systems. Image from [209].
     2.31 Desired information for rescue robot interfaces: a) multiple image displays, b) multiple map displays. Edited from [292].
     2.32 Touch-screen technologies for rescue robotics. Edited from [185].
     2.33 MRS for autonomous exploration, mapping and deployment: a) the complete heterogeneous team; b) sub-team with mapping capabilities. Image from [130].
     2.34 MRS results for autonomous exploration, mapping and deployment: a) original floor map; b) map collected by the robots; c) autonomously planned deployment. Edited from [130].
     2.35 MRS for search and monitoring: a) Piper J3 UAVs; b) heterogeneous UGVs. Edited from [131].
     2.36 Demonstration of integrated search operations: a) robots at initial positions, b) robots searching for a human target, c) alert of target found, d) display of the nearest UGV's view of the target. Edited from [131].
     2.37 CRASAR MicroVGTV and Inuktun [91, 194, 158, 201].
     2.38 TerminatorBot [282, 281, 204].
     2.39 Leg-in-Rotor Jumping Inspector [204, 267].
     2.40 Cubic/Planar Transformational Robot [266].
     2.41 iRobot ATRV - FONTANA [199, 91, 158].
     2.42 FUMA [181, 245].
     2.43 Darmstadt University - Monstertruck [8].
     2.44 Resko at UniKoblenz - Robbie [151].
     2.45 Independent [84].
     2.46 Uppsala University Sweden - Surt [211].
     2.47 Taylor [199].
     2.48 iRobot Packbot [91, 158].
     2.49 SPAWAR Urbot [91, 158].
     2.50 Foster-Miller Solem [91, 194, 158].
     2.51 Shinobi - Kamui [189].
     2.52 CEO Mission II [277].
     2.53 Aladdin [215, 61].
     2.54 Pelican United - Kenaf [204, 216].
     2.55 Tehzeeb [265].
     2.56 ResQuake Silver2009 [190, 187].
     2.57 Jacobs Rugbot [224, 85, 249].
     2.58 PLASMA-Rx [87].
     2.59 MRL rescue robots NAJI VI and NAJI VII [252].
     2.60 Helios IX and Carrier Parent and Child [121, 180, 267].
     2.61 KOHGA: Kinesthetic Observation-Help-Guidance Agent [142, 181, 189, 276].
     2.62 OmniTread OT-4 [40].
     2.63 Hyper Souryu IV [204, 276].
     2.64 Rescue robots: a) Talon, b) Wolverine V-2, c) RHex, d) iSENSYS IP3, e) Intelligent Aerobot, f) muFly microcopter, g) Chinese firefighting robot, h) teleoperated extinguisher, i) unmanned surface vehicle, j) Predator, k) T-HAWK, l) Bluefin HAUV. Images from [181, 158, 204, 267, 287].
     2.65 Jacobs University rescue arenas. Image from [249].
     2.66 Arena in which multiple Kenafs were tested. Image from [205].
     2.67 Exploration strategy and centralized global 3D map: a) frontiers in the current global map, b) allocation and path planning towards the best frontier, c) a final 3D global map. Image from [205].
     2.68 Mapping data: a) raw data from individual robots, b) fused and corrected in a new global map. Image from [205].
     2.69 Building exploration and temperature gradient mapping: a) robots as mobile sensors navigating and deploying static sensors, b) temperature map. Image from [144].
     2.70 Building structure exploration and temperature mapping using static sensors, a human mobile sensor, and a UAV mobile sensor. Image from [98].
     2.71 Helios IX in a door-opening procedure. Image from [121].
     2.72 Real model and generated maps of the 60 m hall: a) real 3D model, b) generated 3D map with snapshots, c) 2D map with CPS, d) 2D map with dead reckoning. Image from [121].
     2.73 IRS-U and K-CFD real tests with rescue robots: a) deployment of Kohga and Souryu robots, b) Kohga finding a victim, c) operator being notified of the victim found, d) Kohga waiting until a human rescuer assists the victim, e) Souryu finding a victim, f) Kohga and Souryu awaiting assistance, g) human rescuers aiding the victim, and h) both robots continuing exploration. Images from [276].
     2.74 Types of entries in mine rescue operations: a) Surface Entry (SE), b) Borehole Entry (BE), c) Void Entry (VE), d) Inuktun being deployed in a BE [201].
     2.75 Standardized test arenas for rescue robotics: a) Red Arena, b) Orange Arena, c) Yellow Arena. Image from [67].
     3.1 MaSE methodology. Image from [289].
     3.2 USAR requirements (most relevant references used to build this diagram: [261, 19, 80, 87, 254, 269, 204, 267, 268]).
     3.3 Sequence Diagram I: Exploration and Mapping (most relevant references used to build this diagram: [173, 174, 175, 176, 21, 221, 86, 232, 10, 58, 271, 101, 33, 240, 92, 126, 194, 204]).
     3.4 Sequence Diagram IIa: Recognize and Identify - Local (most relevant references used to build this diagram: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]).
     3.5 Sequence Diagram IIb: Recognize and Identify - Remote (most relevant references used to build this diagram: [170, 175, 221, 23, 242, 163, 90, 207, 89, 226]).
     3.6 Sequence Diagram III: Support and Relief (most relevant references used to build this diagram: [58, 33, 80, 19, 226, 150, 267, 204, 87, 254]).
     3.7 Robots used in this dissertation: on the left, a simulated version of an Adept Pioneer 3DX; in the middle, the real version of an Adept Pioneer 3AT; and on the right, a Dr. Robot Jaguar V2.
     3.8 Roles, behaviors and actions mappings.
     3.9 Roles, behaviors and actions mappings.
     3.10 Behavior-based control architecture for individual robots. Edited image from [178].
     3.11 The hybrid paradigm. Image from [192].
     3.12 Group architecture.
     3.13 Architecture topology: at the top, the system element communicating wirelessly with the subsystems. Subsystems include their nodes, which can be different types of computers. Finally, components represent the running software services, depending on the existing hardware and node capabilities.
     3.14 Microsoft Robotics Developer Studio principal components.
     3.15 CCR architecture: when a message is posted into a given Port or PortSet, triggered Receivers call the Arbiters subscribed to the messaged port so that a task can be queued and dispatched to the thread pool. Ports defined as persistent are listened to concurrently, while non-persistent ports are listened to once. Image from [137].
     3.16 DSS architecture. The DSS is responsible for loading services and managing the communications between applications through the Service Forwarder. Services can be distributed on the same host and/or across the network. Image from [137].
     3.17 MSRDS operational schema. Even though DSS sits on top of CCR, many services access CCR directly, which at the same time works at a low level as the mechanism through which orchestration happens, so it is placed alongside the DSS. Image from [137].
     3.18 Behavior examples designed as services. Top: the handle-collision behavior, which, given a goal/current heading and the laser scanner sensor, evaluates possible collisions and outputs the corresponding steering and driving velocities. Middle: the detection (victim/threat) behavior, which, given the attributes to recognize and the camera sensor, implements the SURF algorithm and outputs a flag indicating whether the object has been found along with the corresponding attributes. Bottom: the seek behavior, which, given a goal position, its current position, and the laser scanner sensor, evaluates the best heading using the VFH algorithm and outputs the corresponding steering and driving velocities.
     4.1 Process for quick simulation. Starting from a simple script in SPL, we can decide which path is more useful for our robotic control needs and programming skills, going through either C# or VPL.
     4.2 Created service for fast simulations with maze-like scenarios. Available at http://erobots.codeplex.com/.
     4.3 Fast-simulation-to-real-implementation process. Going from a simulated C# service to real hardware implementations is a matter of changing one line of code: the service reference. In VPL, simulated and real services are clearly identified, providing easy interchange for the desired test.
     4.4 Local and remote approaches used in the experiments.
     4.5 Speech recognition service experiment for voice-commanded robot navigation. Available at http://erobots.codeplex.com/.
     4.6 Vision-based recognition service experiment for visual-joystick robot navigation. Available at http://erobots.codeplex.com/.
     4.7 Wall-follow behavior service. Viewed from the top, the red path is traced by a robot following the left (white) wall of the maze, while the blue path corresponds to another robot following the right wall.
     4.8 Seek behavior service. Three robots in a maze viewed from the top: one static and the other two traveling to specified goal positions. The red and blue paths are generated by the two navigating robots. On the left of the picture, a simple console shows the VFH [41] algorithm operations.
     4.9 Flocking behavior service. Three formations (left to right): line, column, and wedge/diamond. In the specific case of 3 robots, a wedge looks just like a diamond. Red, green and blue represent the paths traversed by the robots.
     4.10 Field-cover behavior service. Top: two different global emergent behaviors for the same algorithm and the same environment, both showing appropriate field coverage or exploration. Bottom: in two different environments, a single robot performing the same field-cover behavior, showing its traversed path in red. Appendix D contains complete detail on this behavior.
     4.11 Victim and threat behavior services. Being limited to vision-based detection, different figures were used to simulate threats and victims according to recent literature [116, 20, 275, 207]. To recognize them, existing algorithms were implemented, including SURF [26], HoG [90] and face detection [279] from the popular OpenCV [45] and EmguCV [96] libraries.
     4.12 Simultaneous localization and mapping features for the MSRDS VSE. Robot 1 is the red path, robot 2 the green, and robot 3 the blue. They are not only mapping the environment by themselves but also contributing towards a team map. Nevertheless, localization is a simulation cheat, and the laser scanners have none of the uncertainty they will have in real hardware.
     4.13 Subscription process. MSRDS partnership is achieved in two steps: running the subsystems, then running the high-level controller asking for subscriptions.
     4.14 Single-robot exploration simulation results: a) 15% wandering rate, with flat zones indicating high redundancy; b) better average results with less redundancy using a 10% wandering rate; c) a 5% wandering rate shows little improvement and higher redundancy; d) avoiding the past with a 10% wandering rate, resulting in over 96% completion of a 200 sq. m area exploration on every run using one robot.
     4.15 Typical navigation for qualitative appreciation: a) the environment based on Burgard's work in [58]; b) a second, more cluttered environment. Snapshots are taken from the top view and the traversed paths are drawn in red. For both scenarios, the robot efficiently traverses the complete area using the same algorithm. A black circle with D indicates the deployment point.
     4.16 Autonomous exploration showing representative results in a single run for 3 robots avoiding their own past. Full exploration is completed almost 3 times faster than with a single robot, and the exploration quality shows a balanced result, meaning efficient management of resources (robots).
     4.17 Autonomous exploration showing representative results in a single run for 3 robots avoiding their own and their teammates' past. Results show more interference and imbalance in exploration quality when compared to avoiding their own past only.
     4.18 Qualitative appreciation: a) navigation results from Burgard's work [58]; b) our gathered results. The path is drawn in red, green and blue for each robot. High similarity with a much simpler algorithm can be appreciated. A black circle with D indicates the deployment point.
     4.19 The emergent in-zone coverage behavior when running the exploration algorithm for a long time. Each color (red, green and blue) shows an area explored by a different robot. A black circle with D indicates the deployment point.
     4.20 Multi-robot exploration simulation results: appropriate autonomous exploration within different environments, including: a) open areas; b) cluttered environments; c) dead-end corridors; d) minimum exits. A black circle with D indicates the deployment point.
     4.21 Jaguar V2 operator control unit. This is the interface for the application where autonomous operations occur, including local perceptions and behavior coordination. It is thus the reactive part of our proposed solution.
     4.22 System operator control unit. This is the interface for the application where manual operations occur, including state changes and human supervision. It is thus the deliberative part of our proposed solution.
     4.23 Template structure for creating and managing reports. Based on [156, 56].
     4.24 Deployment of a Jaguar V2 for single-robot autonomous exploration experiments.
     4.25 Autonomous exploration showing representative results of implementing the exploration algorithm on one Jaguar V2. An average of 36 seconds for full exploration demonstrates operations coherent with the simulation results.
     4.26 Deployment of two Jaguar V2 robots for multi-robot autonomous exploration experiments.
     4.27 Autonomous exploration showing representative results for a single run using 2 robots avoiding their own past. Almost half the full-exploration time of single-robot runs demonstrates efficient resource management. The resulting exploration quality shows the trend towards perfect balancing between the two robots.
     4.28 Comparison between: a) the typical exploration process in the literature and b) our proposed exploration. A clear reduction of steps and complexity between sensing and acting can be appreciated.
     A.1 Generic single robot architecture. Image from [2].
     A.2 Autonomous Robot Architecture - AuRA. Image from [12].
     D.1 The 8 possible 45° heading cases, with 3 neighbor waypoints to evaluate so as to define a CCW, CW or ZERO angular acceleration command. For example, if heading in the -45° case, the neighbors to evaluate are B, C and D, as left, center and right, respectively.
     D.2 Implemented 2-state finite state automaton for autonomous exploration.
List of Tables

1.1   Comparison of event magnitude. Edited from [182].
1.2   Important concepts and characteristics in the control of multi-robot systems. Based on [53, 11, 2, 24].
1.3   FSA, FSM and BBC relationships. Edited from [192].
1.4   Components of a hybrid-intelligence architecture. Based on [192].
1.5   Nomenclature.
1.6   Relevant metrics in multi-robot systems.
2.1   Factors influencing the scope of the disaster relief effort. From [83].
2.2   A classification of robotic behaviors. Based on [178, 223].
2.3   Recommendations for designing a rescue robot [37, 184, 194, 33, 158, 201, 267].
3.1   Main advantages and disadvantages of using wheeled and tracked robots [255, 192].
4.1   Experiments' results: average delays.
4.2   Metrics used in the experiments.
4.3   Average and standard deviation of full exploration time over 10 runs using Avoid Past + 10% wandering rate with 1 robot.
4.4   Average and standard deviation of full exploration time over 10 runs using Avoid Past + 10% wandering rate with 3 robots.
4.5   Average and standard deviation of full exploration time over 10 runs using Avoid Kins Past + 10% wandering rate with 3 robots.
B.1   Comparison among different software systems engineering techniques [219, 46, 82, 293, 4].
C.1   Wake Up behavior.
C.2   Resume behavior.
C.3   Wait behavior.
C.4   Handle Collision behavior.
C.5   Avoid Past behavior.
C.6   Locate behavior.
C.7   Drive Towards behavior.
C.8   Safe Wander behavior.
C.9   Seek behavior.
C.10  Path Planning behavior.
C.11  Aggregate behavior.
C.12  Unit Center Line behavior.
C.13  Unit Center Column behavior.
C.14  Unit Center Diamond behavior.
C.15  Unit Center Wedge behavior.
C.16  Hold Formation behavior.
C.17  Lost behavior.
C.18  Flocking behavior.
C.19  Disperse behavior.
C.20  Field Cover behavior.
C.21  Wall Follow behavior.
C.22  Escape behavior.
C.23  Report behavior.
C.24  Track behavior.
C.25  Inspect behavior.
C.26  Victim behavior.
C.27  Threat behavior.
C.28  Kin behavior.
C.29  Give Aid behavior.
C.30  Aid- behavior.
C.31  Impatient behavior.
C.32  Acquiescent behavior.
C.33  Unknown behavior.
Contents

Abstract
List of Figures
List of Tables

1 Introduction
  1.1 Motivation
  1.2 Problem Statement and Context
    1.2.1 Disaster Response
    1.2.2 Mobile Robotics
    1.2.3 Search and Rescue Robotics
    1.2.4 Problem Description
  1.3 Research Questions and Objectives
  1.4 Solution Overview
    1.4.1 Dynamic Roles + Behavior-based Robotics
    1.4.2 Architecture + Service-Oriented Design
    1.4.3 Testbeds Overview
  1.5 Main Contributions
  1.6 Thesis Organization

2 Literature Review – State of the Art
  2.1 Fundamental Problems and Open Issues
  2.2 Rescue Robotics Relevant Software Contributions
    2.2.1 Disaster Engineering and Information Systems
    2.2.2 Environments for Software Research and Development
    2.2.3 Frameworks, Algorithms and Interfaces
  2.3 Rescue Robotics Relevant Hardware Contributions
  2.4 Testbed and Real-World USAR Implementations
    2.4.1 Testbed Implementations
    2.4.2 Real-World Implementations
  2.5 International Standards

3 Solution Detail
  3.1 Towards Modular Rescue: USAR Mission Decomposition
  3.2 Multi-Agent Robotic System for USAR: Task Allocation and Role Assignment
  3.3 Roles, Behaviors and Actions: Organization, Autonomy and Reliability
  3.4 Hybrid Intelligence for Multidisciplinary Needs: Control Architecture
  3.5 Service-Oriented Design: Deployment, Extendibility and Scalability
    3.5.1 MSRDS Functionality

4 Experiments and Results
  4.1 Setting up the path from simulation to real implementation
  4.2 Testing behavior services
  4.3 Testing the service-oriented infrastructure
  4.4 Testing more complete operations
    4.4.1 Simulation tests
    4.4.2 Real implementation tests

5 Conclusions and Future Work
  5.1 Summary of Contributions
  5.2 Future Work

A Getting Deeper in MRS Architectures
B Frameworks for Robotic Software
C Set of Actions Organized as Robotic Behaviors
D Field Cover Behavior Composition
  D.1 Behavior 1: Avoid Obstacles
  D.2 Behavior 2: Avoid Past
  D.3 Behavior 3: Locate Open Area
  D.4 Behavior 4: Disperse
  D.5 Emergent Behavior: Field Cover

Bibliography
Chapter 1

Introduction

"One can expect the human race to continue attempting systems just within or just beyond our reach; and software systems are perhaps the most intricate and complex of man's handiworks. The management of this complex craft will demand our best use of new languages and systems, our best adaptation of proven engineering management methods, liberal doses of common sense, and a God-given humility to recognize our fallibility and limitations." – Frederick P. Brooks, Jr. (Computer Scientist)

CHAPTER OBJECTIVES
— Why this dissertation.
— What we are dealing with.
— What we are solving.
— How we are solving it.
— Where we are contributing.
— How the document is organized.

In recent years, the use of Multi-Robot Systems (MRS) has become popular in several application domains such as military operations, exploration, surveillance, search and rescue, and even home and industry automation. The main reason for using MRS is that they are a convenient solution in terms of cost, performance, efficiency, reliability, and reduced human exposure to harmful environments. Existing robots and application domains keep growing in number and complexity, making coordination and cooperation fundamental themes of robotics research [99].

Accordingly, developing a team of cooperative autonomous mobile robots with efficient performance has been one of the most challenging goals in artificial intelligence. The coordination and cooperation of MRS involves state-of-the-art problems such as efficient navigation, multi-robot path planning, exploration, traffic control, localization and mapping, formation and docking control, coverage and flocking algorithms, target tracking, individual and team cognition, task analysis, efficient resource management, and suitable communications, among others.
As a result, research has witnessed a large body of significant advances in the control of single mobile robots, dramatically improving the feasibility and suitability of cooperative robotics. These vast scientific contributions created the need for coupling these
advances, leading researchers to develop inter-robot communication frameworks. Finding a framework for the cooperative coordination of multiple mobile robots that ensures the autonomy and the individual requirements of the involved robots has always been a challenge as well.

Moreover, considering all possible environments where robots interact, disaster scenarios are among the most challenging. These scenarios, whether man-made or natural, have no specific structure and are highly dynamic, uncertain and inherently hostile. Disastrous events such as earthquakes, floods, fires, terrorist attacks, hurricanes, trapped populations, or even chemical, biological, radiological or nuclear explosions (CBRN or CBRNE) have devastating effects on wildlife, biodiversity, agriculture, urban areas, human health, and the economy. Acting rapidly to save lives, avoid further environmental damage and restore basic infrastructure has thus been among the most serious social issues for the intellectual community.

For that reason, technology-based solutions for disaster and emergency situations are main topics for relevant international associations, which have created specific divisions for research in this area, such as IEEE Safety, Security and Rescue Robotics (IEEE SSRR) and RoboCup Rescue, both active since 2002. Therefore, this dissertation focuses on improving disaster response and recovery, promoting multiple robots as an important tool for mitigating disasters through cooperation, coordination and communication among themselves and with human operators.

1.1 Motivation

Historically, rescue robotics began in 1995 with one of the most devastating urban disasters of the 20th century: the Hanshin-Awaji earthquake of January 17th in Kobe, Japan.
According to [267], this disaster claimed more than 6,000 human lives, affected more than 2 million people, damaged more than 785,000 houses, caused direct damage estimated above 100 billion USD, and produced death rates reaching 12.5% in some regions. The same year, robotics researchers in the US pushed the idea of the new research field while serving as rescue workers at the bombing of the Murrah federal building in Oklahoma City [91]. The 9/11 events then consolidated the area as the first known real deployment of rescue robots searching for victims and paths through the rubble, inspecting structures, and looking for hazardous materials [194]. Additionally, the 2005 World Disasters Report [283] indicates that between 1995 and 2004 more than 900,000 human lives were lost and direct damage costs surpassed 738 billion USD in urban disasters alone — clearly indicating that something needs to, and can, be done.

Furthermore, these incidents, like the other disasters mentioned, can also put rescuers at risk of injury or death. In Mexico City, the 1985 earthquake killed 135 rescuers during disaster response operations [69]. At the World Trade Center in 2001, 402 rescuers lost their lives [184]. More recently, in March 2011, during the nuclear disaster in Fukushima, Japan [227], rescuers were not even allowed to enter the ravaged area because it implied critical radiation exposure. The rescue task is therefore dangerous and time-consuming, with the risk of further problems arising on site [37]. To reduce these additional risks to rescuers and victims, the search is carried out slowly and delicately, with a direct impact on the time to locate
survivors. Typically, the mortality rate increases and peaks on the second day, meaning that survivors who are not located within the first 48 hours after the event are unlikely to survive beyond a few weeks in the hospital [204]. Figure 1.1 shows the survivors rescued in the Kobe earthquake. As can be seen, beyond the third day almost no more victims are rescued. Figure 1.2 then shows the average survival chances in an urban disaster as a function of the days elapsed since the incident. After the first day, the chances of survival drop dramatically by more than 40%, and after the third day another critical drop leaves no more than a 30% chance of survival. So, there is a clear urgency for rescuers in the first 3 days, when the chances of raising the survival rate are good — giving rise to the term popular among rescue teams of the "72 golden hours".

Figure 1.1: Number of survivors and casualties in the Kobe earthquake in 1995. Image from [267].

Figure 1.2: Percentage of survival chances according to when the victim is located. Based on [69].

Consequently, real catastrophes and international contributions within IEEE SSRR and RoboCup Rescue led researchers to define the main usage of robotics in the so-called
Urban Search and Rescue (USAR) missions. The essence of USAR is to save lives, and Robin Murphy and Satoshi Tadokoro, two of the major contributors in the area, refer to the following possibilities for robots operating in urban disasters [204, 267]:

Search. Aimed at gathering information on the disaster and locating victims, dangerous materials or any potential hazards faster, without increasing the risk of secondary damage.

Reconnaissance and mapping. For providing situational awareness. This is broader than search in that it creates a reference of the ravaged zone to aid in coordinating the rescue effort, thus increasing the speed of the search, decreasing the risk to rescue workers, and providing a quantitative investigation of the damage at hand.

Rubble removal. Using robotics can be faster than manual removal, and with a smaller footprint (e.g., exoskeletons) than traditional construction cranes.

Structural inspection. Providing better viewing angles at closer distances without exposing rescuers or survivors.

In-situ medical assessment and intervention. Since medical doctors may not be permitted inside the critical ravaged area, called the hot zone, robotic medical aid ranges from verbal interaction, visual inspection and transporting medications to complete survivor diagnosis and telemedicine. This is perhaps the most challenging task for robots.

Acting as a mobile beacon or repeater. Serving as a landmark for localization and rendezvous purposes, or simply extending wireless communication ranges.

Serving as a surrogate. Decreasing the risk to rescue workers, robots may be used as sensor extensions enhancing rescuers' perception, enabling them to remotely gather information on the zone and monitor other rescuers' progress and needs.

Adaptively shoring unstable rubble. In order to prevent secondary collapse and avoid higher risks for rescuers and survivors.

Providing logistics support.
Providing recovery actions and assistance by autonomously transporting equipment, supplies and goods from storage areas to distribution points and to evacuation and assistance centres.

Instant deployment. Robots can go on site instantly, avoiding the initial overall evaluations required before human rescuers may enter, thus improving the speed of operations in order to raise the survival rate.

Other. General uses may include robots performing particular operations that are impossible or difficult for humans, since robots can enter smaller areas and operate without breaks. Robots can also operate for long periods in harsher conditions more efficiently than humans do (e.g., they need no water, food or rest, suffer no distractions, and their only fatigue is power running low).
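To make the breadth of these roles concrete, the list above can be sketched as a small data model for later task allocation. The fragment below is purely illustrative: the names (`UsarRole`, `assign`, the robot dictionaries) are hypothetical placeholders and not part of this dissertation's implementation.

```python
from enum import Enum, auto

class UsarRole(Enum):
    """Hypothetical enumeration of the USAR robot roles listed above."""
    SEARCH = auto()
    RECON_AND_MAPPING = auto()
    RUBBLE_REMOVAL = auto()
    STRUCTURAL_INSPECTION = auto()
    MEDICAL_ASSESSMENT = auto()
    MOBILE_BEACON = auto()
    SURROGATE = auto()
    SHORING = auto()
    LOGISTICS = auto()

def assign(robots, requests):
    """Greedy sketch: give each requested role to the first still-idle
    robot whose capability set includes that role (illustrative only)."""
    plan = {}
    idle = list(robots)
    for role in requests:
        for robot in idle:
            if role in robot["capabilities"]:
                plan[robot["name"]] = role
                idle.remove(robot)
                break
    return plan

robots = [
    {"name": "jaguar1", "capabilities": {UsarRole.SEARCH, UsarRole.RECON_AND_MAPPING}},
    {"name": "jaguar2", "capabilities": {UsarRole.STRUCTURAL_INSPECTION}},
]
plan = assign(robots, [UsarRole.SEARCH, UsarRole.STRUCTURAL_INSPECTION])
```

A real allocator would of course weigh cost, distance and robot health rather than take the first match; the sketch only shows how an explicit role taxonomy keeps task allocation inspectable.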
In the same line, multi-agent robotic systems (MARS, or simply MRS) have inherent characteristics of huge benefit for USAR implementations. According to [159], some remarkable properties of these systems are:

Diversity. They apply to a large range of tasks and domains, making them a versatile tool for disaster and emergency support, where tasks are plentiful.

Greater efficiency. In general, an MRS exchanging information and cooperating tends to be more efficient than a single robot.

Improved system performance. It has been demonstrated that multiple robots finish tasks faster and more accurately than a single robot.

Fault tolerance. Using redundant units makes a system more tolerant to failures by enabling possible replacements.

Robustness. By introducing redundancy and fault tolerance, a task is less compromised and the system is thus more robust.

Lower economic cost. Multiple simpler robots are usually a better and more affordable option than one powerful and expensive robot, especially for research projects.

Ease of development. Having multiple agents allows developers to focus more precisely than when trying to build one almighty agent. This is helpful when the task is as complex as disaster response.

Distributed sensing and action. This feature allows for better and faster reconnaissance while being more flexible and adaptable to the current situation.

Inherent parallelism. Multiple robots operating at the same time will inherently search and cover faster than a single unit.

So, the essential motivation for this dissertation resides in the possibilities and capabilities that an MRS can offer for disaster response and recovery. As referred, there are plenty of applications for rescue robotics, and the complexity of USAR demands multiple robots. This multiplicity promises improved performance in sensing and action, which is crucial in a disaster race against time.
It also provides a way to speed up operations by addressing diverse tasks at the same time. Finally, it represents an opportunity for instant deployment and for increasing the number of first responders in the critical 72 golden hours, which are essential for raising the survival rate and preventing larger damage.

Additionally, before getting into the specific problem statement, it is worth noting that choosing multiple robots keeps the developments herein aligned with international state-of-the-art trends, as shown in Figure 1.3. Finally, this topic provides us with an insight into the social, life and cognitive sciences, which, in the end, are all about us.
Figure 1.3: 70 years of autonomous control levels. Edited from [44].

1.2 Problem Statement and Context

The purpose of this section is to narrow the research field down to the specific problem we are dealing with. In order to do so, it is important to give precise context on disasters and hazards and on mobile robotics. We will then be able to present an overview of search and rescue robotics (SAR, or simply rescue robotics) before finally stating the problem addressed herein.

1.2.1 Disaster Response

Every day, people around the world confront experiences that cause death and injuries, destroy personal belongings and interrupt daily activities. These incidents are known as accidents, crises, emergencies, disasters, or catastrophes. In particular, disasters are defined as deadly, destructive, and disruptive events that occur when hazards interact with human vulnerability [182]. The hazard is the threat itself, such as an earthquake, a CBRNE event, a terrorist attack, or the others referred to previously (a complete list of hazards is presented in [182]). This dissertation focuses on aiding in emergencies and disasters as classified in Table 1.1.

Once a disaster has occurred, it changes with time through 4 phases that characterize emergency management according to [182, 267] and [204]. Regarding the description presented below, it is worth noting that Mitigation and Preparedness are pre-incident activities, whereas Response and Recovery are post-incident. In particular, disaster and emergency response requires being as fast as possible in rescuing survivors and avoiding further damage, while being cautious and delicate enough to prevent any additional risk. This dissertation is settled precisely in this phase, where the first responders' post-incident actions reside. The description of the 4 phases follows.

Ph. 1: Mitigation. Refers to disaster prevention and loss reduction.
Ph. 2: Preparedness. Efforts to increase readiness for a disaster.

Ph. 3: Response (Rescue). Actions immediately after the disaster to protect lives and property.

Ph. 4: Recovery. Actions to restore the basic infrastructure of the community or, preferably, to build improved communities.

Table 1.1: Comparison of event magnitude. Edited from [182].

                          Accidents           Crises        Emergencies/      Calamities/
                                                            Disasters         Catastrophes
  Injuries                few                 many          scores            hundreds/thousands
  Deaths                  few                 many          scores            hundreds/thousands
  Damage                  minor               moderate      major             severe
  Disruption              minor               moderate      major             severe
  Geographic Impact       localized           disperse      disperse/diffuse  disperse/diffuse
  Availability of
  Resources               abundant            sufficient    limited           scarce
  Number of Responders    few                 many          hundreds          hundreds/thousands
  Recovery Time           minutes/hours/days  days/weeks    months/years      years/decades

During the response phase, search and rescue operations take place. In general, these operations consist of activities such as looking for lost individuals, locating and diagnosing victims, freeing trapped persons, providing first aid and basic medical care, and transporting victims away from danger. The human operational procedure that persists across different disasters is described by D. McEntire in [182] as the following steps:

1) Gather the facts. Noting just what happened, the estimated number of victims and rescuers, the type and age of constructions, potential environmental influence, the presence of other hazards, or any detail that improves situational awareness.

2) Assess damage. Determine the structural damage in order to define the best actions, basically including: entering with medical operation teams, evacuating and freeing victims, or securing the perimeter.

3) Identify and acquire resources. Includes the need for goods, personnel, tools, equipment and technology.

4) Establish rescue priorities.
Determining the urgency of each situation in order to define which rescues must be done before others.

5) Develop a rescue plan. Who will enter the zone, how they will enter, which tools will be needed, how they will leave, how to ensure safety for rescuers and victims; everything necessary for following a strategy.
6) Conduct disaster and emergency response operations. Search and rescue, cover the zone, follow walls, analyse debris, listen for noises indicating survivors; carry out everything considered useful for saving lives. According to [267], this step is the one that takes the longest time.

7) Evaluate progress. Preventing further damage demands continuously monitoring the situation, including checking whether the plan is working or a better strategy is needed.

During the described procedure, research has witnessed characteristic human behavior [182]. For example, the first volunteers to engage are typically untrained people. The resulting lack of skills shows people willing to help but unable to handle equipment, coordinate efforts, or perform any data entry or efficient resource administration and/or distribution. Another example is that emergent and spontaneous rescuers can appear in numbers overwhelming to manage, causing division of labor and conflicting priorities, with some of them set on saving relatives, friends and neighbors without noticing other possible survivors. Additionally, professional rescuers are not always willing to use volunteers in their own operations, so from time to time there are huge crowds with just a few working hands. This situation leads to frustrations that compromise the safety of volunteers, professional rescue teams, and victims, decreasing survival rates while increasing the possibility of larger damage. The one good behavior that persists is that victims do cooperate with each other and with rescuers during search and rescue.

Consequently, we can think of volunteer rescue robotic teams conducting the search and rescue operations of step 6, which constitute the most time-consuming disaster response activities.
Robots do not feel emotions such as preference for relatives, they are typically built for a specific task, and they will surely not become frustrated. Moreover, robots have demonstrated high capability for search and coverage, wall following, and sensing in harsh environments. So, as R. Murphy et al. referred in [204]: there is a particular need to start using robots in tactical search and rescue, which covers how the field teams actually find, support, and extract survivors.

1.2.2 Mobile Robotics

Given the very broad definition of robot, it is important to state that we refer to a machine that has sensors, processing ability for emulating cognition and interpreting sensor signals (perceiving), and actuators that enable it to exert forces upon the environment to achieve some kind of locomotion; that is, a mobile robot. When considering a single mobile robot, designers must take into account at least an architecture upon which the robotic resources are settled in order to interact with the real world. Robotic control then takes place as a natural coupling of the hardware and software resources conforming the robotic system that must carry out a specified task. This robotic control has received huge amounts of contributions from the robotics community, most of them focusing on at least one of the topics presented in Figure 1.4: perception and robot sensing (interpretation of the environment), localization and mapping (representation of the environment), intelligence and planning, and mobility control.

Furthermore, a good coupling of the blocks in Figure 1.4 shall result in mobile robots capable of performing tasks with certain autonomy. Bekey defines autonomy in [29] as: a system's
Figure 1.4: Mobile robot control scheme. Image from [255].

capability of operating in the real-world environment without any form of external control for extended periods of time; such systems must be able to survive dynamic environments, maintain their internal structures and processes, use the environment to locate and obtain materials for sustenance, and exhibit a variety of behaviors. This means that autonomous systems must perform some task while, within limits, being able to adapt to the environment's dynamics. This dissertation requires special efforts towards autonomy that involve every block represented in Figure 1.4.

Moreover, when considering multiple mobile robots, additional factors intervene in building a successful autonomous system. First of all, the main intention of using multiple entities is to achieve some kind of cooperation, so it is important to define cooperative behavior. Cao et al. in [63] state that: "given some task specified by a designer, a multiple-robot system displays cooperative behavior if, due to some underlying mechanism, there is an increase in the total utility of the system". So, pursuing this increase in utility (better performance), cooperative robotics addresses the major research axes [63] and coordination aspects [99] presented below.

Group Architecture. This is the basic element of a multi-robot system: the persistent structure allowing for variations in team composition such as the number of robots, the level of autonomy, the levels of heterogeneity and homogeneity between robots, and the physical constraints. Similar to individual robot architectures, it refers to the set of principles organizing the control system (collective behaviors) and determining its capabilities, limitations and interactions (sensing, reasoning, communication and acting constraints).
Key features of a group architecture for mobile robots are: multi-level control, centralization/decentralization, entity differentiation, communications, and the ability to model other agents.
Resource Conflicts. This is perhaps the principal aspect concerning MRS coordination (or control). Sharing space, tasks and resources such as information, knowledge, or hardware capabilities (e.g., cooperative manipulation) requires coordination among the actions of each robot so that they do not interfere with each other and end up performing autonomous, coherent and high-performance operations. This may additionally require robots to take into account the actions executed by others in order to be more efficient and faster at task development (e.g., avoiding the typical issue of "everyone going everywhere"). Typical resource conflicts also deal with the rational division, distribution and allocation of tasks for achieving a specific goal, mission or global task.

Cooperation Level. This aspect considers specifically how robots cooperate in a given system. The usual approach is to have robots operating together towards a common goal, but there is also cooperation through competitive approaches. There are also types of cooperation called innate (or eusocial) and intentional, which imply direct communication either through actions in the environment or through messaging.

Navigation Problems. Inherent problems for mobile robots in the physical world include geometric navigational issues such as path planning, formation control, pattern generation, and collision avoidance, among others. Each robot in the team must have an individual architecture for correct navigation, but it is in the group architecture where navigational control should be organized.

Adaptivity and Learning. This final element considers the capability to adapt to changes in the environment or in the MRS in order to optimize task performance and efficiently deal with dynamics and uncertainty.
Typical approaches involve reinforcement learning techniques for automatically finding the correct values of the control parameters that lead to a desired cooperative behavior, a task that can be difficult and time-consuming for a human designer.

Perhaps the first important aspect this dissertation concerns is the implementation of a group architecture that consolidates the infrastructure for a team of multiple robots in search and rescue operations. To this end, Appendix A provides deeper context on this topic. From those readings we derive the following list of characteristics that an architecture must have for successful performance and relevance in a multi-disciplinary research area such as rescue robotics, which involves rapidly-changing software and hardware technologies. An appropriate group architecture must consider:

• Robotic task and domain independence.
• Robot hardware and software abstraction.
• Extendibility and scalability.
• Reusability.
• Simple upgrading.
• Simple integration of new components and devices.
• Simple debugging and prototyping.
• Support for parallelism.
• Support for modularity.
• Use of standardized tools.

These characteristics are fully considered in the implementations concerning this dissertation and are detailed further in this document. Moreover, the architectural design involves the need for a coordination and cooperation mechanism to confront the disaster response requirements. This implies solving not only individual robot control problems but also the resource conflicts and navigational problems that arise. For this reason, information on robotic control is included next.

Mobile Robots Control and Autonomy

A typical issue when defining robotic control is to find where it fits among robotic software. According to [29] there are two basic perspectives: 1) Some designers refer exclusively to robot motion control, including maintaining velocities and accelerations at a given set point and an orientation along a certain path. They consider a "low-level" control for which the key is to ensure steady states, quick response times, and other control-theory properties. 2) Other designers take robotic control to mean the ability of the robot to follow directions towards a goal. In this view, planning a path to follow is a kind of "high-level" control that constantly sends commands or directions to the robot controller in order to reach a defined goal. It is therefore difficult to find a clear division between the two perspectives.

Fortunately, a general definition of robotic control states that "it is the process of taking information about the environment, through the robot's sensors, processing it as necessary in order to make decisions about how to act, and then executing those actions in the environment" – Matarić [177].
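This definition corresponds to the classic sense-decide-act cycle, which can be sketched as a minimal control loop. The Robot interface and the obstacle-avoidance rule below are hypothetical illustrations, not part of the thesis software:

```python
import random

class Robot:
    """Hypothetical robot interface used only for illustration."""
    def sense(self):
        # Return a simulated distance-to-obstacle reading in meters.
        return random.uniform(0.0, 2.0)

    def act(self, command):
        # On a real platform this would send velocity commands to the motors.
        return command

def control_step(robot, safety_distance=0.5):
    """One iteration of the sense-decide-act cycle."""
    distance = robot.sense()            # sense: read the environment
    if distance < safety_distance:      # decide: choose an action
        command = "turn"                # too close -> avoid a collision
    else:
        command = "forward"             # clear -> keep moving
    return robot.act(command)           # act: execute in the environment
```

Every control scheme discussed below, from low-level feedback loops to behavior-based systems, is ultimately a refinement of this single loop.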
Thus, robotic control typically requires the integration of multiple disciplines such as biology, control theory, kinematics, dynamics, and computer engineering, and even psychology, organization theory, and economics. This integration implies the need for multiple levels of control, supporting the idea that both individual and group architectures are necessary.

Accordingly, from the two perspectives and the definition above, we can state that robotic control happens essentially at two major levels, for which we can embrace the concepts of platform control and activity control provided by R. Murphy in [204]. The first moves the robot fluidly and efficiently through any given environment by changing (and maintaining) kinematic variables such as velocity and acceleration. This control is usually achieved with classic control theory such as PID controllers and can thus be classified as low-level control. The next level refers to navigational control, whose main concern is to keep the robot operational, avoiding collisions and dangerous situations, and to take the robot from one location to another. This control typically involves additional problems such as localization and environment representation (mapping), so it generally relies on control strategies rooted in artificial intelligence, such as behavior-based control and probabilistic methods, and is thus classified as high-level control.
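As an illustration of the low-level platform control mentioned above, a discrete PID controller tracking a velocity set point can be sketched as follows. The gains, time step, and crude first-order plant model are arbitrary illustrative choices, not values taken from this dissertation:

```python
class PID:
    """Discrete PID controller for tracking a set point."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Control output: proportional + integral + derivative terms.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a simple first-order velocity model towards 1.0 m/s.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
velocity = 0.0
for _ in range(300):
    u = pid.update(setpoint=1.0, measurement=velocity)
    velocity += (u - velocity) * 0.05   # crude plant: velocity follows the command
```

The integral term removes the steady-state error that a purely proportional controller would leave; the activity-control layer would sit above this loop, deciding which set points to command.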
Consequently, we must clarify that this dissertation assumes there is already a robust, working low-level platform control for every robot. What remains is to develop the high-level activity control for each unit and for the whole MRS to operate in search and rescue missions. This need for activity control leads us to three major design issues [159]:

1. It is not clear how a robot control system should be decomposed, meaning that particular problems at intra-robot control (individuals) differ from those at inter-robot control (group).
2. The interactions between separate subsystems are not limited to directly visible connecting links; interactions are also mediated via the environment, so that emergent behavior is a possibility.
3. As system complexity grows, the number of potential interactions between the components of the system also grows.

Moreover, the control system must address and demonstrate the characteristics presented in Table 1.2. What is important to notice is that coordinating multi-robot teams in dynamic environments is a very challenging task. Fundamentally, for a robotic team to be successfully controlled, every action performed by each robot during cooperative operations must take into account not only the robot's perceptions but also its properties, the task requirements, the information flow, the teammates' status, and the global and local characteristics of the environment. Additionally, there must exist a coordination mechanism for synchronizing the actions of the multiple robots. This mechanism should support the exchange of the information necessary for mission accomplishment and task execution, and provide the flexibility and reliability required for efficient and robust interoperability.

Furthermore, to fulfill controller needs, the robotics community has been deeply concerned with creating standardized frameworks for developing robotic software.
Since these frameworks are significant for this dissertation, information on them is included in Appendix B, with particular focus on Service-Oriented Robotics (SOR). Robotic control, as well as the individual and group architectures, must consider the service-oriented approach as a way of promoting reusability and portability. In this way, the software developed for this dissertation can be deployed across different resources and circumstances, becoming a more relevant and portable solution with a wider impact.

Table 1.2: Important concepts and characteristics on the control of multi-robot systems. Based on [53, 11, 2, 24].

Situatedness: The robots are entities situated and surrounded by the real world. They do not operate upon abstract representations.
Embodiment: Each robot has a physical presence (a body). This has consequences in its dynamic interactions with the world.
Reactivity: The robots must take into account events within time bounds compatible with the correct and efficient achievement of their goals.
Coherence: Robots should appear to an observer to have coherence of actions towards goals.
Relevance / Locality: The active behavior should be relevant to the local situation sensed by the robot.
Adequacy / Consistency: The behavior selection mechanism must drive towards mission accomplishment, guided by the tasks' objectives.
Representation: The world state should be shared between behaviors and also trigger new behaviors.
Emergence: Given a group of behaviors, there is an inherent global behavior with group and individual implications.
Synthesis: To automatically derive a program for accomplishing the mission.
Communication: Increase performance by explicit information sharing.
Cooperation: Robots should achieve more by operating together.
Interference: Creation of protocols for avoiding unnecessary redundancies.
Density: N robots should be able to do in 1 unit of time what 1 robot does in N units of time.
Individuality: Interchangeability results in robustness because of repeatability or unnecessary robots operating.
Learning / Adaptability: Automate the acquisition of new behaviors and the tuning and modification of existing ones according to the current situation.
Robustness: The control should be able to exploit the redundancy of the processing functions. This implies being decentralized to some extent.
Programmability: A useful robotic system should be able to achieve multiple tasks described at an abstract level. Its functions should be easily combined according to the task to be executed.
Extendibility: Integration of new functions and definition of new tasks should be easy.
Scalability: The approach should easily scale to any number of robots.
Flexibility: The behaviors should be flexible enough to support many social patterns.
Reliability: The robot can act correctly in any given situation over time.

1.2.3 Search and Rescue Robotics

Having briefly covered disasters and mobile robots, it is appropriate to merge both research fields and discuss robotics intended for disaster response. In spite of all the previously mentioned possibilities for robotics in search and rescue operations, this technology is new, and its acceptance, as well as its hardware and software maturity, will take time. According to [204], as of 2006, rescue robotics had taken part in only four major disasters: the World Trade Center and hurricanes Katrina, Rita, and Wilma. Likewise, in the 2011 nuclear disaster at Fukushima, Japan, robots were barely used because of problems such as mobility in harsh environments where debris is scattered among tangled steel beams and collapsed structures, difficulties in communication caused by thick concrete walls and large amounts of metal, and physical presence within adverse environments, because radiation affects electronics [227]. In short, the typical difficulty of sending robots inside major disasters is the need for a big and slow robot that can overcome these challenges [217], not to mention the need for robots capable of performing specific complex tasks such as opening and closing doors and valves, manipulating fire-fighting hoses, or even carefully handling rubble to find survivors.

It is worth mentioning that many types of robots have been proposed for search and rescue, including robots that can withstand radiation and fire-fighting robots that shoot water at buildings, but there is still no single all-mighty unit. For that reason, most typical rescue robotics implementations in the United States and Japan address local incidents such as urban fires, and search with unmanned vehicles (UxVs). In fact, most real implementations used robots only as the eyes of the rescue teams, gathering more information from the environment and monitoring its conditions for better decision making. Even so, all the real operations allowed only teleoperated robots and no autonomy at all [204]. Nevertheless, these real implementations are responsible for a better understanding of the sensing and acting requirements, as well as for listing the possible applications of robots in a search and rescue operation.

On the other hand, within the typical USAR scenarios where rescue robotics research is implemented, there are the contributions of the IEEE SSRR society and the RoboCup Rescue. Main tasks include mobility and autonomy (act), search for victims and hazards (sense), and simultaneous localization and mapping (SLAM) (reason). Human-robot interaction has also been deeply explored.
The simulated software version of the RoboCup Rescue has yielded interesting contributions in exploration, mapping, and victim detection algorithms; good sources describing some of these contributions are [20, 19]. The real testbed version has not only validated the functionality of previously simulated contributions, but also pushed the design of unmanned ground vehicles (UGVs) that show complex abilities for mobility and autonomy. It has also leveraged better use of proprioceptive instrumentation for localization, as well as exteroceptive instrumentation for mapping and for victim and hazard detection. Good examples of these contributions can be found in [224, 261].

So, even though the referred RoboCup contributions are simulated solutions far from a real disaster response operation, they advance the idea of UGVs that can enable rescuers to find victims faster and identify possibilities for secondary damage. They also open possibilities for other unmanned vehicles: larger UGVs able to remove rubble faster than humans can, unmanned aerial vehicles (UAVs) that extend the senses of the responders by providing a bird's-eye view of the situation, and unmanned underwater vehicles (UUVs) and unmanned surface vehicles (USVs) that similarly extend and enhance the rescuers' senses [204].

In summary, some researchers are encouraging the development of practical technologies such as rescue robot design, intelligent sensors, information equipment, and human interfaces for assisting in urban search and rescue missions, particularly victim search, information gathering, and communications [267].
Other researchers are leveraging developments such as processing systems for monitoring and teleoperating multiple robots [108], and expert systems for simple triage and rapid medical treatment of victims [80]. A few others are pursuing the analysis and design of real USAR robot teams for the RoboCup [261, 8], fire-fighting [206, 98], damaged building inspection [141], mine rescue [201], underwater exploration [203], and unmanned aerial systems for after-collapse
inspection [228]; but these are still in a premature phase, not fully implemented and with no autonomy at all. We can thus synthesize that researchers are addressing rescue robotics challenges in the following order of priority: mobility, teleoperation and wireless communications, human-robot interaction, and robotic cooperation [268]; and the fundamental work is being led mainly by Robin Murphy, Satoshi Tadokoro, and Andreas Birk, among others (refer to Chapter 2 for full details).

The truth is that there are many open issues and fundamental problems in this barely explored and challenging research field of rescue robotics. There is an explicit need for robots that help to quickly locate, assess, and even extricate victims who cannot be reached, and there is an urgency for extending the rescuers' ability to see and act in order to improve disaster response operations, reduce the risk of secondary damage, and even raise survival rates. Also, although an important number of robotics researchers around the globe focus on particular problems in the area, there seems to be little direct effort towards generating a collaborative rescue multi-robot system, which appears to lie further in the future. In fact, the RoboCup Rescue estimates a fully autonomous collaborative rescue robotic team by 2050, which sounds like a reasonable timeline.

1.2.4 Problem Description

At this point we have presented several possibilities and problems concerning robotics for disaster and emergency response. We have mentioned that robots fit well as rescuer units for conducting search and rescue operations, but several needs must be met. First, we defined the need for crafting an appropriate architecture for the individual robots as well as for the complete multi-robot team.
Next, we added the necessity for appropriate robotic control and the efficient coordination of units in order to take advantage of the inherent characteristics of a MRS and provide efficient and robust interoperability in dynamic environments. Then we included the requirement for software design under the service-oriented paradigm. Finally, we noted that while there is a good number of relevant contributions using single robots for search and rescue, that is not the case for multiple robots. Thus, in general, the central problem this dissertation addresses is the following:

HOW DO WE COORDINATE AND CONTROL MULTIPLE ROBOTS SO AS TO ACHIEVE COOPERATIVE BEHAVIOR FOR ASSISTING IN DISASTER AND EMERGENCY RESPONSE, SPECIFICALLY, IN URBAN SEARCH AND RESCUE OPERATIONS?

It must be clear that this problem implies the use of multiple robotic agents working together in a highly uncertain and dynamic environment with special needs for quick convergence, robustness, intelligence, and efficiency. Also, even though the essential purpose is to address navigational issues, other factors include: time, physical environmental conditions, communications management, security management, resources management, logistics management, information management, strategy, and adaptivity [83]. We can generalize by stating that the rescue robotic team must be prepared to navigate in a hostile dynamic environment where time is critical, where sensitivity and multi-agent cooperation are crucial, and where strategy is vital to focus the efforts towards supporting human rescuers in achieving faster and more secure USAR operations.
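As one concrete illustration of the coordination problem just stated, consider dividing search tasks among the members of a team. A common mechanism in the MRS literature, shown here purely as a sketch and not as the method developed in this dissertation, is greedy single-item auction-based task allocation; the distance-plus-workload cost model and all names below are hypothetical:

```python
import math

def distance(a, b):
    """Euclidean distance, used as a hypothetical bidding cost."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def auction_allocate(robot_positions, task_positions):
    """Greedy single-item auctions: each task goes to the cheapest bidder.

    robot_positions: dict robot_id -> (x, y)
    task_positions:  dict task_id  -> (x, y)
    Returns a dict task_id -> robot_id.
    """
    assignment = {}
    load = {r: 0 for r in robot_positions}   # tasks already won, for balancing
    for task, tpos in task_positions.items():
        # Each robot bids its travel cost, penalized by its current workload.
        bids = {r: distance(rpos, tpos) + load[r]
                for r, rpos in robot_positions.items()}
        winner = min(bids, key=bids.get)
        assignment[task] = winner
        load[winner] += 1
    return assignment

robots = {"r1": (0.0, 0.0), "r2": (10.0, 0.0)}
tasks = {"t1": (1.0, 1.0), "t2": (9.0, 1.0)}
print(auction_allocate(robots, tasks))   # assigns t1 to r1 and t2 to r2
```

Auctions of this kind are attractive in USAR settings because bidding needs only local information and message exchange, matching the decentralization and communication characteristics listed in Table 1.2.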
