Fire in Harbour

  • The University of New Brunswick (CONFIDENTIAL)
    Research project: Modelling Public Security Operations
    Sketch of anticipated deliverables
    Project principal investigator: Prof. M. Ulieru, Fredericton
  • August 2009
  • 1. Synopsis

To enable a collaborative framework for the NSERC project 'Enabling the SOS Network', we illustrate on a simplified 'harbour fire' scenario how our simulation capability can check "compatibility" of organizational policies by showing how "incompatibility" of policies can result, for example, in a deadlock in their collaborative behaviour. To ease common understanding across the multi-disciplinary team we follow an incremental 'top-down' approach and begin by modelling one agent per organization type (e.g. police, firefighters, military, etc.), studying the effect of their interactions under various scenario instances. We then scale up to several agents per organization to investigate how the complexity of the endeavour affects the deployed holonic meta-organization as the number of agents increases.

To achieve this goal we have been advised by the Project Authority to illustrate our methodology on a particular example, namely a chemical fire outbreak on a ship located in a harbour near a densely populated area. To ensure meaningful results we request that the Project Authority provide us with the necessary information and details regarding emergency response operations pertaining to this example. At a minimum we would expect the Project Authority to facilitate our access to the appropriate authorities who can provide the information necessary for the conduct of our research.

To ease a shared understanding between the project teams and enable cross-fertilisation, we strongly suggest that this example be adopted by the other research streams, so that results obtained by the other clusters can be incorporated seamlessly into our simulation model.

2. Introduction

This document presents a sketch of the anticipated deliverables of the NSERC project 'Enabling the SOS Network', which will be of high use in accomplishing the deliverables of the project 'Modelling Public Security Operations'.
We see the goal of this project as developing a strategy of crisis management supported by an intelligent information technology infrastructure of security ecosystems [9, 9]. The expected results, a simulation modelling tool capable of encapsulating the dynamics of security systems, will contribute to the creation of such an ecosystem infrastructure supporting crisis management. To achieve this goal we have been advised by the Project Authority to illustrate our methodology on a particular example, namely a chemical fire outbreak on a ship standing in the harbour near a densely populated area. For consistency and ease of interoperability between the project teams we strongly suggest that this example be adopted by the other research streams, and we request that the Project Authority provide us with the necessary information and details regarding
  • emergency response operations pertaining to this example. At a minimum we require the Project Authority to facilitate our access to the appropriate authorities who can provide the information necessary for the conduct of our research.

To illustrate how multi-agent modelling can emphasise the role of organizational policies, we begin by modelling and simulating the collaboration of the actors involved in handling the crisis. For simplicity, we will consider a limited set of organisations. For each organisation we will develop an agent, and then run a multi-agent simulation. Agents will be programmed based on policies [TODO: reference to agents, norms and institutions]. We follow the multi-agent systems (MAS) modelling paradigm [REFERENCE: e.g. Wooldridge] using the agents, norms and institutions methodologies [TODO: references and a brief description of this methodology, so that our approach is clear to every reader]. This will give us first-hand information on the impact of organizational policies on the effectiveness of collaboration, a foundation from which we can then scale up in a nested holonic manner with multiple actors per organization.

Our focus here is to illustrate our methodology to the TIF participants and to obtain a common understanding and agreement as a foundation for our collaborative work towards achieving the project's overall goals. For this reason we will not go too deep into achieving realism of policies and simulations at this step, but will use its results to develop the foundations of an ICT-enabled infrastructure for real-time crisis management.
We assume that each organisation involved in crisis management follows its own internal and global policies (written and unwritten), attuned by the 'invisible hand' of other, more or less ethereal psycho-social factors such as professional culture, personal motivation of actors, organizational leadership, etc. At this stage we will consider only simple (crisp) policies that can be articulated clearly, and will proceed to incrementally include other factors once they are obtained by the social cluster. To accommodate the high degree of uncertainty inherent at this stage of the project, we embed the agent design in a highly flexible architecture that can accommodate a wide range of psycho-social concepts later. Once the first results of this initial step are digested and argued by the TIF team, and accordingly agreed upon by the Project Authority, we will proceed with more complex policies and refine the individual agents within each organization according to the nested holonic paradigm.

Summing up: to clarify our methodology we will illustrate on a 'harbour fire' scenario how our simulation capability can check "compatibility" of organizational policies by showing how "incompatibility" of policies can result, for example, in a deadlock in their collaborative behaviour. For this we will follow an incremental 'top-down' approach, modelling one agent per organization type (e.g. police, firefighters, etc.) and studying the effect of their interactions,
  • then we will scale up to several agents per organization while investigating the complexity of the endeavour as the number of agents increases.
  • 3. Agency Modelling Methodology

One of the key outcomes of this project will be a methodology for the systematic modelling of software agents based on policies written in natural language. As previously mentioned, we follow the agents, norms and institutions approach [TODO: clearly introduce the scientific foundation of this approach, with references].

3.1 Requirements and Overall Methodology

According to the Project Authority, the resulting model of the system shall be capable of: (1) perceiving that there is a problem; (2) understanding the nature of the problem; (3) identifying and implementing solutions; (4) monitoring the process dynamics to determine whether the implemented solution is appropriate and still applicable, or whether another solution shall be adaptively generated to accommodate situational changes; (5) forecasting whether the implemented solution is producing unintended consequences (e.g. in case of policy clashes).

To fulfil these requirements incrementally (from simple to complex) using the institutional-framework MAS approach, we anticipate three steps in the development of the modelling methodology:

I. Model each organisation as an "entity" driven by policies. In simulation, this will enable the identification of conflicts between the policies and of the corresponding consequences for the emergency response operation.

II. Refine the models by introducing more factors and details of the decision-making process into the agents' protocols ('logic'). This will be achieved by separating "command and control" from the model of the environment and locating the model of cognition in the "control centre".

III. Include socio-psychological aspects in the model of decision making.
The methodology of agent design will be refined through these research steps, which are discussed in the subsequent sections.

I. Modelling organisations driven by policies using the MAS institutional framework

[TODO: explain how this step uses the agents, norms and institutions MAS framework, or how it differs from it; be scientific and refer to the prior work on which our methodology builds.]
  • Figure 1 illustrates our strategy for developing the simulation modelling framework, which consists of the following steps:

1. Identify scenarios of inter-agent interactions.
2. Define the agents' interfaces and their data models, then "populate" the agents with rules and facts written in natural language.
3. Capture the internal behaviour of the agents using a high-level agent programming language such as, for example, Brahms [9]. [TODO: mention alternatives; Brahms is not the only language suitable for this job.]
4. Run multi-agent simulations.
5. Compare the behaviour of the multi-agent system with the developed scenarios; if needed, refine the agents' code until their behaviour fits the expectations set by the scenarios. [TODO: define these 'expectations' precisely.]

Figure 1. Strategy for developing the simulation modelling framework.

6. To establish the validity and consistency of our results and of the overall methodology, we will also investigate formal verification by model checking [REFERENCE] as a validation technique complementary to simulation and expert opinion. Model checking enables proving that certain properties hold in different scenarios, including those which arise in the presence of uncertainties, by exploring the state space of the model [9]. [TODO: clarify this with a simple example of properties for our scenario.]
This requires the modelling of agents to be done in a formal modelling language, of which we consider NCES (Net Condition/Event Systems) to be our preferred choice due to its proven track record of successful use in academic research as well as in industrial applications within the holonic manufacturing systems consortium [9, 9,
  • 9]. [TODO: a few words about NCES, how it fits the holonic paradigm, and especially how it enables tracking of 'nested' decision making within a holonic organization and across inter-holarchies of ecosystems.] Model checking also enables the optimization of agent behaviour by exploring the state space and finding, for example, trajectories with minimum time duration. An example-based illustration of formal verification is provided in Section 6 of this document.

The results of this first step will have value of their own (validation of policies), but our focus here will be to illustrate our methodology to the TIF participants and to obtain a common understanding and agreement as a foundation for our collaborative work towards achieving the project's overall goals. For this reason we will not go too deep into achieving realism of policies and simulations at this step, but will use its results to develop the foundations of an ICT-enabled infrastructure for real-time crisis management. We provide an example-based discussion of this step in Sections 4 and 5 of this document. In the sequel we detail the steps on the illustrative example.

I.1. Illustrative Scenario

Consider the situation of a burning tanker in the harbour, illustrated in Figure 3 (see footnote 1). The model of the world in this case may include such parameters as the location of vessels in the harbour, their geometric description and other properties (e.g. load), the dynamics of the slick field development in case of an oil spill, etc. Importantly, the model should be able to react to external actions (say, of firefighters) as the real world would. (Here the scenario is to be detailed at least enough to illustrate all the concepts as they are introduced below;
identify the agents and their coarse-grain design, etc.)

In Figure 3 we present a sketch of three scenarios evolving from the fire outbreak on the tanker (to be clearly marked in the figure as 'Scenario 1', 'Scenario 2', etc.). In the first scenario no action is taken by the firefighters. As a result, more and more oil is dumped into the harbour water, fire develops throughout the tanker, and the burning slick field on the water reaches the coastal area. In the end, the temperature in the ship grows above the limit and the tanker explodes. In the second scenario, a small fireboat is deployed to extinguish the fire on the tanker, while a larger and more powerful fireboat attempts the same with the slick spot on the water. While the fire is partially controlled, this is unfortunately not sufficient to stop the tanker from burning, and oil keeps flowing into the water, enlarging the burning oil spot.

1 This scenario is a complete fiction and may be criticized from many angles. Nevertheless, we believe it illustrates to some extent the potential of our modelling approach.
  • The third strategy concentrates the effort of both fireboats on the tanker. After the fire is extinguished there, it becomes possible to stop the leak and focus on the slick field. This strategy turns out to be the most promising, containing the fire and almost removing the oil spot.

I.2. Refinement 1: interfaces, cognition, explicit modelling of the environment

After the initial coarse-grain design of the agents representing the various organizations, the agents will be refined using agent-oriented design methodologies [9, 9]. In particular, we will use the popular model-view-control (MVC) architecture [9] (Figure 2). Such an architecture distinguishes, for each agent, a decision-making unit (the "Control" block in Figure 2) from the model of the "world" or environment (the "Model" block in Figure 2) onto which the agent's actions are applied. By analysing the state of the Model unit, the Controller selects actions through some reasoning process, for example a rule-based one.

For our example we assume that the situation is to be handled by the maritime fire department, whose control centre at this stage is modelled by the "Control" block (Figure 3). The actions of the controller include the place, intensity and duration of fire-extinguishing actions. The decision on a particular action may result from complex reasoning inside the controller alone, or may be made in collaboration with the controllers of other organisations involved. The reasoning is based on the current state of the Model (cognition) and on applying rules and policies formalized in an appropriate formal language. For example, when a vessel approaches the port, the decision on where to dock it can result from a complex negotiation process between several organisations, which will be explicitly modelled.
Or, when two organisations try to extinguish fire on and around the ship, one may have the priority of protecting harbour and port buildings, while the other is more concerned about the whole region, its ecology, etc.

The role of the "View" module is to receive the same status information from the "Model" unit and use it to visualize what is going on in the world at every moment of time. This will provide a common integrated situational-awareness picture, facilitating a shared understanding by all parties involved. Given that it is not essential for understanding our problem at this stage, for simplicity we will omit the "View" component and use only a reduced Model-Control configuration, as illustrated in Figure 4, which reduces the 'problem space' to the essential elements that ease the understanding of our approach. With this architecture we will be able to see the difference when policies change.
  • Figure 2. Model-View-Control architecture.

Figure 3. Illustration of three different control strategies applied in the same situation.
  • Once the Model-View-Control system is simulated in model time, the state of the Model changes with increasing clock readings and, thanks to the View component, can be observed.

The value of the Model-(View)-Control architecture is that it enables us to "play" with the Control part of the simulation framework, changing policies, priorities, initial conditions, etc., to see their impact on the development of the overall situation. The challenge of MVC design is in obtaining a clear separation of the Model, View and Control units from each other while keeping the fidelity of the problem space. [TODO: illustrate this on our example.] However, the effort is very rewarding: once an appropriate separation is achieved, the Model and View units can be re-used over and over again, while the Control unit can be modified as other, more appropriate policies are identified.
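The Model-Control separation described above can be sketched in a few lines of Python. This is a toy illustration only: the class names, state variables, world dynamics and the single crisp policy are invented to show the separation, not taken from the project's actual design.

```python
# Minimal sketch of the Model-Control separation.
# All names, numbers and rules below are hypothetical illustrations.

class HarbourModel:
    """The 'Model' block: world state that reacts to external actions."""
    def __init__(self):
        self.tanker_fire = 10.0   # fire intensity on the tanker
        self.slick_fire = 0.0     # burning slick field on the water

    def step(self, action):
        # The world has its own dynamics: the burning tanker feeds the slick.
        self.slick_fire += 0.1 * self.tanker_fire
        # Apply the controller's extinguishing action to one target.
        if action == "extinguish_tanker":
            self.tanker_fire = max(0.0, self.tanker_fire - 4.0)
        elif action == "extinguish_slick":
            self.slick_fire = max(0.0, self.slick_fire - 4.0)

class RuleController:
    """The 'Control' block: selects actions by analysing the Model state."""
    def decide(self, model):
        # A crisp, clearly articulated policy: put out the tanker first.
        if model.tanker_fire > 0:
            return "extinguish_tanker"
        if model.slick_fire > 0:
            return "extinguish_slick"
        return "stand_by"

model, controller = HarbourModel(), RuleController()
for t in range(10):
    model.step(controller.decide(model))
print(round(model.tanker_fire, 1), round(model.slick_fire, 1))  # 0.0 0.0
```

Because the policy lives entirely in the Control class, swapping in a different policy (say, extinguishing the slick first) requires no change to the Model, which is precisely the re-use benefit noted above.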
With regard to formal verification: here we first refine the agent representing the overall behaviour of the organisation into an agent in which the Control and Model parts are clearly separated. We apply this approach to all the organisations involved in the crisis and put them together into a multi-agent simulation. After that we may apply the same analysis technique of comparing traces as in the previous case (Figure 1). We may also apply formal verification, for which we will need an updated set of models in which the control and the model of behaviour are clearly separated.
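As a toy illustration of the kind of property such verification can establish, the following Python sketch exhaustively explores the state space of a two-agent transition system and reports deadlock states, i.e. states in which the agents' policies leave no enabled action. The "policies" below are invented to exhibit a circular wait, in the spirit of the policy-incompatibility deadlock mentioned in the Synopsis; a real analysis would use a proper model checker over NCES models rather than hand-written Python.

```python
# Toy state-space exploration in the spirit of model checking.
# The two-agent transition system below is invented for illustration.

from collections import deque

# Each state: (fireboat_A, fireboat_B). Invented policies: A waits for B
# to secure the slick before cooling the tanker; B waits for A to start
# cooling the tanker before securing the slick.
def successors(state):
    a, b = state
    moves = []
    if a == "idle" and b == "slick_secured":
        moves.append(("cooling", b))        # A's policy precondition met
    if b == "idle" and a == "cooling":
        moves.append((a, "slick_secured"))  # B's policy precondition met
    return moves

# Breadth-first exploration of all reachable states.
initial = ("idle", "idle")
seen, queue, deadlocks = {initial}, deque([initial]), []
while queue:
    state = queue.popleft()
    nxt = successors(state)
    if not nxt:
        deadlocks.append(state)  # no enabled action: a deadlock
    for s in nxt:
        if s not in seen:
            seen.add(s)
            queue.append(s)

print(deadlocks)  # → [('idle', 'idle')]
```

The initial state itself is a deadlock here, because each fireboat's policy waits on a precondition that only the other can establish. This circular wait is exactly the kind of policy incompatibility that the simulation and verification steps are meant to expose.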
  • Figure 4. Refinement of agents using the Model-Control architecture.

As a result of this step, we will be able to address most of the goals set at the beginning of this section. For example, when a vessel approaches the port, the decision on where to dock it can result from a complex negotiation process between several organisations, which will be explicitly modelled. Or, when two organisations try to extinguish fire on and around the ship, one may have the priority of protecting harbour and port buildings, while the other is more concerned about the whole region, its ecology, etc. Driven by their goals, priorities and policies (and failing to come to agreement), they may start doing things in the wrong order relative to what is required, and even block each other. This will be seen in simulation, once policies, negotiations, etc., have been modelled.

Figure 5. Refinement of agents using the Model-Control architecture.

I.3. Refinement 2: introducing the 'human factor' model in the Controller unit

The results of the previously described modelling and simulation can give us sufficient help in evaluating the policies of the different organisations involved in crisis management. However, they are of little help in real time, once a crisis starts. Also, the model does not reflect any social aspects. While organizational policies are fixed and meant to be applied 'top-down' in a hierarchical fashion, their implementation is achieved through people, which introduces a substantial subjective dimension (e.g. tiredness and other psychological complications such as fear, pursuing goals other than direct work responsibilities, etc.) and increases the complexity of the problem space.
Rather than being 'cogs' performing required tasks in the organization 'machine', people are creative, have initiative, and can contribute autonomous solutions which can prove crucial for the success of operations in the unexpected situations arising in an emergency. The policies are applied via humans, who may substantially change the outcome. To encapsulate this 'human factor', the Control unit will be further subdivided into a model of a decision-making person, the "Human Operator
  • - HO", and a "Personal Assistant Agent - PAA", as shown in Figure 6. This architecture delineates, and enables modelling of, the boundary between the social (physical) world and the 'cyber' (control) part of the security ecosystem (regarded as a cyber-physical ecosystem [Doursat and Ulieru 2008]), with the PAA influencing and directing the actions of the HO, thus supporting the finding of the most appropriate course of action in the chaos of a crisis. While the previous case considered the "ideal" policy-application case, here the policies will be applied not directly but via the human operator. The model of the human operator may include elements reflecting tiredness and other psychological complications such as fear, pursuing goals other than direct work responsibilities, etc. The HO model will in turn include human cognition and decision-making components which determine the actions taken. [TODO: give an example and illustrate these agents with a figure.] [TODO: describe the human cognition component.] The decision-making component will be based upon models of common-sense behaviour [TODO: references or examples defining these models from a scientific perspective], which include listening to the recommendations of the PAA, but also reflecting on personal experience, etc. The PAA component contains the rules of each individual agent ('protocols' of work defining the agent's actions) which, compiled together using emergent engineering (Ulieru and Doursat 2009), will result in the desired meta-organizational policies that prove most effective for a particular instance. Combining agents according to their protocols enables the spontaneous creation (emergence) and deployment of appropriate 'ecologies' [Doursat and Ulieru 2010, submitted to IEEE Transactions on SMC] for each particular emergency situation.
Such a PAA ecology is seen as a part of the future intelligent ecosystem infrastructure deployed to handle such disasters in the most efficient way. The model of policies and rules will be implemented in the PAA. The operation of the PAA ecology will provide each HO with recommendations on which concrete actions to take at each specific moment as the crisis unfolds [Autonomics 2008]. The PAA will also have cognition abilities, but only formal ones, i.e. it will have access to the information available through sensors and through communication with other PAAs. [TODO: explain this with an example.] While in simulation this architecture will give more realistic results, one more side benefit is expected: the PAA can in the future be directly deployed as a part of a techno-social ecosystem.
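Very schematically, the HO/PAA split could look like the following Python sketch. The class names, the single policy rule and the one-parameter "fatigue" model are invented placeholders for the much richer socio-psychological models discussed above.

```python
# Invented sketch of the PAA/HO split: the PAA applies a formalized
# policy; the HO may deviate from it for psychological reasons.

import random

class PersonalAssistantAgent:
    """Cyber part: applies formalized policies to sensed information."""
    def recommend(self, sensed_fire_location):
        # A single placeholder policy rule: act on the sensed fire.
        return f"extinguish at {sensed_fire_location}"

class HumanOperator:
    """Physical part: may or may not follow the recommendation."""
    def __init__(self, fatigue):
        self.fatigue = fatigue  # 0.0 (fresh) .. 1.0 (exhausted)

    def act(self, recommendation):
        # A fatigued operator may fall back on personal experience
        # instead of following the PAA recommendation.
        if random.random() < self.fatigue:
            return "improvised action"
        return recommendation

paa = PersonalAssistantAgent()
fresh, tired = HumanOperator(0.0), HumanOperator(0.9)
rec = paa.recommend("tanker deck")
print(fresh.act(rec))  # a fresh operator always follows the recommendation
print(tired.act(rec))  # a tired one frequently deviates
```

The point of the sketch is the interface: simulation runs can vary only the HO model while the PAA (the deployable cyber part) stays unchanged, mirroring the side benefit mentioned above.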
  • Figure 6. Decision-Making (‘CONTROL’ or Command and Control) Unit Refinement into Personal Assistant and Human Operator Agents.
  • 4. Illustrative Example

In this section we elaborate on the example by introducing more details of the organisations involved, their goals, the environment, infrastructures, etc.

4.1 Overview

Consider the following example of a disaster involving a tanker in a harbour near a city, as illustrated in Figure 7. A multi-purpose tanker arrives at the harbour to take on board 100 tons of chemical substance A, and also to fix its engine and pump and to fill up its supplies of water and fuel. In another tank the ship has minor remains of chemical substance B which were not properly discharged in the previous port due to the pump breakdown.

Figure 7. Schematic diagram of the disaster area.

Because the terminal for liquid substances is in high demand, the port authority decides to move the ship to the dry-goods terminal to fill up its supplies and fix the engine right after substance A is loaded on board. During the engine repairs the fuel pipe gets damaged and a fire breaks out. This leads to a spill of both substances A and B into the harbour waters, and the slick field drifts on the surface. The mix of A and B is flammable and poisonous, so the water is burning. The burning slick field grows in size and moves towards the river mouth. The situation is aggravated by the tide: at high tide the spot can go into the river and contaminate the water source of the city. It can also pollute the coastal area. The burning tanker is also of great danger to the environment: if the temperature rises above a certain limit, it can explode, destroying or damaging piers, warehouses, ships, and the power substation located nearby, as shown in Figure 8. The latter may have a major impact on the city.
  • Figure 8. Zone of potential destruction if the ship explodes.

This disaster occurs in an area operated by many organizations responsible for the prevention and handling of such situations. Below we describe the organisations involved and briefly outline their roles: Ship's Crew, Port, City, marine and civilian firefighters, Military, Medics, Ship Owner, Police, etc. First responders such as police forces, firefighters, medics and perhaps private NGOs start their actions soon after they find out about the danger. Every first responder has a different goal to achieve, depending on the organization, group and division it belongs to. For example, police forces intend to evacuate the affected areas, help manage the roads by patrolling in the area or making detours around it (as important infrastructure for getting supplies and medical help in), and disarm threats (in this case, by finding possible causes). Firefighters, on the other hand, need to get into the field to find possible threats posed by the chemical substances, extinguish any fire, rescue civilians and, if needed, call for medics and ambulances.

4.2 Environment and Infrastructure

An important aspect of any rescue mission is to have a complete plan and understanding of the environment. In a crisis situation, all the first-responder organizations involved in managing and controlling the area need information about the common environment in which they are working. Some of the environmental information is easily accessible by organizations such as firefighters or police forces; however, some critical information about the area is not crystal clear, or may not be considered crucial at first glance. The environment sketched in Figures 7 and 8 can be modelled as an aggregation of buildings and infrastructures. Basic infrastructures include roads, telecommunication (wired and cellular), and electricity networks, as illustrated in Figure 9.
Modelling infrastructures explicitly is important in order to investigate, for example, the impact of their reliability on the development of the crisis. Besides, infrastructure objects can have their own dynamics: a problem in one node can cause cascading problems in all the dependent nodes, thus impacting the overall crisis-handling result.
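The cascading behaviour just described can be sketched as failure propagation over a dependency graph. The node names and dependencies below are invented for illustration; a real model would be populated from the infrastructure data requested from the Project Authority.

```python
# Toy cascade over an infrastructure dependency graph: if a node fails,
# every node depending on it fails too. Node names are illustrative only.

depends_on = {
    "power_substation": [],
    "gsm_tower":        ["power_substation"],
    "landline_hub":     ["power_substation"],
    "port_control":     ["gsm_tower", "landline_hub"],
}

def cascade(initial_failures):
    """Propagate failures until a fixed point is reached."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for node, deps in depends_on.items():
            if node not in failed and any(d in failed for d in deps):
                failed.add(node)
                changed = True
    return failed

# An explosion takes out the power substation near the harbour:
print(sorted(cascade(["power_substation"])))
```

A single initiating failure (the substation hit by the explosion) here takes down both communication nodes and, transitively, the port control that depends on them, mirroring the blackout and lost-connections chain described in the scenario.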
  • Figure 9. Infrastructures deployed around the harbour: electricity, communication (wired and wireless).

Having essential knowledge about infrastructures such as electricity nodes, power stations, telecommunication stations, etc. is a vital missing piece for all first responders in preventing further damage. The plans of the power infrastructure and the current status of each power station (or even of individual power nodes) are the type of data that is usually not easily accessible to first responders. For example, a power distribution company in a province or state knows all the detailed information about the power nodes, information that is critical for firefighters in the field in order to avoid mistakes. Also, telecommunication nodes and the available GSM network play an important role in information sharing among organizations and individuals. In our scenario, a fire in the harbour may result in an explosion, which may in turn cause another disastrous fire in the electricity node close to the harbour. This explosion can cause a major blackout in the city. Also, losing electricity in the telecommunication towers will damage the communication networks, and most organizations and agents will lose their connections to each other. Ensuring the reliable operation of infrastructures in disasters is one of the key goals of holistic security ecosystems. For example, [9] presents an approach to making the power distribution grid capable of self-healing and of restoring the power supply to blacked-out areas.

4.3 Agency Cast

The organisations involved in the crisis handling are modelled as agents. In our example the following limited set of agents is used:

AuthPort.Police: the security team of the port (4 persons, including the commander).
AuthPort.Fire: the firefighter group inside the port.
AuthPort.TugBoat: the tugboat under the control of the port authority.
AuthPort: port authority.
  • AuthMun Municipal authority. AuthProv Provincial authority. ShipCrew The crew of the burning ship. 3 Persons (without commander). AuthMun. Fire Municipal fire fighters. AuthMun. Hospital Municipal hospital. AuthMun.Amb Ambulance belonging to the municipal hospital. AuthProv.RescueBoat Rescue boat belonging to the provincial authority. The <Agent.Number> notation refers to the rule listed under the Number in the set of rules of the corresponding organisation.
4. Agent design methodology illustrated on an example

4.1 Methodology overview

In this section we develop our design methodology step by step, using two potential scenarios of interaction between the organisations that emerge from the fire outbreak. As outlined in Section 2, our approach starts with a verbal description of the scenario, which is then further specified in the form of an interaction diagram between the agents involved. We study the interaction between the different organisations involved in resolving this crisis. As a first step, we assume the organisations to be atomic, i.e. we do not consider their internal structure. We thus assume that the behaviour of an organisation is determined by some "high-level" policies (which, in reality, may be the "sum" of the policies, rules and practices of different departments). The goal of this exercise is mainly to model the compatibility of such overall policies and their impact on the collaboration of the organisations. The expected sequence of steps is as follows:

• First, we start with a simple crisis scenario. From a time-ordered verbal description of all the communications and actions between the organizations involved in the scenario, we analyze all the statements and apply a reverse-engineering process to arrive at an interaction diagram. The interaction diagram shows all the actions of the organizations involved, along with the communications (interactions) between organizations and teams. Communication with another agent or organization is assumed to take place through landlines (L.L), through wireless networks (such as GSM), or, at the physical level, by voice (calling or shouting).

• After capturing the essence of the behaviours and interactions, we can set up the rules corresponding to the behaviours of the agents. All behaviours are sets of conditional (IF-THEN) statements describing how an agent reacts to given conditions.

• Later, we simulate this scenario and the interactions using the Brahms agent-based framework [9].
Brahms is able to produce a timeline of behaviours showing all the actions and decisions that the agents have taken throughout the crisis scenario, as shown in Figure 10. With this in hand, we compare the simulation results with the initial interaction diagrams and iteratively modify the "interaction patterns" used by the organizations, testing each new set of rules in the simulation.
Figure 10. Brahms snapshot: timeline of a sample simulation scenario.

• In addition, we add intra-organizational rules and policies to the system to observe their impact on the overall interaction patterns. Iterating this process helps us to better understand the organizations' policies and leads to novel, optimized policies for handling inter- and intra-organizational interactions and conflicts.

4.2 Scenario of an unsuccessful crisis management attempt

Verbal description:

8.00pm A crew member (ShipCrew) hears a small explosion on the upper deck of the ship and immediately calls the port authority (AuthPort) <ShipCrew.1>. The crew is not aware of the payload of the ship and starts trying to extinguish the fire <ShipCrew.2>.

8.02pm AuthPort, assuming a minor incident, alerts the port's fire fighters (AuthPort.Fire) and security (AuthPort.Police) <AuthPort.1> <AuthPort.2> <Police.2>. The port's fire fighters, following a standard procedure <Fire.1> <Fire.2>, alert the local hospital, which immediately sends an ambulance (AuthMun.Amb) to the port <Hospital.1> <Amb.1>.

8.11pm The port's fire fighters, security and the ambulance arrive in front of the burning ship. The port's fire fighters judge that the fire is too big for their resources and call the municipal fire fighters (AuthMun.Fire) <Fire.8> <Fire.2>, who head to the port without delay. The port's police start blocking off the area <Police.1>.

8.13pm The port's fire fighters notice containers with flammable and chemical warnings on them. They immediately call the port authority inquiring about the payload of the ship <Fire.6>. The port authority starts looking at the full data sheet of the ship <All.1>.

8.19pm The municipal fire fighters (AuthMun.Fire) arrive at the port and immediately start helping the crew and the other team (AuthPort.Fire) to control the fire <Fire.4>.
8.22pm The port authority realizes that the ship is carrying dangerous chemicals and informs the teams (AuthPort.Fire) <All.2>.

8.25pm A member of the AuthPort.Fire team informs a member of the AuthMun.Fire team about the payload of the ship. The AuthMun.Fire team, following a clear directive of their corps stating that fighting burning chemicals is to be avoided in all cases in which civilians are not involved <Fire.10>, stops fighting the fire and calls AuthMun to report the situation and to ask whether civilians could somehow be involved <Fire.6>.

8.35pm AuthMun, looking at the map of the city and without contacting AuthPort, calls AuthMun.Fire, asks for their final decision and says that a properly equipped boat should be available in less than an hour. AuthMun.Fire confirms its former decision <Fire.10>.

8.41pm AuthMun calls the provincial authority (AuthProv) about the equipped ship and receives an affirmative reply <AuthMun.1>.

8.41pm The AuthPort.Fire team, under pressure because the AuthMun.Fire team has left the ship and worried about the chemical fire, stops fighting the fire and starts leaving <Fire.11>.

8.47pm AuthMun informs AuthPort about the arrival of an equipped ship from the provincial forces.

8.51pm AuthPort, worried about a burning ship under its jurisdiction, calls AuthMun trying to establish whether the ship has to be moved from its original position <AuthPort.4>.

8.55pm AuthProv informs both AuthMun and AuthPort that the equipped ship is approaching the harbour and asks them whether the burning ship can be moved out of the harbour in order to minimize risks. AuthPort agrees immediately, and AuthMun, having no specific policy, agrees as well.

8.57pm AuthPort contacts the crew, which is still on board trying to control the fire. They leave immediately <ShipCrew.3>. AuthPort orders two tug boats to pull the burning ship out of the harbour <ShipCrew.4>.

9.15pm The tug boats start moving the burning ship towards the provincial equipped ship waiting outside the harbour. Unfortunately it is too late: the boat explodes very close to an energy distribution node. Five persons are seriously injured in the harbour, half of the city is without electricity, and a massive quantity of a burning chemical is dispersed into the sea.

This scenario description corresponds to the interaction diagram shown in Figure 11.
The diagram shows the explicit communication between the agents (AuthPort.Police, AuthPort.Fire, AuthPort, AuthMun, AuthProv, ShipCrew, AuthPort.Tugboats, AuthMun.Fire, AuthMun.Hospital, AuthMun.Amb, AuthProv.RescueBoat), with calls over landline (L.L), GSM and voice, between 8:00 and the explosion.

Figure 11. Interaction diagram between the agents involved in the scenario.
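The reverse-engineering step from such a verbal description to an interaction diagram can be sketched as follows: each timed statement is recorded as an event tuple, and the pairwise communications are then grouped per channel. This is a minimal Python sketch; the event entries are a small illustrative excerpt of the scenario, not a complete transcription:

```python
# Each event: (time, sender, receiver, channel, action).
# The entries below are an illustrative excerpt of the scenario timeline.
events = [
    ("8:00", "ShipCrew", "AuthPort", "GSM", "call"),
    ("8:02", "AuthPort", "AuthPort.Fire", "L.L", "call"),
    ("8:02", "AuthPort", "AuthPort.Police", "L.L", "call"),
    ("8:11", "AuthPort.Fire", "AuthMun.Fire", "GSM", "call"),
]

def interactions_by_channel(evts):
    """Group (sender, receiver) communication pairs by channel, preserving time order."""
    grouped = {}
    for _, sender, receiver, channel, action in evts:
        if action == "call":
            grouped.setdefault(channel, []).append((sender, receiver))
    return grouped
```

Grouping the calls per channel directly yields the arcs of the interaction diagram: one lane per agent, one arrow per (sender, receiver) pair.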
4.3 Example of a successful collaboration effort

Verbal description:

8:00 p.m. One of the crew members on board realizes that a part of the ship is on fire. He immediately calls the port authority (AuthPort) over the GSM network <ShipCrew.1> to inform them about the fire and ask for help. At the same time, the crew tries to extinguish the fire by taking the necessary steps <ShipCrew.1>; however, they have insufficient knowledge about the payload and the substances on the ship.

8:03 p.m. AuthPort makes landline calls to the fire station located in the port (AuthPort.Fire) <AuthPort.1> and to the security inside the port (AuthPort.Police) <AuthPort.2> to send them to where the incident is taking place. The fire station sends a unit of 3-4 firemen to the boat <Fire.2> while calling the hospital <Fire.1> to send an ambulance (AuthMun.Amb) in case of an emergency. The hospital then sends an ambulance to the port <AuthMun.Hospital.1>.

8:10 p.m. The firefighter unit and the police get to the ship. The port's security evacuates <Police.2> and blocks all the roads leading to this area <Police.1>. The port's firefighters evaluate the incident following a regular procedure <Fire.4> and call the main fire station in the city (AuthMun.Fire) <Fire.3> to dispatch backup teams to the port, while trying to control the fire and keep it from spreading <Fire.7, Fire.8>.

8:13 p.m. The fire fighters ask AuthPort for information about the substances and the personnel of the ship <Fire.9>, and AuthPort starts searching for the relevant information <All.1>. In the meantime, the backup team of firefighters arrives at the port.

8:22 p.m. AuthPort sends back complete information about the content of the ship. AuthPort.Fire informs the other groups about the substance <Fire.3>. As the substance is a highly flammable chemical, the firefighters only try to control the fire <Fire.7> and call the crew to evacuate the ship immediately <Fire.5>.
Soon after, they send a report to the municipal authorities (AuthMun) informing them about the threat and a possible explosion <Fire.6>, which could lead to a bigger disaster since the ship is located close to a major energy node (the main electricity node); the aim is to keep the electricity zone safe and avoid any cascading disaster in the city. At 8:35 p.m. AuthMun asks the firefighters to stay in the zone and continue controlling the fire while keeping the area evacuated. At the same time, they call AuthProv to send the closest rescue boat to the port to extinguish the fire <AuthMun.1>.

8:42 p.m. AuthMun asks the port authorities to prepare and move the ship while the firefighters are still trying to contain the fire. AuthPort starts making the arrangements to move the ship, requesting the necessary actions <AuthPort.4>, and at 9:07 p.m. informs the firefighters about the new plan and asks them to leave the burning ship <AuthPort.3>.

9:10 p.m. The process of moving the ship starts. By the time the ship has been moved, the rescue boat arrives at the port (9:17 p.m.) and starts extinguishing the fire.
The diagram shows the communications (landline, GSM and voice calls) and the actions of all the agents between 8:00 and 9:15.

Figure 12. Interaction diagram for the second scenario.

From the two timing diagrams we can try to extract rules for each agent's behaviour. These are presented in Appendix 1 and discussed in the next section.
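The IF-THEN behaviour rules extracted from the timing diagrams can be sketched as a minimal production-rule loop in Python. This is a hypothetical illustration: the predicate and action names loosely mirror the *.Fire rules, and the `beliefs` dictionary is an invented stand-in for an agent's perceived situation, not the syntax of Brahms or any other framework:

```python
# A rule is a (condition, action) pair evaluated over a 'beliefs' dictionary.
# Names loosely mirror the *.Fire rules of Appendix 1 (illustrative only).
fire_agent_rules = [
    # *.Fire.4: if there is a fire (and no known danger), extinguish it.
    (lambda b: b.get("fire") and not b.get("danger"), lambda b: "ExtinguishFire"),
    # *.Fire.7: if the danger level is high, only control the fire.
    (lambda b: b.get("fire") and b.get("danger") == "high", lambda b: "ControlFire"),
    # *.Fire.11: if the danger level is very high, leave the area.
    (lambda b: b.get("danger") == "very_high", lambda b: "Move"),
]

def step(beliefs, rules):
    """Fire the first rule whose IF-condition matches the current beliefs."""
    for condition, action in rules:
        if condition(beliefs):
            return action(beliefs)
    return None  # no rule applicable
```

Running `step` once per simulation tick against the agent's updated beliefs reproduces the conditional behaviour described above; policy refinement then amounts to editing or reordering the rule list between runs.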
5. Agent architecture

5.1 Rule-based reasoning

The control section of an agent or an organization can be implemented either through a set of rules in a rule-based logic system or in a Command & Control (C2) system [9]. Command and control is a hierarchical rule-based approach in which actions and behaviours are controlled by an agent (or group of agents) with higher authority; we, however, aim to design a system that supports a network of agents (an ecosystem of agents) able to follow the appropriate rules and collaborate to solve a problem. The process of modification, which we term "Policy Refinement", is a method for finding the best rules and behaviours in the control section of an agent (implemented as rules or in a P.A.A.), finding the best practices for each case and, finally, generalizing the results to a wide range of scenarios applicable to various complex systems. The logic we use to represent the actions and behaviours of our agents (including human agents and artificial personal assistant agents) is inspired by the methods used by other agent-based systems such as JADE, Brahms, etc. This rule-based system is a simple "production rule system" [9], consisting primarily of a set of rules about behaviours. These rules consist of a sensory precondition (IF) and an action (THEN). Ideally, the rules reflect the policies of the organisation modelled by the agent; in our approach, however, some of the rules are extracted empirically from the scenario interaction diagrams. Let us consider how some of the rules of the AuthPort.Fire agent are extracted from the interaction diagrams in Figure 11 and Figure 12 (refer to Appendix 1 for the complete agent description).

1. AuthPort.Fire, the fire fighters team belonging to the port authority, receives a landline call.
According to the rule <*.Fire.1, OnCall(Agent, Area, Move) -> Call(*.Hospital, Area, DangerLevel)>, they immediately alert the hospital about a possible emergency and quickly move to the area of the incident <*.Fire.2, OnCall(Agent, Area, Move) -> Move(Area)>.

2. Once arrived in the port, they judge that the fire is too big for their resources and call the municipal fire fighters: <*.Fire.8, (Fire -> ExtinguishFire(Fire)) -> Danger(Fire.Area) -> Call(*Fire, Fire.Area, Move)>. The municipal team receives the call and quickly heads towards the port, following the same rules as the other group <*.Fire.2>.

3. AuthPort.Fire notices containers with flammable and chemical warnings on them. They immediately call the port authority inquiring about the payload of the ship: <*.Fire.6, Fire -> Call(*Auth, Fire.Area, Reason)>.
4. After some time spent trying to extinguish the fire, the team, under pressure because the AuthMun.Fire team has left the ship and worried about the chemical fire, stops fighting the fire and starts leaving <*.Fire.11, (Fire -> ExtinguishFire(Fire)) -> HighDanger(Fire.Area) -> Move(*)>.

The reader should note the abstract form of this rule presentation. It is similar to those used in the agent-based languages referenced above, but does not follow the syntax of any particular language.

5.2 Dealing with uncertainties

Clearly, these interactions are modelled in a too deterministic way. Uncertainties, human factors and unknown parts of the problem space have not been considered yet. For example, we did not include a mechanism that lets a team understand by itself how bad the situation is: in interaction 2 we simply assumed that the team is able to perceive a bad situation. Moreover, in interaction 4, we assumed that the team was under pressure, but we did not explicitly refer to this property in any rule. We believe that this approach is a good starting point along our research path, especially considering the early stage of the project; nevertheless, we have elaborated some ideas regarding the major issues we will face in the near future. We believe it is very relevant to model how humans and organizations manage uncertainties. In particular, we need a mechanism to relate a situation, even one that is not completely known, to one or more actions. To reach this goal, we have to introduce a formal description of these concepts, in particular of how to model the known and unknown parts of a situation. For every organization, we define a situation as an n-dimensional feature vector containing all the parameters that we consider critical to that specific domain (see Figure XX). As introduced before, every organization has to take decisions and, as a consequence, actions that best suit a particular situation.
Due to this, we define rules as a mapping between a situation and a set of possible actions (see Figure 13).

Figure 13. Rule-based decision making: an n-dimensional situation vector is mapped by rules onto actions.

In this way we are able to represent cases in which organizations are completely aware of the situation, as well as completely or partially unknown situations. The four most relevant cases that we will address are:
1. Known knowns. A variable is taken into account inside a decision process and its value is known. In this case one or more rules can fire and the related actions can be taken. See Figure 14(a).

2. Unknown knowns. A variable is taken into account inside a decision process and its value is not known by the organization, but is known by other organizations. For example, the number of persons injured in an accident is required to decide how many units have to be sent to the place. In these cases, we can define rules enabling communication between organizations with the goal of retrieving the values of the unknown parameters. See Figure 14(b). If the value cannot be found, the variable has to be treated as a known unknown.

3. Known unknowns. A variable is taken into account inside a decision process and its value is not known by any organization. We can treat these cases by making organizations able to choose the most suitable action according to some distance function applied to the incomplete feature vectors. See Figure 14(c).

4. Unknown unknowns. A variable, and obviously its value, is completely ignored by an organization. These kinds of variables usually stand on the edge between complicated and complex situations, between order and chaos. We can model them by adding an unexpected attribute to an existing feature vector. In this case the entire set of rules of that organization becomes useless, because all the rules refer to feature vectors of a different size. We will need to better understand how such situations work in the real world before suggesting a modelling technique.

Figure 14. Illustration of handling uncertainties in the rules: (a) known knowns, (b) unknown knowns, (c) known unknowns, (d) unknown unknowns.
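The "known unknowns" case, where the most suitable action is chosen through a distance function over incomplete feature vectors, could be sketched as follows. This is a hypothetical Python illustration; the rule patterns and action names are invented, and `None` marks a feature whose value is a known unknown:

```python
# A situation is a feature vector; None marks a "known unknown" value.
# Rules map full feature vectors to actions (illustrative data only).
rules = {
    (0, 1, 1, 0, 1): "send_one_unit",
    (1, 1, 1, 0, 1): "send_backup",
    (0, 0, 0, 0, 0): "stand_by",
}

def distance(situation, pattern):
    """Count mismatching features, ignoring unknown (None) entries."""
    return sum(1 for s, p in zip(situation, pattern) if s is not None and s != p)

def best_action(situation, rules):
    """Pick the action whose rule pattern is closest to the partially known situation."""
    return min(rules.items(), key=lambda item: distance(situation, item[0]))[1]
```

With this distance function, a rule can still fire even when one or more features are unknown; the unknown positions simply do not contribute to the mismatch count.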
As a final note, it is worth emphasizing that, after having introduced uncertainties, we will also focus on methods and techniques to model genuine human factors, such as stress, fatigue or panic, and on how they influence the behaviour of responders, both the managers and the operators in the field.

5.3 Evaluation of rules

Production rules have the general form <State vector> -> Action. If a match between the state and the left side of a rule is found, the rule is applied and the corresponding action is executed. Various algorithms are available to implement production rule logic, and they are used by different agent-based logic systems. Among them, the Brahms simulation framework [11] uses the Rete algorithm [10] to reason over a logic database. The Rete algorithm is an efficient pattern matching algorithm for implementing production rule systems: it finds the first possible answer (through pattern matching) and compiles the rules into a network of inter-related conditions. To handle uncertainties we can propose several extensions to these algorithms. For the "unknown unknowns" case, several decision making models can be proposed:

Learning: importing rules from other agents. If an agent encounters an "unknown unknown" situation, it may try to find an appropriate action by asking other agents what to do.

Best match: generalised rules with wildcards can still be applied to the state vector even if a new parameter has appeared in it.

The following sequence of steps can be proposed for modelling behaviour in the presence of such "heavy" uncertainties:

1. Understand the situation; in other words, response managers try to move from unknown unknowns to known unknowns.
2. Collect known unknowns from other organizations.
3. Learn the rules of behaviour from other organisations, or invent them.
4. Provide initial estimates of the parameters around the known unknowns.
5. Apply actions under the assumption that the estimated parameters for the known unknowns are reasonably accurate.

5.4 Refinements of agents

After the model has been designed in the way outlined above, and simulation has confirmed the sound behaviour of the model, we will apply the refinement ideas outlined in Sections 2.2 and 2.3. In addition, we may consider a finer-grained structure of each organisation, representing it as an aggregation of departments/units and applying the same modelling concept at another level of the hierarchy.
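The "best match" extension, in which generalised rules with wildcards still fire even when an unexpected parameter appears in the state vector, could be sketched as follows. This is hypothetical Python; the state tuples, patterns and action names are invented for illustration:

```python
WILDCARD = "*"

def matches(state, pattern):
    """A pattern matches a state if every non-wildcard entry agrees.
    zip() truncates at the shorter sequence, so extra state entries beyond
    the pattern length are tolerated: a generalised rule can still fire
    when an unexpected new parameter has appeared in the state vector."""
    return all(p == WILDCARD or p == s for s, p in zip(state, pattern))

# Illustrative generalised rules: (pattern, action), most specific first.
rules = [
    (("fire", WILDCARD, "chemicals"), "control_only"),
    (("fire", WILDCARD, WILDCARD), "extinguish"),
]

def select_action(state, rules):
    """Return the action of the first rule whose pattern matches the state."""
    for pattern, action in rules:
        if matches(state, pattern):
            return action
    return None
```

Ordering the rules from most to least specific gives a simple conflict-resolution strategy: the specific chemicals rule wins over the generic fire rule whenever both match.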
6. Formal modelling and verification

Policies can be violated or implemented incompletely. This creates uncertainties and may result in multiple scenarios of behaviour. A single simulation run, however, follows only one possible trace (a sequence of states) of the system's behaviour, as schematically represented in Figure 15(a). In fact, multiple scenarios are possible due to the uncertainties. These multiple scenarios can be represented as paths in the model's state space, shown as a graph in Figure 15(b). Uncertainties are represented here as multiple transition arcs leaving some states, e.g. S1, S2 or S3.

Figure 15. Single simulation trace (a) vs. the possibility of multiple paths in the state space (b).

To cope with this problem, when complex systems are simulated, many simulation runs are usually executed to cover different possible scenarios. However, even such a time-consuming process cannot guarantee that the entire problem space has been investigated, and it is not able to highlight relevant states when they are immersed among thousands of less relevant ones. Considering that we are modelling systems in which humans play a great role, we think that a purely simulative approach will, in the long term, be insufficient. We think it would be very relevant and innovative to test whether certain system properties, produced for example by new policies, hold not only in one specific scenario or on average, but in all possible cases. For this reason, we would like to apply formal verification techniques coming from computer science and control theory. Formal verification by model checking [9] makes it possible to prove that certain properties hold in all possible scenarios by exploring the entire state space of the model.
In this project we plan to use the modular modelling language of Net Condition/Event Systems (NCES). NCES is a discrete-state/discrete-time formalism [9] with which we can efficiently model the logic of collaboration between agents [9]. The NCES model of the system from Figure 3 is presented in Figure 16. Here the situation of two fireboats facing the fire on the tanker and the spill-out of oil is modelled. The fireboats are represented as two instances of the model type "FireBoat". The tanker is represented as an instance of the model type "BurningShip2", and the slick field as an instance of the model type "BurningObject".

Figure 16. Formal NCES model of the situation "conflict of two fireboats" from Figure 3 (top level).

Interconnections between the modules transfer information and events. The process is started by the module "start", which models the fire outbreak by sending an event to both the "tanker" and "sleek_field" modules. The connection between the "leak" output of "tanker" and the "add" input of "sleek_field" models (in a discrete way) the leakage from the tanker. Internally, NCES modules are specified in Petri nets extended with condition and event arcs. For example, the NCES model "BurningShip2" is represented in Figure 17.
Figure 17. NCES model "BurningShip2".

The model has two event inputs. The "outbreak" input event forces the model's transition to the "Burn" state. The ship is loaded with a given amount of fuel (in this case 100 tons, as modelled in the place "fuel"). Once the state is "Burn", the amount of fuel on board decreases for two reasons: burning away and leaking into the water. Leakage of a discrete portion of fuel is modelled as the output event "leak". The model is also timed: leaking one unit of fuel takes 2 units of time. The "extinguishing" event reduces the intensity of the fire; burning stops if the intensity is brought to zero. This event is sent by the module modelling a fire fighter. This discrete-state, discrete-time way of modelling has its limitations, especially in representing high-precision numeric data. These limitations can be overcome by using hybrid modelling languages [9] (at the cost of much higher computational complexity) or by using the technique of thresholds proposed in [9]. The complete model from Figure 8 can be given to the model checking tool ViVe [9], which can generate the state space of the model and analyze its behaviour. This is illustrated in Figure 18, where a fragment of the model's state space is shown in graph form. The ViVe tool can find paths (trajectories) in the state space which satisfy certain desired or undesired properties, e.g. the fire being successfully extinguished, or the ship exploding.
Figure 18. Results of model checking in the VisualVerifier tool: the reachability graph of the model includes all possible scenarios of collaborative behaviour. The timing diagram at the bottom shows the change of model parameters along one path in the reachability graph.

ViVe can prove the validity of properties against the behaviour of the model. The properties (e.g. non-violation of basic safety rules) can be represented as predicates or in the Computation Tree Logic language. The formalisation methodology of [9] can be used to generate properties in these formal languages from informal natural-language property descriptions. The reader should note that the formal verification attempt presented here is very preliminary and serves only to illustrate the idea and its potential.
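The exhaustive exploration behind model checking, as opposed to following a single simulation trace, can be illustrated with a toy discrete model of the burning ship: all reachable states are enumerated by breadth-first search, and a property is then checked against every one of them. This is a hypothetical Python sketch with invented dynamics, not the ViVe/NCES tooling:

```python
from collections import deque

def successors(state):
    """Toy discrete dynamics: state = (fuel, burning).
    Uncertainty yields several outgoing transitions from a burning state."""
    fuel, burning = state
    if not burning or fuel == 0:
        return []  # extinguished or fuel exhausted: no further transitions
    return [
        (max(fuel - 1, 0), True),   # one unit of fuel burns away
        (max(fuel - 2, 0), True),   # fuel burns away and also leaks
        (fuel, False),              # the fire is extinguished
    ]

def reachable(initial):
    """Enumerate the entire state space by breadth-first search."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        for s in successors(queue.popleft()):
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return seen

def danger_reachable(states):
    """Property check over ALL states: can the fuel run out while still burning?"""
    return any(fuel == 0 and burning for fuel, burning in states)
```

Unlike a single simulation run, `reachable` visits every branch introduced by the uncertainty, so `danger_reachable` answers "is this possible at all?" rather than "did it happen in this run?".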
7. Conclusion

One of the goals of this project is to show how the troubles of crisis handling illustrated in Sections 3 and 4 could be efficiently avoided if the futuristic socio-technical infrastructure of the "Internet of Things" were in place, along with agent-based intelligent reasoning support. On the one hand, such an environment will provide a great opportunity to access every piece of available information in real time via pervasive sensor networks [9]. On the other hand, it will present the new great challenge of decision making in the "information avalanche". The only way to cope with the "avalanche" is to avoid central control of the system, making the collaborating parties more independent and intelligent [9]. One direct deliverable of this project towards this goal will be a methodology for creating personal assistant agents (PAAs) for human responders, for their groups, or even for whole organisations. The importance of this is gradually being realized by researchers (e.g. [9]), but little has been done to date. Another deliverable will be the simulation/verification environment, in which potential crisis situations can be investigated beforehand, or even in real time while the crisis evolves. The results of such investigations will help to anticipate "clashes" of policies and practices and to avoid them at early stages. Obviously, the environment can also be used to "play" different response scenarios and choose the best one. The formal verification option will extend the analysis potential by exploring all possible scenarios in the presence of uncertainties, making it possible to prove policies, check the collaboration compatibility of policies, or find an optimal strategy of crisis handling. Having conducted the initial investigation outlined in this document, we are optimistic about the overall success of this project. We hope that this document will help to accelerate the initial project phase and move on to the core research and development tasks.
8. References

1. M. Ulieru, "Enabling the SOS (Self-Organizing Security) Network", Proceedings of the IEEE SMC 2008 Conference, Singapore, October 12-15, 2008.
2. M. Ulieru, "Holistic Security Ecosystems", invited keynote paper at the IEEE Digital Ecosystems Technologies Conference, Istanbul, Turkey, May 31-June 3, 2009.
3. E. M. Clarke, O. Grumberg, and D. A. Peled, Model Checking. Cambridge, MA: MIT Press, 1999.
4. V. Vyatkin, H.-M. Hanisch, C. Pang, J. Yang, "Application of Closed-Loop Modelling in Integrated Component Design and Validation of Manufacturing Automation", IEEE Transactions on Systems, Man and Cybernetics - Part C, vol. 39, no. 1, 2009, pp. 17-28.
5. H.-M. Hanisch, A. Lobov, J. L. Martinez Lastra, R. Tuokko, V. Vyatkin, "Formal Validation of Intelligent Automated Production Systems towards Industrial Applications", International Journal of Manufacturing Technology and Management, vol. 8, no. 1, 2006, pp. 75-106.
6. V. Vyatkin, H.-M. Hanisch, "Verification of Distributed Control Systems in Intelligent Manufacturing", Journal of Intelligent Manufacturing, vol. 14, no. 1, 2003, pp. 123-136.
7. W. Shen et al., "Applications of agent-based systems in intelligent manufacturing: An updated review", Advanced Engineering Informatics, vol. 20, 2006, pp. 415-431.
8. N. Higgins, V. Vyatkin, N. Nair and K. Schwarz, "Concept of Intelligent Decentralised Power Distribution Automation with IEC 61850, IEC 61499 and Holonic Control", IEEE Conference on Systems, Man and Cybernetics, Singapore, 2008.
9. M. Dastani, C. Mol, B. Steunebrink, "Modularity in Agent Programming Languages", 11th Pacific Rim International Conference on Multi-Agents: Intelligent Agents and Multi-Agent Systems, LNAI 5357, pp. 139-152, 2008.
10. (2008, Apr.). Model-view-controller design pattern [Online]. Available: http://heim.ifi.uio.no/~trygver/themes/mvc/mvc-index.html
11. H.-M. Hanisch and A. Lüder, "Modular modeling of closed-loop systems", in Proc. Colloq. Petri Net Technol. Model. Commun. Based Syst., Berlin, Germany, 2000, pp. 103-126.
12. M. Khalgui, O. Mosbahi, H.-M. Hanisch, "Model Checking of Multi-Agent Distributed Reconfigurable Embedded Control Systems", 13th IFAC International Symposium on Information Control Problems in Manufacturing, Moscow, June 2009.
13. T. A. Henzinger, "The theory of hybrid automata", in Verification of Digital and Hybrid Systems, M. K. Inan and R. P. Kurshan, Eds. New York: Springer-Verlag, 2000, pp. 265-292.
14. VisualVerifier Framework [Online]. Available: http://www.ece.auckland.ac.nz/~vyatkin/vive/ViVe.zip, August 2009.
15. C. Forgy, "Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem", Artificial Intelligence, vol. 19, pp. 17-37, 1982.
16. M. Sierhuis, W. Clancey and R. van Hoof, "Brahms: A Multi-agent Modelling Environment for Simulating Work Processes and Practices", International Journal of Simulation and Process Modelling, 2007, pp. 134-152.
17. D. Klahr, P. Langley and R. Neches, Production System Models of Learning and Development. Cambridge, MA: MIT Press.
18. D. S. Alberts and R. E. Hayes, Understanding Command and Control, CCRP Publication Series, 2006 [Online]. Available: http://www.dodccrp.org/files/Alberts_UC2.pdf
19. M. Hoogendoorn, C. M. Jonker, S. Konur, P.-P. van Maanen, V. Popova, A. Sharpanskykh, J. Treur, L. Xu, P. Yolum, "Formal Analysis of Empirical Traces in Incident Management", Applications and Innovations in Intelligent Systems XII, Springer, London, 2005.
20. N. Schurr, J. Marecki, M. Tambe, P. Scerri, "The Future of Disaster Response: Humans Working with Multiagent Teams using DEFACTO", AAAI Spring Symposium on AI Technologies for Homeland Security, 2005.
21. G. Narzisi, V. Mysore, B. Mishra, "Multi-objective evolutionary optimization of agent-based models: an application to emergency response planning", Computational Intelligence (CI), November 20-22, 2006, San Francisco, CA, USA.
22. S. Wu, L. Shuman, B. Bidanda, "Disaster Policy Optimization: A Simulation Based Approach", Proc. Industrial Engineering Research Conference, May 19-23, 2007, Nashville, TN.
23. N. Bicocchi, M. Mamei, F. Zambonelli, "Self-Organizing Spatial Regions for Sensor Network Infrastructures", AINA Workshops (2), 2007, pp. 66-71.
24. N. Bicocchi, M. Lasagni, M. Mamei, A. Prati, R. Cucchiara, F. Zambonelli, "Pervasive Self-Learning with Multi-modal Distributed Sensors", PerAda Workshop, IEEE SASO 2008 Conference.
Appendix 1: Specification of Agents

Agent: All

Actions:
  - Call(Agent, Area, Message)
  - Reason
  - AskBackup

Rules:
  1. OnCall(Agent, Area, Message) -> Reason
     If an agent asks for information about an area with a certain message, reason using the available information.
  2. OnStop(Reason) -> Call(Agent, Area, Message)
     When reasoning has finished, call the requesting agent and provide it with the information.

Agent: *.Fire

Actions:
  - Move(Area)
  - CarryPeople(Area)
  - ExtinguishFire(Fire)
  - ControlFire(Fire)

Rules:
  1. OnCall(Agent, Area, Move) -> Call(*.Hospital, Area, DangerLevel)
     On any call from any agent asking to move to an area, call the hospital and send the danger level information.
  2. OnCall(Agent, Area, Move) -> Move(Area)
     On any call from any agent asking to move to an area, move to that area.
  3. HighDanger(Area) -> Call(*.Fire, Area, DangerLevel)
     If there is high danger, call any fire station and inform it about the area and the danger level.
  4. Fire -> ExtinguishFire(Fire)
     If there is a fire, extinguish it.
  5. Fire -> Call(People(Fire.Area), *, Move)
     If there is a fire, call the people in the fire area and ask them to move to another area.
  6. Fire -> Call(*Auth, Fire.Area, Reason)
     If there is a fire, inform the authorities.
  7. (Fire -> ExtinguishFire(Fire)) -> Danger(Fire.Area) -> ControlFire(Fire)
     If there is a fire that cannot be extinguished, the danger level is high; only control the fire.
  8. (Fire -> ExtinguishFire(Fire)) -> Danger(Fire.Area) -> Call(*Fire, Fire.Area, Move)
     If there is a fire that cannot be extinguished, the danger level is high; call any fire station and ask for help.
  9. Unknown(Fire.Source) -> Call(*Auth, Fire.Area, Reason)
     If the source of the fire is unknown, call the authority and ask for orders.
  10. Unknown(Fire.Source) and not People(Fire.Area) -> Move(*)
      If the source of the fire is unknown and no one is in the fire area, move away.
  11. (Fire -> ExtinguishFire(Fire)) -> HighDanger(Fire.Area) -> Move(*)
      If there is a fire that cannot be extinguished and the danger level is high, move to another area.

Agent: *.Hospital

Actions:
  - Cure(Agent)

Rules:
  1. OnCall(*.Fire, Area, LowDanger(Area)) -> Call(*.Amb, Area, Move)
     If any fire station (private or public) reports a fire in an area, call an ambulance and ask it to move there.

Agent: *.Amb

Actions:
  - Move(Area)
  - CarryPeople(Agent, Area)
  - Cure(Agent)

Rules:
  1. OnCall(*.Hospital, Area, Move) -> Move(Area)
     On any emergency call from any hospital, immediately move to that area.

Agent: *.Police

Actions:
  - Move(Area)
  - CarryPeople(Agent, Area)
  - BlockArea(Area)
  - Arrest(Agent)

Rules:
  1. HighDanger(Area) -> Block(Area)
     If the danger level of an area is high, block that area.
  2. OnCall(*, Area, Danger(Area)) -> Move(Area)
     On any call regarding an area with any danger, move to that area.

Agent: *.ShipCrew

Actions:
  - Move(Area)
  - CarryPeople(Agent, Area)
  - ExtinguishFire(Fire)
  - ControlFire(Fire)
  - DriveShip(Area)

Rules:
  1. OnShip(Fire.Area) -> Call(AuthPort, Fire.Area, DangerLevel)
     If a fire is detected on a ship, call the port authority describing the fire area and the assumed danger level.
  2. OnShip(Fire.Area) and LowDanger(Fire.Area) -> ExtinguishFire(Fire)
     If a fire is detected on a ship and the assumed danger level is low, extinguish the fire.
  3. OnCall(AuthPort, Area, Move) -> Move(Area)
     If the port authority asks to move from an area, move to another area (with less danger).
  4. OnCall(AuthPort, Area, DriveShip) -> DriveShip(Area)
     If the port authority asks to move a ship to an area, move the ship.

Agent: AuthPort

Actions: (none)

Rules:
  1. InPort(Fire.Area) -> Call(AuthPort.Police.*, Area, DangerLevel)
     If a fire is detected inside the port, call the port's police.
  2. InPort(Fire.Area) -> Call(AuthPort.Fire.*, Area, DangerLevel)
     If a fire is detected inside the port, call the port's fire fighters.
  3. InPort(Fire.Area) and OnShip(Fire.Area) and HighDanger(Fire.Area) -> Call(People(Fire.Area), *, Move)
     If a fire is detected on a ship inside the port and the danger level is high, evacuate the area.
  4. InPort(Fire.Area) and OnShip(Fire.Area) and HighDanger(Fire.Area) -> Call(*.TugBoat, Fire.Area, Move)
     If a fire is detected on a ship inside the port and the danger level is high, call a tug boat to move the ship out of the port if necessary.

Agent: AuthMun

Actions: (none)

Rules:
  1. HighDanger(Area) -> Call(AuthProv, Area, DangerLevel)
     If the danger level of an area belonging to the municipal authority is high, call the provincial authority describing the assumed danger level.

Agent: AuthProv

Actions: (none)

Rules:
  1. HighDanger(Area) -> Call(Auth*, Area, DangerLevel)
     If the danger level of an area belonging to the provincial authority is high, call another authority describing the assumed danger level.

In addition to the rules, there are some "static" conditions (facts) defined as follows:
Conditions:
  a) People(Area): there is someone in the area.
  b) InPort(Area): the area is inside the port.
  c) OnShip(Area): the area is on the ship.
  d) HighDanger(Area): the danger level of the area is high.
  e) Danger(Area): there is danger in the area.
  f) LowDanger(Area): the danger level of the area is low.
  g) OnCall(Agent, Area, Action|DangerLevel): an agent calls from an area requiring an action.
  h) OnStart(Action): do the action when starting.
  i) OnStop(Action): do the action when stopping.
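To make the condition/action format of Appendix 1 concrete, the rules can be prototyped as a simple forward-chaining interpreter: each agent repeatedly checks which guards hold over a shared fact base and emits the corresponding Call(...) messages. The sketch below is illustrative only and is not the project's simulation environment; the `Agent` class, the string encoding of facts, and the area name `Dock7` are our own assumptions, while the rule and condition names mirror the appendix.

```python
# Minimal forward-chaining sketch of the Appendix 1 rule format (illustrative).
# Facts are strings such as "HighDanger(Dock7)"; each rule pairs a guard over
# the fact base with the list of calls it produces when the guard holds.

class Agent:
    def __init__(self, name, rules):
        self.name, self.rules = name, rules
        self.outbox = []  # Call(...) messages produced so far

    def step(self, facts):
        """Fire every rule whose guard holds; collect the resulting calls."""
        for guard, actions in self.rules:
            if guard(facts):
                self.outbox.extend(actions)

# Guards corresponding to the static conditions of Appendix 1.
def high_danger(area):
    return lambda facts: f"HighDanger({area})" in facts

def fire_on_ship_in_port(area):
    return lambda facts: {f"InPort({area})", f"OnShip({area})",
                          f"HighDanger({area})"} <= facts

AREA = "Dock7"  # hypothetical fire area

# AuthPort rules 1-4: on a high-danger ship fire inside the port,
# alert the police and fire fighters, evacuate people, call a tug boat.
auth_port = Agent("AuthPort", [
    (fire_on_ship_in_port(AREA), [
        f"Call(AuthPort.Police.*, {AREA}, DangerLevel)",
        f"Call(AuthPort.Fire.*, {AREA}, DangerLevel)",
        f"Call(People({AREA}), *, Move)",
        f"Call(*.TugBoat, {AREA}, Move)",
    ]),
])

# *.Police rule 1: block any high-danger area.
police = Agent("*.Police", [
    (high_danger(AREA), [f"Block({AREA})"]),
])

# One scenario instance: a high-danger ship fire inside the port.
facts = {f"InPort({AREA})", f"OnShip({AREA})", f"HighDanger({AREA})"}
for agent in (auth_port, police):
    agent.step(facts)

print(auth_port.outbox[0])  # Call(AuthPort.Police.*, Dock7, DangerLevel)
print(police.outbox)        # ['Block(Dock7)']
```

Inspecting the agents' outboxes after each step is one simple way to check policy "compatibility" as described in the Synopsis: a deadlock would show up as a scenario instance in which every agent's guard set becomes false while the emergency facts persist.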