Agent testing

This presentation gives some basic knowledge about testing software agents and multi-agent systems (MAS).


  1. Testing Software Agents & MAS
     Cu D. Nguyen, PhD
     Software Engineering (SE) unit, Fondazione Bruno Kessler (FBK)
     http://selab.fbk.eu/dnguyen/
  2. Testing is critical
     Software agents & multi-agent systems are enabling technologies to build today's complex systems, thanks to:
     • adaptivity and autonomy properties
     • their open and dynamic nature
     As they are increasingly applied, testing to build confidence in their operations is extremely crucial!
     FACT: NASA's agents are designed to achieve the goals and intentions of the designers, not merely to respond to predefined events, so that they can react to unimagined events and still ensure that the spacecraft does not waste fuel while keeping to its mission.
     FACT: NASA satellites use autonomous agents to balance multiple demands, such as staying on course, keeping experiments running, and dealing with the unexpected, thereby avoiding waste.
  3. Software agents and MAS
     Software agents are programs that are situated and have their own control and goals. Their properties include:
     • reactivity: autonomous response to changes in the perceived environment
     • proactivity: goal-oriented, deliberative behavior
     • social ability: collaborative or competitive interaction with other agents
     Multi-agent systems (MAS) are composed of:
     • autonomous agents and their interactions
     • the environment where the agents operate
     • rules, norms, and constraints that restrict the behaviors of the agents
     (Figure: agents A, B, ..., Z on hosts 1..N, each in its own environment, connected over a distributed network such as the Internet.)
  4. Challenges in testing agents & MAS
     Traditional software:
     • deterministic: inputs → outputs
     • observable state
     Agent:
     • non-deterministic, due to self-* properties and the instant changes of the environment
     MAS:
     • distributed, asynchronous
     • message passing
     • cooperative, emergent behaviours
  5. Testing phases
     • Acceptance: ensure the system meets the stakeholder goals
     • System: test the macroscopic properties and qualities of the system
     • Integration: check the collective behaviors and the interaction of agents with the environment
     • Agent: check the integration of agent components (goals, plans, beliefs, etc.) and the agent's goal fulfillment
     • Unit: test agent units: blocks of code, agent components (plans, goals, etc.)
  6. Testing BDI Agents
     (Figure: a BDI agent with sensors, inputs, outputs, and self-* properties.)
  7. Some facts
     • Many BDI agent development languages exist: JADEX, Jason, JACK Intelligent Agents, AgentSpeak(RT)
     • No "popular" de-facto language yet
     • Often built on top of Java
     • There are IDEs (integrated development environments) with testing facilities
     • We will use JADEX as a reference language
  8. BDI Architecture (recap)
     • Beliefs: represent the informational state of the agent
     • Desires: represent the motivational state of the agent; operationalized as goals + [contextual conditions]
     • Intentions: represent the deliberative state of the agent, i.e. what the agent has chosen to do; operationalized as plans
     • Events: internal/external triggers that an agent receives/perceives and reacts to
  9. Testing agent beliefs
     Belief state is program state in the traditional testing sense. Example:
       Agent: { belief: Bank-Account-Balance, goal: Buy-A-Car }
       state 1: Bank-Account-Balance = $1,000,000
       state 2: Bank-Account-Balance = $100
     What to test: belief updates (read/write), as in the sketch below
     • direct: injection, i.e. changing the agent's belief from outside
     • indirect: performing belief updates via plan execution
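     As a rough illustration, the bank-account example can be turned into a unit test like the plain-Java sketch below. BeliefBase, setFact, getFact, and buyCarPlan are hypothetical stand-ins for a real BDI platform's API (e.g. JADEX), not actual library calls.

       import static org.junit.Assert.assertEquals;
       import java.util.HashMap;
       import java.util.Map;
       import org.junit.Test;

       // Hypothetical minimal belief base, standing in for the platform's real API.
       class BeliefBase {
           private final Map<String, Object> facts = new HashMap<>();
           void setFact(String name, Object value) { facts.put(name, value); }
           Object getFact(String name) { return facts.get(name); }
       }

       public class BeliefUpdateTest {
           // Hypothetical plan body: buying a car debits the account belief.
           static void buyCarPlan(BeliefBase b, int price) {
               int balance = (Integer) b.getFact("Bank-Account-Balance");
               b.setFact("Bank-Account-Balance", balance - price);
           }

           @Test
           public void planExecutionUpdatesBelief() {
               BeliefBase beliefs = new BeliefBase();
               beliefs.setFact("Bank-Account-Balance", 1_000_000); // direct: inject the pre-state
               buyCarPlan(beliefs, 999_900);                       // indirect: update via plan execution
               assertEquals(Integer.valueOf(100), beliefs.getFact("Bank-Account-Balance"));
           }
       }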
 10. Testing agent goals
     Goals are driven by contextual conditions:
     • conditions to activate
     • conditions to hibernate/drop
     • target/satisfaction conditions
     What to test (sketched below):
     • goal triggering
     • goal achievement
     • goal interaction: one goal might trigger or inhibit other goals; goal reasoning to solve conflicts or to achieve higher-level goals
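     To make goal triggering concrete, here is a minimal sketch that models a maintain goal's contextual conditions as predicates and tests when the goal should fire. MaintainGoal and its methods are illustrative, not a platform API; the thresholds mirror the battery example shown later.

       import static org.junit.Assert.assertFalse;
       import static org.junit.Assert.assertTrue;
       import java.util.function.DoublePredicate;
       import org.junit.Test;

       // Hypothetical maintain-goal model: the goal activates when its maintain
       // condition is violated, and is achieved when the target condition holds.
       class MaintainGoal {
           private final DoublePredicate maintainCondition;
           private final DoublePredicate targetCondition;
           MaintainGoal(DoublePredicate maintain, DoublePredicate target) {
               this.maintainCondition = maintain;
               this.targetCondition = target;
           }
           boolean shouldActivate(double charge) { return !maintainCondition.test(charge); }
           boolean isAchieved(double charge) { return targetCondition.test(charge); }
       }

       public class GoalTriggeringTest {
           private final MaintainGoal batteryLoaded = new MaintainGoal(
               c -> c > 0.2,   // maintain: charge stays above the minimum (assumed 0.2)
               c -> c >= 1.0); // target: fully recharged

           @Test
           public void goalFiresWhenChargeDropsBelowMinimum() {
               assertTrue(batteryLoaded.shouldActivate(0.1));
               assertFalse(batteryLoaded.shouldActivate(0.8));
           }

           @Test
           public void goalIsAchievedOnlyWhenFullyCharged() {
               assertTrue(batteryLoaded.isAchieved(1.0));
               assertFalse(batteryLoaded.isAchieved(0.9));
           }
       }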
 11. Testing agent plans
     Plans are triggered by goals: an activated goal triggers a plan execution. Plan execution results in:
     • interacting with the external world
     • changing the external world
     • changing agent beliefs
     • triggering other goals
     What to test (sketched below):
     • plan instantiation
     • plan execution results
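     A sketch of testing plan execution results, again with hypothetical fixtures: the "world" records the external actions a plan performs, and the belief list records its belief changes.

       import static org.junit.Assert.assertEquals;
       import static org.junit.Assert.assertTrue;
       import java.util.ArrayList;
       import java.util.List;
       import org.junit.Test;

       public class PlanExecutionTest {
           // Hypothetical external world that records the actions performed on it.
           static class World { final List<String> actions = new ArrayList<>(); }

           // Plan under test: picking up waste acts on the world and updates a belief.
           static void pickUpWastePlan(World world, List<String> beliefs) {
               world.actions.add("grip-waste"); // interaction with the external world
               beliefs.add("carrying_waste");   // belief change as a plan result
           }

           @Test
           public void planActsOnWorldAndUpdatesBeliefs() {
               World world = new World();
               List<String> beliefs = new ArrayList<>();
               pickUpWastePlan(world, beliefs);
               assertEquals(List.of("grip-waste"), world.actions);
               assertTrue(beliefs.contains("carrying_waste"));
           }
       }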
 12. Testing events
     Events are the primary test inputs in agent testing. They can be:
     • messages
     • observations of the state of the environment
     What to test (sketched below):
     • event filtering: which events should an agent receive?
     • event handling: triggering goals or updating beliefs
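     Event filtering can be tested in the same style; the subscription list below is an invented example, not taken from any platform.

       import static org.junit.Assert.assertFalse;
       import static org.junit.Assert.assertTrue;
       import java.util.Set;
       import org.junit.Test;

       public class EventFilteringTest {
           // Hypothetical filter: the cleaner agent subscribes to these event types only.
           static final Set<String> SUBSCRIPTIONS = Set.of("waste-detected", "battery-low");

           @Test
           public void relevantEventsPassTheFilter() {
               assertTrue(SUBSCRIPTIONS.contains("waste-detected"));
           }

           @Test
           public void irrelevantEventsAreFilteredOut() {
               assertFalse(SUBSCRIPTIONS.contains("stock-price-changed"));
           }
       }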
 13. Example: testing a cleaning agent
     Environment:
     • wastebins
     • charging stations
     • obstacles
     • waste
     This agent has to keep the floor clean.
 14. Example (contd.)
     Example of beliefs:

       ...
       <!-- The current cleaner location. -->
       <belief name="my_location" class="Location" exported="true">
         <fact>new Location(0.5, 0.5)</fact>
       </belief>
       <!-- Last visited location. -->
       <belief name="last_location" class="Location">
         <fact>new Location(0.5, 0.5)</fact>
       </belief>
       <!-- Target location: the agent is moving to this location. -->
       <belief name="target_location" class="Location">
         <fact>new Location(0.5, 0.5)</fact>
       </belief>
       ...

     Test concerns:
     • is my_location updated after every move?
     • is the next target_location determined? how does it differ from the current location?
     • ...
 15. Example (contd.)
     Example of a goal:

       <!-- Observe the battery state. -->
       <maintaingoal name="maintainbatteryloaded" retry="true" recur="true" retrydelay="0">
         <deliberation cardinality="-1">
           <inhibits ref="performlookforwaste" inhibit="when_in_process"/>
           <inhibits ref="achievecleanup" inhibit="when_in_process"/>
           <inhibits ref="achievepickupwaste" inhibit="when_in_process"/>
           <inhibits ref="achievedropwaste" inhibit="when_in_process"/>
         </deliberation>
         <!-- Engage in actions when the state is below MINIMUM_BATTERY_CHARGE. -->
         <maintaincondition>
           $beliefbase.my_chargestate > MyConstants.MINIMUM_BATTERY_CHARGE
         </maintaincondition>
         <!-- The goal is satisfied when the charge state is 1.0. -->
         <targetcondition>
           $beliefbase.my_chargestate >= 1.0
         </targetcondition>
       </maintaingoal>

     Test concerns:
     • are the conditions correctly specified?
     • is the goal activated when the maintaincondition no longer holds (battery below the minimum)?
     • ...
 16. Oracles
     Different agent types demand different types of oracle:
     • Reactive agents: oracles can be pre-determined at test design
     • Proactive (autonomous, evolving) agents: "evolving & flexible" oracles are needed; it is hard to say whether a behavior is correct, because the agent has evolved and learned over time
     Some existing types of oracles:
     • constraint/contract based
     • ontology based
     • stakeholder soft-goal based
 17. Constraint-based oracles
     • Agents' behaviours can change, but they must respect the designed contracts/constraints (if any)
       ‣ low-level constraints: pre-/post-conditions, invariants
       ‣ high-level constraints: norms, regulations
     • Constraint violations are faults
     From OCL constraints, monitoring guards (to check the conditions) can be generated automatically, using a tool called OCL4Java and its user-defined violation handler. We specialize this type of handler to notify a local monitoring agent during testing whenever a constraint is violated. The local monitoring agent runs alongside the agents under test and is in charge of monitoring not only constraint violations but also many more types of events, such as common exceptions, belief changes, and so on.
     The following example specifies a pre-/post-condition: it requires the order attribute to be not null and ensures that the proposed price is between 0 and 2000.

       public class ExecuteOrderPlan extends Plan {
           ....
           @Constraint("pre: self.order->notEmpty\n" +
                       "post: price > 0 and price < 2000")
           public void body() {
               ....
           }
           ....
       }
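     The guard below is a hand-written approximation of what such a generated monitor checks for the annotated plan above; it is not the actual OCL4Java output, and reportViolation stands in for the user-defined handler that notifies the local monitoring agent.

       // Approximation of the monitoring guard generated from the @Constraint above.
       public class ExecuteOrderGuard {
           static void checkPre(Object order) {
               if (order == null)
                   reportViolation("pre: self.order->notEmpty");
           }

           static void checkPost(double price) {
               if (!(price > 0 && price < 2000))
                   reportViolation("post: price > 0 and price < 2000");
           }

           // Stand-in for the specialized handler that notifies the local
           // monitoring agent, which records the violation as a fault.
           static void reportViolation(String constraint) {
               System.err.println("Constraint violated: " + constraint);
           }
       }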
 18. Ontology-based oracles
     • An interaction ontology defines the semantics of agent interactions
     • Messages that mismatch the ontology specification are faulty
     (Figure: the book-trading interaction ontology, specified as a UML class diagram: AgentAction Propose with properties book: Book and price: float; Concept Book with properties title: String and author: String.)
     Rule example:

       <owl:Restriction>
         <owl:onProperty rdf:resource="#price"/>
         <owl:hasValue ...>min 0 and max 2000</owl:hasValue>
       </owl:Restriction>

     In the course of negotiation, a buyer initiates the interaction by sending a call for proposals for a given book (an instance of Book) to all the sellers.
 19. Requirement-based oracles (stakeholder soft-goal based)
     • Soft-goals capture quality requirements, e.g. performance, safety
     • Soft-goals can be represented as quality functions (metrics)
     • In turn, quality functions are used to assess the agents under test (see the sketch below)
     (Figure: quality plots over time for soft-goals such as "efficient", "robust" (keeping a distance d > ε), and "good looking".)
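     A minimal sketch of such a quality function, assuming the "robust" soft-goal means keeping a safety distance above a threshold ε; the trace format and the 0.5 threshold are illustrative assumptions.

       // Quality function for the soft-goal "robust": the agent's distance to
       // obstacles, sampled over time, must stay above EPSILON.
       public class RobustnessQuality {
           static final double EPSILON = 0.5;

           // Worst (minimum) margin d - EPSILON over the run; a negative value
           // means the soft-goal was violated at some point in time.
           static double evaluate(double[] distanceTrace) {
               double worst = Double.POSITIVE_INFINITY;
               for (double d : distanceTrace)
                   worst = Math.min(worst, d - EPSILON);
               return worst;
           }

           public static void main(String[] args) {
               System.out.println(evaluate(new double[] {1.2, 0.8, 0.4})); // -0.1: violated
           }
       }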
 20. Input space in testing agents
     Test inputs for an agent:
     • Passive:
       ‣ messages from other agents
       ‣ control signals from users or controller agents
     • Active:
       ‣ information obtained from monitoring/sensing the environment
       ‣ information obtained from querying third-party services
     Agents often operate in an open & dynamic environment:
     • other agents and objects can be intelligent, leading to non-deterministic behaviors
     • instant changes, e.g. of contextual information
 21. Example of a dynamic environment
     Environment: wastebins, charging stations, obstacles, waste
     • Obstacles can move
     • The locations of these objects change
     • New objects might come in
 22. Mock Agents
     • Mock agents are sample implementations of an agent, used for testing
       ‣ a mock agent simulates a few functionalities of the real agent
     • An agent under test can interact with mock agents, instead of real agents, during test execution
     Example: when testing the SaleAgent, we use a mock PaymentAgent instead of the real one, to avoid real payments (see the sketch below).
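     A minimal mock in plain Java, assuming the SaleAgent talks to the payment service through an interface; all names here are illustrative. A test would wire the SaleAgent to the mock and afterwards inspect the recorded requests.

       import java.util.ArrayList;
       import java.util.List;

       // Hypothetical interface shared by the real PaymentAgent and its mock.
       interface Payment {
           boolean pay(String account, double amount);
       }

       // Mock Payment Agent: records requests and always approves, so tests of
       // the SaleAgent never trigger a real payment.
       class MockPaymentAgent implements Payment {
           final List<String> requests = new ArrayList<>();

           public boolean pay(String account, double amount) {
               requests.add(account + ":" + amount); // record for later assertions
               return true;                          // canned response
           }
       }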
 23. Tester Agent
     A tester agent is a special agent that plays the role of a human tester. It:
     • interacts with the agent under test, using the same language as the agent under test
     • manipulates and generates test inputs
     • monitors the behavior of the agent under test
     • evaluates the agent under test, according to the human tester's requirements
     The tester agent stays on the opposite side, playing against the agent under test!
     Tester agents are used in continuous testing (next part).
 24. Continuous Testing of Autonomous Agents
 25. Why?
     • Autonomous agents evolve over time
     • A single test execution is not enough, because the next execution of the same test case can give a different result:
       ‣ because of learning
       ‣ because of self-programming (e.g. genetic programming)
 26. Continuous testing
     • Consists of input generation, execution and monitoring, and output evaluation
     • Test cases are evolved and executed continuously and automatically (see the skeleton below)
     (Diagram: initial test cases (random, or existing) → generation & evolution → inputs → test execution & monitoring of the self-* agent → outputs → evaluation → final results, with evaluation feeding back into generation.)
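     The loop can be summarized by the skeleton below; every method is a placeholder to be filled with one of the concrete techniques from the next slides (random, ontology-based, or evolutionary generation; quality-function evaluation).

       import java.util.List;

       // Skeleton of the continuous-testing loop from the diagram.
       abstract class ContinuousTesting<TC> {
           abstract List<TC> initialTestCases();            // random, or existing
           abstract void executeAndEvaluate(TC testCase);   // run, monitor, score
           abstract List<TC> evolve(List<TC> scoredPopulation);

           void run(int generations) {
               List<TC> population = initialTestCases();
               for (int g = 0; g < generations; g++) {
                   for (TC tc : population)
                       executeAndEvaluate(tc);      // scores are attached to the test cases
                   population = evolve(population); // breed the next generation
               }
           }
       }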
 27. Test input generation
     • Manual
     • Random: a randomly-selected interaction protocol + random messages
       ‣ message content
       ‣ environment settings: random values of artefacts' attributes
     • Ontology-based
       ‣ rules and concept definitions can be used to generate messages
     • Evolutionary
       ‣ the quality of a test case is measured by a fitness function f(TC)
       ‣ use f to guide the meta-heuristic search to generate better test cases
       ‣ example: quality-function-based fitness
 28. Random generation
     • Messages: randomly select a standard interaction protocol and combine it with randomly-generated data or domain-specific data
     • Environmental settings (sketched below):
       ‣ identify the attributes of the entities in the environment
       ‣ generate random values for these attributes
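     For the cleaner example, random generation of environmental settings could look like this sketch; the unit-square coordinate range is an assumption.

       import java.util.Arrays;
       import java.util.Random;

       public class RandomEnvironment {
           // Draw random values for the identified attributes: here, the (x, y)
           // coordinates of the environment's objects, as <x1, y1, ..., xN, yN>.
           static double[] randomCoordinates(int numObjects, Random rng) {
               double[] coords = new double[2 * numObjects];
               for (int i = 0; i < coords.length; i++)
                   coords[i] = rng.nextDouble(); // positions in a unit square
               return coords;
           }

           public static void main(String[] args) {
               System.out.println(Arrays.toString(randomCoordinates(3, new Random(42))));
           }
       }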
 29. Ontology-based generation
     • Information available inside the interaction ontology:
       ‣ concepts, relations, and the data types of properties; e.g. the action Propose is an AgentAction with two properties: book: Book and price: Double
       ‣ instances of concepts, user-defined or obtained from ontology alignment
     • Use these data to generate messages automatically (sketched below)
     Example: given Book(title:"A"), the tester agent sends messages such as Propose(book:"A", price:10) and Propose(book:"A", price:9) to the agent under test.
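     A sketch of generating such messages from the ontology: the record types mirror the ontology concepts, and the generated price respects the min 0 / max 2000 restriction from the rule two slides back. These classes are illustrative models, not a generated ontology binding.

       import java.util.Random;

       public class OntologyBasedGeneration {
           record Book(String title) {}                 // ontology concept
           record Propose(Book book, double price) {}   // ontology agent action

           // Generate a Propose whose content stays inside the ontology's
           // restriction on price: min 0 and max 2000.
           static Propose randomPropose(Book instance, Random rng) {
               return new Propose(instance, rng.nextDouble() * 2000);
           }

           public static void main(String[] args) {
               System.out.println(randomPropose(new Book("A"), new Random()));
           }
       }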
 30. Quality-function-based evolutionary generation
     • Build the fitness of test cases based on quality functions
     • Use this fitness measure to guide the evolution, using a genetic algorithm, e.g. GA, NSGA-II, etc.
     • For example:
       ‣ soft-goal: safety
       ‣ quality function: the closest distance of the agent to obstacles must be greater than 0.5 cm
       ‣ fitness f = d - 0.5; search for test cases that give f < 0
     (Figure: distance-over-time plots at generation i and generation i + K, showing the search driving the distance below the 0.5 threshold.)
 31. Example: evolutionary testing of the cleaner agent
     • Test case (environment) encoding:
       ‣ coordinates (x, y) of wastebins, charging stations, obstacles, and wastes
       ‣ TCi = <x1, y1, x2, y2, ..., xN, yN>
     • Fitness functions (sketched below):
       ‣ fpower = 1 / total power consumption: search for environments where the agent consumes more power
       ‣ fobs = 1 / number of obstacles encountered: search for environments where the agent encounters more obstacles
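     A sketch of this encoding and the two fitness functions; runAgent is a placeholder for an actual simulation run, and the +1 terms merely guard against division by zero. Both fitnesses are minimized by the search, so smaller values correspond to harder environments.

       import java.util.Random;

       public class CleanerTestEvolution {
           static class RunStats { double totalPower; int obstaclesHit; }

           // Placeholder: execute the cleaner agent in the encoded environment
           // TCi = <x1, y1, ..., xN, yN> and collect the run statistics.
           static RunStats runAgent(double[] testCase) { return new RunStats(); }

           static double fPower(RunStats s) { return 1.0 / (1.0 + s.totalPower); }   // lower = more power used
           static double fObs(RunStats s)   { return 1.0 / (1.0 + s.obstaclesHit); } // lower = more collisions

           // Basic variation operator: Gaussian mutation of one coordinate,
           // clamped to the environment bounds (assumed to be a unit square).
           static double[] mutate(double[] tc, Random rng) {
               double[] child = tc.clone();
               int i = rng.nextInt(child.length);
               child[i] = Math.min(1.0, Math.max(0.0, child[i] + 0.1 * rng.nextGaussian()));
               return child;
           }
       }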
 32. Example (contd.)
     • The genetic algorithm, driven by fpower and fobs, searches for test cases in which the agent has a higher chance of running out of battery or hitting obstacles, i.e. test cases that violate the user's requirements.
     • Results: the evolutionary test generation technique found test cases where:
       1) wastes are far away from the wastebins -> more power consumed
       2) obstacles lie on the way to the wastes -> easy to hit
     (Figure legend: black circles: obstacles; red dots: wastes; squares: charging stations; red circles: wastebins.)
 33. Example (contd.)
     • More about the evolution of the environment: http://www.youtube.com/watch?v=xx3QG5OuBz0
     • The search converges to test cases in which the two fitness functions are optimized.
 34. Conclusions
     • Testing software agents is important, yet still immature
     • Concerns in testing BDI agents:
       ‣ BDI components: beliefs, goals, plans, events
       ‣ their integration
     • Oracles:
       ‣ reactive agents: oracles can be specified at design time
       ‣ proactive agents: new types of oracles are needed, e.g. quality functions derived from soft-goals
     • Many approaches exist to generate test inputs; evolutionary generation has proved effective
 35. Additional resources
     • C. D. Nguyen (2009). Testing Techniques for Software Agents. PhD thesis, University of Trento, Fondazione Bruno Kessler. http://eprints-phd.biblio.unitn.it/68/
     • C. D. Nguyen, A. Perini, P. Tonella, S. Miles, M. Harman, and M. Luck (2009). Evolutionary testing of autonomous software agents. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 09), Vol. 1. IFAAMAS, Richland, SC, 521-528.
     • C. D. Nguyen, A. Perini, C. Bernon, J. Pavón, and J. Thangarajah. Testing in multi-agent systems. Agent-Oriented Software Engineering X, 180-190.
     • Z. Zhang. Automated unit testing of agent systems. PhD thesis, Computer Science and Information Technology, RMIT University.
     • R. de Souza Coelho, U. Kulesza, A. von Staa, and C. J. P. de Lucena. Unit Testing in Multi-agent Systems using Mock Agents and Aspects. In Proceedings of the 2006 International Workshop on Software Engineering for Large-Scale Multi-Agent Systems (SELMAS 06). ACM, New York, NY, USA, 83-90. DOI: 10.1145/1138063.1138079
