Transcript

  • 1. Collaborative Testing of Web Services -- The Service-Oriented Framework and Implementation in Semantic WS. Hong Zhu, Department of Computing and Electronics, Oxford Brookes University, Oxford OX33 1HX, UK
  • 2. Acknowledgement
    • Mr. Yufeng Zhang , MSc student at the National University of Defence Technology, China
    • Mr. Qingning Huo , PhD student at Oxford Brookes University, UK
    • Dr. Sue Greenwood , Oxford Brookes University, UK
  • 3. Overview
    • Analyse the impact of the novel features of service-orientation on software testing
    • Identify the grand challenges in testing WS applications
    • Propose a framework to support testing WS
    • Report on a prototype implementation of the framework and case studies
    • Compare with related work and discuss future work
  • 4. Characteristics of Web Services
    • Web services are a distributed computing technique that offers greater flexibility and looser coupling, making it better suited to Internet computing.
    • The dominance of program-to-program interactions
      • The components of WS applications (such as service providers):
        • Autonomous : control their own resources and their own behaviours
        • Active : execution is not triggered by messages, and
        • Persistent : computational entities that last a long time
      • Interactions between components:
        • Social ability : discover and establish interaction at runtime
        • Collaboration : as opposed to control; may refuse service, follow a complicated protocol, etc.
  • 5. WS technique stack
    • Basic standards:
      • WSDL: service description and publication
      • UDDI: for service registration and retrieval
      • SOAP for service invocation and delivery
    • More advanced standards for collaborations between service providers and requesters.
      • BPEL4WS: business process and workflow models.
      • OWL-S: ontology for the description of semantics of services
    (Diagram: the WS triangle. A provider registers its service with the registry; a requester searches the registry for registered services, then requests the service from the provider, which delivers it.)
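    As a concrete illustration of this stack in use, the sketch below sends a SOAP request to a hypothetical quote service using the standard SAAJ API (javax.xml.soap); the endpoint URL, namespace, operation and parameter names are invented for the example, not taken from the slides.

        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.soap.*;

        public class SoapCallSketch {
            public static void main(String[] args) throws Exception {
                // Build an empty SOAP message and add the operation element to its body.
                SOAPMessage request = MessageFactory.newInstance().createMessage();
                QName operation = new QName("http://example.org/insurance", "getQuote", "ins");
                SOAPBodyElement call = request.getSOAPBody().addBodyElement(operation);
                call.addChildElement("carModel", "ins").addTextNode("Mini Cooper");

                // Send the request over HTTP (the SOAP binding) and print the reply envelope.
                SOAPConnection conn = SOAPConnectionFactory.newInstance().createConnection();
                SOAPMessage response = conn.call(request, new URL("http://example.org/ws/quote"));
                response.writeTo(System.out);
                conn.close();
            }
        }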
  • 6. A typical scenario
    • Car Insurance Broker
      • Suppose that a fictitious car insurance broker CIB is developing a web-based system that provides a complete service of car insurance.
    • End users:
      • Submit car insurance requirements to CIB
      • Get quotes from various insurers
      • Select one insurer to insure the car
      • Submit payment information
      • Get insurance document/confirmation
    • Broker:
      • Take information about the user and the car
      • Check the validity of user’s information
      • Get quotes from insurers and pass them to the user
      • Get user’s selection of the insurer
      • Get insurance from the insurer and pass the payment to the selected insurer
      • Take commissions from the insurer or the user
  • 7. Structure of the CIB application (Diagram: end users and other service users access CIB's services through a GUI interface; CIB's service requester finds Bank B's services and Insurance A1 ... An's services via a WS registry. The bank's services could be statically integrated; the insurers' services should be dynamically integrated for business flexibility and competitiveness, and for lower operation and maintenance cost.)
  • 8. Testing own side services (1)
    • Similar to test software components.
    • Much existing work on software component testing can be applied or adapted, with special considerations for:
      • The stateless feature of HTTP protocol;
      • XML encoding of the data passing between services as in SOAP standard;
      • Conformance to the published descriptions:
        • WSDL for the syntax of the services
        • workflow specification in BPEL4WS
        • semantic specification in e.g. OWL-S.
    Progress has been made on these issues in the past few years.
  • 9. Testing own side services (2)
    • Dealing with requesters’ abnormal behaviours
      • The requesters are autonomous, so they may stop cooperating in the middle of a transaction for many reasons, such as an intentional quit, a network failure, or a failure of the requester's software due to faults.
      • The burden is on the testers to ensure that the system handles such abnormal behaviours properly.
    • Dealing with unexpected usages/loads
      • As with all web-based applications, load balancing is essential.
      • But, the knowledge of the usage of a WS may not be available during the design and implementation of the system.
    • Dealing with incomplete systems
      • A service may rely on other services to perform its function properly, so it is hard to separate testing of one's own services from integration testing, especially when complicated workflows are involved.
      • In the worst case, when a WS is dynamically bound to other services, knowledge of their format and semantics can only be based on assumptions and standards.
  • 10. Testing of other side services
    • Some similarities to component testing; however, the differences dominate:
      • Lack of software artifacts
      • Lack of control over test executions
      • Lack of means of observation on system behaviour
  • 11. Lack of software artifacts
    • The problem:
    • No design documents, No source code, No executable code
    • The impacts:
      • For statically bound services,
        • Techniques that automatically derive stubs from source code are not applicable
        • Automatic instrumentation of original source code or executable code is not applicable
      • For dynamically bound services,
        • Human involvement in the integration becomes completely impossible.
    • Possible solutions:
      • (a) Derive test harness from WS descriptions;
      • (b) The service provider makes test stubs and drivers available for integration.
  • 12. Lack of control over test executions
    • Problem:
      • Services are typically located on computers on the Internet over whose execution testers have no control.
    • Impact:
      • Control over the execution has been essential to apply existing testing techniques and will continue to be essential for testing services:
        • An invocation of the service as a test must be distinguished from a real request of the service.
        • The system may need to be restarted or put into a certain state in order to test it.
        • The situation could become much more complicated when a WS is simultaneously tested by many service requesters.
    • Possible solution:
      • The service provider must provide a mechanism, and a service, that enable service requesters to control test executions of the service.
    Currently, there is no support for such mechanisms in the W3C WS standards.
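    A minimal sketch of what such a control mechanism might look like as a service interface follows; it is entirely hypothetical (as noted above, no W3C standard defines one), and every name in it is invented for illustration.

        // Hypothetical test-control interface a provider could expose alongside
        // its functional service.
        public interface TestControl {
            // Open a test session; calls carrying the returned token are treated
            // as test invocations, distinguished from real requests.
            String openTestSession();

            // Put the service into a named, provider-defined state (e.g. "fresh",
            // "mid-transaction") so tests can start from known conditions.
            void setState(String sessionToken, String stateName);

            // End the session and discard any state the tests created.
            void closeTestSession(String sessionToken);
        }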
  • 13. Lack of means of observation
    • The problem:
      • A tester cannot observe the internal behaviours of the services
    • The Impacts:
      • No way to measure test coverage
    • Possible solutions:
      • The service provider offers a mechanism and services that allow an outside tester to observe the software's internal behaviour, so that the test adequacy a service requester requires can be achieved.
      • The service provider opens its documents, source code and other software artifacts necessary for testing to trusted test service providers.
  • 14. Testing service composition
    • The need to deal with diversity
      • the parts may operate on a variety of hardware and software platforms
      • different deployment configurations
      • delivering different qualities of service.
    • The need of testing on-the-fly
      • testing just before the invocation
    • Consequences of testing on-the-fly:
    • The need for non-intrusive testing: test invocations of a service must be distinguished from real ones
      • service provider: the normal operation of the service must not be disturbed by test activities.
      • client: must not actually receive the real service or incur its cost
    • The need for automation : all test activities are to be performed automatically.
    A typical scenario of service oriented computing is that a service requester searches for a required function in a registry, and then dynamically links to the service and invokes it.
  • 15. Dealing with diversity
    • The problem
      • The need to deal with diversity
    • The Implications
      • testing must be performed in a heterogeneous environment.
      • different service requesters may well have different test requirements to meet their own business purposes
    • Possible solutions
      • A universal, powerful testing tool: expensive, if not impossible
      • Collaboration and integration of a variety of tools
  • 16. Testing on-the-fly
    • The problem
      • Test just before the invocation while the parts are in operation
    • Implications
      • human involvement is impossible
      • No interference with normal operation
    • Possible solutions
      • Fully automated testing process
      • Mock services, etc.
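    To make the mock-service idea concrete, here is a minimal sketch (all names invented): a canned substitute stands in for the real insurer service, so an on-the-fly test neither disturbs the provider nor incurs a real cost.

        // The interface the real insurer web service implements (hypothetical).
        interface QuoteService {
            double getQuote(String carModel, int driverAge);
        }

        // A mock returning deterministic canned answers: enough to exercise the
        // requester's integration logic without any real transaction taking place.
        class MockQuoteService implements QuoteService {
            @Override
            public double getQuote(String carModel, int driverAge) {
                return driverAge < 25 ? 900.0 : 500.0; // fixed test tariff
            }
        }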
  • 17. The proposed approach
    • A WS should be accompanied by a testing service.
      • functional services : the services providing the original functionality
      • testing services : services that enable testing of the functional services.
    • Testing services can be provided either by the vendor of the functional services or by a third party.
    • Independent testing services:
      • Provider:
        • testing tool vendors
        • companies specializing in software testing
      • The services:
        • to generate test cases,
        • to measure test adequacy,
        • to extract various types of diagrams from source code or design and specification documents, etc.
  • 18. Architecture of service oriented testing
  • 19. Illustration of service oriented testing
  • 20. How does the system work
    • The Scenario
    • Suppose the car insurance broker wants to search for insurers' web services and test them before making a quote for its customers.
    (Diagram: the customer sends information about the car and the user to the Car Insurance Broker CIB and receives insurance quotes; CIB tests its integration with the insurer web service IS.)
  • 21. (Diagram slide; no text.)
  • 22. Collaboration Process in the Typical Scenario
  • 23. Automating Test Services
    • The key technical issues in enabling automated online testing of WS:
      • How a testing service should be described, published and registered at a WS registry;
      • How a testing service can be retrieved automatically, even for testing dynamically bound services;
      • How a testing service can be invoked both by a human tester and by a program that dynamically discovers a service and then tests it before binding to it;
      • How testing results can be summarized and reported in forms suitable both for human beings to read and for machines to understand.
    These issues can be resolved by utilizing a software testing ontology (Zhu & Huo 2003, 2005).
  • 24. STOWS: Software Testing Ontology for WS
    • Ontology defines the basic terms and relations comprising the vocabulary of a topic area as well as the rules for combining them to define extensions to the vocabulary
    • STOWS is based on an ontology of software testing originally developed for agent-oriented software testing (Zhu & Huo 2003, 2005).
      • The concepts of software testing are divided into two groups.
      • Knowledge about software testing is also represented, as relations between concepts.
  • 25. STOWS (1): Basic concepts
    • Tester : a particular party who carries out a testing activity.
    • Activity : consists of actions performed in testing process, including test planning , test case generation , test execution , result validation , adequacy measurement and test report generation , etc.
    • Artefact: the files, data, program code, documents, etc. involved in testing activities. An Artefact possesses an attribute Location expressed by a URL or a URI.
    • Method: the method used to perform a test activity . Test methods can be classified in a number of different ways.
    • Context: the context in which testing activities may occur in software development stages to achieve various testing purposes. Testing contexts typically include unit testing, integration testing, system testing, regression testing, etc.
    • Environment : the testing environment is the hardware and software configuration in which testing is to be performed.
  • 26. Structure of basic concepts: Examples (Diagram: Test Activity specializes into test planning, test case generation, test execution, result validation, adequacy measurement and report generation; Tester specializes into atomic service and composite service.)
  • 27. STOWS (2): Compound concepts
    • Capability: describes what a tester can do
    • the activities that a tester can perform
    • the context to perform the activity
    • the testing method used
    • the environment to perform the testing
    • the required resources (i.e. the input)
    • the output that the tester can generate
    (Class diagram: a Capability comprises an Activity, a Context, an optional Method, an optional Environment, and Capability Data, i.e. Artefacts tagged Input or Output.)
  • 28.
    • Task: describes what testing service is requested
    • A testing activity to be performed
    • How the activity is to be performed:
      • the context
      • the testing method to be used
      • the environment in which the activity must be carried out
      • the available resources
      • the expected outcomes
    (Class diagram: a Task comprises an Activity, a Context, an optional Method, an optional Environment, and Task Data, i.e. Artefacts tagged Input or Output.)
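    Read as data structures, the two compound concepts transcribe directly into code. A sketch in Java records follows (field names follow the slides; Method and Environment are optional per the class diagrams; modern Java syntax is used for brevity, although the prototype itself ran on JDK 1.5).

        import java.util.List;

        // An artefact is a typed, located piece of testing data (slide 25).
        record Artefact(String type, String location) {}

        // What a tester can do (slide 27).
        record Capability(
                List<String> activities,   // activities the tester can perform
                String context,            // e.g. "ComponentTest"
                String method,             // testing method used (optional, may be null)
                String environment,        // environment for testing (optional)
                List<Artefact> inputs,     // required resources
                List<Artefact> outputs) {} // outputs the tester can generate

        // What a testing service request asks for (slide 28).
        record Task(
                String activity,           // the activity to be performed
                String context,
                String method,             // method to be used (optional)
                String environment,        // environment the activity must run in (optional)
                List<Artefact> inputs,     // available resources
                List<Artefact> outputs) {} // expected outcomes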
  • 29. STOWS (3): Relations between concepts
    • Relationships between concepts are a very important part of the knowledge of software testing:
      • Subsumption relation between testing methods
      • Compatibility between artefacts’ formats
      • Enhancement relation between environments
      • Inclusion relation between test activities
      • Temporal ordering between test activities
    • How such knowledge is used:
      • Instances of basic relations are stored in a knowledge-base as basic facts
      • Used by the testing broker to search for test services through compound relations
  • 30. Compound relations
    • MorePowerful relation: between two capabilities.
      • MorePowerful(c1, c2) means that a tester with capability c1 can do all the tasks that a tester with capability c2 can do.
    • Contains relation: between two tasks.
      • Contains(t1, t2) means that accomplishing task t1 implies accomplishing t2.
    • Matches relation: between a capability and a task.
      • Matches(c, t) means that a tester with capability c can fulfil the task t.
    (Diagram: the three relations drawn between the Capability, Tester and Task classes.)
  • 31. Definition of the MorePowerful relation
    • A capability C1 is more powerful than C2, written MorePowerful(C1, C2), if and only if
      • C2's activities are included in C1's activities,
      • C1 and C2 have the same context,
      • the environment of C1 is an enhancement of the environment of C2,
      • the method of C2 is subsumed by the method of C1,
      • for each input artefact of C1, there is a corresponding compatible input artefact of C2,
      • for each output artefact of C2, there is a corresponding compatible output artefact of C1.
  • 32. Definition of the Contains relation
    • A task T1 contains task T2, written Contains(T1, T2), if and only if
      • T1 and T2 have the same context,
      • T1's activities include T2's activities,
      • the method of T1 subsumes the method of T2,
      • the environment of T2 is an enhancement of the environment of T1,
      • for each input artefact of T1, there is a corresponding compatible input artefact of T2,
      • for each output artefact of T2, there is a corresponding compatible output artefact of T1.
  • 33. Definition of the Matches relation
    • A capability C matches a task T, written Matches(C, T), if and only if
      • C and T have the same context,
      • C's activities include T's activity,
      • the method of C subsumes the method of T,
      • the environment of T is an enhancement of the environment of C,
      • for each input artefact of T, there is a corresponding compatible input artefact of C,
      • for each output artefact of C, there is a corresponding compatible output artefact of T.
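    Operationally, the definition is a conjunction of six checks. Below is a direct transcription into Java over the record types sketched earlier; the helper predicates subsumes, enhances and compatible stand for the ontology relations of slide 29, and their one-line stub bodies here are gross simplifications, not the knowledge-base lookups the prototype performs.

        class Matching {
            boolean matches(Capability c, Task t) {
                return c.context().equals(t.context())            // same context
                    && c.activities().contains(t.activity())      // C's activities include T's
                    && subsumes(c.method(), t.method())           // C's method subsumes T's
                    && enhances(t.environment(), c.environment()) // T's environment enhances C's
                    && t.inputs().stream().allMatch(ti ->         // every input of T has a
                           c.inputs().stream().anyMatch(ci ->     //   compatible input of C
                               compatible(ti, ci)))
                    && c.outputs().stream().allMatch(co ->        // every output of C has a
                           t.outputs().stream().anyMatch(to ->    //   compatible output of T
                               compatible(co, to)));
            }

            // Stand-ins for the knowledge-base relations (grossly simplified:
            // a null requirement is treated as unconstrained, otherwise equality).
            boolean subsumes(String m1, String m2)     { return m2 == null || m2.equals(m1); }
            boolean enhances(String e1, String e2)     { return e2 == null || e2.equals(e1); }
            boolean compatible(Artefact a, Artefact b) { return a.type().equals(b.type()); }
        }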
  • 34. Properties of the compound relations
    • (1) The relations MorePowerful and Contains are reflexive and transitive.
    • (2) ∀c1, c2 ∈ Capability, ∀t ∈ Task: MorePowerful(c1, c2) ∧ Matches(c2, t) ⇒ Matches(c1, t).
    • (3) ∀c ∈ Capability, ∀t1, t2 ∈ Task: Contains(t1, t2) ∧ Matches(c, t1) ⇒ Matches(c, t2).
  • 35. Prototype implementation
    • Representation of STOWS in OWL
      • Both basic and compound concepts are classes in OWL, represented as XML data definitions
    • Use STOWS in Semantic Web Services
      • Compound concepts represented in OWL are transformed into OWL-S Service Profiles for registration, discovery and invocation
        • UDDI/OWL-S registry server: using the OWL-S/UDDI Matchmaker
          • The environment: Windows XP, Intel Core Duo CPU 2.16GHz, JDK 1.5, Tomcat 5.5 and MySQL 5.0.
  • 36. Transformation of STOWS in OWL-S (Diagram: a Capability maps onto an OWL-S Service Profile; the Activity becomes the ServiceCategory; the Context, Environment and Method become the input parameters ContextMark, EnvironmentMark and MethodMark; input Artefacts become further input parameters, and output Artefacts become output parameters.)
  • 37. Ontology Management
    • Motivation
      • All the terms used in capability descriptions for test service registration, discovery and invocation must first be defined in the ontology.
      • However, it is impossible to build a complete ontology of software testing, because of
        • the huge volume of software testing knowledge
        • the rapid development of new testing techniques, methods and tools.
      • STOWS is only a framework; no attempt has been made to make it complete.
    • Therefore, the ontology must be extendable and open to the public for updating.
      • An ontology management mechanism is provided to enable the population of the ontology
  • 38. The ontology management mechanism
    • It provides three services to users:
      • AddClass: add a new concept
      • DeleteClass: delete a concept
      • UpdateClass: revise a concept of the ontology
    • Restrictions on the manipulation of the data model
      • Authority Checker:
        • elementary classes
          • form the framework of the ontology STOWS.
          • none of them can be deleted
        • extended classes
          • attached to the elementary classes to define new concepts
          • instances of the concepts.
          • added by the users and can be deleted from the hierarchy
      • Conflict Checker
        • the new class to be added does not exist in the ontology
        • the class to be deleted has no subclasses in the hierarchy
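    As a sketch of how the two checkers could work together (all names and signatures hypothetical; the real OMS operates on OWL class hierarchies):

        import java.util.*;

        public class OntologyManager {
            private final Set<String> elementary = new HashSet<>();           // STOWS framework classes
            private final Map<String, Set<String>> children = new HashMap<>(); // class -> direct subclasses

            // Conflict check: the class to be added must not already exist.
            public void addClass(String name, String parent) {
                if (children.containsKey(name))
                    throw new IllegalArgumentException(name + " already exists");
                children.computeIfAbsent(parent, p -> new HashSet<>()).add(name);
                children.put(name, new HashSet<>());
            }

            // Authority check: elementary classes can never be pruned.
            // Conflict check: a class with subclasses cannot be deleted.
            public void deleteClass(String name) {
                if (elementary.contains(name))
                    throw new IllegalStateException(name + " is an elementary class");
                if (!children.getOrDefault(name, Set.of()).isEmpty())
                    throw new IllegalStateException(name + " still has subclasses");
                children.remove(name);
                children.values().forEach(s -> s.remove(name));
            }
        }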
  • 39. Structure of OMS
  • 40. Test brokers
    • A test service that composes existing test services:
      • Decompose test tasks into subtasks
      • Search for test services to carry out the subtasks
      • Select test services from candidates
      • Coordinate the selected test services
        • Invoke them in the right order
        • Pass data between them
        • Collect test results, etc.
    • The test broker is itself a test service.
    • There may be multiple test brokers owned by different vendors.
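    The broker's coordination logic, reduced to a sketch over the hypothetical types from the earlier sketches (in the prototype, decomposition and task-to-capability translation are knowledge-driven, and the search goes through the OWL-S/UDDI Matchmaker):

        import java.util.List;

        // Interfaces the broker depends on (hypothetical signatures).
        interface Matchmaker { List<TestService> search(Capability required); }
        interface TestService { Artefact invoke(Task subtask, Artefact priorResult); }

        // The broker is itself a test service, so it implements TestService too.
        abstract class TestBroker implements TestService {
            private final Matchmaker matchmaker;
            TestBroker(Matchmaker m) { this.matchmaker = m; }

            // Knowledge-driven in the prototype; left abstract here.
            abstract List<Task> decompose(Task task);
            abstract Capability requiredCapability(Task subtask);

            @Override
            public Artefact invoke(Task task, Artefact input) {
                Artefact result = input;
                for (Task sub : decompose(task)) {                  // 1. decompose into subtasks
                    List<TestService> found =
                        matchmaker.search(requiredCapability(sub)); // 2. search for candidate services
                    TestService chosen = found.get(0);              // 3. select one (naive policy)
                    result = chosen.invoke(sub, result);            // 4. invoke in order, passing data on
                }
                return result;                                      // 5. collected result
            }
        }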
  • 41. Architecture of the prototype test broker. We have developed a prototype test broker to demonstrate the feasibility of the approach.
  • 42. Test broker process model
  • 43. Case Studies: Overview
    • Case Study 1 : An automated software testing tool CASCAT is wrapped into a test service called TCG
    • Case Study 2 : A service-specific test service called T-NCS is developed for testing a Web Service that provides Numerical Calculation Services (NCS)
    • Case Study 3 : Automated composition of TCG and T-NCS through test broker to perform on-the-fly testing of NCS.
  • 44. Case study 1: (a) The subject
    • The subject CASCAT:
      • An automated component testing tool for EJB
      • Generates test cases from algebraic specifications written in CASOCC
      • Executes EJBs on test cases and reports errors if any axiom of the specification is violated
    Bo Yu, Liang Kong, Yufeng Zhang, and Hong Zhu, Testing Java Components Based on Algebraic Specifications, Proc. of ICST 2008, April 9-11, 2008, Lillehammer, Norway. Liang Kong, Hong Zhu and Bin Zhou, Automated Testing EJB Components Based on Algebraic Specifications, Proc. of TEST'07/COMPSAC'07, Vol. 2, 2007, pp. 717-722.
  • 45. Case study 1: (b) Registry
    • Service Profile of CASCAT
      • ServiceCategory: “TestCaseGenerationServices”.
      • Input artefact: specified by the class CasoccSpecification , which is a subclass of Specification and stands for algebraic specification in CASOCC.
      • Context: “ComponentTest”.
      • Environment: ‘not limited’.
      • Method is CASOCC-method, which is a subclass of SpecificationBased method.
      • Output artefact: test case.
  • 46. Case study 1: (c) Submitting search requests
    • Test requester:
      • A service was built that plays the role of the test requester.
      • It constructs test tasks and submits them to the test broker to search for a test service.
      • The test task it produces is to generate test cases from a CASOCC specification, in the context of component testing.
  • 47. Example of test task

    <Task rdf:ID="testcaseTask">
      <inputArtefact rdf:resource="#CasoccSpec"/>
      <outputArtefact>
        <TestCase rdf:ID="testcase">
          <Location rdf:datatype="http://...">
            http://resourse.nudt.edu.cn/testcase/fictitioustestcase.txt
          </Location>
        </TestCase>
      </outputArtefact>
      <hasActivity>
        <TestCaseGeneration rdf:ID="GenerateTestcase"/>
      </hasActivity>
      <hasMethod>
        <CASOCC-method rdf:ID="CASOCCBasedMethod"/>
      </hasMethod>
      <hasContext rdf:resource="#ComponentTest"/>
    </Task>
  • 48. Case study 1. (d) Search and discovery
    • Test Broker:
      • Once it receives a test task, it generates a capability description from the task,
      • Constructs a Service Profile.
      • Then calls the API of the Matchmaker Client to search for test service providers.
  • 49. Case study 1: (e) Invocation
    • A Java Enterprise Bean was deployed on the JBoss platform.
    • A formal specification of the bean was written in CASOCC.
    • CASCAT was invoked as a web service to generate test cases for the component.
    • The result:
      • Test cases as an instance of the OWL class TestCase .
      • Stored as a file on a web server
      • The Location attribute of the file is returned by the service.
  • 50. Case study 2: Overview
    • Goal : to develop a test service (T-NCS) that supports testing a Web Service that provides Numerical Calculation Services (NCS)
    • Tasks :
      • Registered:
        • Capability is described in the ontology represented in OWL-S
      • Searchable:
        • It can be found when a testing task matches its capability
      • Invocable through the Internet:
        • as a web service that executes test cases on the NCS when invoked
  • 51. Case study 3: Composition of test services
    • Goal:
      • To automatically discover, select and compose test services on-the-fly using the test broker
    • Existing test services:
      • TCG : a test case generator based on algebraic specifications (developed in case study 1)
      • T-NCS : a test service that executes test cases to test NCS (developed in case study 2)
      • TFT : a WS that transforms test cases from the form TCG produces into test cases that T-NCS recognizes (developed in case study 3)
    • Other artefacts available:
      • NCS : web services for numerical calculation services
      • Algebraic specification of NCS in Casocc language
  • 52. The process
    • Registration
      • The test services are registered to the Matchmaker;
    • Submission of test task
      • A test task for testing NCS against an algebraic specification is submitted to the test broker;
    • Test broker generates a test plan
      • Decompose the task into subtasks
      • Search for test services for each subtask
      • Select a test service for each subtask
      • Invoke test services according to the plan and pass data between them
    • Test services execute test requests and report test results
  • 53. The test task
    <Task rdf:ID="thirdTask">
      <hasContext>
        <ServiceTest rdf:ID="serviceTest"/>
      </hasContext>
      <hasMethod rdf:resource="#CASOCCMethod"/>
      <hasEnvironment rdf:resource="#notLimited"/>
      <hasActivity rdf:resource="#multiactivites"/>
      <inputArtefact>
        <CasoccSpecification rdf:ID="casoccSpecification">
          <Location rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">
            http://.../specification/Calculator.asoc
          </Location>
        </CasoccSpecification>
      </inputArtefact>
      <outputArtefact>
        <TestCase rdf:ID="testcase">
          <Location rdf:datatype="http://www.w3.org/2001/XMLSchema#anyURI">
            http://.../artefacts/testcase/fictitioustestcase.txt
          </Location>
        </TestCase>
      </outputArtefact>
      <testObject>
        <TestObject rdf:ID="calculateService">
          <operationName rdf:datatype="http://www.w3.org/2001/XMLSchema#string">add</operationName>
          <endpoint rdf:datatype="http://www.w3.org/2001/XMLSchema#string">
            http://.../axis/services/CalculatorImpl
          </endpoint>
        </TestObject>
      </testObject>
    </Task>
  • 54. Subtasks and selected test services
    • Subtask 1 : Generation of test cases
      • Input Artefact : http://.../specification/Calculator.asoc
      • Tester profile : http://../casestudy/third/generateTestcase.owl#generateTestcaseProfile
      • Result : http://../testcase/testcases.log
    •   Subtask 2 : Transformation of test cases from Casocc format to NCS test case format
      • Input Artefact : http://../testcase/testcases.log
      • Tester profile : http://../casestudy/third/transformer.owl#transformerProfile
      • Result : http://../testcase/calculatortestcase.xml
    •   Subtask 3 : Execution of test cases
      • Input Artefact : http://../testcase/calculatortestcase.xml
      • Tester profile : http://../casestudy/third/executeTestcase.owl#executeTestcaseProfile
      • Result : http://../testresult/testresult.xml
  • 55. Conclusion
    • The challenges that web service applications impose on testing are analysed:
      • Testing a web service as one's own software
        • Mostly solvable by adapting existing software testing technologies
      • Integration testing at development time and at run time
        • No support in the current WS standard stack
        • Grand challenges to existing software testing technology:
          • the lack of software artifacts
          • the lack of control over test executions
          • the lack of means of observation of internal behaviours
          • the need to deal with diversity
          • the need for testing on-the-fly
          • the need for non-intrusive testing
          • the need for automation
  • 56. Related works
    • Tsai et al . (2004): a framework to extend the function of UDDI to enable collaboration
      • Check-in and check-out services to UDDI servers
        • a service is added to the UDDI registry only if it passes a check-in test.
        • A check-out test is performed every time the service is searched for; the service is recommended to a client only if it passes the check-out test.
        • To facilitate such tests, they require test scripts to be included in the information registered for the WS in UDDI.
      • Group testing: further investigation of the problem of how to select a service from a large number of candidates by testing.
        • a test case ranking technique to improve the efficiency of group testing.
    • Bertolino et al (2005): audition framework
      • admission testing when a WS is registered with UDDI
      • run-time monitoring of services' functional and non-functional behaviours after a service is registered in a UDDI server
      • Service test governance (STG) (2009):
        • to incorporate testing into a wider context of quality assurance of WS
        • imposing a set of policies, procedures, documented standards on WS development, etc.
    Bertolino and Polini admitted (2009) that “on a pure SOA based scenario the framework is not applicable”. Both works recognised the need for collaboration in testing WS, but the technical details of how multiple parties should collaborate in WS testing were left open.
  • 57. Advantages of proposed approach
    • Feasibility
      • tested by case studies with the prototype implementation
    • Practical usability:
      • Implementable without any change to the existing standards of Semantic WS
    • Motivation for wider adoption by industry
      • Business opportunities for testing tool vendors and software testing companies to provide testing services online as web services
    • Scalable
      • test services are distributed, and there is no extra burden on UDDI servers.
    • Dealing with diversity
      • collaboration among many test services
      • the employment of an ontology of software testing to integrate multiple testing tools.
    • Testing on-the-fly
      • the automation of dynamic composition of test services, achieved by employing an ontology of software testing
    • No interference with the normal operation of the services
      • test services run separately from the functional services
    • Access to documents with proper intellectual property protection
      • by using trusted third-party professional test services
  • 58. Remaining challenges and future work
    • Technical challenges
      • To populate the ontology of software testing (e.g. the formats of many different representations of testing related artefacts)
      • To improve the test broker
      • To devise mechanisms for certification and authentication of testing services
    • Social challenges
      • For the above approach to be practically useful, it must be adopted by web service developers, testing tool vendors and software testing companies
  • 59. References
    • Zhang, Y. and Zhu, H., Ontology for Service Oriented Testing of Web Services, Proc. of the Fourth IEEE International Symposium on Service-Oriented System Engineering (SOSE 2008), Dec. 18-19, 2008, Taiwan.
    • Zhu, H., A Framework for Service-Oriented Testing of Web Services, Proc. of COMPSAC'06, Sept. 2006, pp. 679-691.
    • Zhu, H. and Huo, Q., Developing a Software Testing Ontology in UML for a Software Growth Environment of Web-Based Applications, in Software Evolution with UML and XML, Chapter IX, Hongji Yang (ed.), IDEA Group Inc., 2005, pp. 263-295.
    • Zhu, H., Cooperative Agent Approach to Quality Assurance and Testing Web Software, Proc. of QATWBA'04/COMPSAC'04, Sept. 2004, IEEE CS, Hong Kong, pp. 110-113.
    • Zhu, H., Huo, Q. and Greenwood, S., A Multi-Agent Software Environment for Testing Web-based Applications, Proc. of COMPSAC'03, 2003, pp. 210-215.