Collaborative Testing of Web Services -- The Service-Oriented Framework and Implementation in Semantic WS
Hong Zhu, Department of Computing and Electronics, Oxford Brookes University, Oxford OX33 1HX, UK
Suppose that a fictitious car insurance broker CIB is developing a web-based system that provides a complete service of car insurance.
Submit car insurance requirements to CIB
Get quotes from various insurers
Select one insurer to insure the car
Submit payment information
Get insurance document/confirmation
Take information about the user and the car
Check the validity of user’s information
Get quotes from insurers and pass them to the user
Get user’s selection of the insurer
Get insurance from the insurer and pass the payment to the selected insurer
Take commissions from the insurer or the user
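The broker's steps above can be sketched as a simple orchestration. The `Insurer` class, premium values and policy naming below are hypothetical, chosen only to make the workflow concrete:

```python
# Hypothetical sketch of the CIB broker workflow; the Insurer interface,
# premiums and policy naming are illustrative, not a real insurer API.

class Insurer:
    def __init__(self, name, premium):
        self.name = name
        self.premium = premium

    def quote(self, user_info, car_info):
        # A real insurer would price the risk from the submitted details;
        # a fixed premium stands in for that here.
        return self.premium

    def insure(self, user_info, car_info, payment):
        # Issue an insurance document/confirmation in return for payment.
        return {"insurer": self.name, "document": self.name + "-policy"}


def broker_workflow(user_info, car_info, payment, insurers, choose):
    """Take the user's details, gather quotes, apply the user's
    selection, pass the payment on, and return the insurance document."""
    quotes = {ins.name: ins.quote(user_info, car_info) for ins in insurers}
    selected = choose(quotes)  # the user's selection among the quotes
    chosen = next(ins for ins in insurers if ins.name == selected)
    return chosen.insure(user_info, car_info, payment)
```

For example, with two insurers and a "cheapest quote" selection rule, the workflow returns the document issued by the cheaper insurer.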
Structure of the CIB application (diagram): end users and other service users reach CIB's services through a GUI interface; CIB's service requester connects, via a WS registry, to Bank B's services and to the services of insurance companies A1, A2, ..., An. The bank's services could be statically integrated, but the insurers' services should be dynamically integrated for business flexibility and competitiveness, and for lower operation and maintenance costs.
The requesters are autonomous; they may stop cooperating in the middle of a transaction for many reasons, such as an intentional quit, a network failure, or a failure of the requester's software system due to faults.
The burden is on the testers to ensure that the system handles such abnormal behaviours properly.
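One common way a provider handles a requester that falls silent mid-transaction is a timeout with rollback. The sketch below is illustrative (the class and method names are hypothetical, and the clock is injectable so the behaviour can be tested deterministically):

```python
# Illustrative sketch: a provider-side transaction that is rolled back
# when the requester falls silent mid-way. Names are hypothetical; the
# injectable clock exists only so tests need not really wait.

import time

class Transaction:
    def __init__(self, timeout=1.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_message = clock()
        self.state = "open"

    def on_message(self):
        # Each message from the requester keeps the transaction alive.
        if self.state == "open":
            self.last_message = self.clock()

    def reap(self):
        # Called periodically by the provider: abort the transaction if
        # the requester has been silent for longer than the timeout.
        if self.state == "open" and self.clock() - self.last_message > self.timeout:
            self.state = "rolled_back"
        return self.state
```

A tester can then drive the fake clock to exercise exactly the abnormal behaviours listed above (quit, network failure, crash) without real network delays.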
Dealing with unexpected usages/loads
As with all web-based applications, load balancing is essential.
However, knowledge of the usage of a WS may not be available during the design and implementation of the system.
Dealing with incomplete systems
A service may have to rely on other services to perform its functionality properly, so it is hard to separate the testing of the service itself from integration testing, especially when complicated workflows are involved.
In the worst case, when a WS is dynamically bound to other services, knowledge of their formats and semantics can only be based on assumptions and standards.
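When the partner service is only known through an assumed standard, one practical test technique is to code against the agreed interface and test with a stub that honours it. The interface and values below are hypothetical:

```python
# Sketch: when partners are bound dynamically, tests can rely only on
# the agreed interface. QuoteService and the stub's canned value are
# hypothetical stand-ins for the assumed standard.

from abc import ABC, abstractmethod

class QuoteService(ABC):
    """The standard interface every insurer service is assumed to honour."""
    @abstractmethod
    def quote(self, request: dict) -> float: ...

class StubInsurer(QuoteService):
    def quote(self, request):
        # Canned value: lets the broker be tested without knowing which
        # concrete insurer will actually be bound at run time.
        return 250.0

def broker_get_quote(service: QuoteService, request):
    q = service.quote(request)
    # The broker checks the assumption the standard makes explicit.
    assert q >= 0, "quotes must be non-negative per the assumed standard"
    return q
```

Any concrete insurer service implementing `QuoteService` can later replace the stub without changing the broker-side tests.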
Tester : a particular party who carries out a testing activity.
Activity: the actions performed in the testing process, including test planning, test case generation, test execution, result validation, adequacy measurement, test report generation, etc.
Artefact: the files, data, program code, documents, etc. involved in testing activities. An artefact possesses a Location attribute expressed as a URL or a URI.
Method: the method used to perform a test activity . Test methods can be classified in a number of different ways.
Context: the context in which testing activities may occur in software development stages to achieve various testing purposes. Testing contexts typically include unit testing, integration testing, system testing, regression testing, etc.
Environment: the hardware and software configurations in which testing is to be performed.
Structure of basic concepts (diagram). Examples: test activities include test planning, test case generation, test execution, result validation, adequacy measurement and report generation; a tester may be an atomic service or a composite service.
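The concepts above can be sketched as a small data model. The fields chosen below are an illustrative simplification, not the published ontology itself:

```python
# Illustrative encoding of the basic testing concepts as dataclasses;
# the fields are a simplification for this sketch, not the ontology.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Artefact:
    name: str
    location: str        # a URL or URI, per the Location attribute

@dataclass
class Activity:
    name: str            # e.g. "test case generation"
    method: str          # the test method used to perform the activity
    context: str         # e.g. "unit testing", "regression testing"
    artefacts: List[Artefact] = field(default_factory=list)

@dataclass
class Tester:
    name: str            # an atomic or composite service
    activities: List[Activity] = field(default_factory=list)
```

A tester (itself a service) is thereby linked to the activities it performs, each carried out with a method, in a context, over located artefacts.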
Generate test cases from an algebraic specification written in CASOCC.
Execute the EJB on the test cases and report errors if any axiom in the specification is violated.
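The axiom-checking idea can be illustrated with a toy example (this is not CASOCC notation or the EJB tool itself): for a stack, the axiom pop(push(s, x)) = s is evaluated over generated test cases, and each violation is reported as an error.

```python
# Toy illustration of axiom-based testing (not CASOCC itself): run the
# implementation under test on generated cases and report every case
# where the stack axiom pop(push(s, x)) == s is violated.

def push(s, x):
    return s + [x]

def pop(s):
    return s[:-1]

def check_axiom(pop_fn, push_fn, stacks, values):
    errors = []
    for s in stacks:
        for x in values:
            if pop_fn(push_fn(s, x)) != s:   # the axiom under test
                errors.append((s, x))
    return errors
```

A correct implementation yields no errors, while a faulty `pop` that drops the wrong end of the stack is caught on any non-trivial test case.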
Bo Yu, Liang Kong, Yufeng Zhang and Hong Zhu, Testing Java Components Based on Algebraic Specifications, Proc. of ICST 2008, April 9-11, 2008, Lillehammer, Norway.
Liang Kong, Hong Zhu and Bin Zhou, Automated Testing EJB Components Based on Algebraic Specifications, Proc. of TEST'07/COMPSAC'07, Vol. 2, 2007, pp. 717-722.
Tsai et al. (2004): a framework that extends the functions of UDDI to enable collaboration
Check-in and check-out services to UDDI servers
A service is added to the UDDI registry only if it passes a check-in test.
A check-out test is performed every time the service is searched for; the service is recommended to a client only if it passes the check-out test.
To facilitate such tests, test scripts are required to be included in the information registered for the WS on UDDI.
Group testing: further investigation of how to select a service from a large number of candidates by testing.
A test case ranking technique improves the efficiency of group testing.
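The check-in gate and the ranking idea can be sketched as follows; the function names and the failure-count heuristic are hypothetical illustrations of the approach, not Tsai et al.'s actual API:

```python
# Sketch of the check-in/check-out and group-testing ideas (names and
# the ranking heuristic are hypothetical): a registry admits a service
# only if it passes its check-in tests, and test cases that revealed
# more failures historically are run first to fail candidates sooner.

def check_in(registry, name, service, tests):
    # Admit the service into the registry only if every test passes.
    if all(t(service) for t in tests):
        registry[name] = service
        return True
    return False

def rank_tests(tests, failure_counts):
    # Order test cases by past failure-detection count, highest first.
    return sorted(tests, key=lambda t: -failure_counts.get(t.__name__, 0))
```

Ranking matters for group testing because a candidate that will fail should be eliminated by the earliest possible test case, reducing total test executions across a large candidate pool.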
Bertolino et al (2005): audition framework
An admission test is performed when a WS is registered to UDDI.
Run-time monitoring services observe both functional and non-functional behaviours after a service is registered in a UDDI server.
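Post-registration monitoring can be sketched as a wrapper around each service call that records functional failures and response time (a non-functional behaviour). The wrapper below is an illustrative simplification, not the audition framework's implementation:

```python
# Illustrative sketch of run-time monitoring after registration: wrap a
# service call so each invocation logs whether it succeeded and how
# long it took. A simplification, not the audition framework itself.

import time

def monitored(service_fn, log):
    def wrapper(*args):
        start = time.monotonic()
        try:
            result = service_fn(*args)
            ok = True
        except Exception:
            result, ok = None, False
        # Record functional outcome and response time for later audit.
        log.append({"ok": ok, "elapsed": time.monotonic() - start})
        return result
    return wrapper
```

The accumulated log then lets the registry revisit its admission decision if a service's observed behaviour degrades.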
Service test governance (STG) (2009):
Incorporates testing into the wider context of quality assurance of WS
by imposing a set of policies, procedures, documented standards, etc. on WS development.
Bertolino and Polini admitted (2009) that "on a pure SOA based scenario the framework is not applicable". Both recognised the need for collaboration in testing WS, but the technical details of how multiple parties collaborate in WS testing were left open.
Zhang, Y. and Zhu, H., Ontology for Service Oriented Testing of Web Services, Proc. of the Fourth IEEE International Symposium on Service-Oriented System Engineering (SOSE 2008), Dec. 18-19, 2008, Taiwan.
Zhu, H., A Framework for Service-Oriented Testing of Web Services, Proc. of COMPSAC'06, Sept. 2006, pp. 679-691.
Zhu, H. and Huo, Q., Developing a Software Testing Ontology in UML for a Software Growth Environment of Web-Based Applications, Chapter IX in Software Evolution with UML and XML, Hongji Yang (ed.), IDEA Group Inc., 2005, pp. 263-295.
Zhu, H., Cooperative Agent Approach to Quality Assurance and Testing Web Software, Proc. of QATWBA'04/COMPSAC'04, Sept. 2004, IEEE CS, Hong Kong, pp. 110-113.
Zhu, H., Huo, Q. and Greenwood, S., A Multi-Agent Software Environment for Testing Web-based Applications, Proc. of COMPSAC'03, 2003, pp. 210-215.