Software-centric systems are increasingly ubiquitous, distributed, multi-platform and multi-component, and built on Web services and similar loosely coupled, standards-based messaging interfaces. The world is growing increasingly accustomed to, and dependent on, using such systems for activities ranging from interpersonal communications to large-scale commerce. Users demand 100% availability and near-perfect reliability from the services they rely on.

The ever-growing complexity and operational scale of such systems means that the software must be validated at a level of scrutiny far greater than what manual quality assurance can accomplish. Furthermore, the concept of a “release” is becoming obsolete, as SOA applications must remain “up” full-time even as the code evolves. Publishing code and then discovering problems and limitations through live operation (a.k.a. “user testing”) is unacceptable. Agile development of SOA-based applications, including comprehensive testing of all components at all levels, is thus becoming an important requirement of software development and deployment processes [2]. In fact, SOA tests are recognized as being of sufficient importance that service providers are expected to publish their test suites as part of their service specification [3], [4] so that outside integrators can use them to validate code that uses the services.

TDD emerged as a means of ensuring full test coverage during Agile software development. Its emphasis on validating every component of an application explicitly includes Web service interfaces. Just as with other types of code, implementing a test for every behavior of a Web service allows developers to maximize the quality, correctness, and robustness of SOA applications. Code is conventionally unit tested by “white-box” testing that calls low-level classes and methods inside a test framework. Web service methods have invocation interfaces that resemble conventional software methods, but their implementations are considerably more high-level and complex.
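For context, a conventional white-box unit test in a framework such as JUnit instantiates a local class directly and asserts on its behavior. The following minimal sketch uses a hypothetical ShoppingCart class purely for illustration.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Conventional "white-box" unit test: the test creates a local object and
// calls its methods directly. ShoppingCart is a hypothetical class used only
// to illustrate the pattern.
public class ShoppingCartTest {

    @Test
    public void addingAnItemIncreasesTheTotal() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("widget", 2, 9.99);          // two widgets at 9.99 each
        assertEquals(19.98, cart.getTotal(), 0.001);
    }
}
```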

WS methods are defined as collections of message types and formats, typically in Web Services Description Language (WSDL) [5], and are implemented by service containers that both process Web service messages and handle communication with remote applications.

When a Web service method is called, the service container performs multiple operations, including receiving and parsing the Simple Object Access Protocol (SOAP) [6] request message from the remote application, invoking methods to process the request, formatting a response message, and sending it to the caller. To realistically test a Web service, the test must act as a client that connects to an instance of the service container. In this paper, we describe two case studies illustrating how we implemented Web service testing to enable TDD.
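To make the client role concrete, the sketch below (assuming JUnit and the standard SAAJ API) hand-builds a SOAP request and posts it to a service instance running in a container. The endpoint URL, namespace, and the "echo" operation are hypothetical examples, not details of GRIDL or TxFlow.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBody;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPMessage;

import org.junit.Test;

// The test acts as a client of a deployed service: it sends a SOAP request to
// the service container and asserts on the SOAP response. Endpoint, namespace,
// and operation names are illustrative.
public class EchoServiceClientTest {

    private static final String ENDPOINT = "http://localhost:8080/wsrf/services/EchoService";
    private static final String NS = "http://example.org/echo";

    @Test
    public void echoOperationReturnsANonFaultResponse() throws Exception {
        // Build a minimal SOAP request for the hypothetical "echo" operation.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPElement operation = request.getSOAPBody().addChildElement(new QName(NS, "echo"));
        operation.addChildElement(new QName(NS, "message")).addTextNode("hello");

        // Send the request to the running container and examine the reply.
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        try {
            SOAPMessage response = connection.call(request, new URL(ENDPOINT));
            SOAPBody body = response.getSOAPBody();
            assertFalse("service returned a SOAP fault", body.hasFault());
            assertTrue(body.getTextContent().contains("hello"));
        } finally {
            connection.close();
        }
    }
}
```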

WSDL-Defined Web Services

For both the GRIDL and TxFlow projects, Web services were defined using WSDL, generated using Globus Toolkit [7] utilities, and run in Globus Grid Service containers. WSDL is an XML-based format that declaratively specifies a Web service interface as a set of named methods, data types, structures, parameters, exception types, etc., without specifying how the service methods are implemented. WS tools such as the Globus Toolkit take a WSDL specification and generate stubs, pieces of code that can be linked into an application to let it call a service method as if it were a regular, local method, hiding the complexity of SOAP message parsing and Remote Procedure Call (RPC) invocations on a remote endpoint. Similarly, a Web service container uses the WSDL specification to parse and generate messages.
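As an illustration of the stub-based calling style, the sketch below stands in for the kind of code a WSDL-to-Java tool generates. EchoServiceLocator, EchoPortType, EchoRequest, and EchoResponse are hypothetical names for generated classes, not part of the Globus Toolkit API.

```java
import java.net.URL;

// Calling a Web service through generated stubs: the call reads like an
// ordinary local method call, while SOAP message handling and the RPC exchange
// happen inside the stub. All class names here are hypothetical stand-ins for
// WSDL-generated code.
public class EchoClient {

    public String echo(String endpoint, String text) throws Exception {
        // The generated locator binds a stub to a concrete service endpoint.
        EchoServiceLocator locator = new EchoServiceLocator();
        EchoPortType port = locator.getEchoPort(new URL(endpoint));

        // Invoke the remote operation as if it were local.
        EchoResponse response = port.echo(new EchoRequest(text));
        return response.getMessage();
    }
}
```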

Test-Driven Development

Test-Driven Development (TDD) is a key practice in Agile development methodologies. Tests drive development according to the rule that every significant code change must be preceded by defining a test that validates the change. This practice leads to improved design, fewer defects, complete code coverage, and better tests. Writing the test before the code is somewhat counter-intuitive; however, doing so has key benefits, including verifying that the test fails when the tested functionality is not implemented, and requiring the developer to think about the functionality from the perspective of a caller before implementing it.

In the context of Web service testing, as with other types of code, every behavior of a Web service method should be covered by a test. Before creating a WS method, a unit test should be defined that calls the method with “known good” parameters and receives the expected response. Passing this test requires a basic implementation of the Web service method. Once this is done, the development of further tests depends on what WS behavior is specified by the system requirements.
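Assuming the hypothetical echo service and stub-backed EchoClient from the earlier sketches, the initial “known good” test might look like the following; it is written before the service method exists and fails until a basic implementation is deployed.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// First TDD test for a Web service method: call it with "known good"
// parameters and assert on the expected response. EchoClient and the endpoint
// are the hypothetical examples used throughout these sketches.
public class EchoServiceTddTest {

    private static final String ENDPOINT = "http://localhost:8080/wsrf/services/EchoService";

    @Test
    public void echoWithKnownGoodInputReturnsExpectedResponse() throws Exception {
        EchoClient client = new EchoClient();
        assertEquals("hello", client.echo(ENDPOINT, "hello"));
    }
}
```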

Common test types include validating changes to the service state caused by WS method calls; validating that an error response is received if the WS method is called with empty or invalid parameters; validating the expected response if a parameter is out of range; and validating an error or exception if the WS session expires or is terminated.
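As one example of these error-path tests, the sketch below checks that the hypothetical echo operation rejects an empty parameter; it assumes a JAX-RPC-style stub that surfaces the resulting SOAP fault as a java.rmi.RemoteException.

```java
import static org.junit.Assert.fail;

import java.rmi.RemoteException;

import org.junit.Test;

// Error-path test: an empty parameter should produce an error response.
// The service, the EchoClient wrapper, and the fault behavior are hypothetical.
public class EchoServiceErrorTest {

    private static final String ENDPOINT = "http://localhost:8080/wsrf/services/EchoService";

    @Test
    public void emptyMessageParameterIsRejected() throws Exception {
        EchoClient client = new EchoClient();
        try {
            client.echo(ENDPOINT, "");
            fail("expected a fault for an empty message parameter");
        } catch (RemoteException expected) {
            // The service reported the invalid input as a SOAP fault.
        }
    }
}
```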

Beyond driving the development of new functionality, TDD is also instrumental in the defect resolution process. Whenever a bug is found, a test case should be created that fails because of the bug. Once this test is in place, the defect is fixed. This helps the developer establish precisely what is failing, and leaves a test in place to catch the bug if it recurs.
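For example, such a regression test might reproduce a hypothetical defect in which the echo service strips trailing whitespace from its input; both the defect and its symptom are invented purely for illustration.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Regression test written to reproduce a reported (hypothetical) defect:
// it fails while the bug is present and stays in the suite afterwards to
// catch any recurrence.
public class EchoServiceRegressionTest {

    private static final String ENDPOINT = "http://localhost:8080/wsrf/services/EchoService";

    @Test
    public void echoPreservesTrailingWhitespace() throws Exception {
        EchoClient client = new EchoClient();
        // Fails while the (hypothetical) trimming defect is present.
        assertEquals("hello ", client.echo(ENDPOINT, "hello "));
    }
}
```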