Dynamic Process Orchestration


THE POWER OF PROCESS

Product white paper
Leading the way in business agility

Authors: Dr. Michael Georgeff, Precedence Research Institute; Jon Pyke MBCS, CTO, Staffware

people-to-people • people-to-application • application-to-application
This document contains information which is confidential and proprietary. The contents of this document may not be copied, reproduced or disclosed to any individual, in whole or in part, without the written permission of a duly authorized officer of Staffware. By accepting this document or any part of its contents, the recipient undertakes not to use all or any part of its contents in any way which may be detrimental to Staffware or Precedence Research. Copyright of this material is held by Precedence Research. References in this publication to Staffware products and services do not imply that Staffware Plc intends to make these available in all countries in which Staffware operates or is represented.

Staffware, Staffware Process Engine, Staffware iProcess Engine, Staffware Process Definer, Staffware Process Orchestrator, Staffware Process Administrator, Staffware Process Monitor, Staffware Process Objects, Staffware Server Objects, Staffware Process Developers Kit, Staffware Process Integrator, Staffware Process Client, and Staffware Process Relationship Manager are trademarks of Staffware PLC. IBM is the registered trademark of International Business Machines. AIX, CICS, DB2 and MQ Series are registered trademarks of International Business Machines. UNIX is a registered trademark of The Open Group. Microsoft, Windows, and Visual Basic are registered trademarks of Microsoft Corporation. Sun, Solaris, Java, J2EE, Enterprise JavaBean, EJB and all Java-based marks are either trademarks or registered trademarks of Sun Microsystems. HP, HP-UX, and Compaq are either trademarks and/or service marks or registered trademarks and/or service marks of Hewlett Packard Company and/or its subsidiaries. Oracle is a registered trademark of Oracle Corporation. Actional Control Broker is the trademark of Actional Corporation. Tuxedo is the trademark of BEA Systems, Inc. Linux is a registered trademark of Linus Torvalds. W3C, XML (Extensible Markup Language), HTTP (Hypertext Transfer Protocol), and HTML (Hypertext Markup Language) are claimed as a trademark or generic term by MIT, ERCIM, and/or Keio on behalf of the W3C (World Wide Web Consortium). ARIS Toolset is a registered trademark of IDS Scheer AG. CORBA is a registered trademark of the Object Management Group, Inc. OMG and Object Management Group are trademarks of the Object Management Group, Inc. Other company, product and services names mentioned herein may be trademarks or service marks of others.

Version 1, March 2003. © Copyright Staffware Plc 2003. All trademarks are recognized.
Table of Contents

Introduction
The New Business Complexity
Goal-Driven Processing
Static Process Selection
Dynamic Process Selection
Multiple Process Selection
Goal-Driven Process Selection
Semantic Modularity
Handling Exceptions
Web Services
Benefits of Goal-Driven Processing
Summary
Introduction

Business software has long been used to support key business processes; there is nothing new in this notion. What has changed is the realization that the easiest way for organizations to be competitive, manage costs, and stay flexible and responsive is to understand and improve the structure and execution of their business processes, and that process management technology can play a key role in meeting those objectives. This view is gaining global recognition in both the public sector and private enterprise, such that many organizations now acknowledge that process holds the key to their successful future.

Any solution must bring benefits to the bottom line without discarding what currently works. So harmonizing new processes and new applications with existing infrastructures, including existing technologies such as ERP and CRM, is a priority. Providing technology that enables users to map out the business process in clear graphical notation is important, but being able to execute that process, facilitate simple integration with legacy systems and commercially available packages, and then monitor and manage how those processes work together is also vital. Another issue at the top of the agenda is the effectiveness and flexibility of end-to-end processes, so that operations can run smoothly from the customer-facing front office right through to the back office in one smooth administrative flow using a central database.

Staffware's unique Independent Process Layer separates the business process logic from the application layer, making integration much smoother and infinitely scalable. The Staffware Process Suite enables organizations to implement new applications that tie together front office applications and back office systems without disruption to the process layer. This reduces maintenance costs and time to deploy, and makes the IT function far more responsive to business needs. Staffware's BPM solution also ensures that organizations perform in line with internal and external regulations and policies.

Speed is of the essence

With expectations for fast response, fast solutions, and fast return on investment intensifying, the race to do things faster is accelerating. In the past, there has been much criticism of the time it takes for IT implementations to deliver real benefits, and building automated business processes is no exception. Staffware, which has been in this market for 15 years and has developed proven technology that is already supremely fast, has now introduced a new enhancement to its product suite. The Staffware Process Orchestrator is, as far as we can tell, a unique approach to managing business processes, and it is likely to deliver significant benefits when it comes to developing and deploying automated business process solutions.

What is it?

In most BPM products, developers of business processes have to accommodate the outcome of every business decision or event. Until now, this has not been a significant problem, but with the advent of new technologies such as web services, a new way of executing processes is required. The Staffware Process Orchestrator is such a mechanism. Essentially, it gives process developers the ability to dynamically assign process components (sub-processes and web services) to the overall business process, depending on data content and unplanned business events. By determining which process fragment to execute, based on the response from an external system, event, or web service integration, users can build processes that are adaptive and highly responsive to business needs, without having to know in advance what events will occur.
How does it work?

Dynamic sub-procedures have been introduced to support the Staffware Process Orchestrator. These allow the Process Definer to specify that when a particular point in the process is reached, a number of sub-processes are started. The number can be variable, working off data set during the previous steps of the particular case. The types of sub-process can also vary, with several different types being started from the one dynamic sub-process step. This works using an 'array', with each row in the array defining one sub-process instance: which sub-process to start, which step to start it at, and what data values to import into that sub-process. The parent process starts all of the required instances and then waits until they all complete before moving on. This functionality could, for example, be used to start a number of lines for a sales order, with some lines being 'product' sub-processes and others being 'services' sub-processes.

The Process Orchestrator also uses a new type of step that enables an external application to initiate a variable number of tasks, including Staffware sub-processes, and graft them onto the main case. The main case then waits until the sub-processes, and any external tasks, notify completion before continuing along the process. This facility allows for dynamic initialization of process components and also recognizes that, whilst the Staffware process does not necessarily own all of the decision-making, it can monitor and manage the outcome of decisions made by other components of the whole solution.

This paper explains this leap in technology in more detail.
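The call-array mechanism described above can be pictured with a short sketch. This is purely illustrative: the row structure, function names, and the sales-order data are assumptions for the example, not Staffware's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SubProcessRow:
    """One row of the dynamic call array: which sub-process to start,
    which step to start it at, and what data values to import."""
    process_name: str
    start_step: str
    data: dict = field(default_factory=dict)

def run_dynamic_step(call_array, registry):
    """Start every sub-process instance listed in the array and collect
    their results; the parent process moves on only when all are done."""
    results = []
    for row in call_array:
        sub_process = registry[row.process_name]          # look up the type
        results.append(sub_process(row.start_step, row.data))  # run to completion
    return results

# A sales order with two 'product' lines and one 'services' line,
# as in the example in the text (all names are illustrative).
registry = {
    "product": lambda step, data: f"product:{data['sku']}",
    "services": lambda step, data: f"services:{data['sku']}",
}
call_array = [
    SubProcessRow("product", "start", {"sku": "A100"}),
    SubProcessRow("product", "start", {"sku": "A200"}),
    SubProcessRow("services", "start", {"sku": "S300"}),
]
order_lines = run_dynamic_step(call_array, registry)
```

The key point the sketch captures is that the array's contents, and therefore the number and types of sub-processes started, are data set at run time rather than fixed at design time.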
The New Business Complexity

The emergence of the Internet, application servers, and middleware has provided the means to connect and orchestrate applications from diverse sources, across and within enterprises. This gigantic leap in connectivity has dramatically changed the way businesses deliver services to their customers and interact with their suppliers, partners, and employees.

However, the orchestration of these applications and services has become more and more complex. Higher levels of mass-customization and personalization require an ever-increasing range of options and interactions. In addition, the environment in which applications are to be integrated and orchestrated is no longer the closed, controlled environment that is familiar to business IT departments. It is open, unpredictable, and rich with exceptions. The services provided by suppliers and other organizations are beyond the control of the enterprise, and failure, incomplete information, and unanticipated responses are the norm rather than the exception. As a result, most attempts to develop high-value applications in this new business environment have proved to be very costly to build, very difficult to change and extend, hard to validate, fragile to failures and exceptions, unable to scale, and unable to deliver the expected value.
Why is this?

By far the best way to orchestrate applications and automate business services is to use a Business Process Management (BPM) system. These products allow developers (and, in some cases, business analysts) to rapidly design and implement the business processes that control the way in which services and other applications are employed and managed. In designing these processes, the application developer or business analyst must examine all possible situations or cases and must develop (in one form or another) the response to each one. In particular, at design time, all possible linkages within and between business processes must be specified. The processes, in short, are hardwired.

This approach works acceptably well for closed environments in which there are relatively few options or cases for the developer to consider. However, it fails to scale up to the kind of environment faced by today's increasingly connected and automated businesses. The reason is that there are just too many options and possibilities to consider and too many process flows to hardwire together. Development is therefore costly, complex, and long, and applications are hard to change, extend, and maintain. Building such applications is further complicated by the open nature of large-scale applications, in which service providers and other components often fail, provide incomplete information, or return unanticipated responses. In these cases, the response to every such event has to be explicitly coded into the business processes themselves, an extremely costly, error-prone task that is well beyond the capabilities of even the most proficient business analysts.

Keys to Success: Agility, Robustness, and Scalability

In today's business environment, this level of complexity is intolerable. Business applications need to be agile: it must be easy to develop, maintain, and change business processes quickly and cost-effectively. They need to be robust, so that failure or incomplete information does not bring the system down. And they need to be able to scale with the business: to start small, deliver a fast return on investment, and then grow incrementally as services are improved or the business expands.

The key to reducing this complexity is to recognize that it will never be possible to achieve agility, robustness, and scalability if we have to design process flows for all eventualities. The responsibility for choosing the correct process to use at the correct time and in the correct circumstances must be shifted from the designer to the system itself. The system must be able to decide dynamically which option to pursue and determine autonomously how to deal with exceptions, rather than require a developer or business analyst to specify all of this at design time.

Hardwiring vs Dynamic Process Selection

As mentioned above, conventional BPM solutions hardwire (statically link) all the process nodes and process flows together at design time. As the complexity of the application increases, these processes can grow to tens of thousands of nodes with an exponentially expanding number of linkages between them. The use of sub-processes can partially reduce this complexity by providing opportunities for process reuse. However, each sub-process, together with the circumstances for its use, still has to be hardwired into every relevant process flow at design time.

This complexity can be greatly reduced by dynamically linking the processes together, deciding at run time the best process to use based on the current situation or case. Because the system itself determines which process to use and in which circumstances to use it, this "loose coupling" of processes eliminates the need to statically link processes together at design time. A developer or business analyst need not know the details of the entire process flow, nor the circumstances of sub-process invocation, and can thereby change and extend processes independently of one another. As a result, agility and scalability can be improved dramatically.
Exceptions and Unpredictability

As systems get more complex, possibly involving external components over which they have limited control, exceptions, failures, and unpredicted events become more and more common. So too does the possibility of incomplete process specification, as the developer may fail to consider all situations and all possible process flows. Dynamic process linking by itself cannot solve these kinds of problems. We need, in addition, a solution that enables the system to autonomously handle failures, exceptions, and incompleteness, even when these events are quite unanticipated by the developer.

The key to achieving such a solution is to realize that, in most large-scale business applications, there are many different paths to a successful outcome. By providing the system with the ability to find those other paths whenever the initially selected path fails, we can deliver a level of robustness and scalability that has, until now, been limited to human practice. Just how this is done we discuss below, but first we need to establish a uniform framework that will bring all this, and more, together.

Goal-Driven Processing

Goals

To provide a solid conceptual and technical basis on which to build a solution to these problems, we need to move from a task-oriented view of business processes to a goal-oriented view. This change of viewpoint is fundamental: when people are given a goal or objective to achieve, they have far more opportunity to accomplish it (especially in complex or unpredictable environments) than if they are given a predefined task or process to follow. The reason for this is that they commit to their goals and continuously adjust their choice of actions and processes to drive themselves towards achieving those goals. By so doing, they can select appropriate actions even as circumstances change in unanticipated ways.

However, at this point in the story, there is nothing special about a goal or objective; it may simply be viewed as a convenient terminology for talking about processes and their components. From this viewpoint, we consider each step or activity in a process specification to be a goal (or objective) that is to be achieved as the process is enacted. For example, the process shown in Figure 1 is a process for which the first step is to achieve the goal of Triaging a patient (i.e. deciding what medical tests are to be performed prior to patient examination), then to achieve the goal of Testing the patient, then the goal of Examining the patient, and finally the goal of Discharging the patient. Once all these goals have been achieved, in the order specified, the process will have successfully completed.

Figure 1: Process to Diagnose Patient (Triage, then Test, then Examine, then Discharge, then STOP)

Using this terminology, we will now examine the various ways in which sub-processes can be called or invoked from such a process.
Static Process Selection

The conventional way in which sub-processes are invoked is by static process selection (static linking). In this case, the process developer has a certain goal to achieve at a certain point in a process and knows the name of the (single) sub-process that achieves this goal. For example, there may be only one way of discharging a patient, so that the developer can directly specify the particular sub-process to be used (e.g. "Dischg Proc 1") at design time. The developer therefore sets up a static call to this sub-process, specifying the name of the sub-process, any parameters that are required, and possibly a mapping of case data from the calling process to the sub-process. This may appear as in Figure 2.

Figure 2: Diagnose Patient Process with Static Sub-Process Call (the Discharge step is statically linked to the sub-process "Dischg Proc 1")

Dynamic Process Selection

However, it is often the case that a range of different sub-processes may be available to achieve a given goal, the choice of which depends on the current situation or case data. In such cases, the names of the potential set of sub-processes and the circumstances of their usage are known. However, it may not be possible to determine these circumstances, and therefore which sub-process to use, until run time. For example, in attempting to achieve the goal of Testing the patient, it may not be possible to determine which test process to apply until the patient is interviewed during Triage.

It would be complex to hardwire these sub-processes into the calling process using static linking. The resulting process would be difficult to build, hard to understand, and even more difficult to maintain or change. It is far simpler to define the call once, in one place, and to determine dynamically which sub-process to use. This can be done by allowing the name of the sub-process to be represented as a variable, the value of which is determined dynamically at run time prior to the sub-process call. The sub-process call is then accomplished using the value of this variable to select the appropriate sub-process.[1]

[1] Another way to think about this is that processes become "first class citizens". That is, just as customers, products, users, and so on can be assigned as values of variables, so too can processes. And just as a certain customer, product, or user can be dynamically determined and assigned to an appropriate variable, so too can the name of a process be assigned to a variable. Once assigned, that variable can then be used in a call action to invoke the assigned process.
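The idea of a process name held in a variable can be sketched as follows. The names echo the patient example in the text; the process functions, the case fields, and the triage rule are all hypothetical.

```python
# A process name as the value of a variable, assigned at run time
# (illustrative sketch; the names echo the patient example).
def tst_diabetes(case):
    return "diabetes test performed"

def tst_stm_virus(case):
    return "stomach virus test performed"

# Processes as "first class citizens": the registry maps names to processes.
PROCESSES = {"Tst Diabetes": tst_diabetes, "Tst Stm Virus": tst_stm_virus}

def triage(case):
    # Triage decides which test sub-process is needed and returns
    # its *name*, which is stored in a process variable.
    return "Tst Diabetes" if case.get("fasting_glucose", 0) > 125 else "Tst Stm Virus"

case = {"fasting_glucose": 140}
test_process = triage(case)              # value determined at run time
result = PROCESSES[test_process](case)   # the call uses the variable's value
```

The call site is written once; which sub-process it actually invokes is decided only when the variable is assigned during Triage.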
Multiple Process Selection

Sometimes one needs to carry out a number of different processes to accomplish a given goal. For example, it may be that the goal of Testing the patient is accomplished by three sub-processes: one testing for diabetes, one for an intestinal disorder, and one for a stomach virus. These sub-processes may be parameterized (e.g. the test for diabetes may have a parameter for the patient's age). Further, it may be that a single sub-process has to be applied to a range of parameter values (e.g. the test for stomach virus may need to be carried out for two particular virus strains, such as "ec12" and "cs27"). Moreover, the number of test processes, their names, and the parameter values to which they should be applied may not be known at design time. For example, it may be that these values are determined during the Triage process. In such cases, one needs the ability to dynamically set the number, name, and parameter values of these sub-processes at run time.

To do this, the method of dynamic process selection described in the preceding section is extended to allow an array of process names (together with their associated parameter values) to be used in the dynamic call rather than a single process name (see Figure 3). In this case, the various sub-processes to be used and the parameter values to which they apply are determined during the Triage process and assigned to the call array for the Test goal. When the calling process reaches the Test goal, it will use the dynamically assigned process names and parameters to determine which set of sub-processes to call and which parameter values to use. The Test goal will be accomplished, and the calling process will proceed to the next step, only when all these sub-processes have completed successfully.

Figure 3: Diagnose Patient Process with Dynamic Multi-Process Call (the Test step's dynamically assigned call array, filled during Triage, starts "Tst Diabetes" with parameter 60, and "Tst Stm Virus" with parameters "ec12" and "cs27")
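The dynamic multi-process call of Figure 3 can be pictured as starting every row of the call array and blocking until all rows complete. The sketch below uses threads to show the fan-out and wait-all behaviour; the process names and parameter values are taken from the example in the text, and everything else is an illustrative assumption.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test sub-processes, parameterized as in the text.
def tst_diabetes(param):
    return f"diabetes test, age {param}"

def tst_stm_virus(param):
    return f"stomach virus test, strain {param}"

PROCESSES = {"Tst Diabetes": tst_diabetes, "Tst Stm Virus": tst_stm_virus}

# The call array, assigned at run time during Triage: one row per
# sub-process instance, pairing a process name with a parameter value.
call_array = [
    ("Tst Diabetes", 60),
    ("Tst Stm Virus", "ec12"),
    ("Tst Stm Virus", "cs27"),
]

# Start every row, then wait for all of them before moving on:
# the Test goal is accomplished only when every future has completed.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(PROCESSES[name], param) for name, param in call_array]
    results = [f.result() for f in futures]   # blocks until all complete
```

Note that the same sub-process ("Tst Stm Virus") appears twice with different parameter values, exactly the range-of-parameters case the text describes.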
Goal-Driven Process Selection

Dynamic process selection provides a substantial reduction in design complexity and an improvement in flexibility and agility. But it is not sufficient to achieve the gains in agility, robustness, and scalability needed to meet the demands of today's business environment.

In many cases it is difficult to determine even the names of the sub-processes needed to accomplish some given goal in the calling process. For example, in designing the calling process, the process developer may know that the patient has to be Examined (the goal), but may not know the names of the sub-processes that can possibly be used to achieve this goal, nor the circumstances that determine which one or ones (of possibly many) to use. These sub-processes may be developed by different people, in different parts of the organization, and may be changed frequently. In such cases, it is not possible for the calling process to determine, even dynamically, which particular sub-process to call. All the process developer knows is that a particular goal is to be achieved, but just which sub-process(es) can be used to achieve it cannot be easily determined. Nor, in fact, does the developer really care; he or she simply wants the goal accomplished in an appropriate way.

To solve this problem, we tag each sub-process with the goal that it achieves and the circumstances in which it can be used. For example, the sub-processes named "Examine patient with heart condition", "Examine pregnant patient", "Examine old patient", and "Standard examination" may all be tagged with the goal named "Examine". In addition, each sub-process is tagged with the circumstances in which it can be used, defined as an "entry condition" for the process. The entry condition is a conditional statement defined over the case data and any sub-process parameters. For example, the process "Examine old patient" may be tagged with the entry condition "Age > 65", where Age is a field of the case data. The other sub-processes are similarly tagged. (Some processes, such as "Standard examination", may have no entry condition.)

By so doing, the calling process simply needs to call the goal ("Examine") in the process flow, leaving it to the system to determine which sub-process best achieves this goal in the given circumstances (see Figure 4). At run time, the system collects all those sub-processes that satisfy the goal (i.e. that are tagged with the goal specified in the calling process, in this case those tagged with "Examine"). It then evaluates the entry condition of each of these sub-processes, selecting those whose entry condition evaluates, at run time, to "true". These sub-processes are called the "applicable" processes.

Figure 4: Diagnose Patient Process with Goal-Driven Process Call (sub-processes tagged with the goal "Examine" and the entry conditions "Age > 65", "Status = Pregnant", "Condition = Heart", or no condition)
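The collect-and-filter step can be sketched as follows. The library structure, the condition functions, and the selection rule are illustrative reconstructions of the description above, not the product's implementation.

```python
# Goal-driven selection sketch: each sub-process is tagged with the goal
# it achieves and an entry condition over the case data. The calling
# process names only the goal; the system picks the sub-process.
PROCESS_LIBRARY = [
    {"name": "Examine old patient", "goal": "Examine",
     "entry": lambda c: c.get("age", 0) > 65},
    {"name": "Examine pregnant patient", "goal": "Examine",
     "entry": lambda c: c.get("status") == "Pregnant"},
    {"name": "Examine patient with heart condition", "goal": "Examine",
     "entry": lambda c: c.get("condition") == "Heart"},
    {"name": "Standard examination", "goal": "Examine",
     "entry": None},  # no entry condition: always applicable
]

def applicable(goal, case):
    """Collect the processes tagged with the goal whose entry condition
    evaluates to true for this case (no condition means applicable)."""
    return [p for p in PROCESS_LIBRARY
            if p["goal"] == goal and (p["entry"] is None or p["entry"](case))]

def select(goal, case):
    """Default rule from the text: pick any applicable process,
    preferring one with a non-empty entry condition."""
    candidates = applicable(goal, case)
    candidates.sort(key=lambda p: p["entry"] is None)  # conditions first
    return candidates[0]["name"]
```

With this in place, the calling process contains only the goal "Examine"; adding or removing an examination sub-process changes the library, not the calling process.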
The system then selects one of these applicable processes to execute, executes it, and on completion releases it and returns to the calling process. (If there is more than one applicable process, the default is to select any one, preferring those with a non-empty entry condition. It is also possible for the user to specify other ordering criteria, or to require that all applicable processes be executed rather than just one.)

The important point is that the condition that defines the "applicability" of the sub-process is attached to the sub-process, not the calling process. The calling process need not know the selection criteria, nor need it specify them. This greatly simplifies the construction of the calling process. The developer of the calling process need not know how many sub-processes are available to achieve the goal, their names, or the circumstances in which they can be used; all that need be known is that at least one such process exists.

This approach greatly facilitates development, agility, and scalability. The calling process is simple and easily understood, and new sub-processes can be easily added or removed without any change whatsoever to the calling process or processes. Development time increases linearly with the number of sub-processes per goal (that is, with the degree of customization or process complexity). In contrast, for statically linked processes, development time increases exponentially, as all potential inter-process linkages have to be considered and possibly coded (see Figure 5). And these benefits rapidly multiply when a given goal (such as "Examine") is reused in a number of different calling processes, as the sub-process selection criteria do not have to be repeated in every such calling process.

Figure 5: Linear Scalability vs Exponential Complexity (potential process linkages plotted against processes per goal: linear costs and time to value for goal-driven linking, exponential for static linking)
Semantic Modularity

Now let's stand back from this for a moment and look at a typical sub-process. To each sub-process is attached (tagged) a goal (specifying the goal or objective it achieves on successful completion) and an entry condition (specifying the conditions required for this sub-process to be used). The process flow itself specifies which goals (perhaps thought of as sub-goals) need to be achieved and the ordering of their achievement. In other words, a sub-process can be viewed as a statement saying how to achieve some higher-level goal or objective by achieving certain sequences of lower-level goals or objectives.

This conceptual shift is important. It means that each sub-process can be semantically understood independently of any other process. Provided there is some common understanding defining the names of the goals (e.g. "Examine"), each process or sub-process can be viewed as a simple logical statement. For example, if we want to achieve the goal of Examining a patient, one way to accomplish this objective is to achieve the various sub-goals in the order specified in the sub-process definition. If these are all achieved, then the goal of Examining the patient will have been achieved.

Such a process specification is "semantically complete": the developer need only ensure that this sub-process represents a true statement about the application. He or she need not know whether this sub-process will ever be called, or where and in what circumstances it will be called. Nor need the developer know what other sub-processes will be called to achieve the various sub-goals specified in the process flow. All that the developer needs to do is guarantee that, if the process entry condition is satisfied, and if the goals in the process flow are achieved as specified, then the goal associated with that sub-process will indeed be achieved.

For example, assume that the process defined in Figure 1 is itself a sub-process, tagged with the goal "Diagnose Patient" and the entry condition "Patient is insured". Then the semantics of the process are: if the patient is insured, and Triage is accomplished, and subsequently Testing is accomplished, and then Examination and finally Discharge are accomplished, then Diagnose Patient will have been accomplished. And this is true no matter how many sub-processes are used or available, what their names are, and no matter how complex their selection criteria; all of this is invisible, and not even relevant, to the developer of the process to Diagnose Patient.[2]

This approach provides for highly modular business process design. Sub-processes can be easily added, deleted, or changed without having to search for all possible calling processes and modify the process selection code in each. In short, processes and sub-processes are developed truly independently of one another.

[2] Note that there may in turn be many processes with the goal to Diagnose Patient, including ones that cover the situation in which the patient is uninsured.
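Read this way, a sub-process reduces to a small declarative statement: a goal, an entry condition, and an ordered flow of sub-goals. A hypothetical sketch of the Diagnose Patient example (the structure and the `achieve` hook are assumptions for illustration):

```python
# A sub-process as a self-contained, semantically complete statement.
# Nothing here names any calling process, nor the processes that will
# be chosen, elsewhere and at run time, to achieve each sub-goal.
diagnose_patient = {
    "goal": "Diagnose Patient",
    "entry": lambda case: case.get("insured", False),  # "Patient is insured"
    "flow": ["Triage", "Test", "Examine", "Discharge"],
}

def enact(process, case, achieve):
    """If the entry condition holds and each sub-goal in the flow is
    achieved in order, the process's goal has been achieved."""
    if not process["entry"](case):
        return False
    for subgoal in process["flow"]:
        achieve(subgoal, case)   # *how* is decided elsewhere, at run time
    return True

achieved_goals = []
enact(diagnose_patient, {"insured": True},
      achieve=lambda goal, case: achieved_goals.append(goal))
```

The `achieve` callback stands in for the goal-driven selection machinery; the sub-process itself only states the ordering of sub-goals.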
  13. 13. Dynamic Process Orchestration white paper p e o p l e - t o - p e o p l e • p e o p l e - t o - a p p l i c a t i o n • a p p l i c a t i o n - t o - a p p l i c a t i o n 12 Handling Exceptions We have by now provided a way of defining processes that completely encapsulates their definition in self-contained, semantically complete components, significantly improving agility and scalability. However, we have not addressed one of the major problems of open business environments:- how do we handle failures, incompleteness, and exceptions without explicitly specifying this in the business processes themselves? To understand this, let's reflect on how people get around in highly complex, unpredictable worlds. They do this by keeping track of their goals and their current situation, and then choosing processes that, given the situation they are in, can accomplish their goals. In short, at each moment in time they select a process that gets them from where they are to where they want to be. And they continue to do this, even as processes fail and unexpected events occur.3 Goal-driven processing uses the same mechanism for handling exceptions and failures during process execution. The way this works is as follows. Suppose some sub-process has been selected to achieve a given goal. If this sub-process fails or causes an error condition during execution, perhaps as a result of some unanticipated exception, the calling process reposts the goal. The system then once again collects all sub-processes that are tagged with the given goal, re-evaluates their entry conditions, and selects another that is applicable in the current circumstances. If there is such a sub-process, this will be selected and executed. This process continues until one of the selected sub-processes completes successfully. If at any point there are no further sub-processes to select, then the goal in the calling process will be unachievable (as all possible ways to achieve it have been tried). 
In this case, the calling process itself fails. If this is the top-level (main) process, then nothing more can be done, and any conventionally defined error handlers are invoked. In many cases, however, the calling process will itself be a sub-process called to achieve some higher-level goal in some higher-level process. In that case, the higher-level goal is reposted, possibly resulting in other processes being selected to achieve it. This mechanism bubbles up the entire stack of sub-processes, looking at every level for alternative ways of achieving the calling goal. In large, process-rich applications, an alternative path to the top-level goal is usually found at some level (indeed, good process designers ensure this). As a result, applications are far more robust to exceptions, failures, and incomplete process specification than those built with conventional BPM systems.⁴

We remarked above that, upon reposting the calling goal, the entry conditions of all sub-processes that satisfy the goal have to be re-evaluated. The reason is that execution of the failed process may have changed the situation or the case data, so that some processes that were applicable before may no longer be applicable, and vice versa. In addition, the sub-process (or sub-processes) that failed must not be selected again, on the grounds that, having failed once, they are unlikely to succeed on a second try. (Where this is not the case, the developer can tag the process for retry.)

³ You can see this by reflecting on almost any human behaviour - for example, finding a restaurant in an unfamiliar town, coping with a flat tyre on the way to the airport, or misplacing your keys. Many processes we select fail part way through, yet we have no difficulty selecting another to reach our goal.

⁴ Of course, it is still possible to insert special error-handling actions or exception-handling processes where appropriate.
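The whole repost mechanism can be illustrated with a small sketch. Everything here is hypothetical - the registry shape, the `achieve` function, and the goal names are invented for this example, not the Staffware API - but it captures the essential loop: select an applicable sub-process, repost the goal on failure, re-evaluate entry conditions, exclude sub-processes that have already failed, and let an exhausted goal propagate upward so that the calling goal is reposted in turn.

```python
# Hypothetical sketch of goal reposting (illustrative names, not the
# Staffware API). A registry entry ties a sub-process to the goal it
# achieves and an entry condition over the current case data.

class GoalFailed(Exception):
    """Raised when every applicable sub-process for a goal has failed."""

def achieve(goal, case, registry):
    failed = set()  # sub-processes that failed are not selected again
    while True:
        # Entry conditions are re-evaluated on every repost, since the
        # failed attempt may have changed the case data.
        options = [p for p in registry
                   if p["goal"] == goal
                   and id(p) not in failed
                   and p["when"](case)]
        if not options:
            # All ways to achieve the goal have been tried: the calling
            # process fails, which reposts the caller's goal one level
            # up - the "bubbling up" through the stack of sub-processes.
            raise GoalFailed(goal)
        chosen = options[0]
        try:
            return chosen["run"](case)
        except Exception:
            # Covers both ordinary errors and a GoalFailed raised by a
            # nested achieve() call inside the chosen sub-process.
            failed.add(id(chosen))
```

For instance, if a sub-process for a top-level goal posts a lower-level goal that turns out to be unachievable, the resulting `GoalFailed` is caught by the caller's loop, which then selects an alternative sub-process for the top-level goal - no exception-handling code appears in any process definition.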
Web Services

External applications and Web Services can be invoked using the same mechanisms described above. In this case, the external application or Web Service is also considered a goal, differing from other goals only in that it is achieved by one or more external service providers rather than by some internal process. Just as there may be many sub-processes for achieving a given goal, so there may be many service providers for delivering a given service; just as sub-processes are selected on the basis of their applicability to the current situation or case, so too are external service providers; and just as the calling process need not know which sub-processes are available or their criteria of applicability, neither need it know which service providers are available or theirs.

Goal-driven processing therefore extends the loose coupling notions of Web Services to the core business processes themselves, providing a uniform, highly agile framework for describing and using both internal processes and external service providers. It also adds substantially to the robustness of Web Services applications. Whereas conventional Web Services architectures are silent on the possible failure of a service provider to meet contractual or other commitments, goal-driven processing provides a theoretically well-grounded method for handling such situations. In particular, if a service provider fails to deliver a requested service according to agreed Quality of Service criteria, a goal-driven system automatically re-invokes the service request, forcing the selection of another service provider or of an appropriate sub-process to handle the failure.
For example, if one credit-checking service provider fails to respond to a service request within a given time limit, goal-driven processing uses the exception recovery mechanism described in the previous section to autonomously select another provider to deliver the required service.⁵

⁵ In those cases where this behaviour is not desired, specialized failure-recovery or exception-handling processes can be invoked using the same goal-driven mechanism. In addition, Staffware provides further support for loosely coupled service selection and monitoring using special dynamic service couplers, provided by its partner WebV2.
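The credit-check scenario can be sketched as follows. This is a hypothetical illustration only - the provider registry, the `request_service` function, and the bureau names are invented for this sketch, and Staffware's actual dynamic service couplers work differently - but it shows the principle: providers tagged with the same service goal are tried in turn, and one that misses its agreed response time is treated exactly like a failed sub-process, causing the request to be reposted to the next provider.

```python
# Hypothetical sketch of Quality-of-Service-driven provider failover
# (illustrative names, not the Staffware or WebV2 API).
import concurrent.futures
import time

class ServiceFailed(Exception):
    """Raised when no registered provider delivers the service in time."""

def request_service(goal, providers, timeout):
    """Try each provider tagged with the goal until one responds within
    the agreed Quality of Service time limit."""
    for provider in providers:
        if provider["goal"] != goal:
            continue
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(provider["call"])
            try:
                return future.result(timeout=timeout)  # enforce the QoS limit
            except concurrent.futures.TimeoutError:
                pass  # provider missed the time limit: repost, try the next
            except Exception:
                pass  # provider errored: likewise try the next one
    raise ServiceFailed(goal)

def slow_bureau():
    time.sleep(0.2)  # simulated provider that misses a 50 ms limit
    return "score: 700 (slow bureau)"

def fast_bureau():
    return "score: 700 (fast bureau)"

PROVIDERS = [
    {"goal": "Credit Check", "call": slow_bureau},
    {"goal": "Credit Check", "call": fast_bureau},
]
```

With a 50 ms limit, `request_service("Credit Check", PROVIDERS, timeout=0.05)` abandons the slow bureau and completes via the fast one, with no failover logic in the calling process.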
The Benefits of Goal-Driven Processing

This goal-driven, purposive means of process design and execution represents a new way of thinking about application development. The responsibility for coping with much of the complexity of the application and the environment in which it is embedded - that is, for handling multiple options, exceptions, change, and uncertainty - is transferred from the process developer to the system.

The consequences of this move are dramatic. Far more complex applications can be built, far more easily and quickly, because it is no longer necessary for the developer to encode all the special cases for dealing with a complex, unpredictable world. In contrast to conventional software, even incomplete systems can be developed and deployed, since failures through incompleteness are automatically handled by choosing other paths to the goal. This allows businesses to deploy applications rapidly and to grow them incrementally, greatly reducing time-to-value.

In summary, the benefits of this approach are:-

- Quick to develop:- time-to-value is dramatically reduced;
- Highly agile:- easy to change, easy to extend;
- Robust:- in the face of exceptions, failures, and incompleteness;
- Reduced complexity:- simple, modular, self-contained components, easily validated and inspected, accessible to both business analysts and IT developers;
- Linear scalability:- development effort increases linearly with the number of processes, not exponentially;
- Incremental build:- start small, grow incrementally.

Summary

The Staffware Process Orchestrator is a new and unique approach to managing business processes. In most BPM products, developers of business processes have to accommodate the outcome of every business decision or event. Until now, this has not been a significant problem. However, with the advent of the Internet and new technologies such as Web Services, a new way of executing processes is required.
The Staffware Orchestrator is such a mechanism. It provides process developers with the ability to dynamically assign process components, sub-processes, and Web Services to the overall business process, depending upon data content and unplanned business events. By determining which process fragment to execute based on the response from an external system, event, or Web Service integration, Staffware users can build processes that are adaptive and highly responsive to business needs, without having to know in advance which events will occur.
www•staffware•com

THE POWER OF PROCESS

Headquartered in Maidenhead, UK, the company operates globally, providing access to over 2,500 trained consultants. Staffware plc reserves the right to alter the content and timing of the implementation of the strategy as described in this document. References in this publication to Staffware products and services do not imply that Staffware plc intends to make these available in all countries in which Staffware operates or is represented.

© Copyright 2003 Staffware® plc. All trademarks are the property of their respective owners. All rights reserved.