Grid computing is a form of distributed computing whereby a “super and virtual computer” is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks. Pieces of a program are divided among several computers, sometimes up to many thousands. In contrast to the traditional notion of a supercomputer, grids rely on parallel computing between complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface such as Ethernet. Grid computing is usually applied to a scientific, technical or business problem that requires a great number of processing cycles or involves large amounts of data.
Grid computing resources, such as computation and storage, can be bundled and provided as Utility Computing similar to a traditional public utility (such as electricity, water, natural gas, or telephone network).
Cloud computing represents a paradigm shift whereby details are abstracted from the users, who no longer need knowledge of, expertise in, or control over the technology infrastructure “in the cloud” that supports them. The term cloud is used as a metaphor for the Internet, and is an abstraction of the underlying infrastructure it conceals. Cloud computing typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet. Business applications are typically accessed from a web browser, while the software and data are stored on the servers.
Autonomic computing refers to the self-managing characteristics of distributed computing resources, adapting to unpredictable changes whilst hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. IBM has defined the following four functional areas of autonomic computing: Self-Configuration, the automatic configuration of components; Self-Healing, the automatic discovery and correction of faults; Self-Optimization, the automatic monitoring and control of resources to ensure optimal functioning with respect to the defined requirements; and Self-Protection, the proactive identification of and protection from arbitrary attacks.
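The four self-* properties are typically realized as a closed control loop in which a manager monitors a resource, analyzes its state against a high-level policy, plans an adaptation and executes it. The following is a minimal sketch of such a loop; all class and parameter names are hypothetical, not taken from any specific framework.

```python
# Minimal sketch of an autonomic control loop: the manager monitors a
# resource, compares its state against a high-level policy, and adapts.
# All names here are illustrative, not from any specific product.

class ManagedResource:
    def __init__(self):
        self.load = 0.0
        self.instances = 1

    def metrics(self):          # Monitor: expose sensor data
        return {"load": self.load, "instances": self.instances}

    def scale(self, delta):     # Execute: actuator
        self.instances = max(1, self.instances + delta)

def autonomic_step(resource, policy):
    """One pass of the monitor-analyze-plan-execute loop."""
    m = resource.metrics()                      # Monitor
    per_instance = m["load"] / m["instances"]   # Analyze
    if per_instance > policy["high"]:           # Plan: self-optimization
        resource.scale(+1)                      # Execute: grow capacity
    elif per_instance < policy["low"] and m["instances"] > 1:
        resource.scale(-1)                      # Execute: shrink when idle

policy = {"high": 0.8, "low": 0.2}
r = ManagedResource()
r.load = 2.0                 # load spikes across a single instance
autonomic_step(r, policy)    # the system self-optimizes by adding capacity
print(r.instances)           # 2
```

Run repeatedly against changing conditions, the same loop also yields self-healing (replacing a failed instance) and self-protection (throttling a suspicious load source); only the analysis rules differ.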
As the level of abstraction in software continues to rise, the size of functional granularity increases. Consequently, the architectural entities are increasingly decoupled from each other:

Objects: data abstraction; methods encapsulating behavior; reusability through inheritance; technology-specific; tightly coupled, as object instances reference each other.

Services: context-independent; composable; subordinate; reactive; one-to-one request-response; loosely coupled via messages; connectionless.

Agents: context-aware; communicative; autonomous; proactive; one-to-one or one-to-many publish/subscribe; decoupled; usually stateful and connected; form multiagent systems.
Event-Driven Architecture (EDA) is a set of design principles for specifying and implementing systems consisting of decoupled agents that exchange messages. Whereas SOA can be seen as a network of services, EDA could be described as a network of agents. Luckham (2002) defines an event processing agent (EPA) as “an object that monitors an event execution to detect certain patterns of events”. The event processing agent reacts to patterns in its input with actions. An EPA consists of event pattern rules and the local variables that form its state. Each rule has two parts: patterns called triggers that are matched against input, and bodies consisting of actions that are executed on such matches. EPAs can be organized into an event processing network (EPN) in which the outputs of EPAs are used as inputs of other EPAs. Each EPA executes independently as “a reactive, multithreaded process with local state that executes concurrently with other EPAs and communicates with them by events” (ibid.). The overarching architectural principle in EDA is communication rather than composition. Agents are autonomous entities that share common goals. They can determine the flow of control independently; it is not controlled by the client as it is in SOA. They are structurally decoupled from each other and merely maintain cohesion through contracts governing their interaction.
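Luckham's definition can be illustrated with a toy event processing agent: a set of (trigger, action) rules plus local state, wired into a small network. The class shape and the high-reading/alarm rules below are purely illustrative assumptions, not a real EPA library.

```python
# Toy event processing agent (EPA): pattern rules are matched against
# input events, and matching rules fire actions that may emit new events
# downstream. An illustrative sketch of Luckham's definition only.

class EPA:
    def __init__(self, rules):
        self.rules = rules          # list of (trigger, action) pairs
        self.state = {}             # local variables forming the EPA's state
        self.subscribers = []       # downstream EPAs in the network (EPN)

    def connect(self, other):
        self.subscribers.append(other)

    def emit(self, event):
        for sub in self.subscribers:
            sub.receive(event)

    def receive(self, event):
        for trigger, action in self.rules:
            if trigger(event, self.state):           # pattern matched
                action(event, self.state, self.emit)

# A two-agent network: the first detects high readings, the second counts alarms.
detector = EPA([(
    lambda e, s: e.get("type") == "reading" and e["value"] > 100,
    lambda e, s, emit: emit({"type": "alarm", "value": e["value"]}),
)])
counter = EPA([(
    lambda e, s: e.get("type") == "alarm",
    lambda e, s, emit: s.update(alarms=s.get("alarms", 0) + 1),
)])
detector.connect(counter)     # output of one EPA feeds the input of another

for v in [50, 120, 130]:
    detector.receive({"type": "reading", "value": v})
print(counter.state["alarms"])   # 2
```

Note how control flows through events rather than method composition: the detector neither knows nor cares what its subscribers do with the alarms it emits.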
An agent has the following properties. Interaction: the agent perceives its environment through sensors and acts upon that environment through actuators; in the context of EDA, these inputs and outputs are message-based events. Agent function: the agent’s choice of action can depend on the entire percept sequence; mathematically, the agent function maps any given percept sequence to an action. The agent program is a concrete implementation of the agent function that runs on the agent architecture. Model: unless the agent is a simple reflex agent, it maintains an internal state that depends on the percept history; this model represents “how the world works”. Connected: the interaction between agents is connected in the sense that the agents maintain correlation between messages, i.e. which messages belong together in the same conversation between agents. Unlike in SOA, there is no central flow of control; the flow is routed in a dispersed fashion by the agents. The middleware routes each message to the appropriate recipients based on message content, business policies and subscription criteria. Thereby, the participating agents are fully decoupled and no agent-level contract is required. The publish-subscribe style of message exchange allows broadcasting and multicasting of messages, which is often more economical than the one-to-one request/reply pattern of SOA.
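The agent function can be written down directly as a mapping from the entire percept history to an action. The sketch below uses a hypothetical thermostat-style policy to show why the whole sequence, not just the latest percept, can matter.

```python
# Sketch of the agent function: the agent program keeps the percept
# sequence as internal state (its model) and maps it to the next action.
# The heater policy is purely illustrative.

class Agent:
    def __init__(self, agent_function):
        self.percepts = []              # percept sequence (history)
        self.agent_function = agent_function

    def act(self, percept):
        self.percepts.append(percept)   # update the internal model
        return self.agent_function(self.percepts)

# An agent function that depends on the whole sequence, not only the last
# percept: turn the heater on only after two consecutive cold readings.
def heater_policy(percepts):
    if len(percepts) >= 2 and all(p < 18 for p in percepts[-2:]):
        return "heat_on"
    return "idle"

a = Agent(heater_policy)
print(a.act(17))   # idle (only one cold reading so far)
print(a.act(16))   # heat_on (two consecutive cold readings)
```

A simple reflex agent would be the degenerate case in which the agent function inspects only `percepts[-1]` and no model is needed.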
The very promise of Service-Oriented Architecture (SOA) -- interoperability -- is not redeemed by the thick soup of WS-* standards. In the search for richness, the modern SOA technologies have failed in their reach, as the exact same stack of prohibitively complex standards is required at both ends. Web 2.0 revives the notion of software as open Web services that can be mashed up in new and innovative ways. REST, or REpresentational State Transfer, is advocated as the simple yet powerful method of leveraging the pervasive HTTP protocol to bring about what is referred to as Web-Oriented Architecture (WOA). While WOA may not be sufficient for the most demanding enterprise use, it is arguably the most interoperable and scalable SO technique for the majority of uses.
To illustrate the difference between an object-oriented approach and REST, Anne Thomas Manes of Burton Group has used the example of writing a program to turn a building's lights on and off, keeping in mind that in REST the power is in sending a command to a URI: “A REST application to turn on and off the lights in your building will require you to design a URI for every light bulb and then you send it on/off messages. It's not like I have a single service that manages all my light bulbs. It's a very different approach to designing a system. And it's going to be really hard for developers to get their hands around it.”
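Manes' light-bulb example maps naturally onto resource URIs with a uniform interface. The toy request handler below sketches the idea; the URI scheme and the handler itself are hypothetical, not part of any real API.

```python
# Sketch of the REST approach Manes describes: every light bulb is a
# resource with its own URI, and state changes are uniform PUTs to that
# URI. There is no central "light bulb service" operation to invoke.

bulbs = {}   # resource state, keyed by URI

def handle(method, uri, body=None):
    """A toy request handler with a uniform interface (GET/PUT)."""
    if method == "PUT":
        bulbs[uri] = body                 # e.g. {"state": "on"}
        return 200, bulbs[uri]
    if method == "GET":
        if uri not in bulbs:
            return 404, None
        return 200, bulbs[uri]
    return 405, None                      # method not allowed

# Turning on one bulb means addressing its URI, not calling a method
# on a central service object:
handle("PUT", "/buildings/hq/floors/2/bulbs/17", {"state": "on"})
status, rep = handle("GET", "/buildings/hq/floors/2/bulbs/17")
print(status, rep["state"])   # 200 on
```

The design burden Manes points at is visible here: the modeling effort goes into the URI space (one resource per bulb), not into an operation catalog on a single service.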
Email is the most popular form of asynchronous, distributed interaction. It is simple to use and provides knowledge workers with complete flexibility to conduct their work. Despite the pervasiveness of email in contemporary office work, the concept has its disadvantages: there is no process, no visibility, no control, no goal and limited accountability. And as we all know from our own experience, email can be overwhelming. It is not unusual to spend as much as two hours every day just sorting and reading email, not to speak of responding to it. According to a study, 43% of people have actually fallen ill because of email-induced stress. Various collaboration tools such as knowledge management, content management, groupware or online conference applications attempt to bring some structure into human interactions. They bring knowledge workers to the same centralized information repository, facilitating information access and communication, but thereby the tools also exacerbate the problem of information overload. People have to run faster and faster just to stand still. The biggest challenge is not finding information, but keeping up with it. The scarcest resource is no longer storage or bandwidth but human attention. In order to cope with the white water of information rushing through our daily lives, we need new means to organize information temporally (e.g. news feeds) and to filter the relevant information (e.g. semantic tagging, collaborative filtering). To this end, a proliferation of social networks and social media sharing services is harnessing the power of collective intelligence, but these services are also, for their part, contributing to what Keith Harrison-Broninski calls network overload: the increasing volume of human interactions overall. As all the world is a project and we are moving from the Information Age to the Process Age, people are expected to participate in an increasing number of collaborations.
To deal with the network overload, communication in collaborative activities must be structured and goal-driven. People need to understand the process context of their interactions: their own capabilities, roles and responsibilities as well as those of others. Traditional Business Process Management tools address static, structured processes that account for approximately one fifth of all business processes, but they fail to address the remaining 80% of ad-hoc, dynamic tasks that knowledge workers perform. Human Interaction Management (HIM), or Dynamic BPM, extends process management to these dynamic collaborations, in which the process takes shape as it unfolds. The first tools are emerging in this space, and large vendors are expected to follow. However, dynamic collaborations are very different from workflows and structured collaborations, and as Jon Pyke points out, extant BPM tools do not readily lend themselves to knowledge work: “I doubt there are many BPM products on the market today which will be able to meet this seismic shift in requirements - certainly those that rely on BPEL and SOA won’t; what’s more, any that have been in the market for longer than five years will need radical surgery to meet the coming challenge.”
Workflow Management (WfM) is used to manage the class of controlled processes. These processes take place within the structure of the enterprise under a single control. Formally, workflow processes are based on Petri nets, which have static flow paths. The behavior of these processes is scripted: every conceivable flow path needs to be imperatively determined. Thereby, the approach is not suitable for complex collaborations with varying contingencies. The Business Process Management (BPM) approach transcends workflow management by further managing the class of coordinated processes. These processes include the organization aspect of the enterprise by addressing the coordination between multiple control realms. Concurrency is based on message passing between natively parallel threads of execution. The behavior of these processes is fluid: a declarative choreography specifies the boundaries for all feasible flow paths. Thereby, the approach is suitable for mechanistic, structured collaborative processes with a normative coordination contract. However, it does not address irregular collaborations, whose contract dynamically changes in the course of the process. Human Interaction Management (HIM) addresses the previous two classes of processes as well as the class of contracted processes. These processes allow the organization of the enterprise to change by enabling renegotiation of the coordination contract within the process. HIM is also based on pi-calculus, and the behavior of contracted processes is essentially mobile: the network “wiring” between the participants can change due to channel passing. The approach is suitable for managing irregular collaborations, specifically human interactions, in which the process dynamically evolves as it is executed.
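The pi-calculus notion of mobility, in which channels are passed over channels so that the “wiring” between participants changes at run time, can be sketched with ordinary queues. This is a toy illustration of the concept, not a HIM implementation.

```python
# Sketch of pi-calculus style mobility: a channel (here, a Queue) is
# itself sent as a message over another channel, so the receiver acquires
# a new communication link at run time and the process "wiring" changes.

from queue import Queue

def broker(control):
    """Hands a fresh private channel to whoever asks on the control channel."""
    reply_to = control.get()       # receive the requester's own channel
    private = Queue()              # create a new channel...
    reply_to.put(private)          # ...and pass the channel itself as a message
    private.put("hello over the new link")

control = Queue()
me = Queue()

control.put(me)                    # request: send my own channel to the broker
broker(control)

new_link = me.get()                # a channel received over a channel
print(new_link.get())              # hello over the new link
```

Before the exchange, the requester could only talk to the broker; afterwards it holds a private link the broker created, which is precisely the kind of dynamic rewiring that static Petri-net flow paths cannot express.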
In this figure, the three IT approaches to process management are compared. Workflow Management addresses the controlled process in the form of a workflow. An analogous concept in the BPM approach is orchestration, which specifies the private process of a process participant in the overall business process. The public process governing the message exchange between these participants is specified by choreography, which addresses the coordinated process. Coarsely corresponding concepts for the controlled and coordinated process in Human Interaction Management are Role and Story, respectively, yet these are much more flexible and adaptive than in WfM/BPM. A HIM process is known as a Story, and includes both Roles representing process participants and Interactions representing their channels of communication. Further, a HIM Agreement specifies the means by which a specific type of consensus about next steps is arrived at in contracted processes.
HIM has already been suggested as The Fourth Wave of process automation. By addressing the human-to-human interaction and bringing the collaboration tools into a process context, it promises to revolutionize the support for collaborative work. The Fourth Wave will expand the coverage of IT to the class of contracted processes – processes that are dynamic, mobile and address irregular collaborations. Thereby, it extends process automation to the strategic level that has traditionally not leveraged BPM technologies to support its processes. Specifically, the process of renegotiating the coordination between lower level processes can be facilitated by the next generation process management system.
Cliff divers must carefully time their dives so that they hit the water when the wave is rolling in. Otherwise, the diver will hit the rocks on the bottom and be injured or killed. In the aftermath of The Third Wave, the most sophisticated early adopters of BPM are already looking forward to their next jump at the time the next wave rolls in.
Not only does the Internet represent an evolutionary shift in technology, but, more importantly, it brings about changes to the very way in which people work. The Web brings people together in virtual communities, leverages their collective intelligence, harnesses their participation in value generation and bridges the chasm that in the last few decades has opened between buyers and sellers. In the wake of Web 2.0, companies face new challenges: reaching the attention of the end consumer, taking part in the “naked conversations” of the Web and aligning their business models accordingly. A number of new phenomena that have a significant impact on business have emerged in the last few years. Three of them are discussed below: Software as a Service, The Long Tail and Crowdsourcing.
Software as a Service (SaaS) is a software distribution model where an application is hosted as a service provided to customers over a network, typically the Internet. SaaS is based on a multitenant architecture, in which there is only one code base for the software, used by all customers. The software might be configurable by users to their individual needs, but the code itself is common to all and not customizable for any individual customer. From the customer's point of view, SaaS provides faster implementation by eliminating the need to install and run the application on the customer's own computer, lower upfront expense of software purchases through on-demand pricing, easier access to current technology because of the single code base, and also fewer bugs for the same reason. However, customers relinquish control over software versions and changing requirements, and are tied to the continuous expense of using the service. From the software vendor's point of view, SaaS provides a continuous revenue stream and better protection of its intellectual property. Application areas such as CRM, HR, video conferencing, ITSM, accounting, web analytics, web content management and e-mail readily lend themselves to the SaaS model, when the customer has a relatively isolated computing need but does not want to deploy the software itself. There are three reasons to avoid the SaaS model: 1. the application is at the very core of the business, e.g. ERP or BI; such applications are typically highly customized to the company's unique core processes; 2. the application is so critical to operations that the company must own it, for instance if even a short downtime can cause a major business disruption; 3. the application has a high number of integration points to the rest of the environment, so that managing software patches may pose a challenge.
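The multitenancy principle described above, one shared code base with per-tenant configuration and data partitioning, can be sketched in a few lines. The class and parameter names are hypothetical, chosen only to make the design choice concrete.

```python
# Sketch of a multitenant SaaS design: a single shared code path, with
# tenant data and configuration partitioned by a tenant key. Tenants get
# different behavior only through configuration, never custom code.

class SaaSApp:
    def __init__(self):
        self.config = {}   # per-tenant configuration
        self.data = {}     # per-tenant data partition

    def configure(self, tenant, **options):
        self.config[tenant] = options
        self.data.setdefault(tenant, [])

    def add_record(self, tenant, record):
        # The same code serves every tenant; only the config varies.
        limit = self.config[tenant].get("max_records", 100)
        if len(self.data[tenant]) >= limit:
            raise RuntimeError("quota exceeded for %s" % tenant)
        self.data[tenant].append(record)

app = SaaSApp()                          # one code base for all customers
app.configure("acme", max_records=2)     # tenants differ only in config
app.configure("globex")
app.add_record("acme", {"id": 1})
app.add_record("globex", {"id": 1})
print(len(app.data["acme"]), len(app.data["globex"]))   # 1 1
```

Because every customer runs the same code, a bug fixed once is fixed for everyone, which is the source of the “fewer bugs” and “current technology” benefits noted above.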
Gartner estimates that 5% of all business software spending was for SaaS applications in 2005 and expects this figure to grow to 25% by 2011.
The Long Tail is a phrase first coined by Chris Anderson in his October 2004 Wired magazine article. It refers to the niche strategy of businesses such as Amazon.com of selling a large number of unique items in relatively small quantities. A market with a high freedom of choice will create a certain degree of inequality by favoring the upper 20% of the items (the “head”) over the other 80% (the “long tail”). Traditionally, brick-and-mortar businesses have been constrained to selling large volumes of a reduced number of popular items. In the wake of the Internet, however, distribution and inventory are of virtually zero cost, and hard-to-find items of digital content can be sold to the “long tail” in small volumes profitably.
The word 'crowdsourcing' was first coined by Jeff Howe in his June 2006 Wired magazine article. It refers to a phenomenon in which an overwhelming task is broken up into little chunks and distributed to a large number of people to make it feasible. The pervasiveness of the Internet has catalyzed the phenomenon. In crowdsourcing, tasks such as developing a new technology, designing a product, predicting an outcome or analyzing large amounts of data are outsourced to an undefined, generally large group of people. The “wisdom of crowds” is harnessed at comparatively little cost, while the labor of the participants is compensated or rewarded with kudos or intellectual satisfaction. Crowdsourcing can generally be broken down into three categories: 1. creation (e.g. Wikipedia), 2. prediction (e.g. Marketocracy) and 3. organization (e.g. Google).
Closely related to crowdsourcing are prediction markets -- speculative markets created for the purpose of making predictions. They are betting exchanges, where the current market prices of traded assets can be interpreted as predictions of the probability of an event or the expected value of a parameter. Companies are increasingly using prediction markets as vehicles to forecast sales (e.g. HP), manage manufacturing capacity (e.g. Intel), generate new business ideas (e.g. GE), and so on.
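The price-as-probability reading works as follows: for a binary contract that pays a fixed amount if the event occurs, the trading price divided by the payoff is the market's implied probability of the event. The figures below are hypothetical.

```python
# Sketch of the price-probability relationship in a binary prediction
# market: a contract pays `payoff` if the event occurs and 0 otherwise,
# so its fair price is probability * payoff. Figures are hypothetical.

def implied_probability(price, payoff=100):
    """Market's implied probability from the current contract price."""
    return price / payoff

def fair_price(prob, payoff=100):
    """Expected value of the contract given a probability estimate."""
    return prob * payoff

p = implied_probability(62)   # a contract paying 100 trades at 62
print(p)                      # 0.62
print(fair_price(p))
```

A trader who believes the true probability is higher than 0.62 buys, and one who believes it is lower sells; the equilibrium price thereby aggregates the crowd's dispersed information.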
[Figure: IT Approaches to Process Management, comparing WfM, BPM and HIM in terms of inputs, outputs, activities, collaboration and contract.]
[Figure: Evolution of Process Management. Workflow Management addresses the “controlled process” (workflow); Business Process Management addresses the “coordinated process” (orchestration, choreography); Human Interaction Management addresses the “contracted process” (Role, Story, Agreement).]