Deloitte's Cloud Perspectives

A collection of working papers: robustness of cloud services, next-generation data center management, demystifying the cloud, and IT pain points.

Transcript

  • 1. Thomas B. Winans and John Seely Brown. Cloud computing: A collection of working papers.
  • 2. Cloud Computing frequently is taken to be a term that simply renamescommon technologies and techniques that we have come to know in IT.It may be interpreted to mean data center hosting and then subsequentlydismissed without catching the improvements to hosting called utilitycomputing that permit near realtime, policy-based control of computingresources. Or it may be interpreted to mean only data center hostingrather than understood to be the significant shift in Internet applicationarchitecture that it is.Perhaps it is the name. Certainly it is more nebulous than mnemonic, ifyou’ll pardon the poor pun. We happen to think so too. We’d rather usethe term service grid, frankly, but that name also has its problems. Thefact is that cloud and service grid computing are paradigmatically differentfrom their common interpretations, and their use can shed light on howinternet architectures are constructed and managed.Cloud computing represents a different way to architect and remotelymanage computing resources. One has only to establish an accountwith Microsoft or Amazon or Google to begin building and deployingapplication systems into a cloud. These systems can be, but certainlyare not restricted to being, simplistic. They can be web applications thatrequire only http services. They might require a relational database. Theymight require web service infrastructure and message queues. There mightbe need to interoperate with CRM or e-commerce application services,necessitating construction of a custom technology stack to deploy intothe cloud if these services are not already provided there. They mightrequire the use of new types of persistent storage that might never haveto be replicated because the new storage technologies build in requiredreliability. They might require the remote hosting and use of custom or3rd party software systems. And they might require the capability toprogrammatically increase or decrease computing resources as a functionof business intelligence about resource demand using virtualization. Whilenot all of these capabilities exist in today’s clouds, nor are all that do existfully automated, a good portion of them can be provisioned.Are the services that are provided by the cloud robust in theenterprise sense?Absolutely … especially if you mean the enterprise as we know it today.While there are important security, privacy and regulatory issues thatenterprises need to sort through before full migration to the cloud, andcloud vendors need to strengthen cloud capabilities in these areas beforeenterprise applications can be effectively hosted in the cloud, there arebenefits that cloud technologies do offer today that can be leveraged inthe deployment of many enterprise applications. Further, cloud vendorsare likely to offer virtual private cloud functionality that quite possibly will(at least temporarily) compensate for current deficiencies as cloud vendorswork to address them in a more strategic and long-term way.Migration to the cloud will not be immediate because of issues notedabove, and because enterprises need to approach migration in astaged fashion, just as they would undertake any significant technologytransition. The first stage of migration is to determine what enterpriseinfrastructure and applications can be reliably rearchitected using cloudcomputing technologies today to gain experience with a cloud-orientedway of organizing and accessing digital technology. 
This stage may include development of a migration path that progressively transitions more of the enterprise infrastructure and applications to cloud providers as they evolve more robust services that can support a broader range of enterprise IT activities. The key goal of this stage is to define the roadmap to replicate what is available on premise today using cloud technologies (public or private) where this makes sense, and to define fundamentals that will guide future architecture efforts. The second stage begins a period in which explicitly policy-based architectures that help to support agility and innovation are designed. The third stage is the period in which implementation of these fundamentally new architectures – that are designed to support scalable networks of relationships across large numbers of very diverse and independent entities (i.e., possibly leveraging a more fully developed service grid) – takes place.
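The papers above note that cloud-hosted systems might programmatically increase or decrease computing resources as a function of business intelligence about resource demand. A minimal Java sketch of that idea follows; CloudProvider and DemandMetrics are hypothetical interfaces standing in for a vendor's provisioning API and a demand-intelligence feed, not any real product's SDK.

```java
// Hypothetical sketch of policy-driven elastic scaling as described above.
// CloudProvider and DemandMetrics are illustrative interfaces, not a vendor API.
public class ElasticScaler {

    /** Illustrative handle onto a cloud vendor's provisioning API. */
    interface CloudProvider {
        int runningInstances(String serviceId);
        void startInstances(String serviceId, int count);
        void stopInstances(String serviceId, int count);
    }

    /** Illustrative source of business intelligence about demand. */
    interface DemandMetrics {
        double requestsPerSecond(String serviceId);
    }

    private final CloudProvider cloud;
    private final DemandMetrics metrics;
    private final double requestsPerInstance; // assumed capacity of one instance

    public ElasticScaler(CloudProvider cloud, DemandMetrics metrics, double requestsPerInstance) {
        this.cloud = cloud;
        this.metrics = metrics;
        this.requestsPerInstance = requestsPerInstance;
    }

    /** Adjusts instance count so provisioned capacity tracks observed demand. */
    public void reconcile(String serviceId) {
        int running = cloud.runningInstances(serviceId);
        int needed = (int) Math.ceil(metrics.requestsPerSecond(serviceId) / requestsPerInstance);
        needed = Math.max(1, needed); // never scale to zero
        if (needed > running) {
            cloud.startInstances(serviceId, needed - running);
        } else if (needed < running) {
            cloud.stopInstances(serviceId, running - needed);
        }
    }
}
```

In practice such a reconciliation step would run on a schedule or in response to monitoring events, and the scaling decision would be constrained by the externalized policies discussed later in these papers.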
  • 3. It is critical to give explicit attention to architecture when preparing tomigrate to the cloud, since this represents an opportunity for corporations torearchitect themselves as next generation enterprises. These globalized anddistributed enterprises must scale process and practice networks (ultimatelycomprising an entire ecosystem) to include thousands to tens of thousandsof members, with the number of users increasing to the millions. This typeof scale requires an architecture and interoperability strategy modulated byharmonized technology and business policies to scale business elastically.Widespread adoption of clouds as a computing platform will requireinteroperability between clouds and management of resources in one cloudby another. Since current-day architectures are not structured to externalizepolicy, the typical architecture fundamentals of applications that enterprisesdeploy must be modified to effectively use and exploit new cloud capabilities.In this context, we urge executives to develop more explicit cloud computingstrategies based on a more explicit and longer-term view of the likelytrajectories of cloud computing evolution. There are undoubtedly compellingbenefits to participating in clouds today. But the real power of cloudcomputing platforms stems from the potential over time to re-think andre-design IT architectures at a more fundamental level. Companies that gainearly experience with this new infrastructure will be in the best position toharness these new architectural approaches to re-shape the broader businesslandscape.To seed this effort, we cull from various candidate definitions for both cloudand service grid computing a set of concepts to be used as a thoughtfulframework of discussion for what happens next in Internet-based computing.We present, here, a set of three papers that discuss:Characteristics of what we believe to be next generation architectures that willsupport substantive changes in global enterprise constructs and operations;Transformation from existing to next generation architectures to simplify thearchitectures and better align them with the businesses they enable, andprovide the means to externalize and manage policy across all architecturelayers; andPain points that might be eliminated altogether by migration to nextgeneration architectures.We understand that cloud and service grid computing, in their present state,do not meet all distributed computing or enterprise needs. However, theydo meet many of them in a way that will provide a smooth transition towhatever next generation distributed computing becomes, and they alreadyare significantly helpful in modulating the technology changes that enterprisesface today. The rapid pace at which cloud vendors are systematizing theirplatforms and attracting stalwart industry supporters and users confirmsthat ecosystems are forming that are based upon the capabilities that cloudcomputing enables. The speed with which they are forming strongly suggeststhat cloud computing will not only meet many of the needs of enterprisecomputing as we have come to know it, but also could form the digitalplatform for a shaping strategy guiding next generation enterprises in theirmigration to and participation in such ecosystems. Hence our optimism ofthings to come, and our contribution to a discussion that we hope readers willfind helpful.Cloud computing A collection of working papers ii
  • 4. ForewordForeword by Tomi Miller and George Collins,Deloitte Consulting LLPThe first “Cloud Computing” conversation we have with our clients oftenstarts with the question “What exactly is the Cloud?”, but becomes muchmore interesting when we move on to “What does (or should) the Cloudmean to my company?” These questions lead to rich discussions aboutwhat cloud computing promises to become as a foundational element inglobal enterprise computing and the approach they should take withintheir own firms. Most of our clients are exploring and/or experimentingwith some aspect of the cloud today. What they seek is a roadmap thatallows them to capitalize on the operational benefits of current cloudofferings, while establishing a migration path towards a business andarchitectural vision for the role cloud computing will play in their future.Deloitte’s Center for the Edge has spent the past year combining extensiveresearch and industry insights to explore the topics of cloud computingand next-generation web services. The following set of papers reflect thiswork from different perspectives: business drivers, architectural models,and transformation strategies. They outline a vision for the role of thecloud, describing how it can evolve and extend todays service-orientedarchitectures into business platforms for the next-generation economy.And so we ask the questions: is cloud computing really such arevolutionary concept? Or is it just a new name that describes a logicalprogression in the evolution of computing, towards the vision of a trulyservice-oriented business ecosystem? What are the key considerations forcompanies looking to leverage the cloud today, and what do they need toconsider to effectively operate as the cloud economy emerges?Certainly cloud computing can bring about strategic, transformational, andeven revolutionary benefits fundamental to future enterprise computing,but it also offers immediate and pragmatic opportunities to improveefficiencies today while cost effectively and systematically setting the stagefor strategic change. And in many cases, the technology supporting cloudofferings serves to facilitate the design of a migration path that reducesinitial investment and accelerates returns from each stage.For organizations with significant investment in traditional software andhardware infrastructure, migration to the cloud will occur systematicallyand over time, as with any other significant technology transition. Forother less-constrained organizations or those with infrastructure nearingend-of-life, technology re-architectures and adoption may be moreimmediate.In either case, Deloitte’s experiences align with several key themescontained in the papers to follow as we assist our clients in framinga practical understanding of what the cloud means to them, and theconsiderations that must be made when constructing a well conceivedadoption strategy:Next-generation data center management - the business casebehind data center strategies is changing. Data centers represent avery logical starting point for a new consumer of cloud services, withrelatively low risk and potentially significant cost savings and efficiencygains. Transitioning existing systems to the cloud offers opportunity tooutsource non-core functions for most businesses. 
At the same time,it provides experience with a cloud-oriented way of organizing andaccessing digital technology that is necessary to build out a roadmap forsensible cloud adoption.Architectural planning, simplification, and transformation – MovingIT platforms to the clouds represents the next logical step in a service-oriented world, and Build v. Buy v. Lease is the new decision frameworkin service selection within this context. Understanding the level of cloudand internal company maturity will guide decisions such as How andWhen to leverage cloud services to support core as well as non-corebusiness capabilities, and how software assets should interoperateto provision business functionality. It is also critical in this step to giveexplicit focus to policy-based architectures that support agility andinnovation.Policy-oriented business and risk management – Policy within andacross organization boundaries has traditionally been embedded withinenterprise IT platforms and applications. However scaling businessesglobally will require implementing new ways to combine and harmonizepolicies within and across external process networks and value chains.It will become increasingly critical for companies to establish clear andexplicit definitions of governance, policy (regulatory, security, privacy,etc) and SLAs if they are to operate effectively with diverse entities in thecloud.Cloud management – To conduct business within a cloud (recognizingwhat is available today), it is important for cloud consumers andproviders to align on graduated SLAs and corresponding pricing models.Maturing cloud capabilities into more advanced offerings, such asvirtual supply chains, requires support for fully abstracted, policy-driveninteractions across clouds. This is a big jump, and it will become a majorchallenge for the cloud providers to adequately model, expose andextend policies in order to provide integrated services across distributedand heterogeneous business processes and infrastructure. The dataassociated with these business processes and infrastructure will needto be managed appropriately to address and mitigate various risksfrom a security, privacy, and regulatory compliance perspective. This isparticularly important as intellectual property, customer, employee, andbusiness partner data flows across clouds and along virtual supply chains.Cloud computing A collection of working papers iii
  • 5. Cloud computing has captured the attention of today’s CIOs, offering huge potential for more flexible, readily scalable and cost-effective IT operations. But the real power of cloud computing platforms is the potential over time to fundamentally re-think and re-design enterprise business and IT architectures. There is significant anticipation for emergent innovation and expanded capabilities of a cloud-based business environment, even though it is clear that today’s cloud offerings are only scratching the surface of this potential. As with all technology innovation, the biggest gain will be realized by those who first establish business practices to leverage and differentiate this vision, and Deloitte is excited to be actively involved in these activities. Deloitte is very proud to host and sponsor the Center for the Edge, which focuses its attention on identifying, understanding and applying emerging innovations at institutional, market, geographical, and generational “edges” in global business contexts. We thank John Seely Brown, John Hagel, and Tom Winans for their thought leadership and commitment to stimulating the thinking required to move forward with confidence in the evolving world of cloud computing.
  • 6. Contents
Foreword iii
Foreword by Tomi Miller and George Collins, Deloitte Consulting LLP iii
Demystifying clouds — Exploring cloud and service grid architectures 2
Introduction 2
An autonomic frame of mind 2
Characteristics of an autonomic service architecture 5
Architecture style 5
External user and access control management 6
Interaction container 6
Externalized policy management/policy engine 7
What is policy? 9
Utility computing 9
Cloud composition 10
Service grid — the benefit after the autonomic endpoint 11
Service grid deployment architecture 11
Container permeability 12
Cloud vendors and vendor lock-in 12
Virtual organizations and cloud computing 12
Concluding remarks 14
Moving information technology platforms to the clouds — Insights into IT platform architecture transformation 15
Introduction 15
Going-forward assumptions and disclaimers 15
Outside-in and inside-out architecture styles 15
Clouds and service grids 16
Architecture transformation 17
Transforming an existing architecture 17
Addressing architecture layering and partitioning 18
Why go to such trouble? 18
Externalizing policy 19
Replacing application functionality with (composite) services 20
Starting from scratch — maybe easier to do, but sometimes hard to sell 21
Concluding remarks 21
Motivation to leverage cloud and service grid technologies — Pain points that clouds and service grids address 23
Introduction 23
IT pain points 23
Pain point: data center management 23
Pain point: architecture transformation/evolution (the Brownfield vs. Greenfield Conundrum) 24
Pain point: policy-based management of IT platforms 25
Concluding remarks: To the 21st century and beyond 26
About the authors 27
Contact us
  • 7. IntroductionCloud Computing is in vogue. But what is it? Is it just the same thing asoutsourcing the hosting of Web applications? Why might it be useful andto whom? How does it change the future of enterprise architectures? Howmight clouds form the backbone of twenty-first-century ecosystems, virtualorganizations and, for a particular example, healthcare systems that aretruly open, scalable, heterogeneous and capable of supporting the players/providers both big and small? In the past, IT architectures took aim at theenterprise as their endpoint. Perhaps now we must radically raise the barby implementing architectures capable of supporting entire ecosystemsand, in so doing, enable these architectures to scale both downward to anenterprise architecture as well as upward and outward.We see cloud computing offerings today that are suitable to hostenterprise architectures. But while these offerings provide clear benefit tocorporations by providing capabilities complementary to what they have,the fact that they can help to elastically scale enterprise architecturesshould not be understood to also mean that simply scaling in this waywill meet twenty-first-century computing requirements. The architecturerequirements of large platforms like social networks are radically differentfrom the requirements of a healthcare platform in which geographicallyand corporately distributed care providers, medical devices, patients,insurance providers, clinics, coders, and billing staff contribute informationto patient charts according to care programs, quality of service and HIPAAconstraints. And the requirements for both of these are very different thanthose that provision straight-through processing services common in thefinancial services industry. Clouds will have to accommodate differences inarchitecture requirements like those implied here, as well as those relatingto characteristics we subsequently discuss.In this paper, we want to revisit autonomic computing, which defines aset of architectural characteristics to manage systems where complexityis increasing but must be managed without increasing costs or the size ofthe management team, where a system must be quickly adaptable to newtechnologies integrated to it, and where a system must be extensible fromwithin a corporation out to the broader ecosystem and vice versa. Theprimary goal of autonomic computing is that “systems manage themselvesaccording to an administrator’s goals. New components integrate …effortlessly ...”i. Autonomic computing per se may have been viewednegatively in the past years — possibly due to its biological metaphoror the AI or magic-happens-here feel of most autonomic initiatives. Butinnovations in cloud computing in the areas of virtualization and finer-grained, container-based management interfaces, as well as those inhardware and software, are demonstrating that the goals of autonomiccomputing can be realized to a practical degree, and that they couldbe useful in developing cloud architectures capable of sustaining andsupporting ecosystem-scaled use.Taking an autonomic approach permits us to identify core components ofan autonomic computing architecture that Cloud Computing instantiationshave thus far placed little emphasis on. 
We identify technical characteristicsbelow that must not be overlooked in future architectures, and weelaborate them more fully later in this paper:An architecture style (or styles) that should be used when implementingcloud-based servicesExternal user and access control management that enables roles andrelated responsibilities that serve as interface definitions that controlaccess to and orchestrate across business functionalityAn Interaction Container that encapsulates the infrastructure services andpolicy management necessary to provision interactionsAn externalized policy management engine that ensures that interactionsconform to regulatory, business partner, and infrastructure policyconstraintsUtility Computing capabilities necessary to manage and scale cloud-oriented platformsAn autonomic frame of mindSince a widely accepted industry definition of Cloud Computing — beyonda relationship to the Internet and Internet technologies — does not exist atpresent, we see the term used to mean hosting of hardware in an externaldata center (sometimes called infrastructure as a service), utility computing(which packages computing resources so they can be used as a utility inan always-on, metered, and elastically scalable way), platform services(sometimes called middleware as a service), and application hosting(sometimes called software or applications as a service). All of these waysseem — in some way — right, but they are not helpful to understandthe topology of a cloud, the impact that Cloud Computing will have ondeployment of business platforms, whether or not the business systemarchitecture being deployed in commercial or private data centers todaywill be effective in a cloud, or what architectures should be implementedfor cloud-based computing. Neither do they even begin to get at thechallenge of managing very large and dynamic organizations, called virtualorganizations (to be defined later in this paper), that reorient thinkingabout the need for an architecture to scale massively, and the need tomake parts of an architecture public that, to this point, have been keptprivate.To satisfy the requirements of next century computing, cloud computingwill need to mean more than just externalized data centers and hostingmodels. Although architectures that we deploy in data centers todayshould be able to run in a cloud, simply moving them into a cloud stopswell short of what one might hope that Cloud Computing will come tomean. In fact, tackling global-scaled collaboration and trading partnerCloud computing A collection of working papers 2Demystifying clouds — Exploringcloud and service grid architecturesCloud computing A collection of working papers 2
  • 8. network problems in government, military, scientific, and business contextswill require more than what current architectures can readily support. Forexample:It will be necessary to rapidly set up a temporary collaboration networkenabling network members to securely interact online, where interactioncould imply interoperability with back office systems as well as human-oriented exchanges — all in a matter of hours. Examples that come tomind include emergency medical scenarios, global supply chains andother business process networks. Policies defining infrastructure andbusiness constraints will be varied, so policy must be external to, andmust interact with, deployed functionality. These examples also imply theneed for interoperability between public and private clouds.Business interactions have the potential to become more complex thanpersonal transactions. Because they are likely to be formed as compositeservices, and because services on which they depend may be provisionedin multiple clouds, the ability to provision and uniformly managecomposite cloud services will be required, as will be the ability to ensurethat these services satisfy specified business policy constraints.The way that users and access control are managed in typicalapplications today is no longer flexible enough to express roles andresponsibilities that people will play in next-generation businessinteractions. Roles will be played by people outside of or acrosscorporate boundaries in an online context just as frequently as they areinside. Access control and the management of roles and responsibilitiesmust be externalized from business functionality so that it becomesmore feasible to composite functional behavior into distributed service-oriented applications that can be governed by externalized policy.These considerations suggest that clouds will have to have at least thefollowing characteristicsii:Clouds should be uniquely identifiable so that they can be individuallymanaged even when combined with other clouds. This will be necessaryto distinguish and harmonize cloud business and infrastructure policiesin force.A cloud should be dynamically configurable: configuration should beautomatable in varying and unpredictable, possibly even event-driven,conditions.Systems management technologies for clouds must integrate constraintson business with constraints on infrastructure to make them manageablein aggregate.A cloud should be able to dynamically provision itself and optimize its-own construction and resource consumption over time.A cloud must be able to recover from routine and extraordinary-events that might cause some or all of its parts to malfunction.A cloud must be aware of the contexts in which it is used so that-cloud contents can behave accordingly. For example, if cloudsare composited, policy will have to be harmonized across cloudboundaries; when in multitenant mode, service level agreements maybe used to determine priority access to physical resources. Applicationplatforms today are unaware of their usage context, but businessfunctionality in next-generation platforms will have to be managedwith context in mind.A cloud must be secure, and it must be able to secure itself.Cloud computing A collection of working papers 3
  • 9. These coarse-grained characteristics, sometimes described as autonomic computing, can be represented in the form of finer-grained architecture driversthat are useful in characterizing steps toward an autonomic computing architecture. Cloud Computing offerings that are available today share many ofthe same drivers that we have organized into Systems and Application Management Drivers in the figure below.Numbered circles in the graphic above denote drivers that are listed below:0. Architecture state: no systems management1. Systems and resources must be identifiable2. System and resources must be manageable3. Policy-driven secured access to the system and managed resources mustbe provided4. System must reallocate managed resources on failures as a function ofpolicy5. System must reallocate managed resources on various system-levelconditions by policy6. System must be managed lights-out in a single data center context7. Systems management capability must scale across clouds of the sametype8. Systems management capability must scale across clouds of differenttypes; these clouds must be managed uniformly while maintainingseparate cloud identities9. System must reallocate managed resources on various system-levelconditions as a function of policy to accommodate real-time andbusiness-oriented usage patterns10. Systems management policies are harmonized across cloud boundaries11. It must be possible to integrate management policies of different clouds12. Monolithic applications and traditional application integrations exist/aresufficient13. Application platform must be service oriented14. Applications are replaced with business services15. Business services have secured access16. An Interaction Container1must be used as application container in asingle-tenant environment17. Policies must be consolidated and managed using a single (possiblyfederated) policy engine18. System must reallocate managed business services on various business-level conditions by policy to accommodate real-time/batch usagepatterns19. An Interaction Container must be used as application container in amultitenant environment20. Business service and systems management policies are integrated21. Architecture state: positioned as an autonomic architecture platformfor virtual organization-oriented application systems22. Architecture state: additional structural and business constraintspositioning architecture platform as a service grid1Defined in the next sectionCloud computing A collection of working papers 4
  • 10. The graphic shows two paths toward autonomic computing that ultimatelyconverge at an architecture point that could support business ecosystemsand emergent and fluid virtual organizations:The first path, Systems Management Drivers, begins with no systemsmanagement, and ends with a systems management capability thatis policy driven, and that enables automated systems managementin a cloud and harmonization of business and infrastructure policieswithin and across cloud boundaries — in both single- and multi-tenantmodes. The drivers for systems management are grouped to illustrateneeds common to basic systems management (Systems ManagementCapabilities), and needs that go beyond basic capabilities (UtilityComputing Management and Outside-In ArchitectureiiiCapabilities).The second path, Applications Management Drivers, begins withcommon monolithic corporate applications. It ends with theseapplications having been replaced with service-oriented ones, wherepolicy has been externalized so that business policies can be harmonizedwith utility management policies, where it is possible to implementend-to-end service level agreements and enforce conformance tobusiness and regulatory constraints, and where the use of businessfunctional and infrastructural components can be metered and elasticallyload balanced. At this endpoint, business services and infrastructurecan be organized into a cloud and used in both single- and multitenantmodes.Systems and Applications Management Drivers paths converge at the pointwhere it is necessary to manage both the business and the infrastructureusing common management capabilities, and where related policies mustbe harmonized.Presenting drivers on paths is sometimes risky, as such suggests a linearprogression toward implementing an ultimate architecture, or givespreference to one suggested architecture vision over another. Neither ismeant in this case. In fact, one can view how far one traverses each pathas one of architecture need over a perceived architecture maturity. Tounderscore, we make the following observations relating to commerciallyavailable cloud computing products:Cloud computing does not realize the goals of autonomic computingas they are defined currently, though combining the characteristics ofexisting clouds gets closer to this goal. This fact does not diminish theirvalue for optimizing deployments of applications in place today.Not every cloud needs to be autonomic — but there are benefits alongeach path regardless.Implementing architecture features on the Applications Management-Drivers path will lead to optimizing costs of operating andmaintaining infrastructure and business functionality that currentlyrun a business, and automating systems management, resulting inmore efficient data center management.Evolving an architecture toward Utility Computing Management-and Outside-In Architecture Capabilities will help organizationsexpand their IT systems beyond corporate boundaries. 
This supportsimplementation of more flexible partner networks and value chains,but it also can scale to serve virtual organizations.Characteristics of an autonomic service architectureAs cloud computing solutions and products are implemented, we believeit critical — especially to those being driven by their business needs upthe Systems and Applications Management Drivers curves — to carefullyconsider their need for support of the architecture characteristics that wesketched in the opening part of this paper and that we now elaborate.Architecture styleArchitecture styles define families of software systems in terms of patternsfor characterizing how architecture components interact. They definewhat types of architecture components can exist in architectures of thosestyles, and constraints on how they may be combined. They define howcomponents may be combined together for deployment. They define howunits of work are managed, e.g., whether or not they are transactional(n-phase commit). And they define how functionality that componentsprovision may be composited into higher order functionality, and how suchcan be exposed for use by human beings or other systems.The Outside-In architectural style is inherently top-down and emphasizesdecomposition to the functional level, but not lower; is service-oriented rather than application-oriented; factors out policy as a first-class architecture component that can be used to govern transparentperformance of service-related tasks; and emphasizes the ability to adaptperformance to user/business needs without having to consider theintricacies of architecture workings2.The counter style, what we call inside-out, is inherently bottom-upand takes much more of an infrastructural point of view as a startingpoint, building up to a business functional layer. Application platformsconstructed using client server, object-oriented, and 2/3/n-tier architecturestyles are those to which we apply the generalization inside-out becausethey form the basis of enterprise application architectures today, andbecause architectures of these types have limitations that requiretransformation to scale in a massive way vis-à-vis outside-in platforms (seeWeb Services 2.0 for a more detailed discussion of both Outside-In andInside-Out architecture styles).Implementation of an outside-in architecture results in better architecturelayering and factoring, and interfaces that become more business thandata oriented.2An outside-in architecture is a kind of service-oriented architecture (SOA) which is fully elaborated in Thomas Erl’s book called “Service-OrientedArchitecture: Concepts, Technology, and Design,” so we will not discuss SOA in detail in this paper.Cloud computing A collection of working papers 5
  • 11. Policy becomes more explicit and is exposed in a way that makes it easierto change it as necessary. Service orientation guides the implementation,making it more feasible to integrate and interoperate using commodityinfrastructure rather than using complex and inflexible applicationintegration middleware.As a rule, it is simpler to integrate businesses at functional levels thanat lower technology layers where implementations might vary widely.Hence we emphasize decomposition to the functional level, which oftenis dictated by standards within a market, regulatory constraints on thatmarket, or even accounting (AP/AR/GL) practices.Architecture style will be critical to orchestrating services and enablingoperability between thousands of collaborating businesses. The Li &Fung Group manages supply chains involving over 10,000 companieslocated in over 40 countries of the world. Point integration solutionsare infeasible at this scale. Similarly, attempts to integrate hundreds ofhospital patient management systems and devices into a healthcare cloud,replete with HL7 variants and new and legacy applications, would resultin the same conclusion that interoperability must be realized through theimplementation of an architecture that integrates at a business functionallevel rather than a data level.External user and access control managementUser and access control management usually is implemented within atypical enterprise application. A user is assigned one to many applicationroles, and a role names a set of privileges that correlate to use of particularapplication functionality through a graphical user interface, or throughsome programming interface. User authentication and authorization canbe integrated with corporate identity management solutions (e.g., singlesign-on solutions) that are in place to ensure that only people within acorporation or corporate partner network are permitted to use corporateapplications.But as businesses globalize and couple more fluidly and dynamically, themanagement of users and their assignments to roles and responsibilities/privileges must be implemented in a scalable fashion that supportscomposition of services into more complex service-oriented behavior.Further, it must be possible for role players to transparently change inresponse to business- and partner-related changes made over time,especially in business interactions that could be in progress over months toyears.A fundamental part of user management is identity management.There are numerous identity solutions available today from vendors likeMicrosoft, Sun Microsystems, and Oracle. The challenges facing thesesolution vendors include their ability to manage the varied ways a usercan be represented in an online context, the means to verify identityand detect and manage identity theft, the need to accommodate auditsof transactions and interactions conducted with a specific identity, andso forth. Identity Management is much larger than any single cloud orsoftware vendor, and forming a solution for the twenty-first-century iseven likely to require help from national governmentsiv.Interaction containerThe J2EE/Java EE community introduced the notion of container to theenterprise architecture community as a means to streamline the structureof thin java clients. Normally, thin-client multitiered applications wouldrequire code to implement persistence, security, transaction management,resource pool management, directory, remote object networking, andobject life cycle management services. 
The J2EE architecture introduceda container model that made this functionality available — transparently,in many cases — to business logic implemented as classes of behavior aslong as it was implemented to conform to special (e.g., bean) interfaces,freeing developers to focus on implementing business functionality and notinfrastructure — resulting in a standardized use of infrastructure services.Containers are hosted in application servers.As we move toward service orientation, there is need for an analog to anapplication server that not only manages common infrastructure servicesbut provides the infrastructure extension points for managing policy thatis harmonized across technology and business functional stacks within acloud. For the purpose of discussion here, we use the term InteractionServer to mean an architecture component that provides runtime servicesused by Interaction Containers (defined below) to manage the life cycleof multi-party business service interactions in both single- and multitenantcontexts. Runtime services can include those similar to application services(e.g., like J2EE container services), but also services to manage policy(harmonization across architecture layers, policy enforcement, and policyexceptions), Interaction life cycle management, and even specializedcollaboration services (e.g., event-based publish and subscribe capabilities,and services that bring together those people who are involved in businessinteractions).We use the term Interaction to mean a service oriented multiparty businesscollaboration. An interaction can be viewed as an orchestration of businessservices where orchestration flow (not workflow in the typical enterpriseapplication integration sense) is managed using externalized policy (pleasesee Web Services 2.0 for a more detailed discussion on this topic). AnInteraction is hosted within an Interaction Container (defined below), andorchestrates services provisioned in distributed contexts. Interaction lifecycle events are used to trigger system behavior and enforce managementpolicies and are published by the Interaction Server to subscribers.Cloud computing A collection of working papers 6
  • 12. Finally, we use the term Interaction Container as an analog to J2EE/Java EEapplication container. The Interaction Container is hosted in an InteractionServer, statically and dynamically configured to provide infrastructureand policy adjudication services that are specific to a business user’senvironment, integrated with systems management capabilities, and usedto manage one-to-many Interactions and their life cycles. An InteractionContainer essentially holds an execution context in which role players —people or systems participating in an Interaction and conforming tospecific roles (interfaces) — interact to perform their parts in a businessorchestration and manage exceptions and/or faults should they occur inthe process.An Interaction Container can be considered to be organizationally based(i.e., it can be used to manage many Interactions between a set ofparticipants/role players over time), or outcome-based (in which only oneInteraction would be performed). These two usage scenarios reflect theneed to manage Interactions in a dynamic user community, where roleplayers could change over time, and the need to manage an Interaction asa single possibly long-running business transaction.Externalized policy management/policy engineA Policy Engine harmonizes and adjudicates conflicting policies used acrossarchitecture layers. Components at all architecture layers can participate inpolicy harmonization and enforcement, which requires the following:Policy extension points must be exposed and formally declared in anypart of the architecture that must be managed.Policy management must support policy pushdown to enable extensibleand dynamic detection of policy violation and policy enforcement.It must be possible to version policy so that policy decisions made at agiven time can be reproduced.Policy exceptions should be managed in as automated a fashionas possible, but support also must be given to cases where humanjudgment and decision making may be required. Note that fault orexception can connote both system-level occurrences and domainevolution in which policy constraints valid in the past become invalid. Forexample:Inability to connect to a database is a system fault that should be-automatically handled as a software system exception.A regulatory constraint that permitted conduct of business in one way-to a certain point in time, but that no longer does due to changesin law, is a business exception that may require human judgment todetermine if completion of a business transaction according to oldlaw should be permitted.Policy embedded in application functionality is not easy to change, butfuture software systems will have to be implemented in a way that viewschange as the norm — where change results from the emergence of newmarkets, market evolution, changes in regulations and standards, fixing ofpolicy bugs, the whims of interaction participants, and maybe even theircustomers’ whims.Cloud computing A collection of working papers 7
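The Interaction Container, role, and policy engine concepts described above can be made a little more tangible with a sketch of the kinds of interfaces involved. The Java fragment below is purely illustrative; the names and signatures are ours, not a published container specification.

```java
// Illustrative sketch only: these interfaces express the Interaction Container
// concepts described above. They are hypothetical, not a standard.
import java.util.List;

/** A role is an interface contract that a participant (person or system) fulfils. */
interface Role {
    String roleName();
}

/** A multi-party, possibly long-running business collaboration. */
interface Interaction {
    String interactionId();
    void start();
    void complete();
    /** Life-cycle events are published so that policies and subscribers can react. */
    void publishEvent(String eventType, Object payload);
}

/** Adjudicates business and infrastructure policies across architecture layers. */
interface PolicyEngine {
    /** Returns true if the requested action is permitted in the given context. */
    boolean permits(String policyDomain, String action, Object context);
}

/**
 * Analog of a J2EE application container for service-oriented interactions:
 * it supplies infrastructure services, enforces externalized policy, and
 * manages the life cycle of one-to-many Interactions.
 */
interface InteractionContainer {
    /** Bind a concrete participant to a role for subsequent interactions. */
    void assignRole(Role role, Object participant);

    /** Create and begin managing a new interaction under this container's policies. */
    Interaction beginInteraction(String interactionType);

    /** Interactions currently managed by this container. */
    List<Interaction> activeInteractions();

    /** The engine used to harmonize and enforce policy for contained interactions. */
    PolicyEngine policyEngine();
}
```

A container implementation would supply the infrastructure services (persistence, security, life-cycle management) behind these interfaces and publish interaction life-cycle events to subscribers, as described above.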
  • 13. Externalizing policy highlights a significant distinction between Inside-Outand Outside-In architecture styles. Inside-Out architectures usually involvelegacy applications in which policy is embedded and thus externalizing itis — at best — very difficult. Where application policies differ in typicalcorporate environments, it becomes the responsibility of integrationmiddleware to implement policy adjudication logic that may work well toharmonize policies over small numbers of integrated systems, but this willnot generalize to manage policy in larger numbers of applications as wouldbe the case in larger value chains. The red rectangle in the figure belowidentifies where an Inside-Out architecture must transform (and simplify inthe process) to become an Outside-In architecture, making it more feasibleto externalize policy and progress toward the fully autonomic computingendpoint. Within certain communities, we refer to this transformationas an architectural Laplace Transform, noted in the graphic below nearpoints 16 and 17, which helps in solving challenging structural problemsby creating an alternative frame or point of view. But it actually representsa fundamental change of mental models that requires shifting from aninside the enterprise (Inside-Out) point of view to an external, distributedbusiness process network (Outside-In) point of view that considers theworld with the granularity inherent in the business process network3.This results in factoring out policy components such that the resultingarchitecture better accommodates the policy requirements of very largenumbers of users in a variety of combinations.3Perhaps the irony of taking an Outside-In point of view is that it results in an internal IT system which provides a unified policy-based framework forinteroperability across multiple lines of business, whether internal or external.Cloud computing A collection of working papers 8
  • 14. What is policy?The word policy could have a number of meanings as it is used inconjunction with IT architecture and systems. For example, it couldmean governance relating to software architecture development andimplementation; or it could mean operational rules and standards foradministering a deployed production system.Our use of the word connotes constraints placed upon the businessfunctionality of a business system, harmonized with constraints on theinfrastructure (hardware and software) that provisions that functionality.These constraints could include accounting rules that businesses follow,role-based access control on business functionality, corporate policy aboutthe maximum allowable hotel room rate that a nonexecutive employeecould purchase when using an online reservation service, rules aboutpeak business traffic that determine when a new virtualized image of anapplication system should be deployed, and the various infrastructuralpolicies that might give customer A preference over customer B shouldcritical resource contention require such.Policy extension points enable the means by which policy requirements areharmonized between interaction containers and the cloud environmentitself 4. They are not configuration points that are usually known inadvance of when an application execution starts and that stay constantuntil the application restarts. Rather, policy extension points are dynamicand late bound to business and infrastructural functionality, and theyprovide the means to dynamically shape execution of this functionality.The sense of the word shape is consistent with how policy is applied in thetelecom world where, for example, bandwidth might be made availableto users during particular times in the day as a function of total numberof users present. Just as policy is used in the telecom world to shape useof critical resources, policy can be used to shape execution of businessfunctionality.For example, suppose that an interaction between business partners isstarted by a partner located in a European country that legally requiresall interaction data to remain in that country, whereas this same type ofdata could be stored anywhere that the deployment platform determinesconvenient otherwise. A policy extension point on storage could beexposed to ensure that storage systems located in the appropriateEuropean country are used when required. Because policy is externalizedas described above, this policy does not imply the need for multiple codebases to realize this constraint.The example above is a simple one that one could imagine implementingat the application business layer of an enterprise architecture. Suppose,however, that this type of policy is moved from the business layer intothe network. From 2005 to date, we have seen the emergence of XMLaccelerators (e.g., IBM/Data Power, Intel, Layer 7 Technologies) thatmake such possible by bringing to application protocol managementwhat network protocol analyzers, or sniffers, bring to network protocolmanagement. These accelerators are able to inspect, transform, and routeboth XML and binary data in ways that are conscious of ecosystem andinteraction constraints, e.g., constraints like the European storage ruleabove. 
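The European data-residency example just given maps naturally onto a storage-placement policy that is consulted late, at execution time, through a policy extension point. The sketch below is a hypothetical Java illustration (none of these types come from a real cloud SDK): because the constraint is externalized, one code base can serve jurisdictions with and without residency requirements.

```java
// Illustrative sketch of the data-residency example above, expressed as a policy
// extension point on storage selection. All types here are hypothetical.
import java.util.List;

/** A policy decision point consulted at execution time rather than at deployment. */
interface StoragePlacementPolicy {
    /** Chooses where interaction data may be stored, given the interaction's context. */
    String selectRegion(InteractionContext context, List<String> availableRegions);
}

/** Minimal context carried by an interaction; real contexts would be richer. */
record InteractionContext(String initiatingCountry, boolean dataMustStayInCountry) {}

/** Default placement: honour residency constraints, otherwise take the first region offered. */
class ResidencyAwarePlacement implements StoragePlacementPolicy {
    @Override
    public String selectRegion(InteractionContext ctx, List<String> availableRegions) {
        if (ctx.dataMustStayInCountry()) {
            // Keep data in the initiating country, e.g. an EU member state with such a law.
            return availableRegions.stream()
                    .filter(region -> region.startsWith(ctx.initiatingCountry()))
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException(
                            "No storage region satisfies the residency constraint"));
        }
        return availableRegions.get(0); // platform is free to choose
    }
}
```

Swapping or versioning the policy implementation changes where data may be stored without touching the business functionality that produces it.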
Once equipment like this is aware of the business data and theworkflow context in which it is communicated, it can carry out networkingfunctions such as forwarding, security, usage analysis, traffic management,and billing, in a much more sophisticated manner in comparison totraditional networking techniques — and it can do this taking into accountpolicy constraints across an entire technology stack.Utility computingThe raison d’être of autonomic computing is the need to address thegrowing complexity of IT systems. While loosely coupling architecturecomponents makes them less brittle, it also exposes more moving partsthat also must have management and configuration extension points. Theauthors of The Vision of Autonomic Computing worded their concernsover complexity as follows:“As systems become more interconnected and diverse, architects are lessable to anticipate and design interactions among components, leavingsuch issues to be dealt with at runtime. Soon systems will become toomassive and complex for even the most skilled system integrators to install,configure, optimize, maintain, and merge. And there will be no way tomake timely, decisive responses to the rapid stream of changing andconflicting demands.”Externalization of policy goes a long way toward making it possible tocomposite clouds and manage policy compliance. But the structure of thecloud also must be addressed if we expect to manageably scale a cloud.An autonomic computing architecture calls for architecture componentsto, themselves, be autonomic. This might sound a bit far-fetched unlesswe consider that we have been solving heterogeneity problems withabstraction layers at the operating system layer for some years now, andthat this technique can be used again to manage large collections ofcomputing resources uniformly. In particular, if two clouds are autonomicand essentially support the same management interfaces, then they couldbe composited into a larger cloud while preserving the identities of theoriginal clouds. Intuitively, this simplifies scaling clouds and reconcilingpolicy differences.As we see the emergence of Cloud Computing products into the market,we see (at least) two that appear to recognize the need to compositeclouds, grids, or meshes of manageable computing resources. Some cloudinfrastructure vendors have taken an approach in which they intend todeal directly with architecture components through a software abstractionlayer. One approach taken to manage clouds is to provide a management4Policy extension points provide the way for applications and services within a container to communicate to the cloud’s centralized or federated Policy Engine.Cloud computing A collection of working papers 9
  • 15. interface to which all manageable resources, including the cloud itself,conform so that management over heterogeneous infrastructureis uniform. For example, the approach that Microsoft has takenacknowledges a need for a uniform management abstraction layer, whichit achieves by requiring that the set of manageable resources conform tointerfaces in the .NET Framework. In either case, exposing a cloud andits components through a well defined management interface enablesmanagement policies to be applied even to the contents of containerslike the interaction container discussed earlier, making it possible toharmonize policies and deliver information in context across business andinfrastructure layers.This is in contrast to elastic computing strategies that are virtualization-based, in which the contents of virtual images are not directly manageableby the elastic computing infrastructure. Amazon, with its CloudWatchmanagement dashboard and APIs, provide the ability to manageEC2-based systems using standard systems management metrics.Systems management functionality layered on top of EC2 managementcould correlate business resources to system resources through systemsmetrics, though the correlation is made outside of CloudWatch. Recentpartnerships between IBM and Amazon allow for containers filled with IBMinfrastructure to be managed using Tivoli or similar systems managementfunctionality. However, even in this case, it is important to note thatmanagement of what is in the container is distinct from managementof Amazon’s cloud unless an integration between the two is ultimatelyimplemented. We have referred earlier in this paper to the mechanismpermitting contents of a container to participate in cloud managementas policy extension points. In practice, policy extension points implementa management interface that makes the resources with which they areassociated manageable and makes it possible for these resources toparticipate in cloud management.Cloud compositionThe ability of one cloud to participate in managing another will becomecritical to scaling a cloud. It will provide a means for a private cloudto temporarily use the resources of a public cloud as part of an elasticresource capacity strategy. It also will make it possible to more immediatelyshare functionality, information, and computing resources.One real-life example of a composite cloud is Skype. While Skype maybe considered to be just a p2p application, it actually is a global meshnetwork of managed network elements (servers, routers, switches,encoders/decoders, etc.) that provisions a global VoIP network with voiceendpoints that are laptop/desktop computers or handheld devices that runSkype’s client application at the edge of the Skype cloud. When the Skypeapplication is not running on a laptop/desktop/handheld device, VoIPcalls are not conducted through it. But when the application is runningand calls can be conducted, the Skype cloud expands to use the laptop/desktop/handheld device to route traffic and manage network exceptionsif needed.A second real life example is FortiusOne’s GeoCommons (http://www.FortiusOne.com, http://www.geocommons.com). FortiusOne is developinga next-generation location intelligence platform by blending analysiscapabilities of geographic information systems (GIS) with location-based information on the Web. FortiusOne’s premise is that it can helporganizations make better location-sensitive decisions by simplifying howbusiness information is correlated to visual maps. 
The technology and datathat make up the FortiusOne platform is a combination of open sourcetechnology and data that it licenses to complement what it can get fromthe public domain.Two applications are made available at GeoCommons: Finder! is anapplication used to find, organize, and share geodata; and Maker! is anapplication used to create maps using GeoCommons and personal data. Asimple use case involving both of these tools is the upload of a named dataset into Finder! that can be linked through postal code, longitude/latitude,or some other location hook to a map, and the subsequent use of Maker!to produce a rendering of a map with the named data set superimposedonto it.FortiusOne has implemented its functionality both as Web applicationsand services (with a Web service programming interface). It makes thisfunctionality available in its own cloud, which is very similar to Amazon’sElastic Compute Cloud core.GeoCommons makes its software-as-a-service platform available througha subscription, with pricing determined by number of reports generated,data set size, and data storage requirements.For those who wish to operate in a more secure yet managed enterprisecontext, GeoCommons can be privately hosted for a customer. Thisversion of the platform includes an expanded data set and dataintegration services.And for those wishing the ultimate in data privacy who simply do nottrust on-line secure environments, FortiusOne packages its GeoCommonsfunctionality and supporting data on a linux appliance and updates dataand functionality periodically as required.The potential for two types of cloud composition can be seen in theFortiusOne offerings. First, Amazon’s Elastic Computing offering can beused should FortiusOne require additional resources beyond its currentcapacity. Second, the GeoCommons is accessible via a Web serviceprogramming interface, which makes it possible to invoke the servicesprovisioned in the FortiusOne cloud from another cloud. Invoking servicesof one cloud by another does not require cloud composition, but a need tomanage multiple clouds with the same policy set could.Cloud computing A collection of working papers 10
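A recurring idea in the discussion above is that a cloud and the resources it contains should conform to a common management interface, so that clouds can be composited while each retains its identity and policy can be pushed down recursively. The following Java sketch is a hypothetical rendering of that idea, not any vendor's API.

```java
// Illustrative sketch of the "uniform management interface" idea described above.
// ManagedResource and Cloud are hypothetical types; no vendor API is implied.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Anything that can be identified, observed, and controlled by management policy. */
interface ManagedResource {
    String resourceId();
    String status();                      // e.g. "RUNNING", "DEGRADED"
    void applyPolicy(String policyDocument);
}

/**
 * A cloud is itself a managed resource, so a cloud can be composited into a larger
 * cloud (or managed by another cloud) while keeping its own identity.
 */
class Cloud implements ManagedResource {
    private final String cloudId;
    private final List<ManagedResource> members = new ArrayList<>();

    Cloud(String cloudId) { this.cloudId = cloudId; }

    void add(ManagedResource resource) { members.add(resource); }

    @Override public String resourceId() { return cloudId; }

    @Override
    public String status() {
        // A cloud is healthy only if all of its members are.
        return members.stream().allMatch(m -> "RUNNING".equals(m.status()))
                ? "RUNNING" : "DEGRADED";
    }

    @Override
    public void applyPolicy(String policyDocument) {
        // Push the policy down recursively; each member may refine or adjudicate it.
        for (ManagedResource member : members) {
            member.applyPolicy(policyDocument);
        }
    }

    List<ManagedResource> members() { return Collections.unmodifiableList(members); }
}
```

Under this arrangement a private cloud could appear as a single managed resource inside a public cloud (or vice versa), which is essentially the composition scenario illustrated by the Skype and FortiusOne examples.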
  • 16. With these examples in mind, we characterize Utility Computing as follows:

- An OS management layer that transforms hosted resources in a data center into a policy-managed cloud, extensible beyond data center boundaries.
- It sits over (and possibly runs components physically on) production hardware.
- It enables clouds conforming to the same cloud management interface to be composited while maintaining cloud identity.
- It knows and manages all resources in a cloud (recursively, as dictated by policy).
- It reallocates (in an autonomic sense) resources in a cloud, as permitted by policy, to accommodate real-time and business-oriented usage patterns.
- It meters use of all resources managed within a cloud.
- It provides security and access control that can federate across cloud boundaries.
- It participates in adjudication of policy collisions across all cloud architecture layers where appropriate.

Utility computing can be considered an overlay on a cloud that makes the cloud and its elements manageable and compositional. Preservation of cloud identity also is a nod toward the ability to federate clouds, which has been elaborated in Service Grid: The Missing Link in Web Services, together with early thinking on the foundational nature of service grids vis-à-vis business computing ecosystems [v].

Service grid — the benefit after the autonomic endpoint

Before the term cloud, the term service grid was sometimes used to define a managed distributed computing platform that can be used for business as well as scientific applications. Said slightly differently, a service grid is a manageable ecosystem of specific services deployed by service businesses or utility companies. Service grids have been likened to a power or utility grid: always on, highly reliable, a platform for making managed services available to some user constituency. When the term came into use in the IT domain, the word service was implied to mean Web service, and service grid was viewed as an infrastructure platform on which an ecology of services could be composed, deployed, and managed.

The phrase service grid implies structure. While grid elements — servers together with the functionality they host within a service grid — may be heterogeneous vis-à-vis their construction and implementation, their presence within a service grid implies manageability as part of the grid as a whole. This implies that a capability exists to manage grid elements using policy that is external to implementations of services in a service grid (at the minimum in conjunction with policy that might be embedded in legacy service implementations). And services in a grid become candidates for reuse through service composition; services outside of a grid also are candidates for composition, but the service grid can only manage services within its scope of control. Of course, service grids defined as we have above are autonomic, can be recursively structured, and can collaborate in their management of composite services provisioned across different grids.

Service grid deployment architecture
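To suggest what the uniform management abstraction implied by this characterization might look like, the sketch below defines a hypothetical interface to which every manageable resource, including a cloud itself, conforms. The names and operations are ours, not those of any vendor's management API, and a production design would need metering, security federation, and policy adjudication well beyond what is shown.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical uniform management interface: anything the utility layer can see,
 *  whether a server, a container, a service, or a whole cloud, exposes it. */
interface ManageableResource {
    String id();
    double meteredUsage();                    // e.g., normalized resource-hours consumed
    void apply(Policy policy);                // late-bound policy, pushed from outside
    List<ManageableResource> children();      // recursive structure: clouds contain resources
}

/** Policies are externalized; the resource only knows how to accept one. */
interface Policy {
    boolean permits(String action, ManageableResource target);
}

/** A cloud is itself a manageable resource, so clouds can be composited
 *  while each retains its own identity. */
class Cloud implements ManageableResource {
    private final String id;
    private final List<ManageableResource> members = new ArrayList<>();

    Cloud(String id) { this.id = id; }

    /** Admit another resource, possibly another cloud, into this cloud's scope. */
    void admit(ManageableResource resource) { members.add(resource); }

    @Override public String id() { return id; }
    @Override public List<ManageableResource> children() { return members; }

    /** Metering aggregates recursively over everything the cloud manages. */
    @Override public double meteredUsage() {
        return members.stream().mapToDouble(ManageableResource::meteredUsage).sum();
    }

    /** Policy application also recurses, as far as policy itself permits. */
    @Override public void apply(Policy policy) {
        for (ManageableResource resource : members) {
            if (policy.permits("manage", resource)) {
                resource.apply(policy);
            }
        }
    }
}
```

Because a Cloud is itself a ManageableResource, a private cloud can admit (or be admitted by) a public cloud while each keeps its own identity, which is the compositing property the characterization calls out.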
  • 17. A cloud, as defined by the cloud taxonomy noted earlier, is not necessarilya service grid. There is nothing in cloud definitions that require all serviceshosted in them to be managed in a predetermined way. There is nopolicy engine required in a cloud that is responsible to harmonize policyacross infrastructure and business layers within or across its boundaries,though increased attention is being given to policy-driven infrastructuremanagement. Clouds are not formed with registries or other infrastructurenecessary to support service composition and governance.A service grid can be formed as an autonomic cloud and will placeadditional constraints on cloud structure (e.g., external policymanagement, interaction container-supported composition, a commonmanagement interface, support of specific interface standards). Thesestandards will be necessary to manage a service grid both as a technologyand a business platform.Container permeabilityClouds and service grids both have containers. In clouds, container is usedto mean a virtualized image containing technology and application stacks.The container might hold other kinds of containers (e.g., a J2EE/Java EEapplication container), but the cloud container is impermeable, whichmeans that the cloud does not directly manage container contents, andthe cloud contents do not participate in cloud or container management.In a service grid, container is the means by which the grid providesunderlying infrastructural services, including security, persistence, businesstransaction or interaction life cycle management, and policy management.In a service grid, it is possible for contents in a container to participatein grid management as a function of infrastructure management policiesharmonized with business policies like service level agreements. It also ispossible that policy external to container contents can shape5how thecontainer’s functionality executes. So a service grid container’s wall ispermeable vis-à-vis policy, which is a critical distinction between cloudsand service grids6.Cloud vendors and vendor lock-inVendor lock-in is a concern that will grow as cloud computing becomesmore prevalent. Lock-in is best addressed by the implementation ofand compliance to standards. In particular, standards for security,interoperability, service composition, cloud and service grid composition,management and governance, and auditing will become especially criticalas clouds become embedded into the way that corporations conductbusiness7.Standards for cloud management are emerging as vendors like Microsoft,Google and Amazon make their offerings available for use. The WebServices community has developed a set of standards for Web servicesecurity, Web service management, and Web service policy management,and so forth, that can serve as a basis for standards to be supported incloud computing. 
And software vendors8are implementing Web servicemanagement platforms based on such standards that provide the meansto define service level agreements that, when integrated with Web serviceand supporting infrastructure, govern end-to-end Web service-basedinteractions, ensure qualities of service, throttle Web service use to ensureperformance minimums, etc.With all this said, however, the fact is that comprehensive standards forcloud computing do not yet exist, since cloud computing is nascent.And until (and probably even after) such standards exist, cloud usersshould expect to see features and capabilities that justify lock-in — justas one does with other software and utility platforms. Externalizing policy(discussed later in this paper) and implementing services from an outside-inperspective will result in getting benefits from clouds while ameliorating atleast some of the aspects of vendor lock-in through loose couplings andmanageable interfaces.Virtual organizations and cloud computingSocial networks are examples of platforms that use a somewhatamorphous definition of organization similar to a virtual organization,which is defined by the National Science Foundation as “a group ofindividuals whose members and resources may be dispersed geographicallyand institutionally, but who function as a coherent unit through the use ofcyberinfrastructure.”viVirtual organizations can form in a variety of ways,usually as a function of roles/responsibilities played in interactions and lessas a function of title or position in an organization. Roles/responsibilitiesrepresent interfaces that have interaction scope and can be used toautomate computing and exception handling.5The sense of the word shape is consistent with how policy is applied in the telecom world where, for example, bandwidth might be made availableto users during particular times in the day as a function of total number of users present.6Cloud management typically is exposed by the cloud vendor through a dashboard. Vendors like Amazon also make functionality underlying thedashboard available as Web services such that cloud users’ functionality could programmatically adjust resources based on some internal policy. Aservice grid is constructed to actively manage itself as a utility of pooled resources and functionality for all grid users. Hence, a service grid will requireinteraction with functionality throughout the grid and determine with the use of policy extension points whether resource supply should be adjusted.7Note the absence of portability in this list. Interoperability is far more important than portability, which more often leads to senseless technologywars. It is unlikely to be possible to port applications from one cloud to another if these applications make use of cloud APIs. Since clouds are notstandard as yet, neither will the APIs be for some time. However, making policy explicit, and providing APIs in the noted areas will go a long waytoward enabling interactions to be orchestrated across cloud and service grid boundaries.8See AmberPoint and SOA Software as two examples of Web service management platform vendors.Cloud computing A collection of working papers 12
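One way to read the statement that roles/responsibilities represent interfaces is quite literal: each role a participant plays can be captured as an interface scoped to a particular interaction, and exception handling can then be automated against the role rather than against a named organization. The sketch below is a deliberately simple, hypothetical illustration of that reading.

```java
/** A role is an interface scoped to a particular interaction, not to a job title. */
interface SupplierRole {
    void confirmShipment(String interactionId, String orderId);
    void reportException(String interactionId, String reason);
}

/** Another participant plays the buyer role in the same interaction. */
interface BuyerRole {
    void acknowledgeReceipt(String interactionId, String orderId);
}

/** Exception handling automated against roles rather than organizations. */
class InteractionCoordinator {
    void onShipmentOverdue(String interactionId, String orderId,
                           SupplierRole supplier, BuyerRole buyer) {
        // Escalate to whoever currently plays the supplier role for this interaction;
        // the coordinator neither knows nor cares which organization that is.
        supplier.reportException(interactionId, "Shipment for order " + orderId + " is overdue");
    }
}
```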
  • 18. A virtual organization’s use of cloud services could vary widely:A virtual organization might be a startup company that uses aninfrastructure cloud to deploy its computing services because theeconomic model is right — a pay-for-use model is what it can afford asit gets off the ground, and maybe even throughout its entire corporateexistence. This type of organization may be interested in the elasticresources of a cloud, but may not need more advanced capabilities.A network of thousands of supply chain partners could be consideredto be a virtual organization. It could use a business interaction serverhosted in a cloud that manages interactions, ensuring they conform tolegal and contract policies and giving all participants in an interactiona record of their participation when that interaction completes. Thisvirtual organization might need the full range of autonomic computingcapabilities to manage the complexity of interoperating with manypartner systems and accommodating policy differences.A network of hundreds of thousands of corporate clients that use traveland entertainment services that comply with corporate standards — allhosted in a cloud — could be considered a virtual organization. One canimagine transaction consolidations and other clearinghouse functionsthat are part of this small ecosystem. Interactions might be complex andsomewhat long-lived and guided by business policies, though the roles/responsibilities played are likely to be simple.Rearden Commerce (http://www.reardencommerce.com/) implements-just such a platform that (as of Jan 2009) serves over 4000 corporateclients and 2 million users (client customers). It brings togethercorporate business travel policies, reviews of travel/entertainmentservice providers, expense processing and reporting, etc., in a waythat recognizes life of a traveler and makes it easier by eliminatingthe need to build direct point-to-point traveler to service providerrelationships.A virtual organization could be composed of scientists who collaboratefrom their labs across the globe in compute- and data-intensiveinteractions hosted in a cloud. These organizations typically are not large,but their work requires access to an elastic set of compute resources forhours at a time, and the capability to manipulate huge databases.And we could consider a healthcare context as an example of anecosystem of virtual organizations that scales to be even larger than theuser bases of popular social network platforms. Members might includehealthcare providers whose credentials must be tracked. Patients mustbe able to access their health records securely, and authorize access toportions of their charts to others. Healthcare devices and applications orservice functionality emit HL7 message streams and related events thatresult in updating patient charts, informing care providers of procedureresults, communicating billing information to hospital billing systemsand insurance providers, measuring quality of care, and keeping eachmember of a care provider group informed of all activities and thecorresponding outcomes that occur while they care for a patient whomight be physically located in another country.HL7 application messaging protocols are evolving from being ASCII/-special character delimited protocols (v2.x) to being XML-based(v3.x). 
From a technology point of view, HL7’s evolution to XML isvery complementary to Web service orientation, though it does notforce standardization of HL7 messages as yet; we hope that it willbring about standardization as v3.x becomes more widely adopted.Use of XML (and XSLT) also complements a strategy to enrich datapassed in messages in a more standard (data extension point) fashion,making it possible for participants in multiparty interactions to passinformation that they care about (but maybe no other participantdoes) along with standard information useful to all participants in theinteraction. Further, because XML structure can be made very explicit,enforcement of business policies is more easily enabled.Cloud computing must (and in some cases already does) address technicalchallenges to accommodate these organizational forms, including thefollowing:The number of machines in a cloud serving hundreds of millions of userscan reach tens of thousands of machines physically distributed acrossmultiple data centers, where it also may be necessary for data centercapacity to spill over to still other data centers. Failed servers in such alarge-scale environment have to be discoverable and cannot impact thecapabilities of the Cloud in aggregate; failed cloud components must beadopted as the norm and not the exception.Failed computers have to be replaced (virtually) by others that are waitingin inventory for automatic configuration and deployment.Storage models will have to be reconsidered, since it may be expedientto use massively distributed storage schemes in addition to thecentralized relational and hierarchical models now in use. We are seeingthe beginnings of such with Microsoft’s and Amazon’s offerings (usingHadoop-like storage models), and the Google File System. “Backup andRecovery” takes on new meanings with distributed file systems. Storagefault tolerance likely will be implemented differently in large clouds thanin smaller enterprise clouds.Security management systems might have to be federated. Accesscontrol schemes will have to accommodate global user bases securingservice methods throughout the cloud. There also are global constraintsto be considered: some countries do not wish data relating to theircitizens to be hosted outside of their national boundaries.We often think of network traffic attributed to systems managementto be small in comparison to the traffic generated by user interactionswith hosted business functionality. Management of clouds and theircomponents, especially clouds containing business functionalitymanaged with externalized business and infrastructure policies, may haveCloud computing A collection of working papers 13
  • 19. to be federated as a function of the size of the cloud to manage a moreappreciable amount of management-related network traffic.Complete testing will be difficult to impossible to perform in a very largeand dynamic cloud context, so it is likely that new test methods andtools will be required.The range of cloud-related virtual organization use cases noted aboveleverage the cloud computing instantiations we see in the market, andmakes clear that the demand is imminent for cloud computing to serveas the infrastructure and utility service grid for a user constituency thatis much larger and varied than we’ve seen to date. We see the first signsof such in social networking platforms and the success that they enjoy asmeasured by number of users. It will be only a matter of time when wesee that business interactions will be conducted in business network groupcontexts where business policy, roles, responsibilities, and functionalityconverge in a new type of cloud architecture.Concluding remarksAutonomic computing, though viewed with suspicion or disbelief in thepast years, can be sensibly applied to Cloud Computing in a way that willbe useful when developing cloud architectures capable of sustaining andsupporting ecosystem-scaled platforms. We suspect that this will becomethe norm as adoption of cloud computing increases and as social networkplatforms transition to include business capabilities.Cloud computing as we see it emerging today is somewhat amorphouslydefined, making it difficult to form a point of view about the capabilitiesof currently available cloud computing instances to manage next-century platforms. While it is clear that they can manage today’scommon platforms, we see architectural challenges for the future thatwe believe will be difficult to address using current cloud architecturesand architecture styles. We identify technical challenges — includingarchitecture style, user and access control management, the need to haveexternally managed business and infrastructure policies through interactioncontainers, and the need for Utility Computing capabilities — that must beaddressed to meet future architecture requirements.Aiming at implementation of an ecosystem platform will take usbeyond the management capabilities of current cloud offerings. Addingarchitecture components like the interaction container and externalizedpolicy engine will improve cloud capabilities, but until these becomefundamental components in cloud architecture, it is unlikely that a cloudwill be able to manage the concerns of a service grid. It is interesting tonote, however, that the construct of a service grid enables it to managethe concerns of a cloud. A service grid, as an autonomic architecturethat is hardened to be both a service-oriented technology platform and abusiness platform, can be expected to scale both downward to supportenterprise architectures and upward and outward to support the types ofarchitectures likely to be pervasive in twenty-first-century computing.Healthcare represents an area where we believe service grid computingand next-generation architectures will prove to be invaluable. Healthcaresystems world-wide are difficult to manage and architecturally extend,and they certainly are difficult to integrate. Unifying information acrosshealthcare facility boundaries is not only an informatics problem, but alsoan architecture problem that, if not addressed, will likely hinder nationalhealthcare agendas in the United States and elsewhere9. 
We will discuss a service grid-enabled healthcare platform architecture in detail in a subsequent paper.

9. One of the first efforts of which we are aware to solve access control/role-responsibility problems in healthcare systems as these relate to management of biomedical information in a service grid is being conducted by Dr. Carl Kesselman and Dr. Stephan Erberich at ISI/USC's Medical Information Systems division. Without doubt, ISI's work will be critical to the implementation of service grid-based next-generation healthcare systems.
  • 20. Cloud computing A collection of working papers 15Moving information technologyplatforms to the clouds — Insights intoIT platform architecture transformationIntroductionThe Long-Term Credit Bank (LTCB) of Japan underwent a very traumaticreorganization beginning in 1998 following Japan’s economic collapse in1989. The bank was beset with difficulties rooted in bad debts. Possiblemergers with domestic banks were proposed, but the bank eventually wassold to an international group, which set about putting the bank backtogether, launching it in June 2000 as Shinsei Bank, Limited (Shinsei).LTCB’s IT infrastructure was mainframe based, as many banks’infrastructure was at the time (and still is). Acquisitions and organic growthresulted in a variety of different systems supporting similar bank cardproducts. Among the many challenges with which the bank had to grappleas it began its new life was IT infrastructure consolidation, which, in part,translated into deciding how to consolidate bank card products and theirsupporting IT systems without further disruption to its bank clientele. Thebank could have issued a new card representing bank card features andbenefits of its individual products consolidated into a single one, but thiswould have violated the constraint to not further disrupt its client base,risking loss of more clients. Or it could have continued to accept the entirebank card products as it had in the past but, at the same time, find a wayto transparently consolidate systems and applications supporting thesecards into a very reduced set of systems — ideally one system — thatwould enable the retirement of many others.In sum, Shinsei took this second path. Conceptually, and using ITterminology, the bank viewed its various card products as businessinterfaces to Shinsei that it had to continue to support until a card typeno longer had any users, after which the product (the card type) could beretired. Further, to effect consolidation, the bank had to implement an ITapplication platform supporting both its future and its legacy. The bankIT group set about this mission, empowered by the freedom its businessinterfaces provided, and, over the next three to five years, replacedmany (potentially all) of its mainframe legacy systems using applicationsconstructed with modern technologies and hosted on commodityhardware and operating systems.Shinsei’s example is a direct analog to what IT teams in corporations todaymust do to transform legacy/existing Inside-Out application platforms intoOutside-In service oriented ones that effectively leverage the capabilitiesthat are afforded through use of cloud and service grid technologies.We begin this paper with a very brief explanation of Outside-In versusInside-Out architecture styles, clouds, and service grids. Then we explorestrategies for implementing architecture transformations from Inside-Outto Outside-In and issues likely to be encountered in the process.Going-forward assumptions and disclaimersGlobalization, economic crises, technology innovations, and many otherfactors are making it imperative for businesses to evolve away from currentcore capabilities toward new cores. 
Further, there appear to be indicatorsthat these businesses — if they are to participate in twenty-first-centurybusiness ecosystems for more than just a few years — will have to makemore core transitions during their corporate life than their twentieth-century counterparts, so the capability to leverage technology to efficientlytransform is important to corporate survival.We believe that clouds, service grids, and service oriented architectureshaving an Outside-In architecture style are technologies that will befundamental to successfully making such corporate transformations. Thereare near-term objectives, like the need for cost and resource efficiency or ITapplication portfolio management that justify use of these technologies tore-architect and modernize IT platforms and optimize the way corporationscurrently deploy them. But there are longer-term business imperatives aswell, like the need for a company to be agile in combining their capabilitieswith those of their partners by creating a distributed platform, and it is atthese corporations we specifically target this paper.Outside-in and inside-out architecture stylesArchitecture styles define families of software systems in terms of patternsfor characterizing how architecture components interact. They definewhat types of architecture components can exist in architectures of thosestyles, and constraints on how they may be combined. They define howcomponents may be combined together for deployment. They define howunits of work are managed, e.g., whether they are transactional (n-phasecommit). And they define how functionality that components provisionmay be composited into higher order functionality and how such can beexposed for use by human beings or other systems.The Outside-In architectural style is inherently top-down and emphasizesdecomposition to the functional level but not lower, is service-orientedrather than application-oriented; factors out policy as a first-classarchitecture component that can be used to govern transparentperformance of service-related tasks; and emphasizes the ability to adaptperformance to user/business needs without having to consider theintricacies of architecture workings1.The counter style, what we call Inside-Out, is inherently bottom-upand takes much more of an infrastructural point of view as a startingpoint, building up to a business functional layer. Application platformsconstructed using client server, object-oriented, and 2/3/n-tier architecture1An Outside-In architecture is a kind of service-oriented architecture (SOA) which is fully elaborated in Thomas Erl’s book called “Service-OrientedArchitecture: Concepts, Technology, and Design,” so we will not discuss SOA in detail in this paper.
  • 21. Cloud computing A collection of working papers 16styles are those to which we apply the generalization Inside-Out becausethey form the basis of enterprise application architectures today, andbecause architectures of these types have limitations that requiretransformation to scale in a massive way vis-à-vis Outside-In platforms.Implementation of an Outside-In architecture results in better architecturelayering and factoring, and interfaces that become more business thandata oriented. Policy becomes more explicit, and is exposed in a way thatmakes it easier to change it as necessary. Service orientation guides theimplementation, making it more feasible to integrate and interoperateusing commodity infrastructure rather than using complex and inflexibleapplication integration middleware.As a rule, it is simpler to integrate businesses at functional levels thanat lower technology layers where implementations might vary widely.Hence we emphasize decomposition to the functional level, which oftenis dictated by standards within a market, regulatory constraints on thatmarket, or even accounting (AP/AR/GL) practices.For a much more detailed discussion of Outside-In versus Inside-Outarchitecture styles, please see the working paper we call “Web Services2.0”vii.Clouds and service gridsSince a widely accepted industry definition of cloud computing — beyonda relationship to the Internet and Internet technologies — does not exist atpresent, we see the term used to mean hosting of hardware in an externaldata center (sometimes called infrastructure as a service), utility computing(which packages computing resources so they can be used as a utility inan always on, metered, and elastically scalable way), platform services(sometimes called middleware as a service), and application hosting(sometimes called software or applications as a service).The potential of cloud computing is not limited to hosting applications insomeone else’s data center, though cloud offerings can be used in this wayto elastically manage computing resources and circumvent the need to buynew infrastructure, train new people, or pay for resources that might onlybe used periodically. Special file system, persistence, data indexing/search,payment processing, and other cloud services can provide benefits to thosewho deploy platforms in clouds, but their use often requires modificationsto platform functionality so that it interoperates with these services.Before the term cloud, the term service grid was sometimes used to definea managed distributed computing platform that can be used for businessas well as scientific applications. Said slightly differently, a service grid is amanageable ecosystem of specific services deployed by service businessesor utility companies. Service grids have been likened to a power or utilitygrid … always on, highly reliable, a platform for making managed servicesavailable to some user constituency. When the term came into use in theIT domain, the word service was implied to mean Web service, and servicegrid was viewed as an infrastructure platform on which an ecology ofservices could be composed, deployed, and managed.The phrase service grid implies structure. While grid elements, serverstogether with functionality they host within a service grid, may beheterogeneous vis-à-vis their construction and implementation, theirpresence within a service grid implies manageability as part of the gridas a whole. 
This implies that a capability exists to manage grid elementsusing policy that is external to implementations of services in a service grid(at the minimum in conjunction with policy that might be embedded inlegacy service implementations). And services in a grid become candidatesfor reuse through service composition; services outside of a grid also arecandidates for composition, but the service grid only can manage serviceswithin its scope of control. Of course, service grids defined as we haveabove are autonomic, can be recursively structured, and can collaborate intheir management of composite services provisioned across different grids.Clouds and service grids both have containers. In clouds, container is usedto mean a virtualized image containing technology and application stacks.The container might hold other kinds of containers (e.g., a J2EE/Java EEapplication container), but the cloud container is impermeable, whichmeans that the cloud does not directly manage container contents, andthe cloud contents do not participate in cloud or container management.In a service grid, container is the means by which the grid providesunderlying infrastructural services, including security, persistence, businesstransaction or interaction life cycle management, and policy management.In a service grid, it is possible for contents in a container to participatein grid management as a function of infrastructure management policiesharmonized with business policies like service level agreements. It also ispossible that policy external to container contents can shape2how thecontainer’s functionality executes. So a service grid container’s wall ispermeable vis-à-vis policy, which is a critical distinction between cloudsand service grids3.2The sense of the word shape is consistent with how policy is applied in the telecom world where, for example, bandwidth might be made availableto users during particular times in the day as a function of total number of users present.3Cloud management typically is exposed by the cloud vendor through a dashboard. Vendors like Amazon also make functionality underlying thedashboard available as Web services such that cloud users’ functionality could programmatically adjust resources based on some internal policy. Aservice grid is constructed to actively manage itself as a utility of pooled resources and functionality for all grid users. Hence, a service grid will requireinteraction with functionality throughout the grid and determine with the use of policy extension points whether resource supply should be adjusted.
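Container permeability is easier to see in code than in prose. The sketch below is our own construction, not any vendor's container model; it shows a service grid style container whose hosted functionality is shaped at invocation time by an externally supplied policy, in the telecom sense of shaping noted in the footnote: the same request may be served or throttled depending on what the current policy says, and the policy can change while the container runs.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

/** Externally managed shaping policy; the container's contents never embed it. */
interface ShapingPolicy {
    /** Decide whether one more invocation may proceed right now. */
    boolean admit(String serviceName, long invocationsSoFar);
}

/** A (very) simplified permeable container: hosted functionality executes only as
 *  the externally supplied policy allows, and the policy can be swapped at runtime
 *  without touching the hosted code. */
class PermeableContainer {
    private volatile ShapingPolicy policy;           // late bound, replaceable at runtime
    private final AtomicLong invocations = new AtomicLong();

    PermeableContainer(ShapingPolicy initialPolicy) { this.policy = initialPolicy; }

    /** Policy extension point: external management pushes a new policy in. */
    void updatePolicy(ShapingPolicy newPolicy) { this.policy = newPolicy; }

    <T> T invoke(String serviceName, Supplier<T> hostedFunction) {
        long count = invocations.incrementAndGet();
        if (!policy.admit(serviceName, count)) {
            throw new IllegalStateException(serviceName + " throttled by current policy");
        }
        return hostedFunction.get();                  // the contained functionality itself is unchanged
    }
}
```

An impermeable cloud container, by contrast, would run the hosted function unconditionally; any shaping would have to be coded into the contents themselves, which is exactly the coupling the service grid model avoids.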
  • 22. Cloud computing A collection of working papers 17A cloud, as defined by the cloud taxonomy noted earlier, is not necessarilya service grid. There is nothing in cloud definitions that require all serviceshosted in them to be manageable in a consistent and predeterminedway4. There is no policy engine required in a cloud that is responsible toharmonize policy across infrastructure and business layers within or acrossits boundaries, though increased attention is being given software vendorsto policy-driven infrastructure management. Clouds are not formed withregistries or other infrastructure necessary to support service compositionand governance.However, a service grid can be formed by implementing a cloudarchitecture, adding constraints on cloud structure, and adding constraintson business and infrastructure architecture layers so that the result can bemanaged as both a technology and a business platform.For a much more detailed discussion of architectures in clouds andservice grids, please see the working paper we call “Demystifying Clouds:Exploring Cloud and Service Grid Architectures”viii.Architecture transformationHow to construct an Outside-In architecture that meets next centurycomputing requirements is a topic that requires debate. Should weleverage our past investments in infrastructure, bespoke softwaredevelopment, and third party software products? If so, how can weself-fund this and how long will it take? Or do we go back to the ITfunding well with rationale that defends our need now to develop a newservice platform and jettison that multimillion-dollar investment we justbarely finished paying off?The answer is it depends. We’ve seen both approaches taken. And we’veseen that development of a new platform is no longer as drastic as itsounds.Transforming an existing architectureIt is enticing to think that one could implement an Outside-In architecturesimply by wrapping an existing Inside-Out application platform with Webservice technologies to service-enable it.Not quite.It is possible to do that and then evolve the Inside-Out architecture to anOutside-In one as budget and other resources allow using a strategy verysimilar to Shinsei’s business interface strategy discussed in the introductionof this paper. But the fact that an Inside-Out architecture typically is notservice-oriented — even though it might be possible to access applicationfunctionality using Web services — suggests that just using the wrapperstrategy will not yield the benefits of a full Outside-In architectureimplementation, and compensation for Inside-Out architecture limits mayeven be more costly than taking an alternative approach.To illustrate the process of converting an Inside-Out architecture to anOutside-In one, we consider how a typical Web application platform couldbe converted to an Outside-In architecture in which some Web applicationaccesses all critical business functionality through a Web services layer, andWeb services are hosted in a cloud, a service grid, or internally.From a layered perspective, a Web application usually can be described bya graphic of a three-tiered architecture like the one below.At the top of the graphic we see a user interface layer, which usually isimplemented using some Web server (like Microsoft’s IIS or Apache’sHTTP Web server) and scripting languages or servlet-like technologiesthat they support. 
The second layer, the business logic layer, is where all business logic programmed in Java, C#, Visual Basic, and php/python/perl/tcl (or pick your favorite programming language that can be used to code libraries of business functionality) is put. The data layer is where code that manipulates basic data structures goes, and this usually is constructed using object and/or relational database technologies. All of these layers are deployed on a server configured with an operating system and network infrastructure enabling an application user to access Web application functionality from a browser or rich internet client application.

The blue and red lines illustrate that business and data logic sometimes are commingled with code in other layers of the architecture, making it difficult to modify and manage the application over time (code that is spread out and copied all over the architecture is hard to maintain). Ideally, the red and blue lines would not exist at all in this diagram, so it is here where we start in the process of converting this Inside-Out architecture to an Outside-In one.

4. This should not suggest that clouds and elements in them are not managed, because they are. Service grids, however, impose an autonomic, active, and policy-based management strategy on all of the elements within their scope of control so that heterogeneous application and technology infrastructure can be managed through a common interface that can be applied to fine-grained grid elements as desired or necessary.

Figure 1
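To make the red and blue lines concrete, here is a small, hypothetical example of the commingling the diagram depicts: a page handler that mixes presentation, business policy, and SQL in one place. None of the names refer to a real system; they only illustrate why such code resists change.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Anti-pattern sketch: UI-layer code reaching straight into business rules and the database. */
class OrderPageHandler {

    // Business policy hard-coded into the presentation layer (a "blue line").
    private static final double DISCOUNT_THRESHOLD = 1_000.00;

    String renderOrderSummary(Connection db, long orderId) throws Exception {
        // Data-layer detail leaked into the UI layer (a "red line").
        try (PreparedStatement stmt =
                 db.prepareStatement("SELECT total FROM orders WHERE id = ?")) {
            stmt.setLong(1, orderId);
            try (ResultSet rs = stmt.executeQuery()) {
                rs.next();
                double total = rs.getDouble("total");
                // Business logic entangled with HTML generation.
                double payable = total > DISCOUNT_THRESHOLD ? total * 0.95 : total;
                return "<p>Order " + orderId + ": $" + payable + "</p>";
            }
        }
    }
}
```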
  • 23. Addressing architecture layering and partitioning

The first step of transitioning from one architecture style to another is to correct mistakes relating to layering wherever possible. This requires code to be cleaned and commented, refactored, and consolidated so that it is packaged for reuse and orderly deployment, and so that cross-layer violations (e.g., database specifics and business logic are removed from the UI layer, or business logic is removed from the data layer) are eliminated.

Assuming layering violations are addressed, it makes sense then to introduce a service application programming interface (API) between the User Interface Layer and the Business Logic Layer as shown in the slightly modified layer diagram below:

Figure 2

The service layer illustrated here is positioned between the User Interface and lower architecture layers as the only means of accessing lower-level functionality. This means that the concerns of one architecture layer do not become or complicate the concerns at other levels.

But while we may have cleaned up layering architecture violations, we may not have cleaned up partitioning violations. Partitioning refers to the “componentizing” or “modularizing” of business functionality such that a component in one business functional domain (e.g., order management) accesses functionality in another such domain (e.g., inventory management) through a single interface (ideally using the appropriate service API). Ensuring that common interfaces are used to access business functionality in other modules eliminates the use of private knowledge (e.g., private APIs) to access business functionality in another domain space. Partitioning also may be referred to as factoring. When transitioning to a new architecture style, the first stage of partitioning often is implemented at the Business Logic Layer, resulting in a modified architecture depicted as follows:

Figure 3

The next phase of transformation focuses attention on partitioning functionality in the database so that, for example, side effects of inserting data into the database in an area supporting one business domain do not also publish into or otherwise impact the database supporting other business domains.

Why go to such trouble?

Because it is possible to transition the architecture in Figure 1 to become like one of the depictions below. Figure 4 illustrates a well-organized platform that might be centrally hosted.

Figure 4
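A minimal sketch of the service API step follows, again using invented names. The UI layer now depends only on a business-oriented interface, and the order management domain reaches inventory management only through that domain's own service interface, never through its private tables or classes.

```java
/** Service API the UI layer calls; it is the only door into lower layers. */
interface OrderManagementService {
    OrderSummary summarize(long orderId);
    void placeOrder(long customerId, long productId, int quantity);
}

/** Another business domain, reachable only through its own service interface. */
interface InventoryManagementService {
    boolean reserve(long productId, int quantity);
}

record OrderSummary(long orderId, double payableAmount) {}

/** Business-logic-layer implementation: partitioned, so it never touches
 *  inventory tables directly; it asks the inventory domain's service. */
class DefaultOrderManagementService implements OrderManagementService {
    private final InventoryManagementService inventory;
    private final OrderRepository orders;        // data-layer access stays behind this

    DefaultOrderManagementService(InventoryManagementService inventory, OrderRepository orders) {
        this.inventory = inventory;
        this.orders = orders;
    }

    @Override public OrderSummary summarize(long orderId) {
        return orders.load(orderId);
    }

    @Override public void placeOrder(long customerId, long productId, int quantity) {
        if (!inventory.reserve(productId, quantity)) {
            throw new IllegalStateException("Insufficient inventory for product " + productId);
        }
        orders.create(customerId, productId, quantity);
    }
}

/** Data layer hidden behind a narrow repository interface. */
interface OrderRepository {
    OrderSummary load(long orderId);
    void create(long customerId, long productId, int quantity);
}
```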
  • 24. Cloud computing A collection of working papers 19Figure 5 illustrates a well organized platform that could be hosted in aservice grid or even many service grids.Figures 4 and 5 make it simple to see that services and their supportingbusiness logic and data functionality could be replaced easily with analternative service implementation without negatively impacting otherareas of the architecture, provided that functionality in one service domainis accessed by another service domain only through the service interface.And such capability is required in order to simplify management of anapplication portfolio implemented on such an architecture as well asdistribute and federate service implementations.Externalizing policyThe next step toward implementing an Outside-In architecture is toexternal both business and infrastructure policies from any of thefunctionality provisioning services illustrated in the figures above.Our use of the word policy connotes constraints placed upon thebusiness functionality of a system, harmonized with constraints on theinfrastructure (hardware and software) that provisions that functionality.These constraints could include accounting rules that businesses follow,role-based access control on business functionality, corporate policy aboutthe maximum allowable hotel room rate that a nonexecutive employeecould purchase when using an online reservation service, rules aboutpeak business traffic that determine when a new virtualized image of anapplication system should be deployed, and the various infrastructuralpolicies that might give customer A preference over customer B shouldcritical resource contention require such.Policy extension points provide the means by which policy constraints areexposed to business and corresponding infrastructural5functionality andincorporated into their execution. They are not configuration points thatare usually known in advance of when an application execution starts andthat stay constant until the application restarts. Rather, policy extensionpoints are dynamic and late bound to business and infrastructuralfunctionality, and they provide the potential to dynamically shapeexecution of it within the deployment environment’s runtime.Externalizing policy highlights a significant distinction between Inside-Outand Outside-In architecture styles. Inside-out architectures usually involvelegacy applications in which policy is embedded and thus externalizing itis — at best — very difficult. Where application policies differ in typicalcorporate environments, it becomes the responsibility of integrationmiddleware to implement policy adjudication logic that may work well toharmonize policies over small numbers of integrated systems, but this willnot generalize to manage policy in larger numbers of applications as wouldbe the case in larger value chains. To illustrate the problem of scalingsystems where policy is distributed throughout it, consider the systemillustrated in Figure 6.5Rob Gingell and the Cassatt team are incorporating policy into their next-generation utility computing platform, called Skynet. In their parlance, policyprimitives represent metrics used by policy extension points in support of management as a function of application demand, application service levels,and other policy-based priority inputs, such as total cost. The policy-based approach to management is being implemented so that infrastructurepolicy can be connected to business service level agreements. 
This will be fundamental to automating resource allocation, service prioritization, etc., when certain business functionality is invoked or when usage trends determine need. Such capability will prove invaluable as the number of elements within a cloud or service grid becomes large.

Figure 5
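The sketch below, using invented names, illustrates the difference between a configuration value read at startup and a policy extension point: the booking service defers to a policy object that is looked up at call time, so a corporate travel rule (for example, a maximum nightly room rate for nonexecutive employees) can change, or be federated and managed elsewhere, without redeploying the service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Business policy evaluated at the extension point; it lives outside the service. */
interface ReservationPolicy {
    /** Throws if the proposed reservation violates current corporate policy. */
    void check(String employeeGrade, double nightlyRate);
}

/** A registry standing in for an external policy management service. */
class PolicyRegistry {
    private final Map<String, ReservationPolicy> policies = new ConcurrentHashMap<>();

    void publish(String name, ReservationPolicy policy) { policies.put(name, policy); }

    ReservationPolicy lookup(String name) { return policies.get(name); }
}

/** The business service contains no policy of its own; it only names the
 *  extension point and binds to whatever policy is current at call time. */
class HotelBookingService {
    private final PolicyRegistry registry;

    HotelBookingService(PolicyRegistry registry) { this.registry = registry; }

    void book(String employeeGrade, double nightlyRate) {
        registry.lookup("travel.hotel.rate").check(employeeGrade, nightlyRate);   // late bound
        // ... proceed with the reservation ...
    }
}
```

Publishing a new policy to the registry immediately shapes subsequent bookings, and the same late-binding mechanism can carry infrastructure policies once business and infrastructure policy are harmonized.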
  • 25. Figure 6 illustrates a system where business policy exists in multiple locations of the architecture, as indicated by areas outlined in red. Scaling this architecture would be disastrous because policy would be distributed as copies (or, worst case, as different code bases) over a very complex deployment environment. But a well-factored environment like the ones illustrated in Figures 4 and 5 has business logic located in a single logical architecture layer and, from it, policy can be externalized with the development of adapters or similar architecture components that play the role of policy extension points described above. Once this is accomplished, the architecture we started with begins to resemble the architecture illustrated in Figure 7 below, in which policy has been externalized, possibly federated, and put under the control of policy management services. Once policy from business functionality is externalized, it can be harmonized with infrastructure policy as feasible/desired.

Replacing application functionality with (composite) services

The final step in transforming an Inside-Out platform to an Outside-In platform is to replace business application code that coordinates invocation of multiple services with composite services, where this is possible. In Figure 7 we use the term composite service to mean business services formed by combining other business services (or methods thereof) into coarse (larger) business functions that are peers with application functionality. For example, we might see services to manage order fulfillment, invoice submission and payment processing, orchestrations that billing staff use to prepare for invoicing, logistics planning, and so forth. As a kind of mental mapping between Figures 1 and 7, the composite service functionality in Figure 7 maps to business logic that has leaked into Web pages of the Web application in Figure 1 (shown with red and blue lines) that are used to manage order fulfillment, invoice submission, etc.

Figure 6

Figure 7
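A composite service in this sense is a coarse-grained service whose implementation coordinates finer-grained business services. The sketch below is illustrative only; the service names are hypothetical, and a production composite would more likely be expressed as a state machine or sequential workflow than as straight-line code.

```java
/** Fine-grained business services provisioned elsewhere in the architecture. */
interface FulfillmentService { String ship(long orderId); }          // returns a shipment id
interface InvoicingService  { long issueInvoice(long orderId); }     // returns an invoice id
interface PaymentService    { void requestPayment(long invoiceId); }

/** Coarse-grained composite: order completion as a single business function,
 *  peer to what used to be application code scattered across Web pages. */
class OrderCompletionService {
    private final FulfillmentService fulfillment;
    private final InvoicingService invoicing;
    private final PaymentService payments;

    OrderCompletionService(FulfillmentService fulfillment, InvoicingService invoicing, PaymentService payments) {
        this.fulfillment = fulfillment;
        this.invoicing = invoicing;
        this.payments = payments;
    }

    /** One externally visible operation that composes three underlying services. */
    void completeOrder(long orderId) {
        String shipmentId = fulfillment.ship(orderId);
        long invoiceId = invoicing.issueInvoice(orderId);
        payments.requestPayment(invoiceId);
        // In a workflow-based composite, each step (shipment, invoice, payment)
        // would be a state with its own exception handling, including human
        // intelligence tasks where needed.
    }
}
```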
  • 26. Cloud computing A collection of working papers 21Orchestration is often equated to workflows used to coordinate someordering of service method invocations. Workflow and other businessprocess management technologies are now well-known within today’scorporations. Workflow engines for Web services have been commoditizedthrough open source initiatives and by commercial software vendors.These engines make it possible to implement composite Web servicesas either state machine or sequential workflows. Use of state machineflows makes it possible to avoid prescriptively dictating how systemsinteroperate. They also provide the opportunity to incorporate humanintelligence tasks to help resolve exception conditions that often emergefrom composite services or straight through processing flows6.Starting from scratch — maybe easier to do, but sometimes hardto sellMany CIOs and IT executives hope that the costs and risks of transforminga legacy platform architecture to an Outside-In one can be amortized overtime, and who can blame them. Most have probably spent a considerablesum developing the current architecture, so the last thing any IT executivewants to ask for is new budget sufficient to fund still more infrastructure-level activities or require their companies to choose between newfunctionality or resolved infrastructure issues.But we have experienced many changes in the technology world duringthe last 20 years that strongly suggest there is value in at least consideringwhether implementing Outside-In architectures from scratch would beworthwhile. An interesting catch here is that this argument could havebeen made and was made at each new stage of development over thelast 20 years. Why is the story now so different? Because today’s contextversus just a few years ago is qualitatively different. Significant broadbandcapacity, economic storage (both self- and cloud-hosted), cheap memoryand modern caching services, commodity 64-bit operating systems,XML accelerators and sophisticated application protocol managementcapabilities, commoditized integration/interoperability technologies,virtualization and utility computing, cloud and service grid computing, andother relatively recent innovations challenge the traditional wisdom thatit is better to evolve and extend an existing platform than it is to createa new one that could circumvent problems from retrofitting an existingarchitecture in ways quite counter to its original design.Coupled with these advances are elaborations of industry domains inthe form of industry or business solution maps. These maps are usedby consulting companies and software vendors to provide businessprocess oriented views of industry, define roles played and responsibilitiesperformed within business processes, begin (at least) to build outfunctional decompositions of the industry domain, and map processes totechnology solutions where feasible. Using these maps as starting pointsstreamlines process and data mapping efforts that used to take monthsto even several years to perform (in larger companies), and results in adetailed functional view that is necessary to build a well-formed Outside-Inarchitecture.Building from scratch is really not the same as starting with nothing buta blank sheet of paper. 
While it is unusual to find a company able totake a purely greenfield approach (unless it is a startup), there are waysfor established businesses to get comfortable with taking a greenfieldapproach to developing an Outside-In architecture, and subsequentlydeveloping a strategy to implement it even if using components of existingplatforms.Concluding remarksTransforming an Inside-Out architecture to an Outside-In architecturecan be a lengthy process — it is a function of existing system complexity,size, and age. One company who shared with us its experiences whenmaking such a transition was Rearden Commerce (Rearden). Prior to threeand a half years ago, Rearden’s architecture was composed like many ofthe Web applications we see today: three-tiered, open source Web andapplication server technologies, and a relational database. Rearden’s Webapplication exposed a framework to which merchant clients could interfaceto Rearden “services” or functions. Rearden’s management team had theforesight to recognize the company’s need to create a platform (not just anapplication), and the corresponding need to make architecture changes tosupport more rapid development and simpler deployment of new services.By this time, Rearden already had clients, so it understood that change hadto be made transparently to its user base whenever possible or in a waythat the user base viewed as a positive upgrade of capability to which theycould migrate as doing so became expedient to their business.Rearden strengthened its leadership team with technologists who hadparticipated in Web service infrastructure companies and could guidein Rearden’s architecture modernization. This new leadership teamundertook a transformation of the company’s three-tiered architecture toa service-oriented one over a two-year period using a process like the onedescribed above. At the end of the two-and-a-half-year period, Reardenhad transformed its traditional Web application architecture to a serviceoriented one with externalized policy management.When performing an architecture transformation, is it necessary that allarchitecture components are entirely transformed — as was the case withRearden? If there was queue-based middleware in the old architecture,should it be replaced? Should all old applications be replaced with customapplications having appropriate policy extension points?6Ultimately, it may prove necessary to incorporate a constraint engine into the way that services are composited to harmonize policies and dynamicallygovern execution of the composite..
  • 27. Cloud computing A collection of working papers 22The answer to these questions is it depends. Certainly it is possible toreplace enterprise application integration technologies with commodityor open source technologies, simplify them, or maybe — in some cases— even eliminate them. It is unlikely that middleware supporting reliablemessaging and long-lived business transactions between business partnersneeds to be totally replaced in or removed from an Outside-In architecture.But its use can be couched in ways that eliminate tight coupling betweenpartners, and commingling of business policy with integration functionalitythat makes partner integration difficult to change as policies change or asa partner networks expand.Taking an Outside-In point of view requires that we separate concernsfrom the start. Application platforms should be viewed as distributedfrom their beginning rather than be made so after the fact by attachingsome distribution layer to them. We must understand how we havepermitted business security and access control models to be built intoour architectures and how, now that technology innovations enable usto challenge these limits, we must remove them from our computingplatforms to realize business agility goals that will be demanded of anarchitecture in the twenty-first-century. Technologies we’ve used inthe past can be useful to us in the future. Success in implementing anOutside-In architecture is less a function of technology than it is of abusiness and technology architecture vision that forces business andtechnology architects to view business capabilities from a global, outside inand top down perspective.
  • 28. Cloud computing A collection of working papers 23Motivation to leverage cloud and servicegrid technologies — Pain points thatclouds and service grids addressIntroductionIt was September 2008 when Larry Ellison was asked about whetherOracle would pursue a cloud vision. His response at the time was thatOracle would eventually offer cloud computing products. But in the sameconversation, Mr. Ellison also noted — in his inimitable fashion — thatcloud computing was such a commonly used term as to be “idiocy.”Fair comment, actually, given that cloud computing is such an overloadedterm. It can be used as a synonym for outsourced data center hosting.It can be used to define what Salesforce.com and NetSuite do — theyoffer software as a service. Some have referred to the Internet as thecloud … an uber-cloud containing all others. Cloud computing sometimesis imprecisely used to reference grid computing for database resourcemanagement or massively parallel scientific computing. And cloudcomputing has been taken to mean time sharing, which is both a styleof business and a technology strategy that sought to share expensivecomputing resources across corporate boundaries at attractive price points.It appears on the surface that the IT industry has redefined much of whatit does now and has done for quite a while as cloud computing, and thatOracle indeed might need only to change verbiage in a few of its ads toalign them with a well-formed cloud vision.But today’s IT leaders are operating in a business climate in which intensecommoditization and change force deployment of new IT-enabledbusiness processes and require acknowledgement that business processesand architectures that are fixed/rigid in their definition will not scale tolarge networks of practice, that IT budgets may have reached the pointwhere conventional internal cost cutting can wring out only nominaladditional value unless business and IT processes change, and that doingbusiness internationally is not the same as conducting global business(i.e., outsourcing is not equivalent to organizing and conducting globalbusiness). So, while Mr. Ellison’s remarks may express the sentimentsof many IT leaders today who have spent a considerable amount oninfrastructure, they cannot be correct unless the processes and techniquesdeveloped in IT during the past 20 years will be the foundation forprocesses and techniques of the next 20 years.We do not believe that twentieth-century IT thinking can or should bethe de facto foundation for twenty-first-century IT practices. We believethat now, more than ever before, IT matters, and it has already becomethe critical center of business operations today. As such, IT leadershave no choice but to continue to chase cost and margin optimization.They also have no choice but to carefully set and navigate a course torenovate and/or replace twentieth-century practices with for twenty-first-century practices and technologies so that product lines and services thatcompanies offer today can remain relevant through significant markettransitions.This paper is the first in a set of three that attempt to establish a thoughtframework around cloud computing, its architectural implications, andmigration from current computing architectures and hosting capabilitiesto cloud computing architectures and hosting services. We begin byexploring three IT pain points that can be addressed by cloud and servicegrid computing. 
Subsequent papers elaborate these pain points, and methods to handle them, more completely.

IT pain points

There are probably a very large number of IT pain points that IT leaders in today's corporations would want to see addressed by cloud and service grid computing. We highlight three in this paper:

• Data Center Management
• Architecture Transformation and Evolution (evolving current architectures or beginning from scratch)
• Policy-based Management of IT Platforms

Pain point: data center management

Summary: The capabilities needed to manage a data center, including network management, hardware acquisition and currency, energy efficiency, and scalability with business demands, are all costly to implement in a way that easily expands and contracts as a function of demand. Also, as businesses aggregate and collaborate in global contexts, data center scalability is constrained by cost, the ability to efficiently manage environments and satisfy regulatory requirements, and geographic limitations. Adding to this complexity, the pace of change demanded by today's global and corporate climates underscores the need to transform the way we manage data centers: the methods of the past 5 to 10 years do not scale and do not provide the requisite agility.

Cloud solutions: Cloud solutions can form the basis of a next generation data center strategy, bringing agility, elasticity, and economic viability to the fore:

• Affordable elasticity and scalability (a minimal sketch of a policy-driven elasticity loop follows this pain point discussion)
  – Resource optimization through virtualization. Management dashboards simplify responses evoked by seasonal peak utilization demands or business expansion
  – Finer-grained container management capabilities (e.g., Cassatt's) will serve to fine tune elasticity policies
  – Capability to affordably deploy many current technology-based applications as they exist today, possibly re-architecting them over time
  – Minimized capital outlay, which is especially important for startups, where initial funding is far too limited to be used to capitalize infrastructure
  – Extreme elasticity — handling spikes of traffic stemming from something catching on or "going viral" (e.g., having to scale from 50 to 5000 servers in one day because of the power of social media)
• Affordable and alternative provisioning of disaster recovery
  – Cloud data storage schemes provide a different way to persistently store certain types of information that makes explicit data replication unnecessary (storage is distributed/federated transparently)¹
  – Creation of virtual images of an entire technology stack streamlines recovery in the event of server failure
  – Utility computing management platforms enable consistent management across data center boundaries. Evolution of utility computing to enable cloud composition will simplify implementation of failover strategies
• Affordable state-of-the-art technology
  – The exponential nature of digital hardware advances makes the challenge of keeping hardware current particularly vexing. Buying cloud services transfers the need to keep hardware current to the cloud vendor — except, possibly, as this applies to mainframe or other legacy technologies that remain viable
  – It is reasonable to expect cloud vendors to offer specialized hardware capabilities (e.g., DSP/GPU/SMP capabilities) over time in support of gaming/graphics, parallel/multithreaded applications, etc.
  – Specialized hardware needs (e.g., mainframe-based) probably will not be the responsibility of the cloud vendor, but there is no reason why a private cloud/service grid should not be able to be composed with a public cloud/service grid
• Operational agility and efficiency — cloud vendors will oversee management of hardware and network assets at a minimum. Where they provide software assets or provision a service grid ecosystem, they likely will provide software stack management as well
  – Management of assets deployed into a cloud is standardized. Management dashboards simplify the management and deployment of both hardware and software
  – Energy efficiency becomes the cloud vendor's challenge. The scale of a cloud may well precipitate the move to alternative cooling strategies (e.g., water vs. fan at the hardware (board/hardware module) level, air and water cooling of data centers) and to increased management software capabilities that interoperate with data center policies to control power up/down of resources and to consolidate (on fewer boxes) processes running in a cloud, given visibility into utilization, service level agreements, etc. One could even imagine implementing a power strategy that continuously moves resource-intensive applications to run where power is less expensive (e.g., power might be less expensive at night than during the day, so keep running such an application on the dark side of the earth)
• Big enterprise capabilities for small company prices
  – Startups can use and stay with cloud solutions as they grow and become more established
  – Hardware and data center cost savings and staff cost optimizations enable businesses to self-fund innovative IT initiatives
  – Those who wish to leverage the cloud's functional capabilities will have to build their own capabilities (e.g., services and/or applications) to interoperate with resources in the cloud, provided that they wish to do more than simply use a cloud as outsourced hosting
• Security compliance (e.g., security of information and data center practices, PCI data security compliance) will increasingly become the responsibility of cloud vendors
  – This will be true especially as clouds become/evolve into service grids, as cloud vendors geographically distribute their capabilities, and as specialized clouds are provisioned to serve specific industries. Where there are constraints on the location of data, policies to guarantee that data will be stored as required, together with auditing/monitoring procedures, will have to be put into place
  – In today's outsourced hosting environments, clients work with service providers using audits to identify gaps in security strategies and compliance, and to ensure that such gaps are quickly closed. We expect the same to be true in cloud contexts — especially where new distributed storage technologies can be used. Further, we expect that emphasis on the use of policy to deal with security, data privacy, and regulatory requirements ultimately will distinguish cloud vendors and their technologies from service grid vendors, who will focus on the construction and management of a policy-driven business ecosystem leveraging cloud computing

¹ It is interesting to consider the implications that new cloud persistence schemes can have on registries like public DNS and Web service registries. Were these registries to be hosted in a cloud, it might be possible to rethink their implementation so as to simplify underlying database replication and streamline propagation of registry updates.
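To make the policy-driven elasticity ideas above concrete, the following is a minimal sketch in Python of the kind of evaluation loop a utility computing management layer might run. It is illustrative only: the provisioner interface, the thresholds, and the stub used to exercise the policy are assumptions made for this sketch, not any particular vendor's API.

# Minimal sketch of a policy-driven elasticity loop (hypothetical interface,
# not a specific vendor's API). A management layer would run something like
# this on a schedule, using utilization telemetry to grow or shrink capacity.

class ScalingPolicy:
    """Scale out when average utilization is high; scale in when it is low."""

    def __init__(self, min_servers=2, max_servers=50,
                 scale_out_at=0.75, scale_in_at=0.30):
        self.min_servers = min_servers
        self.max_servers = max_servers
        self.scale_out_at = scale_out_at
        self.scale_in_at = scale_in_at

    def decide(self, current_servers, avg_utilization):
        """Return the number of servers the pool should have next."""
        if avg_utilization > self.scale_out_at:
            # Grow aggressively to absorb "going viral" spikes.
            return min(current_servers * 2, self.max_servers)
        if avg_utilization < self.scale_in_at:
            # Shrink conservatively to avoid thrashing.
            return max(current_servers - 1, self.min_servers)
        return current_servers


class StubProvisioner:
    """Stand-in for a cloud provisioning API, used only to exercise the policy."""

    def __init__(self, servers, utilization):
        self.servers = servers
        self.utilization = utilization

    def adjust_to(self, target):
        print(f"adjusting pool from {self.servers} to {target} servers")
        self.servers = target


def enforce(policy, provisioner):
    """One evaluation cycle: read utilization, apply policy, adjust capacity."""
    target = policy.decide(provisioner.servers, provisioner.utilization)
    if target != provisioner.servers:
        provisioner.adjust_to(target)


if __name__ == "__main__":
    enforce(ScalingPolicy(), StubProvisioner(servers=10, utilization=0.9))

The same separation of policy (thresholds and limits) from mechanism (the provisioning calls) extends naturally to the power-cost strategy mentioned above: only the decision step would change, while the enforcement loop stays the same.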
Pain point: architecture transformation/evolution (the Brownfield vs. Greenfield Conundrum)

Summary: Significant investment in application platforms over the last 10 years has resulted in heterogeneous best-of-breed application systems that have proved hard and costly to integrate within corporate boundaries. Scaling them beyond corporate boundaries into corporate networks of practice takes integration to a level of complexity that appears insurmountable, but the perceived costs of starting fresh seem equally so. IT leadership knows it must do something about the application portfolio it has assembled to make it interoperable with partner networks, without requiring massive technology restructuring in unrealistically short time periods. It also knows the business must respond quickly to global market opportunities when they present themselves. How does IT leadership guide the architectural evolution and transformation of what exists today to enable rapid-fire response without starting from scratch or trying to change its application platform in unrealistic time periods?

Cloud solutions: Cloud solutions can form the basis of an application portfolio management strategy that can be used to address tactical short-term needs (e.g., interoperability within a business community of practice, using the cloud to provision community resources) and to address the longer-term need to optimize the application portfolio and possibly re-architect it.

• Cloud vendors offer the capability to construct customized virtualized images that can contain software for which a corporation has licenses. Hosting current infrastructure in a cloud (where such hosting is possible) provides an isolated area in which a corporation or corporate partners (probably on a smaller scale, due to integration complexities associated with older infrastructure and application technologies) could interoperate using existing technologies. Why would companies do this?
  – To move quickly with current platforms
  – To economically host applications, minimize private data center modifications, and, in so doing, self-fund portfolio optimization and/or re-architecture work
  – To use current capabilities, but shadow them with new capabilities as they are developed — ultimately replacing old with new
  – To simplify architecture by removing unnecessary moving parts
• Cloud vendors offer application functionality that could replace existing capabilities (e.g., small-to-large ERP and CRM systems). Incorporating this functionality into an existing application portfolio leads to incremental re-architecture of application integrations using newer technologies and techniques (Brownfield), which, in turn, should result in service-oriented interfaces that can become foundational to the future state. An incremental move toward a re-architected platform hosted using cloud technologies may prove to be the only way to mitigate the risks of architectural transformation while keeping corporate business running. Conversely, clouds also represent locations where Greenfield efforts can be hosted. Greenfield efforts are not as risky as they sound, given the maturity (now) of hosted platforms like Salesforce, NetSuite, etc.
  – How quickly can transformation of an existing platform be accomplished? This depends upon the architectural complexity of what is to be transformed or replaced. A very complex and tightly coupled architecture might require several years to decouple so that new architecture components could be added — assuming no Greenfield scenario is desired or feasible — whereas it might be possible to move a simply structured Web application in a matter of hours. A platform that has specialized hardware requirements (e.g., there is a mainframe dependency, or digital signal processing hardware is required) might have to be privately hosted, or be hosted partly in public and partly in private clouds
• Cloud application programming interfaces (APIs), together with the concepts of distribution, federation, and services that are baked in, provide a foundation on which to implement loosely coupled, service-oriented architectures, and can logically lead to better architecture (see the sketch after this list)
  – Web services, reliable queuing, and distributed storage (nonrelational with somewhat relational interfaces, and relational) provide foundational infrastructure capabilities to implement modern architectures using standardized APIs (e.g., WS-*)
  – Standardized interfaces, loose architecture couplings, and standardized deployment environments and methods increase reuse potential by making it easier to compose new services with existing services
• Clouds provide a means to deal with heterogeneity
  – Initially, heterogeneity is dealt with through management layers
  – Better architecture, as noted above, further enhances this as heterogeneity is encapsulated beneath standardized and service-oriented APIs
  – Once heterogeneity is contained, a portfolio optimization/modernization strategy can be put into place and implemented
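As an illustration of the loose coupling these bullets describe, here is a minimal sketch in Python of a service-oriented interface that insulates callers from where a capability runs. The CustomerDirectory interface, the class names, and the endpoint URL are hypothetical; the point is only that a legacy adapter and a cloud-hosted implementation can satisfy the same contract, so the hosting location can change without touching callers.

# Minimal sketch of wrapping an existing capability behind a service-oriented
# interface. All names and the endpoint layout are illustrative assumptions.

import json
import urllib.request
from abc import ABC, abstractmethod


class CustomerDirectory(ABC):
    """Standardized interface that the rest of the portfolio codes against."""

    @abstractmethod
    def lookup(self, customer_id: str) -> dict:
        ...


class LegacyCrmDirectory(CustomerDirectory):
    """Adapter over an existing on-premise CRM client (the Brownfield side)."""

    def __init__(self, crm_client):
        # crm_client stands in for whatever tightly coupled client object
        # already exists; it is assumed to expose fetch_customer().
        self._crm = crm_client

    def lookup(self, customer_id: str) -> dict:
        record = self._crm.fetch_customer(customer_id)
        return {"id": customer_id, "name": record.name, "region": record.region}


class CloudCrmDirectory(CustomerDirectory):
    """Same contract, served by a cloud-hosted web service (hypothetical URL)."""

    def __init__(self, base_url: str):
        self._base_url = base_url.rstrip("/")

    def lookup(self, customer_id: str) -> dict:
        url = f"{self._base_url}/customers/{customer_id}"
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read())


def print_region(directory: CustomerDirectory, customer_id: str) -> None:
    # Callers depend only on the interface, so the implementation can move
    # from the legacy adapter to the cloud-hosted service without changes here.
    print(directory.lookup(customer_id)["region"])

Incremental re-architecture then amounts to introducing such interfaces around existing capabilities and swapping the implementations behind them as cloud-hosted services become available.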
Pain point: policy-based management of IT platforms

Summary: Policy constraints are difficult, if not impossible, to implement, especially in a rapidly changing world/context. Business processes and policies are embedded in the monolithic applications that comprise corporate business platforms today. Even the bespoke applications constructed in the past 10 years share this characteristic, since policy and process were not treated as formal architecture components. Consequently, application scalability vis-à-vis provisioning policy-driven business capabilities is limited. Organizational model changes, e.g., mergers and acquisitions (or divestitures) or corporate globalization into very loosely coupled business networks, underscore the need for policy to be made explicit. The ability to conduct business in a quickly changing world will be a direct function of the capability to manage using policy.

Service grid solutions: In one sense, this pain point can be considered to be related to the Brownfield/Greenfield Conundrum, which, if addressed, results in a distributed/federated, service-oriented, loosely coupled architecture in which policy can be factored out and considered a first-order architecture component. However, it also is clear that: (1) not everyone needs policy factored like this; and (2) where policy must be exposed may vary by domain, community, geography, etc. Hence we deal with this pain point separately and consider the need to address it a prerequisite condition to leveraging the full capabilities of a service grid.

Corporations require enterprise qualities in architectures, and they will have the same expectations of the clouds into which they deploy critical platforms. Scaling architectures for use in increasingly larger communities, as well as simply making platforms far more configurable and compositional, requires the ability to expose policy extension points that can be incorporated into a management scheme that implements and enforces policy across the entire technology stack. The result of such architecturally pervasive policy management is that control over environmental as well as business constraints is provisioned. When corporate architectures are deployed into service grids, these policy extension points must be used even in grids composed of other grids.

• Cloud vendors are implementing utility computing management capabilities, which are, themselves, policy driven. As these capabilities are further refined, it will become feasible to integrate business policies with the infrastructure management policies on which business and infrastructure service level agreements/processes can be based
• Outside-In architectures (in our view, service-oriented architecture properly done) provision interfaces that easily align with business processes and minimize architecture complexity, resulting in a simpler architecture in which policy is externalized. Externalized policy provides the opportunity for business policy to join with infrastructure policy
• Externalized policy provides the foundation on which policy-driven business processes can be constructed and managed (a minimal sketch of externalized policy appears at the end of this section). This results in increased business agility because it simplifies how businesses interoperate: they interoperate at the business process level, and not at the technology level; policies can be changed with significantly less impact on the code that provisions business functionality
• Cloud vendors use containers to deploy functionality. As these containers become permeable, such that their contents can both be managed and expose policy extension points, policies can span the entire cloud and grid technology stacks
• Service grids provision architecture components, e.g., policy engines and interaction services, that enable policy to be managed/harmonized explicitly and separately from other business functionality — across architecture layers, across business networks of practice — and used as the foundation of business interactions
  – It is important to note that policy is viewed as a constraint continuum, covering everything from infrastructure management to domain (regulatory, industry/market sector) policy constraints
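To suggest what externalized policy can look like in practice, here is a minimal sketch in Python in which policy is expressed as data and evaluated by a small, separate component rather than being buried in application code. The rule format, attribute names, and region identifiers are assumptions made for the illustration, not a standard.

# Minimal sketch of externalized policy: rules live as data, separate from the
# business functionality they constrain. Rule format and attribute names are
# illustrative assumptions, not a standard.

POLICIES = [
    # Data-residency constraint: EU customer data may only land in EU regions.
    {"name": "eu-data-residency",
     "applies_if": {"data_classification": "customer", "customer_region": "EU"},
     "require": {"storage_region": ["eu-west", "eu-central"]}},
    # Infrastructure constraint: production deployments must have backups enabled.
    {"name": "prod-backup",
     "applies_if": {"environment": "production"},
     "require": {"backup_enabled": [True]}},
]


def violations(request, policies=POLICIES):
    """Return the names of policies a deployment request would violate."""
    failed = []
    for policy in policies:
        applies = all(request.get(k) == v for k, v in policy["applies_if"].items())
        if not applies:
            continue
        for attribute, allowed in policy["require"].items():
            if request.get(attribute) not in allowed:
                failed.append(policy["name"])
                break
    return failed


if __name__ == "__main__":
    request = {"data_classification": "customer", "customer_region": "EU",
               "environment": "production", "storage_region": "us-east",
               "backup_enabled": True}
    print(violations(request))  # -> ['eu-data-residency']

Because the rules live outside the business functionality they constrain, a data-residency or redundancy requirement can be changed, audited, or harmonized across a business network without redeploying the applications themselves, which is the agility argument made above.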
Concluding Remarks: To the 21st century and beyond

We see, in this paper, that cloud computing can be used to address tactical problems with which IT continually deals, like resource availability and reliability, data center costs, and operational process standardization. These near-term objectives represent sufficient justification for companies to use cloud computing technologies even when they have no need to improve their platforms or practices. But there are longer-term business imperatives as well, such as the need for a company to be agile in combining its capabilities with those of its partners through a distributed platform, that will drive companies aggressively toward cloud and service grid computing. We believe that clouds, service grids, and service-oriented architectures are technologies that will be fundamental to twenty-first-century corporations successfully navigating the changes they now face.

The pain points discussed above illustrate a progression of change that most corporations have already begun, whether they are just starting up or are well established. We began with the use of cloud hosting services as an alternative to self-hosting, or as an alternative to other current-day third-party hosting arrangements that do not offer at least the potential of cloud computing. For those companies that need to pursue implementation and management of a service-oriented architecture, we discussed pain points relating to re-architecting current platforms to leverage cloud computing, and the possible need to formalize the way that policy is used to manage IT platforms within and across service grid boundaries.

Many of the concepts mentioned in the pain point discussions are architectural and are not defined at all in this paper. However, they are more completely elaborated in our other papers, Demystifying Clouds: Exploring Cloud and Service Grid Architectures and Moving Information Technology Platforms To The Clouds: Insights Into IT Platform Architecture Transformation.
About the authors

Thomas B (Tom) Winans is the principal consultant of Concentrum Inc., a professional software engineering and technology diligence consultancy. His client base includes Warburg Pincus, LLC and the Deloitte Center for the Edge. Tom may be reached through his Website at http://www.concentrum.com.

John Seely Brown is the independent co-chairman of the Deloitte Center for the Edge, where he and his Deloitte colleagues explore what executives can learn from innovation emerging on various forms of edges, including the edges of institutions, markets, geographies, and generations. He is also a Visiting Scholar and Advisor to the Provost at the University of Southern California. His Website is at http://www.johnseelybrown.com.
About the Deloitte LLP Center for the Edge

The Deloitte Center for the Edge conducts original research and develops substantive points of view for new corporate growth. The Silicon Valley-based Center helps senior executives make sense of and profit from emerging opportunities on the edge of business and technology. Center leaders believe that what is created on the edge of the competitive landscape — in terms of technology, geography, demographics, markets — inevitably strikes at the very heart of a business. The Center's mission is to identify and explore emerging opportunities related to big shifts that aren't yet on the senior management agenda, but ought to be. While Center leaders are focused on long-term trends and opportunities, they are equally focused on implications for near-term action, the day-to-day environment of executives.

Below the surface of current events, buried amid the latest headlines and competitive moves, executives are beginning to see the outlines of a new business landscape. Performance pressures are mounting. The old ways of doing things are generating diminishing returns. Companies are having a harder time making money — and increasingly, their very survival is challenged. Executives must learn ways not only to do their jobs differently, but also to do them better. That, in part, requires understanding the broader changes to the operating environment.

Decoding the deep structure of this economic shift will allow executives to thrive in the face of intensifying competition and growing economic pressure. The good news is that the actions needed to address near-term economic conditions are also the best long-term measures to take advantage of the opportunities these challenges create. For more information about the Center's unique perspective on these challenges, visit www.deloitte.com/centerforedge.

About Deloitte

Deloitte refers to one or more of Deloitte Touche Tohmatsu, a Swiss Verein, and its network of member firms, each of which is a legally separate and independent entity. Please see www.deloitte.com/about for a detailed description of the legal structure of Deloitte Touche Tohmatsu and its member firms. Please see www.deloitte.com/us/about for a detailed description of the legal structure of Deloitte LLP and its subsidiaries.

Copyright © 2009 Deloitte Development LLC. All rights reserved.
Member of Deloitte Touche Tohmatsu

Contact us

For further information, please contact:

John Hagel
Co-chairman, Deloitte LLP Center for the Edge
Director, Deloitte Consulting LLP
+1 408 704 2778
jhagel@deloitte.com

Glen Dong
Chief of Staff, Deloitte LLP Center for the Edge
Senior Manager, Deloitte Services LP
+1 408 704 4434
gdong@deloitte.com

Christine Brodeur
National Marketing Lead, Deloitte LLP Center for the Edge
Senior Manager, Deloitte Services LP
+1 213 688 4759
cbrodeur@deloitte.com
