Bridging IaaS With PaaS To Deliver The Service-Oriented Data Center


As enterprises deploy private IaaS clouds into production, they are reevaluating their future application delivery models. SUSE and WSO2 believe that private PaaS will leverage the automation and scalability of private IaaS solutions, such as the OpenStack-based SUSE Cloud, to deliver the secure, standardized development environments that will make migrating to an agile, service-oriented delivery model possible. Come learn how the combination of IaaS and PaaS enables enterprises to more efficiently and flexibly tackle the challenges of the modern connected enterprise.

  • This is why: OpenStack has the greatest industry support and the most vibrant community among open source cloud software projects. The most recent release, Grizzly, had contributions from 517 developers at 180 companies. In the long run, the vibrancy of the community will lead to more rapid innovation than competing projects and even proprietary solutions. When we were formulating our private cloud strategy, we spoke with our customers, and they wanted to know what our OpenStack solution was, so customer demand is really driving our participation in OpenStack. Finally, the formation of the OpenStack Foundation helps ensure the long-term viability of the project and that the project's development goals benefit the industry rather than a single vendor, unlike CloudStack and Eucalyptus. Our developers find this to be the most professional community, because most contributors come from vendors and are committed to solving user issues.
  • But why do you need a distribution? Enterprises don't simply download the Linux kernel and deploy it. What you see in orange is OpenStack. To make OpenStack usable, you also need other items: servers; an operating system and a hypervisor or two for OpenStack to run on; a messaging service and a database; and all of these items, with support for them, integrated together. You also need an install framework to ease the initial set-up and ongoing use of the cloud. Then there are other management tools that need to integrate seamlessly into your private cloud environment and be fully supported. Piecing all of this together yourself and getting it deployed can be costly and time consuming.
  • In particular, the installation of OpenStack can be challenging. Here we have three numbers: 782, 11, and 2. 782 is the number of parameters within OpenStack that can be chosen at installation, and these need to be set in a certain order; without an installation framework you will need to track and remember your choices and where you are in the process. 11 is the number of components that also need to be deployed in an orderly fashion; if you deploy one of them out of order, you will need to start over. 2 is the number of days it took one of our SI partners in the Netherlands to install OpenStack for clients, even after gaining experience through multiple engagements. It is also the number of hours it took them with SUSE Cloud, using our install framework based on Crowbar. We also have a service provider customer using SUSE Cloud. They were trying to build OpenStack-based clouds in their home country and four others, but they were struggling to get the clouds up and running and pulled back their plans to focus only on the home country. They then started using SUSE Cloud and found it so easy to set up and run that they decided to continue with their business expansion and build clouds in the other four countries.
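  • The value of an install framework can be sketched in a few lines. The snippet below is a hypothetical illustration (the component list and parameter names are simplified, not Crowbar's actual API): it shows the two things the note says a framework must do — deploy components in dependency order regardless of how they were requested, and record each configuration choice so a failed run can resume instead of starting over.

```python
# Hypothetical sketch of what an install framework such as Crowbar automates:
# ordered deployment of OpenStack components plus a record of chosen parameters.
# The component names are real OpenStack services; the logic is illustrative only.

OPENSTACK_DEPLOY_ORDER = [
    "database", "message-queue", "keystone", "glance", "nova", "horizon",
]

def deploy(components, chosen_parameters):
    """Deploy requested components strictly in dependency order,
    logging each step and its parameters so progress is never lost."""
    log = []
    for component in OPENSTACK_DEPLOY_ORDER:
        if component not in components:
            continue
        params = chosen_parameters.get(component, {})
        log.append((component, params))  # the framework remembers your choices
    return log

log = deploy(
    components={"nova", "keystone", "database", "message-queue", "glance"},
    chosen_parameters={"nova": {"hypervisor": "kvm"}},
)
print([name for name, _ in log])
```

Done by hand across 782 parameters and 11 components, this bookkeeping is exactly what consumed the two days; automated, it collapses to hours.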
  • This slide is used to explain the pricing and how it all fits together.
  • SUSE Cloud, an Infrastructure-as-a-Service private cloud solution, provides the functionality, in combination with SUSE tools like SUSE Studio and SUSE Manager, to enable the implementation of automated, cloud-centric lifecycle management. It is clear that cloud both enhances the development process and demands that IT revisit that process to make sure the organization gets maximum advantage from the new technology.
  • The breakup of the corporation; agile, collaborative ecosystems; the long tail; keeping pace with IT innovation.
  • In the abstract, business agility can be defined as your ability to rapidly change business vectors. A business vector is your business speed and direction. The direction may lead into new markets and new products, or engaging with new participants. Reducing time to IT solution delivery increases your team's ability to adjust the business vector and match business opportunity. With adequate instrumentation, IT delivery agility can be quantified. Consider the following agility metric recommendations: time to create a project workspace; time to build, integrate, and test; time to approve and promote; time to deploy and release; and dwell time, the time spent waiting for the next operation to commence or complete. After application project inception and before coding commences, systems administrators must create project workspaces. How long does your team wait before gaining access to source code management repositories, requirement management projects, and defect tracking projects? Moving code through build, integration, and test tools is often a time- and labor-intensive process. The entire team waits while application assets are built, integrated, and tested. When teams use iterative development processes, the wait time aggregates over several hundred or thousands of cycles. How long does your team wait during the build, integration, and test phases? When one team member finishes a task and the work enters an approval phase, how long does the team wait? After the work is approved to move through a phase gate, how long before the project is promoted into the next phase?
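  • With adequate instrumentation, these metrics reduce to simple arithmetic over timestamped phase events. The sketch below is illustrative only — the event names and dates are invented, not taken from any real toolchain — but it shows how "time to create workspace" and "dwell time" fall out of an event log.

```python
from datetime import datetime

# Hypothetical event log for one project; names and timestamps are made up.
events = [
    ("workspace_requested", datetime(2013, 6, 3, 9, 0)),
    ("workspace_ready",     datetime(2013, 6, 5, 14, 0)),
    ("build_started",       datetime(2013, 6, 5, 15, 0)),
    ("tests_passed",        datetime(2013, 6, 5, 17, 30)),
    ("approved",            datetime(2013, 6, 10, 11, 0)),
    ("deployed",            datetime(2013, 6, 11, 9, 0)),
]

def elapsed_hours(start_event, end_event, log):
    """Hours between two recorded events -- the raw material of every
    agility metric listed above."""
    times = dict(log)
    return (times[end_event] - times[start_event]).total_seconds() / 3600

# Time to create project workspace: 53.0 hours of waiting before coding begins.
print(elapsed_hours("workspace_requested", "workspace_ready", events))
# Dwell time between passing tests and approval: 113.5 hours.
print(elapsed_hours("tests_passed", "approved", events))
```

Summed across hundreds of iterations, dwell times like these dominate delivery time, which is why the later slides focus on automating them away.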
  • In addition to the APIs and repositories that improve the cloud workflow, which we already discussed, the service-centric cloud needs a new set of APIs to locate, configure, and connect to services. In addition to the repositories that hold deployable images, predefined services will exist in the cloud. As SUSE continues to develop our private cloud offerings, we will do so with a focus on ensuring smooth delivery of IT services to the line of business.
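  • A minimal sketch of such a locate/configure/connect API, under the assumption that it takes the shape of a service registry (the class, method names, and endpoints below are hypothetical, not any SUSE or WSO2 API): workloads ask for a predefined service by name instead of hard-coding its endpoint.

```python
# Illustrative service registry for a service-centric cloud.
# All names and endpoints are invented for this sketch.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, config=None):
        """Publish a predefined cloud service under a well-known name."""
        self._services[name] = {"endpoint": endpoint, "config": config or {}}

    def locate(self, name):
        """Resolve a service name to its endpoint and configuration."""
        entry = self._services.get(name)
        if entry is None:
            raise LookupError(f"service {name!r} not registered")
        return entry["endpoint"], entry["config"]

registry = ServiceRegistry()
registry.register("identity", "https://cloud.example/identity", {"version": "v2"})
endpoint, config = registry.locate("identity")
```

Because consumers bind to names rather than addresses, operators can move, scale, or reconfigure the underlying service without breaking the applications that use it.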
  • When defining a roadmap to align IT's pace with business agility expectations, establish IT team objectives that quicken IT solution development and delivery, offer new technology as on-demand shared services, and enhance your team's ability to rapidly satisfy emerging business use cases (e.g. social collaboration, mobile application connectivity, ecosystem partnering). Open source PaaS, open APIs, and open ecosystems are accelerating agility, empowering developers, and enabling innovative business strategies. In a recently published white paper, I describe how adopting a New IT plan can create a responsive IT team. The path to New IT requires moving away from traditional application platforms, traditional team structure, and traditional information flows. Responsive IT teams are adapting their infrastructure, processes, and tooling to re-invent the application platform and re-think application delivery. The New IT architecture underlying Responsive IT intelligently incorporates cloud platforms, BigData analytics, enterprise DevOps, and API-first development.
  • New Agile, Multi-Purpose Tools
  • Cloud platforms exhibiting Cloud Native PaaS architecture provide an opportunity to increase business innovation and creativity. Cloud native platform solutions shield teams from infrastructure details and inject new behavior into the application. Cloud native PaaS architecture requires infrastructure innovation in provisioning, service governance, management, deployment, load balancing, policy enforcement, and tenancy. Cloud native, innovative provisioning infrastructure increases tenant density and streamlines code deployment and synchronization. Multi-tenancy within middleware containers enables teams to customize applications and services per consumer by changing run-time configuration settings instead of provisioning new instances. A cloud platform may automate governance and enforce policies (i.e. security, service level management, usage) through enterprise PaaS services. Cloud provisioning may fulfill enterprise deployment requirements across all service providers and technologies used by solution delivery teams.

To re-invent the platform and achieve these benefits, new Cloud Native platform architectural components and services are required. Traditional client-server and N-tier web application architectures do not exhibit the requisite cloud characteristics (i.e. elastic scalability, multi-tenancy, resource pooling, or self-service). Figure 1 below depicts the new cloud platform architectural components and services. The PaaS controller layer deploys, scales, monitors, and manages an elastic middleware cloud. PaaS foundation services provide common solution building blocks. A complete, comprehensive, and cloud-aware middleware container layer delivers new cloud-aware capabilities to business applications. The middleware container layer should not be tightly coupled to the PaaS foundation; a cartridge or droplet pattern is used to support running any application or service container on the PaaS. By providing a cartridge plug-point, Cloud Native PaaS environments can run any language, framework, or server (after appropriate integration via the cartridge API and agents).

WSO2 Stratos: a complete middleware PaaS, providing capabilities for application and service hosting, integration, business processes, identity management, storage, and more (all the capabilities of the WSO2 Carbon platform) in a self-service, elastic, multi-tenant platform.
Stratos Foundation Services: core platform services available in the system, including security, registry, messaging, logging, storage, task management, and billing.
Stratos Cartridge: a wrapper allowing a runtime product (e.g. WSO2 Carbon middleware) to run "as-a-Service" within the Stratos platform. A cartridge can be architected to support shared-process multi-tenancy, or a single-tenant legacy runtime can be encapsulated for management by the Stratos system.
WSO2 Carbon Service Types: each WSO2 Carbon family product is available as a cartridge that allows it to be exposed as a multi-tenant service, including Enterprise Service Bus, Application Server, Governance Registry, Identity Server, etc.
Stratos Controller: core services supporting self-service, elastic scaling and control of the underlying IaaS layer, and automated network, workload, and artifact distribution.

Elastic Load Balancer: the Elastic Load Balancer (ELB) balances load across cloud service instances on-premise or in the cloud. The ELB should provide multi-tenancy, fail-over, and auto-scaling of services in line with dynamically changing load characteristics. Cloud Native Elastic Load Balancers are tenant-aware, service-aware, partition-aware, and region-aware. They can direct traffic based on the consuming tenant or target service, manage traffic across diverse topologies (i.e. private partitions, shared partitions, hybrid cloud), and direct traffic according to performance, cost, and resource pooling policies. A Cloud Native ELB is tightly integrated with the Service Load Monitor component and dynamically adjusts to topology changes.

Service Load Monitor: the Service Load Monitor component acquires load information from multiple sources (e.g. app servers, load balancers) and communicates utilization and performance information to an Elastic Load Balancer, which is responsible for distributing requests to the optimal instances based on tenant association, load balancing policies, service level agreements, and partitioning policies.

When the level of abstraction is raised above Infrastructure as a Service (IaaS) instances, teams no longer have direct access to specific virtual machines. New Cloud Native components are required to flexibly distribute applications, services, and APIs across a dynamic topology. A Cloud Controller, Artifact Distribution Server, and Deployment Synchronizer perform DevOps activities (i.e. continuous deployment, instance provisioning, automated scaling) without requiring a hard, static binding to run-time instances.

Cloud Controller: a Cloud Native Cloud Controller (or auto-scaler) component creates and removes cloud instances (virtual machines or Linux containers) based on input from the Load Monitor component. The Cloud Controller right-sizes the instance count to satisfy shifting demand, and conforms instance scaling to quota and reservation thresholds (i.e. minimum instance count, maximum instance count). The Cloud Native Cloud Controller may provision instances on top of bare metal machines, hypervisors, or Infrastructure as a Service offerings (e.g. Amazon EC2, OpenStack, Eucalyptus).

Artifact Distribution Server: the Artifact Distribution Server takes complete applications (i.e. application code, services, mediation flows, business rules, and APIs) and breaks the composite bundle into per-instance components, which are then loaded into instances by a Deployment Synchronizer. The Artifact Distribution Server maintains a versioned repository of run-time artifacts and their association with cloud service definitions.

Deployment Synchronizer: the Deployment Synchronizer checks out and deploys the right code for each cloud application platform instance (e.g. application server, Enterprise Service Bus, API Gateway). With infrastructure and servers abstracted and encapsulated by the cloud, a Cloud Native PaaS Management Console allows control of tenant partitions, services, quality of service, and code deployment through either a Web UI or command-line tooling.

Cloud Native PaaS architecture business benefits: Cloud Native PaaS architecture accelerates innovation, increases operational efficiency, and reduces cost. The traditional, keep-the-lights-on operational run-rate consumes precious resources and limits innovative new projects. By optimizing project footprint across pooled resources on a shared Cloud Native PaaS infrastructure, Responsive IT can reduce operational spend, improve total cost of ownership (TCO), and make more projects financially viable. Multi-tenant delivery models create an efficient delivery environment and significantly lower solution deployment cost. For more information on the financial benefits of multi-tenant, Cloud Native platforms, read the white paper. By building a Cloud Native PaaS environment, you provide your teams with a platform to rapidly develop solutions that address connected business use cases (i.e. contextual business delivery, ecosystem development, mobile interactions). Recommended reading: A Path to Responsive IT; PaaS Services; Does your PaaS architecture show a paradigm shift?; Cloud-aware Applications and PaaS Architecture.
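  • The Cloud Controller's right-sizing decision can be sketched as a small function. This is an illustrative model, not Stratos code: the target-utilization heuristic, function name, and thresholds are assumptions made for the example, but it captures the two constraints the text names — tracking the Load Monitor's demand signal while respecting minimum and maximum instance counts.

```python
# Illustrative auto-scaler logic for a Cloud Controller component.
# The heuristic and all numbers are hypothetical.

def right_size(current_instances, avg_utilization,
               target_utilization=0.6, min_instances=2, max_instances=20):
    """Return the instance count that would bring average utilization
    back toward the target, clamped to quota/reservation thresholds."""
    if avg_utilization <= 0:
        desired = min_instances          # idle service: fall back to the floor
    else:
        # Scale the fleet proportionally to observed vs. target utilization.
        desired = round(current_instances * avg_utilization / target_utilization)
    return max(min_instances, min(max_instances, desired))

print(right_size(4, 0.9))   # heavy load: scale out
print(right_size(4, 0.15))  # light load: scale in, but never below the minimum
```

Real controllers add damping (cool-down periods, hysteresis) so the fleet does not thrash, but the clamp-to-thresholds structure is the same.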
  • Traditional application PaaS (aPaaS) environments do not help organizations build apps; they simply serve as a cloud run-time environment. DevOps PaaS brings no waits, faster phase execution, widespread accessibility, rapid grassroots innovation, and increased resource availability to IT projects. DevOps PaaS delivers development, test, and production run-time clouds that are integrated into development workspaces containing source code management, defect tracking, requirements management, test automation frameworks, and continuous build. Figure 2 describes the infrastructure topology underlying a DevOps PaaS. By automating software activities, workflow, and phase approval gates, a DevOps PaaS decreases software development and delivery times. A rapid IT timeframe closely matching today's fast business pace will accelerate revenue growth and enhance customer retention rates. A New IT model driven by DevOps PaaS will expand development team participation, lower IT cost, and increase business agility. Recommended reading: DevOps Meets ALM in the Cloud; PaaS Performance Metrics; Multi-tenant, shared container PaaS TCO; WSO2 App Factory product page.
  • Agile and DevOps principles must be applied across a cross-functional team and the entire lifecycle (e.g. project inception, design, development, deployment, and management). Operations activities related to deployment and release management often hinder agility and time-to-market. The level of effort required to deploy a real-world application is often non-trivial. Continuous deployment technology automates operations activities and replaces manual intervention. While "dwell time" sounds cozy and refreshing, excessive wait states and downtime between activities diminish team efficiency and engagement. Automated notifications eliminate dwell time between hand-offs. Automated project workspace creation, cloud environment provisioning, and on-demand self-service access reduce wait time between software development phases. A DevOps focus on continuous activity execution (e.g. continuous build, continuous integration, continuous test, continuous delivery) creates a "no wait" environment. Teams do not have to wait for the next script to run or for the next activity to commence. By incorporating automation into developer and operations processes, teams bypass time-consuming manual tasks and gain faster phase execution. Both DevOps and PaaS promote simple, on-demand self-service environments that shield team members from complexity and reduce skill hurdles. By offering on-demand self-service access, rapid business innovation and experimentation become possible. By reducing complexity, team members are not required to obtain special training and skills before consuming IT services and infrastructure. To read more about Enterprise DevOps PaaS accelerating team agility, read a recent blog post.
  • Foundational performance metrics focus on time to market. Key metrics include: time and effort to create a new application environment; time to redeploy an application; time to promote an application into a new lifecycle phase. Optimization performance metrics focus on portfolio efficiency. Key metrics include: the ability to dynamically right-size infrastructure with elastic scalability; the ability to re-use existing platform services and business services from a resource pool instead of re-building the solution stack. Transformational performance metrics focus on productivity. Key metrics include: time and effort required to integrate a business process or event processor when creating a complex app; time and effort required to apply policy across tenants; cost to operate an application per user or transaction, measured against the value provided by the application or transaction.

    1. Bridging IaaS and PaaS to Deliver the Service-Oriented Data Center. Frank Rego (frego@suse.com); Chris Haddad (@cobiacomm on Twitter); more about Platform as a Service at
    2. SUSE Cloud: OpenStack-based IaaS Private Cloud
    3. What are the Drivers of Private Cloud? Lower costs: reduce upfront capital expense; automation to reduce ongoing administration costs. Increased agility: dynamic configuration of IT resources; respond quickly to business demands; self-service provisioning. Greater control and security: data remains inside the firewall; standard enterprise security.
    4. What is OpenStack?
    5. Why OpenStack?
    6. Why an OpenStack Distribution? OpenStack components: Compute (Nova), Images (Glance), Authentication (Keystone), Object (Swift), EC2 API, Dashboard (Horizon), OpenStack APIs. Install framework: SMT, Crowbar, DHCP, TFTP, Chef. Required services: RabbitMQ, PostgreSQL. Also required: management tools (Billing, VM Mgmt, Image Tool, App Monitor, Sec & Perf, Management Portal), a hypervisor, an operating system, and physical infrastructure (x86-64 server with virtualization).
    7. Why an Install Framework? 782 parameters; 11 components; 2 days versus 2 hours.
    8. SUSE Cloud 1.0. OpenStack components: Compute (Nova Essex), Images (Glance), Authentication (Keystone), Object (Swift), EC2 API, Dashboard (Horizon), OpenStack Cloud APIs, API clients. SUSE Cloud enhancements: Admin Server with SMT, Crowbar, DHCP, TFTP, Chef; Object (RADOS) and Block (RBD) storage. SUSE and partner products: VM Mgmt (SUSE Manager), Image Tool (SUSE Studio), Billing, App Monitor, Sec & Perf, Portal. Required services: RabbitMQ, PostgreSQL. Hypervisor: Xen, KVM. Operating system: SUSE Linux Enterprise Server on any x86-64 server certified on SUSE Linux Enterprise 11 SP2.
    9. SUSE Cloud 2.0 (target 3Q2013). OpenStack components: Compute (Nova Grizzly), Images (Glance), Authentication (Keystone), Object (Swift), Volume (Cinder), Network (Networking), EC2 API, S3 (RGW), Dashboard (Horizon), OpenStack Cloud APIs, API clients. SUSE Cloud enhancements: Admin Server with SMT, Crowbar 2, DHCP, TFTP, Chef; Object (RADOS) and Block (RBD) storage. SUSE and partner products: Billing (CloudCruiser), VM Mgmt (SUSE Manager), Image Tool (SUSE Studio), App Monitor, Sec & Perf, Portal (RightScale). Required services: RabbitMQ, PostgreSQL. Hypervisors: Xen, KVM, Hyper-V. Operating system: SUSE Linux Enterprise Server on any x86-64 server certified on SUSE Linux Enterprise 11 SP2.
    10. SUSE® Cloud Structure. Admin Server: SLES, Chef server, Crowbar, software mirror, TFTP, PXE server. Control Node: SLES, database, message queue, self-service portal, image repository, centralized tracking, scheduler, identity and authentication, storage. Compute/Storage Node: SLES, Xen or KVM, cloud compute, storage proxy. Nodes are provisioned via Crowbar + PXE boot; updates come from the Customer Center.
    11. Why SUSE Cloud? Enterprise ready: a 20-year history of commercializing and supporting open source projects in the enterprise; backed by the excellence of SUSE engineering and an award-winning support organization; packaged for enterprise deployments and integrated with SUSE maintenance and lifecycle management; Crowbar orchestration to automate installation at scale. Leverage existing infrastructure while optimizing current licensing costs: runs on standard hardware; SUSE application and hardware certifications. Integration with SUSE Studio and SUSE Manager makes it easy to build and manage cloud applications for multiple cloud environments (hybrid cloud).
    12. SUSE Cloud Lifecycle Management: Build → Image Creation → Test & QA → Provision & Deploy → Manage & Monitor, backed by repositories and an API.
    13. Service-Oriented IT Drivers
    14. Service-Oriented Delivers the Speed of Now: time to create project workspace; time to build, integrate, test; time to approve, promote; time to deploy, release; dwell time (time waiting for the next operation to commence or complete).
    15. Service-Oriented Yields
    16. Our Service-Oriented Vision: choose application template → auto-provision application platform → auto-deploy application and services → re-configure platform → re-configure application → scale and balance tenants → monitor platform and tune policies; backed by repositories, cloud platform services, and APIs.
    17. Service-Oriented Delivery Models
    18. Outlook for Private PaaS. Open environment: polyglot languages (Java, PHP, JavaScript, Scala); multi-framework (JEE, Spring, CXF, Ember.js). Complete: a platform for complex applications; integrates legacy with next generation, for example WebSphere with WSO2. Enterprise aligned: policy-based control; enables DevOps practices and IT-as-a-Service; supports enterprise chargeback and showback scenarios.
    19. Source: Tools
    20. New IT Reference Architecture: WSO2 Carbon middleware images; application containers and services; WSO2 Stratos PaaS Controller; WSO2 Stratos Foundation Services.
    21. WSO2 Architecture Advantage. Availability: load monitor; balancing and failover across hybrid clouds; state replication and session replication; multiple load balancers with keepalived or DNS RR; native multi-tenancy; dynamic clustering. Scalability: tenant partitioning; private jet mode; ghost deployment; BAM 2.0 architecture; auto-scaling; Elastic Load Balancer; multi-tenant shared container. Management: cloud controller; BigData logging infrastructure; Artifact Distribution Controller and deployment synchronization; P2 repository; consistent management and infrastructure services across the entire platform; management console.
    22. Complete, Cloud-Native PaaS Services: Application, Integration, Analytics, Identity, Data
    23. Open Source PaaS: Cloud Native Architecture
    24. Consider Enhanced Virtualization Models: pure hardware → virtual machine → SUSE Cloud → SUSE Cloud with Stratos Cartridge (LXC) → SUSE Cloud with Stratos Carbon (shared process), trading resource optimization for agility. SUSE Cloud with WSO2 Stratos 2.0 supports all models and model combinations.
    25. Cloud Native PaaS Difference
    26. Tenant-Aware and Service-Aware Load Balancing
    27. Automated Provisioning Service
    28. Automated App Deployment Service
    29. Log Aggregation Service
    30. Bridging IaaS and PaaS
    31. Enterprise DevOps PaaS: Bridging Development with Deployment
    32. DevOps Service-Orientation: A Developer's Perspective
    33. Service Performance Metrics: foundational (time to market); optimization (portfolio efficiency); transformational (productivity).
    34. Bridge IaaS with PaaS
    35. Corporate Headquarters: Maxfeldstrasse 5, 90409 Nuremberg, Germany; +49 911 740 53 0 (Worldwide); www.suse.com. Join us on: www.opensuse.org
    36. Unpublished Work of SUSE. All Rights Reserved. This work is an unpublished work and contains confidential, proprietary and trade secret information of SUSE. Access to this work is restricted to SUSE employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of SUSE. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability. General Disclaimer: This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.