
Cloud 2010


  1. An Architecture for a Mashup Container in Virtualized Environments Massimo Maresca Computer Platform Research Center (CIPI) University of Padova & Genova (Italy) [email_address]
  2. Agenda <ul><li>Introduction </li></ul><ul><li>The Mashup Container </li></ul><ul><li>The monolithic solution in Virtualized Environments </li></ul><ul><li>Conclusions </li></ul>
  3. 1. Introduction (1/4) <ul><li>Scenario </li></ul><ul><li>High availability of content and services (often provided according to the Software as a Service - SaaS - model) through different technologies such as RSS feeds, Atom, JSON, REST-WS, SOAP-WS, etc. </li></ul><ul><li>Availability of tools, such as Yahoo! Pipes and JackBe Presto, for the rapid development of convergent Composite Services (a.k.a. Mashups) that combine different resources </li></ul><ul><ul><ul><li>Such tools mainly support Data Mashups (i.e., Mashups that combine data extracted from different sources), while they do not support Event Driven Mashups (i.e., Mashups whose basic components interact through events) </li></ul></ul></ul><ul><li>Adoption of the Cloud Computing paradigm for Service Delivery. In this presentation we mainly refer to the PaaS (Platform as a Service) model. </li></ul><ul><li>We describe an architecture for a Mashup Container that makes it possible to execute Mashups according to the PaaS model. </li></ul>
  4. 1. Introduction (2/4) <ul><li>Service Taxonomy and Mashup Patterns: the “Monitor Resource + Notification” pattern </li></ul><ul><li>Examples: </li></ul><ul><li>Reminder Mashup </li></ul><ul><li>Alert Mashup </li></ul><ul><li>Notification of interesting events Mashup </li></ul>
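The “Monitor Resource + Notification” pattern described above can be sketched in a few lines of Java. This is an illustrative toy, not the platform's implementation: the class name, the predicate-based trigger condition, and the string notifications are all assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy sketch of the "Monitor Resource + Notification" pattern: a monitor
// watches observations of a resource and, when the watched condition holds,
// records a notification (in a real Mashup this would be an SMS, e-mail, etc.).
class ResourceMonitor<T> {
    private final Predicate<T> condition;
    private final List<String> notifications = new ArrayList<>();

    ResourceMonitor(Predicate<T> condition) {
        this.condition = condition;
    }

    // Called on each new observation of the monitored resource
    // (e.g., an RSS item, a calendar entry, a sensor reading).
    void observe(T value) {
        if (condition.test(value)) {
            notifications.add("NOTIFY: " + value);
        }
    }

    List<String> sent() {
        return notifications;
    }
}
```

An Alert Mashup, for instance, would instantiate the monitor with a threshold condition and wire the notification side to a messaging Service Proxy.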
  5. 1. Introduction (3/4) <ul><li>Service Taxonomy and Mashup Patterns: the “Monitor Resource + Processing + Notification” pattern </li></ul><ul><li>Examples: </li></ul><ul><li>Time-, Location- or Presence-dependent Notification Mashup </li></ul><ul><li>Non-repudiation Mashup </li></ul>
  6. 1. Introduction (4/4) <ul><li>Service Taxonomy and Mashup Patterns: traditional Data Mashup and “something” + Map Mashup </li></ul><ul><li>Examples: </li></ul><ul><li>Data Fusion </li></ul><ul><li>Data Migration </li></ul><ul><li>… </li></ul><ul><li>Examples: </li></ul><ul><li>Traffic on Map </li></ul><ul><li>Weather on Map </li></ul><ul><li>… </li></ul>
  7. 2. The Mashup Container (1/7) <ul><li>Server Side versus Client Side </li></ul><ul><li>The Server Side solution is preferable for the following reasons: </li></ul><ul><li>Service continuity </li></ul><ul><li>User Terminal power consumption </li></ul><ul><li>Security </li></ul>[Figure: client-side execution (Svc1-Svc3 combined in the user's browser) versus server-side execution (the browser sends a Request to the Mashup Engine, which combines Svc1-Svc3 and returns the Results)]
  8. 2. The Mashup Container (2/7) <ul><li>Mashup Lifecycle </li></ul><ul><li>A Mashup is created by means of a (possibly graphical) Service Creation Environment and stored as an XML file in a repository. </li></ul><ul><li>The new Mashup is deployed into a software system - the Service Execution Platform (SEP) - that is in charge of executing the composite services. </li></ul><ul><li>The Mashup is now ready to be used by end users. There are two phases to take into account: </li></ul><ul><ul><li>Activation: the SEP is configured by the user (i.e., the user sets the input properties of the service) so that it is ready to execute the Mashup when a new initial event occurs. </li></ul></ul><ul><ul><li>Execution: when a Mashup previously activated by the user receives a new initial event, the actual execution of the service logic takes place. </li></ul></ul>
  9. 2. The Mashup Container (3/7) <ul><li>Requirements for the Mashup Container </li></ul><ul><li>Deployment support according to the PaaS model </li></ul><ul><li>Automatic Scalability , to support the simultaneous execution of a very large number of Mashup instances on variable-size hardware systems. </li></ul><ul><li>Fault Tolerance , to guarantee the levels of reliability and availability typical of Telecom Operators (99.99999%). </li></ul><ul><li>Low latency , to effectively support the execution of long sequences of short-lived Telco services. </li></ul><ul><li>Authentication, Authorization and Accounting , to support the seamless integration of the Mashup with the AAA modules already present in the owner’s platform environment. </li></ul><ul><li>Management , to have complete control of the platform resources, to control Mashup activation/execution, and to allow the platform administrator to perform the appropriate management actions (e.g., enforcing SLA rules). </li></ul>
  10. 2. The Mashup Container (4/7) <ul><li>Description of the Mashup Container </li></ul><ul><li>The Mashup Container is composed of two sub-systems: </li></ul><ul><li>The Deployment Module manages the installation of new components in the SEP. It is composed of two sub-modules, namely: </li></ul><ul><ul><li>Mashup Deployer (i.e., translation of an XML file) </li></ul></ul><ul><ul><li>Service Deployer (i.e., interaction with the Java classloading system for the addition of new Java classes) </li></ul></ul><ul><li>The Service Execution Platform is composed of two software entities (see figure in next slide): </li></ul><ul><ul><li>The Service Proxy (SP): it wraps an external resource to make it compliant with the APIs used by the Orchestrator (invokeAction / notifyEvent) </li></ul></ul><ul><ul><li>The Orchestrator: it executes the Mashup logic by combining the available SPs </li></ul></ul>
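The slide names invokeAction / notifyEvent as the contract between the Orchestrator and the Service Proxies, but does not give signatures. The Java sketch below is one plausible shape for that contract; the parameter types, the EchoProxy class, and the callback wiring are assumptions made for illustration, and it is written against a modern JDK for brevity.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical Service Proxy contract: the Orchestrator calls invokeAction,
// and the proxy pushes events back (the notifyEvent direction) via a callback.
interface ServiceProxy {
    void invokeAction(String action, Map<String, String> params);

    // Registers the Orchestrator callback that receives events from the resource.
    void onEvent(Consumer<String> orchestratorCallback);
}

// Minimal in-memory proxy used only to illustrate the event flow. A real SP
// would translate the call into RSS/REST/SOAP traffic to the external resource.
class EchoProxy implements ServiceProxy {
    private Consumer<String> callback = e -> {};

    @Override
    public void invokeAction(String action, Map<String, String> params) {
        callback.accept("done:" + action);  // synthetic completion event
    }

    @Override
    public void onEvent(Consumer<String> orchestratorCallback) {
        this.callback = orchestratorCallback;
    }
}

// The Orchestrator combines the available SPs according to the Mashup logic.
class Orchestrator {
    final Map<String, ServiceProxy> proxies = new HashMap<>();
    String lastEvent;

    void register(String name, ServiceProxy p) {
        proxies.put(name, p);
        p.onEvent(e -> lastEvent = e);  // SP -> Orchestrator notification path
    }

    void run(String proxyName, String action) {
        proxies.get(proxyName).invokeAction(action, new HashMap<>());
    }
}
```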
  11. 2. The Mashup Container (5/7) <ul><li>Two approaches for the Container implementation: </li></ul><ul><ul><li>Framework-based solution (distributed approach) </li></ul></ul><ul><ul><li>Monolithic solution (all-in-one-JVM approach) </li></ul></ul>
  12. 2. The Mashup Container (6/7) <ul><li>Framework-based Mashup Container implementation </li></ul><ul><li>Technology: Java Message Service (JMS) </li></ul><ul><li>Implementation used: Red Hat JBoss Application Server </li></ul><ul><li>Deployment module: Application Server dependent </li></ul><ul><li>Performance: </li></ul><ul><ul><li>Latency: 85% of the processing time needed to handle a single request is spent on framework operations (e.g., message parsing, queue management, communication delay, etc.) </li></ul></ul><ul><li>Monolithic Mashup Container implementation </li></ul><ul><li>Technology: POJO (Plain Old Java Objects) </li></ul><ul><li>Implementation used: JDK version 6 </li></ul><ul><li>Deployment module: development of a new Java classloader class </li></ul><ul><li>Performance: </li></ul><ul><ul><li>Latency: about 8x faster than the framework-based version </li></ul></ul>
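The monolithic deployment module hinges on a custom Java classloader that can pull freshly deployed service classes into the running JVM. A minimal sketch of that idea is shown below; the class name, the directory-based layout, and the constructor shape are assumptions, not the paper's actual Service Deployer.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical Service Deployer building block: a ClassLoader that loads
// newly deployed service classes from a directory at runtime, so the SEP
// can add components without restarting the JVM.
class ServiceClassLoader extends ClassLoader {
    private final Path classDir;

    ServiceClassLoader(Path classDir, ClassLoader parent) {
        super(parent);
        this.classDir = classDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Map the fully qualified name to a .class file under classDir.
        Path file = classDir.resolve(name.replace('.', '/') + ".class");
        try {
            byte[] bytes = Files.readAllBytes(file);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```

Because loadClass delegates to the parent first, platform classes keep resolving normally; creating a fresh loader per deployment also allows redeploying a service under the same class name.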
  13. 2. The Mashup Container (7/7) <ul><li>Comparison between the two solutions </li></ul>Framework-based version: <ul><li>Replicated instances of the Orchestrator and SPs are deployed on different nodes </li></ul><ul><li>A Session is usually executed on different nodes to balance the workload </li></ul><ul><li>Communication overhead due to the framework </li></ul><ul><li>Framework-based Fault Tolerance </li></ul>Monolithic version: <ul><li>Each node runs a “complete” SEP </li></ul><ul><li>A Session is completely executed on one node </li></ul><ul><li>NO communication overhead due to the framework (the components are implemented as POJOs) </li></ul><ul><li>Fault Tolerance given by the Virtualization Environment </li></ul>
  14. 3. The monolithic solution in Virtualized Environments (1/3) <ul><li>Replication of the monolithic SEP (called the Service Processor VM). </li></ul><ul><li>Session-level scalability: each Session is completely executed on a single VM. </li></ul><ul><li>The Orchestration Management System (OMS) manages Session launching on the basis of the resources available in the system. </li></ul><ul><li>The OMS is in charge of adding/removing VMs in the system on the basis of the workload. </li></ul><ul><li>The fault tolerance requirement is fulfilled by means of the Hypervisor-based Fault Tolerance mechanism provided by the Virtualized Environment (see next slide for details). </li></ul>
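The OMS decisions above (size the VM pool to the workload, pin each Session to one VM) can be made concrete with a small sketch. This is an illustrative policy under stated assumptions, not the paper's algorithm: the per-VM Session capacity, the minimum pool size, and the least-loaded placement rule are all invented for the example.

```java
// Illustrative OMS sizing and placement policy (assumed, not from the paper):
// keep enough Service Processor VMs for the active Sessions, and pin each
// new Session to the least-loaded VM so it runs entirely on one node.
class OrchestrationManager {
    private final int sessionsPerVm;  // assumed per-VM Session capacity
    private final int minVms;         // assumed minimum pool size

    OrchestrationManager(int sessionsPerVm, int minVms) {
        this.sessionsPerVm = sessionsPerVm;
        this.minVms = minVms;
    }

    // Ceiling division: enough VMs to host every active Session.
    int targetVmCount(int activeSessions) {
        int needed = (activeSessions + sessionsPerVm - 1) / sessionsPerVm;
        return Math.max(minVms, needed);
    }

    // Session-level scalability: choose the VM with the fewest Sessions.
    int pickVm(int[] sessionsPerNode) {
        int best = 0;
        for (int i = 1; i < sessionsPerNode.length; i++) {
            if (sessionsPerNode[i] < sessionsPerNode[best]) best = i;
        }
        return best;
    }
}
```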
  15. 3. The monolithic solution in Virtualized Environments (2/3) <ul><li>The Hypervisor-based Fault Tolerance mechanism provided by VMware </li></ul><ul><li>Two VMs (namely the Primary and the Secondary/Backup VM) run in lockstep on two distinct physical hosts. </li></ul><ul><li>The two VMs are connected through a Logging Traffic network link. </li></ul><ul><li>The two VMs run independently, but only the Primary VM actually communicates with the external world. </li></ul><ul><li>In case of a failure of the Primary VM, the Secondary VM replaces it and continues the execution of the applications installed on the first VM in a transparent way and with zero downtime. </li></ul>Source: VMware whitepaper “Protecting Mission-Critical Workloads with VMware Fault Tolerance”
  16. 3. The monolithic solution in Virtualized Environments (3/3) <ul><li>Performance test </li></ul><ul><li>Testbed - Hosts: IBM BladeCenter HS22 (2 Intel E5530 processors at 2.4 GHz, 32 GB RAM); Guest OS: Red Hat Enterprise Linux 5.4 64-bit </li></ul><ul><li>Results: the performance degradation is negligible </li></ul><table><tr><th></th><th>Latency Avg (µs)</th><th>Throughput (Events/sec)</th><th>Log. Traffic (KB/sec)</th></tr><tr><td>SEP FT On</td><td>82</td><td>15948870</td><td>761</td></tr><tr><td>SEP FT Off</td><td>60</td><td>16187340</td><td>Not Applicable</td></tr></table>
  17. 4. Conclusions <ul><li>We defined a Mashup Container that allows users to deploy their Composite Services according to the PaaS model. </li></ul><ul><li>We described an architecture that leverages the functionality provided by Virtualized Environments to fulfill the scalability and fault tolerance requirements. </li></ul><ul><li>We developed a prototype of the defined architecture based on the VMware commercial product. </li></ul><ul><li>We performed evaluation tests showing that the Hypervisor-based Fault Tolerance mechanism is suitable for the implementation of this kind of platform. </li></ul>
  18. Thank you for your attention