UNIT-3
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services (the various cloud service models and their extensions)
• Figure shows six layers of cloud services, ranging from hardware, network, and collocation to infrastructure,
platform, and software applications.
• The top three service layers are SaaS, PaaS, and IaaS.
• The cloud platform provides PaaS, which sits on top of the IaaS infrastructure. The top layer offers SaaS.
Although the three basic models are dissimilar in usage, they are built one on top of another.
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services (the various cloud service models and their extensions)
• The bottom three layers are more related to physical requirements. The bottommost layer provides Hardware
as a Service (HaaS).
• The next layer is for interconnecting all the hardware components, and is simply called Network as a Service
(NaaS). (Allowing companies to set up their own networks without owning physical network hardware.) Virtual LANs fall within the scope of NaaS.
• The next layer up offers Location as a Service (LaaS), which provides a collocation service to house, power, and
secure all the physical hardware and network resources. (LaaS is the facility that offers space with the proper power, cooling and
security to host businesses’ computing hardware and servers).
• The cloud infrastructure layer can be further subdivided as Data as a Service (DaaS) and Communication as a
Service (CaaS) in addition to compute.
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services (the various cloud service models and their extensions)
• As shown in Table, cloud players are divided into three classes:
➢ Cloud service providers and IT Administrators
➢ Software developers or Vendors
➢ End Users or Business Users
• These cloud players vary in their roles under the IaaS, PaaS, and SaaS models.
• The table entries distinguish the three cloud models as viewed by different players. (Table shows how three players view
the three cloud models)
• From the software vendors’ perspective, application performance on a given cloud platform is most important. (Designing applications with performance optimized with respect to time and space, working for all scenarios.)
• From the providers’ perspective, cloud infrastructure performance is the primary concern. (Optimized CPU and storage utilization.)
• From the end users’ perspective, the quality of service, including security, is the most important.
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services
1. Cloud Service Tasks and Trends
• The top layer of the cloud service stack consists of SaaS applications, mostly for business use.
• For example, CRM is heavily practiced in business promotion, direct sales, and marketing services.
• CRM was the first SaaS application offered successfully on the cloud.
• The approach is to widen market coverage by investigating customer behaviors and revealing opportunities by
statistical analysis.
• SaaS tools also apply to distributed collaboration (Google docs), and financial and human resources management.
• These cloud services have been growing rapidly in recent years.
• PaaS is provided by Google, Salesforce.com, and Facebook (Facebook service), among others.
• IaaS is provided by Amazon, Windows Azure, and Rackspace, among others.
• Collocation services require multiple cloud providers to work together to support supply chains in manufacturing.
• Network cloud services provide communications such as those by AT&T, Qwest, and AboveNet.
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services
2. Software Stack for Cloud Computing
• The overall software stacks are built from scratch to meet rigorous goals.
• Developers have to consider how to design the system to meet critical requirements such as high throughput, HA (high availability), and fault tolerance. (Developers must design software that handles these layers/services at different levels and delivers them to users in a way that meets the required throughput, HA, and fault tolerance.)
• Even the operating system might be modified to meet the special requirements of cloud data processing. (Even the OS running in the cloud data center may need to be modified to support services such as NaaS and LaaS.)
• The overall software stack structure of cloud computing software can be viewed as layers. Each layer has its own purpose and provides the interface for the upper layers just as the traditional software stack does. However, the lower layers are not completely transparent to the upper layers. (The SaaS layer provides/shares software with customers, the PaaS layer provides the platform, and the IaaS layer shares hardware.)
INTER-CLOUD RESOURCE MANAGEMENT
Extended Cloud Computing Services
3. Runtime Support Services
• As in a cluster environment, there are also some runtime supporting services in the cloud computing
environment.
• Cluster monitoring is used to collect the runtime status of the entire cluster.
• The scheduler queues the tasks submitted to the whole cluster and assigns the tasks to the processing nodes
according to node availability.
• The distributed scheduler for the cloud application has special characteristics that can support cloud
applications, such as scheduling the programs written in MapReduce style.
• The runtime support system keeps the cloud cluster working properly with high efficiency.
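As a rough illustration, the sketch below queues submitted tasks and assigns them to processing nodes according to node availability, as described above. The Node, Task, and ClusterScheduler names are hypothetical and not any particular cloud system's API.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    busy: bool = False        # availability as reported by cluster monitoring

@dataclass
class Task:
    task_id: str

class ClusterScheduler:
    """Queues tasks submitted to the whole cluster and assigns them by node availability."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.queue = deque()

    def submit(self, task):
        self.queue.append(task)

    def dispatch(self):
        """Assign queued tasks to free nodes and return the (task, node) pairs."""
        assignments = []
        for node in self.nodes:
            if not node.busy and self.queue:
                task = self.queue.popleft()
                node.busy = True
                assignments.append((task.task_id, node.name))
        return assignments

# Monitoring marks nodes free or busy; dispatch() is called periodically.
sched = ClusterScheduler([Node("node-1"), Node("node-2")])
sched.submit(Task("map-task-001"))
sched.submit(Task("reduce-task-001"))
print(sched.dispatch())    # [('map-task-001', 'node-1'), ('reduce-task-001', 'node-2')]
```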
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
• Cloud architecture puts more emphasis on the number of processor cores or VM instances.
1. Provisioning of Compute Resources (VMs)
• Providers supply cloud services by signing SLAs with end users.
• The SLAs must commit sufficient resources such as CPU, memory, and bandwidth that the user can use for a
preset period.
• Underprovisioning of resources will lead to broken SLAs and penalties.
• Overprovisioning of resources will lead to resource underutilization, and consequently, a decrease in revenue
for the provider.
• Deploying an autonomous system to efficiently provision resources to users is a challenging problem.
• The difficulty comes from the unpredictability of consumer demand, software and hardware failures,
heterogeneity of services (a user may take NaaS, a queue service, or SaaS), power management (heat dissipation from servers), and
conflicts in signed SLAs between consumers and service providers.
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
1. Provisioning of Compute Resources (VMs) (Cont…)
• Efficient VM provisioning depends on the cloud architecture and management of cloud infrastructures.
• In a virtualized cluster of servers, this demands efficient installation of VMs, live VM migration, and fast
recovery from failures.
• To deploy VMs, users treat them as physical hosts with customized operating systems for specific applications.
• For example, Amazon’s EC2 (IaaS service from Amazon) uses Xen as the virtual machine monitor (VMM). The same
VMM is used in IBM’s Blue Cloud.
• In the EC2 platform, some predefined VM templates are also provided. Users can choose different kinds of
VMs from the templates.
• IBM’s Blue Cloud does not provide any VM templates. In general, any type of VM can run on top of Xen.
• Microsoft also applies virtualization in its Azure cloud platform. The provider should offer resource-economic
services.
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
2. Resource Provisioning Methods
➢ Demand-Driven method
➢ Event-Driven method
➢ Popularity-Driven method
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Demand-Driven method
• This method adds or removes computing instances based on the current utilization level of the allocated resources.
• For example, the demand-driven method automatically allocates two Xeon processors to a user application if the user has been using one Xeon processor more than 60 percent of the time for an extended period.
• In general, when a resource has surpassed a threshold for a certain amount of time, the scheme increases that resource based on demand. When a resource is below a threshold for a certain amount of time, that resource could be decreased accordingly. (For example, define a range for CPU utilization, say 30% to 70%: if utilization stays below 30%, decrease CPU capacity; if it stays above 70%, increase CPU capacity. A sketch follows at the end of this list.)
• Amazon implements such an auto-scale feature in its EC2 platform.
• This method is easy to implement.
• Disadvantage: The scheme does not work out right if the workload changes abruptly.
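A minimal sketch of this threshold scheme, assuming an illustrative 30%/70% utilization band and per-interval samples. The thresholds, window, and instance bounds are placeholders, not Amazon's actual auto-scaling policy.

```python
def demand_driven_scale(instances, utilization_history,
                        low=0.30, high=0.70, window=5,
                        min_instances=1, max_instances=10):
    """Return a new instance count based on sustained CPU utilization.

    utilization_history holds the most recent per-interval samples (0.0-1.0).
    Capacity changes only when the last `window` samples all cross a threshold,
    so a single spike or dip does not trigger scaling.
    """
    recent = utilization_history[-window:]
    if len(recent) < window:
        return instances                                # not enough data yet
    if all(u > high for u in recent):
        return min(instances + 1, max_instances)        # sustained overload: scale out
    if all(u < low for u in recent):
        return max(instances - 1, min_instances)        # sustained idleness: scale in
    return instances

# Utilization has stayed above 70% for five intervals, so one instance is added.
print(demand_driven_scale(2, [0.75, 0.80, 0.72, 0.90, 0.85]))   # -> 3
```

Because scaling waits for a sustained trend, an abrupt workload change is served late, which is the disadvantage noted above.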
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Event-Driven method
• This scheme adds or removes machine instances based on a specific time event.
• The scheme works better for seasonal or predicted events such as Christmastime in the West and the Lunar
New Year in the East.
• During these events, the number of users grows before the event period and then decreases during the event
period.
• This scheme anticipates peak traffic before it happens.
• The method results in a minimal loss of QoS, if the event is predicted correctly. Otherwise, wasted resources
are even greater due to events that do not follow a fixed pattern.
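A sketch of event-driven provisioning under an assumed, hand-written event calendar; the dates and instance counts are illustrative placeholders.

```python
import datetime

# Hypothetical event calendar: name -> (ramp-up start, event end, peak instance count)
EVENT_CALENDAR = {
    "christmas":      (datetime.date(2024, 12, 15), datetime.date(2024, 12, 26), 20),
    "lunar_new_year": (datetime.date(2025, 1, 22),  datetime.date(2025, 2, 2),  15),
}
BASELINE_INSTANCES = 5

def event_driven_capacity(today: datetime.date) -> int:
    """Provision for the predicted peak before the event starts; return to baseline after."""
    for ramp_start, event_end, peak in EVENT_CALENDAR.values():
        if ramp_start <= today <= event_end:
            return peak
    return BASELINE_INSTANCES

print(event_driven_capacity(datetime.date(2024, 12, 20)))  # 20: pre-provisioned for the event
print(event_driven_capacity(datetime.date(2024, 11, 1)))   # 5: normal baseline
```

If the event is predicted correctly, QoS loss is minimal; if the calendar is wrong, the pre-provisioned instances are wasted, as noted above.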
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Popularity-Driven method
• In this method, the provider tracks the popularity of certain applications through Internet searches and creates instances according to popularity demand. (Currently popular applications → Facebook, Instagram, Twitter.)
• The scheme anticipates increased traffic with popularity.
• Again, the scheme has a minimal loss of QoS, if the predicted popularity is correct. Resources may be wasted if
traffic does not occur as expected.
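A sketch of popularity-driven provisioning, assuming a hypothetical search-trend score for the application; the signal source and the proportional scaling rule are illustrative assumptions.

```python
def popularity_driven_capacity(current_instances, trend_score, baseline_score=100,
                               min_instances=2, max_instances=50):
    """Scale the instance count in proportion to a search-popularity trend score.

    trend_score / baseline_score acts as the predicted traffic multiplier;
    if the predicted popularity is wrong, the extra instances are wasted.
    """
    multiplier = trend_score / baseline_score
    target = round(current_instances * multiplier)
    return max(min_instances, min(target, max_instances))

# Search interest has doubled, so the predicted traffic doubles the capacity.
print(popularity_driven_capacity(4, trend_score=200))   # -> 8
```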
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Dynamic Resource Deployment
• (A grid is a distributed high-performance computing paradigm that offers various types of resources, such as computing, storage, and communication, to resource-intensive user tasks. Grid → a site that provides a set of resources for user applications.)
• The cloud uses VMs as building blocks to create an
execution environment across multiple resource sites.
• The InterGrid-managed infrastructure was developed
by a Melbourne University group.
• The InterGrid is a Java-implemented software system that lets users create execution cloud environments on top of all participating grid resources. (A framework/software designed by the Melbourne University group that runs on each grid (organization/cloud site) and allows users to create VMs on top of all participating grids, where each grid maintains its own set of resources.)
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Dynamic Resource Deployment
• Peering arrangements established between gateways
enable the allocation of resources from multiple grids to
establish the execution environment.
• In Figure, a scenario is illustrated by which an intergrid
gateway (IGG) allocates resources from a local cluster to
deploy applications in three steps: (1) requesting the
VMs, (2) enacting the leases (sanctioning), and (3)
deploying the VMs as requested.
• Under peak demand, this IGG interacts with another IGG
that can allocate resources from a cloud computing
provider.
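The three-step flow can be sketched as follows; ResourceSite, InterGridGateway, and the method names are illustrative stand-ins for the behavior described above, not the actual InterGrid API.

```python
class ResourceSite:
    """Minimal stand-in for a local virtualized cluster or a peer's cloud provider."""
    def __init__(self, name, free_vms):
        self.name, self.free_vms = name, free_vms

    def request_vms(self, n, template):
        # Answer a request for n VMs built from the given template, if capacity allows.
        return (self, n, template) if self.free_vms >= n else None

class InterGridGateway:
    """Illustrative IGG flow: (1) request VMs, (2) enact the lease, (3) deploy."""
    def __init__(self, local_site, peer_sites):
        self.local_site, self.peer_sites = local_site, peer_sites

    def allocate(self, n, template):
        # (1) Request the VMs from the local cluster first.
        offer = self.local_site.request_vms(n, template)
        if offer is None:
            # Under peak demand, interact with peer IGGs (e.g., a cloud provider).
            for peer in self.peer_sites:
                offer = peer.request_vms(n, template)
                if offer is not None:
                    break
        if offer is None:
            raise RuntimeError("no site can satisfy the request")
        site, count, tmpl = offer
        # (2) Enact (sanction) the lease on the chosen site.
        site.free_vms -= count
        # (3) Deploy the VMs as requested.
        return [f"vm-{i}@{site.name}" for i in range(count)]

# The local cluster is saturated, so the gateway falls back to a peer cloud site.
igg = InterGridGateway(ResourceSite("local-cluster", 0), [ResourceSite("peer-cloud", 8)])
print(igg.allocate(3, "small-template"))
```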
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Dynamic Resource Deployment
• A grid has predefined peering arrangements with other grids, which the IGG
manages.
• Through multiple IGGs, the system coordinates the use of InterGrid resources.
• An IGG is aware of the peering terms with other grids, selects suitable grids that
can provide the required resources, and replies to requests from other IGGs.
• An IGG can also allocate resources from a cloud provider.
• The cloud system creates a virtual environment to help users deploy their
applications. These applications use the distributed grid resources.
• The InterGrid allocates and provides a distributed virtual environment (DVE). This
is a virtual cluster of VMs that runs isolated from other virtual clusters.
• A component called the DVE manager performs resource allocation and
management on behalf of specific user applications.
INTER-CLOUD RESOURCE MANAGEMENT
Resource Provisioning and Platform Deployment
Provisioning of Storage Resources
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
• Figure shows the interactions among VM managers for VM creation and management. The managers provide a
public API for users to submit and control the VMs.
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Independent Service Management
• Independent services request facilities to execute many unrelated tasks.
• Commonly, the APIs provided are some web services that the developer can use conveniently.
• In the Amazon cloud computing infrastructure, SQS (Simple Queue Service) is constructed to provide a reliable communication service between different providers; a message posted in SQS is retained even while the receiving endpoint is not running.
• By using independent service providers, the cloud applications can run different services at the same time.
(providing data, compute or storage services).
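A hedged example of this decoupling using the boto3 SDK for SQS; the queue name and region are placeholders, and AWS credentials are assumed to be configured. The message survives in the queue even if the consumer endpoint starts later.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")                 # region is a placeholder
queue_url = sqs.create_queue(QueueName="inter-service-demo")["QueueUrl"]

# Producer: post a message even if no consumer endpoint is currently running.
sqs.send_message(QueueUrl=queue_url, MessageBody="provision 2 VMs for job-42")

# Consumer (may start later): the message is retained in the queue until it is read.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    print("received:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```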
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Running Third-Party Applications
• Cloud platforms have to provide support for building and hosting applications constructed by third-party application providers or programmers.
• As current web applications are often provided by using Web 2.0 forms (interactive applications with Ajax), the
programming interfaces are different from the traditional programming interfaces such as functions in runtime libraries.
• The APIs are often in the form of services.
• Web service application engines are often used by programmers for building applications.
• As examples, GAE and Microsoft Azure apply their own cloud APIs to get special cloud services.
• The WebSphere application engine is deployed by IBM for Blue Cloud. It can be used to develop any kind of web
application written in Java.
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Virtual Machine Manager
• The VM manager is the link between the gateway and resources.
• The gateway doesn’t share physical resources directly, but relies on virtualization technology. Hence, the actual resources it uses are VMs. (A virtual infrastructure engine (VIE) runs at each cloud site; the VM manager connects with different VIEs.)
• The manager manages VMs deployed on a set of physical resources.
• The VM manager implementation is generic so that it can connect with different VIEs. Typically, VIEs can create and stop VMs on a physical cluster. (For example, the manager can create VMs through OpenNebula at one cloud site and through Amazon EC2 at another; a sketch follows below.)
• The Melbourne group has developed managers for OpenNebula, Amazon EC2, and French Grid’5000.
• To deploy a VM, the manager needs to use its template.
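A sketch of the generic-manager idea with pluggable VIE backends; the class and method names are illustrative, and the real InterGrid managers for OpenNebula, EC2, and Grid'5000 use those systems' own APIs.

```python
from abc import ABC, abstractmethod

class VirtualInfrastructureEngine(ABC):
    """Common interface the VM manager expects from any backend VIE."""
    @abstractmethod
    def create_vm(self, template: dict) -> str: ...
    @abstractmethod
    def stop_vm(self, vm_id: str) -> None: ...

class OpenNebulaVIE(VirtualInfrastructureEngine):
    def create_vm(self, template):
        return f"one-vm-{template['cores']}c"       # placeholder for an OpenNebula call
    def stop_vm(self, vm_id):
        print("stopping", vm_id)

class EC2VIE(VirtualInfrastructureEngine):
    def create_vm(self, template):
        return "i-0123456789abcdef0"                # placeholder for an EC2 RunInstances call
    def stop_vm(self, vm_id):
        print("terminating", vm_id)

class VMManager:
    """Generic manager: the gateway talks to it, and it talks to whichever VIE is plugged in."""
    def __init__(self, vie: VirtualInfrastructureEngine):
        self.vie = vie
    def deploy(self, template: dict) -> str:
        return self.vie.create_vm(template)

# The same manager code drives either backend.
print(VMManager(OpenNebulaVIE()).deploy({"cores": 2, "memory_mb": 2048}))
print(VMManager(EC2VIE()).deploy({"cores": 2, "memory_mb": 2048}))
```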
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Virtual Machine Templates
• A VM template is analogous to a computer’s configuration and contains a description for a VM with the following
static information:
➢ The number of cores or processors to be assigned to the VM
➢ The amount of memory the VM requires
➢ The kernel used to boot the VM’s operating system
➢ The disk image containing the VM’s file system (Files)
➢ The price per hour of using a VM
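The static fields listed above can be captured in a simple structure; this is an illustrative sketch, not the actual InterGrid template format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VMTemplate:
    name: str              # template identifier agreed on by all gateways
    cores: int             # number of cores/processors assigned to the VM
    memory_mb: int         # amount of memory the VM requires
    kernel: str            # kernel used to boot the VM's operating system
    disk_image: str        # disk image containing the VM's file system
    price_per_hour: float  # price of using the VM for one hour

small = VMTemplate("small", cores=1, memory_mb=1024,
                   kernel="vmlinuz-5.4", disk_image="ubuntu-20.04.img",
                   price_per_hour=0.05)
```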
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Virtual Machine Templates
• The gateway administrator provides the VM template information when the infrastructure is set up. The
administrator can update, add, and delete templates at any time.
• In addition, each gateway in the InterGrid network must agree on the templates to provide the same configuration
on each site.
• To deploy an instance of a given VM, the VMM generates a descriptor from the template.
• This descriptor contains the same fields as the template and additional information related to a specific VM
instance.
• Typically the additional information includes:
➢ The disk image that contains the VM’s file system
➢ The address of the physical machine hosting the VM
➢ The VM’s network configuration
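Continuing the template sketch above (it reuses the hypothetical VMTemplate and the `small` template defined there), a descriptor copies the template fields and attaches the instance-specific information listed here.

```python
from dataclasses import asdict

def make_descriptor(template, host_address: str, network_config: dict) -> dict:
    """Copy the template fields and attach per-instance information."""
    descriptor = asdict(template)               # same fields as the template (incl. the disk image)
    descriptor["host_address"] = host_address   # physical machine hosting the VM
    descriptor["network"] = network_config      # the VM's network configuration
    return descriptor

# Reuses the `small` template from the previous sketch; addresses are placeholders.
desc = make_descriptor(small, host_address="10.0.0.7",
                       network_config={"ip": "192.168.1.20", "vlan": 42})
print(desc["cores"], desc["host_address"])
```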
INTER-CLOUD RESOURCE MANAGEMENT
Virtual Machine Creation and Management
Distributed VM Management
INTER-CLOUD RESOURCE MANAGEMENT
Global Exchange of Cloud Resources
• In order to support a large number of consumers from around the world, cloud infrastructure providers have
established data centers in multiple geographical locations to provide redundancy and ensure reliability in
case of site failures.
• For example, Amazon has data centers in the United States (e.g., one on the East Coast and another on the
West Coast) and Europe.
• However, it is difficult for cloud customers to determine in advance the best location for hosting their services
as they may not know the origin of consumers of their services.
• Also, SaaS providers may not be able to meet the QoS expectations of their service consumers originating from
multiple geographical locations.
• This necessitates building mechanisms for seamless federation of data centers of a cloud provider or providers
supporting dynamic scaling of applications across multiple domains in order to meet QoS targets of cloud
customers. (Creating VMs at multiple data centers around the world so as to satisfy customer QoS.)
INTER-CLOUD RESOURCE MANAGEMENT
Global Exchange of Cloud Resources
• Figure shows the high-level components of the Melbourne
group’s proposed InterCloud architecture.
• In addition, no single cloud infrastructure provider will be able to
establish its data centers at all possible locations throughout the
world.
• As a result, cloud providers will have difficulty in meeting QoS
expectations for all their consumers.
• Hence, they would like to make use of services of multiple cloud
infrastructure service providers who can provide better support
for their specific consumer needs.
• This necessitates federation of cloud infrastructure service
providers for seamless provisioning of services across different
cloud providers.
INTER-CLOUD RESOURCE MANAGEMENT
Global Exchange of Cloud Resources
• To realize this, the University of Melbourne has proposed the InterCloud architecture, supporting brokering and exchange of cloud resources for scaling applications across multiple clouds.
• Cloud providers will be able to dynamically expand or resize
their provisioning capability based on sudden spikes in
workload demands by leasing available computational and
storage capabilities from other cloud service providers, and by operating as part of a market-driven resource leasing federation.
• The architecture consists of client brokering and coordinator services that support utility-driven federation of clouds: application scheduling, resource allocation, and migration of workloads.
INTER-CLOUD RESOURCE MANAGEMENT
Global Exchange of Cloud Resources
• The Cloud Exchange (CEx) acts as a market maker for bringing together
service producers and consumers. It aggregates the infrastructure
demands from application brokers and evaluates them against the
available supply currently published by the cloud coordinators.
• It supports trading of cloud services based on competitive economic
models such as commodity markets and auctions.
• An SLA specifies the details of the service to be provided in terms of
metrics agreed upon by all parties, and incentives and penalties for
meeting and violating the expectations, respectively.
• The availability of a banking system within the market ensures that
financial transactions pertaining to SLAs between participants are
carried out in a secure and dependable environment.
