Resource usage optimization in cloud-based networks
Postgraduate Diploma in Innovation Management
University of Limerick May 2017
Dimo Iliev
Table of Contents
Abbreviations
Abstract
1. Introduction
2. Literature Review / Background Research
3. Research Methodology
4. Problem Statement
5. Prototype Development
6. Results
7. Discussion
8. Conclusions & Recommendations
9. References
Abbreviations
Job Control Language (JCL)
Quality-of-Service (QoS)
Service level agreement (SLA)
Return on investment (ROI)
Traffic Conditioning Agreement (TCA)
Application delivery control (ADC)
Virtual machine (VM)
Service-oriented architecture (SOA)
Central processing unit (CPU)
Software-defined networking (SDN)
Abstract
The purpose of this research is to improve the efficiency of the current approach towards
resource usage in cloud-based environments and to propose a solution that combines the latest
technical developments and meets emerging requirements. The adopted approach is to
select and examine the available tools and concepts and to combine them into a comprehensive
product prototype. As this solution consists of different products currently available on the
market, we will test the main components and provide a qualitative analysis. The final result is
a prototype of a solution that answers the requirements of the market in terms of resource
usage optimization.
1. Introduction
The current state of the cloud computing landscape consists of technically advanced solutions
which are yet to be unified under a comprehensive approach from a technology and business
point of view. This paper structures the current optimization methods into three groups
(operational optimizations, cloud virtualization and emerging concepts) and provides a critical
review. This is followed by a description of the research methodology, a combination of a
practical action-research model and process research, used to develop a prototype design of an
application delivery control solution that meets the technological and business requirements
of cloud-based environments and leverages emerging networking technologies. The description
of the implementation of the solution is followed by a conclusion that examines the expected
results and further challenges.
2. Literature Review / Background Research
2.1 Operational optimization
The research by (Chee, Franklin, 2010) starts with a definition of cloud infrastructure and a
description of the abstraction of the different levels of computing: networking, applications
and user interface. The roots of modern cloud computing are found in the Job Control Language
(JCL) scripting language used on IBM mainframe operating systems back in the 1970s. The
authors discuss the concept of separating applications from the underlying hardware and the
benefits to the user, and list this as one of the prerequisites for developing cloud computing.
This allows users to treat the Internet as an abstraction layer, so that cloud computing
resources can be accessed from any place with available Internet connectivity. The authors link
the fundamental idea behind cloud computing, the availability of computing power, to the
problem that some processes in science are so computing-intensive that they require expensive
computing engines which have been available only to the best-funded projects. This idea is
further developed in distributed computing systems, which consist of grid computing and
virtualization and offer better utilization of host system resources while maintaining control.
2.1.1. Quality-of-Service (QoS) and service level agreement (SLA)
The research (Xiong, 2014) looks at how management of Quality-of-Service (QoS) and SLA
agreements between cloud service providers and clients impacts the structure of cloud-based
resources and facilitates a better return on investment (ROI) for both sides, client and
provider. According to the author, the SLA sets expectations between a customer and a service
provider, helps establish the relationship between these two parties and, as a result, sharpens
the focus on resource optimization. The service provider minimizes the level of resource usage
at each service node and at the same time is still able to achieve the predefined service level
agreement (SLA).
2.1.2. Traffic differentiation
Performance and price are the two most important components of the dynamic process of
offering cloud-based services. Each client has different request characteristics which should be
classified and addressed by the cloud service provider using a “priority structure with
preemption-resume” (Xiong, 2014). As a result, the overall cost of the service provider’s
computing resources is decreased and the integrity of the service level agreement (SLA) is
preserved. This idea is further developed by stating that in order to improve the performance
of the system, data must be separated into several streams, each transmitted using a unique
multicast tree (Walkowiak, 2016). Separating the client request into multiple substreams offers
lower granularity, decreased system saturation and easier management of available resources.
(Faynberg, Lu, Skuler, 2016) review several methods for client request management based on
processing packets differently according to their specific types, using packet scheduling
disciplines that manage how packets are transmitted. The authors suggest the implementation
of an admission control system for how packets are accepted, which allows client requests to
be scheduled per class of service.
A key point in their study is the usage of the “Traffic Conditioning Agreement (TCA), and how
this relates to traffic profiles and the relevant methods of policing” (Faynberg, Lu, Skuler,
2016). They suggest that the above methods prevent network congestion and help providers
work in accordance with the predefined service level agreement (SLA). The differentiation of
traffic and client requests is also discussed by (Crabb, 2014), who illustrates a cloud
architecture with multiple levels, which allows traffic to be sent according to application type
and business purpose. This, combined with caching and compression, which are mandatory for
cloud networks, allows a highly scalable system to be built that can respond to business
requirements in a timely manner and adhere to a predefined service level agreement (SLA).
The question of traffic differentiation further highlights the significance of the load-balancing
components of cloud-based networks. Native or third-party, these application delivery control
(ADC) solutions are capable of managing the transmission of content combined with advanced
layer 7 scheduling methods (Lerner, Skorupa, Ciscato, 2016).
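The “priority structure with preemption-resume” referenced above can be illustrated with a toy simulation. The single-server, unit-time model and the request names below are hypothetical assumptions for illustration, not taken from the cited works:

```python
import heapq

def run(arrivals):
    """Simulate one server with priority scheduling and preemption-resume.
    arrivals: list of (arrival_time, priority, name, duration); a lower
    priority number means a higher service class. Returns completion order."""
    arrivals = sorted(arrivals)
    ready, done, t, i, current = [], [], 0, 0, None
    while current or ready or i < len(arrivals):
        # admit requests that have arrived by time t
        while i < len(arrivals) and arrivals[i][0] <= t:
            _, prio, name, dur = arrivals[i]
            heapq.heappush(ready, (prio, name, dur))
            i += 1
        # preempt: suspend the running request if a higher class is waiting
        if current and ready and ready[0][0] < current[0]:
            heapq.heappush(ready, current)  # resumes later with remaining time
            current = None
        if current is None and ready:
            current = heapq.heappop(ready)
        if current:
            prio, name, remaining = current
            current = (prio, name, remaining - 1)  # serve one time unit
            if current[2] == 0:
                done.append(name)
                current = None
        t += 1
    return done

# A gold-class request arriving later preempts the silver one already in service.
print(run([(0, 2, "silver", 3), (1, 1, "gold", 2)]))  # ['gold', 'silver']
```

The “resume” part is the re-push of the suspended request with its remaining duration intact, so no completed work is lost when a higher class preempts.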
2.1.3. Cloud federation
Upon considering the literature, in order to make full use of the benefits of the cloud
infrastructure, clients should be given access to the functionalities of the cloud operating
system through a common interface. Currently there are several different popular solutions
which offer their own application programming interface (API), such as “Amazon EC2 or
VMware’s vCloud” (Chee, Franklin, 2010). This fragmented landscape makes it difficult for
different information technology systems and software applications to work together, send and
receive data, and use the exchanged information. It also stands in the way of computer
programs being used in operating systems other than the one in which they were developed
without the need to rewrite the code. As a result there is a growing need for cloud adapters
capable of supporting cloud federations. From an optimization point of view this is a key point,
as it helps cloud providers and IT companies work together and pool their resources
(Moreno-Vozmediano, Montero, Llorente, 2012). This leads to the definition of the different
types of cloud federation architecture available (bursting, broker, aggregated and multitier)
and how they enable the optimization of resource usage in cloud-based networks. As reported
by (Crabb, 2014), a successful and fault-tolerant cloud architecture “requires your cloud
application to work statelessly between cloud providers and regions.” This is a fundamental
concept in cloud architecture, which supports VM replication between different cloud
operators without a negative impact on CAPEX and OPEX.
2.1.4. Resource scheduling
(Al-Shaikh, Khattab, Sharieh, Sleit, 2016) develop the concept that the best method of
assigning a given resource can be found by managing the profit obtained and the number of
times it is utilized. For this purpose they use a greedy algorithm, an algorithmic paradigm that
makes the best choice at each stage in order to find an overall optimum. They suggest this as a
solution and further analyze it in terms of runtime complexity. The solution is a combination
of two optimization problems, the knapsack problem and the activity-selection problem, which
are ultimately implemented in the Java programming language.
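As a concrete illustration of the greedy paradigm the authors apply, the activity-selection part can be sketched as follows. This is a hypothetical Python version written for this review; the paper's actual implementation is in Java and its details are not reproduced here:

```python
def select_activities(requests):
    """Greedily choose a maximal set of non-overlapping (start, finish) requests."""
    selected, last_finish = [], float("-inf")
    # Sorting by finish time makes the locally best choice (take whatever
    # finishes earliest) also globally optimal for this particular problem.
    for start, finish in sorted(requests, key=lambda r: r[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```

Each step commits to the request that frees the resource soonest, which is exactly the “best choice at each stage” behaviour described above.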
2.2 Cloud Virtualization
According to the authors of the research paper “From Virtualized Datacenters to Federated
Cloud Infrastructures”, virtualization plays a key role in the separation of the compute,
network and storage service platforms from the physical hardware on which they are based.
This process allows cloud-based datacenters to benefit from server consolidation and
on-demand provisioning capabilities, which lead to better utilization rates and significant cost
and energy savings (Moreno-Vozmediano, Montero, Llorente, 2012). In order to guarantee
optimal performance of the virtual infrastructure, the authors suggest the implementation of a
Virtual Infrastructure Manager.
2.2.1. Virtual Infrastructure Manager
The primary purpose of this component is to orchestrate the deployment of virtual resources
and to manage the physical and virtual infrastructures for command-and-control of service
provisioning. The authors of “Deployment Models and Optimization Procedures in Cloud
Computing” (Kotowski, Oko, Ochla, 2015) mention that an orchestrator is used as a workflow
management solution in the data center. The orchestrator automates the creation, monitoring
and deployment of resources. According to the authors, any IT organization can use an
orchestrator to increase efficiency and reduce the operational costs of completing objectives
between different departments. The orchestrator provides an environment with shared access
to common data. “By using Orchestrator, an enterprise can evolve and automate key processes
between groups and consolidate repetitive manual tasks. It can automate cross-functional team
processes and enforce best practices for incident, change, and service management.”
(Walkowiak, 2016)
According to (Moreno-Vozmediano, Montero, Llorente, 2012), the advantages in terms of
resource optimization are represented by the availability of the following features:
Basic: adaptability, interoperability, scalability and standardization.
Advanced: server consolidation, on-the-fly resizing of the physical infrastructure, service
workload balance, server replication and dynamic partitioning.
According to the authors, the role of the cloud infrastructure manager is to control the usage
of datacenter resources in order to deliver an agile, secure and independent multitenant
environment for services, separated from the underlying physical infrastructure, with unique
interfaces and APIs for working with cloud-based networks (Moreno-Vozmediano, Montero,
Llorente, 2012). The connection to the underlying infrastructure is provided by hypervisor,
network, storage and information drivers/adapters. This helps define the roles of the
components of the cloud OS:
 Virtual machine manager
 Storage manager
 Network manager
 Image manager
 Information manager
2.2.2. Authentication and authorization
The authors also discuss the significance of authentication and authorization not only as a
security solution (“Authorization policies control and manage user privileges and permissions
to access different cloud resources, such as VMs, networks, or storage systems.”
(Moreno-Vozmediano, Montero, Llorente, 2012)), but also as a way to control the amount of
resources (CPU, memory, network bandwidth or disk space) that can be accessed by a specific
user.
2.2.3. Federation manager
The federation manager is a key component of the cloud-based network, as it provides the basic
mechanisms for deployment, ongoing management, creation and deletion of virtual resources,
monitoring, user authentication in remote cloud instances, access control management, remote
resource permissions, and tools for creating images on different clouds regardless of format.
Further optimization is supported by the use of cross-site networks and cross-site VM
migration, which allows increased cooperation and interoperability between different
cloud-based networks.
2.2.4. Scheduler, Administrative tools and Service manager
The role of the scheduler is presented from two points of view: the physical host level and the
cloud level. The decision on how to manage the resources, such as physical CPU or memory,
belonging to each VM is combined with deciding on which host to place the specific VM. In a
federated or hybrid environment, however, the scheduler can decide to deploy the VM in a
remote cloud when insufficient resources are available in the local infrastructure
(Moreno-Vozmediano, Montero, Llorente, 2012), which optimizes the overall performance of
the network. This process is further streamlined by the service manager, which is responsible
for managing and further optimizing the performance of multitier services. This involves
accepting the service and managing its lifecycle by interacting with the scheduler and the
administrative tools. The service manager is an important component of cloud infrastructure
optimization, as it controls service elasticity through the use of different mechanisms for
autoscaling. The importance of automation in the cloud-based network is also discussed in
the article “The BestBuy.com Cloud Architecture” (Crabb, 2014). The author points out that
“Automating the build out of infrastructure is essential for scaling elastically and recovering
quickly from any failure.” Designing any tasks to be performed manually will subsequently lead
to a bottleneck when attempting to provision a large number of instances and will have a
negative impact on overall network performance.
2.3 Emerging concepts
2.3.1. Decentralized peer-to-peer network
The fundamental ideas behind cloud computing (increased collaboration, availability,
flexibility and decentralization) are further developed in blockchain-based distributed
computing platforms featuring smart contract (scripting) functionality, such as Ethereum. This
open-source, public, blockchain-based platform was developed in response to the need to build
decentralized applications shared among a distributed network of computers. The result is a
more open, transparent system which can be verified by the public and which can
fundamentally change the way we think about exchanging value and assets, enforcing contracts,
and sharing data across industries. According to (Ladha, Pandit, Ralhan, 2016), Ethereum
represents a new innovation in the field of cryptocurrency, which has become relatively
stagnant, by introducing an entire programming language and development environment built
into a cloud-based network.
2.3.2. Virtual Containers
The concept of virtual containers refers to creating environments where applications function
in a framework of virtualized operating-system resource areas in which the applications have
"ownership" of the platform. According to (Prashant, 2016), virtualization has key importance
in the field of abstraction and resource management. The problem is that the additional layers
of abstraction provided by virtualization require establishing a balance between performance
and cost in a cloud environment where everything is on a pay-per-use basis. Virtual containers
address these issues and are perceived to be the future of virtualization.
The reviewed literature indicates that the question of optimizing the resource usage of
cloud-based networks is predominant in the latest trends of cloud computing. Against the
background of increasing cloud market revenue, we see the emergence of more distributed,
trusted, intelligent and industry-specialized infrastructure. Key developments include machine
learning, which is facilitated by the cloud in terms of sufficient computing power and the
opportunity to collaborate and easily develop and deploy applications on top of cloud
platforms. The idea of serverless computing and containers, which allows users to move beyond
the traditional construct of virtual machines and servers, is categorised as “next-generation
computing”. In general, the market motion is towards closer collaboration, automation, hybrid
solutions and leaner, cheaper solutions that include and integrate PaaS capabilities, cloud
management and container support.
2.3.3. Microservices
Microservices are programs with a single task and a connectivity mechanism. They are the
building blocks of the service-oriented architecture (SOA) architectural style, which consists of
distributed services. The monolithic approach towards software architecture, represented as
one long string of code, is no longer compatible with the complexity of current cloud-based
environments. A modular software architecture offers the following benefits:
 It is easier to make changes, update and test
 It is easier to introduce new technology trends
 Starting time for software is decreased
 It is easier to mix and match modules with different profiles
 Modules make the process of constructing applications easier
In terms of cloud-based service-oriented architecture (SOA), the role of microservices can be
formulated in the following manner: the basic idea is that in an SOA environment we have
remote services which can be leveraged using some type of infrastructure control and used as
if they were local to the cloud-based application. As a result, each of our applications is made
up of a multitude of local and remote application services. Furthermore, they are location- and
platform-independent, which means that they can reside on premises or in any public cloud.
Microservices facilitate migration from on-premises to the cloud, as they are building blocks
which can be used to rebuild an application in the cloud. Therefore, we do not have to start
building the application from the initial step. The benefits of microservices are further
enhanced when they are used in conjunction with containerization. This allows applications to
be distributed and optimized according to their utilization of the platform from within the
container. As a result, we can distinguish two separate architectural patterns: breaking down
an application into building blocks and then rebuilding it in the cloud with a minimal amount
of code change, and on the other hand decoupling data from the application services, so that
the data can be changed without dismantling the application.
In essence, we are taking a monolithic application and converting it into something more
complex and distributed. (Linthicum, 2016)
2.3.4. Osmotic computing
Cloud computing, and especially Infrastructure as a Service (IaaS), provides virtually unlimited
power, scalability and reliability across multiple application domains. However, as a result of
the latest technological advances, and in particular the phenomenon of the Internet of Things
(IoT), the current cloud computing model is changing, with greater emphasis on the proximity
of cloud resources to users. In order to reduce communication delay, storage and processing
capabilities are included in the IoT devices themselves and reside at the periphery of the central
cloud data center.
Osmotic computing is driven by the resource capacity of the network edge and the availability
of data transfer protocols that can support seamless interaction with datacenter-based services.
In highly distributed and federated environments it enables the automatic deployment of
microservices across the edge and cloud infrastructures. Similar to the chemical process of
osmosis, cloud services and microservices are migrated from datacenters to the network edge.
This movement contributes towards increased reliability of IoT support and the required levels
of QoS. It also provides a better understanding of how data from IoT devices can be analyzed
while preventing large-scale cloud computing systems from becoming a bottleneck.
One of the challenges of the current centralized cloud datacenter environments is transferring
large data streams in a timely and reliable manner. To address this, both cloud and edge
resources should be used to set up a hybrid virtual infrastructure, as shown in the following
figure (Villari, Fazio, 2016).
Figure 1
Osmotic computing decomposes applications into microservices and, through dynamic
management strategies, optimizes resource usage in cloud and edge infrastructures. The
breakthrough approach comes from abstracting the management of user data and applications
from the management of networking and security services. This allows for an automated,
flexible and secure microservice deployment solution. (Villari, Fazio, 2016)
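The osmotic placement decision can be caricatured in a few lines. The two-tier model, the utilization threshold and the field names below are purely illustrative assumptions made for this sketch, not part of (Villari, Fazio, 2016):

```python
def place(service_load, cloud_util, edge_util, latency_sensitive):
    """Decide where a microservice should run in a hybrid cloud/edge deployment.
    Utilizations and load are fractions of capacity; 0.8 is an assumed cap."""
    if latency_sensitive and edge_util + service_load <= 0.8:
        return "edge"    # stay close to the IoT devices when capacity allows
    if cloud_util <= edge_util:
        return "cloud"   # let the service "diffuse" toward the less loaded side
    return "edge" if edge_util + service_load <= 0.8 else "cloud"

print(place(0.1, cloud_util=0.5, edge_util=0.3, latency_sensitive=True))   # edge
print(place(0.1, cloud_util=0.2, edge_util=0.9, latency_sensitive=False))  # cloud
```

The "osmosis" analogy is the second branch: services flow toward the infrastructure with the lower load pressure, subject to the latency constraint of IoT-facing services.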
2.3.5. Software-defined networking (SDN) and Cloud Computing
Software-defined networking (SDN) separates the networking functions from the lower-level
functionalities. It provides dynamic network management via open interfaces and traffic
distribution optimization. The ability to remove hardware limitations in network
infrastructures has significant benefits that translate into cloud-based environments.
First, it allows all solutions to be run as software on traditional systems or private/public
clouds. It also links the network configuration to the network requirements of the applications
and workloads, allowing configurations that can match any business purpose. SDN is a major
step away from static routing, which does not support dynamic adaptation to an application's
specific needs. The latest trend in the cloud-based SDN vision is that SDN solutions are
abstracted within a separate layer in the cloud-enabled infrastructure. (Linthicum, 2016)
3. Research methodology
3.1 Action research
Action research can be defined as the process of problem discovery and solving through
client-centered and action-oriented methods. There are two types of action research,
participatory and practical. The latter is particularly useful for the chosen topic, as it
tends to solve a particular problem and to produce guidelines for best practice.
According to (Denscombe, 2010), the "action research strategy's purpose is to solve a
particular problem and to produce guidelines for best practice". This method will be
applied to the current research as it allows us to examine the state of the cloud computing
landscape, to evaluate the existing solutions and to propose a prototype design that
combines strategic technical functionalities in order to address the requirement for
resource usage optimization.
3.2 Process research
Process research is regarded as an important qualitative approach in the study of
strategy and organizations, and is particularly useful in the study of networks because
of their inherent dynamics and complex processes (Abrahamsen, Henneberg, 2006).
This approach will be supported with interviews with senior managers from the product
development team of a company that specializes in application delivery control
solutions for cloud environments. The practice of 'academic interventions' will be used
to introduce concepts related to best-case practices for resource optimization, and at
the same time these will be validated using "local knowledge" from cloud solutions
specialists within KEMP Technologies.
4. Problem Statement
The current trend of moving on-site infrastructure to the cloud is driven by constantly
increasing client demands in terms of availability, automation, security and an equally
flexible billing approach. The problem is that, since most current cloud-based networks
are not native but the result of some type of migration from on-premises, they ultimately
tend to inherit the hindrances of the hardware or private cloud infrastructures. For
example, if the on-premises network requires X amount of computing resources in order
to meet the business requirements, literally translating hardware to virtual machines and
migrating these to the cloud means the new infrastructure will require an equal amount of
computing resources. This highlights the importance of changing the approach towards
resource usage in cloud networks by proposing a new, optimized method of application
traffic distribution. The currently available solutions do not address all the challenges.
Thus, clients must mix and match solutions from different vendors in order to replicate
their on-premises environment and benefit from the technological advantages of cloud
computing.
The current cloud market, constituted by many different public cloud providers, is highly
fragmented in terms of interfaces, pricing schemes, virtual machine offers and value-added
features. In this context, a cloud broker can provide intermediation and aggregation
capabilities to enable users to deploy their virtual infrastructures across multiple clouds.
However, most current cloud brokers do not provide advanced service management
capabilities for making automatic decisions based on optimization algorithms. They are
limited in their abilities to select the best-matched cloud to deploy a service, to optimize
the distribution of the different components of a service among the available cloud
providers, or to decide when to move a given service component from one cloud to another
to satisfy some optimization criteria (Lucas-Simarro, Moreno-Vozmediano, Montero,
Llorente, 2012). One of the fundamental benefits of cloud computing is the ability to deploy
new services according to client demand, without additional time needed to acquire
servers and other infrastructure. Organizations are billed only for the resources that are
actually used, compared to the traditional datacenter model, which requires upfront capital
expenditure for projected peak capacity to meet unpredictable and unexpected business
needs.
Figure 2
5. Prototype Development
Based on the research analyzed in the literature review, the proposed prototype
addresses the current requirements of cloud-based networks by combining emerging
technologies and methods with the areas of cloud virtualization and operational
optimization discussed above. The solution is branded as “Inter-Cloud Resource
Manager” and is based on a virtual load-balancing appliance with the following
load-balancing algorithms:
 Weighted Round Robin – Requests are dispatched to each cloud instance
proportionally, based on the assigned weights and in circular order. However, if the
highest weighted server fails, the virtual server with the next highest priority
number will be available to serve clients. The weight for each server is assigned
based on multi-class reasoning. Support for multi-class reasoning is useful in
real applications where users have different privileges, e.g. gold, silver and
bronze, which stand for different levels of service. Such levels of service can be
formalised by means of SLAs. In the Reasoner Engine, the calculation of
throughput and response time is based on log data analysis of the load-balancing
controller. At runtime, the per-request logs are accumulated in the load-balancing
controller log file based on the requests hitting backend resources.
 Weighted Least Connection – If the servers have different resource capacities, the
Weighted Least Connection method is more applicable: the number of active
connections, combined with the various weights defined by the administrator,
generally provides a very balanced utilization of the servers, as it combines the
advantages of both approaches.
 Resource-based (SDN Adaptive) – In traditional networks there is no end-to-end
visibility of network paths, and applications are not always routed optimally. The
prototype is integrated with an SDN controller solution which solves this problem
by making the critical flow pattern data available.
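The first two distribution methods above can be sketched in a few lines. The server names, weights and connection counts are illustrative assumptions, not values taken from the prototype:

```python
import itertools

def weighted_round_robin(servers):
    """Cycle through healthy servers proportionally to their assigned weights.
    servers: list of (name, weight, healthy) tuples."""
    expanded = [name for name, weight, healthy in servers if healthy
                for _ in range(weight)]
    return itertools.cycle(expanded)

def weighted_least_connection(servers, active):
    """Pick the healthy server with the lowest connections-to-weight ratio."""
    return min((s for s in servers if s[2]),
               key=lambda s: active[s[0]] / s[1])[0]

servers = [("gold", 3, True), ("silver", 2, True), ("bronze", 1, True)]
rr = weighted_round_robin(servers)
print([next(rr) for _ in range(6)])
# ['gold', 'gold', 'gold', 'silver', 'silver', 'bronze']
print(weighted_least_connection(servers, {"gold": 4, "silver": 4, "bronze": 4}))
# 'gold' (equal connections, so the highest weight wins)
```

Production implementations typically use a smoother interleaving for weighted round robin; the naive weight expansion here is only meant to show the proportionality.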
The above distribution methods are implemented based on input from the Reasoner
Engine, which determines the routing policy. The policy can be set at design time
based on the result of the reasoner, or at runtime based on periodic observation of
response time and throughput. The predefined SLA is enforced through an algorithm
in the Reasoner Engine that changes the weights of the cloud instances and, as a result,
the network traffic distribution pattern. The algorithm will be based on metrics like
response time, arrival and departure timestamps, request type and session IDs,
which are particularly useful for load-balancing analysis and examination of the
processing requirements of each type of request on different types of servers. It is
important to note that the SLA will be supported with service templates managed
by the cloud providers. This ensures that the client is able to estimate and achieve a
cost-effective commitment to the provided services. The cloud provider is able to
fulfill its obligation to customers and to use the available resources optimally
(from a cost perspective). This results from the ability to deploy a service with the help
of the orchestrator and to add and delete instances (or other resources) as specified
in the service agreement.
(Faynberg , Lu , Skuler, 2016)
Figure 3
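The runtime weight-adjustment loop described above might look like the following sketch. The halving/increment policy and the weight bounds are assumptions made for illustration, not the actual Reasoner Engine algorithm:

```python
def adjust_weights(weights, avg_response, sla_target):
    """Penalize instances whose observed response time violates the SLA target
    and gradually restore weight to compliant ones (capped at 10)."""
    adjusted = {}
    for instance, weight in weights.items():
        if avg_response[instance] > sla_target:
            adjusted[instance] = max(1, weight // 2)   # shift traffic away
        else:
            adjusted[instance] = min(10, weight + 1)   # slowly earn traffic back
    return adjusted

# cloud-a breaches the 0.5 s target, so its share of traffic is halved.
print(adjust_weights({"cloud-a": 10, "cloud-b": 6},
                     {"cloud-a": 0.9, "cloud-b": 0.2}, sla_target=0.5))
# {'cloud-a': 5, 'cloud-b': 7}
```

Run periodically against the per-request log metrics listed above (response time, timestamps, request type), such a loop feeds the new weights back into the weighted round robin distribution.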
The load-balancing module provides reverse proxy capabilities. This allows for
secure user access and protection from data loss. From the client's point of view, the
reverse proxy appears to be the virtual server/application and so is totally
transparent to the remote user. As all client requests pass through the proxy, it is a
perfect point in a network at which to control traffic while also optimizing performance
with compression, encryption offloading and caching. At a more advanced level, the
proxy may enforce encryption on all traffic and also inspect traffic for suspicious
activity using a Web Application Firewall (WAF).
Figure 4
“Inter-Cloud Resource Manager” also supports a modular broker architecture that
can work with different scheduling strategies and improve the deployment of virtual
services across multiple clouds, based on different optimization criteria (e.g. cost
optimization or performance optimization), different user segmentation (e.g.
budget, performance, instance types, placement, reallocation or load-balancing
constraints), and different environmental conditions (i.e. static vs. dynamic
conditions regarding instance prices, instance types, service workload, etc.)
(Lucas-Simarro, Moreno-Vozmediano, Montero, Llorente, 2012). This is achieved
with an adaptable automation engine that automatically adapts the deployment to
certain events in a pre-established way. In addition, it includes a multi-cloud engine
that interacts with cloud infrastructure APIs, migrates the containers or container
swarms consisting of nodes that hold microservices, and manages the unique
requirements of each cloud. The components of this engine are:
o Cloud manager – periodically collects information about availability and
prices from the cloud providers and acts as a pricing interface for users,
updating the database when new information is available.
o Scheduler – makes placement decisions on behalf of the clients' traffic and
microservices-based applications, according to the data from the Cloud
manager. Before each decision, the scheduler obtains information about
clouds, instances, prices and so on from the database, and invokes the
particular scheduling strategy specified in the service description.
o Database – contains and manages the information provided by the other
components.
o VM manager – deploys the virtual resources of a service across a set of
cloud providers and manages the deployed resources, collecting data about
the CPU, memory and network usage of each one, which is continuously
updated in the database.
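To make the Scheduler's role concrete, here is a minimal sketch of a cost-optimization placement strategy applied to pricing data of the kind the Cloud manager collects. The data layout and function name are assumptions for illustration, not the actual broker interface:

```python
# Illustrative cost-optimization strategy: among the cloud offers that satisfy
# the service's resource requirement, choose the cheapest. The offer fields
# ("cloud", "cpus", "price") are hypothetical stand-ins for the database records
# maintained by the Cloud manager.
def schedule(offers, required_cpus):
    """offers: list of dicts like {"cloud": ..., "cpus": ..., "price": ...}.
    Returns the name of the selected cloud, or None if no offer qualifies."""
    eligible = [o for o in offers if o["cpus"] >= required_cpus]
    if not eligible:
        return None  # no cloud can currently host the service
    return min(eligible, key=lambda o: o["price"])["cloud"]
```

A performance-optimization strategy would swap the `price` key for a measured performance metric; the modular architecture described above is what allows such strategies to be exchanged per service description.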
From a virtualization point of view the solution is based on containerization and
microservices. This guarantees resource availability to accommodate surges in
network traffic, as well as elasticity and flexibility when deploying. As a
microservices-based virtual appliance it is distributed and optimized according to
the utilization rules of the platform from within the container. The containers, the
swarms and the respective microservices are migrated between the different cloud
providers according to the data collected by the Cloud Manager through the API
interface. Since containers must run inside a Linux machine, the solution also
includes an open-source single-click installer which hides this additional step from
the end user.
5. Results
“Inter-Cloud Resource Manager” optimizes the performance of cloud-based
resources by aligning them with the best-performing cloud providers in terms of
computing resources and pricing. One of the main benefits of using a multi-cloud
brokering solution is the possibility of using the most appropriate type of instance
at every moment. The multi-cloud approach ensures that the virtual instances have
access to the required resources at the most competitive price. We explore the
benefits of this approach by examining the results of deploying the application in
three different geographical instances of Amazon EC2: in the US, the EU and Asia.
These configurations are compared to a multi-cloud deployment of the same
application. The experiment used prices from a selected period of time, which means
that selecting prices from another period may produce different results. Based on
the tests described in the research paper “Scheduling strategies for optimal service
deployment across multiple clouds” (Lucas-Simarro, Moreno-Vozmediano, Montero,
Llorente, 2012), the reported performance improvement is in the range of 3% to 4%.
Figure 5
Additional tests were performed using a cloud-based load balancer from KEMP
Technologies in a Microsoft Azure environment. This helps to estimate the level of
performance optimization gained by adding the load-balancing module to the
“Inter-Cloud Resource Manager”. This module is in charge of distributing the
application traffic between the different cloud providers. At the same time, the
load-balancing module supports the containerization part by managing the client
requests sent to the different container swarms. This is achieved using the
load-balancing algorithms weighted round robin and content switching. The benefits
of the load-balancing solution, as described in the following test results, are
combined and reinforced by the containerization approach. The infrastructure of the
load-balancing test environment consists of:
 1 VLM
 1 FreeRADIUS server
 2 web servers
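A weighted round-robin distribution of the kind used by the load-balancing module can be sketched in a few lines. The server names and weights below are illustrative only, not taken from the test environment:

```python
# Minimal weighted round robin: each backend appears in the rotation a number
# of times proportional to its weight, so a server with weight 2 receives twice
# as many requests as one with weight 1.
import itertools

def weighted_round_robin(weights):
    """weights: {server: integer weight}; yields servers in weight proportion."""
    pool = [server for server, w in weights.items() for _ in range(w)]
    return itertools.cycle(pool)

picker = weighted_round_robin({"web1": 2, "web2": 1})
next_server = next(picker)  # each call returns the next backend in rotation
```

Content switching, by contrast, would inspect the request itself (URL path, headers) and pick a backend pool accordingly, rather than cycling through a fixed rotation.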
Figure 6
For the purpose of the test we compare the native Azure Load Balancer to the
more technically advanced third-party solution, KEMP VLM for Azure. This allows us
to provide a qualitative measurement of the benefits resulting from the
implementation of the advanced technical capabilities of the load-balancing
solution.
Figure 7
Persistence, which can also be referred to as server affinity or server stickiness,
is the property that enables all requests from an individual client to be sent to the
same server in a server farm. When enabled, server persistence reduces the response
time of the virtual server by 20%, which ultimately translates into better
performance from the client's point of view. The load balancer also supports caching
and compression, which further optimize the performance of the cloud-based network
by decreasing the volume of the overall traffic. This lowers the level of computing
resources required to process the same level of traffic as in the non-load-balanced
state.
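Server persistence of the kind described above can be approximated by hashing a client identifier to a fixed backend, so that repeated requests from one client always reach the same server. This is a generic sketch of the technique, not KEMP's implementation; the client IDs and server names are hypothetical:

```python
# Sketch of session affinity via hashing: the same client ID always maps to the
# same backend, as long as the server list is unchanged.
import hashlib

def sticky_server(client_id, servers):
    """Deterministically map a client identifier to one backend server."""
    digest = hashlib.md5(client_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

Production load balancers typically key persistence on a session cookie or source IP rather than an explicit client ID, but the principle, a deterministic mapping from client to backend, is the same.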
As stated earlier in this chapter, the resource usage optimization works on two
levels: the way the clients' traffic and applications are distributed across the
cloud providers, and the nature of the actual solution managing this process. In
order to handle the complexity of this process, the “Inter-Cloud Resource Manager”
requires access to a significant level of computing resources. To meet these demands
the solution is fragmented into microservices, which are distributed using
containerization. The test results reported in the article “Evaluation of containers
as a virtualisation alternative for HEP workloads” (Roy, Washbrook, Crooks, Qin,
2015) are used to provide a quantitative representation of the benefits of using
containerization. The test evaluates the baseline performance of the three chosen
platforms (native, container and VM). The main goal is to determine the relative
performance of applications running under the native and container platforms,
compared to a VM platform running the same test workload. The key performance metric
of interest is the average CPU time taken to process an individual event in the test
sample. At least three iterations of the same test were run to validate per-event
timing consistency across samples. Figure 8 illustrates the value per CPU core for
both of the test servers (labelled Xeon and Avoton) for the 32-bit and 64-bit
versions of the benchmark.
Figure 8
It was observed for all tests that the values for the container platform were within
2% of native performance, which is considered the benchmark. The results show that
the containers had near-native (bare-metal) performance in both scenarios and when
running on both the Xeon and Avoton processors.
6. Discussion
The proposed solution, “Inter-Cloud Resource Manager”, addresses the task of
resource usage optimization in cloud-based networks on two different levels. It
implements the principles of load balancing and applies them to multi-cloud
orchestration. The network traffic is distributed between the different cloud
providers based on a combination of load-balancing algorithms and SLAs, together
with pricing information gathered by the Cloud Manager. This approach results in a
well-balanced business environment in which the provider can segment their client
base and give priority to specific client tiers. From the client's point of view,
this allows greater flexibility in assigning the available resources. For example,
clients can route database-querying traffic to a specific cloud instance and
computation-heavy traffic to a different cloud instance, or even to an on-premises
private cloud.
On the other hand, the prototype of the “Inter-Cloud Resource Manager” is capable of
proactively deploying virtual instances based on resource usage relative to the
pre-defined SLAs. This further optimizes resource usage from the point of view of
user traffic distribution, as the solution is capable of managing the traffic
between the cloud nodes while at the same time deploying new instances on request.
This optimized scalability is possible because the cloud resources are based on two
principles: microservices and containerization. The “Inter-Cloud Resource Manager”
is capable of deploying not only virtual instances, but also just parts of these
instances, according to the functionality requirements of the clients.
The test results reported in the article “Evaluation of containers as a
virtualisation alternative for HEP workloads” (Roy, Washbrook, Crooks, Qin, 2015)
show that the performance of virtual containers depends on the technical
specification of the underlying hardware. Also, the current state of the cloud
computing landscape shows that the majority of clients are investing in hybrid
configurations rather than cloud-native environments. The proposed solution is aimed
at optimizing the performance of cloud-based network resources, but it is
anticipated that ultimately the underlying hardware infrastructure will have a
significant impact on performance.
7. Conclusions & Recommendations
The research on the different aspects of the cloud computing landscape demonstrates
that clients are working with solutions which, regardless of their technically
advanced functionalities and ability to resolve specific issues, do not provide a
comprehensive approach towards resource usage optimization. These sporadic solutions
function in an environment where different cloud providers have different
configurations, which further complicates cloud federation and interoperability.
Combining the functionalities of a standard cloud orchestrator with advanced
load-balancing capabilities, and basing these on microservices and containers, also
addresses the issue of “cloud readiness”. Most of the current applications based in
cloud environments are non-native: they were created on premises and migrated at a
certain point. This is an ongoing process which determines the overall state of
cloud computing and establishes the predominant trend of hybrid configurations,
where clients combine on-premises deployments (including private clouds) with
hyperscale clouds such as AWS, Azure and Google Cloud Platform (GCP). Hybrid
deployments allow customers to selectively leverage cloud services for their needs
without having to completely migrate away from on-premises deployments. This
further reinforces the importance of the proposed product prototype, as it achieves
infrastructure efficiency and business agility through a new operational model
(e.g. automation, self-service, standardized commodity elements) rather than through
performance optimization of individual infrastructure elements (Faynberg, Lu,
Skuler, 2016).
The essence of the product prototype is a combination of existing tools and
resources, such as virtualization, load balancing, network function virtualization,
containerization and microservices. They are structured in such a way that the
overall performance of the network elements is optimized through enhanced
collaboration.
The overwhelming majority of IT infrastructure spend today still goes to solutions
that live on premises or are hosted in non-hyperscale clouds, such as those run by
smaller, more regional Cloud Service Providers (CSPs). For example, as of July 2016,
Gartner estimated that only around 7.5% of what they classify as “System
Infrastructure or IaaS” is hosted in a true hyperscale public cloud, with that
percentage expected to grow to only 17% by 2020.
Open source will continue to gain relevance, and we will see more commercialized
services based on open-source platforms coming to market. The emergence of
open-source platforms such as OpenStack holds great promise for the cloud
ecosystem. It will help eliminate customer 'lock-in' to proprietary cloud
technologies and enable ecosystems of cloud services to be interoperable. This will
permit organizations to have a broad spectrum of choice, leverage best-of-breed
cloud services and ensure interoperability. This further reinforces the requirement
for a solution that can bridge the gap between the different cloud platforms and
allow for optimized utilization of the available resources.
8. References
Abrahim Ladha, Sharbani Pandit, Sanya Ralhan, 2016. The Ethereum Scratch Off Puzzle, s.l.: s.n.
Andrew Lerner, Joe Skorupa, Danilo Ciscato, 29 August 2016. Magic Quadrant for Application Delivery Controllers, s.l.: Gartner.
Brian J.S. Chee and Curtis Franklin, Jr., 2010. Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center. Boca Raton: CRC Press, Taylor and Francis Group, LLC.
Crabb, J., 2014. The BestBuy.com Cloud Architecture. IEEE Software, pp. 91-96.
Denscombe, 2010. The Good Research Guide: For Small-Scale Social Research Projects (4th edition). Berkshire: Open University Press.
Gareth Roy, Andrew Washbrook, David Crooks, Gang Qin, 2015. Evaluation of containers as a virtualisation alternative for HEP workloads. IOPscience.
Igor Faynberg, Hui-Lan Lu, Dor Skuler, 2016. Cloud Computing: Business Trends and Technologies. s.l.: John Wiley & Sons Ltd.
Jerzy Kotowski, Jacek Oko, and Mariusz Ochla, 2015. Deployment Models and Optimization Procedures in Cloud Computing. Wroclaw: Springer International, Wroclaw University of Technology, Wroclaw, Poland.
Jose Luis Lucas-Simarro, Rafael Moreno-Vozmediano, Ruben S. Montero, Ignacio M. Llorente, 2012. Scheduling strategies for optimal service deployment across multiple clouds. Elsevier.
Linthicum, D., 2016. Practical Use of Microservices in Moving Workloads to the Cloud. IEEE Cloud Computing, September.
Linthicum, D., 2016. Software-Defined Networks Meet Cloud Computing. IEEE Cloud Computing, May.
Massimo Villari, Maria Fazio, 2016. Osmotic Computing: A New Paradigm for Edge/Cloud Integration. IEEE Cloud Computing, November.
Morten H. Abrahamsen, Stephan C. Henneberg, 2006. Network picturing: An action research study of strategizing in business networks. Industrial Marketing Management.
Prashant, D. R., 2016. A Survey of Performance Comparison between Virtual Machines and Containers. International Journal of Computer Sciences and Engineering, 4(7), pp. 55-59.
Rafael Moreno-Vozmediano, Rubén S. Montero, Ignacio M. Llorente, 2012. From Virtualized Datacenters to Federated Cloud Infrastructures. IEEE Computer Society.
Walkowiak, K., 2016. Modeling and Optimization of Cloud-Ready and Content-Oriented Networks. s.l.: Springer International Publishing Switzerland.
Xiong, K., 2014. Resource Optimization and Security for Cloud Services. London: ISTE Ltd and John Wiley & Sons, Inc.

More Related Content

What's hot

What's hot (15)

An Efficient Queuing Model for Resource Sharing in Cloud Computing
	An Efficient Queuing Model for Resource Sharing in Cloud Computing	An Efficient Queuing Model for Resource Sharing in Cloud Computing
An Efficient Queuing Model for Resource Sharing in Cloud Computing
 
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...
NEURO-FUZZY SYSTEM BASED DYNAMIC RESOURCE ALLOCATION IN COLLABORATIVE CLOUD C...
 
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...
Neuro-Fuzzy System Based Dynamic Resource Allocation in Collaborative Cloud C...
 
Efficient Resource Sharing In Cloud Using Neural Network
Efficient Resource Sharing In Cloud Using Neural NetworkEfficient Resource Sharing In Cloud Using Neural Network
Efficient Resource Sharing In Cloud Using Neural Network
 
Ieeepro techno solutions 2014 ieee java project - deadline based resource p...
Ieeepro techno solutions   2014 ieee java project - deadline based resource p...Ieeepro techno solutions   2014 ieee java project - deadline based resource p...
Ieeepro techno solutions 2014 ieee java project - deadline based resource p...
 
Cloud Computing IEEE 2014 Projects
Cloud Computing IEEE 2014 ProjectsCloud Computing IEEE 2014 Projects
Cloud Computing IEEE 2014 Projects
 
A New Approach to Volunteer Cloud Computing
A New Approach to Volunteer Cloud ComputingA New Approach to Volunteer Cloud Computing
A New Approach to Volunteer Cloud Computing
 
Cloud computing-ieee-2014-projects
Cloud computing-ieee-2014-projectsCloud computing-ieee-2014-projects
Cloud computing-ieee-2014-projects
 
1732 1737
1732 17371732 1737
1732 1737
 
Parallel and Distributed System IEEE 2015 Projects
Parallel and Distributed System IEEE 2015 ProjectsParallel and Distributed System IEEE 2015 Projects
Parallel and Distributed System IEEE 2015 Projects
 
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTINGA SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
 
A Framework for Multicloud Environment Services
A Framework for Multicloud Environment ServicesA Framework for Multicloud Environment Services
A Framework for Multicloud Environment Services
 
35 content distribution with dynamic migration of services for minimum cost u...
35 content distribution with dynamic migration of services for minimum cost u...35 content distribution with dynamic migration of services for minimum cost u...
35 content distribution with dynamic migration of services for minimum cost u...
 
M035484088
M035484088M035484088
M035484088
 
Agent based Aggregation of Cloud Services- A Research Agenda
Agent based Aggregation of Cloud Services- A Research AgendaAgent based Aggregation of Cloud Services- A Research Agenda
Agent based Aggregation of Cloud Services- A Research Agenda
 

Similar to Resource usage optimization in cloud based networks

IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHMIMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
Associate Professor in VSB Coimbatore
 
On the Optimal Allocation of VirtualResources in Cloud Compu.docx
On the Optimal Allocation of VirtualResources in Cloud Compu.docxOn the Optimal Allocation of VirtualResources in Cloud Compu.docx
On the Optimal Allocation of VirtualResources in Cloud Compu.docx
hopeaustin33688
 

Similar to Resource usage optimization in cloud based networks (20)

DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUD
DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUDDYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUD
DYNAMIC TENANT PROVISIONING AND SERVICE ORCHESTRATION IN HYBRID CLOUD
 
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTINGA SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
A SURVEY ON RESOURCE ALLOCATION IN CLOUD COMPUTING
 
A Survey on Resource Allocation in Cloud Computing
A Survey on Resource Allocation in Cloud ComputingA Survey on Resource Allocation in Cloud Computing
A Survey on Resource Allocation in Cloud Computing
 
Hybrid Based Resource Provisioning in Cloud
Hybrid Based Resource Provisioning in CloudHybrid Based Resource Provisioning in Cloud
Hybrid Based Resource Provisioning in Cloud
 
A CLOUD BROKER APPROACH WITH QOS ATTENDANCE AND SOA FOR HYBRID CLOUD COMPUTIN...
A CLOUD BROKER APPROACH WITH QOS ATTENDANCE AND SOA FOR HYBRID CLOUD COMPUTIN...A CLOUD BROKER APPROACH WITH QOS ATTENDANCE AND SOA FOR HYBRID CLOUD COMPUTIN...
A CLOUD BROKER APPROACH WITH QOS ATTENDANCE AND SOA FOR HYBRID CLOUD COMPUTIN...
 
A Study On Service Level Agreement Management Techniques In Cloud
A Study On Service Level Agreement Management Techniques In CloudA Study On Service Level Agreement Management Techniques In Cloud
A Study On Service Level Agreement Management Techniques In Cloud
 
PROPOSED ONTOLOGY FRAMEWORK FOR DYNAMIC RESOURCE PROVISIONING ON PUBLIC CLOUD
PROPOSED ONTOLOGY FRAMEWORK FOR DYNAMIC RESOURCE PROVISIONING ON PUBLIC CLOUDPROPOSED ONTOLOGY FRAMEWORK FOR DYNAMIC RESOURCE PROVISIONING ON PUBLIC CLOUD
PROPOSED ONTOLOGY FRAMEWORK FOR DYNAMIC RESOURCE PROVISIONING ON PUBLIC CLOUD
 
QoS Based Scheduling Techniques in Cloud Computing: Systematic Review
QoS Based Scheduling Techniques in Cloud Computing: Systematic ReviewQoS Based Scheduling Techniques in Cloud Computing: Systematic Review
QoS Based Scheduling Techniques in Cloud Computing: Systematic Review
 
IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHMIMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
IMPROVEMENT OF ENERGY EFFICIENCY IN CLOUD COMPUTING BY LOAD BALANCING ALGORITHM
 
ESTIMATING CLOUD COMPUTING ROUND-TRIP TIME (RTT) USING FUZZY LOGIC FOR INTERR...
ESTIMATING CLOUD COMPUTING ROUND-TRIP TIME (RTT) USING FUZZY LOGIC FOR INTERR...ESTIMATING CLOUD COMPUTING ROUND-TRIP TIME (RTT) USING FUZZY LOGIC FOR INTERR...
ESTIMATING CLOUD COMPUTING ROUND-TRIP TIME (RTT) USING FUZZY LOGIC FOR INTERR...
 
Cloud Computing: A Perspective on Next Basic Utility in IT World
Cloud Computing: A Perspective on Next Basic Utility in IT World Cloud Computing: A Perspective on Next Basic Utility in IT World
Cloud Computing: A Perspective on Next Basic Utility in IT World
 
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICES
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICESANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICES
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICES
 
B03410609
B03410609B03410609
B03410609
 
On the Optimal Allocation of VirtualResources in Cloud Compu.docx
On the Optimal Allocation of VirtualResources in Cloud Compu.docxOn the Optimal Allocation of VirtualResources in Cloud Compu.docx
On the Optimal Allocation of VirtualResources in Cloud Compu.docx
 
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...
Profit Maximization for Service Providers using Hybrid Pricing in Cloud Compu...
 
1732 1737
1732 17371732 1737
1732 1737
 
Scheduling in CCE
Scheduling in CCEScheduling in CCE
Scheduling in CCE
 
D04573033
D04573033D04573033
D04573033
 
E42053035
E42053035E42053035
E42053035
 
Cobe framework cloud ontology blackboard environment for enhancing discovery ...
Cobe framework cloud ontology blackboard environment for enhancing discovery ...Cobe framework cloud ontology blackboard environment for enhancing discovery ...
Cobe framework cloud ontology blackboard environment for enhancing discovery ...
 

Recently uploaded

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 

Recently uploaded (20)

GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...Driving Behavioral Change for Information Management through Data-Driven Gree...
Driving Behavioral Change for Information Management through Data-Driven Gree...
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024Axa Assurance Maroc - Insurer Innovation Award 2024
Axa Assurance Maroc - Insurer Innovation Award 2024
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Advantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your BusinessAdvantages of Hiring UIUX Design Service Providers for Your Business
Advantages of Hiring UIUX Design Service Providers for Your Business
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 

Resource usage optimization in cloud based networks

  • 1. 1 Resource usage optimization in cloud-based networks Postgraduate Diploma in Innovation Management University of Limerick May 2017 Dimo Iliev
  • 2. 2 Table of Contents Abbreviations..........................................................................................................................................2 Abstract...................................................................................................................................................3 1. Introduction ....................................................................................................................................3 2. Literature Review / Background Research......................................................................................3 2. Research methodology .................................................................................................................12 3. Problem Statement.......................................................................................................................13 4. Prototype Development ...............................................................................................................14 5. Results...........................................................................................................................................18 6. Discussion......................................................................................................................................21 7. Conclusions & Recommendations ................................................................................................22 8. References ....................................................................................................................................23 Abbreviations Job Control Language (JCL) Quality-of-Service (QoS) Service level agreement (SLA) Return of Investment ( ROI) Traffic Conditioning Agreement (TCA) Application delivery control ( ADC) Virtual machine (VM) Service-oriented architecture (SOA) Central processing unit (CPU) Software-defined networking ( SDN) Service-oriented architecture (SOA)
  • 3. 3 Abstract The purpose of this research is to improve the efficiency of the current approach towards resource usage in cloud based environments and to propose solution that combines the latest technical developments and meets the emerging requirements. The adopted approach is to select and examine the available tools and concepts and to combine them in comprehensive product prototype. As this solution consist of different products, currently available on the market , we will test the main components and provide qualitative analysis The final result is prototype of a solution that will answer the requirements of the market in terms of resource usage optimization. 1. Introduction The current state of the cloud computing landscape consists of technically advanced solutions which are yet to be unified under comprehensive approach from technology and business point of view. The paper will structure the current optimization methods in 3 groups- operational optimizations, cloud virtualization and emerging concepts and provide critical review. This will be followed by description of the research methodology , combination of practical action- research model and process research for the development of prototype design of application delivery control solution that meets the technological and business requirements of cloud based environments and leverages emerging networking technologies. The description of the implementation of the solution will be followed by conclusion that will examine the expected results and further challenges. 2. Literature Review / Background Research 2.1 Operational optimization The research by ( Chee, Franklin, 2010) starts with definition of cloud infrastructure and description of the abstraction of the different levels of computing— networking, applications, and user interface. The roots of modern cloud computing are found in the Job Control Language (JCL) scripting language used on IBM mainframe operating systems back in the 1970s. 
The authors discuss the concept of separating applications from the underlying hardware and the benefits this brings to the user, and list it as one of the prerequisites for the development of cloud computing. It allows users to treat the Internet as an abstraction layer, through which cloud computing resources can be accessed from any place with Internet connectivity. The authors link the fundamental idea behind cloud computing, the availability of computing power, to the problem that some processes in science are so computing-intensive that they require expensive computing engines which have been available only to the best-funded projects. This idea is further developed in distributed computing systems, which consist of grid computing and virtualization, offering better utilization of host system resources while maintaining control.

2.1.1 Quality of Service (QoS) and service level agreements (SLA)

The research by Xiong (2014) looks at how management of Quality of Service (QoS) and SLA agreements between cloud service providers and clients shapes the structure of cloud-based resources and facilitates better return on investment (ROI) for both sides, client and provider. According to the author, the SLA sets expectations between a customer and a service provider, helps establish the relationship between these two parties, and as a result sharpens the focus on resource optimization: the service provider minimizes resource usage at each service node while still achieving the predefined SLA.

2.1.2 Traffic differentiation

Performance and price are the two most important components of the dynamic process of offering cloud-based services. Each client has different request characteristics, which should be classified and addressed by the cloud service provider using a "priority structure with preemption-resume" (Xiong, 2014). As a result, the overall cost of the service provider's computing resources is decreased while the integrity of the SLA is preserved.
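Xiong's "priority structure with preemption-resume" can be illustrated with a small simulation. The unit-step model and the request format below are my own simplification, not the cited paper's implementation: a newly arrived higher-priority request interrupts lower-priority work, which later resumes from its remaining service time rather than restarting.

```python
import heapq

def schedule(requests):
    """requests: list of (arrival_time, priority, service_time), priority 0 = highest.
    Returns a dict mapping request index -> completion time, simulated in unit steps."""
    events = sorted(enumerate(requests), key=lambda kv: kv[1][0])
    ready = []                    # heap of (priority, arrival, idx, remaining_service)
    t, i, done = 0, 0, {}
    while i < len(events) or ready:
        # admit every request that has arrived by time t
        while i < len(events) and events[i][1][0] <= t:
            idx, (arr, pri, svc) = events[i]
            heapq.heappush(ready, (pri, arr, idx, svc))
            i += 1
        if not ready:
            t = events[i][1][0]   # idle until the next arrival
            continue
        pri, arr, idx, rem = heapq.heappop(ready)
        t += 1                    # serve one unit; a higher-priority arrival preempts next loop
        rem -= 1
        if rem:
            heapq.heappush(ready, (pri, arr, idx, rem))   # preempted work resumes later
        else:
            done[idx] = t
    return done
```

In this toy run, a priority-1 request arriving at t=1 preempts a longer priority-2 request that started at t=0; the interrupted request then resumes and finishes with its remaining work intact.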
This idea is further developed by Walkowiak (2016), who states that, in order to improve system performance, data must be separated into several streams, each transmitted using a unique multicast tree. Separating the client request into multiple substreams is characterized by lower granularity, decreased system saturation and easier management of available resources. Faynberg, Lu and Skuler (2016) review several methods for client request management based on processing packets differently according to their specific types, using packet scheduling disciplines that govern how packets are transmitted. The authors suggest the implementation of an admission control system that allows client requests to be scheduled per class of service.
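A common mechanism underlying such per-class scheduling and policing is the token bucket. The sketch below is a textbook version, not taken from the cited authors: traffic conforming to a profile of rate r tokens per second with burst b passes, and the rest is policed (dropped, or in a real deployment possibly remarked to a lower class).

```python
class TokenBucket:
    """Minimal token-bucket policer: packets conforming to the traffic
    profile (rate tokens/sec, burst capacity) pass; the rest are policed."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, now, size=1):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True       # in-profile: forward at the contracted class
        return False          # out-of-profile: police (drop or remark)
```

With rate 1 and burst 2, two back-to-back packets at t=0 pass, a third is policed, and a packet two seconds later passes again once tokens have accumulated.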
A key point in their study is the usage of the "Traffic Conditioning Agreement (TCA), and how this relates to traffic profiles and the relevant methods of policing" (Faynberg, Lu and Skuler, 2016). They suggest that these methods prevent network congestion and help providers operate in accordance with the predefined SLA. The differentiation of traffic and client requests is also discussed by Crabb (2014), who illustrates a cloud architecture with multiple levels that allows traffic to be routed according to application type and business purpose. This, combined with caching and compression, which are mandatory for cloud networks, allows a highly scalable system to be built that can respond to business requirements in a timely manner and adhere to the predefined SLA. The question of traffic differentiation further highlights the significance of the load balancing components of cloud-based networks. Native or third party, these application delivery control (ADC) solutions are capable of managing the transmission of content combined with advanced layer 7 scheduling methods (Lerner, Skorupa and Ciscato, 2016).

2.1.3 Cloud federation

The literature suggests that, in order to make full use of the benefits of cloud infrastructure, clients should be given access to the functionality of the cloud operating system through a common interface. Currently there are several popular solutions that offer their own application program interface (API), such as "Amazon EC2 or VMware's vCloud" (Chee and Franklin, 2010). This fragmented landscape makes it difficult for different information technology systems and software applications to work together, send and receive data, and use the exchanged information. It is also a hindrance to running programs in operating systems other than the one in which they were developed without the need to rewrite the code.
As a result there is a growing need for cloud adapters capable of supporting cloud federations. From an optimization point of view this is a key development, as it helps cloud providers and IT companies to work together and pool their resources (Moreno-Vozmediano, Montero and Llorente, 2012). This leads to the definition of the different types of cloud federation architecture available (bursting, broker, aggregated and multitier) and of how they enable the optimization of resource usage in cloud-based networks. As reported by Crabb (2014), a successful and fault-tolerant cloud architecture "requires your cloud application to work statelessly between cloud providers and regions." This is a fundamental concept in cloud architecture, as it supports VM replication between different cloud operators without a negative impact on CAPEX and OPEX.

2.1.4 Resource scheduling

Al-Shaikh, Khattab, Sharieh and Sleit (2016) develop the concept that the best assignment of a given resource can be found by maximizing the profit obtained and the number of times the resource is utilized. For this purpose they use a greedy algorithm, an algorithmic paradigm that makes the locally best choice at each stage in the hope of finding the overall optimum. They propose this as a solution and further analyze it in terms of runtime complexity. The solution combines two optimization problems, the knapsack problem and the activity-selection problem, and is ultimately implemented in the Java programming language.

2.2 Cloud Virtualization

According to the authors of the research paper "From Virtualized Datacenters to Federated Cloud Infrastructures", virtualization plays a key role in separating the compute, network and storage service platforms from the physical hardware on which they are based. This process allows cloud-based datacenters to benefit from server consolidation and on-demand provisioning capabilities, which lead to better utilization rates and significant cost and energy savings (Moreno-Vozmediano, Montero and Llorente, 2012). In order to guarantee optimal performance of the virtual infrastructure, the authors suggest the implementation of a virtual infrastructure manager.

2.2.1 Virtual Infrastructure Manager

The primary purpose of this component is to orchestrate the deployment of virtual resources and to manage the physical and virtual infrastructures for command-and-control service provisioning.
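The greedy profit-density idea from the resource scheduling discussion above (Al-Shaikh et al.) can be sketched roughly as follows. This is only the knapsack-style half of their approach, recast in Python rather than their Java implementation, and the request fields are invented for illustration:

```python
def greedy_allocate(requests, capacity):
    """Greedy sketch of a profit-per-unit heuristic: sort requests by
    profit density (profit per CPU unit) and admit while capacity lasts.
    Returns the names of admitted requests and the capacity consumed."""
    order = sorted(requests, key=lambda r: r["profit"] / r["cpu"], reverse=True)
    chosen, used = [], 0
    for r in order:
        if used + r["cpu"] <= capacity:     # admit only if the request still fits
            chosen.append(r["name"])
            used += r["cpu"]
    return chosen, used
```

Like any greedy heuristic for 0/1 knapsack, this is fast but not guaranteed optimal; the cited authors pair it with activity selection and analyze the runtime complexity separately.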
The authors of "Deployment Models and Optimization Procedures in Cloud Computing" (Kotowski, Oko and Ochla, 2015) mention that an orchestrator is used as a workflow management solution in the data center. The orchestrator automates the creation, monitoring and deployment of resources. According to the authors, any IT organization can use an orchestrator to increase efficiency and reduce operational costs, facilitating the completion of objectives across different departments. The orchestrator provides an environment with shared access to common data. By using an orchestrator, an enterprise can evolve and automate key processes between groups and consolidate repetitive manual tasks. It can automate cross-functional team processes and enforce best practices for incident, change and service management (Walkowiak, 2016).

According to Moreno-Vozmediano, Montero and Llorente (2012), the advantages in terms of resource optimization are represented by the availability of the following features:

 Basic: adaptability, interoperability, scalability and standardization
 Advanced: server consolidation, on-the-fly resizing of the physical infrastructure, service workload balance, server replication and dynamic partitioning

According to the authors, the role of the cloud infrastructure manager is to control the usage of datacenter resources in order to deliver an agile, secure and independent multitenant environment for services, separated from the underlying physical infrastructure, with unique interfaces and APIs for working with the cloud-based networks (Moreno-Vozmediano, Montero and Llorente, 2012). The connection to the underlying infrastructure is provided by hypervisor, network, storage and information drivers/adapters. This helps define the role of the components of the cloud OS:

 Virtual machine manager
 Storage manager
 Network manager
 Image manager
 Information manager

2.2.2 Authentication and authorization

The authors also discuss the significance of authentication and authorization not only as a security solution: "Authorization policies control and manage user privileges and permissions to access different cloud resources, such as VMs, networks, or storage systems." (Moreno-Vozmediano, Montero and Llorente, 2012), but also as a way to control the amount of resources (CPU, memory, network bandwidth or disk space) that can be accessed by a specific user.

2.2.3 Federation manager

The federation manager is a key component of the cloud-based network, as it provides the basic mechanisms for deployment, ongoing management, creation and deletion of virtual resources, monitoring, user authentication in remote cloud instances, access control management, remote resource permissions and tools for creating images on different clouds regardless of the format. Further optimization is supported by the use of cross-site networks and cross-site VM migration, which allows increased cooperation and interoperability between the different cloud-based networks.

2.2.4 Scheduler, administrative tools and service manager

The role of the scheduler is presented from two points of view: the physical host level and the cloud level. The decision on how to manage the resources, such as physical CPU or memory, belonging to each VM is combined with the decision regarding on which host to place a specific VM. In a federated or hybrid environment the scheduler can decide to deploy the VM in a remote cloud when insufficient resources are available in the local infrastructure (Moreno-Vozmediano, Montero and Llorente, 2012), which optimizes the overall performance of the network. This process is further streamlined by the service manager, which is responsible for managing and further optimizing the performance of multitier services. This involves accepting the service and managing its lifecycle by interacting with the scheduler and the administrative tools. The service manager is an important component of cloud infrastructure optimization, as it controls service elasticity through different mechanisms for autoscaling. The importance of automation in the cloud-based network is also discussed in the article "The BestBuy.com Cloud Architecture" (Crabb, 2014).
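The scheduler's local-versus-remote placement decision described above can be sketched as follows. The host and cloud dictionaries are invented for illustration and do not correspond to any real cloud API: the rule prefers the local host with the most free CPU and bursts to the cheapest remote cloud only when no local host fits.

```python
def place_vm(vm_cpu, hosts, remote_clouds):
    """Cloud-bursting sketch: place the VM on the local host with the
    most free CPU; fall back to the cheapest remote cloud when no host
    has enough capacity. Mutates the chosen host's free_cpu."""
    candidates = [h for h in hosts if h["free_cpu"] >= vm_cpu]
    if candidates:
        best = max(candidates, key=lambda h: h["free_cpu"])
        best["free_cpu"] -= vm_cpu            # reserve local capacity
        return ("local", best["name"])
    cloud = min(remote_clouds, key=lambda c: c["price_per_cpu_hour"])
    return ("remote", cloud["name"])          # burst to the cheapest provider
```

A real scheduler would of course weigh far more criteria (memory, affinity, SLA class, data locality); the point is only the two-level decision structure.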
The author points out that "Automating the build out of infrastructure is essential for scaling elastically and recovering quickly from any failure." Designing any task to be performed manually will subsequently lead to a bottleneck when attempting to provision a large number of instances, and will have a negative impact on overall network performance.

2.3 Emerging concepts

2.3.1 Decentralized peer-to-peer networks
The fundamental ideas behind cloud computing (increased collaboration, availability, flexibility and decentralization) are further developed in blockchain-based distributed computing platforms featuring smart contract (scripting) functionality, such as Ethereum. This open-source, public, blockchain-based platform was developed in response to the need to build decentralized applications shared among a distributed network of computers. The result is a more open, transparent system, which can be verified by the public and which fundamentally changes the way we think about exchanging value and assets, enforcing contracts and sharing data across industries. According to Ladha, Pandit and Ralhan (2016), Ethereum represents a new innovation in the field of cryptocurrency, which had become relatively stagnant, by introducing an entire programming language and development environment built into a cloud-based network.

2.3.2 Virtual containers

The concept of virtual containers refers to the process of creating environments where applications run in a framework of virtualized operating-system resource areas, in which the applications have "ownership" of the platform. According to Prashant (2016), virtualization has key importance in the field of abstraction and resource management. The problem is that the additional layers of abstraction introduced by virtualization require striking a balance between performance and cost in a cloud environment where everything is on a pay-per-use basis. Virtual containers address these issues and are perceived to be the future of virtualization. The reviewed literature indicates that the question of optimizing the resource usage of cloud-based networks is predominant in the latest trends of cloud computing. Against the background of increasing cloud market revenue, we see the emergence of more distributed, trusted, intelligent and industry-specialized infrastructure.
Some of the key developments are machine learning, which is facilitated by the cloud in terms of sufficient computing power and the opportunity to collaborate and to easily develop and deploy applications on top of cloud platforms. The idea of serverless computing and containers, which allows users to move beyond the traditional construct of virtual machines and servers, is categorised as "next-generation computing". In general, the market motion is towards closer collaboration, automation and hybrid solutions, and towards leaner, cheaper solutions that include and integrate PaaS capabilities, cloud management and container support.

2.3.3 Microservices
Microservices are programs with a single task and a connectivity mechanism. They are the building blocks of the service-oriented architecture (SOA) architectural style, which consists of distributed services. The monolithic approach to software architecture, represented as one long string of code, is no longer compatible with the complexity of current cloud-based environments. A modular software architecture offers the following benefits:

 It is easier to make changes, update and test
 It is easier to introduce new technology trends
 Start-up time for software is decreased
 It is easier to mix and match modules with different profiles
 Modules make the process of constructing applications easier

In terms of cloud-based service-oriented architecture (SOA), the role of microservices can be formulated in the following manner: the basic idea is that in an SOA environment we have remote services which can be leveraged using some type of infrastructure control and used as if they were local to the cloud-based application. As a result, each of our applications is made up of a multitude of local and remote application services. Furthermore, they are location- and platform-independent, which means that they can reside on premises or in any public cloud. Microservices facilitate the migration from on premises to the cloud, as they are building blocks which can be used to rebuild an application in the cloud. Therefore we do not have to start building the application from the initial step. The benefits of microservices are further enhanced when they are used in conjunction with containerization. This allows applications to be distributed and optimized according to their utilization of the platform from within the container.
As a result we can distinguish two separate architectural patterns: breaking an application down into building blocks and then rebuilding it in the cloud with a minimal amount of code change, and, on the other hand, decoupling the data from the application services. This way the data can be changed without dismantling the application. In essence, we are taking a monolithic application and converting it into something that is more complex and distributed. (Linthicum, 2016)

2.3.4 Osmotic computing

Cloud computing, and especially Infrastructure as a Service (IaaS), provides vast power, scalability and reliability across multiple application domains. However, as a result of the latest technological advances, and in particular the phenomenon of the Internet of Things (IoT), the current cloud computing model is changing, with a bigger emphasis on the proximity of cloud resources to the users. In order to reduce communication delay, storage and processing capabilities are included in the IoT devices and reside at the periphery of the central cloud data center. Osmotic computing is driven by the resource capacity of the network edge and the availability of data transfer protocols that can support seamless interaction with datacenter-based services. In highly distributed and federated environments it enables the automatic deployment of microservices across the edge and cloud infrastructures. Similar to the chemical process of osmosis, cloud services and microservices are migrated across datacenters to the network edge. This movement contributes to increased reliability of IoT support and to specific levels of QoS. As a result, we gain a better understanding of how the data from IoT devices can be analyzed while preventing large-scale cloud computing systems from becoming a bottleneck. One of the challenges of the current centralized cloud datacenter environments is to transfer large data streams in a timely and reliable manner. To address this, both cloud and edge resources should be used to set up a hybrid virtual infrastructure, as shown in the following picture (Villari and Fazio, 2016).

Figure 1

Osmotic computing decomposes an application into microservices and, through dynamic management strategies, optimizes the resource usage in cloud and edge infrastructures. The breakthrough approach comes from abstracting the management of user data and applications from the management of networking and security services. This allows for an automated, flexible and secure microservice deployment solution. (Villari and Fazio, 2016)
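The osmotic "flow" of microservices toward the edge can be caricatured by a simple placement rule. Everything here (the field names, the latency threshold logic, the single CPU capacity number) is an invented illustration, not Villari and Fazio's model: latency-sensitive microservices migrate to the edge while edge capacity remains, and everything else stays in the central datacenter.

```python
def osmotic_placement(services, edge_capacity):
    """Illustrative osmosis rule: fill edge capacity with the most
    latency-sensitive microservices first; the rest run centrally."""
    placement, used = {}, 0
    # most latency-sensitive (lowest tolerated latency) first
    for svc in sorted(services, key=lambda s: s["max_latency_ms"]):
        if svc["latency_sensitive"] and used + svc["cpu"] <= edge_capacity:
            placement[svc["name"]] = "edge"
            used += svc["cpu"]
        else:
            placement[svc["name"]] = "datacenter"
    return placement
```

In practice such decisions would be revisited continuously as load and capacity change, which is exactly the dynamic management the osmotic model argues for.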
2.3.5 Software-Defined Networking (SDN) and cloud computing

Software-defined networking (SDN) separates the networking functions from the lower-level functionalities. It provides dynamic network management via open interfaces and traffic distribution optimization. The ability to remove the hardware limitations in network infrastructures has significant benefits that translate into cloud-based environments. First, it allows all solutions to run as software on traditional systems or on private/public clouds. Second, it links the network configuration to the network requirements of the applications and workloads, allowing a configuration that can match any business purpose. SDN is a major step away from static routing, which does not support dynamic adaptation to an application's specific needs. The latest trend in the cloud-based SDN vision is that SDN solutions are abstracted within a separate layer in the cloud-enabled infrastructure. (Linthicum, 2016)

3. Research Methodology

3.1 Action research

Action research can be defined as the process of problem discovery and solving through client-centered and action-oriented methods. There are two types of action research, participatory and practical. The latter is particularly useful for the chosen topic, as it tends to solve a particular problem and to produce guidelines for best practice. According to Denscombe (2010), the "action research strategy's purpose is to solve a particular problem and to produce guidelines for best practice". This method will be applied to the current research, as it allows us to examine the state of the cloud computing landscape, to evaluate the existing solutions and to propose a prototype design that combines strategic technical functionalities in order to address the requirement for resource usage optimization.
3.2 Process research

Process research is regarded as an important qualitative approach in the study of strategy and organizations, and is particularly useful in the study of networks because of their inherent dynamics and complex processes (Abrahamsen and Henneberg, 2006). This approach will be supported with interviews with senior managers from the product development team of a company that specializes in application delivery control solutions for cloud environments. The practice of 'academic interventions' will be used to introduce concepts related to best practices for resource optimization, and at the same time these will be validated using "local knowledge" from cloud solutions specialists within KEMP Technologies.

4. Problem Statement

The current trend of moving on-site infrastructure to the cloud is driven by constantly increasing client demands in terms of availability, automation, security and an equally flexible billing approach. The problem is that, since most current cloud-based networks are not native but the result of some type of migration from on premises, they ultimately tend to inherit the hindrances of the hardware or private cloud infrastructures. For example, if the on-premises network requires X amount of computing resources in order to meet the business requirements, then literally translating hardware to virtual machines and migrating these to the cloud produces a new infrastructure that requires an equal amount of computing resources. This highlights the importance of changing the approach to resource usage in cloud networks by proposing a new, optimized method of application traffic distribution. The currently available solutions do not address all the challenges. Thus, clients must mix and match solutions from different vendors in order to replicate their on-premises setup and benefit from the technological advantages of cloud computing. The current cloud market, constituted by many different public cloud providers, is highly fragmented in terms of interfaces, pricing schemes, virtual machine offers and value-added features.
In this context, a cloud broker can provide intermediation and aggregation capabilities to enable users to deploy their virtual infrastructures across multiple clouds. However, most current cloud brokers do not provide advanced service management capabilities for making automatic decisions based on optimization algorithms. They are limited in their ability to select the best-matching cloud on which to deploy a service, to optimize the distribution of the different components of a service among the available cloud providers, or to decide when to move a given service component from one cloud to another to satisfy some optimization criterion (Lucas-Simarro, Moreno-Vozmediano, Montero and Llorente, 2012). One of the fundamental benefits of cloud computing is the ability to deploy new services according to client demand, without the additional time needed to acquire servers and other infrastructure. Organizations are billed only for the resources actually used, compared to the traditional datacenter model, which requires upfront capital expenditure for projected peak capacity to meet unpredictable and unexpected business needs.

Figure 2

5. Prototype Development

Based on the research analyzed in the literature review, the proposed prototype addresses the current requirements of cloud-based networks by combining emerging technologies and methods with the areas of cloud virtualization and operational optimization discussed above. The solution is branded "Inter-Cloud Resource Manager" and is based on a virtual load balancing appliance with the following load balancing algorithms:

 Weighted Round Robin: requests are dispatched to each cloud instance proportionally, based on the assigned weights and in circular order. If the highest-weighted server fails, the virtual server with the next highest priority number becomes available to serve clients. The weight for each server is assigned based on multi-class reasoning. The support of multi-class reasoning is useful in real applications where users have different privileges, e.g. gold, silver and bronze, which stand for different levels of service. Such levels of service can be formalised by means of SLAs. In the Reasoner Engine, the calculation of throughput and response time is based on log data analysis of the load balancing controller. At runtime, the per-request logs are accumulated in the load balancing controller log file as requests hit the backend resources.

 Weighted Least Connection: if the servers have different resource capacities, the Weighted Least Connection method is more applicable. The number of active connections, combined with the various weights defined by the administrator, generally provides a very balanced utilization of the servers, as it employs the advantages of both worlds.

 Resource-based (SDN Adaptive): in traditional networks there is no end-to-end visibility of network paths, and applications are not always routed optimally. The prototype is integrated with an SDN controller solution, which solves this problem by making the critical flow pattern data available.

The above distribution methods are implemented based on input from the Reasoner Engine, which determines the routing policy. The policy can be set at design time, based on the result of the reasoner, or at runtime, based on periodic observation of response time and throughput. The predefined SLA is enforced through an algorithm in the Reasoner Engine that changes the weights of the cloud instances and, as a result, the network traffic distribution pattern.
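The Weighted Round Robin policy above can be sketched in a few lines. The naive schedule expansion below is illustrative only; it ignores refinements such as smooth (interleaved) weighting and the failover behavior described above.

```python
from itertools import cycle

def weighted_round_robin(servers):
    """servers: {name: weight}. Yields server names in proportion to
    their weights, in circular order -- a deliberately naive expansion
    of the Weighted Round Robin policy."""
    schedule = [name for name, w in sorted(servers.items()) for _ in range(w)]
    return cycle(schedule)
```

With weights {"a": 2, "b": 1}, server "a" receives twice as many requests as "b" over any full cycle, which is exactly the proportional dispatch the prototype's first algorithm describes.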
The algorithm will be based on metrics such as response time, arrival and departure timestamps, request type and session IDs, which are particularly useful for load balancing analysis and for examining the processing requirements of each type of request on different types of servers. It is important to note that the SLA will be supported with service templates managed by the cloud providers. This ensures that the client is able to estimate and achieve a cost-effective commitment to the provided services, and that the cloud provider is able to fulfill its obligations to the customers while using the available resources optimally (from a cost perspective). This results from the ability to deploy a service with the help of the orchestrator and to add and delete instances (or other resources) as specified in the service agreement.
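The metric-driven weight adjustment attributed to the Reasoner Engine could look roughly like the sketch below. The specific formula (weight proportional to response-time headroom below the SLA target, floored at 1) is my own invention for illustration, not the prototype's actual algorithm:

```python
def recompute_weights(metrics, sla_ms):
    """Hedged sketch of a reasoner rule: instances responding well
    under the SLA target get weight proportional to their headroom;
    instances at or over the SLA target are weighted down to 1.
    'metrics' maps instance -> average response time in ms, as would
    be derived from the load balancing controller's per-request logs."""
    weights = {}
    for inst, rt in metrics.items():
        if rt >= sla_ms:
            weights[inst] = 1                                  # SLA breach: minimal share
        else:
            weights[inst] = max(1, int(10 * (sla_ms - rt) / sla_ms))
    return weights
```

Feeding the result into a weighted round robin dispatcher closes the loop: slow or SLA-breaching instances automatically receive a smaller share of new traffic until their measured response times recover.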
(Faynberg, Lu and Skuler, 2016)

Figure 3

The load balancing module provides reverse proxy capabilities. This allows for secure user access and protection from data loss. From the client's point of view, the reverse proxy appears to be the virtual server/application and so is totally transparent to the remote user. As all client requests pass through the proxy, it is a perfect point in a network at which to control traffic while also optimizing performance with compression, encryption offloading and caching. At a more advanced level, the proxy may enforce encryption on all traffic and also inspect traffic for suspicious activity using a Web Application Firewall (WAF).

Figure 4

"Inter-Cloud Resource Manager" also supports a modular broker architecture that can work with different scheduling strategies and improve the deployment of virtual services across multiple clouds, based on different optimization criteria (e.g. cost optimization or performance optimization), different user segmentation (e.g. budget, performance, instance types, placement, reallocation or load balancing constraints) and different environmental conditions (i.e. static vs. dynamic conditions regarding instance prices, instance types, service workload, etc.) (Lucas-Simarro, Moreno-Vozmediano, Montero and Llorente, 2012). This is achieved with an adaptable automation engine that automatically adapts the deployment to certain events in a pre-established way. In addition, it includes a multi-cloud engine that interacts with cloud infrastructure APIs, migrates the containers or container swarms consisting of nodes that hold microservices, and manages the unique requirements of each cloud. The components of this engine are:

o Cloud manager: periodically collects information about availability and price from each cloud provider and acts as a pricing interface for users, updating the database when new information is available.
o Scheduler: makes placement decisions on behalf of client traffic and microservice-based applications according to the data from the cloud manager. Before each decision, the scheduler obtains information about clouds, instances, prices and more from the database, and invokes the particular scheduling strategy specified in the service description.
o Database: contains and manages the information provided by the other components.
o VM manager: deploys the virtual resources of a service across a set of cloud providers and manages the deployed resources, collecting data about the CPU, memory and network usage of each one, which is continuously updated in the database.

From a virtualization point of view the solution is based on containerization and microservices. This guarantees resource availability to accommodate surges in network traffic, as well as elasticity and flexibility when deploying.
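A minimal cost-optimization strategy for the broker's Scheduler, working over the price data gathered by the Cloud manager, can be sketched as below. The provider names and prices are made up for illustration:

```python
def cheapest_deployment(demand, offers):
    """Cost-optimization sketch: for each requested instance type,
    pick the provider currently quoting the lowest price.
    demand: {instance_type: count}; offers: {instance_type: {provider: price}}.
    Returns the chosen provider per type and the total hourly cost."""
    plan, total = {}, 0.0
    for itype, count in demand.items():
        provider, price = min(offers[itype].items(), key=lambda kv: kv[1])
        plan[itype] = provider
        total += price * count
    return plan, total
```

A performance-optimization strategy would use the same structure but minimise, say, measured response time per provider instead of price, which is why the broker keeps the strategy pluggable.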
As a microservices based virtual appliance it will be distributed and optimized according to the rules of utilization of the platform from within the container. The containers, the swarms and the respective microservices are migrated between the different cloud providers according to data collected by the Cloud Manager trough the API interface. Since
  • 18. 18 containers must run inside Linux machine the solution also includes open source single click installer which hides this additional step form the end user. 5. Results “Inter-Cloud Resource Manager” optimizes the performance of the cloud based resources by aligning them with the best performing , from computing resources and pricing point of view cloud providers. One of the main benefits of using multi-cloud brokering solution is the possibility of using the most appropriate type of instance in every moment. The multi-cloud approach ensures that the virtual instances have access to the required resources at the most competitive price. We will explore the benefits of the this approach by examining the results of the application deployment in three different geographical instances of Amazon EC2, in US, EU and Asia. These configurations are compared to multi-cloud deployment of the same application. The experiment was done regarding prices from a selected period of time, which means that selecting prices from another period may provide different results among them. Based on the performed test described in the research paper “Scheduling strategies for optimal service performance across multiple clouds” (Lucas-Simarro, Moreno-Vozmediano,. Montero, Llorente, 2012) the reported performance improvement is in the range of 3% to 4%. Figure 5 Additional test are performed using cloud based load balancer from KEMP Technologies in Microsoft Azure environment. This helps to estimate the level of performance optimization by adding loadbalancing module to the “Inter-Cloud Resource Manager”. This module is in charge of distributing the application traffic
between the different cloud providers. At the same time, the load balancing module supports the containerization part by managing the client requests sent to the different container swarms. This is achieved by using the load balancing algorithms weighted round robin and content switching. The benefits of the load balancing solution, as described in the following test results, are combined and reinforced by the containerization approach. The infrastructure of the load balancing test environment consists of:

• 1 VLM (KEMP Virtual LoadMaster)
• 1 FreeRADIUS server
• 2 web servers

Figure 6

For the purpose of the test we compare the native Azure Load Balancer to the more technically advanced third-party solution, KEMP VLM for Azure. This allows us to provide a qualitative measurement of the benefits resulting from the advanced technical capabilities of the load balancing solution.
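The weighted round robin algorithm named above can be illustrated with a minimal sketch. This is not KEMP's implementation; the backend names and weights are hypothetical and only show how weights skew the request distribution.

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Yield backend names proportionally to their integer weights.

    `servers` is a list of (name, weight) pairs; a server with
    weight 2 receives twice as many requests as one with weight 1.
    """
    # Expand each server into `weight` slots, then rotate forever.
    pool = [name for name, weight in servers for _ in range(weight)]
    return cycle(pool)

# Hypothetical backend pool: web1 is twice as powerful as web2.
backends = [("web1", 2), ("web2", 1)]
scheduler = weighted_round_robin(backends)
first_six = [next(scheduler) for _ in range(6)]
print(first_six)  # ['web1', 'web1', 'web2', 'web1', 'web1', 'web2']
```

A production load balancer interleaves the weighted slots more smoothly and adjusts weights dynamically, but the proportional outcome is the same.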
Figure 7

Persistence, also referred to as server affinity or server stickiness, is the property that enables all requests from an individual client to be sent to the same server in a server farm. When enabled, server persistence reduces the response time of the virtual server by 20%, which ultimately translates into better performance from the client's point of view. The load balancer also supports caching and compression, which further optimize the performance of the cloud-based network by decreasing the overall traffic volume. This lowers the computing resources required to process the same level of traffic as in the non-load-balanced state. As stated earlier in this chapter, the resource usage optimization works on two levels: the way the clients' traffic and applications are distributed across the cloud providers, and the nature of the actual solution managing this process. In order to handle the complexity of this process, the "Inter-Cloud Resource Manager" requires access to a significant level of computing resources. To meet these demands the solution is fragmented into microservices which are distributed using containerization. The test results reported in the article "Evaluation of containers as a virtualisation alternative for HEP workloads" (Roy, Washbrook, Crooks, Qin, 2015) are used to provide a quantitative representation of the benefits of containerization. The test evaluates the baseline performance of the three chosen platforms (native, container and VM). The main goal is to determine the relative performance of applications running under native and container platforms, compared to a VM platform running the same test workload. The key performance metric of interest is the average CPU time taken to process an
individual event in the test sample. At least three iterations of the same test are run to validate per-event timing consistency across samples. Figure 8 illustrates the value per CPU core for both of the test servers (labelled Xeon and Avoton) for the 32-bit and 64-bit versions of the benchmark.

Figure 8

It is observed for all tests that the values for the container platform were within 2% of native performance, which is considered the benchmark. The results show that the containers achieved near-native (bare-metal) performance in both scenarios and when running on both the Xeon and Avoton processors.

6. Discussion

The proposed solution, the "Inter-Cloud Resource Manager", addresses the task of resource usage optimization in cloud-based networks on two different levels. It implements the principles of load balancing and applies them to multi-cloud orchestration. The network traffic is distributed between the different cloud providers based on a combination of load balancing algorithms and SLAs, together with pricing information gathered by the Cloud Manager. This approach results in a well-balanced business environment, where the provider can segment their client base and give priority to specific client tiers. From the client's point of view this allows greater flexibility
in assigning the available resources. For example, clients can route database-query traffic to one cloud instance and computation-heavy traffic to a different cloud instance, or even to an on-premises private cloud. On the other hand, the prototype of the "Inter-Cloud Resource Manager" is capable of proactively deploying virtual instances based on resource usage measured against the pre-defined SLAs. This further optimizes resource usage from the point of view of user traffic distribution, as the solution can manage the traffic between the cloud nodes while at the same time deploying new instances on request. This optimized scalability is possible because the cloud resources are built on two principles: microservices and containerization. The "Inter-Cloud Resource Manager" is capable of deploying not only whole virtual instances, but also just parts of these instances, according to the functionality requirements of the clients. The test results reported in the article "Evaluation of containers as a virtualisation alternative for HEP workloads" (Roy, Washbrook, Crooks, Qin, 2015) show that the performance of virtual containers depends on the technical specification of the underlying hardware. In addition, the current state of the cloud computing landscape shows that the majority of clients are investing in hybrid configurations rather than cloud-native environments. The proposed solution is aimed at optimizing the performance of network resources based in the cloud, but it is anticipated that ultimately the underlying hardware infrastructure will have a significant impact on the performance.

7. Conclusions & Recommendations

The research on the different aspects of the cloud computing landscape demonstrates that clients are working with solutions which, regardless of their technically advanced functionalities and their ability to resolve specific issues, do not provide a comprehensive approach towards resource usage optimization.
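The proactive, SLA-driven deployment behaviour described in the Discussion can be reduced to a simple rule: when a node's measured utilization approaches its SLA limit, request a new instance before the limit is breached. The following sketch is purely illustrative; the node names, the 80% SLA threshold and the 10% headroom margin are hypothetical, not values from the prototype.

```python
# Hypothetical sketch of the proactive deployment rule: deploy early,
# while utilization is still below the SLA ceiling.
SLA_CPU_LIMIT = 0.80   # maximum CPU utilization permitted by the SLA
HEADROOM = 0.10        # trigger deployment this far before the limit

def instances_to_deploy(node_utilization):
    """Return the nodes that should receive an extra instance.

    `node_utilization` maps node name -> current CPU utilization (0..1).
    """
    trigger = SLA_CPU_LIMIT - HEADROOM
    return [node for node, cpu in node_utilization.items() if cpu >= trigger]

# Example readings the Cloud Manager might have collected.
usage = {"aws-us": 0.72, "azure-eu": 0.55, "gcp-asia": 0.91}
print(instances_to_deploy(usage))  # ['aws-us', 'gcp-asia']
```

A real implementation would smooth the readings over a time window and respect per-client tiers, but the headroom-before-breach principle is the core of the proactive behaviour.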
These fragmented solutions function in an environment where different cloud providers have different configurations, which further complicates cloud federation and interoperability. Combining the functionalities of a standard cloud orchestrator with advanced load balancing capabilities, and basing these on microservices and containers, also addresses the issue of "cloud readiness". Most of the current applications based in cloud environments are non-native: they were created on premises and migrated at a certain point. This is an ongoing process which determines the overall state of cloud computing and establishes
the predominant trend of hybrid configurations, in which clients combine on-premises deployment (including private cloud) with hyperscale clouds such as AWS, Azure and Google Cloud Platform (GCP). Hybrid deployments allow customers to selectively leverage cloud services for their needs without having to completely migrate away from on-premises deployments. This further reinforces the importance of the proposed product prototype, as it achieves infrastructure efficiency and business agility through a new operational model (e.g., automation, self-service, standardized commodity elements) rather than through performance optimization of individual infrastructure elements (Faynberg, Lu, Skuler, 2016). The essence of the product prototype is a combination of existing tools and resources, such as virtualization, load balancing, network function virtualization, containerization and microservices. These are structured in such a way that the overall performance of the network elements is optimized through enhanced collaboration. The overwhelming majority of IT infrastructure spend today still goes to solutions that live on premises or are hosted in non-hyperscale clouds, such as those run by smaller, more regional Cloud Service Providers (CSPs). For example, as of July 2016 Gartner estimated that only around 7.5% of what it classifies as "System Infrastructure or IaaS" is hosted in a true hyperscale public cloud, a share expected to grow to only 17% by 2020. Open source will continue to gain relevance, and we will see more commercialized services based on open source platforms coming to market. The emergence of open source platforms such as OpenStack holds great promise for the cloud ecosystem. They will help eliminate customer lock-in to proprietary cloud technologies and enable ecosystems of interoperable cloud services. This will permit organizations to have a broad spectrum of choice, leverage best-of-breed cloud services and ensure interoperability.
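The brokering idea running through this work, namely choosing the most competitive provider from pricing data gathered by the Cloud Manager, can be sketched as a one-function selection rule. The provider names, prices and capacity figures below are invented for illustration; real prices vary by region and over time, as noted in the Results.

```python
def cheapest_provider(offers, min_vcpus):
    """Pick the lowest-priced offer that satisfies the capacity need.

    `offers` maps provider name -> (hourly_price, vcpus).
    Returns the provider name, or None if nothing qualifies.
    """
    eligible = {p: price for p, (price, vcpus) in offers.items()
                if vcpus >= min_vcpus}
    return min(eligible, key=eligible.get) if eligible else None

# Illustrative price list, including an on-premises option.
offers = {
    "aws":     (0.096, 2),
    "azure":   (0.085, 2),
    "gcp":     (0.089, 2),
    "on-prem": (0.050, 1),  # cheap but under-provisioned for this workload
}
print(cheapest_provider(offers, min_vcpus=2))  # azure
```

In practice the decision would also weigh SLA terms and data locality, not price alone, which is exactly why a broker spanning the different platforms is needed.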
This further reinforces the requirement for a solution that can bridge the gap between the different cloud platforms and allow for optimized utilization of the available resources.

8. References
Abrahim Ladha, Sharbani Pandit, Sanya Ralhan, 2016. The Ethereum Scratch Off Puzzle. s.l.: s.n.
Andrew Lerner, Joe Skorupa, Danilo Ciscato, 29 August 2016. Magic Quadrant for Application Delivery Controllers. s.l.: Gartner.
Brian J. S. Chee, Curtis Franklin, Jr., 2010. Cloud Computing: Technologies and Strategies of the Ubiquitous Data Center. Boca Raton: CRC Press, Taylor and Francis Group, LLC.
Crabb, J., 2014. The BestBuy.com Cloud Architecture. IEEE Software, pp. 91-96.
Denscombe, M., 2010. The Good Research Guide: For Small-Scale Social Research Projects (4th Edition). Berkshire: Open University Press.
Gareth Roy, Andrew Washbrook, David Crooks, Gang Qin, 2015. Evaluation of containers as a virtualisation alternative for HEP workloads. IOPscience.
Igor Faynberg, Hui-Lan Lu, Dor Skuler, 2016. Cloud Computing: Business Trends and Technologies. s.l.: John Wiley & Sons Ltd.
Jerzy Kotowski, Jacek Oko, Mariusz Ochla, 2015. Deployment Models and Optimization Procedures in Cloud Computing. Wroclaw: Springer International, Wroclaw University of Technology, Wroclaw, Poland.
Jose Luis Lucas-Simarro, Rafael Moreno-Vozmediano, Ruben S. Montero, Ignacio M. Llorente, 2012. Scheduling strategies for optimal service deployment across multiple clouds. Elsevier.
Linthicum, D., 2016. Practical Use of Microservices in Moving Workloads to the Cloud. IEEE Cloud Computing, September.
Linthicum, D., 2016. Software-Defined Networks Meet Cloud Computing. IEEE Cloud Computing, May.
Massimo Villari, Maria Fazio, 2016. Osmotic Computing: A New Paradigm for Edge/Cloud Integration. IEEE Cloud Computing, November.
Morten H. Abrahamsen, Stephan C. Henneberg, 2006. Network picturing: An action research study of strategizing in business networks. Industrial Marketing Management.
Prashant, D. R., 2016. A Survey of Performance Comparison between Virtual Machines and Containers. International Journal of Computer Sciences and Engineering, 4(7), pp. 55-59.
Rafael Moreno-Vozmediano, Rubén S. Montero, Ignacio M. Llorente, 2012. From Virtualized Datacenters to Federated Cloud Infrastructures. IEEE Computer Society.
Walkowiak, K., 2016. Modeling and Optimization of Cloud-Ready and Content-Oriented Networks. s.l.: Springer International Publishing Switzerland.
Xiong, K., 2014. Resource Optimization and Security for Cloud Services. London: ISTE Ltd and John Wiley & Sons, Inc.