System Models for Distributed and Cloud Computing
• Peer-to-peer (P2P) Networks
• Computational and Data Grids
• Clouds
• Advantage of Clouds over Traditional Distributed Systems
• Performance Metrics and Scalability Analysis
• System Efficiency
• Performance Challenges in Cloud Computing
• Why Cloud Computing
• What is cloud computing and why is it distinctive
• Cloud Service Delivery Models and Their Performance Challenges
• Cloud computing security
• What does Cloud Computing Security mean
• Cloud Security Landscape
• Energy Efficiency of Cloud Computing
• How energy-efficient is cloud computing?
Presented to the Faculty of the
Department of Computer Sciences
Federal Urdu University of Arts, Science & Technology
Gulshan Campus
In fulfillment of the
course “Cloud Computing”
Semester: 7th
BSCS Evening Program
Submitted to:
Sir Muhammad Sajid
By:
Haris Sarfraz
Enrollment No:
BS/E/20048/13/CS
Question # 1. Explain system models for distributed and cloud
computing.
System Models for Distributed and Cloud
Computing
• These systems can be classified into four groups: clusters, peer-to-peer
networks, grids, and clouds.
• A computing cluster consists of interconnected stand-alone
computers that work cooperatively as a single integrated
computing resource. The compute nodes are connected by a
LAN/SAN, are typically homogeneous, run Unix/Linux, and use
distributed control. Clusters are well suited to high-performance
computing (HPC).
Peer-to-peer (P2P) Networks
In a P2P network, every node (peer) acts as both a client and server.
Peers act autonomously to join or leave the network. No central
coordination or central database is needed. No peer machine has a
global view of the entire P2P system. The system is self-organizing with
distributed control.
Unlike a cluster or grid, a P2P network does not use a dedicated
interconnection network.
P2P Networks are classified into different groups:
Distributed File Sharing: content distribution of MP3 music, video, etc.
E.g. Gnutella, Napster, BitTorrent.
Collaboration P2P networks: Skype chatting, instant messaging,
gaming etc.
Distributed P2P computing: specific application computing such as
SETI@home provides 25 Tflops of distributed computing power over 3
million Internet host machines.
Computational and Data Grids
Grids are heterogeneous clusters interconnected by high-speed
networks. They have centralized control and are server-oriented with
authenticated security. They are suited to distributed supercomputing.
E.g. TeraGrid.
Like an electric utility power grid, a computing grid offers an
infrastructure that couples computers, software/middleware,
people, and sensors together.
The grid is constructed across LANs, WANs, or Internet backbones at
a regional, national, or global scale.
The computers used in a grid include servers, clusters, and
supercomputers. PCs, laptops, and mobile devices can be used to
access a grid system.
Clouds
A Cloud is a pool of virtualized computer resources. A cloud can
host a variety of different workloads, including batch-style backend
jobs and interactive and user-facing applications.
Workloads can be deployed and scaled out quickly through rapid
provisioning of VMs. Virtualization of server resources has enabled
cost effectiveness and allowed cloud systems to leverage low costs
to benefit both users and providers.
A cloud system should be able to monitor resource usage in real time
to enable rebalancing of allocations when needed.
Cloud computing applies a virtualized platform with elastic resources
on demand by provisioning hardware, software, and data sets
dynamically. Desktop computing is moved to a service-oriented
platform using server clusters and huge databases at datacenters.
Advantage of Clouds over Traditional
Distributed Systems
Traditional distributed computing systems provided for on-premise
computing and were owned and operated by autonomous
administrative domains (e.g. a company).
These traditional systems encountered performance bottlenecks,
constant system maintenance, poor server (and other resource)
utilization, and increasing costs associated with hardware/software
upgrades.
Cloud computing as an on-demand computing paradigm resolves
or relieves many of these problems.
Software Environments for Distributed Systems and Clouds:
Service-Oriented Architecture (SOA) Layered Architecture
In web services, Java RMI, and CORBA, an entity is, respectively,
a service, a Java remote object, and a CORBA object. These build
on the TCP/IP network stack. On top of the network stack we have
a base software environment, which would be .NET/Apache Axis for
web services, the JVM for Java, and the ORB network for CORBA.
On top of this base environment, a higher-level environment with
features specific to the distributed computing environment is built.
Loose coupling and support of
heterogeneous implementations
make services more attractive than
distributed objects.
Performance Metrics and Scalability Analysis
• Performance Metrics:
• CPU speed: MHz or GHz, SPEC benchmarks like SPECINT
• Network Bandwidth: Mbps or Gbps
• System throughput: MIPS, TFlops (tera floating-point operations
per second), TPS (transactions per second), IOPS (IO operations
per second)
• Other metrics: Response time, network latency, system
availability
• Scalability:
• Scalability is the ability of a system to handle a growing amount
of work in a capable and efficient manner, or its ability to be
enlarged to accommodate that growth.
• For example, it can refer to the capability of a system to
increase total throughput under an increased load when
resources (typically hardware) are added.
Scalability
• One form of scalability for parallel and distributed systems is:
• Size Scalability
This refers to achieving higher performance or more functionality by
increasing the machine size. Size in this case refers to adding
processors, cache, memory, storage, or I/O channels.
• Scale Horizontally and Vertically
Methods of adding more resources for a particular application fall
into two broad categories:
Scale Horizontally
To scale horizontally (or scale out) means to add more nodes to a
system, such as adding a new computer to a distributed software
application. An example might be scaling out from one Web server
system to three.
The scale-out model has created an increased demand for shared
data storage with very high I/O performance, especially where
processing of large amounts of data is required.
Scale Vertically
To scale vertically (or scale up) means to add resources to a single
node in a system, typically involving the addition of CPUs or memory
to a single computer.
Tradeoffs
There are tradeoffs between the two models. Larger numbers of
computers mean increased management complexity, as well as a
more complex programming model and issues such as throughput
and latency between nodes. Also, some applications do not lend
themselves to a distributed computing model.
In the past, the price difference between the two models favored
"scale up" computing for those applications that fit its paradigm, but
recent advances in virtualization technology have blurred that
advantage, since deploying a new virtual system/server over a
hypervisor is almost always less expensive than buying and installing
a physical one.
Amdahl’s Law
It is typically cheaper to add a new node to a system in order to
achieve improved performance than to perform performance
tuning to improve the capacity that each node can handle. But this
approach can have diminishing returns as indicated by Amdahl’s
Law.
Consider the execution of a given program on a uniprocessor
workstation with a total execution time of T minutes. Now, suppose
the program has been parallelized or partitioned for parallel
execution on a cluster of many processing nodes.
Assume that a fraction α of the code must be executed sequentially,
called the sequential block. Therefore, (1 - α) of the code can be
compiled for parallel execution by n processors. The total execution
time of the program is calculated by:
α T + (1 - α) T / n
Where the first term is the sequential execution time on a single
processor and the second term is the parallel execution time on n
processing nodes.
All system or communication overhead is ignored here. The I/O and
exception handling time is also not included in the speedup analysis.
Amdahl’s Law states that the speedup factor of using the n-processor
system over the use of a single processor is expressed by
Speedup = S = T / [α T + (1 - α) T / n]
= 1 / [α + (1 - α) / n]
The maximum speedup of n is achievable only when α = 0, i.e. the
entire program is parallelizable.
As the cluster becomes sufficiently large, i.e. n → ∞, then S → 1 / α, an
upper bound on the speedup S. This upper bound is independent of
the cluster size, n. The sequential bottleneck is the portion of the
code that cannot be parallelized.
Example: with α = 0.25 and (1 – 0.25) = 0.75, the maximum speedup
is S = 4, even if one uses hundreds of processors.
Amdahl’s Law teaches us that we should make the sequential
bottleneck as small as possible. Increasing the cluster size alone may
not result in a good speedup in this case.
Amdahl’s Law
• Example: suppose 70% of a program can be sped up if
parallelized and run on multiple CPUs instead of one CPU.
• N = 4 processors
S = 1 / [0.3 + (1 – 0.3) / 4] = 2.105
• Doubling the number of processors to N = 8 processors
S = 1 / [0.3 + (1 – 0.3) / 8] = 2.581
Doubling the processing power has improved the speedup by only
roughly one-fifth. Therefore, throwing in more hardware is not
necessarily the optimal approach.
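The diminishing returns above can be checked numerically. A minimal sketch of the speedup formula (the function name is ours, not from the text):

```python
def amdahl_speedup(alpha: float, n: int) -> float:
    """Amdahl's Law: S = 1 / (alpha + (1 - alpha) / n), where alpha is
    the sequential fraction of the code and n is the processor count."""
    return 1.0 / (alpha + (1.0 - alpha) / n)

# Reproduce the example: 30% sequential code (alpha = 0.3).
print(round(amdahl_speedup(0.3, 4), 3))   # 2.105
print(round(amdahl_speedup(0.3, 8), 3))   # 2.581
# Even a huge cluster stays below the upper bound 1/alpha = 1/0.3.
print(round(amdahl_speedup(0.3, 10**6), 3))
```

Doubling n from 4 to 8 moves the speedup only from about 2.1 to 2.6, matching the slide's point that the sequential fraction, not hardware, dominates.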
System Efficiency
To execute a fixed workload on n processors, parallel processing
may lead to a system efficiency defined as:
System Efficiency, E = S / n = 1 / [α n + (1 - α)]
System efficiency can be rather low if the cluster size is very large.
Example: To execute a program on a cluster with n = 4, α = 0.25 and
so
(1 – 0.25) = 0.75,
E = 1 / [0.25 * 4 + 0.75] = 0.57 or 57%
Now if we have 256 nodes (i.e. n = 256)
E = 1 / [0.25 * 256 + 0.75] = 0.015 or 1.5%
This is because only a few processors (4, as in the previous case) are
kept busy, while the majority of the processors (or nodes) are left
idling.
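The efficiency numbers above can be reproduced the same way; a short sketch, with the function name our own:

```python
def system_efficiency(alpha: float, n: int) -> float:
    """System efficiency E = S / n = 1 / (alpha * n + (1 - alpha)) for a
    fixed workload with sequential fraction alpha on n processors."""
    return 1.0 / (alpha * n + (1.0 - alpha))

print(round(system_efficiency(0.25, 4), 2))    # 0.57 (57%)
print(round(system_efficiency(0.25, 256), 3))  # 0.015 (1.5%)
```

Note how E falls as n grows while alpha stays fixed: the extra nodes mostly sit idle waiting on the sequential block.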
Fault Tolerance and System Availability
• High availability (HA) is desired in all clusters, grids, P2P
networks, and cloud systems. A system is highly available if it
has a long Mean Time to Failure (MTTF) and a short Mean Time
to Repair (MTTR).
• System Availability = MTTF / (MTTF + MTTR)
• All hardware, software, and network components may fail.
Single points of failure that bring down the entire system must
be avoided when designing distributed systems.
• Adding hardware redundancy, increasing component
reliability, designing for testability all help to enhance system
availability and dependability.
• In general, as a distributed system increases in size, availability
decreases due to a higher chance of failure and a difficulty in
isolating failures.
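The availability formula is simple to compute directly; a minimal sketch (the numbers below are illustrative, not from the text):

```python
def system_availability(mttf: float, mttr: float) -> float:
    """Availability = MTTF / (MTTF + MTTR). Both values must be in the
    same time unit (e.g. hours)."""
    return mttf / (mttf + mttr)

# Illustrative: a node that fails on average every 1000 hours and
# takes 2 hours to repair.
print(round(system_availability(1000, 2), 4))  # 0.998
```

A long MTTF and a short MTTR both push availability toward 1, which is exactly the HA design goal stated above.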
Question # 2. Explain the performance, security, and energy
efficiency of cloud computing.
Performance Challenges in Cloud Computing
Cloud computing is important for today's demanding business
requirements. The cloud computing concept, with its salient features,
and the three cloud service delivery models are explained here. The
three delivery models of Software as a Service (SaaS), Platform as a
Service (PaaS), and Infrastructure as a Service (IaaS) are explored
with their inter-dependencies and performance considerations.
Cloud adoption in business faces performance obstacles, and
suggestions to overcome these obstacles are provided, along with
performance considerations for the three delivery models.
Performance considerations are vital for the overall success of cloud
computing, including the optimum cost of cloud services, reliability,
and scalability. They require considerable attention and effort from
cloud computing providers, integrators, and service consumers.
WHY CLOUD COMPUTING?
It is simply difficult to manage today's complex business
environments with traditional IT solutions. Some reasons are:
• Explosive growth in applications: Web 2.0 social networking,
YouTube, Facebook, biomedical informatics, space
exploration, and business analytics
• Extreme scale content generation: e-science and e-business
data deluge
• Extraordinary rate of digital content consumption: digital
gluttony: Apple iPhone, iPad, Amazon Kindle
• Exponential growth in compute capabilities: multi-core,
storage, bandwidth, virtual machines (virtualization)
• Very short cycle of obsolescence in technologies: Windows
Vista to Windows 7; Java versions; C to C#; Python
• Newer architectures: web services, persistence models,
distributed file systems/repositories (Google, Hadoop), multi-
core, wireless and mobile.
What is cloud computing and why is it distinctive?
Cloud computing is a model for enabling convenient, on-demand
network access to a shared pool of configurable computing
resources (for example, networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction.
It involves shifting the bulk of the costs from capital expenditures
(CapEx), i.e. buying and installing servers, storage, networking, and
related infrastructure, to an operating expense (OpEx) model, where
you pay for the usage of these types of resources.
Cloud computing is unique because of its distinct general
characteristics:
• Multi-tenancy: Public cloud service providers often host the cloud
services for multiple users within the same infrastructure.
• Elasticity and scalability: Ability to expand and reduce resources
according to your specific service requirement. e.g., you may need
a large number of server resources for the duration of a specific task.
You can then release these server resources after you complete your
task.
• Pay-per-use: Pay for cloud services only when you use them, either
for the short term (e.g., for CPU time) or for a longer duration (e.g.,
for cloud-based storage or vault).
• On demand: Cloud services can be invoked on an as-needed basis
and need not be part of the internal IT infrastructure, a significant
advantage of cloud use as opposed to internal IT services.
• Resiliency: Cloud can completely isolate the failure of server and
storage resources from cloud users. (Work can be migrated to a
different physical resource in the cloud with or without user
awareness and intervention.)
• Workload movement: Workload movement is important for
resiliency and cost considerations; service providers can migrate
workloads across servers, both inside the data center and across
data centers (even in a different geographic area). A typical trigger
is a catastrophic event in a geographic region (say, Hurricane Sandy
in the US), after which the workload can be moved to another
region for the time being. There can also be other business drivers
for workload movement, which offers these advantages:
• Less cost: It can be less expensive to run a workload in a data
center in another area, based on time of day or power requirements.
• Efficiency: Better resource and network bandwidth availability. For
example, US nightly processing during India's daytime is less costly
and more efficient.
• Regulatory considerations: For certain types of workloads, e.g.
New York Stock Exchange processing from India.
CLOUD SERVICE DELIVERY MODELS AND THEIR
PERFORMANCE CHALLENGES
Cloud service delivery models
Software as a Service (SaaS)
Enterprises will have software licenses to support the various
applications used in their daily business. These applications could be
in human resources, finance, or customer relationship management.
The traditional option is to obtain the desktop and server licenses for
the software products used.
Software as a Service (SaaS) allows the enterprise to obtain the same
functions through a hosted service from a provider through a
network connection. Consumer services include social platforms
(e.g. Facebook) or online email services (e.g. Gmail). There are also
increasing numbers of business services being delivered as-a-service
(e.g. software package rendering through a VDO / Citrix server to
the mass public).
Centralized services are typically designed to cater for large
numbers of end users over the Internet.
SaaS reduces the complexity of software installation,
maintenance, upgrades, and patches for the IT team within the
enterprise, because the software is now managed centrally at the
SaaS provider’s facilities.
SaaS providers are responsible for monitoring application-delivery
performance.
Platform as a Service (PaaS)
Unlike the fixed application functionality offered by SaaS, Platform as
a Service (PaaS) provides a software platform on which users can
build their own applications and host them on the PaaS provider’s
infrastructure (e.g. Google with its App-Engine or Force.com APIs).
• The software platform is used as a development framework to
provide services for use by applications.
• PaaS is a true cloud model in that applications do not need to
worry about the scalability of the underlying hardware and
software platform.
• PaaS providers are responsible for monitoring application-delivery
performance, elasticity, and scalability.
Infrastructure as a Service (IaaS)
An Infrastructure as a Service (IaaS) provider offers you raw
computing, storage, and network infrastructure so that you can load
your own software, including operating systems and applications, on
to this infrastructure (e.g. Amazon’s Elastic Computing Cloud (EC2)
service).
• This scenario is equivalent to a hosting provider provisioning
physical servers and storage, and letting you install your own OS,
web services, and database applications over the provisioned
machines.
• IaaS offers the greatest degree of control of the three models, but
careful resource-requirement management is needed to exploit it
well.
• Scaling and elasticity are the user's responsibility, not the
provider's.
Inter-dependencies of delivery models and their performance
measures
The cloud can also be defined as the virtualized infrastructure found
on the lowest level of the solution stack (i.e. IaaS layer as in Figure 1).
The higher service layers depend on the underlying supporting
service layers. Service providers can be service users as well:
• A SaaS provider may be a SaaS user
• A SaaS provider may or may not be a PaaS user
• SaaS and PaaS providers are directly or indirectly IaaS users
Summary - Characteristics of Performance Measures
• SaaS performance measures are directly perceived by users as
business transaction response times and throughput, technical
service reliability and availability, and by scalability of the
applications.
• PaaS performance measures are indirectly perceived by users
and are defined as technical transaction response times and
throughput, technical service reliability and availability, and
scalability.
• IaaS performance measures are defined as infrastructure
performance, capacity, reliability, availability, and scalability.
• In general, the performance measures of the upper service layers
depend on those of the underlying layers, e.g. SaaS-layer scalability
depends on IaaS-layer scalability.
PERFORMANCE ASPECTS ARE MAJOR OBSTACLES IN CLOUD
ADOPTION AND GROWTH
The success of Cloud deployments is highly dependent on practicing
holistic performance engineering and capacity management
techniques.
A majority of the obstacles to adoption and growth of cloud
computing are related to basic performance aspects such as
availability, performance, capacity, and scalability. Refer to Table 1
below for details of the obstacles and opportunities.
• Potential cloud solutions to overcome these obstacles need to
be carefully assessed for their authenticity in real-life situations.
• Performance engineers need to get to the bottom of the
technical transactions of underlying cloud services before
advising cloud computing users and cloud computing
providers for the cloud services.
• The degree to which cloud services can meet agreed service-
level requirements for availability, performance, and scalability
can be estimated by using performance modeling techniques,
so that potential performance anti-patterns can be detected
before they occur.
• Even in the absence of sophisticated tooling for automated
monitoring, the automatic provisioning and usage-based
costing (metering) facilities rely mainly on fine-grained capacity
management. Until more data collection, analysis, and
forecasting are in place, capacity management is more
important than ever.
• Irrespective of sophisticated tooling for automated monitoring,
cloud computing users need to analyze their demand for
capacity and their requirements for performance. In their
contract with cloud computing providers, users should always
take a bottom-line approach to accurately formulate their
service-level requirements.
Advanced Data Analytics for better performance
An individual enterprise may produce terabytes of log data per
month, which can contain millions of events per second. The
techniques for gathering monitoring data have improved
considerably through the development of performance tools for
in-house enterprise systems; however, the analysis of the large
volume of collected data is still a major challenge.
• There is a pressing need for efficient log-analytics systems in
cloud environments; new cloud technologies, such as log
management as-a-service, are emerging to address these
challenges.
• A log management as-a-service technology handling log
analysis for large numbers of enterprises must be able to
manage millions of events per second, performing visualization,
analysis and alerting in real time to allow for autonomic
management of the system.
• The cloud has produced new challenges due to the larger scale of
systems and the much larger volumes of data produced by
these systems. Real-time analytics is a growing area and
poses challenges in the analysis of upwards of millions of
events per second under real-time constraints.
• Real-time analytics can be a significant aid to performance
monitoring; this is another emerging, rich area of research. Real-time
analytics with time constraints will certainly enhance the
performance management of cloud-based systems.
Cloud computing security
Introduction
Failure to ensure appropriate security protection when using cloud
services could ultimately result in higher costs and potential loss of
business, thus eliminating any of the potential benefits of cloud
computing.
The aim of this guide is to provide a practical reference to help
enterprise information technology (IT) and business decision makers
analyze the security implications of cloud computing on their
business. The paper includes a list of steps, along with guidance and
strategies, designed to help these decision makers evaluate and
compare security offerings from different cloud providers in key
areas.
When considering a move to cloud computing, customers must
have a clear understanding of potential security benefits and risks
associated with cloud computing, and set realistic expectations with
their cloud provider. Consideration must be given to the different
service categories: Infrastructure as a Service (IaaS), Platform as a
Service (PaaS), and Software as a Service (SaaS) as each model
brings different security requirements and responsibilities.
Additionally, this paper highlights the role that standards play to
improve cloud security and also identifies areas where future
standardization could be effective.
The section titled “Current Cloud Security Landscape” provides an
overview of the security and privacy challenges pertinent to cloud
computing and points out considerations that organizations should
weigh when migrating data, applications, and infrastructure to a
cloud computing environment.
The section titled “Cloud Security Guidance” is the heart of the
guide and includes the steps that can be used as a basis for
evaluation of cloud provider security. It discusses the threats,
technology risks, and safeguards for cloud computing environments,
and provides the insight needed to make informed IT decisions on
their treatment. Although guidance is provided, each organization
must perform its own analysis of its needs, and assess, select,
engage, and oversee the cloud services that can best fulfill those
needs.
The section titled “Cloud Security Assessment” provides customers
with an efficient method of assessing the security capabilities of
cloud providers and assessing their individual risk. A questionnaire for
customers to conduct their own assessment across each of the
critical security domains is provided.
A related document, the Practical Guide to Cloud Service
Agreements [1], provides additional guidance on evaluating security
criteria from prospective cloud providers. The guide titled Cloud
Security Standards: What to Expect & Negotiate [2] highlights the
security standards and certifications that are currently available in
the market as well as the cloud specific security standards that are
currently being developed.
Cloud computing security
Cloud computing security is the set of control-based technologies
and policies designed to adhere to regulatory compliance rules and
protect information, data applications and infrastructure associated
with cloud computing use.
Because of the cloud's very nature as a shared resource, identity
management, privacy and access control are of particular concern.
With more organizations using cloud computing and associated
cloud providers for data operations, proper security in these and
other potentially vulnerable areas has become a priority for
organizations contracting with a cloud computing provider.
Cloud computing security processes should address the security
controls the cloud provider will incorporate to maintain the
customer's data security, privacy and compliance with necessary
regulations. The processes will also likely include a business continuity
and data backup plan in the case of a cloud security breach.
Definition - What does Cloud Computing Security mean?
Cloud computing security refers to the set of procedures, processes
and standards designed to provide information security assurance in
a cloud computing environment.
Cloud computing security addresses both physical and logical
security issues across all the different service models of software,
platform and infrastructure. It also addresses how these services are
delivered (public, private or hybrid delivery model).
Techopedia explains Cloud Computing Security
Cloud security encompasses a broad range of security constraints
from the end-user and cloud provider's perspectives. The end user
will primarily be concerned with the provider's security policy, how
and where their data is stored, and who has access to that data. For
a cloud provider, on the other hand, cloud computing security issues
can range from the physical security of the infrastructure and the
access control mechanisms of cloud assets to the execution and
maintenance of security policy. Cloud security is important because
it is probably the biggest reason organizations fear the cloud.
The Cloud Security Alliance (CSA), a nonprofit organization of
industry specialists, has developed a pool of guidelines and
frameworks for implementing and enforcing security within a cloud
operating environment.
Cloud Security Landscape
While security and privacy concerns are similar across cloud
services and traditional non-cloud services, those concerns are
amplified by the existence of external control over organizational
assets and the potential for mismanagement of those assets.
Transitioning to public cloud computing involves a transfer of
responsibility and control to the cloud provider over information as
well as system components that were previously under the
customer’s direct control.
Despite this inherent loss of control, the cloud service customer still
needs to take responsibility for its use of cloud computing services in
order to maintain situational awareness, weigh alternatives, set
priorities, and effect changes in security and privacy that are in the
best interest of the organization. The customer achieves this by
ensuring that the contract with the provider and its associated cloud
service agreement has appropriate provisions for security and
privacy. In particular, the agreement must help maintain legal
protections for the privacy of data stored and processed on the
provider's systems. The customer must also ensure appropriate
integration of cloud computing services with their own systems for
managing security and privacy.
There are a number of security risks associated with cloud computing
that must be adequately addressed:
● Loss of governance. In a public cloud deployment, customers
cede control to the cloud provider over a number of issues that may
affect security. Yet cloud service agreements may not offer a
commitment to resolve such issues on the part of the cloud provider,
thus leaving gaps in security defenses.
● Responsibility ambiguity. Responsibility over aspects of security
may be split between the provider and the customer, with the
potential for vital parts of the defenses to be left unguarded if there
is a failure to allocate responsibility clearly. This split is likely to vary
depending on the cloud computing model used (e.g., IaaS vs.
SaaS).
● Authentication and Authorization. The fact that sensitive cloud
resources are accessed from anywhere on the Internet heightens
the need to establish with certainty the identity of a user -- especially
if users now include employees, contractors, partners and customers.
Strong authentication and authorization becomes a critical concern.
● Isolation failure. Multi-tenancy and shared resources are defining
characteristics of public cloud computing. This risk category covers
the failure of mechanisms separating the usage of storage, memory,
routing and even reputation between tenants (e.g. so-called guest-
hopping attacks).
● Compliance and legal risks. The cloud customer’s investment in
achieving certification (e.g., to demonstrate compliance with
industry standards or regulatory requirements) may be lost if the
cloud provider cannot provide evidence of their own compliance
with the relevant requirements, or does not permit audits by the
cloud customer. The customer must check that the cloud provider
has appropriate certifications in place.
● Handling of security incidents. The detection, reporting and
subsequent management of security breaches may be
delegated to the cloud provider, but these incidents impact the
customer. Notification rules need to be negotiated in the cloud
service agreement so that customers are not caught unaware or
informed with an unacceptable delay.
● Management interface vulnerability. Interfaces to manage public
cloud resources (such as self-provisioning) are usually accessible
through the Internet. Since they allow access to larger sets of
resources than traditional hosting providers, they pose an increased
risk, especially when combined with remote access and web
browser vulnerabilities.
● Application Protection. Traditionally, applications have been
protected with defense-in-depth security solutions based on a clear
demarcation of physical and virtual resources, and on trusted zones.
With the delegation of infrastructure security responsibility to the
cloud provider, organizations need to rethink perimeter security at
the network level, applying more controls at the user, application
and data level. The same level of user access control and protection
must be applied to workloads deployed in cloud services as to those
running in traditional data centers. This requires creating and
managing workload-centric policies as well as implementing
centralized management across distributed workload instances.
● Data protection. Here, the major concerns are exposure or release
of sensitive data as well as the loss or unavailability of data. It may
be difficult for the cloud service customer (in the role of data
controller) to effectively check the data handling practices of the
cloud provider. This problem is exacerbated in cases of multiple
transfers of data (e.g., between federated cloud services or where
a cloud provider uses subcontractors).
● Malicious behavior of insiders. Damage caused by the malicious
actions of people working within an organization can be substantial,
given the access and authorizations they enjoy. This is compounded
in the cloud computing environment since such activity might occur
within either or both the customer organization and the provider
organization.
● Business failure of the provider. Such failures could render data
and applications essential to the customer's business unavailable
over an extended period.
● Service unavailability. This could be caused by hardware, software
or communication network failures.
● Vendor lock-in. Dependency on proprietary services of a particular
cloud service provider could lead to the customer being tied to that
provider. The lack of portability of applications and data across
providers poses a risk of data and service unavailability in case of a
change in providers; therefore it is an important if sometimes
overlooked aspect of security. Lack of interoperability of interfaces
associated with cloud services similarly ties the customer to a
particular provider and can make it difficult to switch to another
provider.
● Insecure or incomplete data deletion. The termination of a
contract with a provider may not result in deletion of the customer’s
data. Backup copies of data usually exist, and may be mixed on the
same media with other customers’ data, making it impossible to
selectively erase. The very advantage of multi-tenancy (the sharing
of hardware resources) thus represents a higher risk to the customer
than dedicated hardware.
● Visibility and Audit. Some enterprise users are creating a “shadow
IT” by procuring cloud services to build IT solutions without explicit
organizational approval. Key challenges for the security team are to
know about all uses of cloud services within the organization (what
resources are being used, for what purpose, to what extent, and by
whom), understand what laws, regulations and policies may apply
to such uses, and regularly assess the security aspects of such uses.
Cloud computing does not only create new security risks: it also
provides opportunities to provision security services better than
those many organizations implement on their own. Cloud service
providers could offer advanced security and privacy facilities that
leverage their scale and their skill at automating infrastructure
management tasks. This is potentially a boon to customers who
have few skilled security personnel.
Cloud Security Guidance
As customers transition their applications and data to the cloud, it is
critical for them to maintain, or preferably surpass, the level of
security they had in their traditional IT environment.
This section provides a prescriptive series of steps for cloud customers
to evaluate and manage the security of their use of cloud services,
with the goal of mitigating risk and delivering an appropriate level of
support. The following steps will be discussed in detail below:
1. Ensure effective governance, risk and compliance processes exist
2. Audit operational and business processes
3. Manage people, roles and identities
4. Ensure proper protection of data and information
5. Enforce privacy policies
6. Assess the security provisions for cloud applications
7. Ensure cloud networks and connections are secure
8. Evaluate security controls on physical infrastructure and facilities
9. Manage security terms in the cloud service agreement
10. Understand the security requirements of the exit process
Requirements and best practices are highlighted for each step. In addition,
each step takes into account the realities of today’s cloud computing
landscape and postulates how this space is likely to evolve in the future,
including the important role that standards will play to improve interoperability
and portability across providers.
Energy Efficiency of Cloud Computing
Abstract:
Cloud computing is an emerging technology that provides
metered services to consumers. It offers ICT-based services and
provides computing resources through virtualization over the
Internet. The data center is the heart of cloud computing: it contains
the collection of servers on which business information is stored and
applications run. A data center (including servers, network, cables,
air conditioning, etc.) consumes a great deal of power and releases
a huge amount of carbon dioxide (CO2) into the environment. One of
the most important challenges in cloud computing is optimizing
energy utilization, and hence achieving green cloud computing.
Many techniques and algorithms are used to minimize energy
consumption in the cloud. Techniques include DVFS, VM Migration
and VM Consolidation. Algorithms include Maximum Bin Packing,
Power Expand Min-Max, Minimization of Migrations, Highest
Potential Growth and Random Choice. The main goal of all these
approaches is to optimize energy utilization in the cloud. This section
provides an overview of a literature survey on approaches to an
energy-efficient cloud.
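As a concrete illustration of the consolidation idea mentioned above, the classic first-fit-decreasing bin-packing heuristic packs VM demands onto as few powered-on hosts as possible. This is a minimal sketch, not any specific algorithm from the survey; the VM sizes and host capacity are made-up illustration values, and real consolidation schemes (e.g., Minimization of Migrations) also account for live-migration cost and SLA constraints.

```python
# Sketch of VM consolidation via first-fit-decreasing bin packing.
# All capacities and VM sizes are hypothetical illustration values.

def consolidate(vm_loads, host_capacity):
    """Pack VM CPU demands onto as few hosts as possible (first-fit decreasing)."""
    hosts = []  # each entry is the remaining capacity of one active host
    for load in sorted(vm_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] -= load  # place the VM on the first host it fits
                break
        else:
            hosts.append(host_capacity - load)  # power on a new host
    return len(hosts)

# Ten VMs with CPU demands as fractions of one host's capacity:
vms = [0.5, 0.7, 0.1, 0.4, 0.2, 0.6, 0.3, 0.2, 0.1, 0.4]
print(consolidate(vms, host_capacity=1.0))  # → 4 hosts for a total demand of 3.5
```

Idle hosts freed this way can then be powered down or put into a low-power state, which is where the energy saving actually comes from.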
Energy Efficiency of Cloud Computing
Most agree that cloud computing is inherently more efficient than
on-premise computing in each of several dimensions. Last November, I
went after two of the easiest gains to argue: utilization and the ability
to sell excess capacity (Datacenter Renewable Power Done Right):
Cloud computing is a fundamentally more efficient way to operate
compute infrastructure. The increases in efficiency driven by the
cloud are many, but a strong primary driver is increased utilization. All
companies have to provision their compute infrastructure for peak
usage, but they only monetize the actual usage, which goes up and
down over time. This leads to incredibly low average utilization
levels, with 30% being extraordinarily high and 10 to 20% the norm.
Cloud computing gets an easy win on this dimension. When non-
correlated workloads from a diverse set of customers are hosted in
the cloud, the peak-to-average ratio flattens dramatically and
effective utilization skyrockets. Where 70 to 80% of resources are
usually wasted, utilization climbs rapidly with scale in cloud
computing services as the peak-to-average ratio flattens.
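The statistical-multiplexing effect described above can be demonstrated with a toy simulation: each customer alone provisions for its own peak, while a shared cloud provisions only for the peak of the aggregate demand. The workload model (bursty, uncorrelated customers) and every parameter below are assumptions for illustration, not measurements.

```python
import random

# Toy simulation: pooling non-correlated bursty workloads flattens the
# peak-to-average ratio and raises effective utilization.
random.seed(1)

CUSTOMERS, HOURS = 50, 1000

def bursty_demand():
    """Mostly-idle workload with occasional full-capacity spikes."""
    return [1.0 if random.random() < 0.05 else random.uniform(0.0, 0.2)
            for _ in range(HOURS)]

demands = [bursty_demand() for _ in range(CUSTOMERS)]
total_usage = sum(sum(d) for d in demands)

# On-premise: every customer buys capacity equal to its own peak.
solo_capacity = sum(max(d) for d in demands)
# Cloud: shared capacity sized for the peak of the summed demand curve.
pooled_capacity = max(sum(d[h] for d in demands) for h in range(HOURS))

print(f"stand-alone utilization: {total_usage / (solo_capacity * HOURS):.0%}")
print(f"pooled utilization:      {total_usage / (pooled_capacity * HOURS):.0%}")
```

Under this made-up workload the stand-alone figure lands in the 10-20% range quoted above, while the pooled figure is several times higher, because individual spikes rarely coincide.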
To further increase the efficiency of the cloud, Amazon Web Services
added an interesting innovation: they sell the remaining
capacity not fully consumed by this natural flattening of the peak to
average. These troughs are sold on a spot market, and customers are
often able to buy computing at less than the amortized cost of the
equipment they are using (Amazon EC2 Spot Instances). Customers
get clear benefit. And, it turns out, it's profitable to sell unused
capacity at any price over the marginal cost of power, which means
the provider gets clear benefit as well. And, with higher utilization,
the environment gets clear benefit too.
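The economics behind that claim can be made concrete with a back-of-envelope calculation: once the hardware is bought, its amortized cost is sunk, so only the marginal cost of power matters when deciding whether to sell an idle instance-hour. The dollar figures below are hypothetical, not actual AWS prices.

```python
# Back-of-envelope version of the spot-market argument. All dollar
# figures are hypothetical illustration values.

AMORTIZED_COST = 0.10  # $/instance-hour of hardware + facility (already sunk)
POWER_COST = 0.02      # $/instance-hour of marginal electricity

def provider_gain(spot_price):
    """Extra profit per idle instance-hour sold at the given spot price."""
    return spot_price - POWER_COST  # the sunk amortized cost is irrelevant

for price in (0.01, 0.03, 0.06):
    verdict = "sell" if provider_gain(price) > 0 else "leave idle"
    print(f"spot ${price:.2f}/h (amortized ${AMORTIZED_COST:.2f}/h): {verdict}")
```

Note that two of the three prices are well below the amortized cost, yet selling is still the rational choice whenever the price clears the power cost.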
Back in June, Lawrence Berkeley National Labs released a study that
went after the same question quantitatively and across a much
broader set of dimensions. I first came across the report via
coverage in Network Computing: Cloud Data Centers: Power
Savings or Power Drain? The paper was funded by Google, which
admittedly has an interest in cloud computing and high-scale
computing in general. But, even understanding that possible bias or
influence, the paper is of interest. From Google’s summary of the
findings (How Green is the Internet?):
Funded by Google, Lawrence Berkeley National Laboratory
investigated the energy impact of cloud computing. Their research
indicates that moving all office workers in the United States to the
cloud could reduce the energy used by information technology by
up to 87%.
These energy savings are mainly driven by increased data center
efficiency when using cloud services (email, calendars, and more).
The cloud supports many products at a time, so it can more
efficiently distribute resources among many users. That means we
can do more with less energy.
How energy-efficient is cloud computing?
Researchers have found that, at high usage levels, the energy
required to transport data in cloud computing can be larger than
the energy required to store the data.
Conventionally, data storage and data processing are done at the
user's own computer, using that computer's storage system and
processor. An alternative to this method is cloud computing, which is
Internet-based computing that enables users at home or office
computers to transfer data to a remote data center for storage and
processing. Cloud computing offers potential benefits – especially
financial ones – to users, but in a new study, researchers have
investigated a different aspect of cloud computing: how does its
energy consumption compare with conventional computing?
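A rough numerical sketch gives a feel for why transport energy dominates at high usage levels: moving a file once in a while is cheap, but syncing it constantly is not. Every constant below is an assumed round number for illustration, not a figure from the study.

```python
# Hedged sketch: energy to repeatedly move a file to the cloud versus the
# energy to keep it on a powered local disk. All constants are assumed
# round numbers for illustration only.

TRANSPORT_J_PER_GB = 50.0    # assumed network energy per GB transferred
LOCAL_DISK_W_PER_GB = 0.01   # assumed local disk power per GB stored

def transport_energy_j(gb_moved):
    """Energy (joules) spent moving data over the network."""
    return gb_moved * TRANSPORT_J_PER_GB

def storage_energy_j(gb_stored, hours):
    """Energy (joules) spent keeping data on a powered local disk."""
    return gb_stored * LOCAL_DISK_W_PER_GB * hours * 3600

# Keep 1 GB locally for a month, versus syncing it to the cloud N times a day.
month_hours = 30 * 24
local = storage_energy_j(1, month_hours)
for syncs_per_day in (1, 100):
    cloud = transport_energy_j(syncs_per_day * 30)
    print(f"{syncs_per_day:>3} syncs/day: transport beats storage? {cloud > local}")
```

With these assumed constants, occasional syncing costs far less energy than local storage, but heavy usage flips the comparison, which is the study's qualitative point.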
“Many industry participants see that the evolution toward mobility will
intrinsically mean an evolution toward cloud-based services,” Tucker
said. “The reason is that mobile access devices will have limited
processing and storage capacity (due to size and power constraints)
and so the most convenient place to put the applications and data is
in the cloud. The user device will contain little more than a browser
when it is started up. Any application or data that it requires will be
brought down from the cloud. When that application is finished, its
data will be put back into the cloud and the application will be
removed from the user device until it is again required. In this way,
the user device is kept simple, energy-efficient and cheap.”