1. CS8791 CLOUD COMPUTING
(2017 R)
YEAR/SEM: IV YEAR-VII SEM
Presented by,
Dr.M.Malathy
Professor/CSE
VEL TECH HIGH TECH DR.RR DR.SR
ENGINEERING COLLEGE
2. OUTCOME BASED EDUCATION(OBE)
OBJECTIVES:
To understand the concept of cloud computing.
To appreciate the evolution of cloud from the
existing technologies.
To have knowledge on the various issues in
cloud computing.
To be familiar with the lead players in cloud.
To appreciate the emergence of cloud as the
next generation computing paradigm.
3. UNIT I -INTRODUCTION
Introduction to Cloud Computing – Definition of
Cloud – Evolution of Cloud Computing –
Underlying Principles of Parallel and Distributed
Computing – Cloud Characteristics – Elasticity in
Cloud – On-demand Provisioning.
4. 1.1 Introduction to Cloud Computing
• What Is the Cloud?
The term cloud has been used historically as a metaphor for
the Internet.
This usage was originally derived from its common
depiction in network diagrams as an outline of a cloud,
used to represent the transport of data across carrier
backbones (which owned the cloud) to an endpoint
location on the other side of the cloud. This concept dates
back to 1961,
when Professor John McCarthy suggested that computer
time-sharing technology might lead to a future in which
computing power, and even specific applications, could be
sold through a utility-style business model.
6. Introduction to Cloud Computing
• This idea became very popular in the late
1960s, but by the mid-1970s the idea faded
away when it became clear that the IT-related
technologies of the day were unable to sustain
such a futuristic computing model.
• However, since the turn of the millennium, the
concept has been revitalized. It was during
this time of revitalization that the term cloud
computing began to emerge.
7. Cloud Computing
• Definition –
• “A cloud is a pool of virtualized computer resources. A cloud
can host a variety of different workloads, including batch-style
backend jobs and interactive and user-facing applications”
• The cloud allows workloads to be deployed and scaled out
quickly through rapid provisioning of virtual or physical
machines.
• The cloud supports redundant, self-recovering, highly scalable
programming models that allow workloads to recover from
many unavoidable hardware/software failures.
• Finally, the cloud system should be able to monitor resource
use in real time to enable rebalancing of allocations when
needed.
9. The Emergence of Cloud Computing
• Utility computing can be defined as the
provision of computational and storage
resources as a metered service, similar to
those provided by a traditional public utility
company.
• Companies have begun to extend the model
to a cloud computing paradigm providing
virtual servers that IT departments and users
can access on demand.
10. The Global Nature of the Cloud
• Internet Clouds
• Cloud computing applies a virtualized platform
with elastic resources on demand by provisioning
hardware, software, and data sets dynamically.
• The idea is to move desktop computing to a
service-oriented platform using server clusters
and huge databases at data centers.
• Cloud computing leverages its low cost and
simplicity to benefit both users and providers.
12. The Cloud Landscape
• The traditional systems have encountered
several performance bottlenecks: constant
system maintenance, poor utilization, and
increasing costs associated with
hardware/software upgrades.
• Cloud computing as an on-demand computing
paradigm resolves or relieves us from these
problems.
13. CLOUD SERVICE MODELS:
• Infrastructure as a Service (IaaS)---This model puts
together infrastructures demanded by users — namely
servers, storage, networks, and the data center fabric.
• Platform as a Service (PaaS) - This model enables the
user to deploy user-built applications onto a virtualized
cloud platform. PaaS includes middleware, databases,
development tools, and some runtime support such as Web
2.0 and Java.
• Software as a Service (SaaS) - The SaaS model applies to
business processes, industry applications, customer
relationship management (CRM), enterprise resource
planning (ERP), human resources (HR), and collaborative
applications.
16. CLOUD SERVICE MODELS
• Cloud computing is a general term used to
describe a new class of network-based computing
that takes place over the Internet.
It is essentially a step on from utility
computing: a collection/group of integrated and
networked hardware, software, and Internet
infrastructure, referred to as a platform.
17. 1.2 Evolution of Cloud Computing
• Underlying Principles of Parallel and Distributed
Computing:
• Distributed computing is a field of computer science
that studies distributed systems. A distributed system is
a model in which components located on networked
computers communicate and coordinate their actions
by passing messages.
• The components interact with each other in order to
achieve a common goal.
• Three significant characteristics of distributed systems
are: concurrency of components, lack of a global clock,
and independent failure of components.
18. Distributed Computing:
• Examples of distributed systems vary from SOA-
based systems to massively multiplayer online
games to peer-to-peer applications.
• A computer program that runs in a distributed
system is called a distributed program, and
distributed programming is the process of writing
such programs.
• There are many alternatives for the message
passing mechanism, including pure HTTP, RPC-like
connectors and message queues.
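The message-passing style described above can be sketched in miniature. The toy below uses an in-process `queue.Queue` to stand in for a real network message queue (such as the RPC-like connectors or queues the slide names); the message fields and the doubling "service" are made-up illustrations:

```python
import queue
import threading

# Two "components" of a distributed system coordinating purely by
# passing messages; queue.Queue stands in for a network message broker.
mailbox = queue.Queue()

def worker():
    # The worker component receives request messages and replies
    # with result messages through the reply queue named in the request.
    while True:
        msg = mailbox.get()
        if msg["type"] == "stop":
            break
        msg["reply"].put({"type": "result", "value": msg["value"] * 2})

reply_box = queue.Queue()
t = threading.Thread(target=worker)
t.start()

# The client component sends a request and waits for the reply message.
mailbox.put({"type": "request", "value": 21, "reply": reply_box})
result = reply_box.get()["value"]
print(result)

mailbox.put({"type": "stop"})
t.join()
```

The components never touch each other's state directly; all coordination happens through the messages, which is the defining trait of a distributed program.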
20. HISTORY
• The use of concurrent processes that communicate by message-passing
has its roots in operating system architectures studied in the 1960s.
• The first widespread distributed systems were local area networks such as
Ethernet, which was invented in the 1970s.
• ARPANET (Advanced Research Projects Agency Network), the predecessor
of the Internet, was introduced in the late 1960s, and ARPANET email was
invented in the early 1970s. E-mail became the most successful
application of ARPANET, and it is probably the earliest example of a large-
scale distributed application.
• In addition to ARPANET, and its successor, the Internet, other early
worldwide computer networks included Usenet and FidoNet from the
1980s, both of which were used to support distributed discussion systems.
• The study of distributed computing became its own branch of computer
science in the late 1970s and early 1980s.
• The first conference in the field, Symposium on Principles of Distributed
Computing (PODC), dates back to 1982, and its European counterpart
International Symposium on Distributed Computing (DISC) was first held in
1985.
21. SCALABLE COMPUTING OVER THE INTERNET
• Over the past 60 years, computing technology has
undergone a series of platform and environment
changes.
• Evolutionary changes in machines include architecture,
operating system, platform, network connectivity, and
application workload.
• Instead of using a centralized computer to solve
computational problems, a parallel and distributed
computing system uses multiple computers to solve
large-scale problems over the Internet. Thus,
distributed computing becomes data-intensive and
network-centric.
22. SCALABLE COMPUTING OVER THE
INTERNET
• 2.1 The Age of Internet Computing
• Billions of people use the Internet every day.
• Supercomputer sites and large data centers must provide high-performance
computing services to huge numbers of Internet users concurrently.
• The Linpack Benchmark for high-performance computing (HPC) applications is no
longer optimal for measuring system performance.
• The emergence of computing clouds instead demands high-throughput computing
(HTC) systems built with parallel and distributed computing technologies.
• The purpose is to advance network-based computing and web services with the
emerging new technologies.
• The Platform Evolution
• Computer technology has gone through five generations of development, with
each generation lasting from 10 to 20 years.
• 1950 to 1970 – mainframes, e.g., IBM 360 and CDC 6400.
• 1960 to 1980 – lower-cost minicomputers, e.g., DEC PDP-11 and VAX series.
• 1970 to 1990 – personal computers built with VLSI microprocessors.
• 1980 to 2000 – portable computers.
23. High-Performance Computing
• For many years, HPC systems emphasized raw speed
performance.
• The speed of HPC systems increased from Gflops in the early
1990s to Pflops by 2010.
• This improvement was driven mainly by the demands from
scientific, engineering, and manufacturing communities.
24. High-Throughput Computing
– The development of market-oriented high-end computing systems is
undergoing a strategic change from an HPC paradigm to an HTC
paradigm.
– The main application for high-flux computing is in Internet searches
and web services by millions or more users simultaneously.
– The performance goal thus shifts to measure high throughput or the
number of tasks completed per unit of time.
– HTC technology needs to not only improve in terms of batch
processing speed, but also address the acute problems of cost, energy
savings, security, and reliability at many data and enterprise
computing centers.
• Three New Computing Paradigms:
– The maturity of radio-frequency identification (RFID), Global
Positioning System (GPS), and sensor technologies has triggered the
development of the Internet of Things (IoT).
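The HTC performance goal above, throughput measured as tasks completed per unit of time, can be illustrated with a toy worker pool. The task body, pool size, and task count are made-up stand-ins for many small concurrent service requests:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# HTC measures throughput (tasks completed per unit of time),
# not the raw speed of any single task.
def task(_):
    time.sleep(0.01)  # stand-in for one small service request
    return 1

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    done = sum(pool.map(task, range(40)))  # 40 tasks, 8 at a time
elapsed = time.perf_counter() - start

throughput = done / elapsed  # tasks per second
print(f"{done} tasks in {elapsed:.2f}s -> {throughput:.0f} tasks/s")
```

Adding workers raises throughput even though each individual task is no faster, which is exactly the shift in performance goal the slide describes.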
25. Computing Paradigm Distinctions
• The following list defines these terms more clearly; their architectural and operational
differences are discussed below.
• Centralized computing: This is a computing paradigm by which all computer resources are
centralized in one physical system. All resources (processors, memory, and storage) are fully
shared and tightly coupled within one integrated OS. Many data centers and supercomputers
are centralized systems, but they are used in parallel, distributed, and cloud computing
applications.
• Parallel computing: In parallel computing, all processors are either tightly coupled with
centralized shared memory or loosely coupled with distributed memory. Some authors refer
to this discipline as parallel processing. Interprocessor communication is accomplished
through shared memory or via message passing.
• Distributed computing: A distributed system consists of multiple autonomous computers,
each having its own private memory, communicating through a computer network.
Information exchange in a distributed system is accomplished through message passing. A
computer program that runs in a distributed system is known as a distributed program. The
process of writing distributed programs is referred to as distributed programming.
• Cloud computing An Internet cloud of resources can be either a centralized or a distributed
computing system. The cloud applies parallel or distributed computing, or both. Clouds can
be built with physical or virtualized resources over large data centers that are centralized or
distributed. Some authors consider cloud computing to be a form of utility
computing or service computing.
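The shared-memory style of interprocessor communication named under parallel computing can be sketched with threads (a toy Python illustration; a real distributed system would instead use message passing, since each node has private memory):

```python
import threading

# Parallel computing, shared-memory style: several instruction streams
# (threads) update ONE shared counter, synchronized with a lock.
counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # the lock makes the shared update safe
            counter += 1

threads = [threading.Thread(target=add, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # all threads saw and updated the same memory
```

In a distributed system this direct sharing is impossible; the equivalent coordination would be expressed as messages between autonomous machines.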
26. Hardware architectures for parallel
processing
• The core elements of parallel processing are
CPUs. Based on the number of instruction and
data streams that can be processed
simultaneously, computing systems are classified
into the following four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
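Three of these categories can be modeled in miniature (illustrative only; real SISD/SIMD/MIMD behavior is a property of the hardware, and the operations below are made-up examples):

```python
import threading

def sisd(xs):
    # SISD: one instruction stream processes one datum at a time.
    out = []
    for x in xs:
        out.append(x + 1)
    return out

def simd(xs):
    # SIMD: think of this as ONE "add 1" instruction applied to the
    # whole data vector in lockstep, as a vector unit would.
    return [x + 1 for x in xs]

def mimd(xs, ys):
    # MIMD: independent instruction streams (threads), each running
    # its own operation on its own data.
    results = {}
    a = threading.Thread(target=lambda: results.update(sq=[x * x for x in xs]))
    b = threading.Thread(target=lambda: results.update(neg=[-y for y in ys]))
    a.start(); b.start(); a.join(); b.join()
    return results

print(sisd([1, 2, 3]))
print(simd([1, 2, 3]))
print(mimd([1, 2], [3, 4]))
```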
29. Distributed system families
• System efficiency is decided by speed,
programming, and energy factors (i.e.,
throughput per watt of energy consumed).
Meeting these goals requires satisfying the
following design objectives:
Efficiency
Dependability.
Adaptation in the programming model
Flexibility in application deployment
30. The Trend toward Utility Computing
Characteristics
• Ubiquitous
• Reliability
• Scalability
• Autonomic
• Self-organized to support dynamic discovery.
• Utility vision - Utility computing focuses on a
business model in which customers receive
computing resources from a paid service
provider. All grid/cloud platforms are regarded as
utility service providers.
31. THE ANATOMY OF CLOUD
INFRASTRUCTURES
• This slide focuses on IaaS clouds and, more specifically, on the efficient
management of virtual machines in this type of cloud. There are many commercial IaaS
cloud providers in the market, and all of them share five characteristics:
• (i) They provide on-demand provisioning of computational resources;
• (ii) They use virtualization technologies to lease these resources;
• (iii) They provide public and simple remote interfaces to manage those
resources;
• (iv) They use a pay-as-you-go cost model, typically charging by the hour;
• (v) They operate data centers large enough to provide a seemingly
unlimited amount of resources to their clients (usually touted as "infinite
capacity" or "unlimited elasticity"). Private and hybrid clouds share these
same characteristics, but instead of selling capacity over publicly
accessible interfaces, they focus on providing capacity to an organization's
internal users.
32. Cloud Characteristics
• Centralization of infrastructure and lower
costs
• Increased peak-load capacity
• Efficiency improvements for systems that are
often underutilized
• Dynamic allocation of CPU, storage, and
network bandwidth
• Consistent performance that is monitored by
the provider of the service
33. Cloud Characteristics
No up-front commitments
On-demand access
Nice pricing
Simplified application acceleration and scalability
Efficient resource allocation
Energy efficiency
Seamless creation and use of third-party services
34. Characteristics of Cloud Computing:
• On-demand self-service. A consumer can
unilaterally provision computing capabilities, such
as server time and network storage, as needed
automatically without requiring human
interaction with each service provider.
• Broad network access. Capabilities are available
over the network and accessed through standard
mechanisms that promote use by heterogeneous
thin or thick client platforms (e.g., mobile
phones, tablets, laptops, and workstations).
35. Characteristics of Cloud Computing:
• Resource pooling.
• The provider's computing resources are pooled to serve
multiple consumers using a multi-tenant model, with
different physical and virtual resources dynamically
assigned and reassigned according to consumer demand.
• There is a sense of location independence in that the
customer generally has no control or knowledge over the
exact location of the provided resources but may be able to
specify location at a higher level of abstraction (e.g.,
country, state, or datacenter).
• Examples of resources include storage, processing,
memory, and network bandwidth.
36. Characteristics of Cloud Computing:
Elasticity in Cloud
• Cloud computing gives the illusion of infinite
computing resources available on demand.
Users therefore expect clouds to rapidly provide
resources in any quantity at any time. In
particular, it is expected that the additional
resources can be
• i. provisioned, possibly automatically, when an
application's load increases, and
• ii. released when the load decreases (scale up and
down)
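The provision-on-increase / release-on-decrease rule above can be sketched as a tiny autoscaling policy. The thresholds, per-instance capacity, and simulated load values are all hypothetical illustrations, not figures from the slides:

```python
def autoscale(instances, load, capacity_per_instance=100,
              high=0.8, low=0.3, min_instances=1):
    """Return the new instance count for the observed load."""
    utilization = load / (instances * capacity_per_instance)
    if utilization > high:                          # i. load increased -> provision
        instances += 1
    elif utilization < low and instances > min_instances:
        instances -= 1                              # ii. load decreased -> release
    return instances

# Simulated request rates arriving over time.
n = 1
for load in [50, 120, 200, 90, 20]:
    n = autoscale(n, load)
    print(load, "->", n, "instance(s)")
```

Real cloud autoscalers apply the same idea with richer signals (CPU, queue depth, latency) and with cool-down periods to avoid oscillating between scale-up and scale-down.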
37. Characteristics of Cloud Computing:
• Rapid elasticity.
• Capabilities can be elastically provisioned and
released, in some cases automatically, to scale
rapidly outward and inward commensurate
with demand.
• To the consumer, the capabilities available for
provisioning often appear to be unlimited and
can be appropriated in any quantity at any
time.
38. Characteristics of Cloud Computing:
• Measured service.
• Cloud systems automatically control and optimize
resource use by leveraging a metering capability
at some level of abstraction appropriate to the
type of service (e.g., storage, processing,
bandwidth, and active user accounts).
• Typically this is done on a pay-per-use or charge-
per-use basis. Resource usage can be monitored,
controlled, and reported, providing transparency
for both the provider and consumer of the
utilized service.
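The metering capability described above can be sketched as a minimal pay-per-use billing loop. The resource names, unit rates, and usage figures are made-up examples, not real provider prices:

```python
from collections import defaultdict

# Measured service: a metering layer records usage per resource at
# the abstraction level of the service, and the bill is usage x rate.
RATES = {"storage_gb_hours": 0.01, "cpu_hours": 0.05}  # hypothetical rates

usage = defaultdict(float)

def meter(resource, amount):
    # Record a usage event (monitored, controlled, and reportable).
    usage[resource] += amount

meter("cpu_hours", 10)
meter("storage_gb_hours", 200)
meter("cpu_hours", 2)

bill = sum(usage[r] * RATES[r] for r in usage)
print(f"total charge: ${bill:.2f}")
```

Because the same usage record drives both the consumer's bill and the provider's reporting, metering provides the transparency for both parties that the slide mentions.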