2. Unit 1
• Introduction
Cloud Computing at a Glance
Historical Developments
Building Cloud Computing Environments
Computing Platforms and Technologies
• Principles of Parallel and Distributed Computing
Eras of Computing
Parallel vs. Distributed Computing
Elements of Parallel Computing
Elements of Distributed Computing
3. • Cloud computing is the practice of using remote servers on the internet to carry out a task, rather than using our own computers and servers.
4. Introduction
• Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities such as water, electricity, gas, and telephony.
• In such a model, users access services based on their requirements, regardless of where the services are hosted.
• Cloud computing is the most recent emerging paradigm promising to turn the vision of "computing utilities" into a reality.
5. • It is based on the concept of dynamic provisioning.
• Resources are made available through the internet and offered on a pay-per-use basis by cloud computing vendors.
• Today, anyone with a credit card can subscribe to cloud services, deploy and configure services for an application in hours, grow and shrink the infrastructure serving the application according to demand, and pay only for the time these resources have been used.
6. Cloud Computing at a Glance
• In 1969, Leonard Kleinrock, one of the chief scientists of the Advanced Research Projects Agency Network (ARPANET), said:
"As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of 'computer utilities' which, like present electric and telephone utilities, will service individual homes and offices across the country."
7. • Computing services will be readily available on demand, like other utility services such as water, electricity, telephone, and gas in today's society.
• Users (consumers) need to pay providers only when they access the computing services.
• Consumers no longer need to invest heavily, or face the difficulties of building and maintaining complex IT infrastructure.
8. • In this model, users access services based on their requirements, without regard to where the services are hosted. This model has been referred to as utility computing; more recently (since 2007) it has been called cloud computing.
• Cloud computing can be described as a new paradigm for the dynamic provisioning of computing services, supported by state-of-the-art data centers that employ virtualization technologies for consolidation and effective utilization of resources.
• Cloud computing allows renting infrastructure, runtime environments, and services on a pay-per-use basis.
• End users leveraging cloud computing services can access their documents and data anytime, anywhere, from any device connected to the internet.
9. • One of the most diffused views of cloud computing is as follows:
"I don't care where my servers are, who manages them, where my documents are stored, or where my applications are hosted. I just want them always available, and to access them from any device connected to the Internet. And I am willing to pay for this service for as long as I need it."
10. • Cloud computing turns IT services into utilities.
• Web 2.0 technologies play a central role in making cloud computing an attractive opportunity for building computing systems.
• They transformed the internet into a rich application and service delivery platform, mature enough to serve complex needs.
• Service orientation allows cloud computing to deliver its capabilities with familiar abstractions, while virtualization gives cloud computing the necessary degree of customization, control, and flexibility for building production and enterprise systems.
11. • Besides being an extremely flexible environment for building new systems and applications, cloud computing also provides an opportunity for integrating additional capacity, or new features, into existing systems.
• Using dynamically provisioned IT resources is a more attractive option than buying additional infrastructure and software, whose sizing can be difficult to estimate and whose need is limited in time. This is one of the most important advantages of cloud computing, and it has made cloud computing a popular phenomenon.
12. The Vision of Cloud Computing
• Cloud computing allows anyone with a credit card to provision virtual hardware, runtime environments, and services.
• Different stakeholders leverage clouds for a variety of services:
The need for ubiquitous storage and compute power on demand is the most common reason to consider cloud computing.
A scalable runtime for applications is an attractive option for application and system developers who do not have infrastructure or cannot afford any further expansion of the existing one.
Web-based access to documents, and the ability to process them with sophisticated applications, is one of the appealing factors for end users.
14. Defining a Cloud
• Cloud computing has become a popular buzzword, widely used to refer to different technologies, services, and concepts.
• Cloud computing is usually associated with:
Virtualized infrastructure (hardware on demand)
Utility computing
IT outsourcing
Platform and software as a service
15. • Different notions associated with cloud computing:
No capital investments
Pay as you go
Elasticity
Scalability
Provisioning on demand
Virtualization
Internet
IT outsourcing
Privacy and trust
Security
Service level agreement
Quality of service
Green computing
Utility computing
Billing
SaaS
IaaS
PaaS
16. • The term "cloud" has historically been used in the telecommunication industry as an abstraction of the network in system diagrams.
• It then became the symbol of the most popular computer network: the Internet.
• This meaning also applies to cloud computing, which refers to the internet-centric way of doing computing.
• The Internet plays a fundamental role in cloud computing, since it represents either the medium or the platform through which many cloud computing services are delivered and made accessible.
17. • Definition given by Armbrust et al.:
"Cloud computing refers to both the applications delivered as services over the internet, and the hardware and system software in the datacenters that provide those services."
18. • This definition describes cloud computing as a phenomenon touching the entire stack: from the underlying hardware to the high-level software services and applications.
• It introduces the concept of Everything-as-a-Service (XaaS).
19. • NIST (National Institute of Standards and Technology):
"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
20. • Another aspect of cloud computing is its utility-oriented approach. Cloud computing focuses on delivering services with a given pricing model: in most cases a "pay-per-use" strategy. It makes it possible to access online storage, rent virtual hardware, or use development platforms, and pay only for their effective usage, with no or minimal upfront cost.
21. • Rajkumar Buyya defined it as follows:
"A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers."
22. A Closer Look
1. Large enterprises can offload some of their activities to cloud-based systems.
The New York Times converted its digital library of past editions into a Web-friendly format. This required a considerable amount of computing power for a short period of time. By renting Amazon EC2 and S3 cloud resources, it performed this task in 36 hours and relinquished the resources without any additional costs.
23. 2. Small enterprises and start-ups can translate their ideas into business results more quickly, without excessive upfront costs.
Animoto is a company that creates videos out of images. The process involves a considerable amount of storage and backend processing. Animoto does not own a single server; it fully depends on Amazon Web Services, which is sized on demand according to the workload to be processed. The workload can vary a lot (in one week it scaled from 70 to 8,500 servers) and requires instant scalability, so upfront investment is not an effective solution.
24. 3. System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability.
Little Fluffy Toys is a London company that developed a widget providing users with information about nearby rental bicycle services. The company managed to back the widget's computing needs with Google AppEngine and be on the market in one week.
25. 4. End users can have their documents accessible from everywhere, on any device.
Apple iCloud is a service that allows users to store their documents in the cloud and access them from any device they connect to it. This makes it possible to take a picture with a smartphone, edit the same picture on a laptop, and see the updated version on a tablet. The process is completely transparent to users, who do not have to set up cables and connect these devices to each other.
26. Characteristics and Benefits
• Cloud computing has some interesting characteristics that bring benefits to both cloud service consumers (CSCs) and cloud service providers (CSPs):
1. No upfront commitments
2. On-demand access
3. Nice pricing
4. Simplified application acceleration and scalability
5. Efficient resource allocation
6. Energy efficiency
7. Seamless creation and use of third-party services
27. Challenges Ahead
• Besides the practical aspects (related to configuration, networking, and sizing of cloud computing systems), a new set of challenges arises concerning the dynamic provisioning of cloud computing services and resources.
• Issues concerning the integration of real and virtual infrastructure need to be considered from different perspectives, such as security and legislation.
• Security, in terms of confidentiality, secrecy, and protection of data in a cloud environment, is another challenge.
• Legal issues may also arise.
32. Cloud Computing Reference Model
• A fundamental characteristic of cloud computing is the capability of delivering a variety of IT services on demand.
• Cloud computing service offerings can be classified into three major categories.
33. 1. Software as a Service (SaaS)
2. Platform as a Service (PaaS)
3. Infrastructure as a Service (IaaS)
35. • At the base of the stack, Infrastructure-as-a-Service solutions deliver infrastructure on demand, in the form of virtual hardware, storage, and networking.
Virtual hardware is used to provide compute on demand in the form of virtual machine instances. These are created on the user's request on the provider's infrastructure, and users are given tools and interfaces to configure the software stack installed in the virtual machine.
Virtual storage is delivered in the form of raw disk space or object store.
Virtual networking identifies the collection of services that manage the networking among virtual instances and their connectivity towards the internet or private networks.
36. • Platform-as-a-Service solutions are the next step in the stack.
They deliver scalable and elastic runtime environments on demand that host the execution of applications.
These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed.
It is the responsibility of the service provider to provide scalability and to manage fault tolerance, while users are asked to focus on the logic of the application, developed by leveraging the provider's APIs and libraries.
37. • At the top of the stack sits Software-as-a-Service.
SaaS solutions provide applications and services on demand.
Most of the common functionalities of desktop applications, such as office automation, document management, photo editing, and customer relationship management (CRM) software, are replicated on the provider's infrastructure, made more scalable, and accessible through a browser on demand.
The SaaS layer is also the domain of social networking websites, which leverage cloud-based infrastructures to sustain the load generated by their popularity.
38. • Each layer provides different services to users:
1. IaaS solutions are sought by users who want to leverage cloud computing for building dynamically scalable computing systems requiring a specific software stack. IaaS services are used to develop scalable websites or for background processing.
2. PaaS solutions provide scalable programming platforms for developing applications, and are more appropriate when new systems have to be developed.
3. SaaS solutions target mostly end users, who want to benefit from the elastic scalability of the cloud without doing any software development, installation, configuration, or maintenance.
39. Historical Developments
• In tracking the historical evolution, five core technologies played an important role:
1. Distributed systems
2. Virtualization
3. Web 2.0
4. Service-oriented computing
5. Utility computing
40. Distributed Systems
• Clouds are essentially large distributed computing facilities that make their services available to third parties on demand.
• By Tanenbaum:
"A distributed system is a collection of independent computers that appears to its users as a single coherent system."
41. • Three other major milestones led to cloud computing:
1. Mainframe computing
2. Cluster computing
3. Grid computing
42. 1. Mainframes
These were the first examples of large computational facilities leveraging multiple processing units.
They are powerful, highly reliable computers specialized for large data movement and massive I/O operations.
They are mostly used by large organizations for bulk data processing.
They provide large computational power by using multiple processors, which are presented as a single entity to users.
They are highly reliable computers that were "always on" and capable of tolerating failures transparently.
Batch processing is the main application of mainframes.
Their popularity has reduced, but they are still in use for transaction processing (e.g., online banking, airline ticket booking, supermarkets, telcos, and government services).
43. • Clusters
Clusters started as a low-cost alternative to mainframes and supercomputers.
In the 1980s, clusters became the standard technology for parallel and high-performance computing.
Being built from commodity machines, they are cheaper than mainframes.
Cluster technology contributed to the evolution of tools and frameworks, including Condor, Parallel Virtual Machine (PVM), and Message Passing Interface (MPI).
44. • Grids
Grids appeared in the early 1990s.
They proposed a new approach to accessing large computational power, huge storage facilities, and a variety of services.
Users can "consume" resources in the same way they use other utilities such as power, gas, and water.
45. • Virtualization
Virtualization is another core technology for cloud computing.
It encompasses a collection of solutions allowing the abstraction of some of the fundamental elements of computing: hardware, runtime environments, storage, and networking.
It is a technology that allows the creation of different computing environments. These environments are called virtual because they simulate the interface that is expected by a user.
The different kinds of virtualization are:
1. Hardware virtualization
2. Storage virtualization
3. Network virtualization
46. Hardware virtualization allows simulating the hardware interface expected by an operating system.
It allows the co-existence of different software stacks on top of the same hardware. These stacks are contained inside virtual machine instances, which operate completely isolated from each other.
A high-performance server can host several virtual machine instances, creating the opportunity to have a customized software stack on demand.
This is the base technology that enables cloud computing solutions delivering virtual servers on demand, such as Amazon EC2, RightScale, VMware vCloud, and others.
47. • Virtualization technologies are also used to replicate runtime environments for programs.
This is the case of the "process virtual machine", which is the foundation of technologies such as Java or .NET, where applications are run not by the operating system but by a specific program called a virtual machine.
48. Web 2.0
The web is the primary interface through which cloud computing delivers its services.
It encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition.
This has transformed the web into a rich platform for application development. This evolution is known as Web 2.0.
49. Web 2.0
Web 2.0 is the second stage of development of the Internet, characterized especially by the change from static web pages to dynamic or user-generated content and the growth of social media.
It basically refers to the transition from static HTML web pages to a more dynamic web that is more organized and based on serving web applications to users.
50. 1. Web 2.0 brings interactivity and flexibility into web pages, providing an enhanced user experience through web-based access to all the functions normally found in desktop applications.
2. Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate, following the usage trends of the community.
3. Loose coupling is another fundamental property. New applications can be "synthesized" simply by composing existing services and integrating them, thus providing added value.
4. Web 2.0 applications aim to leverage the "long tail" of internet users by making themselves available to everyone, in terms of either media accessibility or cost.
51. • Examples of Web 2.0 applications are:
Google Documents
Google Maps
Flickr
Facebook
Twitter
YouTube
del.icio.us
Blogger
Wikipedia
52. • Web 2.0
Examples:
• Hosted services: Google Maps
• Web applications: OpenOffice, Google Docs, and Flickr
• Social networking
• Video sharing sites: YouTube
Tools:
• Social media sites such as Facebook and Twitter
• Wikipedia (http://www.wikipedia.com/) gives users the ability to manipulate and enhance the content, with a focus on participation rather than publishing.
Social media is a platform that uses Web 2.0 technology.
53. • A collective term for certain applications of the Internet and the World Wide Web that focus on interactive sharing and participatory collaboration rather than simple content delivery.
• What is important is building simple things that people can use.
• Web 2.0 is a people-oriented technology movement. Ease of use, social features, collaboration, fast-loading applications, interactivity, quick development times, and real-time updates are all major trends.
54. Web 2.0 is the primary interface through which cloud computing delivers its services.
The term captures a new way in which developers architect applications, deliver services through the internet, and provide a new user experience for their users.
55. • Service-Oriented Computing (SOC)
• A service is an abstraction representing a self-describing, platform-agnostic component that can perform any function: anything from a simple function to a complex business process.
• Any piece of code that performs a task can be turned into a service and expose its functionality through a network-accessible protocol.
• A service is supposed to be loosely coupled, reusable, programming-language independent, and location transparent.
• SOC introduces two important concepts, which are fundamental to cloud computing:
Quality of Service (QoS)
Software as a Service (SaaS)
56. • QoS
A set of functional and non-functional attributes that can be used to evaluate the behaviour of a service from different perspectives. These can be response time, security attributes, transactional integrity, reliability, scalability, and availability.
QoS requirements are established between the client and the provider through a Service Level Agreement (SLA) that identifies the minimum values for the QoS attributes that need to be satisfied.
57. • SaaS
SaaS delivers software-based service solutions across the wide area network, from a central data center, and makes them available on a subscription or rental basis.
The ASP (application service provider) is responsible for maintaining the infrastructure and making the application available, while the client is freed from maintenance costs and difficult upgrades.
58. • Utility-Oriented Computing
A model in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis.
The idea of providing computing as a utility, like natural gas, water, power, and telephone connections, has a long history, but it has become a reality today with the advent of cloud computing.
59. Building Cloud Computing Environments
1. Application development
• Applications that leverage cloud computing benefit from its capability of dynamically scaling on demand.
• One class of applications that takes the biggest advantage of this feature is web applications.
• With the diffusion of Web 2.0 technologies, the web has become a platform for developing rich and complex applications.
60. 2. Infrastructure and system development
• Distributed computing is a foundational model for cloud computing, because cloud systems are distributed systems.
• Web 2.0 technologies constitute the interface through which cloud computing services are delivered, managed, and provisioned.
• Virtualization is the core feature of the infrastructure used by cloud providers.
61. Computing Platforms and Technologies
1. Amazon Web Services (AWS)
AWS offers cloud IaaS services ranging from virtual compute to storage and networking.
It is famous for its on-demand compute and storage services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
EC2 provides users with customizable virtual hardware that can be used as the base infrastructure for deploying computing systems on the cloud.
S3 delivers persistent storage on demand. Users can store objects of any size, from simple files to entire disk images, and have them accessible from everywhere.
Other services include:
Amazon Virtual Private Cloud (VPC)
Amazon CloudFront
Amazon Relational Database Service (RDS)
62. 2. Google AppEngine
AppEngine is a fully managed, serverless platform for developing and hosting web applications at scale. Developers can choose from several popular languages, libraries, and frameworks to develop their apps, and let AppEngine take care of provisioning servers and scaling app instances based on demand.
It is a scalable runtime environment devoted to executing web applications.
AppEngine provides both a secure execution environment and a collection of services that simplify the development of scalable, high-performance web applications.
Developers can build and test applications on their own machines using the AppEngine SDK, which replicates the production runtime environment and helps test and profile applications. Once development is complete, developers can easily migrate their application to AppEngine and make it available to the world. The supported languages are Python, Java, and Go.
63. 3. Microsoft Azure
Azure is a cloud operating system and a platform for developing applications in the cloud.
It provides a scalable runtime environment for web applications and distributed applications in general.
Applications are organized around the concept of roles:
1. Web role: designed to host a web application.
2. Worker role: used to perform workload processing.
3. Virtual machine role: provides a virtual environment.
64. 4. Hadoop
Apache Hadoop is an open-source framework suited for processing large data sets.
It is an implementation of MapReduce, an application programming model developed by Google, which provides two fundamental operations for data processing: map and reduce.
Map transforms and synthesizes the input data provided by the user.
Reduce aggregates the output produced by the map operations.
Hadoop provides the runtime environment; developers need only provide the input data and specify the map and reduce functions to be executed.
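The two operations above can be sketched with a minimal word-count example in plain Python. This is only an illustration of the MapReduce programming model, not actual Hadoop API code (Hadoop itself is Java-based); the function names and input lines are invented for the example.

```python
from collections import defaultdict

# Map: transform one line of input into (word, 1) pairs.
def map_fn(line):
    return [(word, 1) for word in line.split()]

# Reduce: aggregate all the values emitted for one key.
def reduce_fn(key, values):
    return key, sum(values)

lines = ["the cloud", "the grid and the cloud"]

# Shuffle: group the mapped pairs by key, as the runtime would.
grouped = defaultdict(list)
for line in lines:
    for key, value in map_fn(line):
        grouped[key].append(value)

counts = dict(reduce_fn(k, v) for k, v in grouped.items())
print(counts)  # {'the': 3, 'cloud': 2, 'grid': 1, 'and': 1}
```

In a real Hadoop job, the shuffle step and the distribution of map and reduce tasks across machines are handled entirely by the framework.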
65. 5. Force.com and Salesforce.com
Force.com is a cloud computing platform for developing social enterprise applications.
The platform is the basis of Salesforce.com, a SaaS solution for customer relationship management.
The platform provides complete support for developing applications: from the design of the data layout, to the definition of business rules and workflows, to the definition of the user interface.
66. 6. Manjrasoft Aneka
Aneka is a cloud application platform for the rapid creation of scalable applications and their deployment on various types of clouds in a seamless and elastic manner.
It supports a collection of programming abstractions for developing applications, and a distributed runtime environment that can be deployed on heterogeneous hardware.
Developers can choose different abstractions to design their applications: tasks, threads, and MapReduce.
67. Principles of Parallel and Distributed Computing
• Eras of computing
The two dominant models of computing are:
1. Sequential
2. Parallel
Four key elements developed during these eras are:
1. Architectures
2. Compilers
3. Applications
4. Problem-solving environments
69. Parallel vs. Distributed Computing
• The term "parallel" implies a tightly coupled system.
• "Distributed" implies a wider class of systems, including those that are tightly coupled.
Parallel computing
Refers to the model where the computation is divided among several processors sharing the same memory.
It is characterized by the homogeneity of components: each processor is of the same type and has the same capability as the others.
The shared memory has a single address space, which is accessible to all the processors.
Parallel programs are broken down into several units of execution that can be allocated to different processors and can communicate with each other by means of the shared memory.
70. • Distributed computing
Any system that allows the computation to be broken down into units and executed concurrently on different computing elements, whether these are processors on different nodes, processors on the same computer, or cores within the same processor.
It therefore includes a wide range of systems and applications.
71. Elements of Parallel Computing
• What is parallel processing?
Parallel processing is the processing of multiple tasks simultaneously on multiple processors.
A parallel program consists of multiple active processes simultaneously solving a given problem.
A given task is divided into multiple subtasks using a divide-and-conquer technique, and each subtask is processed on a different CPU.
Programming a multi-processor system using the divide-and-conquer technique is called parallel programming.
Many applications today require more computing power. Parallel processing provides a cost-effective solution by increasing the number of CPUs in a computer and adding an efficient communication system between them.
The workload can then be shared between the processors, resulting in higher computing power and performance than a single processor can deliver.
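The divide-and-conquer decomposition described above can be sketched with the Python standard library's thread pool. This is a toy illustration under assumed values: the task (summing 1..100), the chunk size, and the worker count are arbitrary choices for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def subtask(chunk):
    # Each worker processes one independent chunk of the data.
    return sum(chunk)

data = list(range(1, 101))                            # task: sum 1..100
chunks = [data[i:i + 25] for i in range(0, 100, 25)]  # divide into 4 subtasks

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(subtask, chunks))        # conquer in parallel

result = sum(partials)                                # combine partial results
print(result)  # 5050
```

On CPython, threads share one interpreter, so true CPU parallelism would require processes or native extensions; the decomposition pattern, however, is the same.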
72. Hardware Architectures for Parallel Processing
The core elements of parallel processing are CPUs. Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into the following four categories:
1. Single Instruction, Single Data (SISD)
2. Single Instruction, Multiple Data (SIMD)
3. Multiple Instruction, Single Data (MISD)
4. Multiple Instruction, Multiple Data (MIMD)
74. • SISD
Machine instructions are processed sequentially; hence computers adopting this model are called sequential computers.
Most conventional computers are built using the SISD model.
All instructions and data to be processed are stored in primary memory.
Examples of SISD systems are the IBM PC, the Macintosh, and workstations.
76. • SIMD
A SIMD machine is a multiprocessor machine capable of executing the same instruction on all CPUs, but operating on different data streams.
SIMD machines are well suited for scientific computing, since it involves lots of vector and matrix operations.
E.g., a statement such as Ci = Ai * Bi can be issued to all the PEs (processing elements); the data elements of vectors A and B can be divided into multiple sets (N sets for an N-PE system), and each PE can process one data set.
Examples of SIMD systems are Cray's vector-processing machines, Thinking Machines' CM, etc.
78. • MISD
A MISD machine is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
E.g., a statement such as y = sin(x) + cos(x) + tan(x) performs different operations on the same data.
No MISD machines are available commercially.
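The y = sin(x) + cos(x) + tan(x) example can be spelled out directly: three different "instructions" operate on the same single data item, which is the defining property of MISD. The value of x is an arbitrary choice for the sketch.

```python
import math

x = 0.5  # the single shared data item

# Three different operations applied to the same data,
# as the PEs of a MISD machine would.
operations = [math.sin, math.cos, math.tan]
y = sum(op(x) for op in operations)
print(y)
```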
80. • MIMD
A MIMD machine is a multiprocessor machine capable of executing multiple instructions on multiple data sets.
Each PE has separate instruction and data streams; hence machines built using this model are suited for any kind of application.
Unlike SIMD and MISD machines, the PEs in MIMD machines work asynchronously.
MIMD machines are categorized into two classes, based on how the PEs are coupled to the main memory:
1. Shared-memory MIMD
2. Distributed-memory MIMD
82. • Shared-memory MIMD
All the PEs are connected to a single global memory, and all have access to it.
These are also called tightly coupled multiprocessor systems.
Communication between PEs takes place through the shared memory; a modification of the data stored in the global memory by one PE is visible to all other PEs.
Examples of shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM SMP (symmetric multiprocessing) systems.
83. • Distributed-memory MIMD
All PEs have a local memory.
These are also called loosely coupled multiprocessor systems.
Communication between PEs takes place through the interconnection network (the IPC, or inter-process communication, channel).
They are harder to program than shared-memory systems, but more tolerant to failures, since the failure of one PE does not affect the others.
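Communication through a channel rather than shared memory can be mimicked within one Python process, using a queue to stand in for the interconnection network. This is a toy sketch: real distributed-memory systems communicate via MPI or sockets between machines, and the values sent here are invented for the example.

```python
import threading
import queue

channel = queue.Queue()  # stands in for the interconnection network

def producer():
    # PE 1: holds the data in its local memory and sends it as messages.
    for value in [10, 20, 30]:
        channel.put(value)
    channel.put(None)  # sentinel: no more messages

def consumer(results):
    # PE 2: receives messages; it never touches PE 1's local memory.
    while True:
        msg = channel.get()
        if msg is None:
            break
        results.append(msg * 2)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [20, 40, 60]
```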
84. Approaches to Parallel Programming
A sequential program runs on a single processor and has a single line of control.
To make many processors collectively work on a single program, the program must be divided into smaller, independent chunks so that each processor can work on a separate chunk of the problem. A program decomposed in this way is a parallel program.
85. • A wide variety of parallel programming approaches are available. The most prominent among them are:
1. Data parallelism: the divide-and-conquer technique is used to split data into multiple sets, and each data set is processed on a different PE using the same instruction.
2. Process parallelism: a given operation has multiple activities, which can be processed on multiple processors.
3. Farmer-and-worker model: a job-distribution approach is used. One processor is configured as the master (farmer) and all the remaining PEs are designated as slaves (workers); the master assigns jobs to the slave PEs, and on completion they inform the master, which in turn collects the results.
86. Levels of parallelism
• Levels of parallelism are decided based on the
lumps of code (grain size) that are potential
candidates for parallelism.
86
87. Grain size   Code item                          Parallelized by
    Large        Separate and heavyweight process   Programmer
    Medium       Function or procedure              Programmer
    Fine         Loop or instruction block          Parallelizing compiler
    Very fine    Instruction                        Processor
87
88. ⢠Parallelism within an application can be
detected at several levels:
1. Large-grain (or task-level)
2. Medium-grain (or control-level)
3. Fine-grain (data-level)
4. Very-fine-grain (multiple-instruction level)
88
89. Laws of caution
• Parallelism is used to perform multiple
activities together so that the system can
increase its throughput or its speed.
• For a given 'n' processors, the user expects the
speed to increase by 'n' times. This is an
ideal situation, which rarely happens, because
of communication overhead.
89
90. • Here are two important guidelines to take into
account:
1. Speed of computation is proportional to the
square root of system cost; it never
increases linearly. Therefore, the faster the
system, the greater the expense to increase its
speed.
2. Speed-up by a parallel computer increases as
the logarithm of the number of processors.
90
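The two guidelines can be put into numbers. The figures below are illustrative only (the constant `k` and the processor counts are made up), but they show how pessimistic both laws are compared with the ideal n-times speed-up.

```python
import math

# Illustrative arithmetic for the two laws of caution.
def speed_from_cost(cost, k=1.0):
    # Law 1: speed grows with the square root of system cost.
    return k * math.sqrt(cost)

def speedup(n_processors):
    # Law 2: parallel speed-up grows with the logarithm of the
    # processor count, far below the ideal n-times speed-up.
    return math.log2(n_processors)

print(speed_from_cost(4) / speed_from_cost(1))  # 2.0: 4x the cost, 2x the speed
print(speedup(1024))                            # 10.0, not the ideal 1024
```

So quadrupling the budget only doubles the speed, and a thousand processors yield roughly a ten-fold speed-up under these laws.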
92. Elements of distributed computing
• General concepts and definitions
By Tanenbaum:
"A distributed system is a collection of independent
computers that appears to its users as a single
coherent system."
Communication is one of the fundamental aspects of
distributed computing. Since distributed systems are
composed of more than one computer collaborating
together, it is necessary to provide some sort of data
and information exchange between them, which
generally occurs through the network.
92
93. • "A distributed system is one in which
components located at networked computers
communicate and coordinate their actions
only by passing messages."
93
94. Components of a distributed system
• A distributed system is the result of the
interaction of several components that
traverse the entire computing stack, from
hardware to software.
• Viewed as a stack, these layers give an
overview of what is involved in providing the
services of a distributed system.
94
96. Architectural styles for distributed
computing
• Architectural styles are mainly used to
determine the vocabulary of components and
connectors that are used as instances of the
style, together with a set of constraints on how
they can be combined.
• Architectural styles for distributed systems are
helpful in understanding the different roles of
components in the system and how they are
distributed across multiple machines.
96
97. • Architectural styles can be divided into two major
classes:
1. Software architectural styles: relate to the logical
organization of the software.
2. System architectural styles: describe the physical
organization (components and connectors).
Component: a unit of software that encapsulates a
function or a feature of the system.
Connector: a communication mechanism that
allows cooperation and coordination among
components.
97
98. 1. Software architectural styles
a. Data centered architectures.
b. Data flow architectures.
c. Virtual machine architectures.
d. Call and return architectures.
e. Architectural styles based on independent
components.
98
99. 2. System architectural styles
a. Client-server: this architecture is very popular in
distributed computing.
- suitable for a wide variety of applications.
- features two major components: a server and a client.
- these two components interact with each other
through a network connection.
- the communication is unidirectional: the client issues a
request to the server, and the server, after processing the
request, returns a response.
- hence the important operations are request and
accept (client side), and listen and response (server side).
99
100. - for the client design, two major models
exist: 1. Thin-client model: the load of data
processing and transformation is put on the
server side, and the client is concerned only
with retrieving and returning the data.
2. Fat-client model: the client is responsible
for processing and transforming the data
before returning it to the user.
100
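The request/accept and listen/response operations described above can be sketched with plain sockets. The port, payload, and upper-casing "service" are all illustrative: in thin-client style, the client only sends raw data and displays what the server computed.

```python
import socket
import threading

# Minimal client-server sketch: the server listens, the client issues a
# request, the server processes it and returns a response.
def serve_once(sock):
    conn, _ = sock.accept()          # listen and accept (server side)
    request = conn.recv(1024)
    conn.sendall(request.upper())    # process, then respond
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))        # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

client = socket.socket()
client.connect(("127.0.0.1", port))  # request (client side)
client.sendall(b"hello")
response = client.recv(1024)
client.close()
print(response)  # b'HELLO'
```

The communication is unidirectional in the sense the slide describes: the exchange is always initiated by the client, never by the server.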
102. b. Peer-to-peer
This model introduces a symmetric
architecture in which all the components, called
peers, play the same role and incorporate
both client and server capabilities.
102
105. Characteristics of Cloud
• 1. Resource Pooling
• The cloud provider pools its computing resources to serve
multiple customers using a multi-tenant model. Different
physical and virtual resources are dynamically assigned and
reassigned according to customer demand. The customer
generally has no control over, or knowledge of, the exact
location of the provided resources, but may be able to specify
the location at a higher level of abstraction.
• 2. On-Demand Self-Service
• This is one of the most important and valuable features of
cloud computing: the user can provision capabilities as needed
and continuously monitor server uptime, capabilities, and
allotted network storage.
105
106. • 3. Easy Maintenance
• The servers are easily maintained, and downtime is very
low; in some cases there is no downtime at all. Cloud
services are updated regularly, gradually improving them:
the updates are more compatible with devices, perform
faster than older versions, and fix known bugs.
• 4. Large Network Access
• The user can access data in the cloud, or upload data to it,
from anywhere using just a device and an internet
connection. These capabilities are available all over the
network and are accessed via the internet.
106
107. • 5. Availability
• The capabilities of the cloud can be modified to match
usage and can be extended considerably. The provider
analyzes storage usage and allows the user to buy extra
cloud storage, if needed, for a very small fee.
• 6. Automatic System
• Cloud computing automatically analyzes the data
needed and supports a metering capability at some
level of service. Usage can be monitored, controlled,
and reported, providing transparency for both the host
and the customer.
107
108. • 7. Economical
• It is a one-time investment: the company (host) buys the
storage, and small parts of it can be provided to many
companies, saving the host from monthly or yearly costs. The
only ongoing spending is on basic maintenance and a few
other minor expenses.
• 8. Security
• Cloud security is one of the best features of cloud computing.
Providers create snapshots of the stored data so that data is
not lost even if one of the servers is damaged. The data is
stored within storage devices that are difficult for other
persons to hack into and misuse. The storage service is quick
and reliable.
108
109. • 9. Pay As You Go
• In cloud computing, the user pays only for the service or the
space actually utilized. There are no hidden or extra charges to be
paid. The service is economical, and some space is often
allotted for free.
• 10. Measured Service
• Cloud computing resources are monitored, and the provider
records their usage. Resource utilization is analyzed through
charge-per-use capabilities: resource usage, such as the virtual
server instances running in the cloud, is monitored, measured,
and reported by the service provider. Under the pay-as-you-go
model, the bill varies with the actual consumption of the
consuming organization.
109
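Pay-as-you-go billing on top of measured service reduces to simple arithmetic: record usage per resource, multiply by a per-unit rate, and sum. The resource names and rates below are entirely made up for illustration.

```python
# Toy metering sketch for "measured service" + "pay as you go".
# Rates and resource names are hypothetical, not any provider's prices.
RATES = {"vm_hours": 0.05, "storage_gb": 0.02}   # cost per unit

def bill(usage):
    # charge-per-use: multiply each monitored quantity by its rate
    return sum(RATES[item] * amount for item, amount in usage.items())

usage = {"vm_hours": 100, "storage_gb": 50}       # monitored and reported
print(round(bill(usage), 2))  # 6.0
```

Because the bill is computed from reported usage, the same record gives the transparency mentioned under "Automatic System": both host and customer can audit it.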
111. • What is service-oriented computing in cloud
computing?
• Service orientation is the core reference model
for cloud computing systems. This approach
adopts the concept of services as the main
building blocks of application and system
development. ... A service is supposed to be
loosely coupled, reusable, programming-language
independent, and location transparent.
111
112. • What is utility-oriented computing?
• Utility computing is a model in
which computing resources are provided to the
customer based on specific demand. ... The
foundational concept is that users or businesses
pay the providers of utility computing for the
amenities used, such as computing capabilities,
storage space, and application services.
112