Parallel Computing vs. Distributed Computing

Parallel computing:
• Computation is divided among several processors sharing the same memory.
• Homogeneity of components.
• Shared memory has a single address space.
• Parallel programs are broken down into several units of execution.
• Originally, multiple processors shared the same physical memory.

Distributed computing:
• Any architecture that allows the computation to be broken down into units and executed concurrently on different computing elements.
• Includes a wider range of systems and applications than parallel computing.
• Heterogeneity of components.
• Examples: computing grids or Internet computing systems.
Parallel Computing
• Processing of multiple tasks simultaneously on multiple processors is called parallel processing.
• Many applications today require more computing power than a traditional sequential computer can offer.
• Parallel processing provides a cost-effective solution to this problem.
• Programming on a multiprocessor system using the divide-and-conquer technique is called parallel programming (see the sketch below).
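To make divide-and-conquer parallel programming concrete, here is a minimal sketch that splits a summation across a pool of worker processes. The chunking scheme and the worker count of 4 are illustrative assumptions, not prescribed by the slides.

```python
# Minimal divide-and-conquer parallel sum (illustrative sketch).
from multiprocessing import Pool

def partial_sum(chunk):
    """Conquer step: each worker sums its own chunk."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4  # assumed worker count, for illustration only
    # Divide step: split the data into one chunk per worker.
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])  # leftover elements
    with Pool(n_workers) as pool:
        # Combine step: merge the partial results.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(data), computed in parallel
```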
Hardware architectures for parallel processing
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
SISD (sequential computers)
• A single processor executes a single instruction stream, operating on a single data stream.
MISD
• A multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same data set.
• More of an intellectual exercise than a practical configuration.
MIMD
• An MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets.
• Each PE in the MIMD model has separate instruction and data streams.
• Well suited to any kind of application.
• Unlike SIMD and MISD machines, MIMD machines work asynchronously (see the sketch below).
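As a loose illustration of the MIMD idea, the hedged sketch below (an assumed example, not from the slides) runs two different instruction streams on two different data sets in separate processes that proceed asynchronously.

```python
# Illustrative MIMD-style sketch: different instruction streams on
# different data streams, executing asynchronously in separate processes.
from multiprocessing import Process, Queue

def count_evens(data, out):
    out.put(("evens", sum(1 for x in data if x % 2 == 0)))

def total(data, out):
    out.put(("total", sum(data)))

if __name__ == "__main__":
    out = Queue()
    # Each process runs its own code on its own data set.
    p1 = Process(target=count_evens, args=(range(100), out))
    p2 = Process(target=total, args=(range(50), out))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(dict(out.get() for _ in range(2)))  # {'evens': 50, 'total': 1225}
```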
Levels of parallelism
• Levels of parallelism are decided based on the lumps of code (grain size); task- and data-level parallelism are contrasted in the sketch after this list.
• Large grain (or task level)
• Medium grain (or control level)
• Fine grain (data level)
• Very fine grain (multiple-instruction issue)
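To make two of the grain sizes concrete, here is a hedged sketch contrasting task-level parallelism (whole independent tasks run concurrently) with data-level parallelism (the same operation applied across many elements). The function names are illustrative assumptions.

```python
# Illustrative contrast of two grain sizes (assumed example functions).
from concurrent.futures import ProcessPoolExecutor

def render_report(x):
    return f"report-{x}"   # one coarse, self-contained task

def square(x):
    return x * x           # one fine-grained, per-element operation

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        # Task level (large grain): independent tasks in parallel.
        r1 = ex.submit(render_report, 1)
        r2 = ex.submit(square, 7)
        # Data level (fine grain): the same operation over many elements.
        squares = list(ex.map(square, range(10)))
    print(r1.result(), r2.result(), squares)
```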
Laws of caution
• Speed of computation is proportional to the square root of system cost; it never increases linearly.
• The faster a system becomes, the more expensive it is to increase its speed further.
• The speed achieved by a parallel computer increases as the logarithm of the number of processors, i.e., y = k*log(N) (evaluated numerically below).
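A quick numeric reading of the second law (a hedged illustration; k = 1 and the base-2 logarithm are arbitrary choices for the example): multiplying the processor count adds only constant increments of speed.

```python
# Diminishing returns predicted by y = k*log(N), with assumed k = 1.
import math

k = 1.0
for n in (2, 4, 8, 16, 1024):
    print(n, round(k * math.log(n, 2), 2))
# 2 -> 1.0, 4 -> 2.0, 8 -> 3.0, 16 -> 4.0, 1024 -> 10.0:
# a 512x jump in processors (2 -> 1024) yields only 10x the speed.
```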
Distributed Computing
• A distributed system is a collection of independent computers that appears to its users as a single coherent system.
• Components located at networked computers communicate and coordinate their actions only by passing messages.
Components of a distributed system
[Figure: layered view of a distributed system]
Models for interprocess communication
• Message-based communication: several distributed programming paradigms eventually build on message-based communication (a minimal sketch follows this list), including:
• Message passing
• Remote procedure call (RPC)
• Distributed objects
• Distributed agents and active objects
• Web services
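As a minimal sketch of message passing between two processes (an assumed example, not from the slides), the two components below coordinate only by exchanging messages over a pipe, with no shared memory.

```python
# Two processes coordinating purely by passing messages (illustrative).
from multiprocessing import Pipe, Process

def worker(conn):
    msg = conn.recv()                 # receive a request message
    conn.send(f"processed:{msg}")     # reply with a result message
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send("task-42")            # no shared state, only messages
    print(parent.recv())              # -> processed:task-42
    p.join()
```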
Models for message-based communication
• Point-to-point message model: each message is sent from one component to exactly one other component.
• Publish-and-subscribe message model: messages flow from a publisher to the subscribers that registered interest in them (a push-strategy sketch follows this list).
• Push strategy -> it is the responsibility of the publisher to notify all the subscribers.
• Pull strategy -> it is the responsibility of the subscribers to check whether there are new messages.
• Request-reply message model: for each message sent by a process, there is a reply.
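Here is a hedged, in-process sketch of the publish-and-subscribe model with a push strategy; the broker class, topic name, and message text are illustrative assumptions. The publisher side pushes each message to every registered subscriber.

```python
# Minimal in-process publish/subscribe with a push strategy (illustrative).
class Broker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Push strategy: the publishing side notifies every subscriber.
        for callback in self.subscribers.get(topic, []):
            callback(message)

broker = Broker()
broker.subscribe("alerts", lambda m: print("subscriber A got:", m))
broker.subscribe("alerts", lambda m: print("subscriber B got:", m))
broker.publish("alerts", "disk almost full")
```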
Technologies for distributed computing
• Remote procedure call
• Distributed object frameworks, and
• Service-oriented computing.
Remote procedure call
• RPC has been a dominant technology for interprocess communication (IPC) for quite a long time.
• RPC is the fundamental abstraction enabling the execution of procedures on a client's request.
• Marshaling and unmarshaling convert parameters and return values to and from a format suitable for transmission over the network (see the sketch below).
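To make the abstraction concrete, here is a hedged sketch using Python's standard-library xmlrpc module; the procedure name and port are assumptions made for illustration. The library performs marshaling and unmarshaling transparently on both sides.

```python
# Server side: registers a procedure for remote execution (illustrative).
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b  # executed on the server at the client's request

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")
server.serve_forever()
```

```python
# Client side: the call looks local but runs on the remote server.
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))  # arguments are marshaled, sent, unmarshaled -> 5
```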
Distributed object frameworks
• Extend object-oriented programming by allowing objects to be distributed across a heterogeneous network.
• Provide facilities so that distributed objects can coherently act as though they were in the same address space.
• An extension of RPC that enables the remote invocation of object methods (a proxy-based sketch follows below).
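Distributed object frameworks typically hand the client a local proxy that forwards method calls over the RPC machinery. The sketch below illustrates only that proxy idea: all names are assumptions, and the transport is faked with a local dictionary so the example stays self-contained and runnable.

```python
# Sketch of the client-side proxy idea behind distributed objects.
# The "remote" side is faked with a local dict for self-containment.
REMOTE_OBJECTS = {"counter": {"increment": lambda n: n + 1}}

class RemoteProxy:
    def __init__(self, object_id):
        self.object_id = object_id

    def __getattr__(self, method_name):
        def remote_call(*args):
            # In a real framework this would marshal the call and send
            # it to the remote object over the network (as in RPC above).
            method = REMOTE_OBJECTS[self.object_id][method_name]
            return method(*args)
        return remote_call

counter = RemoteProxy("counter")
print(counter.increment(41))  # looks like a local method call -> 42
```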
Service-oriented computing
• Service-oriented computing organizes distributed systems in terms of services.
• Web services are the de facto approach for developing a Service-Oriented Architecture (a minimal sketch follows).
Service:
A service encapsulates a software component that provides a set of coherent and related functionalities that can be reused and integrated into bigger and more complex applications.
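As a hedged illustration of a service exposing one small, reusable functionality over HTTP (the service name, path, and port are assumptions), the sketch below uses only Python's standard library.

```python
# Minimal HTTP-based service exposing one reusable function (illustrative).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A single coherent functionality, consumable by larger applications.
        if self.path == "/status":
            body = json.dumps({"service": "inventory", "status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ServiceHandler).serve_forever()
```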