2. Learning Outcomes
At the end of the course, the students will be able to:
• define parallel algorithms
• recognize parallel speedup and performance analysis
• identify task decomposition techniques
• perform parallel programming
• apply acceleration strategies for algorithms
3. Four decades of computing
• Batch Era
• Time-sharing Era
• Desktop Era
• Network Era
4. Batch era
• Batch processing is the execution of a series of programs on a computer without manual intervention.
• The term originated in the days when users entered programs on punch cards.
5. Time-sharing Era
• Time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multitasking.
• The goal was to develop a system that supported multiple users at the same time.
7. Network Era
• Systems with:
• Shared memory
• Distributed memory
• Examples of parallel computers: Intel iPSC, nCUBE
8. Parallel Computing
Parallel computing is a form of computation in which many instructions are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel).
With the increased use of computers in every sphere of human activity, computer scientists are faced with two crucial issues today:
• Processing has to be done faster than ever before.
• Larger or more complex computation problems need to be solved.
Increasing the number of transistors as per Moore's Law isn't a solution, as it also increases frequency scaling and power consumption.
Power consumption has been a major issue recently, as it causes the problem of processor heating.
The perfect solution is PARALLELISM, in hardware as well as in software.
9. Parallel Computing
Difference between Parallel Computing & Distributed Computing
When different processors/computers work on a single common goal, it is parallel computing. E.g., ten men pulling one rope to lift one rock; supercomputers implement parallel computing. Distributed computing is where several different computers work separately on a multi-faceted computing workload. E.g., ten men pulling ten ropes to lift ten different rocks; employees in an office doing their own work.
Difference between Parallel Computing & Cluster Computing
A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer. E.g., in an office of 50 employees, a group of 15 does some work, 25 some other work, and the remaining 10 something else. Similarly, in a network of 20 computers, 16 work on a common goal, whereas 4 work on some other common goal. Cluster computing is a specific case of parallel computing.
Difference between Parallel Computing & Grid Computing
Grid computing makes use of computers communicating over the Internet to work on a given problem. E.g., three persons, one from the USA, another from Japan, and a third from Norway, working together online on a common project. Websites like Wikipedia, Yahoo! Answers, YouTube, and Flickr, or an open-source OS like Linux, are examples of grid computing. Again, it serves as an example of parallel computing.
10. FLYNN's taxonomy of computer architecture
Two types of information flow into a processor:
Instructions
Data
What are instructions and data?
19. Why Use Parallel Computing?
• TAKE ADVANTAGE OF NON-LOCAL RESOURCES
20. Why Use Parallel Computing?
• MAKE BETTER USE OF UNDERLYING PARALLEL HARDWARE
• Modern computers, even laptops, are parallel in architecture with multiple
processors/cores
27. Factors influencing Parallel Computing
• Increased scientific & business computing
• Sequential architecture constrained by the speed of light and the laws of thermodynamics
• Hardware improvements such as pipelining and superscalar execution require sophisticated compilers
• Vector processing works well for matrix/graphics processing
• Parallel processing is mature and can be explored commercially
28. Shared Memory System
• A shared memory system typically accomplishes
interprocessor coordination through a global memory shared by all
processors.
Easier to program, Less tolerant, limited scalability
Failure affects entire system
• Ex: Server systems, GPGPU
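A minimal sketch of the shared-memory style in Python (the array size, worker count, and work done are illustrative, not from the slides): every worker reads and writes one global array, and a lock coordinates access.

import threading

# One global array acts as the memory shared by all workers.
shared = [0] * 8
lock = threading.Lock()

def worker(start, end):
    # Each worker writes its results directly into the shared array.
    for i in range(start, end):
        with lock:                 # coordination happens through shared state
            shared[i] += i * i

threads = [threading.Thread(target=worker, args=(i * 4, (i + 1) * 4))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)   # all results are visible in the single shared memory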
29. Message Passing System (Distributed Memory)
• These systems typically combine local memory and a processor at each node of the interconnection network.
• There is no global memory; a message-passing technique is used to move data from one local memory to another (see the sketch below).
• Harder to program, more fault tolerant, higher scalability
• A failure affects the system only partially
• Superior price/performance ratio
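A hedged message-passing sketch using Python's multiprocessing (queue names and the payload are illustrative; real distributed-memory systems would use something like MPI): the worker keeps its own local memory, and data moves only as explicit messages.

import multiprocessing as mp

def worker(inbox, outbox):
    # The worker's variables live in its own local memory;
    # data arrives and leaves only as messages.
    data = inbox.get()
    outbox.put(sum(data))

if __name__ == "__main__":
    inbox, outbox = mp.Queue(), mp.Queue()
    p = mp.Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put([1, 2, 3, 4])   # send data to the worker's local memory
    print(outbox.get())       # receive the result back: 10
    p.join()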
30. Limits and Costs of Parallel Programming
• Amdahl's Law:
Amdahl's Law states that potential program speedup is defined by the
fraction of code (P) that can be parallelized:
Speedup = 1 / (1 − P)
• If none of the code can be parallelized, P = 0 and the speedup = 1 (no
speedup).
• If all of the code is parallelized, P = 1 and the speedup is infinite (in
theory).
31. Limits and Costs of Parallel Programming
• If 50% of the code can be parallelized, the maximum speedup = 1 / (1 − 0.5) = 2, meaning the code will run twice as fast.
32. Limits and Costs of Parallel Programming
• Introducing the number of processors performing the parallel fraction
of work, the relationship can be modeled by:
Speedup = 1 / (P/N + S)
• where P = parallel fraction, N = number of processors, and S = serial fraction (S = 1 − P)
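A short sketch that evaluates this formula (function and variable names are mine): it shows that with P = 0.5 the speedup approaches, but never exceeds, 1/S = 2, no matter how many processors are added.

def amdahl_speedup(p, n):
    # p: parallel fraction of the code, n: number of processors
    s = 1 - p                  # serial fraction
    return 1 / (p / n + s)

for n in (1, 2, 8, 1024):
    print(n, round(amdahl_speedup(0.5, n), 3))
# prints speedups 1.0, 1.333, 1.778, 1.998: the limit is 1/S = 2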
35. Bit-Level Parallelism
When an 8-bit processor needs to add two 16-bit integers, the addition has to be done in two steps.
The processor must first add the 8 lower-order bits from each integer using the standard addition instruction,
then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition (sketched below).
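A sketch of that two-step addition in Python (the operand values and function name are illustrative): the 16-bit operands are split into 8-bit halves, and the carry out of the low half feeds the high half.

def add16_on_8bit(a, b):
    # Step 1: add the 8 lower-order bits with the standard add instruction.
    low = (a & 0xFF) + (b & 0xFF)
    carry = low >> 8                            # carry bit out of the low byte
    # Step 2: add the 8 higher-order bits plus the carry (add-with-carry).
    high = ((a >> 8) + (b >> 8) + carry) & 0xFF
    return (high << 8) | (low & 0xFF)

print(hex(add16_on_8bit(0x12FF, 0x0001)))   # 0x1300: the carry propagated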
36. Instruction Level Parallelism
The instructions given to a computer for processing can be divided into groups, or reordered, and then processed without changing the final result.
This is known as instruction-level parallelism, i.e., ILP.
37. An Example
1. e = a + b
2. f = c + d
3. g = e * f
Here, instruction 3 is dependent on instructions 1 and 2.
However, instructions 1 and 2 can be processed independently (see the sketch below).
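ILP itself is exploited inside the processor, but the same dependency structure can be mirrored in software. A hedged sketch using Python threads (the operand values are arbitrary): instructions 1 and 2 run concurrently, and instruction 3 waits for both results.

from concurrent.futures import ThreadPoolExecutor

a, b, c, d = 1, 2, 3, 4
with ThreadPoolExecutor(max_workers=2) as pool:
    # Instructions 1 and 2 are independent, so they may run at the same time.
    fe = pool.submit(lambda: a + b)   # 1. e = a + b
    ff = pool.submit(lambda: c + d)   # 2. f = c + d
    # Instruction 3 depends on both, so it must wait for their results.
    g = fe.result() * ff.result()     # 3. g = e * f
print(g)   # 21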
38. Data Parallelism
Data parallelism focuses on distributing the data across
different parallel computing nodes.
It is also called loop-level parallelism.
39. An Illustration
In a data-parallel implementation of matrix addition, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half.
Since the two processors work in parallel, performing the matrix addition would take half the time of performing the same operation serially on one CPU alone (sketched below).
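A data-parallel sketch of that matrix addition using Python's multiprocessing (the matrices and pool size are illustrative): the data is split in half, and each worker runs the same addition on its half.

import multiprocessing as mp

def add_rows(pair):
    # Each worker performs the same operation on its own slice of the data.
    rows_x, rows_y = pair
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(rows_x, rows_y)]

if __name__ == "__main__":
    X = [[1, 2], [3, 4], [5, 6], [7, 8]]
    Y = [[8, 7], [6, 5], [4, 3], [2, 1]]
    # Top half of the matrices for one CPU, bottom half for the other.
    halves = [(X[:2], Y[:2]), (X[2:], Y[2:])]
    with mp.Pool(2) as pool:
        top, bottom = pool.map(add_rows, halves)
    print(top + bottom)   # [[9, 9], [9, 9], [9, 9], [9, 9]]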
40. Task Parallelism
Task Parallelism focuses on distribution of tasks across
different processors.
It is also known as functional parallelism or control parallelism.
41. An Example
As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the runtime of the execution (illustrated below).
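A minimal task-parallel sketch with Python processes (the two tasks are placeholders for real work): two different functions run simultaneously, one per CPU.

import multiprocessing as mp

def task_a():
    print("task A:", sum(range(1000)))   # one kind of work

def task_b():
    print("task B:", max(range(1000)))   # a different kind of work

if __name__ == "__main__":
    # Different tasks, not different slices of the same data, run in parallel.
    pa = mp.Process(target=task_a)
    pb = mp.Process(target=task_b)
    pa.start(); pb.start()
    pa.join(); pb.join()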
42. Key Difference Between Data And Task Parallelism
Data Parallelism:
• It is the division of threads (processes), instructions, or tasks internally into sub-parts for execution.
• A task 'A' is divided into sub-parts and then processed.
Task Parallelism:
• It is the division among threads (processes), instructions, or tasks themselves for execution.
• A task 'A' and a task 'B' are processed separately by different processors.
43. Implementation Of Parallel Computing In Software
When implemented in software (or rather, in algorithms), the terminology calls it 'parallel programming'.
An algorithm is split into pieces and then executed, as seen earlier.
44. Important Points In Parallel Programming
Dependencies: a typical scenario is when line 6 of an algorithm depends on lines 2, 3, 4, and 5.
Application checkpoints: like saving the state of the algorithm, or creating a backup point.
Automatic parallelisation: identifying dependencies and parallelising algorithms automatically. This has achieved limited success.
45. Implementation Of Parallel Computing In Hardware
When implemented in hardware, it is called 'parallel processing'.
Typically, a chunk of the load for execution is divided for processing by units like cores, processors, CPUs, etc.