CS-416 Parallel and Distributed Systems<br />Jawwad Shamsi<br />
Course Outline<br />Parallel Computing Concepts<br />Parallel Computing Architecture<br />Algorithms<br />Parallel Programming Environments<br />
Introduction<br />Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:<br />To be run using multiple CPUs<br />A problem is broken into discrete parts that can be solved concurrently<br />Each part is further broken down into a series of instructions<br />Instructions from each part execute simultaneously on different resources (Source: llnl.gov)<br />
Types of Processes<br />Sequential processes: steps occur in a strict order; the next step cannot begin until the current one is completed.<br />Parallel processes: many events happen simultaneously.<br />
Need for Parallelism<br />Huge, complex problems<br />Supercomputers<br />Hardware<br />Use of parallelization techniques<br />
Motivation<br />Solve complex problems in a much shorter time<br />Fast CPUs<br />Large memory<br />High-speed interconnect<br />The interconnect, or interconnection network, is made up of the wires and cables that define how the multiple processors of a parallel computer are connected to each other and to the memory units<br />
Applications<br />Large data sets or large equations<br />Seismic operations<br />Geological predictions<br />Financial markets<br />
Parallel computing: more than one computation at a time using more than one processor.<br />If one processor can perform the arithmetic in time t, then ideally p processors can perform it in time t/p.<br />
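The ideal-speedup relation above (p processors finish in time t/p) can be written as a small sketch. The times are hypothetical numbers for illustration, not measurements:

```python
# Ideal (linear) speedup model: p processors finish in time t/p.
# A sketch with hypothetical numbers, not a benchmark.

def ideal_time(t, p):
    """Time for p processors under perfect parallelization."""
    return t / p

def speedup(t, p):
    """Speedup relative to one processor: t divided by parallel time."""
    return t / ideal_time(t, p)  # equals p in the ideal case

print(ideal_time(100.0, 4))  # 4 processors on a 100-unit job
print(speedup(100.0, 4))
```

In practice, communication and the sequential fraction of a program keep real speedup below this ideal.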
Parallel Programming Environments<br />MPI: distributed memory<br />OpenMP: shared memory<br />Hybrid model: threads<br />
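The defining feature of the MPI (distributed-memory) style is that workers share no data directly and exchange explicit messages. That idea can be modeled with a queue between two workers; this is a stdlib sketch of the message-passing style only, not real MPI (which would use C/Fortran MPI calls or a binding such as mpi4py):

```python
# Message-passing sketch (the style MPI uses): workers share no state
# directly; one performs an explicit send, the other an explicit receive.
# Thread-based model of the idea, not actual MPI.
import queue
import threading

channel = queue.Queue()  # stands in for the interconnect

def sender():
    channel.put("partial result: 42")  # explicit send

def receiver(out):
    out.append(channel.get())  # explicit receive (blocks until data arrives)

received = []
t_recv = threading.Thread(target=receiver, args=(received,))
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
print(received[0])
```

In the shared-memory (OpenMP-style) model, by contrast, workers would read and write the same variables directly and synchronize with locks or barriers instead of messages.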
How Much Parallelism<br />Decomposition: the process of partitioning a computer program into independent pieces that can be run simultaneously (in parallel).<br />Data parallelism<br />Task parallelism<br />
Data Parallelism<br />The same code segment runs concurrently on each processor<br />Each processor is assigned its own part of the data to work on<br />
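The two bullets above can be sketched directly: every worker runs the same function (the common code segment), and the runtime partitions the data among them. A thread pool is used here for portability; the worker count is illustrative:

```python
# Data-parallelism sketch: the same code segment (square) is applied
# concurrently, each worker handling its own part of the data.
# A conceptual sketch, not a performance demonstration.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Same code segment executed by every worker.
    return x * x

data = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    # map partitions the data among the workers.
    results = list(pool.map(square, data))

print(results)
```

Task parallelism would instead run *different* functions concurrently, each on its own (possibly shared) data.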
SIMD: Single Instruction, Multiple Data<br />
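SIMD can be pictured as one instruction applied across every element of a vector at once. Real SIMD happens in hardware vector registers (e.g. SSE/AVX lanes); this sketch only models the programming idea:

```python
# Conceptual SIMD sketch: one logical instruction (add) applied to
# multiple data elements. Hardware SIMD would do this in a single
# vector instruction; this models the idea, not the mechanism.
def vector_add(a, b):
    # One operation over all "lanes" of the two vectors.
    return [x + y for x, y in zip(a, b)]

print(vector_add([1, 2, 3, 4], [10, 20, 30, 40]))
```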
Increasing Processor Speed<br />Greater number of transistors<br />Operations can be done in fewer clock cycles<br />Increased clock speed<br />More operations per unit time<br />Example:<br />8088/8086: 5 MHz, 29,000 transistors<br />E6700 Core 2 Duo: 2.66 GHz, 291 million transistors<br />
Multicore<br />A multi-core processor is a single processor that contains two or more complete functional units. Such chips are now the focus of Intel and AMD. A multi-core chip is a form of SMP.<br />
Symmetric Multi-Processing<br />Symmetric multiprocessing (SMP) is where two or more processors have equal access to the same memory. The processors may or may not be on one chip.<br />