
- 1. CS-416 Parallel and Distributed Systems
  Jawwad Shamsi
- 2. Course Outline
  Parallel Computing Concepts
  Parallel Computing Architecture
  Algorithms
  Parallel Programming Environments
- 3. Introduction
  Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
  The problem is run using multiple CPUs.
  The problem is broken into discrete parts that can be solved concurrently.
  Each part is further broken down into a series of instructions.
  Instructions from each part execute simultaneously on different resources. (Source: llnl.gov)
- 4. Types of Processes
  Sequential processes occur in a strict order: the next step cannot begin until the current one is completed.
  Parallel processes are those in which many events happen simultaneously.
- 5. Need for Parallelism
  Huge, complex problems
  Supercomputers
  Hardware
  Parallelization techniques
- 6. Motivation
  Solve complex problems in a much shorter time.
  Fast CPU
  Large memory
  High-speed interconnect
  The interconnect, or interconnection network, is made up of the wires and cables that define how the multiple processors of a parallel computer are connected to each other and to the memory units.
- 7. Applications
  Large data sets or large equations
  Seismic operations
  Geological predictions
  Financial markets
- 8. Parallel computing: more than one computation at a time, using more than one processor.
  If one processor can perform the arithmetic in time t,
  then ideally p processors can perform it in time t/p.
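The ideal t/p figure on this slide can be made concrete with a little arithmetic. The helpers below are a hypothetical sketch (the function names are my own, not from the course):

```python
def ideal_time(t, p):
    """Ideal run time when work taking time t on one processor
    is split perfectly across p processors (no overhead)."""
    return t / p

def speedup(t_serial, t_parallel):
    """Speedup is serial time divided by parallel time."""
    return t_serial / t_parallel

# A job that takes 60 seconds on one processor would ideally
# take 15 seconds on four processors, for a speedup of 4.
t4 = ideal_time(60.0, 4)
```

In practice, communication and synchronization overhead keep real speedups below this ideal.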
- 9. Parallel Programming Environments
  MPI (distributed memory)
  OpenMP (shared memory)
  Hybrid model
  Threads
- 12. How Much Parallelism?
  Decomposition: the process of partitioning a computer program into independent pieces that can run simultaneously (in parallel).
  Data parallelism
  Task parallelism
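The task-parallel half of this distinction can be sketched as two *different* operations running concurrently over the same data. This is an illustrative sketch using Python threads (names are my own; CPython's GIL limits true CPU parallelism for pure-Python code, but the decomposition is the same idea):

```python
from concurrent.futures import ThreadPoolExecutor

values = list(range(8))

# Task parallelism: two different tasks run at the same time,
# as opposed to data parallelism, where the same task runs
# on different pieces of the data.
def total(xs):
    return sum(xs)

def largest(xs):
    return max(xs)

with ThreadPoolExecutor(max_workers=2) as pool:
    total_future = pool.submit(total, values)
    largest_future = pool.submit(largest, values)

print(total_future.result(), largest_future.result())  # 28 7
```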
- 13. Data Parallelism
  The same code segment runs concurrently on each processor.
  Each processor is assigned its own part of the data to work on.
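A minimal data-parallel sketch: every worker runs the same code on its own slice of the data, and the partial results are combined at the end. The function names are hypothetical, and Python threads stand in for the processors a real code would use via MPI or OpenMP:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # The *same* code runs on every worker; only the data differs.
    return sum(chunk)

def parallel_sum(values, workers=4):
    # Partition the data into one chunk per worker
    # (the last chunk may be shorter than the others).
    size = max(1, (len(values) + workers - 1) // workers)
    chunks = [values[i:i + size] for i in range(0, len(values), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Combine the per-worker partial results into the final answer.
        return sum(pool.map(partial_sum, chunks))
```

For example, `parallel_sum(list(range(100)))` splits the list into four chunks of 25 and returns 4950, the same answer as a serial sum.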
- 14. SIMD: Single Instruction, Multiple Data
- 15. Increasing Processor Speed
  More transistors: operations can be done in fewer clock cycles.
  Increased clock speed: more operations per unit time.
  Example:
  8088/8086: 5 MHz, 29,000 transistors
  E6700 Core 2 Duo: 2.66 GHz, 291 million transistors
- 16. Multicore
  A multi-core processor is one processor that contains two or more complete functional units. Such chips are now the focus of Intel and AMD. A multi-core chip is a form of SMP.
- 17. Symmetric Multi-Processing
  SMP (symmetric multiprocessing) is where two or more processors have equal access to the same memory. The processors may or may not be on one chip.
