  1. Parallel Architecture & Parallel Programming
  2. Content:
  • Introduction
    – Von Neumann Architecture
    – Serial (Single) Computation
    – Concepts and Terminology
  • Parallel Architecture
    – Definition
    – Benefits & Advantages
    – Distinguishing Parallel Processors
    – Multiprocessor Architecture Classifications
    – Parallel Computer Memory Architectures
  • Parallel Programming
    – Definition
    – Parallel Programming Models
    – Designing Parallel Programs
    – Parallel Algorithm Examples
    – Conclusion
  • Case Study
  3. Introduction: Von Neumann Architecture
Since John von Neumann described this design in 1945, virtually all computers have followed it. It comprises four main components:
  – Memory
  – Control Unit
  – Arithmetic Logic Unit
  – Input/Output
  4. Introduction: Serial Computation
Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU):
  • The problem is broken into a discrete series of instructions.
  • Instructions are executed one after another.
  • Only one instruction may execute at any moment in time.
  5. Introduction: Serial Computation (diagram)
  6. Parallel Architecture
  7. Definition
Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem, run using multiple CPUs, in which:
  – The problem is broken into discrete parts that can be solved concurrently.
  – Each part is further broken down into a series of instructions.
  – Instructions from each part execute simultaneously on different CPUs.
  8. Definition (diagram)
  9. Concepts and Terminology: General Terminology
  • Task – a logically discrete section of computational work.
  • Parallel Task – a task that can be executed safely by multiple processors.
  • Communications – data exchange between parallel tasks.
  • Synchronization – the coordination of parallel tasks in real time.
  10. Benefits & Advantages
  • Save time and money.
  • Solve larger problems.
  11. How to Distinguish Parallel Processors
  – Resource allocation:
    • How large a collection?
    • How powerful are the elements?
    • How much memory?
  – Data access, communication, and synchronization:
    • How do the elements cooperate and communicate?
    • How are data transmitted between processors?
    • What are the abstractions and primitives for cooperation?
  – Performance and scalability:
    • How does it all translate into performance?
    • How does it scale?
  12. Multiprocessor Architecture Classification
Flynn's taxonomy distinguishes multiprocessor architectures by their instruction and data streams:
  • SISD – Single Instruction, Single Data
  • SIMD – Single Instruction, Multiple Data
  • MISD – Multiple Instruction, Single Data
  • MIMD – Multiple Instruction, Multiple Data
  13. Flynn's Classical Taxonomy: SISD
  • Serial: only one instruction and one data stream is acted on during any one clock cycle.
  14. Flynn's Classical Taxonomy: SIMD
  • All processing units execute the same instruction at any given clock cycle.
  • Each processing unit operates on a different data element.
  15. Flynn's Classical Taxonomy: MISD
  • Different instructions operate on a single data element.
  • Very few practical uses exist for this classification.
  • Example: multiple cryptography algorithms attempting to crack a single coded message.
  16. Flynn's Classical Taxonomy: MIMD
  • Can execute different instructions on different data elements.
  • The most common type of parallel computer.
  17. Parallel Computer Memory Architectures: Shared Memory
  • All processors access all memory as a single global address space.
  • Data sharing is fast.
  • Lacks scalability between memory and CPUs.
  18. Parallel Computer Memory Architectures: Distributed Memory
  • Each processor has its own memory.
  • Scalable, with no overhead for cache coherency.
  • The programmer is responsible for many details of communication between processors.
  19. Parallel Programming
  20. Parallel Programming Models
  • Exist as an abstraction above hardware and memory architectures.
  • Examples: Shared Memory, Threads, Message Passing, Data Parallel.
  21. Parallel Programming Models: Shared Memory Model
  • Appears to the user as a single shared memory, regardless of the underlying hardware implementation.
  • Locks and semaphores may be used to control access to the shared memory.
  • Program development can be simplified, since there is no need to explicitly specify communication between tasks.
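The role of a lock in the shared memory model can be sketched in Java, the language of the case study. This is a minimal illustration of our own (class and method names are not from the slides): several threads update one shared counter, and Java's intrinsic monitor (`synchronized`) plays the part of the lock that prevents lost updates.

```java
// Sketch of the shared memory model: multiple threads update one
// shared counter; synchronized serializes access so no update is lost.
public class SharedCounter {
    private long count = 0;

    // synchronized plays the role of the lock/semaphore from the slide
    public synchronized void increment() { count++; }
    public synchronized long get() { return count; }

    public static long run(int threads, int perThread) throws InterruptedException {
        SharedCounter c = new SharedCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();  // wait for all updates to finish
        return c.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 100_000)); // prints 400000
    }
}
```

Without the `synchronized` keyword, concurrent `count++` operations could interleave and the final total would usually be less than expected.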
  22. Parallel Programming Models: Threads Model
  • A single process may have multiple concurrent execution paths.
  • Typically used with a shared memory architecture.
  • The programmer is responsible for determining all parallelism.
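A minimal sketch of the threads model, assuming plain `java.lang.Thread` (the class name is illustrative): one process spawns two concurrent execution paths, each summing half of a shared array. The halves are disjoint, so no locking is needed; the programmer decides the split and the synchronization points, as the slide notes.

```java
// Sketch of the threads model: one process, two concurrent execution
// paths, each summing half of a shared array in shared memory.
public class TwoThreadSum {
    public static long sum(int[] data) throws InterruptedException {
        long[] partial = new long[2];            // one slot per thread
        int mid = data.length / 2;
        Thread lo = new Thread(() -> { for (int i = 0; i < mid; i++) partial[0] += data[i]; });
        Thread hi = new Thread(() -> { for (int i = mid; i < data.length; i++) partial[1] += data[i]; });
        lo.start(); hi.start();
        lo.join(); hi.join();                    // programmer-managed synchronization
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data)); // 500500
    }
}
```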
  23. Parallel Programming Models: Message Passing Model
  • Tasks exchange data by sending and receiving messages; typically used with distributed memory architectures.
  • Data transfer requires cooperative operations performed by each process, e.g., every send operation must have a matching receive operation.
  • MPI (Message Passing Interface) is the standard interface for message passing.
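The cooperative send/receive pairing can be illustrated in plain Java with a `BlockingQueue` acting as the channel. This queue-based analogy is ours, not from the slides, and it is not MPI: real distributed-memory programs would use MPI's `MPI_Send`/`MPI_Recv`. The point it shows is that no data structure is shared; data moves only when one task's send meets the other's receive.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Plain-Java sketch of the message passing model: the producer's put()
// is the "send" and the consumer's take() is the matching "receive".
public class MessagePassingDemo {
    public static int roundTrip(int value) throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);
        Thread sender = new Thread(() -> {
            try { channel.put(value * 2); }      // send
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        sender.start();
        int received = channel.take();           // receive (blocks until sent)
        sender.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(roundTrip(21)); // 42
    }
}
```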
  24. Parallel Programming Models: Data Parallel Model
  • Tasks perform the same operations on a set of data, each task working on a separate piece of the set.
  • Works well with either shared memory or distributed memory architectures.
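Java's parallel streams give a compact sketch of the data parallel model (the class and method names here are illustrative): the same operation, squaring, is applied to every element, and the runtime partitions the index range across worker threads.

```java
import java.util.stream.LongStream;

// Sketch of the data parallel model: the same operation is applied to
// every element; the runtime splits the range across threads.
public class DataParallelDemo {
    public static long sumOfSquares(int n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()             // split the range across worker threads
                         .map(i -> i * i)        // identical operation on each element
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10)); // 385
    }
}
```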
  25. Designing Parallel Programs: Automatic Parallelization
  • The compiler analyzes the code and identifies opportunities for parallelism.
  • The analysis includes attempting to determine whether the parallelism actually improves performance.
  • Loops are the most frequent target for automatic parallelization.
  26. Designing Parallel Programs: Manual Parallelization
  • Understand the problem.
  – A parallelizable problem: calculate the potential energy for each of several thousand independent conformations of a molecule; when done, find the minimum-energy conformation.
  – A non-parallelizable problem: the Fibonacci series – each calculation depends on the results of previous ones.
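The contrast on this slide can be made concrete. Below, `energy(i)` is a made-up stand-in for the real molecular computation: each evaluation is independent, so the calls may run in any order on any processor. The Fibonacci loop, by contrast, needs the previous two values before it can produce the next, so its iterations cannot be distributed.

```java
import java.util.stream.IntStream;

public class ParallelizableOrNot {
    // Hypothetical stand-in for the per-conformation energy computation.
    static double energy(int conformation) { return Math.sin(conformation) * conformation; }

    // Parallelizable: independent evaluations, combined by a reduction.
    public static double minEnergy(int conformations) {
        return IntStream.range(0, conformations)
                        .parallel()
                        .mapToDouble(ParallelizableOrNot::energy)
                        .min().getAsDouble();
    }

    // Not parallelizable: fib(n) needs fib(n-1) and fib(n-2) first,
    // so the loop iterations must run in sequence.
    public static long fib(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) { long t = a + b; a = b; b = t; }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```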
  27. Designing Parallel Programs: Domain Decomposition
  • Each task handles a portion of the data set.
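Domain decomposition can be sketched with a thread pool (the class name and chunking scheme are our illustration): the data set is cut into contiguous chunks, each task finds the maximum of its own chunk, and a final reduction combines the per-chunk results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of domain decomposition: each task handles one contiguous
// portion of the array; a final loop reduces the partial results.
public class DomainDecomposition {
    public static int max(int[] data, int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        int chunk = (data.length + tasks - 1) / tasks;   // ceiling division
        List<Future<Integer>> results = new ArrayList<>();
        for (int t = 0; t < tasks; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(lo + chunk, data.length);
            results.add(pool.submit(() -> {              // one task per chunk
                int m = Integer.MIN_VALUE;
                for (int i = lo; i < hi; i++) m = Math.max(m, data[i]);
                return m;
            }));
        }
        int best = Integer.MIN_VALUE;
        for (Future<Integer> f : results) best = Math.max(best, f.get());
        pool.shutdown();
        return best;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(max(new int[]{3, 41, 7, 98, 12, 55}, 3)); // 98
    }
}
```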
  28. Designing Parallel Programs: Functional Decomposition
  • Each task performs a function of the overall work.
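In contrast to splitting the data, functional decomposition splits the work by function. A minimal sketch (class name ours): two tasks run concurrently over the same input, one computing the sum and the other the minimum.

```java
// Sketch of functional decomposition: two tasks perform different
// functions of the overall work (sum vs. minimum) on the same data.
public class FunctionalDecomposition {
    public static long[] sumAndMin(int[] data) throws InterruptedException {
        long[] out = new long[2];
        Thread summer = new Thread(() -> {
            long s = 0;
            for (int v : data) s += v;
            out[0] = s;                          // task 1: total
        });
        Thread minner = new Thread(() -> {
            long m = Long.MAX_VALUE;
            for (int v : data) m = Math.min(m, v);
            out[1] = m;                          // task 2: minimum
        });
        summer.start(); minner.start();
        summer.join(); minner.join();
        return out;                              // {sum, min}
    }

    public static void main(String[] args) throws InterruptedException {
        long[] r = sumAndMin(new int[]{5, 2, 9, 4});
        System.out.println(r[0] + " " + r[1]); // 20 2
    }
}
```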
  29. Conclusion
  • Parallel computing can dramatically reduce the time to solve large problems.
  • There are many different approaches and models of parallel computing.
  • Parallel computing is the future of computing.
  30. References
  • A Library of Parallel Algorithms, www-2.cs.cmu.edu/~scandal/nesl/algorithms.html
  • Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel
  • Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
  • Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill Higher Education, 2003
  • A. K. Dewdney, The New Turing Omnibus, Henry Holt and Company, 1993
  31. Case Study: Developing Parallel Applications on the Web Using Java Mobile Agents and Java Threads
  32. My References
  • Parallel Computing Using JAVA Mobile Agents, by Panayiotou Christoforos, George Samaras, Evaggelia Pitoura, Paraskevas Evripidou
  • An Environment for Parallel Computing on Internet Using JAVA, by P. C. Saxena, S. Singh, K. S. Kahlon