Parallel Architecture & Parallel Programming

Transcript

  • 1. Parallel Architecture & Parallel Programming. Submitted to: Dr. Hesham El-Zouka. By: Eng. Ismail Fathalla El-Gayar
  • 2. Contents:
    • Introduction
      – Von Neumann Architecture
      – Serial (Single) Computation
      – Concepts and Terminology
    • Parallel Architecture
      – Definition
      – Benefits & Advantages
      – Distinguishing Parallel Processors
      – Multiprocessor Architecture Classifications
      – Parallel Computer Memory Architectures
    • Parallel Programming
      – Definition
      – Parallel Programming Models
      – Designing Parallel Programs
      – Parallel Algorithm Examples
      – Conclusion
    • Case Study
  • 3. Introduction: Von Neumann Architecture
    • Named after the mathematician John von Neumann. Virtually all computers since have followed this basic design, which comprises four main components:
      – Memory
      – Control Unit
      – Arithmetic Logic Unit
      – Input/Output
  • 4. Introduction: Serial Computation
    • Traditionally, software has been written for serial computation, to be run on a single computer having a single Central Processing Unit (CPU).
    • The problem is broken into a discrete series of instructions.
    • Instructions are executed one after another.
    • Only one instruction may execute at any moment in time.
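To make the serial model concrete, here is a minimal, hypothetical Java sketch (Java being the language of the case study at the end of this deck): a single loop sums an array strictly one element at a time, since every iteration depends on the running total from the previous one.

    // Serial computation: one CPU, one instruction stream,
    // one element processed at a time.
    public class SerialSum {
        public static void main(String[] args) {
            int[] data = new int[1_000_000];
            java.util.Arrays.fill(data, 1);

            long sum = 0;
            // Each iteration depends on the previous value of `sum`,
            // so the instructions execute strictly one after another.
            for (int value : data) {
                sum += value;
            }
            System.out.println("sum = " + sum);  // prints 1000000
        }
    }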
  • 5. Introduction: Serial Computation (diagram)
  • 6. Parallel Architecture
  • 7. Definition:
    • Parallel computing is the simultaneous use of multiple compute resources, i.e. multiple CPUs, to solve a computational problem, in which:
      – The problem is broken into discrete parts that can be solved concurrently.
      – Each part is further broken down into a series of instructions.
      – Instructions from each part execute simultaneously on different CPUs.
  • 8. Definition (diagram)
  • 9. Concepts and Terminology: General Terminology
    • Task: a logically discrete section of computational work.
    • Parallel Task: a task that can be executed safely by multiple processors.
    • Communications: data exchange between parallel tasks.
    • Synchronization: the coordination of parallel tasks in real time.
  • 10. Benefits & Advantages:
    • Save time and money.
    • Solve larger problems.
  • 11. How to Distinguish Parallel Processors:
    – Resource Allocation:
      • How large is the collection of processors?
      • How powerful are the elements?
      • How much memory is there?
    – Data Access, Communication and Synchronization:
      • How do the elements cooperate and communicate?
      • How are data transmitted between processors?
      • What are the abstractions and primitives for cooperation?
    – Performance and Scalability:
      • How does it all translate into performance?
      • How does it scale?
  • 12. Multiprocessor Architecture Classification:
    • Flynn's taxonomy distinguishes multiprocessor architectures by instruction stream and data stream:
      – SISD: Single Instruction, Single Data
      – SIMD: Single Instruction, Multiple Data
      – MISD: Multiple Instruction, Single Data
      – MIMD: Multiple Instruction, Multiple Data
  • 13. Flynn’s Classical Taxonomy: SISD
    • A serial (non-parallel) computer.
    • Only one instruction stream and one data stream are acted on during any one clock cycle.
  • 14. Flynn’s Classical Taxonomy: SIMD
    • All processing units execute the same instruction at any given clock cycle.
    • Each processing unit operates on a different data element.
  • 15. Flynn’s Classical Taxonomy: MISD
    • Different instructions operate on a single data element.
    • Very few practical uses for this type of architecture.
    • Example: multiple cryptography algorithms attempting to crack a single coded message.
  • 16. Flynn’s Classical Taxonomy: MIMD
    • Can execute different instructions on different data elements.
    • The most common type of parallel computer.
  • 17. Parallel Computer Memory Architectures: Shared Memory
    • All processors access all memory as a single global address space.
    • Data sharing is fast.
    • Lack of scalability between memory and CPUs.
  • 18. Parallel Computer Memory Architectures: Distributed Memory
    • Each processor has its own memory.
    • Scalable, with no overhead for cache coherency.
    • The programmer is responsible for many details of the communication between processors.
  • 19. Parallel Programming
  • 20. Parallel Programming Models
    • Exist as an abstraction above hardware and memory architectures.
    • Examples:
      – Shared Memory
      – Threads
      – Message Passing
      – Data Parallel
  • 21. Parallel Programming Models: Shared Memory Model
    • Appears to the user as a single shared memory, regardless of the underlying hardware implementation.
    • Locks and semaphores may be used to control access to the shared memory, as in the sketch below.
    • Program development can be simplified, since there is no need to explicitly specify communication between tasks.
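A minimal, hypothetical Java sketch of this model: two threads update one variable that both can see, and a ReentrantLock from the standard library serializes access so that no update is lost.

    import java.util.concurrent.locks.ReentrantLock;

    // Shared memory model: two threads update one shared variable,
    // and a lock controls access to it.
    public class SharedCounter {
        private static long counter = 0;                   // shared data
        private static final ReentrantLock lock = new ReentrantLock();

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) {
                    lock.lock();                           // acquire the lock
                    try {
                        counter++;                         // protected update
                    } finally {
                        lock.unlock();                     // always release
                    }
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("counter = " + counter);    // always 200000
        }
    }

Without the lock, the two increments could interleave and the final count would be unpredictable.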
  • 22. Parallel Programming Models: Threads Model
    • A single process may have multiple concurrent execution paths.
    • Typically used with a shared memory architecture (the sketch above uses this model: each Java Thread is one execution path within the process).
    • The programmer is responsible for determining all parallelism.
  • 23. Parallel Programming Models: Message Passing Model
    • Tasks exchange data by sending and receiving messages. Typically used with distributed memory architectures.
    • Data transfer requires cooperative operations performed by each process; for example, a send operation must have a matching receive operation.
    • MPI (Message Passing Interface) is the interface standard for message passing.
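MPI programs are most commonly written in C or Fortran; to stay in this deck's case-study language, here is a hypothetical Java sketch of the send/receive discipline, using a BlockingQueue as a stand-in for the communication channel. Within one JVM this is not true distributed memory; it only illustrates the pattern of cooperative, matched operations.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Message passing pattern: the tasks share no variables and
    // exchange data only through explicit send/receive operations.
    public class MessagePassingSketch {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<int[]> channel = new ArrayBlockingQueue<>(1);

            Thread sender = new Thread(() -> {
                try {
                    channel.put(new int[] {1, 2, 3, 4});   // the "send"
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread receiver = new Thread(() -> {
                try {
                    int[] message = channel.take();        // the matching "receive"
                    System.out.println("received " + message.length + " ints");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            sender.start();
            receiver.start();
            sender.join();
            receiver.join();
        }
    }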
  • 24. Parallel Programming Models: Data Parallel Model
    • Tasks perform the same operations on a set of data, each task working on a separate piece of the set.
    • Works well with either shared memory or distributed memory architectures.
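A minimal, hypothetical Java sketch of the data parallel model: the same operation (squaring) is applied to every element, and a parallel stream partitions the index range among worker threads.

    import java.util.stream.IntStream;

    // Data parallel model: one operation is applied across the whole
    // data set; the runtime splits the range among worker threads.
    public class DataParallelSketch {
        public static void main(String[] args) {
            int[] data = IntStream.rangeClosed(1, 1_000).toArray();
            int[] squares = new int[data.length];

            IntStream.range(0, data.length)
                     .parallel()                           // partition the range
                     .forEach(i -> squares[i] = data[i] * data[i]);

            System.out.println("last square = " + squares[squares.length - 1]);
        }
    }

Each index is written by exactly one task, so no locking is needed.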
  • 25. Designing Parallel Programs: Automatic Parallelization
    • Automatic:
      – The compiler analyzes the code and identifies opportunities for parallelism.
      – The analysis includes attempting to compute whether or not the parallelism actually improves performance.
      – Loops are the most frequent target for automatic parallelization.
  • 26. Designing Parallel Programs: Manual Parallelization
    • Understand the problem.
      – A parallelizable problem: calculate the potential energy for each of several thousand independent conformations of a molecule; when done, find the minimum-energy conformation.
      – A non-parallelizable problem: the Fibonacci series, where all calculations are dependent on earlier ones. A sketch contrasting the two cases follows this slide.
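A hypothetical Java sketch of the distinction: the first loop's iterations are mutually independent and could be distributed across processors, while each Fibonacci step needs the two results before it and must run serially.

    // Contrast between independent and dependent loop iterations.
    public class DependenceSketch {
        public static void main(String[] args) {
            // Parallelizable: each iteration reads only its own input,
            // so the iterations could run in any order or in parallel.
            double[] energy = new double[8];
            for (int i = 0; i < energy.length; i++) {
                energy[i] = Math.sin(i) * Math.sin(i);  // stand-in for an
                                                        // independent calculation
            }

            // Non-parallelizable: every step needs the two previous
            // results (a loop-carried dependence), so the iterations
            // must execute one after another.
            long[] fib = new long[20];
            fib[0] = 0;
            fib[1] = 1;
            for (int i = 2; i < fib.length; i++) {
                fib[i] = fib[i - 1] + fib[i - 2];
            }
            System.out.println("fib[19] = " + fib[19]); // prints 4181
        }
    }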
  • 27. Designing Parallel Programs: Domain Decomposition
    • Each task handles a portion of the data set.
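A minimal, hypothetical Java sketch of domain decomposition: the array is split into contiguous chunks, each worker thread sums only its own chunk, and the partial results are combined at the end.

    // Domain decomposition: the data set is split into contiguous
    // chunks and each thread handles one chunk.
    public class DomainDecompositionSketch {
        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1_000_000];
            java.util.Arrays.fill(data, 1);

            int numTasks = 4;
            long[] partial = new long[numTasks];       // one result slot per task
            Thread[] workers = new Thread[numTasks];
            int chunk = data.length / numTasks;

            for (int t = 0; t < numTasks; t++) {
                final int id = t;
                final int from = id * chunk;
                final int to = (id == numTasks - 1) ? data.length : from + chunk;
                workers[t] = new Thread(() -> {
                    long sum = 0;
                    for (int i = from; i < to; i++) {
                        sum += data[i];                // works only on its own chunk
                    }
                    partial[id] = sum;                 // no sharing between tasks
                });
                workers[t].start();
            }

            long total = 0;
            for (int t = 0; t < numTasks; t++) {
                workers[t].join();                     // wait for each worker
                total += partial[t];
            }
            System.out.println("total = " + total);    // prints 1000000
        }
    }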
  • 28. Designing Parallel Programs: Functional Decomposition
    • Each task performs a function of the overall work.
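By contrast with the previous sketch, here is a hypothetical functional decomposition in Java: the two tasks see the same data but perform different functions, one finding the minimum and the other the maximum.

    // Functional decomposition: the tasks differ by the function they
    // perform, not by the slice of data they receive.
    public class FunctionalDecompositionSketch {
        public static void main(String[] args) throws InterruptedException {
            int[] data = {7, 3, 9, 1, 8, 2};
            int[] result = new int[2];                 // [0] = min, [1] = max

            Thread minTask = new Thread(() -> {
                int min = Integer.MAX_VALUE;
                for (int v : data) min = Math.min(min, v);
                result[0] = min;                       // this task only finds min
            });
            Thread maxTask = new Thread(() -> {
                int max = Integer.MIN_VALUE;
                for (int v : data) max = Math.max(max, v);
                result[1] = max;                       // this task only finds max
            });

            minTask.start();
            maxTask.start();
            minTask.join();
            maxTask.join();
            System.out.println("min = " + result[0] + ", max = " + result[1]);
        }
    }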
  • 29. Conclusion
    • Parallel computing is fast.
    • There are many different approaches and models of parallel computing.
    • Parallel computing is the future of computing.
  • 30. References
    • A Library of Parallel Algorithms, www-2.cs.cmu.edu/~scandal/nesl/algorithms.html
    • Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel
    • Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
    • Parallel Programming in C with MPI and OpenMP, Michael J. Quinn, McGraw-Hill Higher Education, 2003
    • The New Turing Omnibus, A. K. Dewdney, Henry Holt and Company, 1993
  • 31. Case Study: Developing Parallel Applications on the Web Using Java Mobile Agents and Java Threads
  • 32. My References:
    • Parallel Computing Using Java Mobile Agents, by Panayiotou Christoforos, George Samaras, Evaggelia Pitoura, Paraskevas Evripidou
    • An Environment for Parallel Computing on Internet Using Java, by P. C. Saxena, S. Singh, K. S. Kahlon
