Parallel Programming Model



  1. By S. Adlin Jeena and D. Jagadeeswari
  2. 2. Introduction collection of program abstractions. designed for multiprocessors, multicomputer orvector/SIMD computers Five models: Shared-Variable Model Message-Passing Model Data-Parallel Model Object-oriented Model Functional and Logic Models
  3. 3. Shared-Variable Model To limit the scope and rights, the process addressspace may be shared or restricted.Mechanisms for IPC:1. IPC using shared variable:2. IPC using message passing:Shared Variablesin a commonmemoryProcess AProcess BProcess CProcess D Process E
  4. Some issues of the Shared-Variable Model
     • Shared-variable communication: a critical section (CS) is a code segment accessing shared variables. Its requirements are mutual exclusion, no deadlock in waiting, non-preemption, and eventual entry.
     • Protected access: based on the CS, the operating modes are multiprogramming, multiprocessing (two modes: MIMD and MPMD), multitasking, and multithreading.
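A hedged sketch of protecting the critical section from the previous example with a lock, which gives mutual exclusion; the lock and worker names are illustrative.

```python
import threading

shared_counter = 0
lock = threading.Lock()          # guards the critical section

def worker(increments):
    global shared_counter
    for _ in range(increments):
        with lock:               # mutual exclusion: one thread in the CS at a time
            shared_counter += 1  # critical section: code accessing the shared variable

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counter)            # now reliably 400000
```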
  5. Some issues of the Shared-Variable Model (continued)
     • Partitioning and replication: program partitioning is a technique for decomposing a large program and data set into many small pieces for parallel execution by multiple processors. Program replication refers to duplicating the same program code for parallel execution on multiple processors over different data sets.
     • Scheduling and synchronization: scheduling of the divided program modules onto parallel processors; the two types are static scheduling and dynamic scheduling.
     • Cache coherence and protection: a memory is coherent if the value returned by a read instruction is always the value written by the latest write instruction to the same memory location.
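A sketch of partitioning and replication, under the assumption of Python's multiprocessing pool: the same function is replicated on several worker processes, each operating on a different partition of the data set, with the chunks assigned up front (static scheduling). The function and chunking scheme are illustrative.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # the same program code, replicated on every worker,
    # runs over its own partition of the data set
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # program partitioning: split the data set into small pieces
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        # static scheduling: one chunk is assigned to each worker up front
        partial = pool.map(partial_sum, chunks)
    print(sum(partial))
```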
  6. Message-Passing Model
     • Synchronous message passing: the sender process and the receiver process must be synchronized in time and space.
     • Asynchronous message passing: message sending and receiving need not be synchronized in time or space, so non-blocking operation can be achieved.
     • Distributing the computations: communication is handled at the subprogram level rather than at the instruction or fine-grain process level as in a tightly coupled multiprocessor.
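A sketch of the message-passing model, assuming Python multiprocessing queues: the blocking put/get pair stands in for synchronous passing, while a non-blocking receive illustrates the asynchronous case. Process and queue names are illustrative.

```python
from multiprocessing import Process, Queue
import queue

def producer(q):
    q.put("hello")                    # send: enqueue a message for the receiver

def consumer(q):
    msg = q.get()                     # blocking receive: waits until a message arrives
    print("received:", msg)

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()

    # asynchronous, non-blocking receive: returns immediately if no message is ready
    try:
        q.get_nowait()
    except queue.Empty:
        print("no message yet, continue computing")
```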
  7. Data-Parallel Model
     • Easier to write and to debug because parallelism is explicitly handled by hardware synchronization and flow control.
     • Requires the use of pre-distributed data sets; synchronization is done at compile time rather than at run time.
     • Issues handled: data parallelism, array language extensions, and compiler support.
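A sketch of the data-parallel style, assuming a small pre-distributed array and Python's multiprocessing pool as a stand-in for an array-language extension: one operation is applied to every element of the data set in parallel.

```python
from multiprocessing import Pool

def scale(x):
    return 2 * x                  # one operation, applied to every array element

if __name__ == "__main__":
    a = list(range(16))           # pre-distributed data set (here just a list)
    with Pool(processes=4) as pool:
        b = pool.map(scale, a)    # data parallelism: elements processed in parallel
    print(b)
```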
  8. Object-Oriented Model
     • Concurrent OOP responds to three application demands: increased use of interacting processes by individual users, workstation networks becoming a cost-effective mechanism, and multiprocessor technology advancing in several variants to the point of providing supercomputing power.
     • The actor model is presented as one framework for COOP. Actors are self-contained, interactive, independent components of a computing system. The basic primitives are create, send-to, and become.
     • Parallelism in COOP follows three patterns: (1) pipeline concurrency, (2) divide-and-conquer concurrency, and (3) cooperative problem solving.
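A minimal actor sketch, assuming Python threads and queues; the create, send, and become operations mirror the three basic primitives named on the slide, but the class and function names are illustrative only.

```python
import threading, queue, time

class Actor:
    """A self-contained component with a mailbox and a replaceable behaviour."""
    def __init__(self, behaviour):              # 'create': spawn a new actor
        self.mailbox = queue.Queue()
        self.behaviour = behaviour
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):                         # 'send to': deliver a message
        self.mailbox.put(msg)

    def become(self, behaviour):                 # 'become': replace the behaviour
        self.behaviour = behaviour

    def _run(self):
        while True:
            self.behaviour(self, self.mailbox.get())

def greeter(actor, msg):
    print("hello,", msg)
    actor.become(lambda a, m: print("already greeted:", m))

a = Actor(greeter)
a.send("world")
a.send("again")
time.sleep(0.1)                                  # let the actor drain its mailbox
```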
  9. Functional and Logic Models
     • Two language-oriented programming models:
     • Functional programming model: emphasizes the functionality of a program; there are no concepts of storage, assignment, or branching. All single-assignment and dataflow languages are functional in nature. Examples: Lisp, SISAL, and Strand 88.
     • Logic programming model: based on logic; logic programming is suitable for dealing with large databases. Examples: Concurrent Prolog and Parlog.
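A small sketch of the functional, single-assignment style in Python: each name is bound exactly once and the computation is expressed as function application, with no assignment to storage or explicit branching; purely illustrative.

```python
from functools import reduce

# single-assignment, dataflow-like style: every name is bound exactly once
numbers = range(1, 11)
squares = map(lambda x: x * x, numbers)          # emphasise functionality, not storage
total = reduce(lambda acc, x: acc + x, squares, 0)

print(total)   # 385; the independent map steps could in principle run in parallel
```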
