PRINCIPLES OF PROGRAMMING LANGUAGES
MODULE 5: PARALLEL PROGRAMMING
Presented by: Sreerag Gopinath P.C., Semester VIII, Computer Science & Engineering, SJCET, Palai.
CONTENTS
11.2 PARALLEL PROGRAMMING
11.2.1 CONCURRENT EXECUTION
11.2.2 GUARDED COMMANDS
11.2.3 ADA OVERVIEW
11.2.4 TASKS
11.2.5 SYNCHRONIZATION OF TASKS
Reference: T. W. Pratt and M. V. Zelkowitz, Programming Languages – Design and Implementation, Fourth Edition.
MOTIVATION FOR PARALLEL PROGRAMMING
Moore's law: "The capacity (number of transistors, and hence computational power) of a chip for a given cost doubles every 18 months."
Moore's law in action:
Courtesy: http://www.engr.udayton.edu/faculty/jloomis/ece446/notes/intro/moore.html
PARALLEL PROGRAMMING – AN INTRODUCTION
Parallel / concurrent programs – no single execution sequence, as in sequential programs; several subprograms execute concurrently.
Computer systems with increased capability – multiprocessor systems, distributed / parallel computer systems.
Advantages:
1) May make programs simpler to design.
2) Programs can run faster than a matching sequential program.
OUR CONCERN
Our concern is concurrent execution within a single program.
Major stumbling block – the lack of programming-language constructs for building such systems:
   C – relies on the Unix fork() system call (an OS facility, not a language construct).
   Ada – tasks and concurrent execution are built into the language.
4 STEPS IN CREATING A PARALLEL PROGRAM
1. Decomposition of the computation into tasks
2. Assignment of tasks to processes
3. Orchestration of data access, communication, and synchronization
4. Mapping of processes to processors
PRINCIPLES OF PARALLEL PROGRAMMING LANGUAGES
1. Variable definitions: mutable – values may change during program execution; definitional – assigned a value only once.
2. Parallel composition: parallel statements, such as the and statement.
3. Program structure: transformational – transform input data into output values; reactive – the program reacts to external stimuli (events).
4. Communication: shared memory – tasks access common data objects; messages – each task has its own copy of data objects and passes values.
5. Synchronization: programs must be able to order the execution of their various threads of control – determinism / nondeterminism.
CONCURRENT EXECUTION
Mechanism: a construct that allows parallel execution – the and statement:
   statement1 and statement2 and … and statementn
Concurrent tasks in an OS specification:
   call ReadProcess and call WriteProcess and call ExecuteUserProgram;
Correct data handling:
   x := 1;
   x := 2 and y := x + x;   (*)
   print(y);
In the line marked (*), the two statements execute concurrently, so y may be 2, 3, or 4, depending on whether the reads of x occur before, around, or after the assignment x := 2.
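No common language implements the and statement directly; the following is a minimal Ada sketch of the race in the example above (the procedure name Race_Demo is illustrative, and the inner block stands in for the and statement):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Race_Demo is
      X : Integer := 1;   -- "x := 1;" happens before the parallel part
      Y : Integer := 0;
   begin
      declare
         task Assign_X;               -- starts at the block's begin
         task body Assign_X is
         begin
            X := 2;                   -- one arm of "x := 2 and y := x + x"
         end Assign_X;
      begin
         Y := X + X;                  -- the other arm; deliberately
      end;                            -- unsynchronized, so this races
      --  The block above exits only after Assign_X terminates.
      Put_Line ("y =" & Integer'Image (Y));  -- may print 2, 3, or 4
   end Race_Demo;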
IMPLEMENTATION OF THE 'and' CONSTRUCT
1. Execution in sequence (valid because the and statement makes no assumption about order of execution):
   while MoreToDo do
      MoreToDo := false;
      call ReadProcess;
      call WriteProcess;
      call ExecuteUserProgram
   end
2. Parallel execution using primitives of the underlying OS:
   fork ReadProcess;
   fork WriteProcess;
   fork ExecuteUserProgram;
   wait   /* for all 3 to terminate */
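Ada has no fork or wait primitives, but its block rules give the same effect: tasks declared in a block start at the block's begin, and the block cannot exit until all of them terminate. A sketch under that reading (task names follow the slide; the bodies are placeholders):

   procedure OS_Cycle is
   begin
      loop
         declare
            task Read_Process;        -- "fork ReadProcess"
            task Write_Process;       -- "fork WriteProcess"
            task body Read_Process is
            begin
               null;                  -- read a batch of input here
            end Read_Process;
            task body Write_Process is
            begin
               null;                  -- write a batch of output here
            end Write_Process;
         begin
            null;                     -- ExecuteUserProgram could run here
         end;                         -- implicit "wait" for both tasks
      end loop;                       -- runs forever, like an OS cycle
   end OS_Cycle;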
GUARDED COMMANDS
Dijkstra – true nondeterminacy, the guarded command (1975).
Nondeterministic execution – alternative execution paths are possible.
Guards – if B is a guard (condition) and S is a command (statement), the guarded command is:
   B → S
Guarded if statement – if the Bi are conditions and the Si are statements:
   if B1 → S1 || B2 → S2 || … || Bn → Sn fi
Guarded repetition statement:
   do B1 → S1 || B2 → S2 || … || Bn → Sn od
There is no true implementation of guarded commands, as Dijkstra defined them, in common programming languages.
ADA OVERVIEW
A general-purpose language, although originally designed for military applications.
Block structure and data-type mechanism similar to Pascal.
Extensions for real-time and distributed applications.
A more secure form of data encapsulation than Pascal.
Recent versions – the ability to develop objects and provide for method inheritance.
Major features – tasks and concurrent execution, real-time task control, exception handling, abstract data types.
ADA – A BRIEF LANGUAGE OVERVIEW
Supports construction of large programs by teams of programmers.
A program is designed as a collection of packages, each representing an abstract data type or a set of data objects.
A program consists of a single procedure that serves as the main program.
Broad range of built-in data types – integers, reals, enumerations, Booleans, arrays, records, character strings, pointers.
Sequence control – within subprograms, expressions and statement-level control structures similar to Pascal; between tasks, concurrently executing tasks controlled by a time clock and other scheduling mechanisms.
Exception handling – extensive set of features.
Data-control structures – static block-structure organization as in Pascal; nonlocal references to type names, subprogram names, and identifiers in packages.
Storage management – a central stack for each separate task, plus a heap storage area for programmer-constructed data objects.
TASKS
Task: each subprogram that can execute concurrently with other subprograms is called a task (or sometimes a process).
A task is dependent on the task that initiated it.
[Diagram: the fork-join model]
TASK MANAGEMENT
A task definition defines how the task synchronizes and communicates with others.
Task definition in Ada:
   task Name is
      -- Declarations for synchronization and communication
   end;
   task body Name is
      -- Usual local declarations, as found in any subprogram
   begin
      -- Sequence of statements
   end;
Initiating execution of a task (PL/I):
   call B (parameters) task;
Ada – no explicit call statement is needed; concurrent execution begins on entry into the larger program structure.
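A minimal compilable sketch of the definition/body split and of the absence of any explicit call (the task name Logger and the messages are illustrative):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Task_Demo is
      task Logger;                    -- task definition (no entries here)
      task body Logger is             -- task body
      begin
         Put_Line ("Logger running concurrently");
      end Logger;
   begin
      --  No explicit call: Logger was activated automatically when
      --  control reached this begin, and Task_Demo cannot return
      --  until Logger terminates.
      Put_Line ("Main program running");
   end Task_Demo;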
TASK MANAGEMENT (Contd.)
Multiple simultaneous activations of the same task:
PL/I – repeated execution of call.
Ada:
   task type Terminal is
      -- Rest of definition in the same form as above
   end;
   …
   A : Terminal;
   B, C : Terminal;
Alternative:
   type TaskPtr is access Terminal;        -- defines a pointer type
   NewTerm : TaskPtr := new Terminal;      -- declares a pointer variable and creates a task
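A compilable sketch of multiple activations of one task type, both by declaration and through an access type (names follow the slide, with Ada-style underscores; the body is a placeholder):

   procedure Terminals is
      task type Terminal;               -- a task type: many activations
      type Task_Ptr is access Terminal;

      task body Terminal is
      begin
         null;   -- each Terminal would service one user session here
      end Terminal;

      A        : Terminal;              -- three statically declared tasks
      B, C     : Terminal;
      New_Term : Task_Ptr := new Terminal;  -- a fourth, created dynamically
   begin
      null;   -- all four Terminal tasks run concurrently with this body
   end Terminals;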
SYNCHRONIZATION OF TASKS
Synchronization is needed for tasks running asynchronously to coordinate their activities.
Consider:
   Task A – reads in a batch of data from a device.
   Task B – processes each batch of data.
Synchronization must ensure that:
   Task B does not start processing data before Task A has finished reading it in.
   Task A does not overwrite data that Task B is still processing.
SYNCHRONIZATION MECHANISMS: interrupts, semaphores, messages, guarded commands, rendezvous.
INTERRUPTS
A common mechanism in computer hardware.
Task A sends an event to task B: interrupt → control transfer → resume execution.
Disadvantages as a synchronization mechanism in high-level languages:
1. Confusing program structure – interrupt-handling code is separate from the task body.
2. Waiting for an interrupt – a busy-waiting loop does nothing else.
3. Data shared between the task body and the interrupt handler has to be protected.
High-level languages therefore usually provide other synchronization mechanisms.
SEMAPHORES
A semaphore consists of two parts:
1. An integer counter – counts the number of signals sent but not yet received.
2. A queue of tasks – tasks waiting for signals to be sent.
Two primitive operations on a semaphore object P:
1. signal(P) – tests the counter in P. If it is zero and the task queue is nonempty, remove the first task from the queue and resume its execution. Otherwise (counter nonzero, or queue empty), increment the counter by one (indicating a signal sent but not yet received).
2. wait(P) – tests the counter in P. If it is nonzero, decrement it by one (indicating a signal received). If it is zero, insert the task at the end of the task queue and suspend the task.
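Ada itself has no semaphore primitive; since Ada 95 the counter-plus-queue behavior can be sketched as a protected type (an assumption of this sketch: the discriminant Initial sets the starting count, and the task queue is the implicit queue on the Wait entry):

   package Semaphores is
      protected type Semaphore (Initial : Natural := 0) is
         entry Wait;                  -- wait(P)
         procedure Signal;            -- signal(P)
      private
         Count : Natural := Initial;  -- signals sent but not yet received
      end Semaphore;
   end Semaphores;

   package body Semaphores is
      protected body Semaphore is
         entry Wait when Count > 0 is   -- callers queue here while Count = 0
         begin
            Count := Count - 1;         -- a signal has been received
         end Wait;
         procedure Signal is
         begin
            Count := Count + 1;         -- releases a queued caller, if any
         end Signal;
      end Semaphore;
   end Semaphores;

The two binary semaphores on the next slide would then be declared as StartA, StartB : Semaphores.Semaphore;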
SEMAPHORES (Contd.)
Synchronization problem – two binary semaphores:
   StartB – used by Task A to signal that input is complete.
   StartA – used by Task B to signal that processing is complete.
task A;
begin
   -- Input first data set
   loop
      signal(StartB);   -- Invoke task B
      -- Input next data set
      wait(StartA);     -- Wait until Task B finishes with the data
   endloop;
end A;

task B;
begin
   loop
      wait(StartB);     -- Wait for Task A to read the data
      -- Process data
      signal(StartA);   -- Tell Task A to continue
   endloop;
end B;
SEMAPHORES (Contd.)
Disadvantages for use in high-level programming of tasks:
1. A task can wait for only one semaphore at a time.
2. Deadlocks may occur if a task fails to signal at the appropriate point.
3. Programs using semaphores become increasingly difficult to understand, debug, and verify.
4. The semantics of signal and wait imply shared memory, which is not necessarily available in multiprocessor systems and computer networks.
In essence, the semaphore is a relatively low-level synchronization construct that is adequate primarily in simple situations.
MESSAGES
A message is a transfer of information from one task to another. The task remains free to continue executing when it is not synchronizing.
A message is placed into the pipe (or message queue) by a send command and accepted by a waiting task using a receive command.
THE PRODUCER-CONSUMER PROBLEM:
task Producer;
begin
   loop   -- while more to read
      -- Read new data
      send(Consumer, data);
   endloop;
end Producer;

task Consumer;
begin
   loop   -- while more to process
      receive(Producer, data);
      -- Process new data
   endloop;
end Consumer;
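Ada has no send and receive statements; a one-slot mailbox built from a protected object gives the same producer-consumer behavior (all names and the three-item workload are illustrative):

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Producer_Consumer is
      protected Mailbox is
         entry Send (Item : in Integer);      -- blocks while the slot is full
         entry Receive (Item : out Integer);  -- blocks while it is empty
      private
         Slot : Integer;
         Full : Boolean := False;
      end Mailbox;

      protected body Mailbox is
         entry Send (Item : in Integer) when not Full is
         begin
            Slot := Item;
            Full := True;
         end Send;
         entry Receive (Item : out Integer) when Full is
         begin
            Item := Slot;
            Full := False;
         end Receive;
      end Mailbox;

      task Producer;
      task Consumer;

      task body Producer is
      begin
         for I in 1 .. 3 loop
            Mailbox.Send (I);                 -- "send(Consumer, data)"
         end loop;
      end Producer;

      task body Consumer is
         D : Integer;
      begin
         for I in 1 .. 3 loop
            Mailbox.Receive (D);              -- "receive(Producer, data)"
            Put_Line ("Consumed" & Integer'Image (D));
         end loop;
      end Consumer;
   begin
      null;   -- the main program waits here for both tasks to finish
   end Producer_Consumer;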
GUARDED COMMANDS
Guarded commands add nondeterminacy to programming and are a good model for task synchronization.
The guarded if command in Ada is termed a select statement, with the general form:
   select
      when condition1 => statement1
   or
      when condition2 => statement2
   …
   or
      when conditionN => statementN
   else
      statementN+1   -- optional else clause
   end select;
(In actual Ada, each guarded alternative must begin with an accept, delay, or terminate statement.)
RENDEZVOUS
When two tasks synchronize their actions for a brief period, that synchronization is termed a rendezvous.
Similar to a message, but a synchronization action is required with each message.
[Diagram: two timelines for tasks A and B meeting at a rendezvous point]
RENDEZVOUS (Contd.)
A rendezvous point in B is called an entry (in this example, DataReady).
When Task B is ready to begin processing a new batch of data, it executes an accept statement:
   accept DataReady do
      -- Statements to copy new data from A into the local data area of B
   end;
When Task A has completed input of a batch of data, it executes the entry call DataReady.
The rendezvous:
   If Task B reaches its accept statement first, it waits until Task A makes the entry call DataReady.
   If Task A reaches the entry call DataReady first, it waits until Task B reaches its accept statement.
   A then continues to wait while B executes the statements contained within the do … end of the accept statement, after which both A and B continue their separate executions.
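A minimal compilable version of this rendezvous, with the main program playing the role of Task A and the batch of data reduced to a single integer:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Rendezvous_Demo is
      task B is
         entry DataReady (Item : in Integer);   -- the rendezvous point
      end B;

      task body B is
         Local : Integer;
      begin
         accept DataReady (Item : in Integer) do
            Local := Item;       -- copy A's data while A waits
         end DataReady;
         Put_Line ("B processing" & Integer'Image (Local));
      end B;
   begin
      B.DataReady (42);          -- entry call; blocks until B accepts
      Put_Line ("A continues");  -- both now run independently again
   end Rendezvous_Demo;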
RENDEZVOUS (Contd.)
Conditional rendezvous on the status of each device:
   select
      when Device1Status = ON =>
         accept Ready1 do … end;
   or
      when Device2Status = ON =>
         accept Ready2 do … end;
   or
      when Device3Status = connected =>
         accept Ready3 do … end;
   else
      …   -- No device is ready; do something else
   end select;
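A compilable sketch of such a guarded select inside a server task. The Boolean flags stand in for real device status, and the bounded loop exists only so the sketch terminates:

   procedure Device_Server is
      Device1_On : Boolean := True;     -- stand-ins for device status
      Device2_On : Boolean := False;

      task Server is
         entry Ready1;
         entry Ready2;
      end Server;

      task body Server is
      begin
         for Cycle in 1 .. 10 loop
            select
               when Device1_On =>
                  accept Ready1;        -- service device 1
            or
               when Device2_On =>
                  accept Ready2;        -- service device 2
            else
               null;                    -- no device ready; do something else
            end select;
         end loop;
      end Server;
   begin
      null;   -- a real client would call Server.Ready1 or Server.Ready2
   end Device_Server;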
TASKS & REAL-TIME PROCESSING
A program that must interact with I/O devices or other tasks within some fixed time period is said to be operating in real time.
In real-time computer systems, hardware failure of an I/O device can lead to a task's being abruptly terminated. If other tasks wait on such a failed task, the entire system of tasks may deadlock and cause the system to crash.
Real-time processing requires that the language hold some explicit notion of time. Ada provides a package called Calendar that includes a type Time and a function Clock.
A task waiting for a rendezvous may watch the clock, as in:
   select
      DataReady;
   or
      delay 0.5;   -- Wait at most 0.5 seconds
   end select;
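A compilable sketch of that timed entry call. The Device task and its two-second delay are illustrative; the terminate alternative lets the program shut down cleanly when the caller has given up:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Timed_Call is
      task Device is
         entry DataReady;
      end Device;

      task body Device is
      begin
         delay 2.0;                 -- simulate a slow device
         select
            accept DataReady;
         or
            terminate;              -- exit if no caller remains
         end select;
      end Device;
   begin
      select
         Device.DataReady;          -- the timed entry call
         Put_Line ("Rendezvous completed");
      or
         delay 0.5;                 -- wait at most 0.5 seconds
         Put_Line ("Timed out; doing something else");
      end select;
   end Timed_Call;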
TASKS & SHARED DATA
Tasks sharing data present special problems due to the concurrent execution involved.
There are two issues:
1. Storage management – single stack, multiple stacks, single heap.
2. Mutual exclusion – critical regions, monitors, message passing.
STORAGE MANAGEMENT IN TASKS
[Diagram: three memory layouts – (1) a single stack plus heap, (2) multiple per-task stacks (stack1, stack2, stack3) plus a shared heap, (3) a single heap holding individually allocated activation records]
STORAGE MANAGEMENT IN TASKS (Contd.)
Single stack –
   * C, Pascal.
   * If the stack and heap meet, there is no more space and the program terminates.
   * Efficient use of space.
Multiple stacks –
   * Used when there is enough memory.
   * If any stack overlaps the next memory segment, the program terminates.
   * An effective solution with today's virtual memory systems.
Single heap –
   * Systems with limited memory.
   * Used by early PL/I compilers.
   * High overhead.
   * Memory fragmentation can be a problem with variable-sized activation records.
CACTUS STACK MODEL OF MULTIPLE TASKS
[Diagram: a cactus stack – the stacks of Tasks 1 through 4, each branching from the stack of the task that created it]
MUTUAL EXCLUSION
If Task A and Task B each have access to a single data object X, then A and B must synchronize their access to X, so that Task A is not in the process of assigning a new value to X while Task B is simultaneously referencing that value or assigning a different value.
To ensure that two tasks do not simultaneously attempt to access and update a shared data object, one task must be able to gain exclusive access to the data object while manipulating it.
CRITICAL REGIONS
A critical region is a sequence of program statements within a task during which the task is operating on some data object shared with other tasks.
If a critical region in Task A manipulates data object X, then mutual exclusion requires that no other task be simultaneously executing a critical region that also manipulates X.
Before entering its critical region, Task A must wait until any other task has completed a critical region that manipulates X; and as Task A begins its critical region, all other tasks must be locked out, so that they cannot enter their critical regions (for variable X) until A has completed its own.
Critical regions may be implemented by associating a semaphore with each shared data object.
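A sketch of critical regions guarded by a binary-semaphore-style lock, written as an Ada protected object (all names are illustrative; two tasks contend for the shared X):

   procedure Critical_Demo is
      X : Integer := 0;                 -- the shared data object

      protected Lock is                 -- the semaphore associated with X
         entry Acquire;
         procedure Release;
      private
         Free : Boolean := True;
      end Lock;

      protected body Lock is
         entry Acquire when Free is
         begin
            Free := False;
         end Acquire;
         procedure Release is
         begin
            Free := True;
         end Release;
      end Lock;

      task type Worker;
      task body Worker is
      begin
         for I in 1 .. 1000 loop
            Lock.Acquire;              -- enter the critical region
            X := X + 1;                -- the critical region itself
            Lock.Release;              -- leave the critical region
         end loop;
      end Worker;

      A, B : Worker;                   -- two tasks contending for X
   begin
      null;   -- main waits for A and B; X always ends at 2000
   end Critical_Demo;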
MONITORS A monitor is a  shared data object  together with the  set of operations  that may manipulate it. Similar to a data object defined by an  abstract data type . To enforce mutual exclusion, it is only necessary to require that  at most one of the operations  defined for the data object may be executing at any given time. Mutual exclusion and encapsulation constraints require a monitor to be  represented as a task.
MONITORS (Contd.)
task TableManager is
   entry EnterNewItem(…);
   entry FindItem(…);
end;

task body TableManager is
   BigTable : array (…) of …;

   procedure Enter(…) is
      -- Statements to enter an item in BigTable
   end Enter;

   function Find(…) return … is
      -- Statements to find an item in BigTable
   end Find;
begin
   -- Statements to initialise BigTable
   loop   -- Loop forever to process entry requests
      select
         accept EnterNewItem(…) do
            -- Call Enter to put the received item in BigTable
         end;
      or
         accept FindItem(…) do
            -- Call Find to look up the received item in BigTable
         end;
      end select;
   end loop;
end TableManager;
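Since Ada 95 a monitor need not be coded as a task: a protected object provides the mutual exclusion directly (at most one procedure body executes at a time; functions may share read access). A sketch of TableManager in that style, where the fixed table size and Integer items are illustrative simplifications:

   package Tables is
      type Item_Array is array (1 .. 100) of Integer;

      protected Table_Manager is
         procedure Enter_New_Item (Item : in Integer);
         function Find_Item (Item : Integer) return Boolean;
      private
         Big_Table : Item_Array := (others => 0);
         Count     : Natural := 0;
      end Table_Manager;
   end Tables;

   package body Tables is
      protected body Table_Manager is
         procedure Enter_New_Item (Item : in Integer) is
         begin
            if Count < Big_Table'Last then
               Count := Count + 1;
               Big_Table (Count) := Item;   -- exclusive access is automatic
            end if;
         end Enter_New_Item;

         function Find_Item (Item : Integer) return Boolean is
         begin
            for I in 1 .. Count loop
               if Big_Table (I) = Item then
                  return True;
               end if;
            end loop;
            return False;
         end Find_Item;
      end Table_Manager;
   end Tables;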
MESSAGE PASSING
The idea is to prohibit shared data objects and to provide only the sharing of data values, by passing the values as messages.
Mutual exclusion comes naturally because each data object is owned by exactly one task; no other task may access the data object directly.
[Diagram: Task A owns the data object and sends a copy of its values to Task B, which processes its local copy and sends back a processed copy]
THANK YOU !!!
