Parallel Programming in the Parallel Virtual Machine
Submitted by
N. Jayanthi
M.Sc. (CS)
 Group membership can change at any time during computation.
 Groups are useful in cases where a collective operation is
performed on only a subset of the tasks.
 A broadcast operation, which sends a message to all tasks
in a system, can use a named group to send the message to
only the members of that group.
 A task may join or leave a group at any time without
informing other tasks in the group.
 A task may also belong to multiple groups.
 PVM provides several functions for tasks to join and leave
a group, and to retrieve information about groups.
i = pvm_joingroup(group_name)
 The calling task joins the group named group_name.
 The group is created when pvm_joingroup is called for the first time.
 The return value i is the task's instance number in the group; the
first caller gets 0 as its instance number.
 Instance numbers start at 0 and are incremented by 1 every time a
new task joins the group.
 The set of instance numbers may have gaps as a result of
having one or more tasks leave the group.
 A new member will get the lowest available instance
number.
 Maintaining a set of instance numbers without gaps is the
programmer's responsibility.
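The join-and-leave lifecycle above can be sketched in C as follows. This is a minimal sketch, not a complete program: it assumes a working PVM installation (pvm3.h, linked with -lpvm3 -lgpvm3) and a running pvmd daemon, and the group name "workers" is illustrative.

```c
/* Sketch: joining a PVM group and using the returned instance number.
 * Assumes PVM is installed and a pvmd daemon is running. */
#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int mytid = pvm_mytid();              /* enroll this task in PVM */
    int inum  = pvm_joingroup("workers"); /* creates "workers" on first call */

    if (inum < 0) {
        pvm_perror("pvm_joingroup");      /* negative result indicates an error */
        pvm_exit();
        return 1;
    }
    printf("task t%x has instance number %d\n", mytid, inum);

    /* ... do the group's work here ... */

    pvm_lvgroup("workers");               /* leave; our inum becomes reusable */
    pvm_exit();
    return 0;
}
```

Note that if this task later rejoins "workers", it may receive a different instance number, since the freed number can be handed to another joiner in the meantime.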
info = pvm_lvgroup(group_name)
 The calling task leaves the group group_name.
 If an error occurs, info will have a negative value.
 If the task decides to rejoin this group at a later time, it
may get a different instance number, because the
old number may have been assigned to another
task that joined in the meantime.
 There are a number of other functions that can be
called by any task to retrieve information about a group
without having to be a member of it.
 The function pvm_gsize() can be used to retrieve the
size of a group.
 It takes the group name as input and returns the
number of members in the group.
 The function pvm_gettid() retrieves
the TID of a task given its instance number and its
group name.
 The function pvm_getinst() retrieves the instance
number of a task given its TID and the name of a group.
Synchronization constructs can be used to force
a certain order of execution among the
activities in a parallel program.
Synchronization in PVM can be achieved using
several constructs, most notably blocking
receive and barrier operations.
Example:
 Members of a group that finish their work
early may need to wait at a synchronization
point until the tasks that take longer
reach the same point.
 Message passing can be used effectively to force
precedence constraints among tasks.
 The blocking receive operation (pvm_recv())
forces the receiving task to wait until a matching
message is received.
 The sender of this matching message can delay
sending it for as long as it wants the receiver to wait.
 Two tasks: T0 and T1.
 The function g() in T1 must not be executed until T0 has completed
the execution of the function f().
 This order is enforced using a send operation after calling f()
in T0 and a matching blocking receive operation before
calling g() in T1.
T0 (TID=100): f(); pvm_send(200, tag)
T1 (TID=200): pvm_recv(100, tag); g()
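The two sides of this ordering can be sketched in C as below. The TIDs 100 and 200, the tag value, and the bodies of f() and g() are illustrative; in a real PVM program TIDs are obtained at run time (e.g. from pvm_spawn() or pvm_gettid()).

```c
/* Sketch: forcing f() in T0 to complete before g() runs in T1,
 * using a matching pvm_send()/pvm_recv() pair. Assumes PVM is installed. */
#include <pvm3.h>

#define SYNC_TAG 99            /* illustrative message tag */

extern void f(void);           /* work done by T0 */
extern void g(void);           /* work in T1 that must wait for f() */

/* Executed by T0 (here assumed to have TID 100). */
void t0_part(void)
{
    f();                            /* must finish before T1 runs g() */
    pvm_initsend(PvmDataDefault);   /* an empty message is enough     */
    pvm_send(200, SYNC_TAG);        /* signal T1 that f() is done     */
}

/* Executed by T1 (here assumed to have TID 200). */
void t1_part(void)
{
    pvm_recv(100, SYNC_TAG);        /* block until T0's signal arrives */
    g();                            /* now guaranteed to run after f() */
}
```

The message carries no data; its arrival alone encodes the precedence constraint "f() before g()".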
 Parallel tasks can be synchronized through the use
of synchronization points called barriers.
 Members of a group can choose to wait at a barrier
until a specified number of group members check
in at that barrier.
 Function:
info = pvm_barrier(group_name, ntasks)
 Two inputs: the group name and the number of group
members that must check in at the barrier.
Group "slave" (T0, T1, T2): each task calls
info = pvm_barrier("slave", 3)
Tasks that reach the barrier early wait; once all three
members have checked in, they all proceed.
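The barrier pattern above can be sketched as follows. Each of the three tasks runs this same code; the group name "slave" matches the example, and the sketch assumes a PVM installation with a running pvmd.

```c
/* Sketch: three members of group "slave" synchronizing at a barrier.
 * Every member runs this same code. Assumes PVM is installed. */
#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    pvm_joingroup("slave");

    /* ... each task performs its own share of the work ... */

    int info = pvm_barrier("slave", 3); /* block until 3 members check in */
    if (info < 0)
        pvm_perror("pvm_barrier");

    /* all three tasks pass this point together */

    pvm_lvgroup("slave");
    pvm_exit();
    return 0;
}
```

The count argument (3 here) need not equal the group size, but every caller at a given barrier must pass the same count, or the tasks can deadlock waiting for different numbers of arrivals.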
parallel programming in the PVM - advanced system architecture