Module 3, Part 1: Inter-Process Communication


  1. Module 3 – Inter-Process Communication. Hemang Kothari, Assistant Professor, Computer Engineering Department, MEFGI, Rajkot. Email: hemang.kothari@marwadieducation.edu.in Slides: http://www.slideshare.net/hemangkothari (Computer Engineering Department - MEFGI, 11/25/2015)
  2. Motivation • Processes want to talk with other processes on the same computer. • Threads of a single process want to communicate with each other. • Threads of different processes want to communicate with each other. • Processes want to talk with processes on a different computer.
  3. Cooperating Processes • An independent process cannot affect or be affected by the execution of another process. • A cooperating process can affect or be affected by the execution of another process. • Advantages of process cooperation – Information sharing – Computation speed-up – Modularity – Convenience • Dangers of process cooperation – Data corruption, deadlocks, increased complexity – Requires processes to synchronize their processing
  4. What we gain • Data Transfer • Sharing Data • Event Notification • Resource Sharing and Synchronization • Process Control
  5. Formal Definition • Interprocess communication (IPC) includes thread synchronization and data exchange between threads beyond process boundaries. • If threads belong to the same process, they execute in the same address space, i.e. they can access global (static) data or the heap directly, without the help of the operating system. • However, if threads belong to different processes, they cannot access each other's address spaces without the help of the operating system.
  6. Concepts – Shared Memory • In computing, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. • Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. • Using memory for communication inside a single program, for example among its multiple threads, is also referred to as shared memory.
  7. Shared Memory • In hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiple-processor computer system.
  8. Critical Section • In concurrent programming, a critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. • A critical section will usually terminate in bounded time, and a thread, task, or process will have to wait at most a bounded time to enter it (bounded waiting).
  9. (figure)
  10. Solution • To avoid this problem, a lock is created on that block of code, which ensures that no new process can access the block until the first process releases it after its usage. Thus there is no chance of conflict between processes accessing that block simultaneously.
  11. (figure)
  12. Mutually Exclusive • Two events are mutually exclusive if they cannot occur at the same time. An example is tossing a coin once, which can result in either heads or tails, but not both.
  13. Race Condition • Race conditions arise in software when the behavior of separate processes or threads depends on the timing of their operations on some shared state. • Operations upon shared state are critical sections that must be mutually exclusive. Failure to enforce this opens up the possibility of corrupting the shared state.
  14. in is a shared variable containing a pointer to the next free slot; out is a shared variable pointing to the next file to be printed. Figure: Two processes want to access shared memory at the same time.
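The lost-update pattern behind this figure can be replayed deterministically by forcing the bad interleaving by hand. This is a Python sketch; the step functions and the exact preemption point are illustrative, not from the slides:

```python
# Deterministic replay of a lost update: two "processes" each perform a
# non-atomic read-modify-write on a shared counter, and the scheduler
# (simulated here by interleaving the steps manually) preempts A between
# its read and its write.
shared = {"count": 0}

def read_step(local):
    local["tmp"] = shared["count"]      # read the shared value

def write_step(local):
    shared["count"] = local["tmp"] + 1  # write back the incremented value

def lost_update_demo():
    a, b = {}, {}
    read_step(a)    # A reads 0
    read_step(b)    # B reads 0 (A was preempted before writing)
    write_step(b)   # B writes 1
    write_step(a)   # A also writes 1 -- B's update is lost
    return shared["count"]
```

Two increments happened, yet the counter ends at 1: exactly the corruption a critical region must prevent.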
  15. How to avoid races? • Mutual exclusion: only one process at a time can use a shared variable/file. • Critical regions: the parts of the program where shared memory is accessed, which can lead to races. • Solution: ensure that two processes can't be in the critical region at the same time.
  16. We want this kind of system (figure)
  17. Other solutions to avoid races • For parallel processes to cooperate correctly and efficiently using shared data, mutual exclusion alone is not sufficient. A good solution satisfies four conditions: 1. No two processes may be simultaneously inside their critical sections. 2. No assumptions may be made about speeds or the number of CPUs. 3. No process running outside its critical region may block other processes. 4. No process should have to wait forever to enter its critical region.
  18. We want this kind of solution • In this section, we will examine various proposals for achieving mutual exclusion, so that while one process is busy updating shared memory in its critical region, no other process will enter its critical region and cause trouble.
  19. 1st Solution – Disabling Interrupts • Idea: a process disables interrupts, enters the critical region, and enables interrupts when it leaves the critical region. • Problems – The process might never re-enable interrupts, crashing the system – Won't work on multi-core chips, as disabling interrupts only affects one CPU at a time
  20. 2nd Solution – Lock Variable • A software solution: everyone shares a lock. • When the lock is 0, a process sets it to 1 and enters the critical region. • When it exits the critical region, it sets the lock back to 0. – Problem: race condition
  21. Problem in Lock Variable • Unfortunately, this idea contains exactly the same fatal flaw that we saw in the spooler directory. • Suppose that one process reads the lock and sees that it is 0. Before it can set the lock to 1, another process is scheduled, runs, and sets the lock to 1. • When the first process runs again, it will also set the lock to 1, and two processes will be in their critical regions at the same time.
  22. Busy Waiting • A process that wants to enter a critical section first checks to see if entry is allowed, and if it is not, the process waits in a tight loop. • Continuously testing a variable, waiting for some value to appear, is called "busy waiting" ⇒ wastes CPU time.
  23. 3rd Solution – Strict Alternation
      Process 0: while (TRUE) { while (turn != 0) ; /* loop */ critical_region(); turn = 1; noncritical_region(); }
      Process 1: while (TRUE) { while (turn != 1) ; /* loop */ critical_region(); turn = 0; noncritical_region(); }
      Before entering the critical section, each process checks whether it is its turn (turn == process_no); if so, it enters, otherwise it busy-waits for its turn.
  24. Problem with Strict Alternation • Processes may enter their critical sections only in a fixed order of process number, and the solution uses busy waiting. • The third condition is violated: P0 may be blocked by P1 running outside its critical region. Such a situation is called starvation. • In fact, this solution requires that the two processes strictly alternate in entering their critical regions. • The solution is incorrect: the problem of race conditions is replaced by the problem of starvation.
  25. Peterson's Solution • Peterson's solution requires two data items to be shared between the two processes: int turn; boolean interested[2]; • The variable turn indicates whose turn it is to enter its critical section: if turn == i, then process Pi is allowed to execute in its critical section. The interested array indicates whether a process is ready to enter its critical section: if interested[i] is true, Pi is ready to enter its critical section.
  26. 4th Solution – Peterson's Solution (figure)
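A runnable sketch of Peterson's solution, using the two shared items named above (turn and interested[2]). This is Python, so CPython's global interpreter lock supplies the sequentially consistent memory the algorithm assumes; on real hardware you would also need memory barriers. The iteration count N and the sleep(0) yield in the busy-wait loop are illustrative choices to keep the demo fast:

```python
import threading
import time

N = 500                       # iterations per thread (illustrative)
interested = [False, False]   # interested[i]: thread i wants to enter
turn = 0                      # whose turn it is when both are interested
counter = 0                   # shared state updated in the critical region

def enter_region(i):
    global turn
    other = 1 - i
    interested[i] = True      # announce interest
    turn = other              # politely give the other process priority
    while interested[other] and turn == other:
        time.sleep(0)         # busy wait (yield so the demo runs quickly)

def leave_region(i):
    interested[i] = False

def worker(i):
    global counter
    for _ in range(N):
        enter_region(i)
        counter += 1          # critical region: a non-atomic update
        leave_region(i)

def peterson_demo():
    ts = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter
```

Without enter_region/leave_region, some of the 2 * N increments could be lost to the race shown earlier; with Peterson's protocol the final count is exact.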
  27. Achievements of Peterson's Solution 1. Mutual exclusion is preserved. 2. The progress requirement is satisfied. 3. The bounded-waiting requirement is met.
  28. TSL (Test and Set Lock) • TSL reads the lock into a register and stores a non-zero value (e.g. the process number) in the lock. • The instruction is atomic: this is achieved by locking the memory bus.
  29. Using TSL • TSL is atomic: the memory bus is locked until the instruction has finished executing. (figure)
  30. What's wrong with Peterson, TSL etc.? • Both Peterson's solution and the solution using TSL are correct, but both have the defect of requiring busy waiting. • In essence, what these solutions do is this: when a process wants to enter its critical region, it checks to see if the entry is allowed. If it is not, the process just sits in a tight loop waiting until it is.
  31. Another Issue – The Priority Inversion Problem • Consider a computer with two processes: H, with high priority, and L, with low priority. • The scheduling rules are such that H is run whenever it is in the ready state. At a certain moment, with L in its critical region, H becomes ready to run (e.g., an I/O operation completes). H now begins busy waiting, but since L is never scheduled while H is running, L never gets the chance to leave its critical region, so H loops forever. This situation is sometimes referred to as the priority inversion problem.
  32. Solution – IPC Primitives (System Calls) • Interprocess communication primitives that block, instead of wasting CPU time, when they are not allowed to enter their critical regions. • One of the simplest is the pair sleep and wakeup. • Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up. • The wakeup call has one parameter, the process to be awakened. • Alternatively, sleep and wakeup each have one parameter, a memory address used to match up sleeps with wakeups.
  33. The Producer-Consumer Problem (aka Bounded-Buffer Problem)
  34. Race Condition in the Problem • The buffer is empty and the consumer has just read count to see if it is 0. At that instant, the scheduler decides to stop running the consumer temporarily and start running the producer. • The producer inserts an item in the buffer, increments count, and notices that it is now 1. Reasoning that count was just 0, and thus the consumer must be sleeping, the producer calls wakeup to wake the consumer up.
  35. Continued • Unfortunately, the consumer is not yet logically asleep, so the wakeup signal is lost. • When the consumer next runs, it will test the value of count it previously read, find it to be 0, and go to sleep. • Sooner or later the producer will fill up the buffer and also go to sleep. Both will sleep forever.
  36. Solution – Semaphores • One problem with implementing a sleep-and-wakeup policy is the potential for losing wakeups. • Semaphores solve the problem of lost wakeups. In the producer-consumer problem, semaphores are used for two purposes: – mutual exclusion – synchronization.
  37. Semaphore • Definition: a semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes to a common resource in a parallel-programming or multi-user environment. • A semaphore is an integer variable. • A useful way to think of a semaphore is as a record of how many units of a particular resource are available, coupled with operations to safely (i.e., without race conditions) adjust that record as units are acquired or become free, and, if necessary, wait until a unit of the resource becomes available.
  38. In a way • Permissible operations: given a semaphore s, two indivisible operations are defined: – signal(s) // increments s by one – wait(s) // decrements s by one as soon as it is possible • value_of(s) = init(s) + number_of_signals(s) – number_of_successful_waits(s)
  39. Types of Semaphores • Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores (the same functionality that mutexes have). • Semaphores can be used to count sleeping processes/wakeups. • A semaphore value of zero indicates that no wakeups were saved; a positive value indicates that one or more wakeups are pending.
  40. How can we use a Semaphore • One important property of these semaphore variables is that their value cannot be changed except by using the wait() and signal() functions. • Correct solution using a semaphore named mutex, initialized to 1: wait(mutex); /* critical section code goes here */ signal(mutex);
  41. wait() and signal() • wait(): decrements the value of the semaphore variable by 1. If the value becomes negative, the process executing wait() is blocked (like sleep), i.e., added to the semaphore's queue. • signal(): increments the value of the semaphore variable by 1. After the increment, if the pre-increment value was negative (meaning there are processes waiting for the resource), it transfers a blocked process from the semaphore's waiting queue to the ready queue (like wakeup).
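These wait()/signal() semantics (the value may go negative, and a negative value records how many processes are blocked) can be sketched as a small Python class; the condition variable plays the role of the semaphore's queue. The class and method names mirror the slide, not any real library API:

```python
import threading

class Semaphore:
    """Counting semaphore with the wait()/signal() semantics above."""

    def __init__(self, value=1):
        self.value = value
        self._cond = threading.Condition()   # holds the queue of waiters

    def wait(self):
        with self._cond:
            self.value -= 1
            if self.value < 0:               # no unit available: block
                self._cond.wait()            # join the semaphore's queue

    def signal(self):
        with self._cond:
            self.value += 1
            if self.value <= 0:              # someone was waiting...
                self._cond.notify()          # ...move one to the ready queue
```

Note that stdlib semaphores (e.g. Python's threading.Semaphore) clamp the value at zero instead of letting it go negative; the version above follows the lecture's bookkeeping, where -k means k blocked processes.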
  42. Producer / Consumer with Semaphores • 3 semaphores: full, empty and mutex • full counts full slots (initially 0) • empty counts empty slots (initially N) • mutex protects the buffer that holds the items produced and consumed (a binary semaphore)
  43. Producer / Consumer with Semaphores 1. A single consumer enters its critical section. Since full is 0, the consumer blocks. 2. The producers, one at a time, gain access to the queue through mutex and deposit items in the queue. 3. Once the first producer exits its critical section, full is incremented, allowing one consumer to enter its critical section.
  44. Producer / Consumer with Semaphores (figure)
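The three-semaphore scheme above can be sketched with the standard library's semaphores (a Python sketch; the buffer size N and item count are illustrative choices, and acquire/release correspond to the lecture's down/up):

```python
import threading
from collections import deque

N = 4                                # buffer capacity (illustrative)
ITEMS = 20                           # items to produce (illustrative)
buffer = deque()
empty = threading.Semaphore(N)       # counts empty slots, initially N
full = threading.Semaphore(0)        # counts full slots, initially 0
mutex = threading.Lock()             # binary semaphore protecting the buffer
consumed = []

def producer():
    for item in range(ITEMS):
        empty.acquire()              # down(empty): wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()               # up(full): one more filled slot

def consumer():
    for _ in range(ITEMS):
        full.acquire()               # down(full): wait for an item
        with mutex:
            item = buffer.popleft()
        empty.release()              # up(empty): one more free slot
        consumed.append(item)

def run_semaphore_demo():
    ts = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return consumed
```

Ordering matters: the producer must down(empty) before taking mutex, otherwise it could sleep on a full buffer while holding the mutex and deadlock the consumer.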
  45. Achievements • We do not get busy waiting. • Only one process at a time accesses the critical section. • Synchronization using semaphores. • Mutual exclusion using a mutex.
  46. When to use what • Semaphore: use a semaphore when you (a thread) want to sleep until some other thread tells you to wake up. Semaphore 'down' happens in one thread (producer) and semaphore 'up' (on the same semaphore) happens in another thread (consumer). E.g.: in the producer-consumer problem, the producer wants to sleep until at least one buffer slot is empty; only the consumer thread can tell when a buffer slot is empty. • Mutex: use a mutex when you (a thread) want to execute code that should not be executed by any other thread at the same time. Mutex 'down' happens in one thread and mutex 'up' must happen in the same thread later on. E.g.: if you are deleting a node from a global linked list, you do not want another thread to muck around with the pointers while you are deleting the node. When you acquire a mutex and are busy deleting a node, if another thread tries to acquire the same mutex, it will be put to sleep till you release the mutex.
  47. Find a Crack (figure)
  48. Mutex • Mutex: a variable which can be in one of two states: locked (1 or another non-zero value) and unlocked (0). • Easy to implement. • Good for use with thread packages in user space: – A thread (process) that wants access to the critical region calls mutex_lock. – If the mutex is unlocked, the call succeeds. Otherwise, the thread blocks until the thread in the critical region does a mutex_unlock.
  49. User-space code for mutex lock and unlock (figure)
  50. Pthread calls for mutexes • pthread_mutex_trylock tries to lock the mutex. If it fails, it returns an error code instead of blocking, and the thread can do something else.
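Python's threading.Lock has the same try-then-move-on behavior via a non-blocking acquire, which is the closest analogue of pthread_mutex_trylock. The helper function below is illustrative:

```python
import threading

m = threading.Lock()

def try_enter(lock):
    """Try to take the mutex without blocking, like pthread_mutex_trylock."""
    if lock.acquire(blocking=False):
        return "locked"        # got the mutex; caller must release() later
    return "busy"              # mutex held elsewhere: go do something else
```

The first call succeeds; a second call while the lock is still held reports "busy" immediately instead of sleeping, and succeeds again after release().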
  51. Condition Variables • Allow a thread to block if a condition is not met, e.g. in producer-consumer: the producer needs to block if the buffer is full. • A mutex makes it possible to check whether the buffer is full. • A condition variable makes it possible to put the producer to sleep while the buffer is full. • Both are present in pthreads and are used together.
  52. Pthread calls for condition variables (figure)
  53. Producer-Consumer with condition variables and mutexes • The producer produces one item and blocks, waiting for the consumer to consume the item. • The producer signals the consumer that the item has been produced. • The consumer has been blocked, waiting for the signal from the producer that an item is in the buffer. • The consumer consumes the item and signals the producer to produce a new item.
  54. (figure)
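The one-slot handshake just described can be sketched with threading.Condition, which bundles a mutex with a condition variable (a Python sketch; the single-element slot and item count are illustrative):

```python
import threading

slot = []                            # one-slot buffer
cond = threading.Condition()         # mutex + condition variable together
received = []
ITEMS = 10                           # items to hand over (illustrative)

def producer():
    for item in range(ITEMS):
        with cond:                   # take the mutex
            while slot:              # buffer full: block
                cond.wait()          # releases the mutex while sleeping
            slot.append(item)
            cond.notify()            # wake the consumer

def consumer():
    for _ in range(ITEMS):
        with cond:
            while not slot:          # buffer empty: block
                cond.wait()
            received.append(slot.pop())
            cond.notify()            # wake the producer

def run_condvar_demo():
    ts = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return received
```

The while loops around wait() are essential: a woken thread must recheck the condition before proceeding, exactly as with pthread_cond_wait.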
  55. Event Counters • An event counter is another data structure that can be used for process synchronization. Like a semaphore, it has an integer count and a set of waiting process identifications. • Unlike semaphores, the count variable only increases. It is similar to the "next customer number" used in systems where each customer takes a sequentially numbered ticket and waits for that number to be called.
  56. Producer-Consumer with Event Counters
      #define N 100
      typedef int event_counter;
      event_counter in = 0;    /* counts inserted items */
      event_counter out = 0;   /* items removed from buffer */
      void producer(void)
      {
          int item, sequence = 0;
          while (TRUE) {
              produce_item(&item);
              sequence = sequence + 1;   /* counts items produced */
              await(out, sequence - N);  /* wait for room in buffer */
              enter_item(item);          /* insert into buffer */
              advance(&in);              /* inform consumer */
          }
      }
      (Operating Systems, 2011, Danny Hendler & Amnon Meisels)
  57. Event Counters (producer-consumer)
      void consumer(void)
      {
          int item, sequence = 0;
          while (TRUE) {
              sequence = sequence + 1;   /* counts items consumed */
              await(in, sequence);       /* wait for item */
              remove_item(&item);        /* take item from buffer */
              advance(&out);             /* inform producer */
              consume_item(item);
          }
      }
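The await and advance primitives used above can be sketched as a small Python class (illustrative names; await_ has a trailing underscore because await is a Python keyword). advance() only ever increases the count, and await_(v) blocks until the count reaches v:

```python
import threading

class EventCounter:
    """Monotonically increasing counter with await/advance primitives."""

    def __init__(self):
        self.count = 0
        self._cond = threading.Condition()

    def advance(self):
        with self._cond:
            self.count += 1             # the count only ever goes up
            self._cond.notify_all()     # recheck every waiter's threshold

    def await_(self, value):
        with self._cond:
            while self.count < value:   # not yet reached: block
                self._cond.wait()
```

Note that await(out, sequence - N) in the producer works because a non-positive target is already satisfied by a count of 0, so the producer only blocks once it runs N items ahead of the consumer.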
  58. Monitors • A new synchronization structure called a monitor. • Monitors are features to be included in high-level programming languages. • A monitor is a collection of functions, declarations, and initialization statements. • Only one process at a time is allowed inside the monitor. Language compilers generate code that guarantees this restriction.
  59. Monitors • Monitors provide control by allowing only one process to access a critical resource at a time – a class/module/package – contains procedures and data • Syntax:
      name : monitor
          ... some local declarations ...
          initialize local data
          procedure name(... arguments ...)
              ... do some work ...
          ... other procedures ...
  60. Monitor Rules • Any process may call any monitor procedure at any time, but only one process at a time may be inside the monitor. • No process may directly access a monitor's local variables. • A monitor may only access its own local variables.
  61. Things Needed to Enforce a Monitor • A "wait" operation – forces the running process to sleep. • A "signal" operation – wakes up a sleeping process. • A condition – something to store who is waiting for a particular reason; implemented as a queue.
  62. Monitors – Summary • Advantages – Data-access synchronization is simplified (vs. semaphores or locks) – Better encapsulation • Disadvantages – Deadlock is still possible (in monitor code) – The programmer can still botch the use of monitors – No provision for information exchange between machines
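Languages without built-in monitors can imitate one with a class: a single hidden lock serializes every public method (so only one thread is ever "inside the monitor"), and condition variables supply the monitor's wait/signal operations. A bounded-buffer sketch, with illustrative names:

```python
import threading

class BoundedBufferMonitor:
    """Monitor-style bounded buffer: one lock guards every method."""

    def __init__(self, capacity):
        self._lock = threading.Lock()                    # the monitor lock
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = []                                 # monitor-local data
        self._capacity = capacity

    def insert(self, item):
        with self._lock:                                 # enter the monitor
            while len(self._items) >= self._capacity:
                self._not_full.wait()                    # monitor wait
            self._items.append(item)
            self._not_empty.notify()                     # monitor signal

    def remove(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item
```

Callers never touch _items directly, matching the rule that only the monitor may access its local variables; here that protection is by convention, whereas a compiler for a monitor-aware language would enforce it.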
  63. Message Passing • Semaphores, monitors and event counters are all designed to function within a single system (that is, a system with a single primary memory). • They do not support synchronization of processes running on separate machines connected by a network (a distributed system). • Messages, which can be sent across the network, can be used to provide synchronization. • So message passing is a strategy for interprocess communication in a distributed environment.
  64. Message Passing • Send and receive primitives are defined: send(P, message): send a message to process P; receive(Q, message): receive a message from process Q.
      Process P: while (TRUE) { produce an item; send(C, item); }
      Process C: while (TRUE) { receive(P, item); consume the item; }
  65. Message Passing in Producer / Consumer In this solution, each message has two components: • an empty/full flag, and a data component being passed from the producer to the consumer. • Initially, the consumer sends N messages marked as "empty" to the producer. • The producer receives an empty message, blocking until one is available, fills it, and sends it to the consumer. • The consumer receives a filled message, blocking if necessary, processes the data it contains, and returns the empty message to the producer.
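This empty/full protocol can be sketched with two mailboxes, where queue.Queue stands in for the send/receive primitives (blocking get is receive, put is send). N and ITEMS are illustrative:

```python
import queue
import threading

N = 4                                # number of "empty" messages (illustrative)
ITEMS = 12                           # items to transfer (illustrative)
to_producer = queue.Queue()          # mailbox carrying empty messages
to_consumer = queue.Queue()          # mailbox carrying filled messages
delivered = []

def producer():
    for item in range(ITEMS):
        to_producer.get()            # receive an empty (blocks if none)
        to_consumer.put(item)        # fill it and send it to the consumer

def consumer():
    for _ in range(N):
        to_producer.put(None)        # initially send N empties to the producer
    for _ in range(ITEMS):
        item = to_consumer.get()     # receive a filled message (blocks)
        delivered.append(item)
        to_producer.put(None)        # return the empty to the producer

def run_message_demo():
    ts = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return delivered
```

The N circulating empties bound the buffer exactly as the empty semaphore did earlier: the producer can never be more than N messages ahead of the consumer.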
  66. Classic IPC Problems • Dining philosophers • Readers and writers • Sleeping barber
  67. Readers and Writers • Multiple readers can concurrently read from the database. • But when updating the database, there can be only one writer (i.e., no other writers and no readers either).
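One classic solution gives readers priority: a counter tracks active readers, the first reader in locks writers out, and the last reader out lets them back in. A Python sketch with illustrative names:

```python
import threading

class RWLock:
    """Readers-priority readers-writers lock."""

    def __init__(self):
        self._rc_mutex = threading.Lock()   # protects the reader count
        self._db = threading.Lock()         # held while the database is written
        self._rc = 0                        # number of active readers

    def reader_acquire(self):
        with self._rc_mutex:
            self._rc += 1
            if self._rc == 1:
                self._db.acquire()          # first reader blocks writers

    def reader_release(self):
        with self._rc_mutex:
            self._rc -= 1
            if self._rc == 0:
                self._db.release()          # last reader admits writers

    def writer_acquire(self):
        self._db.acquire()                  # exclusive access to the database

    def writer_release(self):
        self._db.release()
```

Because readers only touch _db on the first-in/last-out transitions, any number of them overlap freely; the known weakness of this variant is that a steady stream of readers can starve writers.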
  68. Dining Philosophers Philosophers eat and think. 1. To eat, a philosopher must first acquire a left fork and then a right fork (or vice versa). 2. Then they eat. 3. Then they put down the forks. 4. Then they think. 5. Go to 1.
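If every philosopher grabs the left fork and then the right one, all five can each hold one fork and wait forever. A classic fix is a global fork ordering: always pick up the lower-numbered fork first, so the circular wait needed for deadlock cannot form. A Python sketch (five philosophers and three meals each are illustrative numbers):

```python
import threading

PHILOSOPHERS = 5
MEALS = 3
forks = [threading.Lock() for _ in range(PHILOSOPHERS)]
meals_eaten = [0] * PHILOSOPHERS

def philosopher(i):
    left, right = i, (i + 1) % PHILOSOPHERS
    # Global ordering: lower-numbered fork first breaks the circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(MEALS):
        with forks[first]:
            with forks[second]:
                meals_eaten[i] += 1     # eat
        # put down the forks (lock release), then think

def run_philosophers():
    ts = [threading.Thread(target=philosopher, args=(i,))
          for i in range(PHILOSOPHERS)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return meals_eaten
```

Only the last philosopher's acquisition order actually changes (fork 0 before fork 4), but that single reversal is enough to make the demo run to completion instead of deadlocking.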
  69. Sleeping Barber
  70. “Every man, wherever he goes, is encompassed by a cloud of comforting convictions, which move with him like flies on a summer day.”
