Chapter 02 modified

  • Slide note (for the kitchen monitor example): notice that the condition of one person in the kitchen is now relaxed. Two new rules: there must be at least one dish in the sink to clean a dish, and at least one dish in dishes to cook.
  • Transcript

    • 1. Chapter 2: Processes and Threads. 2.1 Processes 2.2 Threads 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling 1
    • 2. Processes The Process Model• Multiprogramming of four programs• Conceptual model of 4 independent, sequential processes• Only one program active at any instant 2
    • 3. Process Creation. Principal events that cause process creation: 1. System initialization 2. Execution of a process-creation system call by a running process 3. User request to create a new process 4. Initiation of a batch job 3
    • 4. Process TerminationConditions which terminate processes1. Normal exit (voluntary)2. Error exit (voluntary)3. Fatal error (involuntary)4. Killed by another process (involuntary) 4
    • 5. Process Hierarchies• Parent creates a child process; child processes can create their own processes• Forms a hierarchy – UNIX calls this a "process group"• Windows has no concept of process hierarchy – all processes are created equal 5
    • 6. Process States (1) • Process Transitions• Possible process states – running – blocked – ready• Transitions between states shown 6
    • 7. Process States (2)• Lowest layer of process-structured OS – handles interrupts, scheduling• Above that layer are sequential processes 7
    • 8. Implementation of Processes. The OS organizes the data about each process in a table naturally called the process table. Each entry in this table is called a process table entry or process control block (PCB). Characteristics of the process table: 1. One entry per process. 2. The central data structure for process management. 3. A process state transition (e.g., moving from blocked to ready) is reflected by a change in the value of one or more fields in the PCB. 4. We have converted an active entity (process) into a data structure (PCB). Finkel calls this the level principle: an active entity becomes a data structure when looked at from a lower level. 8
    • 9. Implementation of Processes. A process in an operating system is represented by a data structure known as a Process Control Block (PCB) or process descriptor. The PCB contains important information about the specific process, including: 1. The current state of the process, i.e., whether it is ready, running, waiting, or whatever. 2. Unique identification of the process in order to track "which is which" information. 3. A pointer to the parent process. 9
    • 10. Implementation of Processes (continued): 4. Similarly, a pointer to the child process (if it exists). 5. The priority of the process (a part of CPU scheduling information). 6. Pointers to locate the memory of the process. 7. A register save area. 8. The processor it is running on. The PCB is a central store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system. 10
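To make the field list on slides 9-10 concrete, here is a hypothetical PCB declaration in C; the structure and field names are illustrative only and do not come from the slides or from any particular kernel.

```c
/* A hypothetical process control block mirroring the fields listed
 * above; real kernels (e.g., Linux's task_struct) differ in detail. */
typedef enum { READY, RUNNING, BLOCKED } proc_state_t;

typedef struct pcb {
    int            pid;           /* unique identification of the process */
    proc_state_t   state;         /* current state: ready/running/blocked */
    struct pcb    *parent;        /* pointer to the parent process        */
    struct pcb    *child;         /* pointer to a child process, if any   */
    int            priority;      /* CPU scheduling information           */
    void          *memory_map;    /* pointers to locate process memory    */
    unsigned long  regs[16];      /* register save area                   */
    int            cpu;           /* the processor it is running on       */
} pcb_t;

/* The process table is then simply one entry per process: */
#define MAX_PROCS 256
static pcb_t process_table[MAX_PROCS];
```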
    • 11. Process Control Block 11
    • 12. Process Control Block 12
    • 13. Process Table: entries indexed by PID (1, 2, ..., n), each pointing to that process's Process Control Block 13
    • 14. 14
    • 15. Process States 15
    • 16. Implementation of Processes (2): skeleton of what the lowest level of the OS does when an interrupt occurs 16
    • 17. Implementation of Processes (1) Fields of a process table entry 17
    • 18. Threads The Thread Model (1)(a) Three processes each with one thread(b) One process with three threads 18
    • 19. The Thread Model (2)• Items shared by all threads in a process• Items private to each thread 19
    • 20. The Thread Model (3)Each thread has its own stack 20
    • 21. Thread Usage (1)A word processor with three threads 21
    • 22. Thread Usage (2)A multithreaded Web server 22
    • 23. Thread Usage (3)• Rough outline of code for previous slide (a) Dispatcher thread (b) Worker thread 23
    • 24. Thread Usage (4)Three ways to construct a server 24
    • 25. Implementing Threads in User Space A user-level threads package 25
    • 26. Implementing Threads in the Kernel A threads package managed by the kernel 26
    • 27. Hybrid ImplementationsMultiplexing user-level threads ontokernel- level threads 27
    • 28. Scheduler Activations• Goal – mimic functionality of kernel threads – gain performance of user space threads• Avoids unnecessary user/kernel transitions• Kernel assigns virtual processors to each process – lets runtime system allocate threads to processors• Problem: Fundamental reliance on kernel (lower layer) calling procedures in user space (higher layer) 28
    • 29. Pop-Up Threads• Creation of a new thread when message arrives (a) before message arrives (b) after message arrives 29
    • 30. Making Single-Threaded Code Multithreaded (1)Conflicts between threads over the use of a global variable 30
    • 31. Making Single-Threaded Code Multithreaded (2) Threads can have private global variables 31
    • 32. Interprocess Communication (IPC)• Processes frequently need to communicate with other processes (e.g., a shell pipeline)• Interrupts are one way to achieve IPC• But we require a well-structured way to achieve IPC. 32
    • 33. Interprocess Communication (IPC)• Issues to be considered: 1. How one process can pass information to another process. 2. Making sure that two or more processes don't get in each other's way when entering critical regions. 3. Proper sequencing of processes when dependencies are present. Ex: process A produces data and process B has to print this data. 33
    • 34. Interprocess Communication: Race Conditions• In an OS, processes working together may share resources (storage).• Shared storage 1. may be in primary memory 2. may be a shared file. 34
    • 35. IPC – Race conditions 1. A process that wants to print a file enters the file name in a special spooler directory (shared). 2. Another process, the printer daemon, periodically checks whether there are any files to be printed; if there are, it prints them and then removes their names from the spooler directory. Two processes want to access shared memory at the same time. 35
    • 36. IPC – Race conditions: here, in points to the next free slot in the directory and out points to the next file to be printed; both are shared variables (each process also keeps its own local copy of the next free slot). Print Spooler 36
    • 37. IPC – Race conditions. The following might happen: 1. Process A reads in and stores the value 7 in a local variable called next_free_slot. 2. Just then a clock interrupt occurs and the CPU decides that process A has run long enough. 3. It switches to process B. 4. Process B also reads in and also gets a 7. 5. It too stores 7 into its local variable next_free_slot. 37
    • 38. IPC – Race conditions 6. Process B continues to run, stores the name of its file in slot 7 and updates in to be 8. 7. Now process B goes off and does other things. 8. Eventually, process A runs again, starting from the place it left off. 9. It looks at next_free_slot. 10. It finds 7 there. 11. It writes its file name in slot 7, erasing the name that process B just put there. 38
    • 39. IPC – Race conditions 12. Then it computes next_free_slot + 1, which is 8. 13. Now it sets in to 8. 14. The spooler directory is now internally consistent. 15. So the printer daemon process will not notice anything wrong. 16. But process B never gets its job done. 17. A situation like this is known as a RACE CONDITION. 39
    • 40. Mutual exclusion & Critical Regions• We must avoid race conditions by finding some way to prohibit more than one process from reading and writing the shared data at the same time.• We can achieve this by enforcing MUTUAL EXCLUSION. 40
    • 41. Mutual exclusion & Critical Regions• MUTUAL EXCLUSION: some way of making sure that if one process is using a shared variable or file, the other processes will be excluded from doing the same thing.• CRITICAL REGION: the part of the program where the shared memory is accessed is called the critical region. 41
    • 42. Mutual exclusion & Critical RegionsConditions required to avoid race condition:1. No two processes may be simultaneously inside their critical regions.2. No assumptions may be made about speeds or the number of CPUs.3. No process running outside its critical region may block other processes.4. No process should have to wait forever to enter its critical region. 42
    • 43. Mutual exclusion using critical regions• CRITICAL REGION: the part of the program where the shared memory is accessed is called the critical region. 43
    • 44. Mutual Exclusion with Busy Waiting. BUSY WAITING: continually testing a variable until some value appears is called BUSY WAITING. Proposals for achieving mutual exclusion: • Disabling interrupts • Lock variables • Strict alternation • Peterson's solution • The TSL instruction 44
    • 45. Mutual Exclusion with Busy Waiting: Disabling Interrupts• It is the simplest solution• Each process disables all interrupts just after entering its critical region• Each process re-enables all interrupts just before leaving its critical region• With interrupts disabled, no clock interrupts occur• The CPU can't switch from process to process without clock interrupts. Disadvantages:• What happens if one user disables interrupts and then never turns them on again?• If the system is a multiprocessor system, disabling interrupts affects only the CPU that executed the disable instruction 45
    • 46. Mutual Exclusion with Busy Waiting: LOCK VARIABLES• It is the simplest software solution• We can have a single shared (lock) variable• Keep it initially 0• When a process wants to enter its critical region, it first tests the lock variable• If the lock is 0, the process sets it to 1 and enters the critical region• If the lock is 1, the process just waits until it becomes 0. Disadvantages:• Unfortunately, this idea contains exactly the same race as the spooler directory example: two processes can both read 0 and both set the lock to 1. 46
    • 47. Mutual Exclusion with Busy Waiting (1): Strict Alternation. Notice the semicolons terminating the while statements in the figure (a C sketch of it follows below).• Busy waiting: continuously testing a variable until some value appears, using it as a lock.• A lock that uses busy waiting is called a spin lock.• It should usually be avoided, since it wastes CPU time. 47
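The strict-alternation figure referred to on slide 47 is not reproduced in this transcript. The sketch below is a plausible C rendering of it; critical_region() and noncritical_region() are placeholder functions, and turn is the shared variable discussed on slides 48-49.

```c
/* Strict alternation: processes 0 and 1 take turns entering the
 * critical region. turn == i means it is process i's turn.        */
volatile int turn = 0;               /* shared between the two processes */

void critical_region(void)    { /* access the shared resource here */ }
void noncritical_region(void) { /* do other work here */ }

void process_0(void)
{
    while (1) {
        while (turn != 0)
            ;                        /* busy wait (spin lock)       */
        critical_region();
        turn = 1;                    /* hand the turn to process 1  */
        noncritical_region();
    }
}

void process_1(void)
{
    while (1) {
        while (turn != 1)
            ;                        /* busy wait                   */
        critical_region();
        turn = 0;                    /* hand the turn to process 0  */
        noncritical_region();
    }
}
```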
    • 48. 1. The integer variable turn (keeps track of whose turn it is to enter the CR),2. Initially, process 0 inspects turn, finds it to be 0, and enters its CR,3. Process 1 also finds it to be 0 and therefore sits in a tight loop continually testing turn to see when it becomes 1,4. When process 0 leaves the CR, it sets turn to 1, to allow process 1 to enter its CR,5. Suppose that process 1 finishes its CR quickly, so both processes are in their nonCR (with turn set to 0) 48
    • 49. 6. Now process 0 executes its whole loop quickly, exiting its CR and setting turn to 1. 7. At this point turn is 1 and both processes are executing in their nonCR. 8. Suddenly, process 0 finishes its nonCR and goes back to the top of its loop. 9. Unfortunately, it is not permitted to enter its CR: turn is 1 and process 1 is busy with its nonCR. 10. It hangs in its while loop until process 1 sets turn to 0. 11. This algorithm does avoid all races, but it violates condition 3: a process running outside its critical region (process 1) blocks another process (process 0). 49
    • 50. Mutual Exclusion with Busy Waiting: TSL Instruction• Let's take some help from hardware• Many multiprocessor systems have an instruction – TSL RX, LOCK (test and set lock)• It works as follows: 1. It reads the contents of the memory word LOCK into register RX and then stores a nonzero value at the memory address LOCK (sets a lock). 2. No other processor can access the memory word until the instruction is finished. 3. In other words, the CPU executing the TSL instruction locks the memory bus to prohibit other CPUs from accessing memory until it is done. 50
    • 51. Mutual Exclusion with Busy Waiting: TSL Instruction 1. To use the TSL instruction, we use a shared variable, lock, to coordinate access to shared memory. 2. When lock = 0, any process may set it to 1 and enter its critical region. 3. When lock = 1, no other process can enter. Entering and leaving a critical region using the TSL instruction (sketched below). 51
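Slide 51's figure (entering and leaving a critical region with TSL) is not in the transcript. As a stand-in, the effect of TSL can be approximated in portable C with a C11 atomic exchange; this is a sketch under that assumption, not the assembly version the slide refers to.

```c
#include <stdatomic.h>

static atomic_int lock = 0;          /* 0 = free, 1 = held */

void enter_region(void)
{
    /* atomic_exchange plays the role of TSL RX,LOCK: it reads the old
     * value and writes 1 in one indivisible step.                      */
    while (atomic_exchange(&lock, 1) != 0)
        ;                            /* lock was already 1: busy wait   */
}

void leave_region(void)
{
    atomic_store(&lock, 0);          /* store 0 back into lock          */
}
```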
    • 52. Peterson's Solution to achieve Mutual Exclusion. Peterson's algorithm is shown in Fig. 2-21. This algorithm consists of two procedures written in ANSI C. Before using the shared variables (i.e., before entering its critical region), each process calls enter_region with its own process number, 0 or 1, as parameter. This call will cause it to wait, if need be, until it is safe to enter. After it has finished with the shared variables, the process calls leave_region to indicate that it is done and to allow the other process to enter, if it so desires.
    • 53. Peterson's Solution. Let us see how this solution works. 1. Initially neither process is in its critical region. 2. Now process 0 calls enter_region. 3. It indicates its interest by setting its array element and sets turn to 0. 4. Since process 1 is not interested, enter_region returns immediately. 5. If process 1 now calls enter_region, it will hang there until interested[0] goes to FALSE, an event that only happens when process 0 calls leave_region to exit the critical region.
    • 54. Peterson's Solution 6. Now consider the case that both processes call enter_region almost simultaneously. 7. Both will store their process number in turn. 8. Whichever store is done last is the one that counts; the first one is overwritten and lost. 9. Suppose that process 1 stores last, so turn is 1. 10. When both processes come to the while statement, process 0 executes it zero times and enters its critical region. 11. Process 1 loops and does not enter its critical region until process 0 exits its critical region.
    • 55. Mutual Exclusion with Busy Waiting (2): Peterson's solution for achieving mutual exclusion (sketched below) 55
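The code figure for slide 55 is missing from the transcript; below is the standard two-process Peterson algorithm in C, matching the description on slides 52-54. (On modern out-of-order hardware the plain variables would additionally need memory barriers or atomics; that detail is outside the scope of the slides.)

```c
#define FALSE 0
#define TRUE  1
#define N     2                       /* number of processes            */

int turn;                             /* whose turn is it?              */
int interested[N];                    /* all values initially FALSE     */

void enter_region(int process)        /* process is 0 or 1              */
{
    int other = 1 - process;          /* number of the other process    */
    interested[process] = TRUE;       /* show that you are interested   */
    turn = process;                   /* set flag                       */
    while (turn == process && interested[other] == TRUE)
        ;                             /* busy wait                      */
}

void leave_region(int process)
{
    interested[process] = FALSE;      /* indicate departure from the CR */
}
```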
    • 56. PRIORITY INVERSION PROBLEM 1. In scheduling, priority inversion is the scenario where a low-priority task holds a shared resource that is required by a high-priority task. 2. This causes the execution of the high-priority task to be blocked until the low-priority task has released the resource, effectively "inverting" the relative priorities of the two tasks. 3. If some other medium-priority task, one that does not depend on the shared resource, attempts to run in the interim, it will take precedence over both the low-priority task and the high-priority task. 56
    • 57. PRIORITY INVERSION PROBLEM. Priority inversion will: 1. cause problems in real-time systems, 2. reduce the performance of the system, 3. possibly reduce system responsiveness, which leads to the violation of response-time guarantees. 57
    • 58. PRIORITY INVERSION EXAMPLE 1. Consider three tasks A, B, C with priorities A > B > C. 2. Assume these tasks are served by a common (sequential) server. 3. Assume A and C share a critical resource. 4. Suppose C has the server and acquires the resource. 5. A requests the server, preempting C. 6. A then wants the resource. 7. Now C must take the server while A blocks waiting for C to release the resource. 8. Meanwhile B requests the server. 9. Since B > C, B can run arbitrarily long, all the while with A being blocked. 10. But A > B, which is the anomaly (priority inversion). 58
    • 59. Sleep & Wakeup• Both Peterson's solution and the TSL solution have the defect of requiring busy waiting• So we can have problems like: 1. CPU time is wasted 2. the priority inversion problem. These problems can be solved by using the Sleep & Wakeup primitives (system calls). 59
    • 60. Sleep & Wakeup• Sleep: Sleep is a system call that causes the caller to block, that is, be suspended until another process wakes it up• Wakeup: the Wakeup system call awakens a process. It has one parameter: the process to be awakened. 60
    • 61. Producer – Consumer Problem (Bounded Buffer Problem)• It consists of two processes, Producer & Consumer• They share a common fixed size Buffer• Producer puts information into Buffer• Consumer takes information out of buffer 61
    • 62. Producer – Consumer Problem (Bounded Buffer Problem)• Trouble: when the producer wants to put information into the buffer but the buffer is already full• Solution: 1. The producer goes to sleep 2. It is awakened when the consumer removes an item or items from the buffer 62
    • 63. Producer – Consumer Problem (Bounded Buffer Problem)• Trouble: when the consumer wants to take information from the buffer but the buffer is empty• Solution: 1. The consumer goes to sleep 2. It is awakened when the producer puts information in the buffer 63
    • 64. 64
    • 65. 65
    • 66. Sleep and Wakeup: Producer Module. Producer-consumer problem with fatal race condition. Reason: access to count is unconstrained (example in the book). 66
    • 67. Sleep and Wakeup: Consumer Module. Producer-consumer problem with fatal race condition. Reason: access to count is unconstrained (example in the book). 67
    • 68. Sleep and Wakeup• Because count is accessed in an unconstrained manner, a fatal race condition occurs here• So some wakeup calls are lost (wasted)• A wakeup waiting bit can be used to avoid this• The wakeup waiting bit is set when a wakeup is sent to a process that is still awake• Later, when the process tries to go to sleep, if the wakeup waiting bit is set, the bit is turned off and the process stays awake 68
    • 69. Problem With Sleep and Wakeup. The problem with this solution is that it contains a race condition that can lead to a deadlock. Consider the following scenario: 1. The consumer has just read the variable itemCount, noticed it is zero and is just about to move inside the if-block. 2. Just before calling sleep, the consumer is interrupted and the producer is resumed. 3. The producer creates an item, puts it into the buffer, and increases itemCount. 69
    • 70. Problem With Sleep and Wakeup 4. Because the buffer was empty prior to the last addition, the producer tries to wake up the consumer. 5. Unfortunately the consumer wasn't yet sleeping, and the wakeup call is lost. When the consumer resumes, it goes to sleep and will never be awakened again, because the consumer is only awakened by the producer when itemCount is equal to 1. 6. The producer will loop until the buffer is full, after which it will also go to sleep. 7. Since both processes will sleep forever, we have run into a deadlock. This solution therefore is unsatisfactory. 70
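The producer and consumer figures for slides 66-67 are not reproduced above. The sketch below follows the book's style of pseudocode in C syntax: produce_item, insert_item, remove_item, consume_item, sleep and wakeup are the hypothetical primitives the slides talk about (not a real API), and count (itemCount) is the unprotected shared variable that makes the final wakeup racy.

```c
#define N 100                      /* number of slots in the buffer             */
int count = 0;                     /* items in the buffer (shared, unprotected) */

/* Hypothetical primitives from the slides (pseudocode, not a real API): */
extern int  produce_item(void);
extern void insert_item(int item);
extern int  remove_item(void);
extern void consume_item(int item);
extern void sleep(void);                       /* block the caller          */
extern void wakeup(void (*process)(void));     /* unblock the named process */

void producer(void);
void consumer(void);

void producer(void)
{
    int item;
    while (1) {
        item = produce_item();                 /* generate the next item          */
        if (count == N) sleep();               /* buffer full: go to sleep        */
        insert_item(item);
        count = count + 1;
        if (count == 1) wakeup(consumer);      /* buffer was empty: wake consumer */
    }
}

void consumer(void)
{
    int item;
    while (1) {
        if (count == 0) sleep();               /* buffer empty: go to sleep       */
        item = remove_item();
        count = count - 1;
        if (count == N - 1) wakeup(producer);  /* buffer was full: wake producer  */
        consume_item(item);
    }
}
/* The lost-wakeup race: the consumer reads count == 0, is preempted before it
 * actually calls sleep(), the producer then does wakeup(consumer), and that
 * wakeup is lost; eventually both processes sleep forever (deadlock).          */
```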
    • 71. Semaphores• A semaphore is an integer variable• It is used to count the number of wakeups saved for future use• A semaphore can have – • value 0: no wakeups were saved • a positive value: wakeups are pending. Semaphore operations: 1. Down operation 2. Up operation 71
    • 72. Operations on Semaphores• Down operation 1. It checks the value of the semaphore. 2. If it is greater than zero, it decrements the value by 1 and just continues. 3. If it is zero, the process is put to sleep without completing the Down for the moment. 4. All of this is done as a single, indivisible atomic action. 72
    • 73. Operations on Semaphores• Up operation 1. It increments the value of the semaphore addressed. 2. If one or more processes were sleeping on that semaphore, unable to complete an earlier Down, one of them is chosen by the system. 3. That process is allowed to complete its Down (decrementing the semaphore by 1). 4. Thus, after an Up on a semaphore with processes sleeping on it, the semaphore will still be 0. 5. But there will be one less process sleeping on it. 6. The Up operation is likewise indivisible (atomic). 7. No process ever blocks doing an Up. 73
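A minimal sketch of down and up in C, assuming each whole operation is made atomic (for example by disabling interrupts inside the kernel) and assuming hypothetical kernel helpers sleep_on, wakeup_one and has_sleepers for the queue of processes blocked on the semaphore:

```c
typedef struct {
    int value;                    /* number of saved wakeups                 */
    /* ... plus a queue of processes sleeping on this semaphore ...          */
} semaphore;

/* Assumed kernel helpers (hypothetical): */
extern void sleep_on(semaphore *s);
extern void wakeup_one(semaphore *s);
extern int  has_sleepers(semaphore *s);

void down(semaphore *s)           /* executed as one atomic action */
{
    if (s->value > 0)
        s->value--;               /* consume a saved wakeup and continue     */
    else
        sleep_on(s);              /* block without completing the down; the
                                     down completes when some up() wakes us  */
}

void up(semaphore *s)             /* executed as one atomic action */
{
    if (has_sleepers(s))
        wakeup_one(s);            /* one sleeper completes its down, so the
                                     value stays 0 but one less process sleeps */
    else
        s->value++;               /* nobody waiting: save the wakeup         */
}
```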
    • 74. Producer – Consumer Problem using Semaphores• This solution uses three semaphores: (1) full, (2) empty and (3) mutex. Full: full is used for counting the number of slots that are full. Empty: empty is used for counting the number of slots that are empty. Mutex: mutex is used to make sure that the producer and consumer don't access the buffer at the same time. Semaphores are used here in two different ways – 1. for synchronization (full & empty) 2. to guarantee mutual exclusion (mutex) 81
    • 75. Semaphores : Producer 82
    • 76. Semaphores : Consumer 83
    • 77. Semaphores: the producer-consumer problem using semaphores (a POSIX sketch follows below) 84
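The figures for slides 75-77 are not in the transcript. The following is a runnable sketch of the same three-semaphore scheme using POSIX threads and semaphores (compile with -pthread); the buffer size, item count and names are illustrative.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                          /* slots in the buffer */

static int buffer[N], in = 0, out = 0;
static sem_t empty;                  /* counts empty slots           */
static sem_t full;                   /* counts full slots            */
static sem_t mutex;                  /* binary: guards the buffer    */

static void *producer(void *arg)
{
    (void)arg;
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty);            /* down(empty): wait for a free slot   */
        sem_wait(&mutex);            /* down(mutex): enter critical region  */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);            /* up(mutex)                           */
        sem_post(&full);             /* up(full): one more full slot        */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);             /* down(full): wait for an item        */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);            /* up(empty): one more empty slot      */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty, 0, N);          /* initially all slots are empty       */
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```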
    • 78. Mutexes• A mutex is a variable• It can be in one of two states: unlocked or locked• Only one bit is required to represent it• In practice an integer value is often used, with 0 meaning unlocked and all other values meaning locked• When a process (or thread) needs access to a critical region, it calls mutex_lock• If the mutex is currently unlocked, the call succeeds and the calling process (or thread) is free to enter the critical region 85
    • 79. Mutexes• On the other hand, if mutex is already locked, the calling process (or thread) is blocked until the process (or thread) in the critical region is finished and calls mutex_unlock.• Because mutexes are so simple, they can easily be implemented in user space if a TSL instruction is available• The code for mutex_lock and mutex_unlock for use with a user level threads package 86
    • 80. Mutexes. The code for mutex_lock and mutex_unlock for use with a user-level threads package is as under (implementation of mutex_lock and mutex_unlock). 87
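Since the code figure is not reproduced, here is a sketch of mutex_lock/mutex_unlock for a user-level threads package, with a C11 atomic exchange standing in for the TSL instruction and a hypothetical thread_yield() supplied by the threads package. The key difference from enter_region above is that a blocked thread yields the CPU instead of spinning.

```c
#include <stdatomic.h>

extern void thread_yield(void);      /* assumed: switch to another user thread */

static atomic_int mutex = 0;         /* 0 = unlocked, 1 = locked */

void mutex_lock(void)
{
    /* Try to grab the mutex; if it was already held, give the CPU to
     * another thread. There is no point spinning: with user-level
     * threads no other thread can run until we yield.                 */
    while (atomic_exchange(&mutex, 1) != 0)
        thread_yield();
}

void mutex_unlock(void)
{
    atomic_store(&mutex, 0);
}
```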
    • 81. Monitors (1)Example of a monitor 88
    • 82. Monitors (2)• Outline of producer-consumer problem with monitors – only one monitor procedure active at one time – buffer has N slots 89
    • 83. Monitors (3)Solution to producer-consumer problem in Java (part 1) 90
    • 84. Monitors (4)Solution to producer-consumer problem in Java (part 2) 91
    • 85. Message PassingThe producer-consumer problem with N messages 92
    • 86. MONITORS• The problem with semaphores• Suppose that the two downs in the producer's code were reversed in order...• Both processes would stay blocked forever• If resources are not tightly controlled, "chaos will ensue" – race conditions• To make it easier to write correct programs, a higher-level synchronization primitive called a monitor was proposed.
    • 87. The Solution• Monitors provide control by allowing only one process to access a critical resource at a time• A monitor is a collection of procedures, variables and data structures that are all grouped together in a special kind of module or package• Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor• Monitors have an important property that makes them useful for achieving mutual exclusion: only one process can be active in a monitor at any instant• A monitor may only access its local variables
    • 88. An Abstract Monitor
      name : monitor
        ... some local declarations ...
        initialize local data
        procedure name(... arguments)
          ... do some work ...
        ... other procedures ...
    • 89. MonitorsExample of a monitor 96
    • 90. Monitors• Outline of producer-consumer problem with monitors – only one monitor procedure active at one time – buffer has N slots 97
    • 91. Things Needed to Enforce Monitor• A solution lies in the introduction of condition variables , along with two operators on them, Wait & Signal• “Wait” operation – Forces running process to sleep• “signal” operation – Wakes up a sleeping process• A condition (Condition variable) – Something to store who’s waiting for a particular reason – Implemented as a queue
    • 92. A Running Example – Kitchen
      kitchen : monitor
        occupied : Boolean;              { declarations / initialization }
        occupied := false;
        nonOccupied : condition;
        procedure enterKitchen
          if occupied then nonOccupied.wait;
          occupied := true;
        procedure exitKitchen
          occupied := false;
          nonOccupied.signal;
    • 93. Multiple Conditions• Sometimes desirable to be able to wait on multiple things• Can be implemented with multiple conditions• Example:• Two reasons to enter kitchen- cook (remove clean dishes)- clean (add clean dishes)• Two reasons to wait: – Going to cook, but no clean dishes – Going to clean, no dirty dishes
    • 94. Emerson's Kitchen
      kitchen : monitor
        cleanDishes, dirtyDishes : condition;
        dishes, sink : stack;
        dishes := stack of 10 dishes;
        sink := stack of 0 dishes;
        procedure cook
          if dishes.isEmpty then cleanDishes.wait;
          sink.push( dishes.pop );
          dirtyDishes.signal;
        procedure cleanDish
          if sink.isEmpty then dirtyDishes.wait;
          dishes.push( sink.pop );
          cleanDishes.signal;
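A rough pthreads rendering of the kitchen monitor above, assuming simple integer counters stand in for the two dish stacks; note that with POSIX condition variables (Mesa-style signalling) the `if` of the monitor pseudocode becomes a `while`.

```c
#include <pthread.h>

#define TOTAL_DISHES 10

static pthread_mutex_t kitchen = PTHREAD_MUTEX_INITIALIZER;  /* the monitor lock */
static pthread_cond_t  clean_dishes = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  dirty_dishes = PTHREAD_COND_INITIALIZER;
static int dishes = TOTAL_DISHES;    /* clean dishes on the shelf */
static int sink   = 0;               /* dirty dishes in the sink  */

void cook(void)
{
    pthread_mutex_lock(&kitchen);            /* enter the monitor        */
    while (dishes == 0)                      /* no clean dish: wait      */
        pthread_cond_wait(&clean_dishes, &kitchen);
    dishes--;                                /* dishes.pop               */
    sink++;                                  /* sink.push                */
    pthread_cond_signal(&dirty_dishes);      /* dirtyDishes.signal       */
    pthread_mutex_unlock(&kitchen);          /* leave the monitor        */
}

void clean_dish(void)
{
    pthread_mutex_lock(&kitchen);
    while (sink == 0)                        /* no dirty dish: wait      */
        pthread_cond_wait(&dirty_dishes, &kitchen);
    sink--;                                  /* sink.pop                 */
    dishes++;                                /* dishes.push              */
    pthread_cond_signal(&clean_dishes);      /* cleanDishes.signal       */
    pthread_mutex_unlock(&kitchen);
}
```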
    • 95. Condition Queue• Checking if any process is waiting on a condition: – “condition.queue” returns true if a process is waiting on condition• Example: Doing dishes only if someone is waiting for them
    • 96. Summary• Advantages – Data access synchronization simplified (vs. semaphores or locks) – Better encapsulation• Disadvantages: – Deadlock still possible (in monitor code) – Programmer can still botch use of monitors – No provision for information exchange between machines
    • 97. Interprocess Communication (IPC) Mechanism for processes to communicate and synchronize their actions.  Via shared memory  Via Messaging system - processes communicate without resorting to shared variables. Messaging system and shared memory not mutually exclusive -  can be used simultaneously within a single OS or a single process. IPC facility provides two operations.  send(message) - message size can be fixed or variable  receive(message)
    • 98. Producer-Consumer using IPC
      Producer:
        repeat
          ... produce an item in nextp; ...
          send(consumer, nextp);
        until false;
      Consumer:
        repeat
          receive(producer, nextc);
          ... consume item from nextc; ...
        until false;
    • 99. IPC via Message Passing If processes P and Q wish to communicate, they need to:  establish a communication link between them  exchange messages via send/receive Fixed vs. Variable size message  Fixed message size - straightforward physical implementation, programming task is difficult due to fragmentation  Variable message size - simpler programming, more complex physical implementation.
    • 100. Producer-Consumer using Message Passing
      Producer:
        repeat
          ... produce an item in nextp; ...
          send(consumer, nextp);
        until false;
      Consumer:
        repeat
          receive(producer, nextc);
          ... consume item from nextc; ...
        until false;
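As a concrete stand-in for the abstract send/receive above, here is a small sketch using a Unix pipe: the parent is the producer, the child is the consumer, and the pipe plays the role of the message channel. The item values and count are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                  /* child: consumer */
        close(fd[1]);
        int item;
        while (read(fd[0], &item, sizeof item) == sizeof item)
            printf("consumed item %d\n", item);    /* receive(producer, nextc) */
        close(fd[0]);
        exit(0);
    }

    close(fd[0]);                    /* parent: producer */
    for (int item = 1; item <= 5; item++) {
        if (write(fd[1], &item, sizeof item) != sizeof item)  /* send(consumer, nextp) */
            perror("write");
    }
    close(fd[1]);                    /* EOF tells the consumer to stop */
    wait(NULL);
    return 0;
}
```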
    • 101. Direct Communication Sender and Receiver processes must name each other explicitly:  send(P, message) - send a message to process P  receive(Q, message) - receive a message from process Q Properties of communication link:  Links are established automatically.  A link is associated with exactly one pair of communicating processes.  Exactly one link between each pair.  Link may be unidirectional, usually bidirectional.
    • 102. Indirect Communication Messages are directed to and received from mailboxes (also called ports) Unique ID for every mailbox. Processes can communicate only if they share a mailbox. Send(A, message) /* send message to mailbox A */ Receive(A, message) /* receive message from mailbox A */ Properties of communication link Link established only if processes share a common mailbox. Link can be associated with many processes. Pair of processes may share several communication
    • 103. Indirect Communication using mailboxes
    • 104. Mailboxes (cont.) Operations create a new mailbox  send/receive messages through mailbox  destroy a mailbox Issue: Mailbox sharing  P1, P2 and P3 share mailbox A.  P1 sends message, P2 and P3 receive… who gets message?? Possible Solutions  disallow links between more than 2 processes  allow only one process at a time to execute receive operation  allow system to arbitrarily select receiver and then notify
    • 105. BarriersThis mechanism is used for groups of processes rather than two-process producer-consumer type of situations • Use of a barrier – processes approaching a barrier – all processes but one blocked at barrier – last process arrives, all are let through 112
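A small runnable sketch of the barrier idea using POSIX barriers (on systems that provide pthread_barrier_t; compile with -pthread). Each worker does its phase of work, then blocks in pthread_barrier_wait until the last one arrives, after which all are released together.

```c
#include <pthread.h>
#include <stdio.h>

#define NPROC 4
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    long id = (long)arg;
    printf("process %ld reached the barrier\n", id);
    pthread_barrier_wait(&barrier);      /* blocks until all NPROC arrive */
    printf("process %ld released\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NPROC];
    pthread_barrier_init(&barrier, NULL, NPROC);
    for (long i = 0; i < NPROC; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NPROC; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```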
    • 106. Dining Philosophers (1)• Philosophers eat/think• Eating needs 2 forks• Pick one fork at a time• How to prevent deadlock 113
    • 107. Dining Philosophers (2)A nonsolution to the dining philosophers problem 114
    • 108. Dining Philosophers (3)Solution to dining philosophers problem (part 1) 115
    • 109. Dining Philosophers (4): Solution to dining philosophers problem (part 2) 116
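The solution figures on slides 108-109 are not reproduced in this transcript. The sketch below is the usual state-based solution they describe (a philosopher only starts eating when neither neighbour is eating), written here with POSIX semaphores; the names and the init helper are assumptions of this sketch. Each philosopher i calls take_forks(i), eats, then calls put_forks(i).

```c
#include <semaphore.h>

#define N          5
#define LEFT(i)    (((i) + N - 1) % N)
#define RIGHT(i)   (((i) + 1) % N)
enum { THINKING, HUNGRY, EATING };

static int   state[N];          /* what each philosopher is doing       */
static sem_t mutex;             /* binary: protects state[]             */
static sem_t s[N];              /* one per philosopher, initially 0     */

void init_table(void)
{
    sem_init(&mutex, 0, 1);
    for (int i = 0; i < N; i++) { state[i] = THINKING; sem_init(&s[i], 0, 0); }
}

static void test(int i)         /* can philosopher i start eating?      */
{
    if (state[i] == HUNGRY &&
        state[LEFT(i)] != EATING && state[RIGHT(i)] != EATING) {
        state[i] = EATING;
        sem_post(&s[i]);        /* grant both forks at once             */
    }
}

void take_forks(int i)
{
    sem_wait(&mutex);
    state[i] = HUNGRY;
    test(i);                    /* try to acquire both forks            */
    sem_post(&mutex);
    sem_wait(&s[i]);            /* block if the forks were not granted  */
}

void put_forks(int i)
{
    sem_wait(&mutex);
    state[i] = THINKING;
    test(LEFT(i));              /* a neighbour may now be able to eat   */
    test(RIGHT(i));
    sem_post(&mutex);
}
```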
    • 110. The Readers and Writers ProblemA solution to the readers and writers problem 117
    • 111. The Sleeping Barber Problem (1) 118
    • 112. The Sleeping Barber Problem (2) Solution to sleeping barber problem. 119
    • 113. Scheduling Introduction to Scheduling (1)• Bursts of CPU usage alternate with periods of I/O wait – A CPU/Compute-bound process – Spends most of the time in computing. They have long CPU Bursts and infrequent I/O waits – An I/O bound process - Spends most of the time waiting for I/O. They have Short CPU Bursts and frequent I/O waits 120
    • 114. Introduction to Scheduling Types of Scheduling Algorithms• Non –Preemptive : a non-preemptive scheduling algorithm picks a process to run and then just lets it run until it blocks OR until it voluntarily releases CPU. It can’t be forcibly suspended• Preemptive: a preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and scheduler picks another process to run. 121
    • 115. Categories of Scheduling Algorithms • Batch • Interactive • Real time
    • 116. Introduction to Scheduling (2) Scheduling Algorithm Goals 123
    • 117. Scheduling in Batch Systems• There are the following methods: 1. First-Come-First-Served 2. Shortest Job First 3. Shortest Remaining Time Next 4. Three-level Scheduling 124
    • 118. Scheduling in Batch Systems• First-Come-First-Served method: 1. Simplest non-preemptive algorithm 2. Processes are assigned the CPU in the order they request it 3. Basically there is a single queue of ready processes 4. It is very easy to understand and program 5. A single linked list keeps track of all ready processes 125
    • 119. Scheduling in Batch SystemsFCFS – Example : (With the arrival at Same Time) Average turn around time is (20 + 30 + 55 + 70 + 75) / 5 = 250/5 = 50 126
    • 120. FCFS – Example : (With the arrival at Different Times) 127
    • 121. Scheduling in Batch Systems• FCFS disadvantages. What happens when – 1. there is one compute-bound process that runs for one second at a time and then goes for a disk read (the CPU remains idle during the read), and 2. there are many I/O-bound processes that use little CPU time but each have to perform 1000 disk reads to complete (the CPU again sits idle during those reads)? With FCFS the compute-bound process keeps delaying the I/O-bound ones. 128
    • 122. Scheduling in Batch Systems: Shortest Job First method. Working: here, when several equally important jobs are sitting in the input queue waiting to be started, the scheduler picks the shortest job first. Average turnaround time here: (5 + 15 + 30 + 50 + 75) / 5 = 175/5 = 35. An example of shortest job first scheduling. 129
    • 123. Shortest Job FirstFigure 2-40. An example of shortest job first scheduling. (a) Running four jobs in the original order. (b) Running them in shortest job first order. Tanenbaum, Modern Operating Systems 3 e, (c) 2008 Prentice-Hall, Inc. All rights reserved. 0-13-6006639
    • 124. Preemptive Shortest job Scheduling 131
    • 125. Scheduling in Batch Systems • It is worth pointing out that shortest job first is only optimal when all the jobs are available simultaneously • See the following example: Processes A B C D E; Run times 2 4 1 1 1; Arrival times 0 0 3 3 3. Here we can run the jobs in two orders, e.g. ABCDE or BCDEA. Average turnaround time (ABCDE) = ((2-0)+(6-0)+(7-3)+(8-3)+(9-3))/5 = 23/5 = 4.6. Average turnaround time (BCDEA) = ? (worked below) 132
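Working the second order through the same formula: B finishes at time 4; C, D and E (which arrived at time 3) finish at 5, 6 and 7; and A finishes at 9. So the average turnaround time for BCDEA is ((4-0)+(5-3)+(6-3)+(7-3)+(9-0))/5 = 22/5 = 4.4, which beats 4.6 even though B was not the shortest job available at time 0.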
    • 126. Three-level scheduling in Batch Systems. The admission scheduler decides which jobs to admit to the system; it is used to balance compute-bound and I/O-bound jobs. The memory scheduler decides which jobs are to be kept in memory and which are to be swapped out, to handle the memory-space problem. The CPU scheduler decides which job (among those in memory) is to be given the CPU first. 133
    • 127. Scheduling in Interactive Systems (1)1. Round Robin Scheduling2. Priority Scheduling• Round Robin Scheduling – list of runnable processes (a) – list of runnable processes after B uses up its quantum(b) 134
    • 128. Priority Scheduling1. A priority number (integer) is associated with each process2. The CPU is allocated to the process with the highest priority Normally (smallest integer = highest priority)It can be:• Preemptive• Non-preemptive
    • 129. Priority Scheduling Example With Same Arrival Time. Processes (burst time, priority, arrival time): P1 (10, 3, 00), P2 (1, 1, 00), P3 (2, 4, 00), P4 (1, 5, 00), P5 (5, 2, 00). Gantt chart: P2 | P5 | P1 | P3 | P4, with boundaries 0, 1, 6, 16, 18, 19. The average waiting time = ((16-10) + (1-1) + (18-2) + (19-1) + (6-5))/5 = (6+0+16+18+1)/5 = 41/5 = 8.2
    • 130. Priority Scheduling Example With Different Arrival Time. Processes (burst time, priority, arrival time): P1 (10, 3, 00), P2 (1, 1, 1), P3 (2, 4, 2), P4 (1, 5, 3), P5 (5, 2, 4). The average waiting time = (( ? ) + ( ? ) + ( ? ) + ( ? ) + ( ? ))/5 = ( ? + ? + ? + ? + ? )/5 = ?/5 = ?
    • 131. Priority Scheduling. Problem: Starvation – low-priority processes may never execute. Solution: Aging – as time progresses, increase the priority of the process.
    • 132. Round-Robin Scheduling• The Round-Robin is designed especially for time sharing systems.• Similar to FCFS but adds preemption concept• Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds• After this time has elapsed, the process is preempted and added to the end of the ready queue.
    • 133. Round-Robin Scheduling Example. Time quantum: 20 ms. Arrival time: 00 (simultaneously). The average turnaround (completion) time = ((134) + (37) + (162) + (121))/4 = 113.5
    • 134. Round Robin scheduling Example. Time quantum here: 04 ms. Process (arrival time, service time): 1 (0, 8), 2 (1, 4), 3 (2, 9), 4 (3, 5). Gantt chart: P1 P2 P3 P4 P1 P3 P4 P3 with boundaries 0 4 8 12 16 20 24 25 26. The average turnaround time = ((20-0) + (8-1) + (26-2) + (25-3))/4 = 73/4 = 18.25 141
    • 135. PRIORITY BASED SCHEDULING• Assign each process a priority. Schedule highest priority first. All processes within same priority are FCFS.• Priority may be determined by user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage.• Starvation occurs if a low priority process never runs. Solution: build aging into a variable priority.• Delicate balance between giving favorable response for interactive jobs, but not starving batch jobs. 142
    • 136. ROUND ROBIN• Use a timer to cause an interrupt after a predetermined time. Preempts if a task exceeds its quantum.• Train of events 1. Dispatch 2. Time slice occurs OR process suspends on event 3. Put process on some queue and dispatch next• Use numbers to find queueing and residence times. (Use quantum.) 143
    • 137. ROUND ROBIN• Definitions:– Context Switch: Changing the processor from running one task (or process) to another. Implies changing memory.– Processor Sharing : Use of a small quantum such that each process runs frequently at speed 1/n.– Reschedule latency : How long it takes from when a process requests to run, until it finally gets control of the CPU. 144
    • 138. ROUND ROBIN • Choosing a time quantum– Too short - an inordinate fraction of the time is spent in context switches.– Too long - reschedule latency is too great. If many processes want the CPU, then it's a long time before a particular process can get the CPU. This then acts like FCFS.– Adjust so most processes won't use their whole slice. As processors have become faster, this is less of an issue. 145
    • 139. Round-Robin Scheduling (continued on the next slide)
    • 140. Multilevel Queue• Ready Queue partitioned into separate queues – Example: system processes, foreground (interactive), background (batch), student processes….• Each queue has its own scheduling algorithm – Example: foreground (RR), background(FCFS)• Processes assigned to one queue permanently.• Scheduling must be done between the queues – Fixed priority - serve all from foreground, then from background. Possibility of starvation. – Time slice - Each queue gets some CPU time that it schedules - e.g. 80% foreground(RR), 20% background (FCFS)
    • 141. Multilevel Queues
    • 142. MULTI-LEVEL QUEUES:• Each queue has its scheduling algorithm.• Then some other algorithm (perhaps priority based) arbitrates between queues.• Can use feedback to move between queues• Method is complex but flexible.• For example, could separate system processes, interactive, batch, favored, unfavored processes 149
    • 143. Multilevel Queue in Interactive Systems: a scheduling algorithm with four priority classes 150
    • 144. Scheduling in Real-Time SystemsReal Time Scheduling: •Hard real-time systems – required to complete a critical task within a guaranteed amount of time. •Soft real-time computing – requires that critical processes receive priority over less fortunate ones. 151
    • 145. Scheduling in Real-Time Systems. Schedulable real-time system• Given – m periodic events – event i occurs within period Pi and requires Ci seconds• Then the load can only be handled if ∑_{i=1}^{m} C_i / P_i ≤ 1. 152
    • 146. Scheduling in Real-Time Systems. Example: Events 01, 02, 03 with periods 100, 200, 500 and CPU times 50, 30, 100. Here, 50/100 + 30/200 + 100/500 = 0.5 + 0.15 + 0.2 = 0.85. The system is schedulable because ∑_{i=1}^{m} C_i / P_i = 0.85 ≤ 1. 153
    • 147. Policy versus Mechanism• Separate what is allowed to be done with how it is done – a process knows which of its children threads are important and need priority• Scheduling algorithm parameterized – mechanism in the kernel• Parameters filled in by user processes – policy set by user process 154
    • 148. Thread Scheduling (1)Possible scheduling of user-level threads• 50-msec process quantum• threads run 5 msec/CPU burst 155
    • 149. Thread Scheduling (2)Possible scheduling of kernel-level threads• 50-msec process quantum• threads run 5 msec/CPU burst 156
    • 150. FCFS. Process burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2. Order: P1, P2, P3, P4. Completion times (FCFS): P1 = 3, P2 = 9, P3 = 13, P4 = 15. Average waiting time = ( )/4 = ?
    • 151. Shortest Job First. Process burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2. Completion times (SJF order P4, P1, P3, P2): P4 = 2, P1 = 5, P3 = 9, P2 = 15. Average waiting time = ( )/4 = ?
    • 152. Priority Scheduling. Process (burst time, priority): P1 (3, 2), P2 (6, 4), P3 (4, 1), P4 (2, 3). Gantt chart: P3 | P1 | P4 | P2, with boundaries 0, 4, 7, 9, 15. Average waiting time = ?
    • 153. Round Robin Scheduling. Process burst times: P1 = 3, P2 = 6, P3 = 4, P4 = 2. Time quantum: 2 ms. Gantt chart: ? Average waiting time = ? (worked in the sketch below)
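To check the blanks left on slides 150-153, here is a small C program, a sketch assuming the four processes all arrive at time 0, that computes the average waiting time for a given non-preemptive execution order (covering the FCFS, SJF and priority orders above) and simulates round robin with a 2 ms quantum.

```c
#include <stdio.h>

#define N 4
static const int burst[N] = {3, 6, 4, 2};   /* P1..P4 from slides 150-153 */

/* Average waiting time when the processes run to completion in the given
 * order (covers FCFS, SJF and priority, all with arrival time 0).        */
static double nonpreemptive(const int order[N])
{
    int t = 0, total_wait = 0;
    for (int k = 0; k < N; k++) {
        total_wait += t;                    /* waiting = start time - arrival (0) */
        t += burst[order[k]];
    }
    return (double)total_wait / N;
}

/* Average waiting time under round robin with the given quantum.
 * With all arrivals at time 0, a circular scan over the unfinished
 * processes reproduces the ready-queue order.                            */
static double round_robin(int quantum)
{
    int left[N], finish[N], t = 0, done = 0;
    for (int i = 0; i < N; i++) left[i] = burst[i];
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (left[i] == 0) continue;
            int run = left[i] < quantum ? left[i] : quantum;
            t += run;
            left[i] -= run;
            if (left[i] == 0) { finish[i] = t; done++; }
        }
    }
    int total_wait = 0;
    for (int i = 0; i < N; i++) total_wait += finish[i] - burst[i];
    return (double)total_wait / N;
}

int main(void)
{
    const int fcfs[N] = {0, 1, 2, 3};       /* P1, P2, P3, P4                  */
    const int sjf[N]  = {3, 0, 2, 1};       /* P4, P1, P3, P2                  */
    const int prio[N] = {2, 0, 3, 1};       /* P3, P1, P4, P2 (priority order) */
    printf("FCFS     avg waiting time: %.2f\n", nonpreemptive(fcfs));
    printf("SJF      avg waiting time: %.2f\n", nonpreemptive(sjf));
    printf("Priority avg waiting time: %.2f\n", nonpreemptive(prio));
    printf("RR (q=2) avg waiting time: %.2f\n", round_robin(2));
    return 0;
}
```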
