scheduling

  • We can classify multiprocessor systems as follows: • Loosely coupled or distributed multiprocessor, or cluster: Consists of a collection of relatively autonomous systems, each processor having its own main memory and I/O channels. • Functionally specialized processors: An example is an I/O processor. In this case, there is a master, general-purpose processor; specialized processors are controlled by the master processor and provide services to it. • Tightly coupled multiprocessing : Consists of a set of processors that share a common main memory and are under the integrated control of an operating system.
  • The kernel can execute on any processor, and each processor does self-scheduling from the pool of available processes. This approach complicates the operating system. The operating system must ensure that two processors do not choose the same process and that the processes are not somehow lost from the queue. Techniques must be employed to resolve and synchronize competing claims to resources.
  • All deadlocks involve conflicting needs for resources by two or more processes. A common example is the traffic deadlock. The typical rule of the road in the United States is that a car at a four-way stop should defer to the car immediately to its right. This rule works if there are only two or three cars at the intersection. If all four cars arrive at about the same time, each will refrain from entering the intersection; this is a potential deadlock. The deadlock is only potential, not actual, because the necessary resources are available for any of the cars to proceed; if one car eventually does proceed, it can do so.
  • But if all four cars ignore the rule and proceed (cautiously) into the intersection at the same time, then each car seizes one resource (one quadrant) but cannot proceed because the required second resource has already been seized by another car. This is an actual deadlock.
  • Example of copying file from tape drive to disk & then printing it.

    1. 1. Unit III Scheduling 1
    2. 2. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 2
    3. 3. Uniprocessor (CPU) Scheduling The problem: Scheduling the usage of a single processor among all the existing processes in the system The goal is to achieve:  High processor utilization  High throughput  number of processes completed per unit time  Low response time  time elapsed from the submission of a request to the beginning of the response 3
    4. 4. Scheduling Objectives The scheduling function should • Share time fairly among processes • Prevent starvation of a process • Use the processor efficiently • Have low overhead • Prioritise processes when necessary (e.g. real-time deadlines) 4
    5. 5. 5
    6. 6. Scheduling and Process State Transitions 6
    7. 7. Scheduling and Process State Transitions 7
    8. 8. Nesting of Scheduling Functions 8
    9. 9. Queuing Diagram for Scheduling 9
    10. 10. Long-Term Scheduling: (job scheduler) Selects which processes should be brought into the ready queue May be first-come-first-served Or according to criteria such as priority, I/O requirements or expected execution time Controls the degree of multiprogramming If more processes are admitted  it is less likely that all processes will be blocked  better CPU usage  but each process gets a smaller fraction of the CPU The long-term scheduler will attempt to keep a mix of processor-bound and I/O-bound processes 10
    11. 11. Medium-Term Scheduling (Swapper) Part of the swapping function Swapping decisions are based on the need to manage multiprogramming Done by memory management software 11
    12. 12. Short-Term Scheduling: (CPU scheduler) Selects which process should be executed next and allocates the CPU The short-term scheduler is known as the dispatcher Executes most frequently Is invoked on an event that may lead to choosing another process for execution:  clock interrupts  I/O interrupts  operating system calls and traps  signals 12
    13. 13.  Processes can be described as either:  I/O-bound process – spends more time doing I/O than computations, many short CPU bursts  CPU-bound process – spends more time doing computations; few very long CPU bursts 13
    14. 14. Short-Term Scheduling Criteria User-oriented: relate to the behavior of the system as perceived by the individual user or process  Response Time: elapsed time from the submission of a request to the beginning of the response  Turnaround Time: elapsed time from the submission of a process to its completion System-oriented: focused on effective and efficient utilization of the processor  Processor utilization: keep the CPU as busy as possible  Fairness  Throughput: number of processes completed per unit time  Waiting time – amount of time a process has been waiting in the ready queue 14
    15. 15. Interdependent Scheduling Criteria 15
    16. 16. Interdependent Scheduling Criteria 16
    17. 17. Scheduling Algorithm Optimization Criteria  Max CPU utilization  Max throughput  Min turnaround time  Min waiting time  Min response time 17
    18. 18. Characterization of Scheduling Policies The selection function: determines which process in the ready queue is selected next for execution. The decision mode: specifies the instants in time at which the selection function is exercised.  Nonpreemptive  Once a process is in the running state, it will continue until it terminates or blocks itself for I/O  Preemptive  The currently running process may be interrupted and moved to the Ready state by the OS  Allows for better service since no single process can monopolize the processor for very long 18
    19. 19. Priorities The scheduler will always choose a process of higher priority over one of lower priority Have multiple ready queues to represent each level of priority Lower-priority processes may suffer starvation  Allow a process to change its priority based on its age or execution history 19
    20. 20. FCFS approach 20
    21. 21. Contents Uniprocessor Scheduling: Types of Scheduling:Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity Design Issues, Process Scheduling Deadlock: Principles of deadlock, DeadlockAvoidance Deadlock Detection, Deadlock Prevention Deadlock Recovery OS Services layer in the Mobile OS: CommsServices 21
    22. 22. Running example to discuss various scheduling policies
        Process   Arrival Time   Service/Burst Time
        1         0              3
        2         2              6
        3         4              4
        4         6              5
        5         8              2
      • Service time = total processor time needed in one (CPU-I/O) cycle
      • Jobs with long service time are CPU-bound jobs and are referred to as “long jobs” 22
    23. 23. First Come First Served (FCFS) The simplest scheduling policy is first-come-first-served (FCFS), also called first-in-first-out (FIFO) or a strict queuing scheme. As each process becomes ready, it joins the ready queue. When the currently running process ceases to execute, the process that has been in the ready queue the longest is selected for running. Selection function: the process that has been waiting the longest in the ready queue (hence, FCFS) Decision mode: nonpreemptive, a process runs until it blocks itself 23
    24. 24. FCFS Drawbacks A short process may have to wait a very long time before it can execute. A process that does not perform any I/O will monopolize the processor. Favors CPU-bound processes  I/O-bound processes have to wait until the CPU-bound process completes  They may have to wait even when their I/O has completed (poor device utilization)  we could have kept the I/O devices busy by giving more priority to I/O-bound processes. 24
    25. 25. FCFS Drawbacks FCFS is not an attractive alternative on its own for a uniprocessor system. However, it is often combined with a priority scheme to provide an effective scheduler. Thus, the scheduler may maintain a number of queues, one for each priority level, and dispatch within each queue on a first-come-first-served basis.
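Not part of the original slides: a minimal Python sketch of FCFS on the running example from slide 22, assuming no I/O (each process runs to completion once dispatched). The process names and helper function are illustrative.

```python
# FCFS sketch using the running example (name, arrival, service) from slide 22.
processes = [("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)]

def fcfs(procs):
    time = 0
    results = []
    for name, arrival, service in sorted(procs, key=lambda p: p[1]):
        start = max(time, arrival)                 # wait for the CPU (and for arrival)
        finish = start + service                   # nonpreemptive: run to completion
        results.append((name, finish - arrival))   # turnaround time
        time = finish
    return results

for name, turnaround in fcfs(processes):
    print(name, "turnaround =", turnaround)
```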
    26. 26. Round-Robin Selection function: same as FCFS Decision mode: preemptive  a process is allowed to run until the time slice period (quantum, typically from 10 to 100 ms) has expired  then a clock interrupt occurs and the running process is put on the ready queue 26
    27. 27. Round-Robin • A clock interrupt is generated at periodic intervals • When an interrupt occurs, the currently running process is placed in the ready queue • The next ready job is selected. • The principal design issue is the length of the time quantum, or slice, to be used. • If the quantum is very short, then short processes will move through the system relatively quickly. • BUT there is processing overhead involved in handling the clock interrupt and performing the scheduling and dispatching function. • Thus, a very short time quantum should be avoided.
    28. 28. Time Quantum for Round Robin Must be substantially larger than the time required to handle the clock interrupt and dispatching Should be larger than the typical interaction (but not much more to avoid penalizing I/O bound processes) 28
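Not part of the original slides: a sketch of round robin with a configurable quantum, run on the same example process set; it ignores I/O and dispatch overhead, and the conventions (e.g. new arrivals join ahead of a preempted process) are assumptions.

```python
from collections import deque

# Round-robin sketch: procs is a list of (name, arrival, service), q is the quantum.
def round_robin(procs, q):
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {name: service for name, _, service in procs}
    finish, ready, time, i = {}, deque(), 0, 0
    while remaining:
        while i < len(procs) and procs[i][1] <= time:   # admit newly arrived processes
            ready.append(procs[i][0]); i += 1
        if not ready:                                   # CPU idle until the next arrival
            time = procs[i][1]; continue
        name = ready.popleft()
        run = min(q, remaining[name])                   # run for at most one quantum
        time += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= time:   # arrivals during the slice
            ready.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            del remaining[name]; finish[name] = time
        else:
            ready.append(name)                          # preempted: back to the tail
    return finish

print(round_robin([("P1", 0, 3), ("P2", 2, 6), ("P3", 4, 4), ("P4", 6, 5), ("P5", 8, 2)], q=1))
```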
    29. 29. Round Robin: Critique Still favors CPU-bound processes  An I/O-bound process uses the CPU for a time less than the time quantum, then is blocked waiting for I/O  A CPU-bound process runs for all of its time slice and is put back into the ready queue (thus getting in front of blocked processes) A solution: virtual round robin  When an I/O has completed, the blocked process is moved to an auxiliary queue, which gets preference over the main ready queue  A process dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time it spent running when it was last selected from the ready queue 29
    30. 30. Queuing for Virtual Round Robin 30
    31. 31. Shortest Process Next (SPN) (SJF-Nonpreemptive) Selection function: the process with the shortest expected CPU burst time Decision mode: nonpreemptive I/O bound processes will be picked first We need to estimate the required processing time (CPU burst time) for each process 31
    32. 32. Shortest Process Next: Critique Possibility of starvation for longer processes as long as there is a steady supply of shorter processes Lack of preemption is not suited to a time-sharing environment  A CPU-bound process gets lower priority (as it should), but a process doing no I/O could still monopolize the CPU if it is the first one to enter the system SPN implicitly incorporates priorities: shortest jobs are given preference The next (preemptive) algorithm directly penalizes longer jobs 32
    33. 33. Shortest Remaining Time (SJF-Preemptive) Preemptive version of the shortest process next policy Must estimate processing time 33
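The slides note that SPN and SRT need an estimate of each process's next CPU burst. A commonly used estimator in the standard textbooks is exponential averaging; the sketch below is illustrative, and the smoothing coefficient and initial estimate are assumptions.

```python
# Exponential averaging:  S(n+1) = alpha * T(n) + (1 - alpha) * S(n)
# where T(n) is the measured length of the n-th burst and S(n) the previous estimate.
def next_burst_estimate(measured_bursts, alpha=0.5, initial_estimate=10.0):
    estimate = initial_estimate
    for t in measured_bursts:
        estimate = alpha * t + (1 - alpha) * estimate
    return estimate

print(next_burst_estimate([6, 4, 6, 4, 13, 13, 13]))  # illustrative burst history
```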
    34. 34. Priority Scheduling A priority number (integer) is associated with each process. The CPU is allocated to the process with the highest priority. Some systems have a high number represent high priority; other systems have a low number represent high priority. The text uses a low number to represent high priority. Priority scheduling may be preemptive or nonpreemptive.
    35. 35. Assigning Priorities SJF is priority scheduling where the priority is the predicted next CPU burst time. Other bases for assigning priority: Memory requirements Number of open files Avg I/O burst / Avg CPU burst External requirements (amount of money paid, political factors, etc.) Problem: Starvation – low-priority processes may never execute. Solution: Aging – as time progresses, increase the priority of the process.
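Not part of the original slides: a minimal sketch of how aging can be folded into the selection step (lower number = higher priority, as in the text). The process names and the aging rate are illustrative assumptions.

```python
# Pick the next process using an effective priority that improves with waiting time.
def pick_next(ready, now, aging_rate=0.1):
    # effective priority decreases (improves) the longer a process has waited
    return min(ready, key=lambda p: p["priority"] - aging_rate * (now - p["enqueued"]))

ready = [
    {"name": "A", "priority": 5, "enqueued": 0},    # low priority, but waiting a long time
    {"name": "B", "priority": 1, "enqueued": 90},   # high priority, recently enqueued
    {"name": "C", "priority": 8, "enqueued": 0},
]
print(pick_next(ready, now=100)["name"])   # aging lets the long-waiting A overtake B
```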
    36. 36. First-Come, First-Served (FCFS) Scheduling Process that requests the CPU first is allocated the CPU first. Easily managed with a FIFO queue. Often the average waiting time is long.
        Process   Burst Time
        P1        24
        P2        3
        P3        3
      Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30.
    37. 37. Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 from 0 to 3, P3 from 3 to 6, P1 from 6 to 30.
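A quick check of the two orderings above (not in the original slides): with order P1, P2, P3 the waiting times are 0, 24 and 27, so the average waiting time is (0 + 24 + 27) / 3 = 17; with order P2, P3, P1 they are 0, 3 and 6, giving (0 + 3 + 6) / 3 = 3. Short processes stuck behind one long process greatly inflate the average (the convoy effect).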
    38. 38. Example of Non-Preemptive SJF
        Process   Arrival Time   Burst Time
        P1        0.0            7
        P2        2.0            4
        P3        4.0            1
        P4        5.0            4
      SJF (non-preemptive): P1 from 0 to 7, P3 from 7 to 8, P2 from 8 to 12, P4 from 12 to 16
    39. 39. Example of Preemptive SJF
        Process   Arrival Time   Burst Time
        P1        0.0            7
        P2        2.0            4
        P3        4.0            1
        P4        5.0            4
      • SJF (preemptive)
    40. 40. Example of Preemptive SJF
        Process   Arrival Time   Burst Time
        P1        0.0            7
        P2        2.0            4
        P3        4.0            1
        P4        5.0            4
      SJF (preemptive): P1 from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, P1 from 11 to 16
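Not part of the original slides: a short simulation that reproduces the preemptive SJF (shortest-remaining-time) schedule above, assuming burst times are known exactly and advancing the clock one time unit at a time.

```python
# Preemptive SJF (shortest remaining time first) sketch for the example above.
def srtf(procs):                                   # procs: list of (name, arrival, burst)
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: a for name, a, _ in procs}
    time, timeline = 0, []
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                              # CPU idle until the next arrival
            time = min(a for n, a in arrival.items() if n in remaining)
            continue
        name = min(ready, key=lambda n: remaining[n])   # shortest remaining time
        timeline.append((time, name))
        remaining[name] -= 1                       # re-evaluate the choice every time unit
        time += 1
        if remaining[name] == 0:
            del remaining[name]
    return timeline

print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```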
    41. 41. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 41
    42. 42. Multiprocessor and Real-Time Scheduling
    43. 43. Introduction• When a computer system contains more than a single processor, several new issues are introduced into the design of scheduling functions.• We will examine these issues and the details of scheduling algorithms for tightly coupled multi-processor systems.
    44. 44. Classifications of Multiprocessor Systems1. Loosely coupled, distributed multiprocessors, or clustersFairly autonomous systems.Each processor has its own memory and I/O channels2. Functionally specialized processorsTypically, specialized processors are controlled by a mastergeneral-purpose processor and provide services to it. An examplewould be an I/O processor.3. Tightly coupled multiprocessingConsists of a set of processors that share a common main memoryand are under the integrated control of an operating system.We’ll be most concerned with this group.
    45. 45. Multiprocessor Systems [Figure: three organizations built around an interconnection network — shared nothing (loosely coupled: each processor has its own memory and disks), shared disk, and shared global memory (tightly coupled)]
    46. 46. Granularity • A good metric for characterizing multiprocessors and placing them in context with other architectures is to consider the synchronization granularity, or frequency of synchronization, between processes in a system. • Five categories, differing in granularity: 1. Independent Parallelism 2. Coarse Parallelism 3. Very Coarse-Grained Parallelism 4. Medium-Grained Parallelism 5. Fine-Grained Parallelism
    47. 47. Independent Parallelism • No explicit synchronization among processes • Each represents a separate, independent application or job. • A typical use of this type of parallelism is in a time-sharing system. • Each user is performing a particular application, such as word processing or using a spreadsheet. • The multiprocessor provides the same service as a multiprogrammed uniprocessor. • Because more than one processor is available, average response time to the users will be less. Synchronization interval (instructions): N/A
    48. 48. Coarse and Very Coarse-Grained Parallelism• With coarse and very coarse grained parallelism, there is synchronization among processes, but at a very gross level.• This type of situation is easily handled as a set of concurrent processes running on a multi-programmed uni-processor and can be supported on a multiprocessor with little or no change to user software.• In general, any collection of concurrent processes that need to communicate or synchronize can benefit from the use of a multiprocessor architecture.• In the case of very infrequent interaction among the processes, a distributed system can provide good support. However, if the interaction is somewhat more frequent, then the overhead of communication across the network may negate some of the potential speedup. In that case, the multiprocessor organization provides the most effective support.
    49. 49. Medium-Grained Parallelism• A single application can be effectively implemented as a collection of threads within a single process.• In this case, the programmer must explicitly specify the potential parallelism of an application.• Threads usually interact frequently, affecting the performance of the entire application Synchronization interval (instruction) :20 -200
    50. 50. Fine-Grained Parallelism Highly parallel applications: Fine-grained parallelism represents a much more complex use of parallelism than is found in the use of threads. Synchronization interval (instructions): < 20
    51. 51. Synchronization Granularity and Processes
    52. 52. Design Issues Scheduling on a multiprocessor involves three interrelated issues: Assignment of processes to processors. Use of multiprogramming on individual processors. Actual dispatching of a process.
    53. 53. Assignment of processes to processors • If we assume that the architecture of the multiprocessor is uniform, in the sense that no processor has a particular physical advantage with respect to access to main memory or I/O devices, then the simplest scheduling protocol is to treat processors as a pooled resource and assign processes to processors on demand. • The question then arises as to whether the assignment should be static or dynamic.
    54. 54. Assignment of processes to processors • If a process is permanently assigned (static assignment) to a processor from activation until completion, then a dedicated short-term queue is maintained for each processor. • Allows for group or gang scheduling (details later). • Advantage: less overhead – processor assignment occurs only once. • Disadvantage: one processor could be idle (has an empty queue) while another processor has a backlog. To prevent this situation from arising, a common queue can be utilized. In this case, all processes go into one global queue and are scheduled to any available processor. Thus, over the life of a process, it may be executed on several different processors at different times.
    55. 55. Assignment of processes to processors• Regardless of whether processes are dedicated to processors, some mechanism is needed to assign processes to processors.• Two approaches have been used: master/slave and peer.1. Master/slave architecture – Key kernel functions always run on a particular processor. The other processors can only execute user programs. – Master is responsible for scheduling jobs. – Slave sends service request to the master. – Advantages • Simple, requires little enhancement to a uniprocessor multiprogramming OS. • Conflict resolution is simple since one processor has control of all memory and I/O resources. – Disadvantages • Failure of the master brings down whole system • Master can become a performance bottleneck
    56. 56. Assignment of processes to processors2. Peer architecture – The OS kernel can execute on any processor. – Each processor does self-scheduling from the pool of available processes. – Advantages: • All processors are equivalent. • No one processor should become a bottleneck in the system. – Disadvantage: • Complicates the operating system • The OS must make sure that two processors do not choose the same process and that the processes are not somehow lost from the queue. • Techniques must be employed to resolve and synchronize competing claims for resources.
    57. 57. Multiprogramming at each processor Completion time and other application-related performance metrics are much more important than processor utilization in a multiprocessor environment. For example, a multi-threaded application may require all its threads to be assigned to different processors for good performance. Static or dynamic allocation of processes.
    58. 58. Process dispatching After assignment, deciding who is selected from among the pool of waiting processes --- process dispatching. Single-processor multiprogramming strategies may be counterproductive here. Priorities and process history may not be sufficient.
    59. 59. Process scheduling A single queue of processes or, if multiple priority levels are used, multiple priority queues, all feeding into a common pool of processors. The specific scheduling policy has less and less effect as the number of processors increases. Conclusion: use FCFS with priority levels.
    60. 60. Thread scheduling An application can be implemented as a set of threads that cooperate and execute concurrently in the same address space. Criteria: when related threads run in parallel, performance improves. Load sharing: processes are not assigned to a particular processor. A global queue of ready threads is maintained and an idle processor selects a thread from the queue. Gang scheduling: a bunch of related threads scheduled together to run on a set of processors at the same time, on a one-to-one basis.
    61. 61. Thread scheduling Dedicated processor assignment: Each program gets as many processors as there are parallel threads. Dynamic scheduling: Scheduling done at run time.
    62. 62. Load sharing: Advantages: • The load is distributed evenly across the processors, assuring that no processor is idle. • No centralized scheduler is required Three versions of load sharing: • FCFS • Smallest number of threads first • Preemptive smallest number of threads first
    63. 63. Real-time systems Real-time computing is an important emerging discipline in CS and CE. Control of lab experiments, robotics, process control, telecommunications, etc. It is a type of computing where the correctness of the computation depends not only on the logical results but also on the time at which the results are produced. Hard real-time systems: must meet the deadline. Ex: space shuttle rendezvous with a space station. Soft real-time systems: deadlines exist but are not mandatory; results are discarded if the deadline is not met.
    64. 64. Characteristics of Real-Time (RT) systems Determinism Responsiveness User control Reliability Fail-soft operation
    65. 65. Deterministic Response External events and timings dictate the request of service. The OS’s response depends on the speed at which it can respond to interrupts and on whether the system has sufficient capacity to handle requests. Determinism is concerned with how long the OS delays before acknowledging an interrupt. In a non-RT OS this delay may be on the order of tens to hundreds of milliseconds, whereas in an RT OS it may range from a few microseconds to one millisecond.
    66. 66. RT .. Responsiveness Responsiveness is the time for servicing the interrupt once it has been acknowledged. Comprises:  Time to transfer control (and context switch) and execute the ISR  Time to handle nested interrupts, i.e., higher-priority interrupts that must be serviced while executing this ISR. response time = F(responsiveness, determinism)
    67. 67. RT .. User Control User control: the user has much broader control in an RT OS than in a regular OS, e.g., over priorities and hard or soft deadlines.
    68. 68. RT .. Reliability Reliability: A processor failure in a non-RT may result in reduced level of service. But in an RT it may be catastrophic : life and death, financial loss, equipment damage.
    69. 69. Fail-soft operation: Fail-soft operation: the ability of the system to fail in such a way as to preserve as much capability and data as possible. In the event of a failure, immediate detection and correction are important. Notify user processes to roll back.
    70. 70. Requirements of RT Fast context switch Minimal functionality (small size) Ability to respond to interrupts quickly (Special interrupts handlers) Multitasking with signals and alarms Special storage to accumulate data fast Preemptive scheduling
    71. 71. Requirements of RT (contd.) Priority levels Minimizing interrupt disables Short-term scheduler (“omni-potent”) Time monitor Goal: Complete all hard real-time tasks by deadline. Complete as many soft real-time tasks as possible by their deadline.
    72. 72. RT scheduling Static table-driven approach Static priority-driven preemptive scheduling Dynamic planning-based scheduling Dynamic best-effort scheduling:
    73. 73. RT scheduling Static table-driven approach  For periodic tasks.  Input for analysis consists of : periodic arrival time, execution time, ending time, priorities.  Inflexible to dynamic changes.  General policy: earliest deadline first.
    74. 74. RT scheduling Static priority-driven preemptive scheduling  For use with non-RT systems: Priority based preemptive scheduling.  Priority assignment based on real-time constraints.  Example: Rate monotonic algorithm
    75. 75. RT scheduling Dynamic planning-based scheduling  After the task arrives before execution begins, a schedule is prepared that includes the new as well as the existing tasks.  If the new one can go without affecting the existing schedules than nothing is revised.  Else schedules are revised to accommodate the new task.  Remember that sometimes new tasks may be rejected if deadlines cannot be met.
    76. 76. RT scheduling Dynamic best-effort scheduling:  used in most commercial RTs of today  tasks are aperiodic, no static scheduling is possible  some short-term scheduling such as shortest deadline first is used.  Until the task completes we do not know whether it has met the deadline.
    77. 77. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Prevention, Deadlock Avoidance (Operating System Concepts by Galvin, Gagne), Deadlock Detection, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 77
    78. 78. DEADLOCKS EXAMPLES: “It takes money to make money.” You can’t get a job without experience; you can’t get experience without a job. BACKGROUND: The cause of deadlocks: each process needing what another process has. This results from sharing resources such as memory, devices, links. Under normal operation, resource allocation proceeds like this: – Request a resource (suspend until available if necessary). – Use the resource. – Release the resource.
    79. 79. Potential Deadlock [Figure: four cars approach a four-way intersection; each announces the two quadrants it needs, e.g. A and B, B and C, and so on around the intersection]
    80. 80. Actual Deadlock [Figure: all four cars have entered the intersection; each holds one quadrant and halts until the quadrant held by its neighbour is free]
    81. 81. Example of deadlock: Dining Philosophers Problem
    82. 82. DEADLOCKS Permanent blocking of a single or set of processes, competing for system resources or may want to cooperate for communication. Formal definition : A set of processes is deadlocked if each process in the set is waiting for an event that only another process in the set can cause Usually the event is release of a currently held resource. Generally it is because of the conflicting needs of different processes. There is no general solution to solve it completely.
    83. 83. Resource Categories Two general categories of resources: 1. Reusable – can be safely used by only one process at a time and is not depleted by that use. Examples: processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores. Deadlock occurs if each process holds one resource and requests the other. 2. Consumable – one that can be created (produced) and destroyed (consumed), such as interrupts, signals, messages, and information in I/O buffers. Deadlock may occur if a Receive message is blocking. May take a rare combination of events to cause deadlock
    84. 84. Resource Categories Resources can be • physical: printer, tape drive, CPU cycles • logical: file, semaphore, monitors  Preemptable resources  can be taken away from a process with no ill effects  Ex: memory, CPU  Nonpreemptable resources  will cause the process to fail if taken away  Ex: printer
    85. 85. System Model Resource types R1, R2, . . ., Rm CPU cycles, memory space, I/O devices Each resource type Ri has Wi instances. Each process utilizes a resource as follows:  request  use  release
    86. 86. Deadlock Characterization Deadlock can arise if four conditions hold simultaneously. 1. Mutual exclusion: only a single process is allowed to use the resource. 2. Hold and wait: existence of a process holding at least one resource and waiting to acquire additional resources currently held by other processes. 3. No preemption: no resource can be removed forcibly from a process. 4. Circular wait: processes waiting for resources held by other waiting processes.
    87. 87. Resource-Allocation Graph Directed graph to describe deadlocks A set of vertices V and a set of edges E. V is partitioned into two types:  P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.  R = {R1, R2, …, Rm}, the set consisting of all resource types in the system. request edge – directed edge Pi → Rj assignment edge – directed edge Rj → Pi
    88. 88. Resource-Allocation Graph (Symbols) Def: overall view of the processes holding or waiting for the various resources in the system. Used for a deadlock-free resource allocation strategy • Process node Pi • Resource node Rj • Assignment edge • Request edge
    89. 89. Example of a Resource allocation graph
    90. 90. A cycle representing a deadlock
    91. 91. Resource Allocation Graph With A Deadlock
    92. 92. Graph With A Cycle But No Deadlock
    93. 93. Basic Facts If the graph contains no cycles ⇒ no deadlock. If the graph contains a cycle ⇒  if only one instance per resource type, then deadlock  if several instances per resource type, possibility of deadlock. • If there is a single instance of each resource, then a cycle in the resource graph is a necessary and sufficient condition for the existence of a deadlock. • If each resource type has multiple instances, then a cycle is a necessary but not sufficient condition for the existence of a deadlock.
    94. 94. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 94
    95. 95. Methods for Handling Deadlocks Ensure that the system will never enter a deadlock state. (Deadlock prevention /avoidance) Allow the system to enter a deadlock state and then recover. (Deadlock detection & recovery) Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX.
    96. 96. Deadlock prevention Design the system in such a way that the possibility of deadlock is excluded. Two main methods 1. Indirect – prevent the occurrence of one of the three necessary conditions (mutual exclusion, hold and wait, no preemption) 2. Direct – prevent circular waits
    97. 97. Deadlock Prevention Restrain the ways requests can be made. Mutual Exclusion – not required for sharable resources; must hold for non-sharable resources. In general, mutual exclusion cannot be disallowed: if access to a resource requires mutual exclusion, then mutual exclusion must be supported by the OS. Some resources, such as files, may allow multiple accesses for reads but only exclusive access for writes. Even in this case, deadlock can occur if more than one process requires write permission.
    98. 98. Deadlock PreventionHold and Wait – must guarantee that whenever a process requests a resource, it does not hold any other resources – Requires each process to request and be allocated all its resources before it begins execution, – Allow process to request resources only when the process has none Disadvantages: – Low resource utilization – starvation is possible – process may not know in advance all of the resources that it will require.
    99. 99. Deadlock Prevention No Preemption – 1. If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources currently being held are released. Preempted resources are added to the list of resources for which the process is waiting. The process will be restarted only when it can regain its old resources, as well as the new ones that it is requesting 2. If a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources  Ex: CPU registers, memory space but not printers/tape drives.
    100. 100. Deadlock Prevention Circular Wait – impose a total ordering of all resource types, and require that each process requests resources in an increasing order of enumeration. If a process holds a resource type whose number is i, it may request a resource of a type having number j only if j > i. Issue only one request for several units of the same resource. Ordering should follow the usage pattern of resources. Ex: the tape drive should have a lower number than the printer, as it is usually required before the printer. Ordering is done in the program, so reordering requires reprogramming.
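Not part of the original slides: a minimal sketch of enforcing the circular-wait rule in code by always acquiring locks in a fixed global order. The resource numbering and lock names are illustrative assumptions.

```python
import threading

# Assign each resource a fixed number and always acquire in increasing order,
# so a circular wait cannot form.
tape_drive = (1, threading.Lock())     # lower number: typically needed earlier
printer    = (2, threading.Lock())

def acquire_in_order(*resources):
    for _, lock in sorted(resources, key=lambda r: r[0]):
        lock.acquire()

def release_all(*resources):
    for _, lock in resources:
        lock.release()

acquire_in_order(printer, tape_drive)  # acquired as tape_drive then printer, regardless of call order
# ... copy the file from tape to disk, then print it ...
release_all(printer, tape_drive)
```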
    101. 101. Deadlock Avoidance Requires that the system has some additional a priori information available Simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need. The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition. Resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes
    102. 102. Basic Facts If a system is in safe state ⇒ no deadlocks If a system is in unsafe state ⇒ possibility of deadlock Avoidance ⇒ ensure that a system will never enter an unsafe state.
    103. 103. Safe, Unsafe , Deadlock State
    104. 104. Safe State When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes. Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources + the resources held by all the Pj, with j < i.  If Pi’s resource needs are not immediately available, then Pi can wait until all Pj have finished.  When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and terminate.  When Pi terminates, Pi+1 can obtain its needed resources, and so on.
    105. 105. Avoidance algorithms Single instance of a resource type  Use a resource-allocation graph Multiple instances of a resource type  Use the banker’s algorithm
    106. 106. Resource-Allocation Graph Scheme Claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future. Similar to a request edge in direction, but represented by a dashed line. When a process requests a resource, the claim edge converts to a request edge. A request edge is converted to an assignment edge when the resource is allocated to the process. When a resource is released by a process, the assignment edge reconverts to a claim edge. Resources must be claimed a priori in the system
    107. 107. Resource-Allocation Graph
    108. 108. Unsafe State In Resource-Allocation Graph
    109. 109. Resource-Allocation Graph Algorithm Suppose that process Pi requests a resource Rj. The request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph. Time complexity: detecting a cycle in a graph requires on the order of n2 operations, where n is the number of processes in the system.
    110. 110. Banker’s Algorithm Multiple instances of each resource type. Each process must claim its maximum use a priori. Every process declares its maximum need/requirement. The maximum requirement should not exceed the total number of resources in the system. When a process requests a resource, the system determines whether the allocation will keep the system in a safe state. If it will, the resources are allocated; otherwise the process must wait until resources become available. When a process gets all its resources it must return them in a finite amount of time.
    111. 111. Data Structures for the Banker’s Algorithm Let n = number of processes, and m = number of resources types Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj available Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource type Rj Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of Rj Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task. Need [i, j] = Max[i, j] – Allocation [i, j]
    112. 112. Safety Algorithm 1. Let Work and Finish be vectors of length m and n, respectively. Initialize: Work = Available; Finish[i] = false for i = 0, 1, …, n-1 2. Find an i such that both: (a) Finish[i] = false (b) Needi ≤ Work. If no such i exists, go to step 4 3. Work = Work + Allocationi; Finish[i] = true; go to step 2 4. If Finish[i] == true for all i, then the system is in a safe state. (The algorithm requires on the order of m x n2 operations to decide whether a state is safe.)
    113. 113. Resource-Request Algorithm Requesti = request vector for process Pi. If Requesti[j] = k then process Pi wants k instances of resource type Rj 1. If Requesti ≤ Needi go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim 2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available 3. Pretend that the system has allocated the requested resources to Pi by modifying the state as follows: Available = Available – Requesti; Allocationi = Allocationi + Requesti; Needi = Needi – Requesti. If safe ⇒ the resources are allocated to Pi. If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
    114. 114. Example of Banker’s Algorithm 5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances). Snapshot at time T0:
                Allocation   Max     Available
                A B C        A B C   A B C
        P0      0 1 0        7 5 3   3 3 2
        P1      2 0 0        3 2 2
        P2      3 0 2        9 0 2
        P3      2 1 1        2 2 2
        P4      0 0 2        4 3 3
    115. 115. Example (Cont.) The content of the matrix Need is defined to be Max – Allocation
                Need
                A B C
        P0      7 4 3
        P1      1 2 2
        P2      6 0 0
        P3      0 1 1
        P4      4 3 1
      The system is in a safe state since the sequence <P1, P3, P4, P0, P2> or <P1, P3, P4, P2, P0> satisfies the safety criteria
    116. 116. 1. P1: Need (1 2 2) ≤ Available (3 3 2), so Available = Available + Allocation = (3 3 2) + (2 0 0) = (5 3 2) 2. P3: (0 1 1) ≤ (5 3 2), Available = (5 3 2) + (2 1 1) = (7 4 3) 3. P4: (4 3 1) ≤ (7 4 3), Available = (7 4 3) + (0 0 2) = (7 4 5) 4. P0: (7 4 3) ≤ (7 4 5), Available = (7 4 5) + (0 1 0) = (7 5 5) 5. P2: (6 0 0) ≤ (7 5 5), Available = (7 5 5) + (3 0 2) = (10 5 7)
    117. 117. Example: P1 Requests (1,0,2) Check that Request ≤ Available, that is, (1,0,2) ≤ (3,3,2) ⇒ true. The state after pretending to allocate:
                Allocation   Need    Available
                A B C        A B C   A B C
        P0      0 1 0        7 4 3   2 3 0
        P1      3 0 2        0 2 0
        P2      3 0 2        6 0 0
        P3      2 1 1        0 1 1
        P4      0 0 2        4 3 1
      Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety requirement. Can a request for (3,3,0) by P4 be granted? Can a request for (0,2,0) by P0 be granted?
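Not part of the original slides: a compact sketch of the safety and resource-request checks, following the data structures on slides 111-113 and using the example snapshot from slide 114. The function names are illustrative.

```python
# Banker's algorithm: safety check plus a resource-request check.
def is_safe(available, allocation, need):
    work = available[:]
    finish = [False] * len(allocation)
    sequence = []
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # Pi finishes, releases
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return all(finish), sequence

def request(i, req, available, allocation, need):
    if any(r > n for r, n in zip(req, need[i])) or any(r > a for r, a in zip(req, available)):
        return False
    # pretend to allocate (on copies), then test safety
    available = [a - r for a, r in zip(available, req)]
    allocation = [row[:] for row in allocation]; need = [row[:] for row in need]
    allocation[i] = [a + r for a, r in zip(allocation[i], req)]
    need[i] = [n - r for n, r in zip(need[i], req)]
    safe, _ = is_safe(available, allocation, need)
    return safe

allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maximum    = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
available  = [3,3,2]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]

print(is_safe(available, allocation, need))              # safe; one sequence is [1, 3, 4, 0, 2]
print(request(1, [1,0,2], available, allocation, need))  # P1's request (1,0,2) can be granted
```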
    118. 118. Deadlock Detection An algorithm that examines the state of the system to determine whether a deadlock has occurred An algorithm to recover from deadlock.
    119. 119. Single Instance of Each Resource Type Maintain a wait-for graph  Nodes are processes  Pi → Pj if Pi is waiting for Pj. Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock. An algorithm to detect a cycle in a graph requires on the order of n2 operations, where n is the number of vertices/processes in the graph
    120. 120. Resource-Allocation Graph and Wait-for Graph
    121. 121. Resource-Allocation Graph and Wait-for GraphResource-Allocation Graph Corresponding wait-for graph
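Not part of the original slides: for the single-instance case, detection reduces to finding a cycle in the wait-for graph; a standard depth-first-search check is sketched below. The example graph is illustrative.

```python
# Detect a cycle in a wait-for graph (directed graph: Pi -> Pj means Pi waits for Pj).
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:        # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"], "P4": []}
print(has_cycle(wait_for))   # True: P1 -> P2 -> P3 -> P1 is a deadlock cycle
```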
    122. 122. Several Instances of a Resource Type Available: A vector of length m indicates the number of available resources of each type. Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process. Request: An n x m matrix indicates the current request of each process. If Request [i,j ] = k, then process Pi is requesting k more instances of resource type Rj.
    123. 123. Detection Algorithm1. Let Work and Finish be vectors of length m and n, respectively Initialize: (a) Work = Available (b) For i = 1,2, …, n, if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true2. Find an index i such that both: (a) Finish[i] == false (b) Requesti ≤ Work If no such i exists, go to step 4
    124. 124. Detection Algorithm (Cont.) 3. Work = Work + Allocationi; Finish[i] = true; go to step 2 4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] == false, then Pi is deadlocked. The algorithm requires on the order of O(m x n2) operations to detect whether the system is in a deadlocked state
    125. 125. Example of Detection Algorithm Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances). Snapshot at time T0:
                Allocation   Request   Available
                A B C        A B C     A B C
        P0      0 1 0        0 0 0     0 0 0
        P1      2 0 0        2 0 2
        P2      3 0 3        0 0 0
        P3      2 1 1        1 0 0
        P4      0 0 2        0 0 2
      Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i (the Work vector grows as each process releases its allocation)
    126. 126. Example (Cont.) P2 requests an additional instance of type C
                Request
                A B C
        P0      0 0 0
        P1      2 0 1
        P2      0 0 1
        P3      1 0 0
        P4      0 0 2
      State of system?  We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes’ requests.  A deadlock exists, consisting of processes P1, P2, P3, and P4
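Not part of the original slides: the several-instances detection algorithm differs from the Banker's safety check mainly in its initialization (processes holding nothing are marked finished immediately) and in comparing against Request rather than Need. A sketch using the snapshot above:

```python
# Deadlock detection for multiple resource instances.
def detect_deadlock(available, allocation, request):
    work = available[:]
    finish = [all(a == 0 for a in row) for row in allocation]   # holds nothing -> finished
    changed = True
    while changed:
        changed = False
        for i, done in enumerate(finish):
            if not done and all(r <= w for r, w in zip(request[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]   # assume Pi finishes
                finish[i] = True
                changed = True
    return [f"P{i}" for i, done in enumerate(finish) if not done]     # deadlocked processes

allocation = [[0,1,0],[2,0,0],[3,0,3],[2,1,1],[0,0,2]]
request    = [[0,0,0],[2,0,1],[0,0,1],[1,0,0],[0,0,2]]   # after P2 requests one more C
print(detect_deadlock([0,0,0], allocation, request))      # ['P1', 'P2', 'P3', 'P4']
```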
    127. 127. Detection-Algorithm Usage When, and how often, to invoke the algorithm depends on:  How often is a deadlock likely to occur?  How many processes will be affected by deadlock when it happens? If deadlock occurs frequently, then the detection algorithm should be invoked frequently. We could invoke the deadlock-detection algorithm every time a request for allocation cannot be granted immediately. By this we can identify the deadlock-causing process and the processes involved in the deadlock. But this incurs overhead in computation time, so the algorithm can instead be invoked periodically (e.g., once per hour) or when CPU utilization drops below 40%. If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and we would not be able to tell which of the many deadlocked processes “caused” the deadlock.
    128. 128. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 128
    129. 129. Recovery from Deadlock: (A) Process Termination Abort all deadlocked processes Abort one process at a time until the deadlock cycle is eliminated In which order should we choose to abort?  Priority of the process  How long process has computed, and how much longer to completion  Resources the process has used  Resources process needs to complete  How many processes will need to be terminated  Is process interactive or batch?
    130. 130. Recovery from Deadlock: (B) Resource Preemption  Preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken.  The following 3 issues need to be considered 1. Selecting a victim – which resources/processes are to be preempted? – minimize cost 2. Rollback – what should be done with the process from which resources are preempted? – return to some safe state, restart the process from that state. 3. Starvation – the same process may always be picked as victim – include the number of rollbacks in the cost factor
    131. 131. Contents Uniprocessor Scheduling: Types of Scheduling: Preemptive, Non-preemptive, Long-term, Medium-term, Short-term scheduling Scheduling Algorithms: FCFS, SJF, RR, Priority Multiprocessor Scheduling: Granularity, Design Issues, Process Scheduling Deadlock: Principles of deadlock, Deadlock Avoidance, Deadlock Detection, Deadlock Prevention, Deadlock Recovery OS Services layer in the Mobile OS: Comms Services 131
    132. 132. OS Layered Model OS Services layer in the Mobile OS: Generic Services. Fig: Block decomposition in the system model
    133. 133. Blocks of OS Services components
    134. 134. Purpose• Symbian OS is a microkernel operating system.• The kernel is restricted to providing the minimum of essential services, specifically those required to implement process execution and memory access models.• In Symbian OS, the higher-level system services are located in the OS Services layer.• These services provide the specialized system-level support required by other system components and by higher layers of the system, as well as by applications.• Ex: graphics support, communications support including networking and telephony, and the connectivity infrastructure are all provided as OS services.
    135. 135. Generic OS Services Block The Generic OS Services block provides 1. a number of general-purpose utility-style services, including logging and scheduling services and some legacy components 2. frameworks and libraries, including an implementation of the C Standard Library 3. framework support for secure certificates, keys and tokens.
    136. 136. Generic OS Services Block Components are organized into two small collections of servers, frameworks, and libraries. The common theme of the collections is general utility. Generic Services Collection:
    137. 137. Generic Services Collection 1. The Task Scheduler component is an application-launching server that supports creating, querying and editing of time- or condition-triggered tasks. 2. The Event Logger component is only an interface supporting logging of events, for example call and message lists, and retrieval, filtering and viewing by clients. 3. The System Agent component is a legacy component that performs a number of useful functions for monitoring and reporting system state. 4. The File Logger component is a legacy utility for logging system or application messages to a log file.
    138. 138. Generic Libraries Collection• This collection provides system-level libraries for use by applications and system components .
    139. 139. Generic Libraries Collection• The Certificate and Key Management, Certificate Store and Key Store components provide a framework for certificate and key management that supports public key cryptography for RSA, DSA and DH key pairs, assignment of trust status and certificate-chain construction, validation and revocation.• The Cryptographic Token Framework supports the use of secure hardware tokens ,for example DRM-protected games or films on SD cards or memory sticks.• The C Standard Library is a subset of the POSIX C library which maps C function calls in as simple a way as possible to native Symbian OS calls.
    140. 140. OS Services layer in the Mobile OS: Comms Services
    141. 141. Comms Services • ‘Comms’ (or communications) means ‘data communications’ – the art, science and technology of moving data between different devices over direct connections or networks. • Symbian OS supports a wide range of communications technologies including conventional serial communications, short-link technologies such as USB, Bluetooth and infrared, as well as networking technologies.
    142. 142. Purpose• Comms Services in Symbian OS provides the support for a wide variety of communications protocols and services:• Serial protocols including RS232, IrDA and USB• Bluetooth radio• Networking protocols including TCP/IP (both IPv4 and IPv6), network security (TLS and IPSec) and dial-up protocols (PPP and SLIP)• Wi-Fi• 2G, 2.5G and 3G mobile telephony voice, data (including fax) andmessaging services for GSM/UMTS and CDMA/CDMA2000 network
    143. 143. Comms Services• The system model divides the Comms Services block into four distinct sub-blocks:1. Comms Framework2. Telephony services3. Short Link services4. Networking services
    144. 144. Comms Services 1. Comms Framework: It provides the generic infrastructure that supports all communications services. • Most importantly, it includes the Comms Root Server, which is the ‘meta’ process server for all communications services, and the ESock Socket Server, which provides the generic, sockets-style interface used to access all communications services. 2. Telephony Services: These are based on the ETel Telephony Server that provides support for 2G, 2.5G and 3G mobile phone networks, including GSM/GPRS/EDGE/UMTS (2G/2.5G/3G) and CDMA/CDMA2000.
    145. 145. Comms Services3. Networking Services: Networking Services provides packet-based network services with Ethernet emulation and includes the TCP/IP stack implementation, secure networking extensions including TLS/SSL and IPSec, which support secure browsing and VPN gateways, together with a variety of application-level Internet services including FTP and HTTP.4. Short-link Services :Short-link services provides USB, Bluetooth and infrared services
    146. 146. OS Services layer in the Mobile OS:Multimedia and Graphics Services Block
    147. 147. Multimedia and Graphics Services Block This block provides all graphics services above the level of hardware drivers and provides the frameworks supporting multimedia services.
    148. 148. Multimedia and Graphics Services Block 1. The Multimedia Framework provides a single extensible framework for integrating support for audio, video, MIDI, automated speech recognition, cameras, and integrated broadcast tuners. • Its purpose is to consolidate and standardize the multimedia APIs, so that they are common across all devices based on Symbian OS. 2. OpenGL ES is an open standard for 2D and 3D graphics, specifically targeted at embedded systems including consoles and phones. It defines application APIs for rendering, texture mapping, and other graphical effects, as well as a portable binding to native windowing systems.
    149. 149. Multimedia and Graphics Services Block 3. Windowing Model: The Window Server is at the heart of the graphics architecture of Symbian OS and is central to the event-handling model that drives applications. The Window Server owns and manages access to the screen as a drawable resource, which is made available to applications through the abstraction of windowed screen areas. It also provides access to the keyboard and pointer or digitizer for GUI applications. 4. Graphics and Printing Services Collection: These components support all bitmapped graphics operations on display and printer devices, including all font and drawing operations. The principal components are the Font and Bitmap Server, through which all operations are made within a client-side server session
    150. 150. Multimedia and Graphics Services Block5. Graphics Device Interface Collection: This is the lowest level of the graphics services, providing low-level graphics abstractions and color palette support.
    151. 151. OS Services layer in the Mobile OS: Connectivity Services Block
    152. 152. Connectivity Services Block• This block provides the device-side support for connectivity services, for example backup and restore, file transfer and browsing and application installation.
    153. 153. Connectivity Services Block• The basic supported services are:1. backup and restore of a drive on the device to a desktop host2. file management (e.g. copying files to and from the device, renaming and deleting files and directories on the device, and formatting device drives)3. installation of software from the desktop host.
    154. 154. Component Collections 1. Service Providers Collection: These components provide named services which run on the device side to provide service interfaces to remote (host-side) clients: Remote File Server, Software Install Server, Secure Backup Socket Server, Secure Backup Engine, PLP Variant 2. Service Framework Collection: This service, based on configuration files and port registration, enables device-side services to register a port number for use by PC-side clients, which can query for and start device-side services. The configuration files have an XML-based format.
    155. 155. Component Collections• The Bearer Abstraction Layer component is a framework for plug- ins, which encapsulates actual bearers (for example m-Router), providing a connection-management API to PC link-type applications.• The Server Socket component is a helper library that supports creating (new, unnamed) port-number-based TCP/IP services for use by the Service Broker for device–host communications, for example with a PC. It communicates service port numbers and manages messages and commands.• m-Router is a licensed, PPP-like data-communications protocol and framework, which provides a TCP/IP-based connection between two devices.
    156. 156. End of Chapter
