Operating System 5


  1. CPU Scheduling (Chapter 5)
  2. CPU Scheduling
     - Basic Concepts
     - Scheduling Criteria
     - Scheduling Algorithms
     - Multiple-Processor Scheduling
     - Thread Scheduling
     - UNIX example
  3. Basic Concepts
     - Maximum CPU utilization is obtained with multiprogramming.
     - CPU-I/O burst cycle: process execution consists of a cycle of CPU execution and I/O wait.
     - CPU bursts are generally much shorter than I/O waits.
  4. CPU-I/O Burst Cycle (figure: alternating CPU and I/O bursts for Process A and Process B)
  5. Histogram of CPU-burst Times (figure)
  6. Schedulers
     - A process migrates among several queues:
       - device queue, job queue, ready queue
     - A scheduler selects a process to run from these queues.
     - Long-term scheduler:
       - loads a job into memory
       - runs infrequently
     - Short-term scheduler:
       - selects a ready process to run on the CPU
       - should be fast
     - Medium-term scheduler:
       - reduces the degree of multiprogramming or memory consumption
  7. CPU Scheduler
     - CPU scheduling decisions may take place when a process:
       1. switches from running to waiting state (by sleep)
       2. switches from running to ready state (by yield)
       3. switches from waiting to ready (by an interrupt)
       4. terminates (by exit)
     - Scheduling under 1 and 4 only is nonpreemptive.
     - All other scheduling is preemptive.
  8. Dispatcher
     - The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
       - switching context
       - switching to user mode
       - jumping to the proper location in the user program to restart that program
     - Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.
  9. Scheduling Criteria
     - CPU utilization: keep the CPU as busy as possible.
     - Throughput: number of processes that complete their execution per time unit.
     - Turnaround time (TAT): amount of time to execute a particular process.
     - Waiting time: amount of time a process has been waiting in the ready queue.
     - Response time: amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments).
  10. "The perfect CPU scheduler"
     - Minimize latency: response or job completion time.
     - Maximize throughput: jobs completed per unit time.
     - Maximize utilization: keep the I/O devices busy.
       - A recurring theme in OS scheduling.
     - Fairness: everyone makes progress, no one starves.
  11. Algorithms
  12. Scheduling Algorithms: FCFS
     - First-come, first-served (FCFS, also called FIFO):
       - Jobs are scheduled in order of arrival.
       - Non-preemptive.
     - Problems:
       - Average waiting time depends on arrival order.
       - Troublesome for time-sharing systems.
         - Convoy effect: short processes stuck behind a long process.
     - Advantage: really simple!
  13. First-Come First-Served Scheduling
     - Example:  Process   Burst Time
                 P1        24
                 P2        3
                 P3        3
     - If the processes arrive in the order P1, P2, P3:
       - Schedule: P1 (0-24), P2 (24-27), P3 (27-30)
       - Waiting times: P1 = 0, P2 = 24, P3 = 27
       - Average waiting time: (0 + 24 + 27)/3 = 17
     - If the processes arrive in the order P2, P3, P1:
       - Schedule: P2 (0-3), P3 (3-6), P1 (6-30)
       - Waiting times: P1 = 6, P2 = 0, P3 = 3
       - Average waiting time: (6 + 0 + 3)/3 = 3
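The two arrival orders above can be checked with a short simulation. This is a minimal sketch (the helper name `fcfs_waiting_times` is hypothetical, not from the slides): under FCFS, each process simply waits for the sum of the bursts scheduled before it.

```python
def fcfs_waiting_times(bursts):
    """Return per-process waiting times for FCFS, with bursts
    listed in arrival order. Each process waits for everything
    that arrived before it to finish."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

# Arrival order P1, P2, P3 (bursts 24, 3, 3): average (0+24+27)/3 = 17
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
# Arrival order P2, P3, P1: average (0+3+6)/3 = 3
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]
```

Reordering the same jobs changes the average waiting time from 17 to 3, which is exactly the arrival-order sensitivity the previous slide warns about.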
  14. Shortest-Job-First (SJF) Scheduling
     - Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
     - Two schemes:
       - Nonpreemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
       - Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
     - SJF is optimal: it gives the minimum average waiting time for a given set of processes.
  15. Shortest-Job-First Scheduling
     - Example:  Process   Arrival Time   Burst Time
                 P1        0              7
                 P2        2              4
                 P3        4              1
                 P4        5              4
     - Nonpreemptive SJF:
       - Schedule: P1 (0-7), P3 (7-8), P2 (8-12), P4 (12-16)
       - Waiting times: P1 = 0, P2 = 6, P3 = 3, P4 = 7
       - Average waiting time: (0 + 6 + 3 + 7)/4 = 4
  16. Shortest-Job-First Scheduling (cont'd)
     - Same example, preemptive SJF (SRTF):
       - Schedule: P1 (0-2), P2 (2-4), P3 (4-5), P2 (5-7), P4 (7-11), P1 (11-16)
       - Waiting times: P1 = 9, P2 = 1, P3 = 0, P4 = 2
       - Average waiting time: (9 + 1 + 0 + 2)/4 = 3
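The preemptive schedule above can be reproduced with a small tick-by-tick simulation. This is a sketch only (the `srtf` helper is hypothetical, and it assumes every burst length is known in advance, which the next slide points out is unrealistic):

```python
def srtf(procs):
    """Shortest-Remaining-Time-First, simulated in 1 ms ticks.
    procs: list of (name, arrival, burst) tuples.
    Returns waiting time per process: completion - arrival - burst."""
    arrival = {n: a for n, a, b in procs}
    burst = {n: b for n, a, b in procs}
    remaining = dict(burst)
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:            # CPU idle until the next arrival
            t += 1
            continue
        # Run the ready process with the shortest remaining time for one tick;
        # a newly arrived shorter job wins the very next tick (preemption).
        n = min(ready, key=lambda p: remaining[p])
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = t
    return {n: finish[n] - arrival[n] - burst[n] for n in finish}

waits = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sorted(waits.items()))  # [('P1', 9), ('P2', 1), ('P3', 0), ('P4', 2)]
```

The result matches the slide: an average waiting time of (9 + 1 + 0 + 2)/4 = 3, versus 4 for the nonpreemptive version of the same workload.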
  17. Shortest-Job-First Scheduling (cont'd)
     - Optimal scheduling.
     - However, there is no accurate way to know the length of the next CPU burst in advance; it can only be estimated.
  18. Shortest-Job-First Scheduling (cont'd)
     - Optimal for minimizing queueing time, but impossible to implement exactly. Instead, try to predict the next burst from previous history.
     - Predicting the time the process will use on its next burst:
       t(n+1) = w * t(n) + (1 - w) * T(n)
     - Here:
       - t(n+1) is the predicted time of the next burst
       - t(n) is the time of the current burst
       - T(n) is the average of all previous bursts
       - w is a weighting factor emphasizing current or previous bursts
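A direct transcription of the slide's formula (the function name is hypothetical; note this variant averages all previous bursts via T(n), whereas many texts instead use the previous prediction recursively):

```python
def predict_next_burst(w, current_burst, previous_bursts):
    """Predict the next CPU burst length using the slide's formula:
    t(n+1) = w * t(n) + (1 - w) * T(n),
    where t(n) is the current burst and T(n) is the average of
    all previous bursts. w in [0, 1] weights recent behavior."""
    avg_previous = sum(previous_bursts) / len(previous_bursts)
    return w * current_burst + (1 - w) * avg_previous

# w = 0.5, current burst 10 ms, history [6, 4, 8] ms (average 6 ms):
# prediction = 0.5*10 + 0.5*6
print(predict_next_burst(0.5, 10, [6, 4, 8]))  # 8.0
```

With w close to 1 the prediction tracks the most recent burst; with w close to 0 it leans on long-term history.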
  19. Priority Scheduling
     - A priority number (integer) is associated with each process.
     - The CPU is allocated to the process with the highest priority (smallest integer = highest priority in Unix, but lowest in Java).
       - Preemptive
       - Non-preemptive
     - SJF is priority scheduling where the priority is the predicted next CPU burst time.
     - Problem: starvation. Low-priority processes may never execute.
     - Solution: aging. As time progresses, increase the priority of waiting processes.
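Aging can be sketched as follows (a hypothetical helper, not from the slides; it uses the Unix convention that a smaller number means higher priority, and ages each passed-over process by one step per scheduling decision):

```python
def pick_with_aging(ready, age_step=1):
    """Pick the highest-priority process (smallest number wins, as in
    Unix) from `ready`, a list of [name, priority] pairs. Every process
    that loses has its priority number decreased (i.e. its priority
    raised), so a long-waiting process eventually wins: no starvation.
    The chosen process is removed from the list; its name is returned."""
    chosen = min(ready, key=lambda p: p[1])
    for p in ready:
        if p is not chosen:
            p[1] = max(0, p[1] - age_step)  # age the waiting processes
    ready.remove(chosen)
    return chosen[0]

queue = [["A", 5], ["B", 1], ["C", 9]]
print(pick_with_aging(queue))  # B (priority 1 wins)
print(queue)                   # [['A', 4], ['C', 8]]  -- A and C aged
```

Without the aging step, a steady stream of priority-1 arrivals would keep C waiting forever; with it, C's number ratchets down toward 0 and it is eventually scheduled.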
  20. Round Robin (RR)
     - Each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
     - If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n - 1)q time units.
  21. Round Robin Scheduling
     - Example, time quantum = 20:
       Process   Burst Time   Waiting Time
       P1        53           57 + 24 = 81
       P2        17           20
       P3        68           37 + 40 + 17 = 94
       P4        24           57 + 40 = 97
     - Schedule: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162)
     - Average waiting time = (81 + 20 + 94 + 97)/4 = 73
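The schedule above can be reproduced with a short queue-based simulation. A minimal sketch (the helper name is hypothetical, and it assumes all four processes arrive at t = 0, as in the example):

```python
from collections import deque

def round_robin_waits(procs, quantum):
    """Simulate round robin for processes that all arrive at t = 0.
    procs: list of (name, burst). Returns waiting time per process,
    i.e. completion time minus burst time (arrival is 0)."""
    queue = deque(procs)
    burst = dict(procs)
    finish, t = {}, 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run one quantum, or less to finish
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            finish[name] = t
    return {n: finish[n] - burst[n] for n in finish}

waits = round_robin_waits([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print(sorted(waits.items()))      # [('P1', 81), ('P2', 20), ('P3', 94), ('P4', 97)]
print(sum(waits.values()) / 4)    # 73.0
```

This matches the slide's average of 73, and also shows RR's selling point: short job P2 is done at t = 37 even though long jobs arrived with it.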
  22. Round Robin Scheduling (cont'd)
     - Typically higher average turnaround time than SJF, but better response time.
     - Performance:
       - q large: RR degenerates to FCFS.
       - q small: q must still be large with respect to the context-switch time, otherwise the overhead is too high.
  23. Turnaround Time Varies with the Time Quantum
     - TAT can be improved if most processes finish their next CPU burst within a single time quantum.
  24. Multilevel Queue
     - The ready queue is partitioned into separate queues, e.g.:
       - foreground (interactive)
       - background (batch)
     - Each queue has its own scheduling algorithm, e.g.:
       - foreground: RR
       - background: FCFS
     - Scheduling must also be done between the queues:
       - Fixed-priority scheduling (i.e., serve everything in foreground, then background). Possibility of starvation.
       - Time slicing: each queue gets a certain amount of CPU time, which it schedules among its own processes, e.g.:
         - 80% to foreground, in RR
         - 20% to background, in FCFS
  25. Multilevel Queue Scheduling (figure)
  26. Multilevel Feedback Queues
     - Implement multiple ready queues:
       - Different queues may be scheduled using different algorithms.
       - Just like multilevel queue scheduling, but assignments are not static.
     - Jobs move from queue to queue based on feedback:
       - Feedback = the behavior of the job, e.g., does it require the full quantum for computation, or does it perform frequent I/O?
     - Parameters to select:
       - number of queues
       - scheduling algorithm within each queue
       - when to upgrade and downgrade a job
  27. Example of a Multilevel Feedback Queue
     - Three queues:
       - Q0: RR with time quantum 8 milliseconds
       - Q1: RR with time quantum 16 milliseconds
       - Q2: FCFS
     - Scheduling:
       - A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds (RR). If it does not finish in 8 milliseconds, it is moved to queue Q1.
       - At Q1 the job is again served FCFS and receives 16 additional milliseconds (RR). If it still does not complete, it is preempted and moved to queue Q2.
       - At Q2 the job is served FCFS.
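The three-queue example can be sketched in a few lines. This is a deliberately simplified model (the `mlfq` helper is hypothetical; it assumes all jobs arrive at t = 0 and ignores mid-quantum arrivals and preemption of lower queues by higher ones, which a real MLFQ would handle):

```python
def mlfq(jobs):
    """Three-queue MLFQ as on the slide: Q0 = RR with q=8, Q1 = RR with
    q=16, Q2 = FCFS. jobs: list of (name, burst), all arriving at t=0.
    Returns finish time per job."""
    q1, q2, finish, t = [], [], {}, 0
    for name, rem in jobs:             # Q0: each job gets one 8 ms quantum
        run = min(8, rem)
        t += run
        if rem > run:
            q1.append((name, rem - run))   # demote unfinished jobs to Q1
        else:
            finish[name] = t
    for name, rem in q1:               # Q1: 16 additional milliseconds
        run = min(16, rem)
        t += run
        if rem > run:
            q2.append((name, rem - run))   # demote again to Q2
        else:
            finish[name] = t
    for name, rem in q2:               # Q2: FCFS, run to completion
        t += rem
        finish[name] = t
    return finish

# A 5 ms job finishes inside its first quantum; a 30 ms job is demoted
# twice (8 ms in Q0, 16 ms in Q1, final 6 ms in Q2).
print(mlfq([("A", 5), ("B", 30)]))  # {'A': 5, 'B': 35}
```

The design intent shows through even in this sketch: short, interactive bursts complete quickly in Q0, while CPU-bound jobs sink to Q2 where they no longer delay them.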
  28. Multilevel Feedback Queues (figure)
  29. Multiple-Processor Scheduling
     - CPU scheduling is more complex when multiple CPUs are available.
     - Different rules apply for homogeneous and heterogeneous processors.
     - Load sharing: distribute the work so that all processors have an equal amount to do.
     - Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing.
     - Symmetric multiprocessing (SMP): each processor is self-scheduling.
       - Each processor can schedule from a common ready queue, or each can use its own separate ready queue.
  30. Thread Scheduling
     - On operating systems that support threads, it is kernel threads (not processes) that are scheduled by the operating system.
     - Local scheduling (process contention scope, PCS): how the threads library decides which thread to put onto an available LWP.
       - PTHREAD_SCOPE_PROCESS
     - Global scheduling (system contention scope, SCS): how the kernel decides which kernel thread to run next.
       - PTHREAD_SCOPE_SYSTEM
  31. Linux Scheduling
     - Two algorithms: time-sharing and real-time.
     - Time-sharing:
       - Prioritized, credit-based: the process with the most credits is scheduled next.
       - A credit is subtracted when a timer interrupt occurs.
       - When its credit reaches 0, another process is chosen.
       - When all runnable processes have credit 0, recrediting occurs, based on factors including priority and history.
     - Real-time:
       - Defined by POSIX.1b.
       - Real-time tasks are assigned static priorities; all other tasks have dynamic (changeable) priorities.
  32. The Relationship Between Priorities and Time-Slice Length (figure)
  33. List of Tasks Indexed According to Priorities (figure)
  34. Conclusion
     - We have looked at a number of different scheduling algorithms.
     - Which one works best is application dependent:
       - A general-purpose OS will typically use priority-based, preemptive round robin.
       - A real-time OS will use priority scheduling without preemption.
  35. References
     - Some slides from:
       - Textbook slides
       - Kelvin Sung, University of Washington, Bothell
       - Jerry Breecher, WPI
       - Einar Vollset, Cornell University
