CPU Scheduling

This presentation covers CPU scheduling algorithms, scheduling criteria, process synchronization, the multilevel feedback queue, the critical-section problem, semaphores, and synchronization hardware.

Transcript

  • 1. The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
    - CPU scheduling decisions may take place when a process:
      1. Switches from the running state to the waiting state
      2. Switches from the running state to the ready state
      3. Switches from the waiting state to the ready state
      4. Terminates
    - Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.
  • 2. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.
    - Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.
  • 3. Scheduling criteria:
    - CPU utilization – keep the CPU as busy as possible
    - Throughput – number of processes that complete their execution per time unit
    - Turnaround time – amount of time to execute a particular process
    - Waiting time – amount of time a process has been waiting in the ready queue
    - Response time – amount of time from when a request was submitted until the first response is produced, not output (for time-sharing environments)
  • 4. Optimization goals: maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time.
  • 5. The basic scheduling algorithms:
    - First-Come, First-Served (FCFS) – complete the jobs in order of arrival
    - Shortest Job First (SJF) – complete the job with the shortest next CPU burst
    - Priority (PRI) – processes have a priority
    - Round-Robin (RR) – each process gets a small unit of time on the CPU (a time quantum or time slice)
  • 6. FCFS example:
      Process   Burst Time
      P1        24
      P2        3
      P3        3
    - Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
      P1 (0-24) | P2 (24-27) | P3 (27-30)
    - Waiting time for P1 = 0; P2 = 24; P3 = 27
    - Average waiting time: (0 + 24 + 27)/3 = 17
  • 7. Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
      P2 (0-3) | P3 (3-6) | P1 (6-30)
    - Waiting time for P1 = 6; P2 = 0; P3 = 3
    - Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case.
    - Convoy effect: short processes end up waiting behind a long process.
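
A quick way to double-check these numbers: under FCFS each process simply waits for the total burst time of everything queued ahead of it. The small C sketch below is our own illustration (the helper name fcfs_avg_wait is not from the slides); it reproduces both averages.

    #include <stdio.h>

    /* Illustrative helper: under FCFS, each process waits for the total
       burst time of every process queued ahead of it. */
    static double fcfs_avg_wait(const int burst[], int n) {
        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* process i starts after everything before it */
            elapsed += burst[i];     /* then runs to completion */
        }
        return (double)total_wait / n;
    }

    int main(void) {
        int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
        int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
        printf("P1,P2,P3: %.2f\n", fcfs_avg_wait(order1, 3));   /* prints 17.00 */
        printf("P2,P3,P1: %.2f\n", fcfs_avg_wait(order2, 3));   /* prints 3.00 */
        return 0;
    }
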
  • 8. Shortest Job First (SJF): associate with each process the length of its next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst.
    - SJF is optimal – it gives the minimum average waiting time for a given set of processes.
    - The difficulty is knowing the length of the next CPU request.
  • 9. SJF example:
      Process   Arrival Time   Burst Time
      P1        0.0            6
      P2        0.0            8
      P3        0.0            7
      P4        0.0            3
    - SJF scheduling chart: P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24)
    - Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
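
With every process arriving at time 0, non-preemptive SJF reduces to "sort by burst time, then apply the FCFS arithmetic". A minimal C sketch of that idea, using the burst times from the slide (the program itself is our illustration, not part of the slides):

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        /* Burst times from the slide: P1=6, P2=8, P3=7, P4=3; all arrive at 0 */
        int burst[] = {6, 8, 7, 3};
        int n = 4;
        qsort(burst, n, sizeof burst[0], cmp_int);   /* shortest next burst first */
        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* waiting time = start time when all arrive at 0 */
            elapsed += burst[i];
        }
        printf("Average waiting time: %.2f\n", (double)total_wait / n);   /* 7.00 */
        return 0;
    }
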
  • 10. Priority scheduling: a priority number (an integer) is associated with each process.
    - The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
    - It can be preemptive or non-preemptive.
    - SJF is priority scheduling where the priority is the predicted next CPU burst time.
    - Problem: starvation – low-priority processes may never execute.
  • 11. Priority scheduling example (all processes arrive at time 0):
      Process   Burst Time   Priority
      P1        10           3
      P2        1            1
      P3        2            4
      P4        1            5
      P5        5            2
    - Gantt chart: P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19)
    - Average waiting time: (0 + 1 + 6 + 16 + 18)/5 = 8.2
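
Non-preemptive priority scheduling with simultaneous arrivals works the same way, just ordered by priority instead of burst length. The C sketch below uses the burst and priority values shown above; treat the exact numbers as illustrative.

    #include <stdio.h>
    #include <stdlib.h>

    struct proc { const char *name; int burst; int priority; };

    static int by_priority(const void *a, const void *b) {
        /* smallest integer = highest priority */
        return ((const struct proc *)a)->priority - ((const struct proc *)b)->priority;
    }

    int main(void) {
        /* Illustrative values matching the table above; all arrive at time 0 */
        struct proc p[] = {
            {"P1", 10, 3}, {"P2", 1, 1}, {"P3", 2, 4}, {"P4", 1, 5}, {"P5", 5, 2}
        };
        int n = 5, elapsed = 0, total_wait = 0;
        qsort(p, n, sizeof p[0], by_priority);
        for (int i = 0; i < n; i++) {
            printf("%s runs %d-%d\n", p[i].name, elapsed, elapsed + p[i].burst);
            total_wait += elapsed;
            elapsed += p[i].burst;
        }
        printf("Average waiting time: %.1f\n", (double)total_wait / n);   /* 8.2 */
        return 0;
    }
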
  • 12. Preemptive vs. non-preemptive:
    - FCFS (First-Come, First-Served) – non-preemptive.
    - SJF (Shortest Job First) – can be either; the choice arises when a new (shorter) job arrives: it can preempt the current job or not.
    - Priority – can be either; the choice arises when a process's priority changes or when a higher-priority process arrives.
  • 13. Round Robin: each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
    - If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
    - Performance: a very large q makes RR behave like FCFS; a very small q makes context-switch overhead dominate, so q should be large relative to the context-switch time.
  • 14. RR example with time quantum q = 4:
      Process   Arrival   Burst Time
      P1        0         24
      P2        0         3
      P3        0         3
    - The Gantt chart is: P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30)
    - Average waiting time: (6 + 4 + 7)/3 ≈ 5.66, where P1 waits 10 - 4 = 6, P2 waits 4, and P3 waits 7.
    - With FCFS the average would be (0 + 24 + 27)/3 = 17.
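
Because all three processes arrive at time 0 and no new process arrives later, round robin visits the ready processes in a fixed cycle, giving each at most one quantum per pass. The following C sketch is our own simulation of the slide's example, assuming quantum q = 4.

    #include <stdio.h>

    int main(void) {
        /* Slide values: bursts P1=24, P2=3, P3=3, all arriving at time 0, q=4.
           With simultaneous arrivals and no later arrivals, a fixed cyclic
           scan over the ready processes matches the RR queue order. */
        const char *name[] = {"P1", "P2", "P3"};
        int burst[]     = {24, 3, 3};
        int remaining[] = {24, 3, 3};
        int finish[3]   = {0, 0, 0};
        int n = 3, q = 4, clock = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0) continue;
                int slice = remaining[i] < q ? remaining[i] : q;
                printf("t=%2d: %s runs for %d\n", clock, name[i], slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) { finish[i] = clock; done++; }
            }
        }
        int total_wait = 0;
        for (int i = 0; i < n; i++)
            total_wait += finish[i] - burst[i];   /* waiting = turnaround - burst */
        printf("Average waiting time: %.2f\n", (double)total_wait / n);   /* 17/3 = 5.67 */
        return 0;
    }
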
  • 15. Multilevel queue: the ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch).
    - Each queue has its own scheduling algorithm: foreground – RR; background – FCFS.
    - Scheduling must also be done between the queues, e.g. fixed-priority scheduling (serve all from foreground, then from background), which brings a possibility of starvation.
  • 16. Multilevel feedback queue: a process can move between the various queues; aging can be implemented this way.
    - A multilevel-feedback-queue scheduler is defined by the following parameters:
      - number of queues
      - scheduling algorithms for each queue
      - method used to determine when to upgrade a process
      - method used to determine when to demote a process
      - method used to determine which queue a process will enter when that process needs service
  • 17. Multilevel feedback queue example with three queues:
    - Q0 – RR with time quantum 8 milliseconds
    - Q1 – RR with time quantum 16 milliseconds
    - Q2 – FCFS
    - Scheduling: a new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.
    - At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
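
A minimal sketch of this three-level scheme, assuming every job arrives at time 0 and a higher queue is drained completely before a lower one runs; the job names and burst times are made-up illustration values.

    #include <stdio.h>

    int main(void) {
        const char *name[] = {"A", "B", "C"};
        int remaining[]    = {30, 6, 20};    /* hypothetical CPU bursts in ms */
        int quantum[]      = {8, 16, 0};     /* Q0, Q1; Q2 is FCFS (run to completion) */
        int n = 3, clock = 0;

        for (int level = 0; level < 3; level++) {
            for (int i = 0; i < n; i++) {    /* each queue served in arrival order */
                if (remaining[i] == 0) continue;
                int q = quantum[level];
                int slice = (level == 2 || remaining[i] < q) ? remaining[i] : q;
                printf("t=%2d: %s runs %2d ms in Q%d\n", clock, name[i], slice, level);
                clock += slice;
                remaining[i] -= slice;       /* anything left is demoted to the next queue */
            }
        }
        return 0;
    }
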
  • 18. The critical-section problem and synchronization hardware.
  • 19. The critical-section problem: n processes all competing to use some shared data.
    - Each process has a code segment, called the critical section, in which the shared data is accessed.
    - Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
  • 20. Consider only two processes, P0 and P1. The general structure of process Pi (with the other process denoted Pj) is:
      do {
          entry section
             critical section
          exit section
             remainder section
      } while (1);
    - The processes may share some common variables to synchronize their actions.
  • 21. Requirements for a solution:
    - Mutual exclusion – if process Pi is executing in its critical section, then no other process can be executing in its critical section.
    - Progress – if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
    - Bounded waiting – a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
  • 22. Any solution to the critical-section problem requires a simple tool – a lock.
    - Modern computers provide special hardware instructions that allow us to test and modify the contents of a word atomically, i.e. as one uninterruptible unit.
    - The TestAndSet lock and semaphores are two examples of such synchronization tools.
  • 23. Definition of TestAndSet:
      boolean TestAndSet (boolean *target) {
          boolean rv = *target;
          *target = TRUE;
          return rv;
      }
  • 24. Mutual exclusion with TestAndSet: a shared boolean variable lock, initialized to FALSE. Solution:
      do {
          while ( TestAndSet (&lock) )
              ;               // do nothing
          // critical section
          lock = FALSE;
          // remainder section
      } while (TRUE);
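
On a real machine the same spin-lock pattern can be tried with C11's atomic_flag, whose atomic_flag_test_and_set call is a standard-library test-and-set. This is our own illustration using C11 atomics and POSIX threads, not code from the slides.

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* plays the role of the slide's lock */
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            while (atomic_flag_test_and_set(&lock))
                ;                             /* spin: TestAndSet returned TRUE (busy) */
            counter++;                        /* critical section */
            atomic_flag_clear(&lock);         /* lock = FALSE */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* 200000 when mutual exclusion holds */
        return 0;
    }
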
  • 25. Semaphore: a synchronization tool that does not require busy waiting.
    - Semaphore S – an integer variable.
    - Two standard operations modify S: wait() and signal(), originally called P() and V().
    - Less complicated; S can only be accessed via two indivisible (atomic) operations:
      wait (S) {
          while (S <= 0)
              ;               // no-op
          S--;
      }
      signal (S) {
          S++;
      }
  • 26. Counting semaphore – the integer value can range over an unrestricted domain.
    - Binary semaphore – the integer value can range only between 0 and 1; can be simpler to implement. Also known as a mutex lock.
    - A counting semaphore S can be implemented as a binary semaphore.
    - Provides mutual exclusion:
      Semaphore mutex;        // initialized to 1
      do {
          wait (mutex);
          // critical section
          signal (mutex);
          // remainder section
      } while (TRUE);
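
The mutex pattern can also be exercised with POSIX semaphores, where sem_wait and sem_post play the roles of wait() and signal(). The sketch below is POSIX-specific illustration code, not part of the slides.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t mutex;
    static int shared = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);   /* wait(mutex): blocks while the value is 0 */
            shared++;           /* critical section */
            sem_post(&mutex);   /* signal(mutex) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);   /* binary semaphore initialized to 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);   /* 200000 */
        sem_destroy(&mutex);
        return 0;
    }
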
  • 27. Ordering with a semaphore: consider concurrently running processes P1 and P2, where statement S1 must execute before statement S2. Use a semaphore synch (an integer) initialized to 0:
      P1:   S1;
            signal (synch);
      P2:   wait (synch);
            S2;
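
A runnable rendering of this ordering idiom using POSIX semaphores, with two threads standing in for P1 and P2 (the thread function names are placeholders of our own):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t synch;   /* starts at 0, so P2's wait() blocks until P1 signals */

    static void *p1(void *arg) {
        (void)arg;
        printf("S1\n");        /* statement S1 */
        sem_post(&synch);      /* signal(synch) */
        return NULL;
    }

    static void *p2(void *arg) {
        (void)arg;
        sem_wait(&synch);      /* wait(synch): blocks until S1 has run */
        printf("S2\n");        /* statement S2 runs only after S1 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);   /* initialized to 0 as on the slide */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }
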
  • 28. Credits: Ajay, Anandu, Archith, Abhijith, Subeesh, Sai, Navneeth, Navin.