Chapter 2 (Part 2)
Transcript

  • 1. Chapter 2: Memory and Process Management. PART 2: PROCESS MANAGEMENT IN OS. DEPARTMENT OF IT & COMMUNICATION, POLITEKNIK TUANKU SYED SIRAJUDDIN
  • 2. Learning Outcome. By the end of this chapter, students will be able to: 1) Explain the role of control blocks and interrupts in the dispatching process 2) Describe the various types of scheduling processes 3) Explain different types of scheduling algorithms 4) Explain how queuing and the scheduler work together 5) Differentiate between multiprogramming and time sharing 6) Explain how to handle deadlock
  • 3. ProcessProcess – a program in execution.A process – need resources (CPU time, files, I/O devices) to accomplish task.process execution must progress in sequential fashion
  • 4. A process includes: 1) Program counter 2) Process stack – containing temporary data (subroutine parameters, return addresses, temporary variables) 3) Data section – containing global variables.
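The three components listed above can be sketched as a minimal Python structure. This is an illustration only, not how a real kernel lays out a process; the field names and sample values are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """Illustrative in-memory picture of a process (not a real kernel layout)."""
    pid: int
    program_counter: int = 0                      # address of the next instruction
    stack: list = field(default_factory=list)     # temp data: parameters, return addresses
    data: dict = field(default_factory=dict)      # data section: global variables

p = Process(pid=1)
p.stack.append(("return_addr", 0x42))  # e.g. pushed on a subroutine call
p.data["counter"] = 0                  # a global variable in the data section
```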
  • 5. Process state
  • 6. ……………………….Process state. As a process executes, it changes state. Each process is in one of the following states:  NEW – the process is being created.  READY – the process is waiting to be assigned to a processor.  RUNNING – instructions are being executed.  WAITING – the process is waiting for some event to occur (I/O completion or reception of a signal).  TERMINATED – the process has finished execution.
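The five states and their legal transitions can be modeled directly. The transition table below assumes the classic five-state diagram (NEW → READY → RUNNING, RUNNING → READY/WAITING/TERMINATED, WAITING → READY); it is a sketch, not taken from the slides' figure.

```python
from enum import Enum

class State(Enum):
    NEW = "new"
    READY = "ready"
    RUNNING = "running"
    WAITING = "waiting"
    TERMINATED = "terminated"

# Legal transitions of the classic five-state model (an assumption).
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def move(current, nxt):
    """Advance to the next state, rejecting illegal transitions."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

# A process that runs, blocks on I/O, resumes, and finishes.
s = State.NEW
for nxt in (State.READY, State.RUNNING, State.WAITING,
            State.READY, State.RUNNING, State.TERMINATED):
    s = move(s, nxt)
```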
  • 7. Control blocksEach process is representer in the OS by a process control blok (PCB).There are several control fields that must be maintained in support of each active program.Often, a control block is created to hold: 1) a partition’s key control flags, 2) constants 3) variablesThe control blocks (one per partition) are linked to form a linked list.
  • 8. ………………………..Control blocks. The dispatcher typically determines which program is to start by following the chain of pointers from control block to control block. A given control block’s relative position in the linked list might be determined by its priority or computed dynamically, perhaps taking into account such factors as: 1) program size, 2) time in memory, 3) peripheral device requirements, and 4) other measures of the program’s impact on system resources.
  • 9. Control blocks •Information about each program is stored in the program’s control block. •The dispatcher determines which program to start next by following a linked list of control blocks.
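The dispatcher's walk along the linked list of control blocks can be sketched as follows. The PCB fields and program names are hypothetical; the point is only the pointer-chasing loop that picks the first ready program in priority order.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PCB:
    """Hypothetical control block with a pointer to the next block in the chain."""
    name: str
    priority: int
    ready: bool
    next: Optional["PCB"] = None

def dispatch(head):
    """Follow the chain of pointers and return the first ready program."""
    node = head
    while node is not None:
        if node.ready:
            return node
        node = node.next
    return None  # nothing ready to run

# Blocks linked in priority order (highest priority first).
c = PCB("payroll", priority=1, ready=True)
b = PCB("editor", priority=5, ready=False, next=c)
a = PCB("monitor", priority=9, ready=False, next=b)

chosen = dispatch(a)  # skips the two non-ready blocks
```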
  • 10. InterruptsAn interrupt is an electronic signal. Hardware senses the signal, saves key control information for the currently executing program, and starts the operating system’s interrupt handler routine. At that instant, the interrupt ends.The operating system then handles the interrupt.Subsequently, after the interrupt is processed, the dispatcher starts an application program.Eventually, the program that was executing at the time of the interrupt resumes processing.
  • 11. Example of how an interrupt works. Step 1:
  • 12. Example of how an interrupt works. Step 2:
  • 13. Example of how an interrupt works. Step 3:
  • 14. Example of how an interrupt works. Step 4:
  • 15. Example of how an interrupt works. Step 5:
  • 16. Example of how an interrupt works. Step 6:
  • 17. CPU SCHEDULING. Basic concepts: The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. When one process is in a wait state, the OS takes the CPU away from that process and gives the CPU to another one. This pattern continues… On a uniprocessor, only ONE process can be running at any instant; if more than one process is ready, the rest must wait until the CPU is free.
  • 18. ……………………………….CPU Scheduler. CPU scheduling decisions may take place when a process: 1. Switches from running to waiting state 2. Switches from running to ready state 3. Switches from waiting to ready 4. Terminates. Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.
  • 19. ………………………………...CPU Scheduler. A preemptive scheduling policy interrupts the processing of a job and transfers the CPU to another job. The process may be preempted by the operating system when: 1) a new process arrives (perhaps at a higher priority), or 2) an interrupt or signal occurs, or 3) a (frequent) clock interrupt occurs. A non-preemptive scheduling policy functions without external interrupts: once a process is executing, it continues to execute until it terminates or switches to the waiting state.
  • 20. …………………………………………CPU Scheduling. Scheduling criteria: 1) CPU utilization. The ratio of the processor’s busy time to the total time that passes while processes finish. Processor Utilization = (Processor busy time) / (Processor busy time + Processor idle time). The goal is to keep the CPU as busy as possible.
  • 21. …………………………………………CPU Scheduling. 2) Throughput. If the CPU is busy executing processes, then work is being done. Throughput is the measure of work done in a unit time interval: Throughput = (Number of processes completed) / (Time unit). For long processes this may be one process per hour; for short processes it might be ten processes per second.
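The two formulas above translate directly into code. The numbers plugged in below are made-up illustrations, not figures from the slides.

```python
def cpu_utilization(busy_time, idle_time):
    # Utilization = busy / (busy + idle), from the formula on the slide
    return busy_time / (busy_time + idle_time)

def throughput(completed, interval):
    # Throughput = processes completed / time unit
    return completed / interval

u = cpu_utilization(busy_time=45, idle_time=5)  # CPU busy for 45 of 50 time units
t = throughput(completed=10, interval=2)        # 10 processes finished in 2 seconds
```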
  • 22. …………………………………………….CPU Scheduling. 3) Turnaround time. How long it takes to execute a process: the sum of the time spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. tat = time(process completed) – time(process submitted). 4) Waiting time. The sum of the periods spent waiting in the ready queue only. 5) Response time. The time from the submission of a request until the first response is produced; this criterion is important for interactive systems. rt = t(first response) – t(submission of request).
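A worked example of these three criteria, assuming (for simplicity) a single-burst process with no I/O, so its waiting time is everything between submission and the start of its only CPU burst. The timestamps are invented for illustration.

```python
def metrics(submitted, started, first_response, completed):
    """Turnaround, waiting and response time for a single-burst, no-I/O process."""
    tat = completed - submitted        # tat = t(completed) - t(submitted)
    wait = started - submitted         # time spent in the ready queue only
    rt = first_response - submitted    # rt = t(first response) - t(submission)
    return tat, wait, rt

# Submitted at t=0, first scheduled (and first responds) at t=4, finishes at t=10.
tat, wait, rt = metrics(submitted=0, started=4, first_response=4, completed=10)
```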
  • 23. ………………………………………….CPU Scheduling. Types of scheduling: 1) Long-term scheduling 2) Medium-term scheduling 3) Short-term scheduling
  • 24. Long-term scheduling. Determines which programs are admitted to the system for processing, and thereby controls the degree of multiprogramming. Once admitted, a program becomes a process and is either: – added to the queue for the short-term scheduler, or – swapped out (to disk) and added to the queue for the medium-term scheduler.
  • 25. Medium-term scheduling. Part of the swapping function between main memory and disk, based on how many processes the OS wants available at any one time. It must consider memory management if there is no virtual memory (VM), so it looks at the memory requirements of swapped-out processes.
  • 26. Short-term scheduling (dispatcher). Executes most frequently, to decide which process to execute next. – Invoked whenever an event occurs that interrupts the current process or provides an opportunity to preempt the current one in favor of another. – Events: clock interrupt, I/O interrupt, OS call, signal.
  • 27. Scheduling AlgorithmCPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU.
  • 28. …………………………………….Scheduling Algorithm. Types of scheduling algorithms: Basic strategies: 1) First In First Out (FIFO) / First-Come, First-Served 2) Shortest Job First (SJF) 3) Shortest Remaining Time First (SRTF) 4) Round Robin (RR) 5) Priority. Combined strategies: 1) Multi-level queue 2) Multi-level feedback queue
  • 29. First Come First Served (FCFS / FIFO). Non-preemptive. Handles jobs according to their arrival time – the earlier they arrive, the sooner they’re served. The process that requests the CPU first is allocated the CPU first. Simplest algorithm to implement – uses a FIFO queue. Good for batch systems; not so good for interactive ones. Turnaround time is unpredictable.
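A minimal FCFS sketch, assuming all jobs arrive at time 0 and are listed in arrival order; the burst lengths are the textbook-style values 24, 3 and 3, chosen here for illustration.

```python
def fcfs(bursts):
    """First-Come-First-Served: run jobs in arrival order.
    bursts: CPU burst lengths in arrival order (all jobs arrive at t=0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # each job waits for every earlier job to finish
        clock += b
    return waits

waits = fcfs([24, 3, 3])             # waits: 0, 24, 27
avg_wait = sum(waits) / len(waits)   # average waiting time: 17.0
```

Note how one long job at the front of the queue drags up the average wait for everyone behind it, which is why turnaround time under FCFS is unpredictable.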
  • 30. Shortest Job First (SJF). Non-preemptive. Handles jobs based on the length of their CPU cycle time – uses these lengths to schedule the process with the shortest time first. Optimal – gives the minimum average waiting time for a given set of processes, but only when all jobs are available at the same time and the CPU estimates are available and accurate. Doesn’t work in interactive systems because users don’t estimate in advance the CPU time required to run their jobs.
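Non-preemptive SJF on the same illustrative bursts (24, 3, 3) shows the optimality claim: the average wait drops from 17.0 under FCFS to 3.0.

```python
def sjf(bursts):
    """Non-preemptive SJF: all jobs available at t=0, shortest burst runs first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock      # job i starts after all shorter jobs finish
        clock += bursts[i]
    return waits

waits = sjf([24, 3, 3])              # waits per job: 6, 0, 3
avg_wait = sum(waits) / len(waits)   # average waiting time: 3.0
```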
  • 31. Shortest Remaining Time First (SRTF). Preemptive version of the SJF algorithm. The processor is allocated to the job closest to completion. This job can be preempted if a newer job in the READY queue has a shorter “time to completion”. Can’t be implemented in interactive systems – it requires advance knowledge of the CPU time required to finish each job. SRTF involves more overhead than SJF: the OS monitors CPU time for all jobs in the READY queue and performs context switching.
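A unit-time SRTF simulation illustrating the preemption: the arrival times and bursts below are invented. Job 1 arrives one tick after job 0 but has less remaining work, so it preempts job 0 and finishes first.

```python
def srtf(jobs):
    """Preemptive SRTF simulated one time unit at a time.
    jobs: list of (arrival, burst). Returns each job's completion time."""
    remaining = [burst for _, burst in jobs]
    done = [None] * len(jobs)
    clock, finished = 0, 0
    while finished < len(jobs):
        # Among arrived, unfinished jobs, pick the one closest to completion.
        ready = [i for i, (arrival, _) in enumerate(jobs)
                 if arrival <= clock and remaining[i] > 0]
        if not ready:
            clock += 1        # CPU idles until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1     # run the chosen job for one time unit
        clock += 1
        if remaining[i] == 0:
            done[i] = clock
            finished += 1
    return done

# Job 0: arrives t=0, burst 8. Job 1: arrives t=1, burst 4 -> preempts job 0.
done = srtf([(0, 8), (1, 4)])   # completion times: job 0 at 12, job 1 at 5
```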
  • 32. Round Robin (RR) FCFS with Preemption. Used extensively in interactive systems because it’s easy to implement. Isn’t based on job characteristics but on a predetermined slice of time that’s given to each job.  Ensures CPU is equally shared among all active processes and isn’t monopolized by any one job. Time slice is called a time quantum  size crucial to system performance (100 ms to 1-2 secs)
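The time-quantum mechanism described above can be sketched with a FIFO queue: a job that exhausts its quantum goes to the back of the line. The bursts and the quantum of 4 are illustrative values.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin: each job gets at most `quantum` time units per turn.
    All jobs arrive at t=0; returns each job's completion time."""
    queue = deque(enumerate(bursts))
    done = [None] * len(bursts)
    clock = 0
    while queue:
        i, rem = queue.popleft()
        run = min(quantum, rem)
        clock += run
        if rem > run:
            queue.append((i, rem - run))   # time slice expired: requeue at the back
        else:
            done[i] = clock                # job finished within its slice
    return done

done = round_robin([24, 3, 3], quantum=4)  # completion times: 30, 7, 10
```

Notice that the short jobs finish at t=7 and t=10 instead of waiting behind the 24-unit job, which is exactly why RR suits interactive systems.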
  • 33. Priority scheduling Non-preemptive. Gives preferential treatment to important jobs.  Programs with highest priority are processed first.  Aren’t interrupted until CPU cycles are completed or a natural wait occurs. If 2+ jobs with equal priority are in READY queue, processor is allocated to one that arrived first (first come first served within priority). Many different methods of assigning priorities by system administrator or by Processor Manager.
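Non-preemptive priority scheduling with first-come-first-served tie-breaking, as described above. The convention that a lower number means higher priority, and the job mix itself, are assumptions for the example.

```python
def priority_schedule(jobs):
    """Non-preemptive priority scheduling, all jobs available at t=0.
    jobs: list of (priority, burst); lower number = higher priority.
    Equal priorities are served in arrival (list) order."""
    order = sorted(range(len(jobs)), key=lambda i: (jobs[i][0], i))
    waits, clock = [0] * len(jobs), 0
    for i in order:
        waits[i] = clock      # the job runs to completion once started
        clock += jobs[i][1]
    return waits

# Jobs as (priority, burst): two priority-3 jobs tie, so arrival order decides.
waits = priority_schedule([(3, 10), (1, 1), (3, 2)])  # waits: 1, 0, 11
```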
  • 34. Multi-level queue
  • 35. Multi-level queue
  • 36. Multi-level queue
  • 37. Multi-level feedback queue (MLFQ)
  • 38. Multi-level feedback queue (MLFQ) : Example
  • 39. Queuing and scheduler. As one program finishes processing and space becomes available, which program is loaded into memory next? This decision typically involves two separate modules: a queuing routine and a scheduler.
  • 40. Queuing and scheduler 1) As programs enter the system, they are placed on a queue by the queuing routine. 2) When space becomes available, the scheduler selects a program from the queue and loads it into memory.
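The two steps above can be sketched as two small functions: one that queues arriving programs and one that loads them when memory is free. The program names, sizes, and the 100-unit memory limit are all made up for the example.

```python
from collections import deque

MEMORY = 100  # hypothetical total memory available for user programs

def enqueue(queue, program, size):
    """Queuing routine: place an arriving program on the queue."""
    queue.append((program, size))

def schedule(queue, free):
    """Scheduler: load queued programs, first come first served, while they fit."""
    loaded = []
    while queue and queue[0][1] <= free:
        program, size = queue.popleft()
        loaded.append(program)
        free -= size
    return loaded, free

q = deque()
for name, size in [("editor", 40), ("payroll", 50), ("backup", 30)]:
    enqueue(q, name, size)

loaded, free = schedule(q, MEMORY)  # editor and payroll fit; backup keeps waiting
```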
  • 41. Multiprogramming and time sharing. A time-sharing system allows multiple users to interact with a computer at the same time. Multiprogramming allows multiple processes to be active at once, which gave programmers the ability to interact with the computer system directly while still sharing its resources. In a time-sharing system, each user has his or her own virtual machine, in which all system resources are (in effect) available for use.