15. Computer Systems Basic Software 1


  • Text Page 92 (Figure 3-1-2) Reading 1
  • JCL: the function of job management is to decode and execute the detailed instructions written in the JCL. SPOOL: an indispensable function in multi-programming environments. Job scheduling: executed by the job scheduler, a dedicated program incorporated in the OS
  • The buffer provides a waiting station where data can rest while the slower device catches up
  • Execution control / state transition: the processor becomes free when the process being executed turns into the wait status; other processes can be executed during this time, allowing multiple processes to run (multitasking)
  • Preemption: once the time has expired, the processor is interrupted to let another process with the same priority access the processor. If a second process with higher priority is ready to execute, the OS interrupts the currently executing process and switches to the process with higher priority. Round robin: once the time is up, execution is halted and the process is sent to the last position in the process queue
  • A user process calls on the OS to perform some function requiring privileged instructions; this is done by means of a supervisor call (SVC) or system call. Control also passes to the OS when an interruption occurs or when an error condition occurs in a user process (program interrupt)
  • Refer to Text Page 102
  • Paging: exchange of pages between the main storage unit and the auxiliary storage device (refer to Text Page 103, Figure 3-1-18). Address translation: an issue that often arises when paging is performed is that the page-in address in the main storage unit is unknown (refer to Text Page 104, Figure 3-1-19). Segmentation paging (refer to Text Page 104, Figure 3-1-20). Page replacement: Least Recently Used (LRU) method; First-in First-out (FIFO) method
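The two page replacement methods named in the note above can be sketched as short Python simulations (illustrative only; the reference string below is a common textbook example, not taken from the text):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults with First-in First-out replacement."""
    mem, order, faults = set(), deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:          # evict the oldest-loaded page
                mem.discard(order.popleft())
            mem.add(page)
            order.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults with Least Recently Used replacement."""
    mem, faults = OrderedDict(), 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:          # evict the least recently used
                mem.popitem(last=False)
            mem[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))
```

On this reference string with 3 frames the two policies differ (FIFO: 9 faults, LRU: 10), which is why the choice of replacement method matters.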
    1. 1. Part 2: Computer Systems Chapter 2: Basic Software (Text No. 1 Chapter 3)
    2. 2. Introduction <ul><li>This chapter describes the mechanism and functions of the operating system (control program), which performs different kinds of control/management </li></ul><ul><ul><li>Understand the software names and classifications, functions and roles, including the relation with the hardware and the user </li></ul></ul><ul><ul><li>Understand the reasons why the operating system (OS) is necessary, its roles, structure, functions, etc. </li></ul></ul><ul><ul><li>Understand the types and characteristics of the major OS </li></ul></ul>
    3. 3. Software <ul><li>Programs that control the operations of the computer and its devices </li></ul><ul><ul><li>Starting up the computer </li></ul></ul><ul><ul><li>Opening, executing, and running applications </li></ul></ul><ul><ul><li>Disk formatting </li></ul></ul><ul><ul><li>File compression </li></ul></ul><ul><ul><li>Backups </li></ul></ul>
    4. 4. Operating System Position (Diagram: the application and application software run on top of the operating system, which runs on the hardware)
    5. 5. Operating System “A set of system software routines that sits between the application program and the hardware.” (Diagram: user → application program → operating system → hardware)
    6. 6. Operating System <ul><li>Since the OS serves as a hardware/software interface, application programmers and users rarely communicate directly with the hardware (this simplifies programming) </li></ul><ul><li>Acts as a repository for commands and shared routines, and defines a platform for constructing and executing application software </li></ul>
    7. 7. OS Configuration and Functions <ul><li>The OS was “born” for the purpose of having the computer prepare the data to be processed and control the execution process by itself. </li></ul><ul><li>The main concern and consideration of users was how to operate the extremely expensive machines </li></ul>
    8. 8. OS Configuration and Functions OS role Efficient use of resources Consecutive job processing Multi-programming Reduction of the response time Improvement of reliability
    9. 9. OS Configuration and Functions <ul><li>1. OS Role </li></ul><ul><ul><li>Efficient use of the resources </li></ul></ul><ul><ul><ul><li>Efficiently uses resources (e.g. processor, memory, storage devices, I/O devices, application software, other components of the computer system) without relying on human intervention or wasting these resources </li></ul></ul></ul><ul><ul><li>Consecutive job processing </li></ul></ul><ul><ul><ul><li>Implements automatic processing for the work (jobs) done in the computer </li></ul></ul></ul><ul><ul><ul><li>Minimizes or eliminates human involvement to increase processing efficiency </li></ul></ul></ul><ul><ul><li>Multi-programming </li></ul></ul><ul><ul><ul><li>Processing multiple jobs with the same processor </li></ul></ul></ul><ul><ul><ul><li>Enhances computer processing efficiency by minimizing or eliminating processor idle time </li></ul></ul></ul>
    10. 10. Consecutive Job Processing
    11. 11. Multi-programming
    12. 12. OS Configuration and Functions <ul><ul><li>Reduction of the response time </li></ul></ul><ul><ul><ul><li>Enhances services (reduce waiting time) </li></ul></ul></ul><ul><ul><ul><li>E.g. Online transaction processing systems </li></ul></ul></ul><ul><ul><li>Improvement of reliability </li></ul></ul><ul><ul><ul><li>Utility programs </li></ul></ul></ul><ul><ul><li>Others </li></ul></ul><ul><ul><ul><li>User friendly </li></ul></ul></ul><ul><ul><ul><li>“extensibility” of the OS (e.g. .Net framework) </li></ul></ul></ul>
    13. 13. OS Configuration and Functions <ul><li>2. OS Configuration </li></ul><ul><ul><li>Components of the OS are configured to work together for its complex and wide range of functions </li></ul></ul>(Diagram: the OS consists of the control program, service programs, and general-purpose language processors, running over the hardware)
    14. 14. OS Configuration and Functions <ul><li>3. OS Functions </li></ul><ul><ul><li>The control program (nucleus or kernel) is equipped with diverse functions, aimed at enabling efficient use of hardware </li></ul></ul><ul><ul><ul><li>Job Management Function </li></ul></ul></ul><ul><ul><ul><li>Process Management Function </li></ul></ul></ul><ul><ul><ul><li>Data Management Function </li></ul></ul></ul><ul><ul><ul><li>Memory Management Function </li></ul></ul></ul><ul><ul><ul><li>Operation Management Function </li></ul></ul></ul><ul><ul><ul><li>Failure Management Function </li></ul></ul></ul><ul><ul><ul><li>Input/Output Management Function </li></ul></ul></ul><ul><ul><ul><li>Communication Management Function </li></ul></ul></ul>
    15. 15. OS Functions
    16. 16. Job Management <ul><li>To improve the computer system processing capacity by performing consecutive processing of the job </li></ul><ul><ul><li>Within the OS, routines that dispatch, queue, schedule, load, initiate, and terminate jobs or tasks </li></ul></ul><ul><ul><li>Concerned with job-to-job and task-to-task transition </li></ul></ul><ul><ul><li>JCL and SPOOL </li></ul></ul>
    17. 17. Job Management <ul><li>1. Job control language (JCL) </li></ul><ul><ul><li>Often used in larger computer systems, JCL is any control language that controls the execution of applications </li></ul></ul><ul><ul><li>Need to supply the information that the job requires and instruct the computer what to do with this information </li></ul></ul><ul><ul><li>This includes information about: </li></ul></ul><ul><ul><ul><li>the program or procedure to be executed </li></ul></ul></ul><ul><ul><ul><li>input data </li></ul></ul></ul><ul><ul><ul><li>output data </li></ul></ul></ul><ul><ul><ul><li>output reports </li></ul></ul></ul>
    18. 18. Job Management <ul><ul><li>A JCL statement also provides information about who the job belongs to and which account to charge for the job. </li></ul></ul><ul><ul><li>Syntax of JCL differs depending on the OS: </li></ul></ul><ul><ul><ul><li>JOB statement - submit a job to the operating system </li></ul></ul></ul><ul><ul><ul><li>EXEC statement - control the system’s processing of the job </li></ul></ul></ul><ul><ul><ul><li>DD statement - request resources needed to run the job (location of files required) </li></ul></ul></ul>
    19. 19. JCL
    20. 20. Job Management <ul><li>Job Control Language </li></ul>(Diagram: command → translation → object module → linkage editing → load module → execution → result)
    21. 21. Job Management <ul><li>2. SPOOL (Simultaneous Peripheral Operations Online) </li></ul><ul><ul><li>Used especially in multiprogramming environments </li></ul></ul><ul><ul><li>Discrepancy between I/O transfer rate and CPU processing speed </li></ul></ul><ul><ul><li>Refers to putting jobs in a buffer, a special area in memory or on a disk where a device can access them when it is ready </li></ul></ul><ul><ul><li>Allow devices to access data at different rates </li></ul></ul><ul><ul><li>Example: Print Spooling </li></ul></ul><ul><ul><ul><li>Documents are loaded into a buffer (usually an area on a disk), and then the printer pulls them off the buffer at its own rate </li></ul></ul></ul><ul><ul><ul><li>Able to perform other operations on the computer while the printing takes place as documents are in a buffer where they can be accessed by the printer </li></ul></ul></ul><ul><ul><ul><li>Able to place a number of print jobs on a queue instead of waiting for each one to finish before specifying the next one </li></ul></ul></ul>
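The print spooling idea above can be sketched with a queue acting as the buffer between the fast program and the slow device (a toy Python sketch; the file names and the sleep-based "printing" delay are made up for illustration):

```python
import queue
import threading
import time

spool = queue.Queue()   # the SPOOL file: a buffer between the program and the printer
printed = []

def printer():
    """The slow device pulls jobs off the buffer at its own rate."""
    while True:
        job = spool.get()
        if job is None:              # sentinel marking the end of the job stream
            break
        time.sleep(0.01)             # simulate slow printing
        printed.append(job)

t = threading.Thread(target=printer)
t.start()
# The running program drops jobs into the buffer and moves on at once,
# instead of waiting for each document to finish before submitting the next.
for doc in ["report.txt", "invoice.pdf", "memo.doc"]:
    spool.put(doc)
spool.put(None)
t.join()
print(printed)
```

Because the queue decouples the two sides, several print jobs can be queued back-to-back while printing proceeds in the background, which is exactly the behaviour the slide describes.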
    22. 22. Job Management <ul><li>SPOOL (Simultaneous Peripheral Operations OnLine) </li></ul>Running program SPOOL file Output Devices
    23. 23. Print Spooling (Diagram: print jobs to be printed enter the print queue on disk; the print spooler on the server feeds the jobs being printed to the laser printer)
    24. 24. Job Management <ul><li>3. Job scheduling </li></ul><ul><ul><li>Executing the right job, on the right day, at the right time, after the right dependency </li></ul></ul><ul><ul><li>A job scheduler can initiate and manage jobs automatically by processing prepared job control language statements or through equivalent interaction with a human operator. </li></ul></ul><ul><ul><li>Some features that may be found in a job scheduler include: </li></ul></ul><ul><ul><ul><li>Continuous automatic monitoring of jobs and completion notification </li></ul></ul></ul><ul><ul><ul><li>Event-driven job scheduling </li></ul></ul></ul><ul><ul><ul><li>Performance monitoring </li></ul></ul></ul><ul><ul><ul><li>Report scheduling </li></ul></ul></ul>
    25. 25. Job Scheduling
    26. 26. Job Management <ul><li>Job Scheduler </li></ul>(Diagram: the reader transfers data from the input device to a SPOOL file and the job queue; the initiator loads jobs into memory for execution; the writer sends results via a SPOOL file to the output device; the terminator ends the job)
    27. 27. Device Management <ul><li>Interface for device drivers </li></ul><ul><li>Communication between a device and the computer </li></ul><ul><ul><li>Inflow (buffer) and outflow (queue) control </li></ul></ul><ul><ul><li>Multiple formats </li></ul></ul><ul><li>Print Manager </li></ul>
    28. 28. Questions <ul><li>What is an operating system? </li></ul><ul><li>Describe SPOOLING. </li></ul><ul><li>Describe the OS roles. </li></ul><ul><li>Describe Job Management. </li></ul>
    29. 29. Where to Get More Information <ul><li>http://www.okstate.edu/cis_info/cis_manual/jcl_toc.html </li></ul><ul><li>http://whatis.techtarget.com/definition/0,,sid9_gci214229,00.html </li></ul>
    30. 30. Operating System <ul><li>Operating System </li></ul><ul><ul><li>OS Configuration and Functions </li></ul></ul><ul><ul><ul><li>OS Role </li></ul></ul></ul><ul><ul><ul><li>OS Configuration </li></ul></ul></ul><ul><ul><ul><li>OS Functions </li></ul></ul></ul><ul><ul><li>Job Management </li></ul></ul><ul><ul><ul><li>Job Control Language (JCL) </li></ul></ul></ul><ul><ul><ul><li>Simultaneous Peripheral Operations Online (SPOOL) </li></ul></ul></ul><ul><ul><ul><li>Job Scheduling </li></ul></ul></ul>
    31. 31. Process Management <ul><li>Main purpose is to efficiently use the processor </li></ul><ul><ul><li>1. Execution Control </li></ul></ul><ul><ul><ul><li>a. State transition </li></ul></ul></ul><ul><ul><ul><ul><li>A job is converted into a processing unit called a process, which is processed while repeating state transitions under process management </li></ul></ul></ul></ul>(State transition diagram: 1. job submission (job step); 2/5. executable (ready) status; 3. running status; 4. wait status; transitions: process generation, dispatching, timer interrupt, SVC interrupt, I/O interrupt, process termination)
    32. 32. <ul><ul><ul><li>b. Dispatcher </li></ul></ul></ul><ul><ul><ul><ul><li>The OS routine that determines which application routine or task the processor will execute next </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>i. Preemption </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>A form of multitasking in which each process is given a set amount of time to access the processor (priority-based queue) </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>ii. Round robin </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>A processor management technique in which each program is limited to an equal time slice (standard-based queue) </li></ul></ul></ul></ul></ul>Process Management
    33. 33. Process Management <ul><li>After a job is submitted </li></ul><ul><ul><li>A process is initiated </li></ul></ul><ul><ul><li>The process may lead to multiple processes (or threads) being created </li></ul></ul><ul><ul><li>All processes are assigned a certain amount of CPU time, enabling multi-tasking </li></ul></ul>
    34. 34. CPU Scheduling <ul><li>Deals with the problem of deciding which process in the ready queue is to be allocated the CPU </li></ul><ul><ul><li>First-Come, First-Served </li></ul></ul><ul><ul><li>Shortest-Job-First </li></ul></ul><ul><ul><li>Priority </li></ul></ul><ul><ul><li>Round-Robin </li></ul></ul><ul><ul><li>Multilevel Queue </li></ul></ul><ul><li>Multiprocessor Scheduling </li></ul>
    35. 35. First-Come, First-Served <ul><li>Simplest CPU scheduling algorithm </li></ul><ul><li>Process that requests the CPU first is allocated the CPU first </li></ul><ul><li>Easily managed with a queue (FIFO) </li></ul><ul><ul><li>When a process enters the ready queue, its PCB is linked onto the tail of the queue </li></ul></ul><ul><ul><li>When the CPU is free, it is allocated to the process at the head of the queue </li></ul></ul><ul><ul><li>The running process is then removed from the queue </li></ul></ul><ul><li>Advantages </li></ul><ul><ul><li>Simple to write and understand </li></ul></ul><ul><li>Disadvantages </li></ul><ul><ul><li>Average waiting time often quite long </li></ul></ul>
    36. 36. First-Come, First-Served <ul><li>If the processes arrive in the order P1, P2 and P3, and are served in FCFS order </li></ul>(Gantt chart: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30) <ul><li>Waiting time: P1(0); P2(24); P3(27) </li></ul><ul><li>Average waiting time = (0+24+27)/3 = 17ms </li></ul>(Table: burst times P1 = 24ms, P2 = 3ms, P3 = 3ms)
    37. 37. First-Come, First-Served <ul><li>If the processes arrive in the order P2, P3 and P1, and are served in FCFS order </li></ul>(Gantt chart: P2 from 0 to 3, P3 from 3 to 6, P1 from 6 to 30) <ul><li>Average waiting time = (0+3+6)/3 = 3ms </li></ul>(Table: burst times P1 = 24ms, P2 = 3ms, P3 = 3ms)
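The two FCFS examples above can be reproduced with a few lines of Python (a sketch assuming all processes arrive at time 0, as the slides do):

```python
def fcfs_waits(bursts):
    """FCFS waiting times: each process waits for the bursts of all
    processes that arrived before it in the ready queue."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # waits until every earlier burst has finished
        elapsed += b
    return waits

order1 = fcfs_waits([24, 3, 3])   # arrival order P1, P2, P3
order2 = fcfs_waits([3, 3, 24])   # arrival order P2, P3, P1
print(sum(order1) / 3, sum(order2) / 3)
```

Running it confirms the slides' averages: 17 ms for the first order and 3 ms for the second, showing how strongly FCFS waiting time depends on arrival order.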
    38. 38. First-Come, First-Served <ul><li>Issues </li></ul><ul><ul><li>Average waiting time </li></ul></ul><ul><ul><ul><li>generally not minimal and varies substantially if the process CPU-burst times vary greatly </li></ul></ul></ul><ul><ul><li>Non-preemptive scheduling </li></ul></ul><ul><ul><ul><li>Once the CPU has been allocated to a process, that process keeps the CPU until it releases the CPU, either by terminating or by requesting I/O </li></ul></ul></ul><ul><ul><ul><li>Troublesome for time-sharing systems where each user gets a share of the CPU at regular intervals </li></ul></ul></ul>
    39. 39. Shortest-Job-First <ul><li>Associates with each process the length of the latter’s next CPU burst </li></ul><ul><li>When the CPU is available, it is assigned to the process that has the smallest next CPU burst </li></ul><ul><li>If two processes have the same length, FCFS scheduling is used to break the tie </li></ul><ul><li>Also known as shortest-next-CPU-burst </li></ul><ul><li>Advantages </li></ul><ul><ul><li>Optimal average waiting time </li></ul></ul><ul><li>Disadvantages </li></ul><ul><ul><li>Difficulty in predicting next CPU burst </li></ul></ul>
    40. 40. Shortest-Job-First (Gantt chart: P4 from 0 to 3, P1 from 3 to 9, P3 from 9 to 16, P2 from 16 to 24) <ul><li>Average waiting time = (3+16+9+0)/4 = 7ms </li></ul>(Table: burst times P1 = 6ms, P2 = 8ms, P3 = 7ms, P4 = 3ms)
    41. 41. Shortest-Job-First <ul><li>Issues </li></ul><ul><ul><li>Knowing the length of the next CPU request </li></ul></ul><ul><ul><ul><li>For long-term (batch) scheduling, the process time limit that a user specifies when s/he submits the job </li></ul></ul></ul><ul><ul><ul><li>Users are motivated to estimate process time limit accurately </li></ul></ul></ul><ul><ul><ul><ul><li>Lower value = higher response rate </li></ul></ul></ul></ul><ul><ul><ul><li>For short-term scheduling, only approximations are possible </li></ul></ul></ul><ul><ul><ul><ul><li>Premise: Expect the next CPU burst to be similar in length to the previous ones </li></ul></ul></ul></ul><ul><ul><ul><ul><li>By computing an approximation of the length of the next CPU burst, we can pick the process with the shortest predicted CPU burst </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Exponential average formula </li></ul></ul></ul></ul><ul><ul><li>Can be pre-emptive or non-preemptive </li></ul></ul>
    42. 42. Shortest-Job-First (Pre-emptive) (Gantt chart: P1 from 0 to 1, P2 from 1 to 5, P4 from 5 to 10, P1 from 10 to 17, P3 from 17 to 26) <ul><li>Process P1 is started at time 0, since it is the only process in the queue. </li></ul><ul><li>Process P2 arrives at time 1. The remaining time for P1 (7) is larger than the time for P2 (4). So, P1 is pre-empted and P2 is scheduled. </li></ul><ul><li>Average waiting time = ((10-1)+(1-1)+(17-2)+(5-3))/4 = 6.5 ms </li></ul>(Table: arrival and burst times P1 = 0, 8; P2 = 1, 4; P3 = 2, 9; P4 = 3, 5)
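The pre-emptive schedule above can be checked with a tick-by-tick simulation (a toy Python sketch using the slide's arrival and burst times; ties are broken by dictionary order):

```python
def srtf_avg_wait(procs):
    """Shortest-Remaining-Time-First: at every time tick, run the arrived
    process with the least remaining burst. procs: {name: (arrival, burst)}."""
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= t]
        p = min(ready, key=lambda q: remaining[q])   # pre-emption decision
        remaining[p] -= 1
        t += 1
        if remaining[p] == 0:
            del remaining[p]
            finish[p] = t
    # waiting time = turnaround time - burst time
    waits = [finish[p] - a - b for p, (a, b) in procs.items()]
    return sum(waits) / len(waits)

print(srtf_avg_wait({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
```

The simulation reproduces the slide's 6.5 ms average waiting time.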
    43. 43. Shortest-Job-First (Non Pre-emptive) <ul><li>Calculate Average Waiting Time. </li></ul>(Table: arrival and burst times P1 = 0, 8; P2 = 1, 4; P3 = 2, 9; P4 = 3, 5)
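One possible way to work the exercise above is a small Python sketch of non-pre-emptive SJF (the 7.75 ms result below follows from the slide's data, not from the text, so treat it as a check on your own calculation):

```python
def sjf_avg_wait(procs):
    """Non-pre-emptive SJF: whenever the CPU frees up, pick the arrived
    process with the shortest burst and run it to completion.
    procs: {name: (arrival, burst)}."""
    pending, t, waits = dict(procs), 0, {}
    while pending:
        ready = [p for p in pending if pending[p][0] <= t]
        if not ready:                       # CPU idles until the next arrival
            t = min(a for a, b in pending.values())
            continue
        p = min(ready, key=lambda q: pending[q][1])
        arrival, burst = pending.pop(p)
        waits[p] = t - arrival              # waited from arrival until dispatch
        t += burst                          # runs to completion (no pre-emption)
    return sum(waits.values()) / len(waits)

print(sjf_avg_wait({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}))
```

Here P1 runs first (0 to 8), then P2, P4, P3 in burst order, giving waits of 0, 7, 15 and 9 ms and an average of 7.75 ms, worse than the 6.5 ms of the pre-emptive version on the same data.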
    44. 44. Priority Scheduling <ul><li>FCFS = equal priority scheduling algorithm </li></ul><ul><li>SJF = general priority scheduling algorithm </li></ul><ul><ul><li>priority (p) is the inverse of the predicted next CPU burst </li></ul></ul><ul><li>Priority </li></ul><ul><ul><li>Domain: fixed range of numbers e.g. 0 to 7 </li></ul></ul><ul><ul><li>To simplify discussion </li></ul></ul><ul><ul><ul><li>High priority (0) </li></ul></ul></ul><ul><ul><ul><li>Low priority (7) </li></ul></ul></ul>
    45. 45. Priority Scheduling (Table of example Process / Priority / Burst Time values for P1 to P5)
    46. 46. Priority Scheduling <ul><li>Priorities can be defined: </li></ul><ul><ul><li>Externally </li></ul></ul><ul><ul><ul><li>External to the operating system </li></ul></ul></ul><ul><ul><ul><li>For example, importance of the process, type and amount of funds paid for computer use, department, political/hierarchical factors </li></ul></ul></ul><ul><ul><li>Internally </li></ul></ul><ul><ul><ul><li>Measurable quantity or quantities to compute the priority of a process </li></ul></ul></ul><ul><ul><ul><li>For example, time limits, memory requirements, number of open files, ratio of average I/O burst to average CPU burst </li></ul></ul></ul>
    47. 47. Priority Scheduling <ul><li>Pre-emptive </li></ul><ul><ul><li>When a process arrives at the ready queue, its priority is compared with currently running processes </li></ul></ul><ul><ul><li>Preempt the CPU if the priority of the newly arrived process is higher </li></ul></ul><ul><li>Non pre-emptive </li></ul><ul><ul><li>Simply put the new process at the head of the ready queue </li></ul></ul>
    48. 48. Priority Scheduling <ul><li>Issues </li></ul><ul><ul><li>Indefinite blocking or Starvation </li></ul></ul><ul><ul><ul><li>Leave some low-priority processes waiting indefinitely for the CPU </li></ul></ul></ul><ul><ul><ul><li>In a heavily loaded environment, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU </li></ul></ul></ul><ul><ul><ul><li>Situations: </li></ul></ul></ul><ul><ul><ul><ul><li>Process will eventually be run (Sun 3am) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Computer system will eventually crash and lose all unfinished low-priority processes </li></ul></ul></ul></ul><ul><ul><ul><li>Solutions: </li></ul></ul></ul><ul><ul><ul><ul><li>Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time </li></ul></ul></ul></ul><ul><ul><ul><ul><li>For example, if priorities range from 127 (low) to 0 (high), decrement the priority by 1 every 15 minutes. Hence, no more than 32 hours for a priority 127 process to age to a priority 0 process. </li></ul></ul></ul></ul>
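The aging rule in the example above can be sketched directly (assuming, as the slide does, priorities from 127 low to 0 high and one priority step per 15 minutes of waiting):

```python
def age(priority, waited_minutes, step=15):
    """Aging: raise a waiting process's priority (numerically lower is
    higher) by 1 for every `step` minutes waited, clamped at priority 0."""
    return max(0, priority - waited_minutes // step)

# A priority-127 process reaches priority 0 after 127 * 15 minutes of waiting,
print(age(127, 127 * 15), 127 * 15 / 60)   # i.e. 31.75 hours, under the 32 h bound
```

This is why the slide can bound the worst-case starvation: even the lowest-priority process is guaranteed the top priority within roughly 32 hours.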
    49. 49. Round-Robin <ul><li>Specifically designed for time-sharing systems </li></ul><ul><li>RR = FCFS + Pre-emption </li></ul><ul><li>A small unit of time called time slice or time quantum is defined </li></ul><ul><ul><li>Generally 10 to 100 ms </li></ul></ul><ul><li>Ready queue is a circular queue </li></ul><ul><li>CPU scheduler goes around the ready queue allocating the CPU to each process for a time interval of up to 1 time quantum </li></ul><ul><ul><li>New processes added to the tail of the ready queue </li></ul></ul><ul><ul><li>CPU scheduler picks the first process from the ready queue and sets a timer to interrupt after 1 time quantum and then dispatches the process </li></ul></ul>
    50. 50. RR <ul><li>Situations </li></ul><ul><ul><li>Process may have a CPU burst < 1 time quantum (voluntary CPU release) </li></ul></ul><ul><ul><li>Else, the timer will go off → interrupt to the OS → context switch executed to the next process in the queue → process will be put at the tail of the ready queue </li></ul></ul>
    51. 51. RR <ul><li>With a time quantum of 4ms, calculate the average waiting time. </li></ul>(Table: burst times P1 = 24ms, P2 = 3ms, P3 = 3ms)
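One way to answer the exercise above is a short round-robin simulation (a sketch assuming all three processes arrive at time 0):

```python
from collections import deque

def rr_avg_wait(bursts, quantum):
    """Round robin: each process runs for at most one time quantum, then is
    sent to the last position in the circular ready queue.
    bursts: {name: burst time}."""
    ready = deque(bursts)
    remaining = dict(bursts)
    last_left = {p: 0 for p in bursts}   # when each process last left the CPU
    waits = {p: 0 for p in bursts}
    t = 0
    while ready:
        p = ready.popleft()
        waits[p] += t - last_left[p]     # time spent waiting for this turn
        run = min(quantum, remaining[p])
        t += run
        remaining[p] -= run
        last_left[p] = t
        if remaining[p] > 0:             # quantum expired: back of the queue
            ready.append(p)
    return sum(waits.values()) / len(waits)

print(rr_avg_wait({"P1": 24, "P2": 3, "P3": 3}, 4))
```

With a 4 ms quantum the order is P1 (0 to 4), P2 (4 to 7), P3 (7 to 10), then P1 runs uncontested, giving waits of 6, 4 and 7 ms and an average of 17/3, about 5.67 ms.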
    52. 52. RR <ul><li>Issue </li></ul><ul><ul><li>Choosing the size of the time quantum </li></ul></ul><ul><ul><ul><li>Too small a time quantum and most of the CPU time will be spent context switching </li></ul></ul></ul><ul><ul><ul><li>Too large a time quantum and the algorithm will revert to FCFS </li></ul></ul></ul><ul><ul><ul><li>Rule of thumb – 80% of CPU bursts should be shorter than the time quantum </li></ul></ul></ul>
    53. 53. Multilevel Queue <ul><li>Created for situations in which processes are easily classified into different groups </li></ul><ul><ul><li>For example, foreground (interactive) processes and background (batch) processes </li></ul></ul><ul><ul><ul><li>Different response-time requirements </li></ul></ul></ul><ul><ul><ul><li>Different scheduling needs </li></ul></ul></ul><ul><ul><ul><li>Foreground processes may have priority (externally defined) over background processes </li></ul></ul></ul>
    54. 54. Multilevel Queue <ul><li>Partition the ready queue into several separate queues </li></ul><ul><ul><li>For example (from highest to lowest priority) </li></ul></ul><ul><ul><ul><li>System processes </li></ul></ul></ul><ul><ul><ul><li>Interactive processes </li></ul></ul></ul><ul><ul><ul><li>Interactive editing processes </li></ul></ul></ul><ul><ul><ul><li>Batch processes </li></ul></ul></ul><ul><ul><ul><li>Student processes </li></ul></ul></ul><ul><ul><li>Processes are permanently assigned to one queue based on some property (memory size, process priority, or process type) </li></ul></ul>
    55. 55. Multilevel Queue <ul><ul><li>Each queue can use a different scheduling algorithm </li></ul></ul><ul><ul><ul><li>Foreground processes (RR) </li></ul></ul></ul><ul><ul><ul><li>Background processes (FCFS) </li></ul></ul></ul><ul><ul><li>Scheduling between queues </li></ul></ul><ul><ul><ul><li>Generally, high-priority queues have absolute priority over lower-priority queues </li></ul></ul></ul><ul><ul><ul><li>No process in the batch queue could run unless the queues for system processes, interactive processes and interactive editing processes were all empty. </li></ul></ul></ul><ul><ul><ul><li>If an interactive editing process entered the ready queue while a batch process was running, the batch process would be pre-empted. </li></ul></ul></ul><ul><ul><li>Time-slicing between queues </li></ul></ul><ul><ul><ul><li>Foreground queue (80% of CPU to schedule locally using RR) </li></ul></ul></ul><ul><ul><ul><li>Background queue (20% of CPU to schedule locally using FCFS) </li></ul></ul></ul>
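The absolute-priority scheduling between queues can be sketched as follows (a toy dispatcher; the queue names and process names are illustrative, and each queue here is plain FCFS rather than a per-queue algorithm):

```python
from collections import deque

# Queues listed from highest to lowest priority; the dispatcher always
# serves the highest non-empty queue (absolute priority between queues).
queues = {
    "system":      deque(),
    "interactive": deque(),
    "batch":       deque(),
}

def submit(kind, proc):
    queues[kind].append(proc)      # permanent assignment to one queue

def dispatch():
    for level in queues.values():  # scan from highest priority downwards
        if level:
            return level.popleft()
    return None                    # nothing is ready to run

submit("batch", "payroll")
submit("interactive", "editor")
submit("system", "swapper")
print(dispatch(), dispatch(), dispatch())
```

Even though the batch job was submitted first, the system and interactive processes are dispatched ahead of it, illustrating why low queues can starve without time-slicing between queues.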
    56. 56. Multi-Processor Scheduling <ul><li>More complex than single processor scheduling </li></ul><ul><li>Discussion assumes homogeneous processors </li></ul><ul><ul><li>Identical in functionality </li></ul></ul><ul><ul><li>Any processor can then be used to run any process in the queue </li></ul></ul><ul><li>Possibilities </li></ul><ul><ul><li>Separate queue for each processor </li></ul></ul><ul><ul><ul><li>One could be empty while another would be full </li></ul></ul></ul><ul><ul><li>One queue for all (common ready queue) </li></ul></ul><ul><ul><ul><li>Self-scheduling </li></ul></ul></ul><ul><ul><ul><li>Master-slave scheduling </li></ul></ul></ul>
    57. 57. Process Control Block <ul><li>All of the information needed to keep track of a process when switching </li></ul><ul><li>Typically contains </li></ul><ul><ul><li>Process ID </li></ul></ul><ul><ul><li>Last instruction and data pointers </li></ul></ul><ul><ul><li>Register contents </li></ul></ul><ul><ul><li>States of various flags and switches </li></ul></ul><ul><ul><li>Upper and lower bounds of the memory </li></ul></ul><ul><ul><li>List of files opened </li></ul></ul><ul><ul><li>Priority of the process </li></ul></ul><ul><ul><li>Status of all I/O devices needed </li></ul></ul>
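The fields listed above can be collected into a toy PCB structure (illustrative only; real PCBs carry many more hardware- and OS-specific entries):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PCB:
    """A toy process control block holding the fields the slide lists."""
    pid: int                                         # process ID
    program_counter: int = 0                         # last instruction pointer
    registers: dict = field(default_factory=dict)    # saved register contents
    flags: int = 0                                   # states of flags/switches
    memory_bounds: tuple = (0, 0)                    # lower and upper bound
    open_files: List[str] = field(default_factory=list)
    priority: int = 0
    io_status: dict = field(default_factory=dict)    # status of needed devices
    state: str = "ready"                             # ready / running / waiting

pcb = PCB(pid=42, priority=3, open_files=["a.txt"])
print(pcb.pid, pcb.state)
```

On a context switch, the OS would save the running process's registers and program counter into its PCB and restore them from the PCB of the process being dispatched.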
    58. 58. Thrashing <ul><li>OS requires some CPU cycles to maintain each PCB and switching between processes </li></ul><ul><li>When vast majority of the CPU time is spent on swapping between processes rather than executing processes, this condition is known as thrashing </li></ul>
    59. 59. <ul><ul><ul><li>Kernel and interruption control </li></ul></ul></ul><ul><ul><ul><ul><li>Interruption is performed to control the process execution (when a process transition or anomaly occurs) </li></ul></ul></ul></ul><ul><ul><ul><ul><li>The program that handles the interruption is called the ‘interrupt routine’ </li></ul></ul></ul></ul><ul><ul><ul><ul><li>The program is restarted once the handling is completed </li></ul></ul></ul></ul>Process Management (Diagram: program execution halts when anomaly A occurs and interrupt routine A runs, then the program restarts; likewise it halts and restarts for anomaly B and interrupt routine B)
    60. 60. <ul><ul><ul><ul><li>Central part of the OS performing the interruption control is called Kernel </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Depending on the location where the anomaly occurs, the interruption is divided into </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Internal Interrupt </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>External Interrupt </li></ul></ul></ul></ul></ul>Process Management
    61. 61. Internal Interrupt <ul><li>Interruptions that occur due to errors of the program itself </li></ul><ul><ul><li>Program interrupt </li></ul></ul><ul><ul><ul><li>Interruption occurs due to an error generated during the execution of a program e.g. division by zero </li></ul></ul></ul><ul><ul><li>Supervisor call (SVC) interrupt </li></ul></ul><ul><ul><ul><li>When data input is requested during the execution of a program </li></ul></ul></ul><ul><li>Saves value of program counter for the interrupted process and transfers control to a fixed location in memory (This location contains code known as the interrupt routine or the interrupt service routine or the interrupt handler) </li></ul>
    62. 62. External Interrupt <ul><li>Interruption that occurs due to external factors and not due to the program </li></ul><ul><li>The following external interrupts exist: </li></ul><ul><ul><li>Input/output interrupt </li></ul></ul><ul><ul><ul><li>an anomaly occurs in the input/output process completion report or in an input device or output device during processing </li></ul></ul></ul><ul><ul><li>Machine check interrupt </li></ul></ul><ul><ul><ul><li>a malfunction of the processor or the main storage unit, or an anomaly in the power supply, etc. happens. The failure occurrence is reported to the operating system by the processor. </li></ul></ul></ul><ul><ul><li>Timer interrupt </li></ul></ul><ul><ul><ul><li>generated by the timer contained inside the processor. Programs exceeding the execution time specified with the time sharing process, etc. are subject to forced termination by this interruption. Likewise, a timer interrupt occurs when programs or routines that never end, called infinite loops, must be forcibly terminated. </li></ul></ul></ul><ul><ul><li>Console interrupt </li></ul></ul><ul><ul><ul><li>a special process request was indicated from the operator console during the execution of a program. </li></ul></ul></ul>
    63. 63. Summary of Interrupts <ul><li>Internal interrupt </li></ul><ul><ul><li>Program </li></ul></ul><ul><ul><li>Supervisor call </li></ul></ul><ul><li>External interrupt </li></ul><ul><ul><li>Input/output device </li></ul></ul><ul><ul><li>Console (e.g. shutting down a server) </li></ul></ul><ul><ul><li>Hardware (e.g. low battery power) </li></ul></ul><ul><ul><li>Timer </li></ul></ul>
    64. 64. Multi-Processing <ul><li>An asymmetric OS uses one CPU for its own needs and divides application processes among the remaining CPUs </li></ul>(Diagram: CPU 1 runs the OS; CPUs 2, 3 and 4 run application processes)
    65. 65. Multi-Processing <ul><li>Symmetric OS divides itself among the various CPUs, balancing demand versus CPU availability </li></ul>CPU 1 CPU 2 CPU 3 CPU 4
    66. 66. Multi-Threading <ul><li>A multi-threaded application is one that has been specifically designed to break up its instructions into multiple streams, or threads, so that multiple processors (either physical or logical) can process the streams concurrently </li></ul><ul><li>Adobe Photoshop and Windows Movie Maker are among the relatively few mainstream applications that are multi-threaded </li></ul>
    67. 67. Process Management <ul><li>Multi-programming </li></ul><ul><ul><li>A processor management technique that takes advantage of the speed disparity between a computer and its peripheral devices to load and execute two or more programs concurrently </li></ul></ul><ul><ul><li>Multi-tasking = instructions / data from different processes co-resident in memory </li></ul></ul><ul><ul><li>Multi-programming implies multi-tasking, but not vice versa </li></ul></ul>
    68. 68. Multi-programming
    69. 69. Process Management <ul><li>TSS (Time sharing system) </li></ul><ul><ul><li>Multiple, concurrent, interactive users are assigned, in turn, a single time-slice before being forced to surrender the processor to the next user </li></ul></ul><ul><ul><li>Provides the “feel” that one is the only user of the computer </li></ul></ul><ul><li>Exclusive control </li></ul><ul><ul><li>The same resource cannot be used by multiple processes at the same time </li></ul></ul><ul><ul><li>Using semaphores, synchronization among processes is conducted and resource sharing is implemented </li></ul></ul><ul><ul><li>Deadlock occurs when two or more processes each wait for the other to release a resource </li></ul></ul>
    70. 70. Semaphore <ul><li>A way to achieve exclusive control </li></ul><ul><li>Two types of semaphore </li></ul><ul><ul><li>One for the use of a resource </li></ul></ul><ul><ul><li>Another for the release of a resource </li></ul></ul><ul><li>Applications </li></ul><ul><ul><li>Synchronisation among processes </li></ul></ul><ul><ul><li>Sharing resources </li></ul></ul>
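The use/release pair above corresponds to the classic P and V operations. A minimal Python sketch of exclusive control with a binary semaphore (the shared-counter workload is made up for illustration):

```python
import threading

counter = 0
mutex = threading.Semaphore(1)    # binary semaphore guarding the shared resource

def worker():
    global counter
    for _ in range(10_000):
        mutex.acquire()           # P operation: claim the resource
        counter += 1              # critical section: only one process at a time
        mutex.release()           # V operation: release the resource

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)
```

Because every increment happens inside the acquire/release pair, the four workers cannot interleave inside the critical section and the final count is always 40000, demonstrating exclusive control of a shared resource.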
    71. 71. Deadlock
    72. 72. <ul><li>Controls the storage area of the main storage unit </li></ul><ul><li>1. Partition method </li></ul><ul><ul><li>A fixed length unit of memory defined when the OS is first generated or loaded </li></ul></ul><ul><ul><ul><li>Single-partition method </li></ul></ul></ul><ul><ul><ul><ul><li>Divided into areas to store the control program and to store one program </li></ul></ul></ul></ul><ul><ul><ul><li>Multiple partitions method </li></ul></ul></ul><ul><ul><ul><ul><li>Divided and multiple programs are stored in each of the partitions </li></ul></ul></ul></ul><ul><ul><ul><li>Variable partitions method </li></ul></ul></ul><ul><ul><ul><ul><li>Sequentially assigns the area required by application programs in the program storage area </li></ul></ul></ul></ul>Main Memory Management
    73. 73. Main Memory Management <ul><ul><li>Comparison of 3 methods </li></ul></ul>
74. 74. Main Memory Management <ul><li>Compaction </li></ul><ul><ul><li>Under these methods, small unused areas (garbage) accumulate between the partitions of the main storage unit; this phenomenon is called fragmentation </li></ul></ul><ul><ul><li>To resolve fragmentation, the occupied areas are relocated at specific times or intervals so that the free space is merged into one contiguous region; this operation is called compaction </li></ul></ul>
    75. 75. Fragmentation NEW NEW Defragmentation / Compaction Variable partitions Control Program Unused Program
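The compaction operation above can be sketched as a function over a list of memory blocks (the representation and names here are hypothetical, chosen only to illustrate the idea): used blocks slide together, and the scattered free areas merge into one contiguous region at the end.

```python
# Sketch of compaction: memory modeled as an ordered list of
# (tag, size) blocks, where tag "free" marks an unused area.
def compact(blocks):
    # Keep the used blocks in their original order...
    used = [b for b in blocks if b[0] != "free"]
    # ...and merge every free block into one region at the end.
    free_total = sum(size for tag, size in blocks if tag == "free")
    return used + ([("free", free_total)] if free_total else [])

# Fragmented memory: two free holes of 10 and 20 units.
memory = [("prog1", 40), ("free", 10), ("prog2", 30), ("free", 20)]
print(compact(memory))
# -> [('prog1', 40), ('prog2', 30), ('free', 30)]
```

Before compaction, a new 25-unit program could not be placed even though 30 units are free in total; after compaction, the single 30-unit free region can hold it.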
    76. 76. Main Memory Management <ul><li>2. Swapping </li></ul><ul><ul><li>The process of moving program instructions and data between memory and secondary storage as the program is executed </li></ul></ul>Auxiliary storage device Main storage unit Program 1 Swap out Swap in Program 2 Program 1 Control program
    77. 77. Swapping NEW NEW NEW Variable partitions Control Program Unused Program
78. 78. <ul><li>3. Overlay </li></ul><ul><ul><li>A program is broken into logically independent modules (segments), and only the active modules are loaded into memory </li></ul></ul><ul><ul><li>When a module not yet in memory is referenced, it replaces (overlays) a module already in memory </li></ul></ul><ul><ul><li>This allows execution of programs that are larger than the storage capacity of the partitions of the main storage unit </li></ul></ul><ul><ul><li>The program runs as each module is loaded into the main storage unit when needed </li></ul></ul>Main Memory Management
    79. 79. Main Memory Management <ul><ul><ul><li>With overlay structures, only the necessary portions of a program are loaded into memory: </li></ul></ul></ul>The complete program consists of 4 modules Under normal conditions, only modules 1 and 2 are in memory When errors occur, module 3 overlays module 2 At the end-of-job, only modules 1 and 4 are needed Module 4 Module 3 Module 2 Module 1 Module 2 Module 1 Module 3 Module 1 Module 4 Module 1
    80. 80. Overlay Program segments Memory Segment Segment
81. 81. Memory Release <ul><li>When a program ends, the memory assigned to it is released </li></ul><ul><li>When memory cannot be released as it should be, the result is known as a memory leak </li></ul>
82. 82. <ul><li>4. Memory protection </li></ul><ul><ul><ul><li>An OS routine that intervenes if a program attempts to modify (or, sometimes, even to read) the contents of memory locations that do not belong to it, and (usually) terminates the program </li></ul></ul></ul><ul><ul><ul><ul><li>Boundary address method </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>The address range that can be accessed is specified for each program to be executed </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Every access is checked against this range during execution </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Lower and upper limit boundaries are kept per process </li></ul></ul></ul></ul></ul>Main Memory Management
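The boundary address check amounts to a simple range comparison per access, as in this minimal sketch (function and parameter names are hypothetical):

```python
# Sketch of the boundary address method: each process has lower and
# upper limit registers, and every memory access is checked against
# that range before it is allowed.
def check_access(addr, lower, upper):
    """Return True if addr lies within the process's permitted range."""
    return lower <= addr <= upper

# A process allowed to use addresses 0x2000..0x2FFF:
assert check_access(0x2100, lower=0x2000, upper=0x2FFF)      # allowed
assert not check_access(0x3000, lower=0x2000, upper=0x2FFF)  # violation
```

In real hardware this comparison is done by the processor on every access, and a violation raises an interrupt so the OS can terminate the offending program.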
83. 83. <ul><ul><ul><ul><li>Ring protection method </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Each process (program) is assigned a ring number, and access is controlled according to the size of that number </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Small numbers are assigned to important programs (kernel/critical) </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Large numbers are assigned to user programs </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Access from small numbers to large numbers is permitted, but not vice versa </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><li>Keylock method </li></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>The main storage unit is divided into multiple partitions, and each partition is locked for memory protection </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Each memory partition is assigned a lock, and each process carries the memory protection key(s) for the partitions it may use </li></ul></ul></ul></ul></ul><ul><ul><ul><ul><ul><li>Access is authorized only when the process's key unlocks the partition </li></ul></ul></ul></ul></ul>Main Memory Management
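The ring protection rule reduces to a single numeric comparison, sketched here with hypothetical names:

```python
# Sketch of ring protection: lower ring numbers are more privileged,
# and a caller may only access rings whose number is >= its own.
def ring_allows(caller_ring, target_ring):
    """True if a program in caller_ring may access target_ring."""
    return caller_ring <= target_ring

assert ring_allows(0, 3)       # kernel (ring 0) may access user (ring 3)
assert not ring_allows(3, 0)   # user code may not access the kernel
assert ring_allows(2, 2)       # same ring: access permitted
```

This matches the rule on the slide: access from small numbers toward large numbers is permitted, but not the reverse.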
    84. 84. <ul><li>5. Other main memory management issues </li></ul><ul><ul><ul><li>Dynamic allocation </li></ul></ul></ul><ul><ul><ul><ul><li>Technique by which the main storage unit is dynamically assigned during program execution </li></ul></ul></ul></ul><ul><ul><ul><li>Memory leak </li></ul></ul></ul><ul><ul><ul><ul><li>Occurs due to failure to release the area that should have been released by a program that used the main storage unit </li></ul></ul></ul></ul><ul><ul><ul><ul><li>All storage area is released when power is turned off </li></ul></ul></ul></ul><ul><ul><ul><ul><li>Usually occurs in servers, which remain operational 24hrs a day </li></ul></ul></ul></ul>Main Memory Management
    85. 85. Virtual Storage Management <ul><li>Enables the execution of programs without worrying about the storage capacity of the main storage unit </li></ul><ul><li>Basic approach in implementing virtual storage: </li></ul><ul><ul><li>Main storage unit is divided into partitions (page frames) of specific size </li></ul></ul><ul><ul><li>Program is stored temporarily in the external page storage area of an external storage device </li></ul></ul><ul><ul><li>This external page storage area is divided into partitions called slots (same size as page frame) </li></ul></ul>
86. 86. <ul><ul><li>Program units stored in page frames or slots are called pages (approx. 2KB per page) </li></ul></ul><ul><ul><li>The combined address space formed by the main storage unit and the external page storage area of the auxiliary storage device is called the logical address space </li></ul></ul><ul><ul><li>The pages in the slots that are needed for execution are transferred to empty page frames of the main storage unit and executed there </li></ul></ul><ul><ul><li>Execution proceeds by repeatedly transferring the programs, stored page by page in the external page storage area, to the page frames of the main storage unit </li></ul></ul><ul><ul><li>- This act of transferring is called ‘load’ </li></ul></ul>Virtual Storage Management
87. 87. Virtual Storage Management <ul><li>1. Paging </li></ul><ul><ul><li>Exchange of pages between the main storage unit and an auxiliary storage device </li></ul></ul><ul><ul><li>In multi-programming, when paging occurs so frequently that little useful work is done, the condition is known as thrashing </li></ul></ul>
88. 88. <ul><li>2. Address translation </li></ul><ul><ul><li>During “page-in”, the instruction address must be translated according to the address of the page frame it is loaded into </li></ul></ul><ul><ul><li>Address used in the external page storage area – virtual (logical) address </li></ul></ul><ul><ul><li>Address in the page frame of the main storage unit (after address translation) – real address </li></ul></ul><ul><ul><li>The main address translation method is called Dynamic Address Translation (DAT) </li></ul></ul><ul><ul><ul><li>Translation is performed at the time the paged-in instruction is executed </li></ul></ul></ul><ul><ul><ul><li>Virtual addresses begin at 0 and increase in page units </li></ul></ul></ul>Virtual Storage Management
    89. 89. DAT
    90. 90. Virtual Memory <ul><li>Paging </li></ul>Page in Page out Dynamic Address Translation Memory Page Page Page Page Page Page Page Page Page DA00 Page 1 DB00 Page 2 DC00 Page 3 DD00 Page 4 DE00 Page 5 DF00 Page 6
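The translation pictured above can be sketched as a page-table lookup (a simplified model, not the deck's own code; the 2 KB page size follows the earlier slide, and the names are hypothetical): the virtual address splits into a page number and an offset, the page table maps the page number to a page frame, and a missing entry corresponds to a page fault that triggers a page-in.

```python
# Minimal sketch of dynamic address translation (DAT).
PAGE_SIZE = 2048  # approx. 2 KB per page, as stated earlier

def translate(virtual_addr, page_table):
    """Map a virtual address to a real address via the page table."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        # The page is still in the external page storage area.
        raise LookupError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

table = {0: 5, 1: 2}           # virtual page number -> page frame number
print(translate(3000, table))  # page 1, offset 952 -> 2*2048 + 952 = 5048
```

A reference to a page absent from the table raises the hypothetical `LookupError` here; in a real system the hardware raises a page-fault interrupt and the OS pages the slot in before retrying the instruction.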
91. 91. <ul><li>3. Segmentation paging </li></ul><ul><ul><li>Segment – a group of logically related pages </li></ul></ul><ul><ul><li>Page-in and page-out are performed by segment </li></ul></ul><ul><li>4. Page replacement (displacement) </li></ul><ul><ul><li>To achieve system processing efficiency: </li></ul></ul><ul><ul><ul><li>Pages with high access frequency are kept resident in the main storage unit; </li></ul></ul></ul><ul><ul><ul><li>Pages with low access frequency are stored in the external page storage area (transferred to the main storage unit only when they are needed) </li></ul></ul></ul><ul><ul><ul><ul><li>LRU (Least recently used) method – replaces the page that has gone unused for the longest time </li></ul></ul></ul></ul><ul><ul><ul><ul><li>FIFO (First-in First-out) method – replaces the page that was loaded earliest </li></ul></ul></ul></ul>Virtual Storage Management
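The two replacement policies can be compared by counting page faults for the same reference string (an illustrative simulation with hypothetical names; the reference string and 3-frame memory are chosen for the example, not taken from the slides):

```python
# Sketch: page-fault counts for FIFO and LRU replacement.
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(queue.popleft())  # evict the oldest-loaded page
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0   # insertion order = recency order
    for p in refs:
        if p in mem:
            mem.move_to_end(p)       # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # 9 10
```

For this particular reference string with 3 page frames, FIFO happens to produce fewer faults (9) than LRU (10); in typical workloads with locality of reference, LRU usually wins, which is why it is the more common choice.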
    92. 92. Segmentation Paging