2. Introduction
OS, the “Executive Manager”, is the part of the computing
system that manages all of the hardware and all of the
software.
OS controls every device, every section of memory and
every nanosecond of processing time.
OS also controls who can use the system and how.
OS is a program running at all times on a computer; all
other programs are application programs.
Without an OS, no applications can be run.
A computer system interfaces with the user through an
operating system: the user enters instructions into the
computer system through the operating system.
Examples : MS-DOS, Windows NT/2000, Mac OS
3. Major OS Components
Memory Manager
In charge of main memory, e.g. checks the validity of
each request for memory space.
Processor Manager
Decides how to allocate the CPU.
Keeps track of the status of each process.
Device Manager
Monitors every device, channel and control unit.
File Manager
Keeps track of every file in the system, including data
files, assemblers, compilers and application programs.
User Command Interface
4. OS Subsystem Manager
Tasks
Monitor its resources continuously
Enforce the policies that determine who gets
what, when and how much
Allocate the resources when appropriate
Deallocate (reclaim) the resources when
appropriate
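A minimal sketch of the allocate/deallocate cycle above. The class name, the notion of abstract resource "units" and the process ids are illustrative, not from any real OS API:

```python
# Sketch of a subsystem manager's tasks: enforce an allocation policy,
# grant resources, and reclaim them when appropriate.

class ResourceManager:
    def __init__(self, total_units):
        self.free = total_units          # units still available
        self.held = {}                   # process id -> units currently held

    def allocate(self, pid, units):
        """Grant the request only if the policy (enough free units) allows it."""
        if units <= self.free:
            self.free -= units
            self.held[pid] = self.held.get(pid, 0) + units
            return True
        return False                     # request denied by the policy

    def deallocate(self, pid):
        """Reclaim everything the process held (e.g. when it exits)."""
        self.free += self.held.pop(pid, 0)

mgr = ResourceManager(total_units=8)
mgr.allocate("p1", 5)        # granted, 3 units left
mgr.allocate("p2", 5)        # denied: only 3 units free
mgr.deallocate("p1")         # reclaim p1's 5 units
mgr.allocate("p2", 5)        # now granted
```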
5. OS Services
Program execution
I/O operation
File and directory services
Communication
Error detection and handling
Resource allocation
Protection
Accounting
6. OS Services
i) Program execution: loads and executes programs, allows
debugging
ii) I/O operation: does all read and write operations which
may involve a file or an I/O device like printer
iii) File system manipulation: allows reading and writing files as
well as creating and deleting them
iv) Communication: allows processes to communicate with
each other
v) Error detection: CPU, hardware, instructions, device errors
vi) Resource allocation: manages resources so that they are
optimally utilized
vii) Security/protection: stop unauthorized access, protect
system from imposters
viii) Accounting: user billing, user/system usage statistics
7. OS functions
1. Provides interface to computer system
2. Control program: controls the various I/O devices
and user programs
3. Resource allocator: operating system acts as the
manager of these resources and allocates them to
specific programs and user. (CPU time, memory
space, file storage space and I/O device)
OS deals with :
1. storage management
2. memory management
3. process management
4. I/O management
9. Components of OS
1. Shell
The shell is the outer layer of an operating system.
It is the part of the operating system that interfaces with
the user.
It is the interface that accepts, interprets and then executes
commands from users.
2 types :
Command Line Interpreter (CLI)
Graphical User Interface (GUI)
10. Components of OS (cont.)
2. Kernel
the central module of an operating system.
The kernel sits between application
programs and device drivers.
It is responsible for resource allocation,
low-level hardware interfaces and security.
It is the part of the operating system that is
loaded first, and it remains in main memory.
Because it stays in memory, it is important
for the kernel to be as small as possible.
11. Functions performed by
kernel
Amongst the functions performed by the kernel are:
Managing the machine’s memory and allocating it to
each process
Scheduling the work done by the CPU so that the work
of each user is carried out as efficiently as is possible
Organizing the transfer of data from one part of the
machine to another
Accepting instructions from the shell and carrying them
out
Enforcing the access permissions that are in force on
the file system
12. 7.2 Primary Storage
Management
Execution of a program involves a sequence
of instructions within that program.
The execution of an individual program is
called a process or task.
Many processes are loaded into RAM. All
these processes compete in using the CPU.
A part of kernel, which is called memory
manager, manages processes in RAM.
13. Functions of a memory manager:
1. keeping track of which parts of memory are currently being
used and by whom
2. deciding which processes are to be loaded into memory
when memory space becomes available
3. allocating and deallocating memory space as needed
4. allowing the running of programs that are larger than
memory.
14. For a program to be executed, it must be mapped to
absolute addresses and loaded into memory. As the
program executes, it accesses program instructions and
data from memory by generating these absolute
addresses.
Eventually, when the program terminates, its memory
space is declared available, and the next program can be
loaded and executed.
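The mapping to absolute addresses described above can be sketched as simple base-register relocation. The function name and the bounds check are illustrative assumptions, not a specific hardware interface:

```python
# Sketch of mapping a logical address to an absolute address once a
# program has been loaded at a known base address.

def to_absolute(base, logical, limit):
    """Translate a logical address to an absolute one, with a bounds check."""
    if logical >= limit:
        raise MemoryError("address outside the process's memory space")
    return base + logical

# A program loaded at absolute address 4000 with a 1000-byte space:
print(to_absolute(base=4000, logical=250, limit=1000))   # 4250
```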
15. 7.2 Secondary Memory
Management
Secondary storage is needed because:
1. Main memory is not big enough to keep all data and programs
2. Data kept in main memory will be lost if the computer is shut
down
In relation to secondary memory management, the OS is
concerned with:
• Free space management
• Storage allocation
• Disk scheduling
16. 7.2 Virtual Memory
A computer can run out of memory when:
we run more applications than the amount of RAM in the
computer allows
we run a program that requires more memory than the
amount of RAM in the computer
To overcome this problem, virtual memory can be used.
In virtual memory, part of hard disk is used as an extension to RAM.
When a program is executed, part of it is stored on RAM and another
part is stored on disk. The part of the program that is stored on disk is
only brought into memory when it is executed.
Implementation of virtual memory uses paging method.
17. 7.2 Virtual Memory
Paging method:
A program is divided into small segments
called pages.
A page size is normally between 1-4 kB.
This size is constant.
An area of the hard disk called swap space is
specified to hold pages not held in main
memory.
Main memory is also divided into page-sized
blocks called frames.
During program execution, one or more
pages are held in memory and the
remainder is held on disk (swap space).
Pages on disk are copied into main
memory when needed for current
processing.
18. 7.2 Virtual Memory
Paging method:
If necessary, existing pages in main memory can be swapped
out to disk to make room for pages being swapped in. The
selected page that is swapped out is called the victim.
Pages that are least recently used or least frequently used are
usually victims. When a victim is selected, it might or might not
need to be copied back to the swap space.
Each memory reference made by a program must be checked
to see whether it refers to a page in memory. References to pages
not contained in memory are referred to as page faults.
Tables are used to store information about the allocation of
program segments to pages and the location of those pages in
primary and secondary storage.
Each active process has a page table, or a portion of one,
dedicated to storing its page information.
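The paging behaviour above (page faults, least-recently-used victim selection) can be sketched in a few lines. The frame count and the reference string below are made up for illustration:

```python
from collections import OrderedDict

# Sketch of demand paging: count page faults and evict the
# least-recently-used page as the victim when all frames are full.

def simulate(references, num_frames):
    frames = OrderedDict()               # page -> None, oldest (LRU) first
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)     # hit: mark as most recently used
        else:
            faults += 1                  # page fault: page not in memory
            if len(frames) == num_frames:
                frames.popitem(last=False)   # evict the LRU victim
            frames[page] = None          # bring the page in from swap space
    return faults

print(simulate([1, 2, 3, 1, 4, 2], num_frames=3))   # 5 page faults
```

With three frames, the first three references all fault; page 1 is then a hit, so page 2 (least recently used) becomes the victim when page 4 arrives.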
19. 7.2 Virtual Memory
Advantages of virtual memory:
Programs requiring memory space larger than physical
memory can be run. No constraint on the memory size.
More programs can be run at one time because one program
could take less physical memory.
Less I/O would be needed to load or swap each user
program into memory, so each user program would
run faster. Only pages from disk are swapped instead
of programs.
Programming is easier. The programmer does not
have to worry about the amount of memory available.
20. Swapping Vs Paging
Swapping:
Moving an entire process that is idle/not running,
together with all its data, out of main memory so that
the memory can be used by other processes.
Moves entire address spaces between disk and
memory.
Paging:
Dividing the whole process into pages and the memory
into frames; pages and frames are the same size. When
loading a program, any page of the program can be placed
in any unused page frame.
Moves individual pages only between disk and memory,
so part of an address space can be on disk while the
other part is in main memory.
21. 7.3 Process Management
A process is a program in execution.
Example : saving a Word document, spooling files to
printer, a single mathematical calculation
A process is active: it contains a program counter that
indicates the address in memory of the next instruction to
be executed.
A process can be in 5 states:
1. New
2. Running
3. Waiting : for some event to occur
4. Ready : to be assigned to a processor
5. Terminated : finished execution
22. Only one process can be running at one time. However, many
processes may be ready or waiting.
Processes are executed in a sequential manner. The CPU
executes one process after another until the process is completed.
Management of processes by the operating system is called
scheduling.
Scheduling is done by scheduler (a part of the kernel).
Job scheduler: only concerned with selecting jobs from a
queue of incoming jobs and placing them in the process queue
Process scheduler: assigns the CPU to execute the processes
of those jobs placed on the ready queue by the Job Scheduler.
In scheduling, a queue is created where each process will wait for
its execution.
When a process is created, it is placed at the end of the
queue. The process at the front of the queue is taken out
and executed.
A process waiting for a resource can yield to
another process, e.g. while waiting for an I/O transfer to
finish.
Operating system puts the process in a waiting list
and does context switching.
In context switching, operating system switches
execution from one process to another and comes
back to the first process.
Various policies for managing processes : FCFS,
SPN, Round Robin.
24. Job and Process Status
State transition diagram:
New → Ready (admitted) : handled by the Job Scheduler (high-level scheduler)
Ready → Running (scheduler dispatch)
Running → Ready (interrupt)
Running → Waiting (I/O or event wait)
Waiting → Ready (I/O or event completion)
Running → Finished (exit)
Transitions after admission are handled by the Process Scheduler (low-level scheduler)
25. Transition Among Process
States
• NEW to READY : Job Scheduler, using a predefined
policy.
• READY to RUNNING : Process Scheduler, using some
predefined algorithm.
• RUNNING back to READY : Process Scheduler,
according to some predefined time limit or other criterion.
• RUNNING to WAITING : Process Scheduler; initiated
by an instruction in the job.
• WAITING to READY : Process Scheduler; initiated
by a signal from the I/O device manager that the I/O request
has been satisfied and the job can continue.
• RUNNING to FINISHED : Process Scheduler or Job
Scheduler.
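The transition rules above can be sketched as a small lookup table. The state names follow the slides; the `can_move` helper is purely illustrative:

```python
# Sketch of the legal transitions among the five process states.

TRANSITIONS = {
    ("NEW", "READY"): "admitted by the Job Scheduler",
    ("READY", "RUNNING"): "dispatched by the Process Scheduler",
    ("RUNNING", "READY"): "interrupt / time limit reached",
    ("RUNNING", "WAITING"): "I/O or event wait",
    ("WAITING", "READY"): "I/O or event completion",
    ("RUNNING", "FINISHED"): "exit",
}

def can_move(src, dst):
    """A transition is legal only if it appears in the table."""
    return (src, dst) in TRANSITIONS

print(can_move("READY", "RUNNING"))    # True
print(can_move("WAITING", "RUNNING"))  # False: must pass through READY first
```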
26. 7.3 Time-sharing
Occurs when several processes run concurrently on one
processor or in parallel on many processors at the same
time.
A time-sharing operating system uses CPU scheduling
and multiprogramming to allow users to share time.
Time-sharing systems are developed to provide interactive
use of a computer system at a reasonable cost.
Almost all mainframes and minicomputers are time-sharing systems.
27. Mechanism of Time-Sharing
A time-sharing operating system uses CPU scheduling
and multiprogramming to provide each user with a small
portion of the time-shared computer -> to share time.
Several processes are run concurrently -> they must be in
memory.
Needs memory management and protection.
Virtual memory may be used.
28. Advantages of Time-Sharing
Allows many users to use the computer
simultaneously.
At any one time, only a little CPU time is given to
a user.
->System switches very rapidly from one user to
the next : user assumes that he/she owns the
computer (actually is shared with other users)
29. Time Slicing
Is a technique where each process is given a slice of time
before being preempted.
(each process is given a portion of computer time)
When a process is running and the period to run the
process has ended, the process will be preempted. Then
the next process in the queue will run.
A process will be preempted if the period has ended even
though it is not finished. CPU will execute the unfinished
process later.
30. To preempt a process, a clock interrupt is generated.
The running process will be put into a ready queue and
the first process in the ready queue will be selected.
The order of which process to run at one time is
determined by the CPU scheduling algorithms.
31. A Good Scheduling Policy
• Maximize throughput by running as many jobs as
possible in a given amount of time.
• Maximize CPU efficiency by keeping the CPU busy
100% of the time.
• Ensure fairness for all jobs by giving everyone an
equal amount of CPU and I/O time.
• Minimize response time by quickly turning around
interactive requests.
• Minimize turnaround time by moving entire jobs
in/out of the system quickly.
• Minimize waiting time by moving jobs out of the
READY queue as quickly as possible.
32. Characterization of Scheduling
Policies
The selection function: determines which process in the
ready queue is selected next for execution
The decision mode: specifies the instants in time at which the
selection function is exercised
Non-preemptive
Once a process is in the running state, it will continue
until it terminates or blocks itself for I/O
Preemptive
Currently running process may be interrupted and
moved to the Ready state by the OS
Allows for better service since any one process cannot
monopolize the processor for very long
33. Non-Preemptive Scheduling
A process stays on the CPU until it voluntarily
releases the CPU
Long waiting and response times
May lead to starvation
Simple, easy to implement
Not suited for multi-user systems
Euphemism: “cooperative multitasking”
34. Preemptive Scheduling
The execution of a process may be interrupted by
the operating system at any time, for example by:
• a higher-priority process
• the arrival of a new process, or a change of status
• a time limit
Prevents a process from using the CPU for too
long
May lead to race conditions
Can be solved by using process synchronization
35. Scheduling Policies
First Come First Served (FCFS)
Round Robin (RR)
Shortest Process Next (SPN)
Shortest Remaining Time (SRT)
Highest Response Ratio Next (HRRN)
36. Example to Discuss Various
Scheduling Policies
Process   Arrival Time   Service Time
   1           0              3
   2           2              6
   3           4              4
   4           6              5
   5           8              2
Service time = total processor time needed in one (CPU-I/O) cycle
Jobs with long service time are CPU-bound jobs
and are referred to as “long jobs”
37. First Come First Served
(FCFS)
•Selection function: the process that has been waiting
the longest in the ready queue (hence, FCFS)
•Decision mode: non-preemptive
a process runs until it blocks itself
38. First Come First Served
(FCFS)
Non-preemptive.
Handles jobs according to their arrival time:
the earlier they arrive, the sooner they're
served.
Simple algorithm to implement - uses a FIFO
queue.
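FCFS on the five processes of the earlier example table can be sketched as a short simulation. The function and variable names are illustrative:

```python
# Sketch of FCFS: serve jobs in arrival order, each running to
# completion (non-preemptive), and record when each one finishes.

def fcfs(jobs):
    """jobs: list of (name, arrival, service), sorted by arrival time."""
    clock, finish = 0, {}
    for name, arrival, service in jobs:
        clock = max(clock, arrival)      # CPU may sit idle until the job arrives
        clock += service                 # run the job to completion
        finish[name] = clock
    return finish

# Processes 1-5 with arrival times 0, 2, 4, 6, 8 and service times 3, 6, 4, 5, 2:
jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(fcfs(jobs))   # {1: 3, 2: 9, 3: 13, 4: 18, 5: 20}
```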
39. Disadvantages of FCFS
-A process that does not perform any I/O will monopolize
the processor
-Favors CPU-bound processes
•I/O-bound processes have to wait until CPU-bound
process completes
•They may have to wait even when their I/O has
completed (poor device utilization)
•We could have kept the I/O devices busy by giving a
bit more priority to I/O bound processes
40. Round-Robin
Selection function: same as FCFS
Decision mode: preemptive
•a process is allowed to run until the time slice period (quantum,
typically from 10 to 100 ms) has expired
•then a clock interrupt occurs and the running process is put on
the ready queue
41. Round Robin
Preemptive.
Used extensively in interactive systems because
it’s easy to implement.
Isn’t based on job characteristics but on a
predetermined slice of time that’s given to each
job.
Ensures CPU is equally shared among all active
processes and isn’t monopolized by any one
job.
Time slice is called a time quantum
size crucial to system performance (100 ms to
1-2 secs)
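The quantum-driven preemption described above can be sketched as follows. To keep the ready queue simple, the example assumes all jobs arrive at time 0; job names are made up:

```python
from collections import deque

# Sketch of round robin: each job runs for at most one quantum,
# then is preempted and moved to the back of the ready queue.

def round_robin(jobs, quantum):
    """jobs: list of (name, service_time); returns completion order."""
    ready = deque(jobs)
    order = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= quantum:
            order.append(name)                         # finishes within its slice
        else:
            ready.append((name, remaining - quantum))  # preempted: back of queue
    return order

print(round_robin([("A", 3), ("B", 5), ("C", 2)], quantum=2))   # ['C', 'A', 'B']
```

C finishes within its first slice; A needs one extra unit after preemption; B, the longest job, finishes last.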
43. Quantum
Quantum is a specific time interval used to
prevent any one process monopolizing the
system.
If the process does not voluntarily release the
CPU when the interval/quantum is over, a clock
interrupt is generated.
44. Shortest Process Next (SPN)
•Selection function: the process with the shortest
expected CPU burst time
•Decision mode: non-preemptive
•I/O bound processes will be picked first
•We need to estimate the required processing time (CPU
burst time) for each process
45. Shortest Job Next (SJN)
Non-preemptive.
Handles jobs based on length of their CPU cycle
time.
Use lengths to schedule process with shortest
time.
Optimal – gives minimum average waiting time for a
given set of processes.
optimal only when all jobs are available at the same
time and the CPU estimates are available and
accurate.
Doesn’t work in interactive systems (time-sharing
systems) because users don’t estimate in advance
CPU time required to run their jobs.
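SJN can be sketched by sorting jobs by CPU-cycle length and accumulating the waiting times. The example assumes all jobs are available at time 0 (the case in which SJN is optimal); the service times are made up:

```python
# Sketch of SJN: run jobs in order of ascending CPU-cycle length
# and compute the average waiting time.

def sjn(service_times):
    order = sorted(service_times)        # shortest job next
    waits, elapsed = [], 0
    for t in order:
        waits.append(elapsed)            # waiting time = start time (all arrive at 0)
        elapsed += t
    return order, sum(waits) / len(waits)

order, avg_wait = sjn([6, 3, 8, 2])
print(order, avg_wait)   # [2, 3, 6, 8] 4.5
```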
46. Shortest Remaining Time
(SRT)
Preemptive - version of the SJN algorithm.
Processor allocated to job closest to completion.
This job can be preempted if a newer job in
READY queue has a “time to completion” that's
shorter.
Can’t be implemented in an interactive system:
requires advance knowledge of the CPU time required to
finish each job.
SRT involves more overhead than SJN.
OS monitors CPU time for all jobs in READY
queue and performs “context switching”.
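SRT can be sketched as a unit-by-unit simulation: at every time unit, the scheduler picks the arrived job closest to completion, preempting the current one if necessary. It reuses the five processes of the earlier example; the function names are illustrative:

```python
# Sketch of SRT (preemptive SJN): one time unit at a time, run the
# arrived job with the least remaining service time.

def srt(jobs):
    """jobs: list of (name, arrival, service); returns completion times."""
    remaining = {name: service for name, _, service in jobs}
    arrival = {name: a for name, a, _ in jobs}
    clock, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1                                  # CPU idle: nothing arrived yet
            continue
        job = min(ready, key=lambda n: remaining[n])    # closest to completion
        remaining[job] -= 1                             # run for one time unit
        clock += 1
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = clock
    return finish

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(srt(jobs))   # {1: 3, 3: 8, 5: 10, 2: 15, 4: 20}
```

Compared with FCFS on the same data, the short jobs (3 and 5) finish much earlier, at the cost of delaying long job 2.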
47. 7.4 I/O Management
I/O Operation and Interrupt
Every request from a user program must be done
through the OS.
When the device is ready to provide service, the
device tells the operating system of its status by
giving an interrupt.
I/O interrupts happen when:
• an I/O operation completes,
• an I/O error occurs, or
• a device is made ready.
48. When an interrupt happens, operating system
does the following:
• The operating system gains control.
• The operating system saves the state of the interrupted
process.
• The operating system analyzes the interrupt and passes
the control to the appropriate routine to handle interrupt.
• The interrupt handler routine processes the interrupt.
• The state of the interrupted process is restored.
• The interrupted process executes.
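The save/dispatch/restore sequence above can be sketched in miniature. The handler table, the process-state dictionary and all names here are illustrative, not a real kernel interface:

```python
# Sketch of the interrupt-handling steps: save the interrupted process's
# state, run the appropriate handler, then restore the state and resume.

def on_interrupt(kind, process, handlers):
    saved = dict(process)                # OS gains control and saves the state
    handlers[kind](process)              # analyze interrupt, run its handler routine
    process.clear()
    process.update(saved)                # restore the interrupted process's state
    return process                       # the interrupted process resumes

def io_complete_handler(process):
    process["pc"] = 9000                 # the handler runs its own code for a while

handlers = {"io_complete": io_complete_handler}
proc = {"pc": 104}
print(on_interrupt("io_complete", proc, handlers))   # {'pc': 104} - state restored
```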
49. Spooling
Spooling uses a buffer to manage files to be
printed.
Files which are spooled are queued and copied to
printer one at a time.
To manage I/O requests, operating system has a
component that is called spooler.
Spooler manages I/O requests to a printer.
Spooler operates in the background and creates a
printing schedule.
50. Importance of Spooling
1. In spooling, programs can run to completion faster. Therefore, other
programs can start sooner. Spooling improves the system by
disassociating a program from the slow operating speed of devices
such as printers.
2. Since files are stored in a buffer, where the printer can access them;
we can perform other operations on the computer while the printing
takes place.
Therefore, computation of one job can overlap with the I/O of other
jobs. Thus, spooling can keep both the CPU and the I/O devices
working at much higher performance rates.
3. Spooling lets us put a number of print jobs in queue instead of waiting
for each one to finish before specifying the next one. If we need to
remove unwanted jobs before the jobs print, we are able to do so. We
can also suspend a printing job if the printing job is still on queue.
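The spooler behaviour described above (queueing, one-at-a-time printing, cancelling queued jobs) can be sketched with a simple queue. The class and the file names are made up for illustration:

```python
from collections import deque

# Sketch of a print spooler: submitted jobs are buffered in a FIFO
# queue and copied to the printer one at a time.

class Spooler:
    def __init__(self):
        self.queue = deque()

    def submit(self, job):
        self.queue.append(job)           # program returns quickly: file only buffered

    def cancel(self, job):
        self.queue.remove(job)           # unwanted jobs removed before they print

    def run(self):
        printed = []
        while self.queue:
            printed.append(self.queue.popleft())  # send to printer one at a time
        return printed

sp = Spooler()
sp.submit("report.txt"); sp.submit("draft.txt"); sp.submit("photo.png")
sp.cancel("draft.txt")
print(sp.run())   # ['report.txt', 'photo.png']
```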
51. Discussion
What are main services provided by an
operating system?
What are some advantages of paging?
Distinguish swapping and paging.
What potential CPU-allocation problems
exist if a purely round-robin (no-priority)
system is used to select the next job
from the ready queue?
What is the main difference of a time-sharing
system, and how is it usually implemented?
52. Discussion
Five jobs are in the READY queue waiting
to be processed. Their estimated CPU
cycles are as follows: 10, 3, 6, 6 and 2.
Using SJN, in what order should they be
processed to minimize average waiting
time?
53. Discussion
Given the following information:
Draw a time line for each of the following
scheduling algorithms:
FCFS
SJN
SRT
Round-Robin (using a time quantum of 2)