Name of Faculty: Reena Murali
Dept. of Computer Science
Introduction and Overview
What is an operating system?
How have operating systems evolved?
Why study operating systems?
What is an Operating System?
• Not easy to define precisely…
– Everything in the system that isn’t an application or hardware
– Software that converts hardware into a useful form for applications
What is the role of the OS?
• Role #1: Resource provider
What is a resource?
– Anything valuable (e.g., CPU, memory, disk)
What is the role of the OS?
• Role #2: Resource coordinator
• Advantages of resource coordinator
– Virtualize resources so multiple users or
applications can share
– Protect applications from one another
– Provide efficient and fair access to resources
What Functionality belongs in the OS?
• No single right answer
• Desired functionality depends on outside factors
– OS must adapt to both user expectations
and technology changes
An Operating system is
• An interface between users and hardware
- an environment or “architecture”
• Allows convenient and efficient usage of
resources (gives each user a slice of the
resources)
• Provides information protection
• Acts as a control program.
Evolution of Operating Systems
Early Systems (1950)
Simple Batch Systems (1960)
Multiprogrammed Batch Systems (1970)
Personal/Desktop Systems (1980)
Multiprocessor Systems (1980)
Networked/Distributed Systems (1980)
Real-Time (1970) and Handheld (1990)
Batch: Group of jobs submitted together
– Operator collects jobs; orders them efficiently; runs one at a time
– Keep machine busy while programmer thinks
– Improves throughput and utilization
– User must wait until batch is done for results
– Machine idle when job is reading from cards and
writing to printers
• Goal of OS
– Improve performance by always running a job
– Keep multiple jobs resident in memory
– When a job waits for disk I/O, the OS switches to another job
• OS Functionality
– Job scheduling policies
– Memory management and protection
• Advantage: Improves throughput and utilization
• Disadvantage: Machine not interactive
Time Sharing Systems (TSS)
• Batch multiprogramming does not support
interaction with users.
• In time-sharing systems, multiple users
simultaneously access the system through
terminals.
• The processor’s time is shared among multiple
users.
Why does Time-Sharing Work?
• Because of slow human reaction time, a
typical user needs 2 seconds of
processing time per minute.
• Then many users should be able to share
the same system without noticeable delay
in the computer reaction time.
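The estimate above can be worked out directly. This is an illustrative back-of-the-envelope sketch (the function name is mine, not from the slides):

```c
/* Back-of-the-envelope estimate: if each interactive user needs about
   2 seconds of CPU time per minute, then 60 / 2 = 30 users can share
   one processor before response time degrades noticeably. */
int max_users(int cpu_seconds_per_user_per_minute) {
    return 60 / cpu_seconds_per_user_per_minute;
}
```

With 2 seconds per user per minute, this gives 30 concurrent users.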
• The user should get a good response time.
• Personal computers –
dedicated to a single user.
• I/O devices – keyboards, mice, display screens
• May run several different types of operating
systems (Windows, MacOS, UNIX, Linux)
• Multi User Systems
• Multi Tasking Systems
• Requires networking infrastructure.
• Local area networks (LAN) or Wide
area networks (WAN).
• May be either Centralized Server or
Client-Server or Peer-to-Peer (P2P)
Distribute resources and computation
among several physical processors.
• Loosely coupled system:
– each processor has its own local memory.
– processors communicate with one another
through various communications lines.
– Resource sharing
– Computation speed up – load sharing
• Network Operating System (NOS):
– provides mainly file sharing.
– Each computer runs independently from other
computers on the network.
• Distributed Operating System (DOS):
– gives the impression there is a single operating
system controlling the network.
– network is mostly transparent – it’s a powerful virtual machine
Real-Time Systems (RTS)
• Note that not all operating systems are
real-time systems.
• Real-Time (RT) systems are dedicated
systems that need to adhere to deadlines,
i.e., time constraints.
• Correctness of the computation depends
not only on the logical result but also on
the time at which the results are produced.
Hard Real-Time Systems
• Hard real-time system:
– Must meet its deadline.
• Often used as a control device in a dedicated
application, e.g.,
– Industrial control
• Secondary storage limited or absent; data stored
in short-term memory or read-only memory (ROM)
Soft Real-Time Systems
• Soft real-time system:
– Deadline desirable but not mandatory.
– Limited utility in industrial control or robotics
– Useful in modern applications
(multimedia, virtual reality) requiring
advanced operating-system features.
• Handheld systems are also dedicated.
– Personal Digital Assistants (PDAs).
– Cellular telephones.
– Limited memory
– Slow processors
– Small display screens
– Support for multimedia (images, video)
Functions of OS
Operating System Services
Functions of OS
Main Memory Management
I/O System Management
• A process is a program in execution. A process
needs certain resources, including CPU time,
memory, files, and I/O devices, to accomplish
its task.
• The operating system is responsible for the
following activities in connection with process
– Process creation and deletion.
– Process suspension and resumption.
– Provision of mechanisms for:
• process synchronization
• process communication
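The activities above can be seen from a user program through the POSIX process API. A minimal sketch, assuming a POSIX system (`spawn_and_wait` is a hypothetical helper name, not an OS interface):

```c
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of process creation, suspension, and deletion via POSIX:
   the parent creates a child, is suspended until the child exits,
   and then collects its exit status. */
int spawn_and_wait(void) {
    pid_t pid = fork();              /* process creation */
    if (pid < 0) return -1;
    if (pid == 0)
        _exit(7);                    /* child terminates: process deletion */
    int status;
    waitpid(pid, &status, 0);        /* parent suspended until child exits */
    return WEXITSTATUS(status);
}
```

The `waitpid` call is the suspension/resumption pair in miniature: the parent blocks, and the OS resumes it when the child finishes.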
• Memory is a large array of words or bytes, each with its
own address. It is a repository of quickly accessible data
shared by the CPU and I/O devices.
• Main memory is a volatile storage device. It loses its
contents in the case of system failure.
• The operating system is responsible for the following
activities in connections with memory management:
– Keep track of which parts of memory are currently
being used and by whom.
– Decide which processes to load when memory space
becomes available.
– Allocate and deallocate memory space as needed.
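"Keep track of which parts of memory are used and by whom" can be sketched with a toy frame allocator. All names here are illustrative, not a real OS interface:

```c
/* Toy frame allocator: a fixed pool of memory frames with an owner
   table recording which process (if any) holds each frame. */
#define NFRAMES 8
static int frame_owner[NFRAMES];        /* 0 = free, otherwise owner pid */

int alloc_frame(int pid) {              /* allocate the first free frame */
    for (int i = 0; i < NFRAMES; i++)
        if (frame_owner[i] == 0) { frame_owner[i] = pid; return i; }
    return -1;                          /* no free memory */
}

void free_frame(int frame) {            /* deallocate a frame */
    frame_owner[frame] = 0;
}
```

A real memory manager adds protection and address translation on top of this bookkeeping, but the allocate/deallocate/track cycle is the same.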
• A file is a collection of related information defined by its creator.
Commonly, files represent programs (both source and object forms)
and data.
• The operating system is responsible for the following activities in
connections with file management:
– File creation and deletion.
– Directory creation and deletion.
– Mapping files onto secondary storage.
– File backup on stable (nonvolatile) storage media.
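File creation and deletion, as seen from a user program, go through the standard C library, which in turn asks the operating system's file manager to do the work. A minimal sketch (helper names are mine):

```c
#include <stdio.h>

/* Sketch: a user program asks the OS to create, write, and
   delete a file via the standard C library. */
int create_file(const char *name) {
    FILE *f = fopen(name, "w");     /* OS creates the file */
    if (!f) return -1;
    fputs("hello\n", f);            /* OS maps the data to storage */
    fclose(f);
    return 0;
}

int delete_file(const char *name) {
    return remove(name);            /* OS deletes the file; 0 on success */
}
```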
I/O System Management
• The I/O system consists of:
– A buffer-caching system
– A general device-driver interface
– Drivers for specific hardware devices
• Since main memory (primary storage) is volatile and too
small to accommodate all data and programs
permanently, the computer system must provide
secondary storage to back up main memory.
• Most modern computer systems use disks as the
principal on-line storage medium, for both programs and
data.
• The operating system is responsible for the following
activities in connection with disk management:
– Free space management
– Storage allocation
– Disk scheduling
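Free-space management is commonly done with a bitmap: one bit per disk block, set when the block is allocated. A toy sketch (illustrative only, with a first-fit search):

```c
#include <stdint.h>

/* Toy disk free-space manager: one bit per block in a 64-block
   "disk"; a set bit means the block is in use. */
#define NBLOCKS 64
static uint64_t block_bitmap;           /* bit i set = block i in use */

int alloc_block(void) {                 /* first-fit search for a free block */
    for (int i = 0; i < NBLOCKS; i++)
        if (!(block_bitmap & (1ULL << i))) {
            block_bitmap |= 1ULL << i;
            return i;
        }
    return -1;                          /* disk full */
}

void free_block(int b) {                /* return a block to the free pool */
    block_bitmap &= ~(1ULL << b);
}
```

Real file systems use the same idea at scale, with the bitmap itself stored on disk.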
Networking (Distributed Systems)
• A distributed system is a collection of processors that do not share
memory or a clock. Each processor has its own local memory.
• The processors in the system are connected through a
communication network.
• Communication takes place using a protocol.
• A distributed system provides user access to various system
resources.
• Access to a shared resource allows:
– Computation speed-up
– Increased data availability
– Enhanced reliability
• Protection refers to a mechanism for controlling access of programs,
processes, or users to both system and user resources.
• The protection mechanism must:
– distinguish between authorized and unauthorized usage.
– specify the controls to be imposed.
– provide a means of enforcement.
• Many commands are given to the operating system by control
statements which deal with:
– process creation and management
– I/O handling
– secondary-storage management
– main-memory management
– file-system access
• The program that reads and interprets
control statements is called variously:
– command-line interpreter
– shell (in UNIX)
Its function is to get and execute the next command statement.
Additional Operating System Functions
Additional functions exist not for helping the user, but rather for
ensuring efficient system operations.
• Resource allocation – allocating resources to multiple
users or multiple jobs running at the same time.
• Accounting – keep track of and record which users
use how much and what kinds of computer resources
for account billing or for accumulating usage statistics.
• Protection – ensuring that all access to system
resources is controlled.
Operating System Services
• Program execution – system capability to load a program
into memory and to run it.
• I/O operations – since user programs cannot execute I/O
operations directly, the operating system must provide
some means to perform I/O.
• File-system manipulation – program capability to read,
write, create, and delete files.
• Communications – exchange of information between
processes executing either on the same computer or on
different systems tied together by a network.
Implemented via shared memory or message passing.
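The message-passing variant can be sketched with a POSIX pipe between two related processes (`send_and_receive` is a hypothetical helper name, and the sketch assumes a POSIX system):

```c
#include <stddef.h>
#include <unistd.h>

/* Sketch of message passing: the child writes a message into a
   pipe and the parent reads it out, with the OS carrying the data
   between the two processes. */
int send_and_receive(char *buf, size_t len) {
    int fd[2];
    if (pipe(fd) < 0) return -1;
    if (fork() == 0) {                  /* child: the sender */
        write(fd[1], "ping", 5);        /* 5 bytes includes the '\0' */
        _exit(0);
    }
    ssize_t n = read(fd[0], buf, len);  /* parent: the receiver */
    close(fd[0]);
    close(fd[1]);
    return n > 0 ? 0 : -1;
}
```

Shared memory avoids the copy through the kernel but requires the processes to synchronize access themselves.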
• Error detection – ensure correct computing by detecting
errors in the CPU and memory hardware, in I/O devices,
or in user programs.
• System calls provide the interface between a running
program and the operating system.
– Generally available as assembly-language instructions.
– Languages defined to replace assembly language for
systems programming allow system calls to be made
directly (e.g., C, C++)
• Three general methods are used to pass parameters
between a running program and the operating system.
– Pass parameters in registers.
– Store the parameters in a table in memory, and the
table address is passed as a parameter in a register.
– Push (store) the parameters onto the stack by the
program, and pop them off the stack by the operating system.
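The register-passing method can be seen on Linux through the `syscall()` wrapper, which loads the call number and its parameters into the registers the kernel's calling convention expects. A Linux-specific sketch:

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: invoking the write system call directly. syscall() places
   SYS_write and the three parameters (fd, buffer, length) into
   registers before trapping into the kernel. */
long write_via_syscall(const char *msg, size_t len) {
    return syscall(SYS_write, STDOUT_FILENO, msg, len);
}
```

In practice programs call the C library's `write()` instead, which performs the same register setup internally.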
The OS Shell
• Defines the interface between the OS and its users
– Windows GUI
– UNIX command line
– UNIX users can choose among a variety of shells
• csh is the “C shell”
• tcsh is an enhanced “C shell”
OS Shell interface
The OS Kernel
• The internal part of the OS is often
called the kernel
• Kernel Components
– File Manager
– Device Drivers
– Memory Manager
– Scheduler and Dispatcher
OS File Manager
• Maintains information about the
files that are available on the
system
• Where files are located in mass
storage, their size and type and
their protections, and what part of
mass storage is available
• Files are usually allowed to be
grouped in directories or
folders, allowing hierarchical
organization
OS Device Drivers
• Software to communicate with peripheral
devices or controllers
• Each driver is unique
• Translates general requests into specific
steps for that device
OS Memory Manager
• Responsible for coordinating the
use of the machine’s main
memory
• Decides what area of memory is
to be allocated for a program
and its data
• Allocates and deallocates
memory for different programs
and always knows what areas
are free
OS Scheduler & Dispatcher
• Maintains a record of processes that are
present, adds new processes, removes
finished processes
– memory area(s) assigned
– state of readiness to execute (ready/wait)
• Ensures that processes that are
ready to run are actually
executed
• Time is divided into small (50
ms) segments called time
slices
• When the time slice is over, the
dispatcher allows the scheduler to
update the process state for each
process, then selects the next
process to run
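Selecting the next process after a time slice can be sketched as a toy round-robin dispatcher (names and the fixed process count are illustrative):

```c
/* Toy round-robin dispatch: after each time slice, pick the next
   ready process in circular order starting after the current one. */
#define NPROC 3
static int current = -1;                    /* index of running process */

int next_process(const int ready[NPROC]) {
    for (int i = 1; i <= NPROC; i++) {
        int cand = (current + i) % NPROC;   /* circular scan */
        if (ready[cand]) { current = cand; return cand; }
    }
    return -1;                              /* nothing ready to run */
}
```

With all processes ready, repeated calls cycle 0, 1, 2, 0, … giving each process one time slice in turn.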
Kernel Based Approach
The kernel contains a collection of primitives which are used to build the OS.
The OS implements policy; the kernel implements mechanisms.
The advantage is performance; the disadvantages are complexity and
reduced flexibility.
Why use this approach? If you have a relatively “small” kernel, the gains in
performance and efficiency outweigh the disadvantages.
1. Brinch Hansen, P., "The Nucleus of a Multiprogramming System", Communications of the ACM, Apr. 1970, pp. 238-241.
2. D. Ritchie, and K. Thompson, “The UNIX Time-Sharing System”, Communications of the ACM, Vol. 17, No. 7, Jul. 1974, pp. 365-375.
3. Wulf, W., E. Cohen, W. Corwin, A. Jones, R. Levin, C. Pierson, and F. Pollack, "HYDRA: The Kernel of a Multiprocessor Operating
System", Communications of the ACM, June 1974, pp. 337-345.
MS-DOS System Structure
• MS-DOS – written to provide the most
functionality in the least space:
– not divided into modules (monolithic).
– Although MS-DOS has some structure, its
interfaces and levels of functionality are not
well separated.
UNIX System Structure
UNIX – limited by hardware functionality, the original UNIX OS
had limited structuring.
The UNIX OS consists of two separable parts:
1. Systems Programs:
2. The Kernel:
• Consists of everything below the system-call interface and
above the physical hardware
• Provides the file system, CPU scheduling, memory
management, and other operating-system functions; a
large number of functions for one level.
2. Microkernel System Structure (1)
• Move as much functionality as possible from the kernel into “user” space
• Only a few essential functions in the kernel
– primitive memory management (address space)
– I/O and interrupt management
– Inter-Process Communication (IPC)
– basic scheduling
• Other OS services are provided by processes running in user mode
(vertical servers) – device drivers, file system, virtual memory…
2. Microkernel System Structure (2)
• Communication takes place between user
modules using message passing.
• But a performance penalty caused by
replacing service calls with message
exchanges between processes.
• More flexibility, extensibility, portability and
reliability (details in next 4 slides).
Benefits of a Microkernel Organization
– modular design.
– easy to add services.
– a small microkernel can be rigorously tested.
– changes needed to port the system to a new
processor are made in the microkernel, not in the other services.
Benefits of Microkernel Organization
• Distributed system support
– messages are sent without knowing
what the target machine is.
• Object-oriented operating system
– components are objects with clearly
defined interfaces that can be
interconnected to form software.
Exokernel
• Takes the micro-kernel approach to the extreme
• Services implemented as a user-space library linked against
applications
• Only a minimum of functionality is implemented in the kernel,
such as context switching and MMU management.
• Exports hardware resources that may be managed by user-level
applications, while the kernel implements the protection
mechanisms
• Apps may optimize to a given hardware platform or create new
abstractions
• D. R. Engler M. F. Kaashoek J. O'Toole, Jr., “Exokernel: an operating system architecture for application-level resource management”,
Proceedings of the fifteenth ACM symposium on Operating systems principles, pp. 251-266, Copper Mountain, Colorado, 1995.
Layered Approach
• The operating system is divided into a
number of layers (levels), each built on top
of lower layers. The bottom layer (layer
0), is the hardware; the highest (layer N) is
the user interface.
• With modularity, layers are selected such
that each uses functions (operations) and
services of only lower layers.
Virtual Machine Approach
• Virtual software layer over hardware
• Illusion of multiple instances of hardware
• Supports multiple instances of OSs
Virtual machine software
• Seawright, L., and R. MacKinnon, "VM/370 - A Study of Multiplicity and Usefulness", IBM Systems Journal, 1979, pp. 4-17.
• A virtual machine takes the layered approach to its logical
conclusion. It treats hardware and the operating system kernel as
though they were all hardware.
• A virtual machine provides an interface identical to the underlying
bare hardware.
• The operating system creates the illusion of multiple processes,
each executing on its own processor with its own (virtual) memory.
• Shell -- interface to the user
• File Manager -- manages mass storage
• Device Drivers -- communicate with peripheral devices
• Memory Manager -- manages main memory
• Scheduler & Dispatcher -- manage processes
Different Operating Systems
on the Same Machine ?
• It is possible to have more than one
operating system available to be used
on a machine.
• Only one operating system is run at a time:
– VAX -- VMS or Ultrix
– PCs -- DOS, Windows, or Linux
• Operating systems usually come with
some associated utility programs
• UNIX usually has the text editors emacs
and vi (and sometimes pico)
• UNIX has its own sort utility
• UNIX has its own mail utility