OPERATING SYSTEMS. 
SY B.Sc. I.T. 
Author: Fahad Shaikh.
Unit - I 
COMPONENTS OF COMPUTER SYSTEM 
(Diagram: the components of a computer system, layered from top to bottom - users, system and application programs, the operating system, and the computer hardware.)
The above diagram gives a view of the components of a computer system, which can be roughly divided into four parts: the hardware, the operating system, the application programs and the users.
The hardware consists of the CPU, memory and I/O devices, which provide the basic computing resources. The application programs include programs such as word processors, compilers and web browsers. The operating system controls and co-ordinates the use of the hardware among the various application programs for the various users. The operating system provides the means for the proper use of these resources in the operation of the computer system.
The two basic goals of an operating system are convenience and efficiency.
TYPES OF SYSTEMS: 
1) Mainframe Systems: 
Mainframe computer systems were the first computers used to solve many commercial and scientific applications. They are divided into three types.
i) Batch systems 
ii) Multi programmed systems 
iii) Time sharing systems 
i) Batch systems 
(Diagram: memory layout of a simple batch system - the operating system resident in memory together with a single user program.)
Batch systems consisted of large machines in which the input devices were card readers and tape drives. The output devices were line printers, tape drives and card punches. The user did not interact directly with the computer system; the user prepared a job and submitted it to the computer operator. The job was usually in the form of punched cards.
The operating system in these early systems was very simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory. In this execution environment the CPU is often idle because of the speeds of the peripheral devices. Hence the major disadvantage of this system was inefficient use of the CPU.
ii) Multi programmed systems 
(Diagram: memory layout of a multiprogrammed system - the operating system together with Jobs 1 to 4 resident in memory.)
Multi programming increases CPU utilization by organizing jobs so that the CPU always 
has at least one job to execute. The operating system keeps many jobs in the memory at a time 
and picks up a job to execute. In case the job has to wait for some task (such as I/O) the 
operating system switches to execute another job. Hence the CPU is never idle. 
Multiprogramming is the first instance where the operating system must make decisions for the users. Hence multiprogrammed operating systems are more sophisticated.
iii) Time sharing operating system /Multitasking system /Interactive system 
A time-shared operating system allows many users to share the computer simultaneously. Since each action or command in a time-shared system is very short, only a little CPU time is needed for each user. As the system switches rapidly from one user to the next, each user is given the impression that the entire computer system is dedicated to him, even though it is shared among many users.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the time-shared computer. Time-sharing operating systems are more complex than multiprogrammed operating systems because they need to provide protection, a way to handle the file system, job synchronization, communication and a deadlock-free environment.
2) Desktop System: 
Desktop systems during their initial stages did not have the feature of protecting the operating system from user programs. Hence PC operating systems were neither multiuser nor multitasking. Since a single user has all the resources to himself, efficiency is not a concern; the main goal of such systems was maximizing user convenience and responsiveness. With the advent of networks these systems needed protection, hence this feature was also added.
3) Multiprocessor System: 
Multiprocessor systems (parallel systems or tightly coupled systems) have more than one processor in close communication, sharing the computer bus, the clock, as well as the memory and peripheral devices.
Multiprocessor systems have three main advantages: 
I) Increased Throughput: By increasing the number of processors, more work is done in less time. However, the speed-up ratio with N processors is not equal to N: there is overhead in keeping all the processors working correctly, and contention for shared resources lowers the speed-up further.
II) Economy: Multiprocessor systems are more economical than multiple single-processor systems because they share peripherals, memory and power supplies.
III) Increased Reliability: If functions (work) are properly distributed among several processors, then the failure of one processor will not halt the system but only slow it down. Suppose we have 10 processors and one fails; each of the remaining processors would continue the processing, and overall performance may degrade by only 10%. The ability of a system to provide service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant.
Multiprocessor systems can be realized in the following ways:
 Tandem system:
This system uses both hardware and software duplication to ensure continuous operation even in case of failures. The system consists of two identical processors, each having its own local memory; the processors are connected by a bus. One processor is the primary and the other is the backup. Two copies of each process are kept, one on the primary and one on the backup. At fixed intervals of time the state information of each process is copied from the primary to the backup. If a failure is detected, the backup copy is activated and restarted from the most recent checkpoint.
The drawback of this system is that it is expensive.
 Symmetric multiprocessing system(SMP): 
(Diagram: symmetric multiprocessing architecture - several CPUs sharing a common memory.)
In symmetric multiprocessing each processor runs an identical copy of the operating system, and these copies communicate with each other as needed. There is no master-slave relationship between processors; all processors are peers. The benefit of this model is that many processes can run simultaneously. The problem with this system is that one processor may sit idle while another is overloaded. This can be avoided if the processors share certain
data structures. A multiprocessor system of this form allows processes and resources to be shared properly among the various processors.
 Asymmetric multiprocessing: 
(Diagram: asymmetric multiprocessing - a master CPU controlling several slave CPUs.)
In asymmetric multiprocessing each processor is assigned a specific task. A master processor controls the system; the other processors either depend on the master for instructions or have predefined tasks. This scheme defines a master-slave relationship. The master processor schedules and allocates work to the slave processors.
The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software. Special hardware can differentiate the multiple processors, or the software can be written to allow only one master and multiple slaves.
4) Distributed System:
Distributed systems depend on networking for their functionality; they are able to share computational tasks and provide a rich set of features to users. Networks are of various types; the type may depend on the protocols used, the distance between the nodes and the transport media. TCP/IP is the most common network protocol, and most operating systems support it.
A LAN exists within a room or a building; a WAN exists between cities, countries and so on. The different transport media include copper wires, fiber optics, satellites and radio links.
The following are the most common types of such systems:
1) Client server system 
2) Peer to peer system 
i) Client server system: 
(Diagram: general structure of a client-server system - several client machines connected through a network to a server.)
The above diagram gives the general structure of a client-server system, in which server systems satisfy requests generated by client systems.
Server systems can be broadly categorized as compute servers and file servers.
Compute-server systems provide an interface to which clients can send requests to perform an action; in response, they execute the action and send back the results to the client.
File-server systems provide a file-system interface where clients can create, update, read and delete files.
ii) Peer to peer system: 
In this system the processors communicate with one another through various communication lines such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems.
The operating system designed for such a system is called a network operating system, which provides features such as file sharing across the network and a communication scheme that allows different processes on different computers to exchange messages.
5) Real Time System:
A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, otherwise the system will fail. A real-time system functions correctly only if it returns the correct result within the time limit. Real-time systems are of two types.
i) Hard Real Time System 
ii) Soft Real Time System 
A hard real-time system guarantees that critical tasks are completed on time. In such a system all delays must be bounded, the use of secondary storage should be extremely limited, and most advanced operating system features are absent.
Soft real-time systems are less restrictive than hard real-time systems. In these systems a critical real-time task gets priority over other tasks. Soft real-time behaviour can be achieved more easily and can be mixed with other types of systems. These systems have limited applications as compared to hard real-time systems.
They are useful in multimedia, virtual reality and advanced scientific projects.
6) Hand Held System:
Handheld systems include personal digital assistants (PDAs) as well as cellular phones. These devices have small size, limited memory, slow processors and small display screens.
Because of the small amount of memory, the operating system and the applications must manage memory efficiently. Faster processors are not used in these devices because a faster processor would require more power, and hence more frequent recharging or replacement of the battery; they are therefore designed to utilize the processor efficiently. Since the screens of these devices are very small, reading or browsing web pages becomes difficult.
7) Clustered System:
Clustered systems are composed of two or more individual systems coupled together. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes, and each node can monitor one or more of the other nodes. If a machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine.
The most common forms of clustering are asymmetric clustering and symmetric clustering.
In asymmetric clustering one machine is in hot-standby mode while the other is running the application. The hot-standby machine does nothing but monitor the active server. If that server fails, the hot-standby machine becomes the active server.
In symmetric clustering mode, two or more hosts are running applications and are also monitoring each other. This mode is more efficient than asymmetric clustering, as it uses all of the available hardware.
Unit - II 
OPERATING SYSTEM STRUCTURE 
Process Management: 
 A process is a program in execution. A process needs certain resources, including CPU time, 
memory, files, and I/O devices, to accomplish its task. 
 The operating system is responsible for the following activities in connection with process 
management. 
I. Process creation and deletion 
II. Process suspension and resumption 
III. Provision of mechanisms for: 
a) Process synchronization 
b) Process communication
c) Deadlock handling
A process is a unit of work in a system. Such a system consists of a collection of processes, some of which are operating-system processes and the rest user processes; all of these processes execute concurrently by multiplexing the CPU among them.
Main Memory Management: 
 Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and input output devices.
 Main memory is a volatile storage device. It loses its contents in case of system failure. 
 The operating system is responsible for the following activities in connection with memory management:
I. Keep track of which parts of memory are currently being used and by 
whom. 
II. Decide which processes to load when memory space becomes available.
III. Allocate and de-allocate memory space as needed. 
File Management: 
File management is one of the most visible components of the most visible components of an 
operating system for convenient use of computer system the operating system provides a 
uniform logical view of information storage. The operating system hides the physical properties 
of its storage unit called as file. 
 A file is a collection of related information defined by its creator. Commonly, files 
represent programs (both source and object forms) and data. 
 The operating system is responsible for the following activities in connection with file management:
I. File creation and deletion. 
II. Directory creation and deletion. 
III. Support of primitives for manipulating files and directories. 
IV. Mapping files onto secondary storage. 
V. File backup on stable (nonvolatile) storage media. 
Input output system management: 
One of the purposes of the operating system is to hide the peculiarities of the hardware devices from the user; only the device driver knows the specification of the device to which it is assigned. In UNIX the peculiarities of input output devices are hidden from the bulk of the operating system by the input output subsystem.
 The input output system consists of
I. A buffer-caching system.
II. A general device-driver interface.
III. Drivers for specific hardware devices.
Secondary Storage Management: 
 Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.
 Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
 The operating system is responsible for the following activities in connection with disk 
management. 
I. Free space management. 
II. Storage allocation. 
III. Disk scheduling. 
Command Interpreter System: 
One of the most importance systems program for an operating systems is the command 
interpreter which is the interface between the user and the operating system some operating 
system includes the command interpreter in the kernel other operating system such as Unix 
and Ms-Dos consider the command interpreter as a special program that is running when a job 
is initiated. 
When a new job is started in a batch system a program that reads and interprets control 
statements is executed automatically. These programs are called as command line interpreter 
or shell. 
Many commands are giving to the operating system by control statements which deal with: 
I. Process creation and management. 
II. Input Output handling. 
III. Secondary storage management 
IV. Main memory management 
V. File system access 
VI. Protection 
VII. Networking. 
Operating System Services: 
An operating system provides an environment for the execution of the programs it provides 
certain services to program and to the user of those programs the services are 
1. Program Execution: The system must be able to load a program into memory and run that program. The program must be able to end its execution either normally or abnormally.
2. Input Output Operations: A running program may require input output. This input output may involve a file or an input output device.
3. File System Manipulation: Programs need to create and delete files and also read and write files.
4. Communication: One process may need to exchange information with another process, either on the same computer or on a different computer system; this is handled by the operating system through shared memory or message passing.
5. Error detection: When a program is executing, errors may occur in the CPU, memory, input output devices or the user program. For each type of error the operating system should take the proper action to ensure correct functioning of the system.
6. Resource Allocation: When multiple users are using a system, it is the responsibility of the operating system to allocate and de-allocate the various resources of the system.
7. Accounting: The operating system keeps track of the use of the computer resources by each user. This record may be used for accounting.
8. Protection: Protection involves ensuring that all access to system resources is controlled. The system should also provide security from outsiders.
System Calls: 
System calls provide an interface between a process and the operating system. These calls are generally available as assembly-language instructions, although higher-level languages such as C and C++ can also be used to write system calls. System calls are generated in the following way.
Consider writing a simple program to read data from one file and copy it to another file. Once the file names are obtained, the program must open the input file and create the output file. Each of these operations requires a system call. When the program tries to open the input file, it may find that no such file exists; it then displays an error message (another system call) and terminates abnormally (yet another system call).
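As an illustration of the sequence described above, here is a minimal sketch of such a copy program written against the POSIX system-call interface (open, read, write, close); the buffer size and error handling are only examples, not a prescribed implementation.

#include <fcntl.h>     /* open, O_* flags    */
#include <stdio.h>     /* fprintf, perror    */
#include <stdlib.h>    /* exit               */
#include <unistd.h>    /* read, write, close */

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        exit(1);
    }

    /* Each of the following calls traps into the operating system. */
    int in = open(argv[1], O_RDONLY);
    if (in < 0) { perror("open input"); exit(1); }        /* abnormal termination */

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("create output"); exit(1); }

    /* Copy loop: read from the input file, write to the output file. */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;   /* normal termination */
}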
Three general methods are used to pass parameters between a running program and the 
operating system. 
 Pass parameters in registers. 
 Store the parameters in a table in memory, and the table address is passed as 
parameters in a register. 
 The program pushes (stores) the parameters onto the stack, and the operating system pops them off the stack.
System calls can be grouped into five categories 
1. Process control 
2. File management 
3. Device Management 
4. Information maintenance 
5. Communications 
1. Process control: The following are the system calls with respect to process control 
i. End, Abort: A running process may end normally, or due to an error condition the process may be aborted.
ii. Load, Execute: A process may want to load and execute another program.
iii. Create process, Terminate process: In a multiprogramming environment new processes are created as well as terminated.
iv. Get process attributes, Set process attributes: When several processes are executing, we may want to control their execution. This control requires the ability to determine and reset the attributes of a process.
v. Wait for time: After creating new processes, the parent process may need to wait for them (the child processes) to finish their execution.
vi. Wait event, Signal event: In case processes are sharing some data, a particular process may wait for a certain amount of time or wait for some specific event to occur.
vii. Allocate memory and free memory: When a process is created or loaded it is allocated memory space. When the process completes its execution it is destroyed by the operating system and its memory space is freed.
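A minimal sketch of how a few of these process-control calls fit together on a UNIX-like system; fork, wait and exit are the usual POSIX names, and the example program itself is only illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* wait          */
#include <unistd.h>     /* fork, getpid  */

int main(void)
{
    pid_t pid = fork();                 /* "create process" system call */

    if (pid < 0) {
        perror("fork");                 /* creation failed              */
        exit(1);                        /* "abort"                      */
    } else if (pid == 0) {
        printf("child %d running\n", (int)getpid());
        exit(0);                        /* "terminate process" (normal end) */
    } else {
        int status;
        wait(&status);                  /* parent waits for the child to finish */
        printf("child finished with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}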
2. File Management: The system calls with respect to file management are
i. Create file, Delete file
ii. Open file, Close file
iii. Read, write, reposition the file
iv. Get file attributes, Set file attributes.
We need to create and delete files, for which system calls are generated. After creating a file we need to open it, perform read or write operations or reposition it, and finally close the file. Each of these operations requires a system call.
The various file attributes, such as the file name, file type, protection codes and accounting information, can be read and set using two system calls: get file attributes and set file attributes.
3. Device Management: The system calls related to device management are 
i. Request device, Release device 
ii. Read, write, reposition 
iii. Get device attributes, Set device attribute 
iv. Logically attached or Detached devices 
A process may need some resources; if the resources are available they can be granted, otherwise the process may have to wait. After getting the resource, the process uses it and finally releases it.
We can also get and set device attributes through system calls.
4. Information maintenance: The various system calls related to information maintenance are 
i. Get time or date, set time or date 
ii. Get system data, set system data 
iii. Get process, file, device attributes 
iv. Set process, file, or device attributes. 
We can get and set the current time and date through system calls. Apart from this, we can get information regarding the number of current users, the operating system version, the amount of free memory space and so on. We can even get and set the process attributes through system calls.
5. Communication: The various system calls related to communication are
I. Create, delete communication connection
II. Send, receive messages
III. Transfer status information
IV. Attach or detach remote devices
There are two common models of communication 
1) Message passing model 
2) Shared memory model 
(Diagram: the two communication models - in message passing, processes A and B exchange messages M through the kernel; in shared memory, processes A and B read and write a shared region of memory directly.)
In the message-passing model, information is exchanged through an interprocess-communication facility provided by the operating system. Before communication can take place a connection must be opened, and the name of the other communicator must be known. After identification, the identifiers are passed to general-purpose open and close calls provided by the file system, or to specific open-connection and close-connection system calls, depending on the system. The receiving process must give its permission for the communication. Once the connection is established, the processes exchange messages using read-message and write-message system calls. The close-connection call terminates the communication.
In the shared-memory model, processes use map-memory system calls to gain access to regions of memory owned by other processes.
Processes may then exchange information by reading and writing data in these shared areas. The form of the data and the locations are determined by the processes and are not under the operating system's control.
Message passing is useful for exchanging small amounts of data. Shared memory allows maximum speed and convenience of communication; however, it requires dealing with problems such as protection and synchronization.
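As a small illustration of the message-passing style, the following sketch uses a POSIX pipe as the communication link between a parent and a child process; the message text and variable names are only examples.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>     /* pipe, fork, read, write */

int main(void)
{
    int fd[2];                          /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) < 0) { perror("pipe"); return 1; }   /* open the connection */

    if (fork() == 0) {                  /* child: the sender                 */
        const char *msg = "hello from the child";
        write(fd[1], msg, strlen(msg) + 1);           /* "write message"     */
        return 0;
    }

    read(fd[0], buf, sizeof buf);       /* parent: "read message" (blocking) */
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}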
System Programs: 
System programs provide a convenient environment for program development and execution. 
They can be divided into the following categories:
1. File Management: These programs create, delete, copy, rename and print files, as well as manipulate files and directories.
2. Status Information: Some programs provide information about the system regarding the date, time, available memory, number of users, etc.
3. File Modification: Several text editors may be available to create and modify the contents of files stored on disk or tape.
4. Programming Language Support: Compilers, assemblers and interpreters are provided to the user with the operating system.
5. Program Loading and Execution: Once a program is assembled or compiled it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors and overlay loaders.
6. Communication: These programs provide the mechanism for creating virtual connections among processes, users and computer systems.
System structure:
1. Layered approach:
In the layered approach the operating system is broken up into a number of layers, each built on top of lower layers. The bottom layer is the hardware and the highest layer is the user interface. A typical operating-system layer consists of data structures and a set of routines that can be invoked by higher-level layers.
The main advantage of the layered approach is modularity. The layers are selected in such a way that each layer uses the functions and services of only lower-level layers, hence debugging becomes much easier. The design and implementation of the system are simplified when the system is broken down into layers.
The major difficulty with the layered approach is the careful definition of the layers, because a layer can use only the layers below it. Another difficulty is that layered implementations tend to be less efficient than other approaches.
2. Micro Kernel Approach: As the UNIX operating system expanded, the kernel became large and difficult to manage. Hence an approach called the microkernel approach was used to modularize the kernel. This method structures the operating system by removing all non-essential components from the kernel and implementing
them as system-level and user-level programs, which results in a smaller kernel. The main function of the microkernel is to provide a communication facility between the client program and the various services that run in user space. Communication is provided by message passing.
The benefit of the microkernel approach is that the operating system can be easily extended: all new services are added in user space, so the kernel need not be modified. Since the kernel is smaller and rarely modified, the resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, because most services run as user processes rather than kernel processes; if a service fails, the rest of the operating system remains intact.
3. System Design and Implementation:
i. Design Goals: The first problem in designing a system is to define the goals and specification of the system. The requirements can be divided into two basic groups. From the user's point of view the system should be easy (convenient) to use, easy to learn, reliable, safe and fast. From the designer's point of view the system should be easy to design, implement and maintain.
ii. Mechanisms and Policies: Mechanisms determine how to do something, while policies determine what will be done. Policies may change from place to place and over time. A general mechanism is more desirable, because a change in policy then requires redefining only certain parameters of the system.
iii. Implementation: After an operating system is designed, it must be implemented. It can be implemented either in assembly language or in a higher-level language such as C or C++.
The advantages of using a higher-level language are
 The code can be written faster and is more compact.
 It is easier to port from one hardware platform to another. For example, MS-DOS was written in assembly language and hence is available only for the Intel family of processors, while the UNIX operating system, which was written in C, is available on different processors such as Intel, Motorola and UltraSPARC.
Unit - III 
PROCESS MANAGEMENT 
A process can be defined as a program in execution. Two essential elements of a process are the program code and a set of data associated with that code. At any given point in time, while the program is executing, the process can be characterized by a number of elements, which are collected in the process control block (PCB). The elements of the PCB are:
I. Identifier: Every process has a unique identifier to differentiate it from other processes.
II. State: It provides information regarding the current state of the process.
III. Priority: It gives the priority level of the process.
IV. Program Counter: It holds the address of the next instruction to be executed.
V. Memory Pointers: These pointers point to the memory locations containing the program code and data.
VI. Context Data: These are the data present in the registers of the processor while the process is executing.
VII. Input Output Status Information: It includes pending input output requests, the input output devices assigned to the process, etc.
VIII. Accounting Information: It includes the amount of processor time and clock time used, time limits and so on.
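The fields above can be pictured as a structure maintained by the kernel for every process. The following is only an illustrative sketch; the field names, types and sizes are assumptions, not any particular kernel's definition.

/* Illustrative process control block (PCB) - not a real kernel structure. */
typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT } proc_state_t;

typedef struct pcb {
    int            pid;             /* identifier                          */
    int            ppid;            /* identifier of the parent process    */
    proc_state_t   state;           /* current state of the process        */
    int            priority;        /* priority level                      */
    unsigned long  program_counter; /* address of the next instruction     */
    void          *code, *data;     /* memory pointers to code and data    */
    unsigned long  registers[16];   /* context data saved on a switch      */
    int            open_files[16];  /* input output status information     */
    unsigned long  cpu_time_used;   /* accounting information              */
    struct pcb    *next;            /* link used by the ready/blocked queues */
} pcb_t;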
Process States: 
Two State Process Model 
The above diagram gives the simplest, two-state process model. A process is either being executed by the processor or it is not. In this model, a process may be in one of two states: running or not running. When the operating system creates a new process, it enters that process into the system in the not-running state. From time to time the currently running process is interrupted, and the operating system selects another process to run. Hence one process switches from the running state to the not-running state while another moves from not running to running.
Reasons for process creation 
1) New batch job: The operating system is provided with a batch job-control stream. When the operating system is prepared to take on new work, it reads the next sequence of job-control commands.
2) Interactive login: A user at a terminal logs on to the system. 
3) Created by operating system to provide service: The operating system can 
create a process on behalf of a user program to perform a function. 
4) Created by existing process: A user program can create a number of processes 
for the purpose of modularity. 
Reasons for process termination 
1. Normal completion. 
2. Time limit exceeded. 
3. Memory unavailable. 
4. Bounds violation. 
5. Protection error. 
6. Arithmetic error. 
7. Time over run. 
8. Input output failure. 
9. Invalid instruction. 
10. Privileged instruction. 
11. Data misuse. 
12. Operating system intervention. 
13. Parent termination. 
14. Parent request. 
Five State Process Model 
The various states of a process in this model are:
1. New: A process that has just been created but has not yet been admitted to the pool (queue) of executable processes by the operating system.
2. Ready: The process is prepared to execute and is waiting for the processor.
3. Running: The process that is currently being executed.
4. Blocked: A process that cannot execute until some event occurs (such as the completion of an input output operation).
5. Exit: A process that has been released by the operating system, either because it halted or because it was aborted for some reason.
POSSIBLE TRANSITIONS:
The following are the possible transitions from one state to another.
I. Null → New: A new process is created to execute a program.
II. New → Ready: The operating system moves a process from the new state to the ready state when memory space is available, or when there is room for a new process so as to keep the number of processes roughly constant.
III. Ready → Running: A process moves from the ready state to the running state when the operating system selects it to run on the processor.
IV. Running → Exit: The currently running process is terminated or aborted.
V. Running → Ready: The most common reasons for this transition are:
a. A process exceeds its time limit.
b. The currently running process is preempted due to the arrival of a higher-priority process in the ready queue.
c. A process may itself release control of the processor.
VI. Running → Blocked: A process is put in the blocked state if it requests something for which it must wait. For example, the process may request a service that the operating system is not prepared to provide immediately, or the process may wait for some input output operation.
VII. Blocked → Ready: A process in the blocked state is moved to the ready state when the event for which it has been waiting occurs.
VIII. Ready → Exit: A parent may terminate a child process at any time. Also, if a parent terminates, all child processes of that parent may be terminated.
Process Description 
In the above diagram there are a number of processes, and each process needs certain resources for its execution. Process P1 is running: it has control of two input output devices and occupies a part of main memory. Process P2 is also in main memory but is blocked, waiting for an input output device. Process Pn has been swapped out and is suspended.
The operating system controls the processes and manages the resources for the processes using control structures, which are divided into four categories.
Memory Tables: Memory tables are used to keep track of both main memory and virtual memory. The memory tables must include the following information:
i. The allocation of main memory to processes.
ii. The allocation of secondary memory to processes.
iii. Any protection attributes of blocks of main and virtual memory.
iv. Any information needed to manage virtual memory.
Input Output Tables: Input output tables are used by the operating system to manage the input output devices of the computer system. At any given time an input output device may or may not be available. The operating system must know the status of each input output operation and the location in main memory where the transfer is carried out.
File Tables: The operating system also maintains file tables, which provide information about the existence of files, their location on secondary memory, their current status and other attributes.
Process Tables: The operating system must maintain process tables to manage and control processes. To do so, the operating system must know where each process is located and the attributes of the process. The collection of process attributes, which is also called the process control block, is grouped into three categories:
1) Process identification. 
2) Processor state information. 
3) Process control information. 
1. Process identification: 
Identifiers: Numeric identifiers may be stored in the process control block, which include
i) Identifier of this process. 
ii) Identifier of the parent process 
iii) User identifier 
2. Processor state information: 
i) User-visible registers: The user-visible registers are available to user programs; there may be between 8 and 32 of these registers.
ii) Control and status registers: These registers are used to control the operation of the processor; they include the program counter, condition codes and status information.
iii) Stack pointers: Each process has one or more LIFO system stacks associated with it.
3. Process control information: 
i) Scheduling and state information: This information is needed by the operating system to perform scheduling; it includes the process state, priority, scheduling-related information and the event being awaited.
ii) Data structuring: A process may be linked to other process in a queue, ring or 
some other structure. 
iii) Inter-process communication: Various flag, signals and messages may be 
associated with communication between two independent processes. 
iv) Process privileges: Processes are granted privileges in terms of memory that may 
be accessed and the types of instructions that may be executed. 
v) Memory management: It includes pointers to segments and page tables. 
vi) Resource ownership and utilization: Resources controlled by the process may be 
indicated. 
Operations on processes:
The processes in the system can execute concurrently and they must be created and deleted dynamically. Hence the operating system must provide a mechanism for process creation and termination.
Process creation: A process may create several new processes through a create-process system call. The creating process is called the parent process, while the new processes are called the children of that process. Each of these new processes may in turn create new processes, forming a tree of processes. When a process creates a new process, two possibilities exist in terms of execution:
i) The parent continues to execute concurrently with its children.
ii) The parent waits until some or all of its children have terminated.
In terms of the address space of the new process, there are also two possibilities:
(1) The child process is a duplicate of the parent process.
(2) The child process has a new program loaded into it.
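A small sketch of both possibilities on a UNIX-like system: fork creates a child that starts as a duplicate of the parent, and the child may then call exec to load a new program. The program run here (/bin/ls) is just an example.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                      /* child begins as a duplicate of the parent */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Possibility (2): replace the duplicate with a new program. */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execlp");                    /* reached only if exec fails */
        exit(1);
    }

    wait(NULL);                              /* parent waits for the child to terminate */
    printf("child has terminated\n");
    return 0;
}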
Process termination: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call. At that point, the process may return data (output) to its parent process through the wait system call. All the resources of the process, including memory, open files and input output buffers, are de-allocated by the operating system.
A parent may terminate its children for the following reasons:
1) The child has exceeded its usage of some of the resources allocated to it.
2) The task assigned to the child is no longer required.
3) The parent is exiting; in such a case the operating system does not allow a child to continue if its parent terminates.
Co-operating Processes: 
A process is a co-operating process if it can affect or be affected by other processes executing in the system. Co-operating processes have several advantages:
 Information sharing: Several users may be interested in the same piece of information; we must provide an environment that allows concurrent access to such resources.
 Computation speed-up: We can break a task into smaller subtasks, each of which executes in parallel with the others.
 Modularity: We can construct the system by dividing the system functions into separate processes.
 Convenience: A user may have many tasks on which to work at one time (for example editing, printing and compiling).
Inter process communication (IPC): 
An inter-process communication facility is the means by which processes communicate among themselves. Inter-process communication provides a mechanism that allows processes to synchronize their actions without sharing the same address space, and it is particularly useful in a distributed environment.
Inter-process communication is best provided by a message-passing system. The function of a message system is to allow processes to communicate with one another without any shared memory; communication among the user processes is achieved through the passing of messages. An inter-process communication facility provides at least two operations: send and receive.
If processes P and Q want to communicate, they must send messages to and receive messages from each other through a communication link.
There are several methods for logically implementing a link and the send/receive operations.
a) Direct communication OR Indirect communication: 
Direct communication: With direct communication, each process that wants to communicate must name the receiver or sender of the communication as follows:
Send (P, message) - send a message to process P
Receive (Q, message) - receive a message from process Q
With direct communication exactly one link exists between each pair of processes.
Indirect communication: In indirect communication the messages are sent to and received from mailboxes or ports, in which messages can be placed or removed. Two processes can communicate only if they share a mailbox. Communication is done in the following way:
Send (A, message) - send a message to mailbox A.
Receive (A, message) - receive a message from mailbox A.
In indirect communication a link may be associated with more than two processes, and more than one link may exist between each pair of communicating processes. A mailbox may be owned either by a process or by the operating system.
b) Synchronization: Communication between processes through message passing may be either blocking or non-blocking (synchronous or asynchronous).
i. Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
ii. Non-Blocking send: The sending process sends the message and resumes operation.
iii. Blocking receive: The receiver blocks until a message is available.
iv. Non-Blocking receive: The receiver retrieves either a valid message or a null.
c) Buffering: Buffering can be implemented in three ways:
i. Zero capacity: The queue has maximum length zero; hence the link cannot have any messages waiting in it.
ii. Bounded capacity: The queue has finite length; if the link is full, the sender must block until space is available in the queue.
iii. Unbounded capacity: The queue has potentially infinite length, hence the sender never blocks.
Mutual exclusion using messages: 
const int n = /* number of processes */;

void P(int i)
{
    message msg;
    while (true)
    {
        receive(box, msg);      /* wait until the token message is available */
        /* critical section */
        send(box, msg);         /* put the token back into the mailbox */
        /* remainder */
    }
}

void main()
{
    create_mailbox(box);
    send(box, null);            /* the mailbox starts with one (null) message */
    parbegin(P(1), P(2), ..., P(n));
}
The above algorithm shows how message passing can be used to enforce mutual exclusion. A set of concurrent processes share a mailbox, box, which can be used by all processes to send and receive messages. The mailbox is initialized to contain a single message with null content. A process wishing to enter its critical section first attempts to receive a message; if the mailbox is empty, the process is blocked.
Once a process gets the message, it performs its critical section and then places the message back into the mailbox. If more than one process performs the receive operation concurrently, two cases arise:
 If there is a message, it is delivered to only one process and the others are blocked.
 If the message queue is empty, all the processes are blocked; when a message becomes available, only one blocked process is activated and given the message.
THREADS: 
A thread is a lightweight process; it is a basic unit of CPU utilization and consists of a thread id, a program counter, a register set and a stack. All the threads belonging to a process may share its code section, data section and other operating system resources. If a process has multiple threads of control, it can perform more than one task at a time.
The advantages of multithreading are:
I. Responsiveness: Multithreading may allow a program to continue running even if part of it is blocked performing a lengthy operation.
II. Resource sharing: Threads share the memory and the resources of the process to which they belong.
III. Economy: Creating, maintaining and switching processes is costly compared to creating, maintaining and switching threads. E.g. in Solaris, creating a process is about 30 times slower than creating a thread.
IV. Utilization of multiprocessor architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor.
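A minimal sketch of a multithreaded program using POSIX threads (pthreads); the worker function, the number of threads and the printed messages are just placeholders.

#include <pthread.h>
#include <stdio.h>

/* Function executed by each thread; the argument is an example payload. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    int ids[4];

    /* All threads share the process's address space and open files. */
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);     /* wait for each thread to finish */

    return 0;
}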
Types of Threads: 
1. User threads. 
2. Kernel threads. 
User Threads: User threads are supported above the kernel and are implemented at the user level. All thread creation and scheduling are done in user space without kernel intervention; hence user-level threads are fast to create and easy to manage. The drawback of user-level threads is that if the kernel is single threaded, any user-level thread performing a blocking system call will cause the entire process to block.
Kernel Threads: Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling and management in kernel space. For this reason they are generally slower to create and harder to manage than user threads. However, since the kernel is managing the threads, if a thread performs a blocking system call the kernel can schedule another thread for execution.
Multithreading Models: 
The three common multithreading models are:
1. Many to One. 
2. One to One. 
3. Many to Many. 
Many to One: 
The many-to-one model maps many user-level threads to one kernel thread. Thread management is done in user space, hence it is efficient, but the disadvantage is that the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
One to One: 
The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model: if a thread makes a blocking system call, only that particular thread is blocked while the others continue to execute, and in a multiprocessor environment multiple threads can run in parallel. The only drawback of this model is that creating a user thread requires creating the corresponding kernel thread.
Many to Many: 
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may depend upon the particular application or the particular machine. This model improves on the other two models: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
Threading Issues: 
I. The fork and exec system calls: If one thread in a program calls the fork system call, the new process may duplicate all threads, or it may be single threaded. If a thread invokes the exec system call, the program specified as a parameter to exec replaces the entire process, including all its threads.
II. Cancellation: Thread cancellation is the task of terminating a thread before it has completed. A thread which is to be cancelled is called the target thread. Thread cancellation occurs in two different ways:
i) Asynchronous cancellation: One thread immediately terminates the target thread.
ii) Deferred cancellation: The target thread periodically checks whether it should terminate.
III. Signal handling: A signal is used to notify a process that a particular event has occurred. A signal is generated by the occurrence of a particular event; whenever it is generated it must be delivered to a process, and once delivered it must be handled. A signal can be handled by one of two handlers:
I. A default signal handler.
II. A user-defined signal handler.
In a multithreaded program there are a few options for delivering the signal:
a) Deliver the signal to the thread to which the signal applies.
b) Deliver the signal to every thread in the process.
c) Deliver the signal to certain threads in the process.
d) Assign a specific thread to receive all signals for the process.
IV. Thread pools: The general idea behind a thread pool is to create a number of threads at process start-up and place them into a pool, where they sit and wait for work. The benefits of thread pools are:
i) We get faster service, since an existing thread is used instead of creating a new one.
ii) A thread pool limits the number of threads that exist at any one time.
V. Thread-specific data: Threads belonging to a process share the data of the process. However, each thread may need its own copy of certain data; such data is called thread-specific data.
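A small sketch of thread-specific data using the C11 _Thread_local storage class together with pthreads; each thread gets its own copy of the counter, and the variable and function names are only illustrative.

#include <pthread.h>
#include <stdio.h>

/* Each thread gets its own, independent copy of this variable. */
static _Thread_local int my_counter = 0;

static void *worker(void *arg)
{
    int id = *(int *)arg;
    for (int i = 0; i < 3; i++)
        my_counter++;                   /* updates only this thread's copy */
    printf("thread %d: my_counter = %d\n", id, my_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main thread: my_counter = %d\n", my_counter);   /* still 0 */
    return 0;
}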
CPU SCHEDULING 
 Basic concepts. 
 Scheduling criteria. 
 Scheduling algorithm. 
 Multiprogramming algorithm. 
CPU Scheduler: 
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection of the process is carried out by the CPU scheduler (short-term scheduler).
CPU scheduling decisions may take place under following four situations. 
1. When a process switches from running state to waiting state. 
2. When a process switches from running state to ready state. 
3. When a process switches from waiting state to ready state. 
4. When a process terminates. 
Scheduling under circumstances 1 and 4 is called non-preemptive, while scheduling under circumstances 2 and 3 is called preemptive.
Dispatcher: The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
i. Switching context.
ii. Switching to user mode.
iii. Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible because it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another is known as dispatch latency.
Scheduling criteria: The criteria which are used to compare the various scheduling 
algorithms are 
1. CPU utilization: The CPU must be kept as busy as possible; CPU utilization may range from 0 to 100%.
2. Throughput: It gives the number of processes which are completed per unit time.
3. Turnaround time: It is the sum of the periods a process spends waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing input output.
4. Waiting time: It is the amount of time a process has been waiting in the ready queue. 
5. Response time: It is the amount of time it takes from when a request was submitted 
until the first response is produced, not output (for time sharing environment). 
Scheduling Algorithms
1) First Come First Serve Scheduling (FCFS): It is a purely non-preemptive algorithm. In this scheme the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. The code for FCFS scheduling is simple to write and understand.
The disadvantage of the FCFS policy is that the average waiting time is often quite high compared to other algorithms.
Eg: Consider the following processes, which arrive in the order P1, P2, P3, with the length of the CPU burst given in milliseconds:
Process  Burst time
P1       24
P2       3
P3       3
The Gantt chart for the schedule is:
P1 P2 P3
0  24  27  30
Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17
Convoy effect: In the FCFS scheme, if we have one big CPU-bound process and many small input output-bound processes, all the small processes wait for the one big process to release the CPU. This results in lower CPU utilization and is called the convoy effect. The FCFS algorithm is not suitable for time-sharing systems because it is non-preemptive.
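A small sketch in C that computes the FCFS waiting times for the example above; the burst times are hard-coded purely for illustration.

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};                 /* P1, P2, P3 in arrival order */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waiting time = %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                     /* the next process also waits for this burst */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}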
2) Shortest Job First (SJF) Scheduling: In this scheme, when the CPU is available it is assigned to the process which has the smallest next CPU burst. If two processes have the same next CPU burst, FCFS is used to break the tie.
The SJF algorithm is optimal because it gives the minimum average waiting time for a given set of processes. The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
The SJF algorithm may be either preemptive or non-preemptive. A preemptive SJF algorithm will preempt the currently executing process if a new process arrives in the ready queue with a CPU burst shorter than the time left for the currently executing process. A non-preemptive SJF algorithm will allow the currently running process to finish its CPU burst.
Eg: Consider the following processes:
Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4
Non-preemptive SJF - the Gantt chart is:
P1 P3 P2 P4
0  7  8  12  16
Preemptive SJF - the Gantt chart is:
P1 P2 P3 P2 P4 P1
0  2  4  5  7  11  16
3) Priority Scheduling: In this scheme a priority is associated with each process and the CPU is allocated to the process with the highest priority. Priorities are generally some fixed range of numbers. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process (for example time limits, memory requirements, number of open files). External priorities are set by criteria such as the importance of the process, the type of the process, the department sponsoring the process, etc.
Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the running process. A preemptive priority scheduling algorithm will preempt the running process if the newly arrived process has a higher priority. A non-preemptive priority scheduling algorithm will not preempt the running process.
The major drawback of priority scheduling is indefinite blocking (starvation). A solution to the problem of starvation is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
Q. For the following set of processes, find the average waiting time, considering that a smaller number indicates a higher priority.
Process Burst time Priority 
P1 10 3 
P2 1 1 
P3 2 4 
P4 1 5 
P5 5 2 
P2 P5 P1 P3 P4
0 1 6 16 18 19 
Waiting time of p1=6 
Waiting time of p2=0 
Waiting time of p3=16 
Waiting time of p4=18 
Waiting time of p5=1 
Average waiting time = (6+0+16+18+1)/5= 8.2 
4) Round Robin (RR): The round-robin scheduling algorithm is designed especially for time-sharing systems. It is a purely preemptive algorithm. In this scheme every process is given a time slice (time quantum); if a process is unable to complete within the given time slice, it is preempted and another process is executed. The ready queue is treated as a circular queue: the CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of one time quantum. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once, and no process waits more than (n-1)q time units.
Example of round robin with time quantum = 20:
Process  Burst Time
P1       53
P2       17
P3       68
P4       24
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3
0  20  37  57  77  97  117  121  134  154  162
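A minimal round-robin simulation in C for the burst times above, with the quantum as a parameter; the output format is only illustrative, and the simple cyclic scan used here happens to reproduce the same order as a FIFO ready queue for this example.

#include <stdio.h>

int main(void)
{
    int burst[]  = {53, 17, 68, 24};    /* remaining CPU time of P1..P4 */
    int n = 4, quantum = 20, time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0)
                continue;               /* process already finished     */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            printf("t=%3d  P%d runs for %d\n", time, i + 1, slice);
            time     += slice;
            burst[i] -= slice;
            if (burst[i] == 0)
                remaining--;            /* process completes            */
        }
    }
    printf("all processes finished at t=%d\n", time);
    return 0;
}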
Multilevel Queue Scheduling: 
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to a particular queue, and each queue has its own scheduling algorithm.
For example, the interactive-process queue may use the round-robin algorithm while the batch-process queue uses the FCFS algorithm. In addition, there is scheduling among the queues, commonly implemented as preemptive priority scheduling: each higher-priority queue has absolute priority over the lower-priority queues.
For example, a process in the batch queue would run only if the system-process queue, the interactive-process queue and the interactive-editing-process queue are all empty.
Multilevel Feedback Queue Scheduling: 
Multilevel feedback queue scheduling allows a process to move between the queues. 
Separate queues are formed on the basis of CPU burst time. 
If a process uses too much CPU time, it is moved to a low priority queue. If a process waits 
too long in a low priority queue it is moved to a higher priority queue. 
For example: consider a scheduler with three queues (queue 0, queue 1 and queue 2). A process
entering the ready queue is put in queue 0. A process in queue 0 is given a time slice (quantum) of 8 ms.
If it does not finish within the given time, it is moved to the tail of queue 1, and so on.
A multilevel feedback queue scheduling is defined by the following parameters. 
 Number of queues. 
 Scheduling algorithms for each queue. 
 Method used to determine when to upgrade a process. 
 Method used to determine when to demote a process. 
 Method used to determine which queue a process will enter when that process 
needs service. 
PROCESS SYNCHRONIZATION 
Definitions: 
1. Critical section: A section of code within a process that requires access to shared resources
and that must not be executed while another process is executing its corresponding section.
2. Deadlock: A situation in which two or more processes are unable to proceed because each is
waiting for one of the others.
3. Livelock: A situation in which two or more processes continuously change their state
in response to changes in the other processes without doing any useful work.
4. Mutual exclusion: The requirement that only one process at a time executes its critical
section.
5. Race condition: A situation in which processes read and write a shared data item and
the final result depends on the relative timing of their execution.
6. Starvation: A situation in which a runnable process is not executed for an indefinite
period.
Principles of concurrency: Concurrency has to deal with the following issues:
communication among processes, sharing of resources, synchronization of multiple
processes, and allocation of processor time.
Concurrency arises in three different contexts. 
i. Multiple applications. 
ii. Structured application. 
iii. Operating system structure. 
In a single-processor multiprogramming system, processes are interleaved in time but
still appear to be executing simultaneously. In a multiprocessor system the processes
are overlapped, that is, two or more processes execute simultaneously. In both situations
the following difficulties arise.
a) Sharing of global resources. 
b) Allocation of resources. 
c) Locating a programming error. 
Consider a simple example in a uni-processor environment: a shared procedure that echoes a
character typed at the keyboard.

char chin, chout;   /* shared global variables */

void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}

If two processes share this procedure and the global variables, one process may be interrupted
just after reading its character into chin; the other process then runs echo() and overwrites chin,
so when the first process resumes, its own input character is lost and the second character is
echoed twice.
Operating System Concerns: The following are the issues with respect to existence 
of the concurrency. 
1. The operating system must be able to keep track of various processes. 
2. The operating system must allocate and de-allocate various resources for each 
process. The resources are processor time, memory, input output device, files. 
3. The operating system must protect the data and critical resources of each 
process. 
4. The functioning of a process and the results it produces must be independent of the speed
at which its execution is carried out relative to other concurrent processes.
Process Interaction: Processes can interact with one another in the following ways.
 Processes unaware of each other.
 Processes indirectly aware of each other.
 Processes directly aware of each other.
Process interaction:

Degree of awareness: Processes unaware of each other
Relationship: Competition
Influence that one process has on the other: Results of one process are independent of the actions of others; timing of a process may be affected.
Potential control problems: Mutual exclusion; deadlock (renewable resources); starvation.

Degree of awareness: Processes indirectly aware of each other (eg: a shared object)
Relationship: Cooperation by sharing
Influence that one process has on the other: Results of one process may depend on information obtained from others; timing of a process may be affected.
Potential control problems: Mutual exclusion; deadlock (renewable resources); starvation; data coherence.

Degree of awareness: Processes directly aware of each other (have communication primitives available to them)
Relationship: Cooperation by communication
Influence that one process has on the other: Results of one process may depend on information obtained from others; timing of a process may be affected.
Potential control problems: Deadlock (consumable resources); starvation.
Requirements For Mutual Exclusion: Any solution for enforcing mutual
exclusion must satisfy the following requirements.
1. Mutual exclusion must be enforced: when one process is in its critical
section, no other process may be in its critical section.
2. A process that halts in its non-critical section must not interfere with other
processes.
3. A process should not wait indefinitely for entry into its critical section.
4. When no process is in its critical section, any process that requests entry to
its critical section must be granted entry without delay.
5. A process must remain inside its critical section for a finite time only.
Mutual Exclusion: Algorithm Approach 
Algorithm 1 
/* process 0 */                    /* process 1 */
while (turn != 0)                  while (turn != 1)
    /* do nothing */ ;                 /* do nothing */ ;
/* critical section */             /* critical section */
turn = 1;                          turn = 0;

In this case the two processes share a variable turn. A process which wants to enter the critical
section checks the turn variable; if the value of turn is equal to its own process number,
the process may go into the critical section. The drawbacks of this algorithm are busy waiting
and strict alternation: the processes must take turns entering the critical section, and if one
process fails, the other is permanently blocked.
Algorithm 2 
/* process 0 */
while (flag[1])
    /* do nothing */ ;
flag[0] = true;
/* critical section */
flag[0] = false;

/* process 1 */
while (flag[0])
    /* do nothing */ ;
flag[1] = true;
/* critical section */
flag[1] = false;
In this algorithm we use a Boolean array flag. When a process wants to enter its critical
section, it checks the other process's flag; if it is false, it indicates that the other process is not in its
critical section. The checking process then sets its own flag to true and goes into the
critical section. After leaving its critical section it resets its flag to false.
This algorithm does not ensure mutual exclusion. It can happen that both processes
check each other's flag and find it false; each then sets its own flag to true and both enter their
critical sections simultaneously.
Algorithm 3 
/* process 0 */
flag[0] = true;
while (flag[1])
    /* do nothing */ ;
/* critical section */
flag[0] = false;

/* process 1 */
flag[1] = true;
while (flag[0])
    /* do nothing */ ;
/* critical section */
flag[1] = false;
In this algorithm a process which wants to enter the critical section first sets its own flag to true
and then checks the other process's flag: if it is not set, the process enters the critical section; if it
is set, the process waits.
The drawback of this algorithm is that both processes may set their flags to true
and then check each other's flag, causing a deadlock. Also, if a process fails inside the critical
section, the other process is blocked.
Algorithm 4 
/* process 0 */
flag[0] = true;
while (flag[1])
{
    flag[0] = false;
    /* delay */
    flag[0] = true;
}
/* critical section */
flag[0] = false;

/* process 1 */
flag[1] = true;
while (flag[0])
{
    flag[1] = false;
    /* delay */
    flag[1] = true;
}
/* critical section */
flag[1] = false;
In this algorithm it can be shown that a livelock situation can occur, in which the two processes
continuously set and reset their flags without doing any useful work. If one of the processes
slows down, the livelock is broken and one of the processes enters its critical
section.
Peterson’s Algorithm: 
boolean flag[2];
int turn;

void p0()
{
    while (true)
    {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            /* do nothing */ ;
        /* critical section */
        flag[0] = false;
        /* remainder section */
    }
}

void p1()
{
    while (true)
    {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0)
            /* do nothing */ ;
        /* critical section */
        flag[1] = false;
        /* remainder section */
    }
}

void main()
{
    flag[0] = false;
    flag[1] = false;
    parbegin(p0, p1);   /* run p0 and p1 concurrently */
}
Peterson's algorithm gives a simple solution to the problem of mutual exclusion for two
processes. The shared variable turn records whose turn it is to defer, and the flag array records
which processes want to enter the critical section.
Suppose p0 wants to enter its critical section. It sets flag[0] to true and then sets turn
to 1, giving the turn to p1. If p1 is not interested (flag[1] is false), p0 enters its critical section
immediately; if p1 is already in its critical section, p0 busy-waits until p1 leaves and resets
flag[1]. Mutual exclusion is therefore preserved. It can be shown that Peterson's algorithm is free
from deadlock and livelock, and the algorithm can be generalized for n processes.
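As an added illustration (not part of the original notes), the logic above can be tried out with two POSIX threads. Note that this is only a sketch of the algorithm as written: on modern multiprocessors, compilers and CPUs may reorder plain memory accesses, so production code would need atomic operations or memory fences rather than bare volatile variables.

#include <pthread.h>
#include <stdio.h>

/* Two threads protecting a shared counter with Peterson's algorithm. */
volatile int flag[2] = {0, 0};
volatile int turn = 0;
long counter = 0;                       /* the shared resource */

static void *worker(void *arg)
{
    int me = *(int *)arg, other = 1 - me;
    for (int i = 0; i < 100000; i++) {
        flag[me] = 1;                   /* announce interest          */
        turn = other;                   /* give the turn away         */
        while (flag[other] && turn == other)
            ;                           /* busy-wait (do nothing)     */
        counter++;                      /* critical section           */
        flag[me] = 0;                   /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}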
Semaphores: Two or more processes can co-operate by means of simple signals, such that a
process can be forced to stop at a specific place until it has received a specific signal. Any
coordination requirement can be satisfied by appropriate signals. For signalling, special variables
called semaphores are used. To transmit a signal via semaphore s, a process executes the
primitive semSignal(s); to receive a signal via semaphore s, a process executes the
primitive semWait(s).
To achieve the desired effect we can view the semaphore as a variable that has an integer
value upon which three operations are defined.
1) A semaphore may be initialized to a non-negative value.
2) The semWait operation: semWait decrements the semaphore value. If the value
becomes negative, the process executing the semWait is blocked; otherwise the
process continues.
3) The semSignal operation: semSignal increments the semaphore value.
If the resulting value is less than or equal to 0, a process blocked by a semWait operation,
if any, is unblocked.
Semaphore primitives are defined as follows 
struct semaphore
{
    int count;
    queueType queue;
};

void semWait(semaphore s)
{
    s.count--;
    if (s.count < 0)
    {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignal(semaphore s)
{
    s.count++;
    if (s.count <= 0)
    {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}
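For comparison, here is how the semWait/semSignal pattern looks with a real counting-semaphore API, the POSIX sem_t interface (this sketch is an addition to the notes; the variable name mutex is only illustrative). A semaphore initialised to 1 gives mutual exclusion: sem_wait plays the role of semWait and sem_post the role of semSignal.

#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                            /* counting semaphore, used as a lock */

void critical_work(int id)
{
    sem_wait(&mutex);                   /* semWait: decrement, may block       */
    printf("process %d in critical section\n", id);
    sem_post(&mutex);                   /* semSignal: increment, wake a waiter */
}

int main(void)
{
    sem_init(&mutex, 0, 1);             /* initial count 1 = resource free     */
    critical_work(1);
    critical_work(2);
    sem_destroy(&mutex);
    return 0;
}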
Binary Semaphores: A binary semaphore may only take the values 0 and 1 and can be defined
by the following three operations.
1. Initialization: A binary semaphore may be initialized to 0 or 1.
2. semWaitB operation: The semWaitB operation checks the semaphore value. If the
value is 0, the process executing the semWaitB is blocked. If it is 1, the value is changed
to 0 and the process continues.
3. semSignalB operation: The semSignalB operation checks whether any processes are
blocked on this semaphore. If so, a process blocked by a semWaitB operation is
unblocked.
If no processes are blocked, the value of the semaphore is set to 1.
Binary semaphore primitives are defined as follows.

struct binary_semaphore
{
    enum {zero, one} value;
    queueType queue;
};

void semWaitB(binary_semaphore s)
{
    if (s.value == one)
        s.value = zero;
    else
    {
        /* place this process in s.queue */
        /* block this process */
    }
}

void semSignalB(binary_semaphore s)
{
    if (s.queue is empty())
        s.value = one;
    else
    {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}
Unit - IV 
MEMORY MANAGEMENT 
Memory consists of a large array of words or bytes, each having its own address. The CPU fetches
instructions from memory according to the value of the program counter. To improve the utilization
of the CPU the computer must keep several processes in memory. To utilize memory, many memory
management schemes have been proposed. Selection of a memory management scheme depends on many
factors, such as the hardware of the system.
ADDRESS BINDING 
A user program goes through several steps such as compiling, linking and loading. Addresses may be
represented in different ways during these steps. Addresses in the source program are generally
symbolic. A compiler will bind these addresses to relocatable addresses. The loader will in turn bind these
addresses to absolute addresses. Each binding is a mapping from one address space to another.
The binding of instructions and data can be done at the following times.
1. Compile time - If it is known at compile time where the process will reside in memory, then absolute
code can be generated. If the starting address changes, the program must be recompiled.
2. Load time - If it is not known at compile time where the process will reside in memory, then the
compiler must generate relocatable code.
3. Execution time - Binding is delayed until run time if the process can be moved during its execution from
one memory segment to another. Hardware support is needed for the address maps (e.g. base and limit registers).
LOGICAL V/S PHYSICAL ADDRESS SPACE 
Logical address - generated by the CPU; also referred to as a virtual address.
Physical address - the address seen by the memory unit.
Logical and physical addresses are the same in the compile-time and load-time address-binding
schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
DYNAMIC LOADING 
 A routine is not loaded until it is called.
 Better memory space utilization; an unused routine is never loaded.
 Useful when large amounts of code are needed to handle infrequently occurring cases.
 No special support from the OS is required; it is implemented through program design.
OVERLAYS : 
OVERLAYS FOR A TWO PASS ASSEMBLER 
 Keep in memory only those instructions and data that are needed at any given time. 
 Needed when process is larger than amount of memory allocated to it. 
 Implemented by user, no special support needed from OS, programming design of overlay 
structure is complex. 
SWAPPING 
A process needs to be in memory to be executed. A process can be swapped out temporarily 
from the main memory to a backing store and then brought back into main memory for 
continued execution. When a process completes its time slice it will be swapped with another 
process (in case of RR scheduling algorithm) 
Another type of swapping policy, used with priority-based scheduling algorithms, is that if a higher-priority
process arrives and wants service, the memory manager can swap out a lower-priority
process and swap in the higher-priority process. This type of swapping is called roll out, roll in.
Swapping requires a backing store. The backing store is commonly a fast disk, large enough
to accommodate copies of all memory images for all users, and it must provide direct access to
these memory images.
The major part of swap time is transfer time; the total transfer time is directly proportional to the
amount of memory swapped.
SCHEMATIC VIEW OF SWAPPING 
CONTIGUOUS ALLOCATION: 
The memory is usually divided into two partitions: one for the OS and the other for the user processes. We
want several user processes to reside in memory at the same time; hence we need to consider how to
allocate the available memory to the processes. In contiguous memory allocation each process is
contained in a single contiguous section of memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit
registers, and every address generated by the CPU is checked against these registers, so that the OS and
other user programs are not modified by the running process.
HARDWARE SUPPORT FOR RELOCATION AND LIMIT REGISTERS: the logical address generated by the CPU is
first compared with the limit register; if it is less than the limit, it is added to the relocation register to
obtain the physical address sent to memory, otherwise the hardware traps to the OS with an addressing error.
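The check performed by this hardware can be sketched in software as follows (this example is an addition to the notes; the register values are made up).

#include <stdio.h>
#include <stdlib.h>

/* The relocation/limit check: the logical address must be below the limit,
   and the relocation register is added to form the physical address. */
unsigned translate(unsigned logical, unsigned limit, unsigned relocation)
{
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (logical %u >= limit %u)\n",
                logical, limit);
        exit(1);
    }
    return logical + relocation;        /* physical address */
}

int main(void)
{
    unsigned limit = 4096, relocation = 140000;
    printf("logical 1000 -> physical %u\n", translate(1000, limit, relocation));
    return 0;
}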
MEMORY ALLOCATION 
One of the simplest methods for memory allocation is to divide memory into several fixed-sized
partitions. Each partition may contain exactly one process. When a partition is free, a process is selected
from the input queue and loaded into the free partition. When the process terminates, the partition
becomes available for another process.
The OS keeps a table indicating which parts of memory are available and which are occupied. Initially all
memory is available for user processes and is considered as one large block of available memory, called a
hole. When a process arrives and needs memory, we search for a hole that is large enough for the process. The set of
holes is searched to determine which hole is best to allocate. The following strategies are used to
select a free hole.
1. First fit - Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or where the previous first-fit search ended; we can stop searching as soon as we find a free
hole that is large enough.
2. Best fit - Allocate the smallest hole that is big enough. For this purpose we must search the entire list
(unless it is ordered by size). It produces the smallest leftover hole.
3. Worst fit - Allocate the largest hole. For this purpose we must search the entire list. It produces the
largest leftover hole. First fit and best fit are better than worst fit in terms of speed and storage
utilization. (A minimal first-fit sketch is given at the end of this subsection.)
All the above algorithms suffer from external fragmentation. Memory fragmentation can be internal or
external.
 External fragmentation - total memory space exists to satisfy a request, but it is not contiguous.
 Internal fragmentation - allocated memory may be slightly larger than the requested memory; the
size difference is memory internal to a partition, but not being used.
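The first-fit strategy mentioned above can be sketched as a search over a list of hole sizes. The following C fragment is an illustration added here (the hole sizes are made up); it returns the index of the first hole large enough for the request and shrinks that hole by the allocated amount.

#include <stdio.h>

/* First fit over an array of free-hole sizes (in KB). Returns the index of
   the first hole that is big enough, or -1 if no hole fits. */
int first_fit(int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) {
            holes[i] -= request;        /* the leftover stays as a smaller hole */
            return i;
        }
    return -1;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};
    printf("212 KB -> hole %d\n", first_fit(holes, 5, 212));  /* hole 1 */
    printf("417 KB -> hole %d\n", first_fit(holes, 5, 417));  /* hole 4 */
    printf("112 KB -> hole %d\n", first_fit(holes, 5, 112));  /* hole 1 again */
    return 0;
}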
PAGING: 
Paging is a memory management scheme which permits the physical address space of a process to
be non-contiguous. Paging avoids the problem of fitting memory chunks of different sizes into
memory. In this scheme physical memory is broken into fixed-sized blocks called frames, while
logical memory is broken into blocks of the same size called pages. When a process is to be
executed, its pages are loaded into any available memory frames from the backing store.
The following diagram gives the required paging hardware.
Every address generated by the CPU is divided into two parts: a page number p and a page offset d. The
page number is used as an index into a page table. The page table contains the base address of each page
in physical memory. This base address is combined with the page offset to define the physical memory
address which is sent to the memory unit.
The paging model of memory is as shown below.
The size of a page is a power of 2 and typically lies between 512 bytes and 16 MB per page, depending on the
computer architecture. The selection of a power of 2 as the page size makes the translation of a logical
address into a page number and page offset easy.
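For example, with an assumed page size of 4 KB (2^12 bytes), the split into page number and offset is a simple division, or equivalently a shift and a mask. The short C sketch below is an illustration added to the notes.

#include <stdio.h>

int main(void)
{
    unsigned page_size = 4096;                    /* 2^12, an assumed value    */
    unsigned logical   = 20500;                   /* example logical address   */
    unsigned p = logical / page_size;             /* page number (high bits)   */
    unsigned d = logical % page_size;             /* page offset (low 12 bits) */
    printf("logical %u -> page %u, offset %u\n", logical, p, d);
    printf("via bit operations: page %u, offset %u\n",
           logical >> 12, logical & 0xFFFu);
    return 0;
}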
When we use the paging scheme we have no external fragmentation; however, there may be some
internal fragmentation. When a process arrives in the system to be executed, its size in terms of pages
is examined. Each page of the process needs one frame. The first page of the process is loaded into
one of the allocated frames and the frame number is put in the page table for this process. This is
done for all the pages. The OS keeps track of which frames are allocated and which are free in a data
structure called the frame table, as given in the diagram below.
SEGMENTATION : 
A program may consist of a main program, subroutines, procedures, functions, etc., each of which
can be considered a segment of variable length. Elements within a segment are identified by their
offset from the beginning of the segment. Segmentation is a memory management scheme which
supports this view of memory. A logical address space is a collection of segments. Each segment
has a name and a length. Addresses specify the segment name and the offset within the
segment; a logical address consists of a segment number and an offset.
SEGMENTATION HARDWARE: 
A logical address consists of two parts: a segment number s and an offset d. The segment number is used
as an index into the segment table. The offset d of the logical address must be between 0 and the
segment limit; if it is not, we trap to the OS. If the offset is valid, it is added to the segment base to
produce the address in physical memory. (A minimal sketch of this translation is given at the end of this subsection.)
A particular advantage of segmentation is the association of protection with the segments.
Another advantage of segmentation involves the sharing of code or data.
Segmentation may cause external fragmentation, when all blocks of free memory are too small to
accommodate a segment.
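A minimal sketch of the translation described above, using a made-up segment table, is given below (an illustration added to the notes, not taken from them).

#include <stdio.h>

/* Segment-table translation: the segment number indexes a table of
   (base, limit) pairs; the offset is checked against the limit. */
struct segment { unsigned base; unsigned limit; };

int main(void)
{
    struct segment table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };
    unsigned s = 2, d = 53;                       /* logical address (s, d) */

    if (d >= table[s].limit)
        printf("trap to OS: offset %u out of range for segment %u\n", d, s);
    else
        printf("segment %u, offset %u -> physical %u\n",
               s, d, table[s].base + d);
    return 0;
}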
SEGMENTATION WITH PAGING: 
By combining segmentation and paging we can get the best of both. In this model the logical address
space of a process is divided into two partitions. The first partition consists of up to 8 K (8,192) segments that
are private to that process. The second partition consists of up to 8 K segments which are shared
among all the processes. Information about the first partition is kept in the local descriptor
table (LDT), while information about the second partition is kept in the global descriptor table
(GDT). Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a particular
segment, including the base location and length of that segment. The logical address consists of a
selector and an offset; the selector is a 16-bit number laid out as
s (13 bits) | g (1 bit) | p (2 bits)

where s = segment number,
g = indicates whether the segment is in the LDT or the GDT,
p = protection bits.

The offset is a 32-bit number specifying the location of the byte within the segment. It is laid out as

page number: p1 (10 bits), p2 (10 bits) | page offset: d (12 bits)
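With this layout the fields can be unpacked with shifts and masks, as in the sketch below (the sample selector and offset values are arbitrary; the code is an added illustration).

#include <stdio.h>

int main(void)
{
    unsigned short selector = 0x004B;     /* arbitrary 16-bit example    */
    unsigned s = selector >> 3;           /* top 13 bits: segment number */
    unsigned g = (selector >> 2) & 0x1;   /* 1 bit: LDT or GDT           */
    unsigned p = selector & 0x3;          /* low 2 bits: protection      */

    unsigned offset = 0x00ABC123;         /* arbitrary 32-bit example    */
    unsigned p1 = offset >> 22;           /* top 10 bits                 */
    unsigned p2 = (offset >> 12) & 0x3FF; /* next 10 bits                */
    unsigned d  = offset & 0xFFF;         /* low 12 bits: page offset    */

    printf("s=%u g=%u p=%u   p1=%u p2=%u d=%u\n", s, g, p, p1, p2, d);
    return 0;
}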
VIRTUAL MEMORY 
Virtual memory is a technique that allows the execution of processes which may not be completely in
memory. One major advantage of this scheme is that programs can be larger than the
physical memory. It is seen that many programs have code to handle unusual error conditions; since these
errors seldom occur, this code is almost never executed. Also, programs may contain arrays, lists and
tables which are allocated more memory than they actually use. Apart from this, certain options and features
of a program are rarely used. Virtual memory makes the task of programming much easier because the
programmer no longer needs to worry about the amount of physical memory available. Virtual memory is
implemented by demand paging as well as demand segmentation.
DEMAND PAGING 
A demand paging system is similar to a paging system with swapping, in which we have a lazy swapper
that swaps in only those pages which are needed rather than the entire process. When a process is to
be swapped in, the swapper (pager) guesses which pages will be used before the process is swapped out
again, and it brings in only those pages. Hence the pager decreases both the swap time and the amount of
physical memory needed.
To implement demand paging we need some hardware support to differentiate between those pages which are
in memory and those pages that are on the disk.
When the valid bit is set it means that the page is legal and is in the memory. If the bit is set to invalid it 
means that either the page is not valid or is valid but is currently on the disk. 
STEPS FOR HANDLING PAGE FAULT: 
The procedure for handling page fault is as follows. 
1. We check an internal table to determine whether the reference was valid or invalid.
2. If it was a valid reference but the page is not yet in physical memory, we now bring it in.
3. We find a free frame.
4. Bring the desired page from the disk into the frame.
5. Modify the internal table and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the page fault.
PAGE REPLACEMENT 
If no frame is free, we find a frame that is not currently being used and free it. We can free a frame by
writing its contents to swap space and changing the page table entries. This is done by the following steps.
1. Find the location of the desired page on disk. 
2. Find a free frame: 
 If there is a free frame, use it, 
 If there is no free frame, use a page replacement algorithm to select a victim frame. 
3. Read the desired page into the (newly) free frame. Update the page and frame table. 
4. Restart the process. 
If no frames are free, two page transfers are required (one page out, one page in). We can reduce this overhead
by using a modify bit (dirty bit). The dirty bit is set for a page if the page has been modified; in that case,
on replacement, the page must first be written back to the disk. If the dirty bit is not set, the page has not
been modified since it was read in, so we need not write it back; its frame can simply be overwritten by the new page.
PAGE REPLACEMENT ALGORITHMS. 
First-In First-Out (FIFO) Algorithm.
The simplest page replacement algorithm is the FIFO algorithm. A FIFO replacement algorithm
associates with each page the time when that page was brought into memory. When a page must be
replaced, the oldest page is chosen. The FIFO algorithm is easy to understand and program; however, its
performance is not always good.
Prob: Consider the following reference string 
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1. Find the number of page faults using FIFO page replacement 
algorithm with three frames. 
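The count can be checked with a short simulation. The following C sketch (added here as an illustration) runs FIFO replacement with 3 frames over the reference string above and reports 15 page faults.

#include <stdio.h>

int main(void)
{
    int rs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof rs / sizeof rs[0];
    int frames[3] = {-1, -1, -1};
    int next = 0, faults = 0;                 /* next = index of oldest frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == rs[i]) hit = 1;  /* page already in memory?      */
        if (!hit) {
            frames[next] = rs[i];             /* replace the oldest page      */
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);
    return 0;
}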
Belady's Anomaly:
In some page replacement algorithms, the number of page faults may increase as the number of
allocated frames increases. A curve of the number of page faults against the number of frames
illustrates Belady's anomaly: for the classic example reference string, with three frames we get 9
page faults, and with four frames we get 10 page faults.
OPTIMAL PAGE REPLACEMENT ALGORITHM: 
In this algorithm we replace the page which will not be used for the longest period of time. This
algorithm has the lowest page fault rate of all algorithms and does not suffer from
Belady's anomaly. This algorithm is difficult to implement because it requires future knowledge of
the reference string; hence it is mainly used for comparison studies.
Eg: For the following reference string find the number of page faults using the optimal page replacement
algorithm with 3 frames.
RS: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1.
Answer: 9 page faults.
LEAST RECENTLY USED ALGORITHM(LRU): 
In this algorithm, when we need to replace a page, we replace the page that has not been used for
the longest period of time. LRU replacement associates with each page the time of that page's last use;
when a page must be replaced, we choose the page which has not been used for the longest period of
time. Hence we look into the past.
The major problem with this algorithm is implementing it, for which additional hardware is required. The
two types of implementation are
1. Counter implementation
 Every page table entry has a time-of-use counter; every time the page is referenced through this entry, the
clock is copied into the counter.
 When a page needs to be replaced, we look at the counters to determine which page to replace
(the one with the smallest counter value).
2. Stack implementation. 
Another approach to implementing LRU is to keep a stack of page numbers. Whenever a page is referenced,
it is removed from the stack and put on the top. Hence the top of the stack is always the most recently used
page and the bottom of the stack is the least recently used page.
Eg : For the following reference string find the number of page faults using LRU algorithm with 3 
frames. 
RS-7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 
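The stack implementation described above can be sketched directly in C. The fragment below (added as an illustration) keeps the most recently used page at index 0 so the victim is always the last element; for this reference string with 3 frames it reports 12 page faults.

#include <stdio.h>

int main(void)
{
    int rs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof rs / sizeof rs[0];
    int stack[3], used = 0, faults = 0;       /* stack[0] = most recently used */

    for (int i = 0; i < n; i++) {
        int pos = -1;
        for (int j = 0; j < used; j++)
            if (stack[j] == rs[i]) pos = j;   /* already in a frame?           */
        if (pos < 0) {                        /* page fault                    */
            faults++;
            if (used < 3) used++;             /* free frame still available    */
            pos = used - 1;                   /* otherwise the LRU slot (last) */
        }
        for (int j = pos; j > 0; j--)         /* shift down, then put the      */
            stack[j] = stack[j - 1];          /* referenced page on top        */
        stack[0] = rs[i];
    }
    printf("LRU page faults = %d\n", faults);
    return 0;
}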
THRASHING: 
For execution, a process needs a certain number of frames. If the process does not have the number of
frames it needs, it page-faults; we must then replace some page. But since all its pages are in
active use, we end up replacing a page that will be needed again almost immediately, so the process
faults again, and again, and again. A process that is spending more time paging than executing is said
to be thrashing.
CAUSE OF THRASHING:
The OS monitors CPU utilization; if it is low, the OS increases the degree of multiprogramming by
introducing new processes. A global page replacement algorithm is used, which replaces pages
without regard to the process to which they belong. Suppose a process needs more frames: it starts
faulting and takes frames away from other processes. These processes need those pages, hence they also
fault, taking frames away from still other processes. Hence we get high paging activity, due to which CPU
utilization decreases.
As CPU utilization decreases, the CPU scheduler brings in more new processes, causing more
page faults, and as a result CPU utilization drops even further. The CPU scheduler again brings in more
processes. At this stage thrashing has occurred.
Unit - V 
FILE SYSTEM. 
A file is a collection of related information which is recorded on secondary storage. From the user's point
of view, data cannot be written to secondary storage unless they are within a file. Files represent
programs and data. A file has a certain defined structure according to its type: a text file is a sequence
of characters; a source file is a sequence of subroutines and functions; an object file is a sequence of
bytes organized into blocks; an executable file is a series of code sections that the loader can bring into memory and execute.
FILE ATTRIBUTES: 
A file has certain attributes, which may vary from one OS to another, and typically consists of the following.
1. Name - The symbolic file name is the only information through which the user can identify the file.
2. Identifier - A number which identifies the file within the file system.
3. Type - Required for those systems which support different file types.
4. Location - Information regarding the location of the file on the device on which the file
resides.
5. Size - The current size of the file.
6. Protection - Access control information that decides who can do reading, writing and
executing.
7. Time, date and user identification - This information may be kept for creation, last modification and
last use. These data can be useful for protection, security, as well as monitoring of usage.
FILE OPERATIONS: 
The various file operations are: 
1. Creating a file - To create a file, two steps are necessary.
 Space in the file system must be found for the file.
 An entry for the new file must be made in the directory.
2. Writing a file - To write a file, we make a system call specifying the name of the file and the
information to be written to the file. The system must keep a write pointer to the location in the file
where the next write is to take place.
3. Reading a file - To read from a file, we make a system call specifying the name of the file and the
block of information to be read. The system searches the directory to find the file and
maintains a read pointer to the location from which the next read is to take place.
4. Repositioning within a file - The directory is searched for the appropriate entry and the current file-position
pointer is set to a given value.
5. Deleting a file - To delete a file, we search the directory for the named file, release all file space so
that it can be reused by other files, and erase the directory entry.
6. Truncating a file - This operation allows the user to retain the file's attributes but erase the contents of
the file.
Other common operations include appending to a file and renaming a file.
The following information is associated with an open file.
1. File pointer - Keeps track of the last read/write location in the file.
2. File open count - Keeps a count of the number of opens and closes of the file; it
becomes zero on the last close.
3. Disk location of the file - Required for locating and modifying the data within the file.
4. Access rights - This information can be used to allow or deny any subsequent request.
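As an added illustration (not part of the original notes), the file operations described above map naturally onto POSIX system calls; the file name demo.txt is just an example.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("demo.txt", O_CREAT | O_RDWR | O_TRUNC, 0644);  /* create  */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello, file system\n", 19);    /* write: advances the pointer  */
    lseek(fd, 0, SEEK_SET);                   /* reposition within the file   */

    char buf[32];
    ssize_t got = read(fd, buf, sizeof buf - 1);                  /* read     */
    if (got > 0) { buf[got] = '\0'; printf("read back: %s", buf); }

    ftruncate(fd, 0);                         /* truncate: attributes kept,
                                                 contents erased              */
    close(fd);
    unlink("demo.txt");                       /* delete                       */
    return 0;
}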
FILE TYPES: 
A common technique for implementing file types is to include the type as part of the file name. The
name is split into two parts: a name and an extension. Common file types are:

executable (exe, com, bin, or none): ready-to-run machine-language program
object (obj, o): compiled, machine language, not linked
text (txt, doc): textual data, documents
batch (bat, sh): commands to the command interpreter
word processor (wp, tex, rtf, doc): various word-processor formats
FILE STRUCTURE : 
Since there are various kinds of files, it could be argued that the OS should support multiple file structures;
but then the resulting size of the OS would become very large. If the OS defines five different file structures, it
needs to contain the code to support all five. Severe problems may also arise if a new
application requires a file structure not supported by the OS.
UNIX considers each file to be a sequence of 8-bit bytes; no interpretation of these bits is made by the
OS. Each application program must include its own code to interpret an input file as having the proper structure.
However, every OS must support at least the executable file structure.
ACCESS METHODS 
The various access methods are
1. Sequential access
2. Direct access
3. Indexed access
1. Sequential access - The simplest access method is sequential access, which is used by editors,
compilers, etc. Information in the file is processed in order, one record after the other. The operations on
the file are reads and writes: a read operation reads the next portion of the file and advances the file
pointer, while a write operation appends to the end of the file and advances the pointer. Sequential access
is based on a tape model of a file.
2. Direct access - We consider the file to be made up of fixed-length logical records which allow programs to
read and write records without following any particular order. The direct access method is based on a disk model of
a file, since a disk allows random access to any file block. For direct access the file is viewed as a numbered
sequence of blocks or records. Since there is no restriction on the order, we may read block 14,
then block 54, and then write block 6.
For the direct access method, the file operations are read n and write n, where n is the block number.
3. Indexed access -
In this method an index is built for the file; the index contains pointers to the blocks, and to find a record we
first search the index. With very large files, the index file itself may become too large to be kept in memory. Hence we
create an index for the index file: the primary index file contains pointers to secondary index
files, which point to the actual data items.
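A short sketch (added here; the file name and block size are assumptions) contrasts sequential access, which simply reads block after block, with direct access, which jumps straight to block n using lseek.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

#define BLOCK 512                         /* assumed logical block size */

int main(void)
{
    char buf[BLOCK];
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Sequential access: read next, read next, ... */
    while (read(fd, buf, BLOCK) == BLOCK)
        ;                                 /* process each block in order */

    /* Direct access: "read block 14" without touching blocks 0..13 */
    lseek(fd, (off_t)14 * BLOCK, SEEK_SET);
    read(fd, buf, BLOCK);

    close(fd);
    return 0;
}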
DIRECTORY STRUCTURES 
The directory can be viewed as a symbol table which translates file names into their directory entries. A
directory can be organized in many ways. The following are the various operations performed on a
directory.
1..Search for a file 
2..Create a file 
3..Delete a file 
4..List a directory 
5..Rename a file 
A directory has the following logical structures.
1. Single-level directory
The simplest directory structure is the single-level directory, in which all files are stored in the same
directory. The advantage of this structure is that it is easy to support and understand.
The drawback of this implementation is that as the number of files increases, the user may find it difficult
to remember the names of all the files. If the system has more than one user, the file-naming issue also
arises, because each file must have a unique name.
2. Two-level directory
In the two-level directory structure, each user has his own user file directory (UFD). Each UFD has a similar
structure and contains the files of a single user. When a user logs in, the system's master file
directory (MFD) is searched. When a user refers to a particular file, only his own UFD is searched; hence
different users may have files with the same name, as long as all the file names within each UFD are
unique.
To create a file for a user, the OS searches only that user's UFD to confirm whether another file of that
name exists. To delete a file, the OS confines its search to the local UFD; hence it cannot accidentally delete
another user's file which has the same name.
UFDs are created by a special system program using the proper user name and account information. The
program creates a new UFD and adds an entry for it in the MFD.
The disadvantage of the two-level directory structure is that it isolates one user from another. In some
systems, if the path name is given, access is possible to a file residing in another user's UFD. A two-level
directory can be thought of as a tree of height 2: the root of the tree is the MFD, the UFDs are the branches,
and the files are the leaves.
3. Tree-structured directory
The tree-structured directory is the most common directory structure. It contains a root directory, and every
file in the system has a unique path name. A directory (or subdirectory) contains a set of files or
subdirectories. A directory is simply another file, treated in a special way. All directories have the same
internal format; one bit in each directory entry defines the entry as a file (0) or as a
subdirectory (1). Special system calls are used to create and delete directories.
Each user has a current directory; when a reference is made to a file, the current directory is searched. If
the file is not in the current directory, then the user must specify a path name or change the current
directory to the directory containing that file. The user can change his current directory whenever
required.
In this structure, path names can be of two types: absolute path names or relative path names.
There are two approaches to handling the deletion of a directory.
1. Some systems will not delete a directory unless it is empty.
2. Some systems delete a directory even if it contains subdirectories or files.
4. Acyclic-graph directories
An acyclic-graph directory structure allows directories to have shared subdirectories and files, so that the
same file or subdirectory may be in two different directories. A shared file or directory is not the same
as two copies: any change made by one user is immediately visible to the other, and a new
file created by one person automatically appears in all the shared subdirectories.
This structure is more complex, since a file may have multiple absolute path names. Another problem
involves the deletion of a shared directory or file.
ALLOCATION METHODS: 
1. Contiguous allocation - The contiguous allocation method requires each file to occupy a set of
contiguous blocks on the disk. Contiguous allocation of a file is defined by the disk address of the first
block and the length of the file. The directory entry for each file indicates the address of the starting block
and the length of the area allocated for the file. Accessing a file in this method is very easy; it supports
both sequential and direct access.
The main drawback of this method is that it suffers from external fragmentation. Another problem is
finding space for a new file. Also, with contiguous allocation there is the problem of determining how much
space is needed for a file.
2. Linked allocation - In linked allocation each file is a linked list of disk blocks; the disk blocks may be
scattered anywhere on the disk. The directory contains a pointer to the first and last blocks
of the file, and each block contains a pointer to the next block. These pointers are not made available to the
user. In this allocation there is no external fragmentation, and any free block can be used for a file; a file
can continue to grow as long as free blocks are available. The disadvantage of linked allocation is that
it does not support direct access. Another disadvantage is the space required for the
pointers. A problem may also arise if a pointer is damaged or lost, hence this type of allocation can be
unreliable.
3. Indexed allocation - In this allocation all the pointers for a file are brought together into one block called
the index block. The directory contains the address of the index block. When a file is created, all pointers in the
index block are set to nil; when a block is first written, its address is put in the index block.
Indexed allocation supports direct access and does not suffer from external fragmentation. The main
drawback of this allocation is the space wasted in storing the index blocks.
FREE SPACE MANAGEMENT 
1. Bit vector - In this approach each block is represented by 1 bit. If the block is free, the bit is 1; if the block
is allocated, the bit is 0.
The main advantage of this approach is its relative simplicity and efficiency in finding free
blocks. The disadvantage is that for fast access the bit vector has to be kept in main memory, where it
occupies a large amount of space for big disks. (A small sketch of scanning such a bit vector is given after this list.)
2. Linked list - In this approach we keep a pointer to the first free block. This free block contains a pointer
to the next free block, and so on. This scheme is not efficient, because to traverse the list of free blocks
we must read each block, which takes substantial time.
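To illustrate the bit-vector approach of point 1 above, the sketch below (an added example with a made-up bitmap) scans the vector for the first block whose bit is 1, i.e. the first free block.

#include <stdio.h>

/* Bit i of the map is 1 if block i is free and 0 if it is allocated. */
int first_free_block(unsigned char map[], int nblocks)
{
    for (int i = 0; i < nblocks; i++)
        if (map[i / 8] & (1u << (i % 8)))   /* bit set -> block i is free */
            return i;
    return -1;                              /* no free block */
}

int main(void)
{
    unsigned char map[2] = {0x00, 0x18};    /* only blocks 11 and 12 are free */
    printf("first free block = %d\n", first_free_block(map, 16));
    return 0;
}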
Fahad Shaikh (System Administrator) Page 72
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems
Operating Systems

More Related Content

What's hot

Operating system services 9
Operating system services 9Operating system services 9
Operating system services 9myrajendra
 
Operating system overview concepts ppt
Operating system overview concepts pptOperating system overview concepts ppt
Operating system overview concepts pptRajendraPrasad Alladi
 
Operating system presentation
Operating system presentationOperating system presentation
Operating system presentationEhetzaz Khan
 
Introduction to operating syatem
Introduction to operating syatemIntroduction to operating syatem
Introduction to operating syatemRafi Dar
 
Class 1: Introduction - What is an Operating System?
Class 1: Introduction - What is an Operating System?Class 1: Introduction - What is an Operating System?
Class 1: Introduction - What is an Operating System?David Evans
 
Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating systemAviroop Mandal
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating systemAisyah Rafiuddin
 
Services provided by os
Services provided by osServices provided by os
Services provided by osSumant Diwakar
 
Operating system structures
Operating system structuresOperating system structures
Operating system structuresRahul Nagda
 
Functions of Operating System
Functions of Operating SystemFunctions of Operating System
Functions of Operating SystemDr.Suresh Isave
 
Principles of operating system
Principles of operating systemPrinciples of operating system
Principles of operating systemAnil Dharmapuri
 
Operating system notes
Operating system notesOperating system notes
Operating system notesSANTOSH RATH
 
Operating system lecture1
Operating system lecture1Operating system lecture1
Operating system lecture1AhalyaSri
 
Operating Systems
Operating SystemsOperating Systems
Operating SystemsDan Hess
 
Operating Systems 1 (12/12) - Summary
Operating Systems 1 (12/12) - SummaryOperating Systems 1 (12/12) - Summary
Operating Systems 1 (12/12) - SummaryPeter Tröger
 
Basic of operating system
Basic of operating systemBasic of operating system
Basic of operating systempriyanka jain
 
Operating system || Chapter 1: Introduction
Operating system || Chapter 1: IntroductionOperating system || Chapter 1: Introduction
Operating system || Chapter 1: IntroductionAnkonGopalBanik
 
Unit 1 introduction to Operating System
Unit 1 introduction to Operating SystemUnit 1 introduction to Operating System
Unit 1 introduction to Operating Systemzahid7578
 

What's hot (20)

Operating system services 9
Operating system services 9Operating system services 9
Operating system services 9
 
Operating system overview concepts ppt
Operating system overview concepts pptOperating system overview concepts ppt
Operating system overview concepts ppt
 
Operating system presentation
Operating system presentationOperating system presentation
Operating system presentation
 
Introduction to operating syatem
Introduction to operating syatemIntroduction to operating syatem
Introduction to operating syatem
 
Class 1: Introduction - What is an Operating System?
Class 1: Introduction - What is an Operating System?Class 1: Introduction - What is an Operating System?
Class 1: Introduction - What is an Operating System?
 
Introduction to operating system
Introduction to operating systemIntroduction to operating system
Introduction to operating system
 
chapter 1 introduction to operating system
chapter 1 introduction to operating systemchapter 1 introduction to operating system
chapter 1 introduction to operating system
 
Services provided by os
Services provided by osServices provided by os
Services provided by os
 
Operating system structures
Operating system structuresOperating system structures
Operating system structures
 
Functions of Operating System
Functions of Operating SystemFunctions of Operating System
Functions of Operating System
 
Principles of operating system
Principles of operating systemPrinciples of operating system
Principles of operating system
 
Basic os-concepts
Basic os-conceptsBasic os-concepts
Basic os-concepts
 
Operating system notes
Operating system notesOperating system notes
Operating system notes
 
Operating system lecture1
Operating system lecture1Operating system lecture1
Operating system lecture1
 
Operating systems1[1]
Operating systems1[1]Operating systems1[1]
Operating systems1[1]
 
Operating Systems
Operating SystemsOperating Systems
Operating Systems
 
Operating Systems 1 (12/12) - Summary
Operating Systems 1 (12/12) - SummaryOperating Systems 1 (12/12) - Summary
Operating Systems 1 (12/12) - Summary
 
Basic of operating system
Basic of operating systemBasic of operating system
Basic of operating system
 
Operating system || Chapter 1: Introduction
Operating system || Chapter 1: IntroductionOperating system || Chapter 1: Introduction
Operating system || Chapter 1: Introduction
 
Unit 1 introduction to Operating System
Unit 1 introduction to Operating SystemUnit 1 introduction to Operating System
Unit 1 introduction to Operating System
 

Viewers also liked

Revealation of Glorious Qur'an
Revealation of Glorious Qur'anRevealation of Glorious Qur'an
Revealation of Glorious Qur'anFahad Shaikh
 
Sunnahs of Prophet Mohammed SAW
Sunnahs of Prophet Mohammed SAWSunnahs of Prophet Mohammed SAW
Sunnahs of Prophet Mohammed SAWFahad Shaikh
 
Advanced Java - Practical File
Advanced Java - Practical FileAdvanced Java - Practical File
Advanced Java - Practical FileFahad Shaikh
 
Cryptography & Network Security
Cryptography & Network SecurityCryptography & Network Security
Cryptography & Network SecurityFahad Shaikh
 
Booting Process OS
Booting Process OSBooting Process OS
Booting Process OSanilinvns
 
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...vtunotesbysree
 

Viewers also liked (9)

Ubuntu Handbook
Ubuntu HandbookUbuntu Handbook
Ubuntu Handbook
 
Revealation of Glorious Qur'an
Revealation of Glorious Qur'anRevealation of Glorious Qur'an
Revealation of Glorious Qur'an
 
Sunnahs of Prophet Mohammed SAW
Sunnahs of Prophet Mohammed SAWSunnahs of Prophet Mohammed SAW
Sunnahs of Prophet Mohammed SAW
 
Advanced Java - Practical File
Advanced Java - Practical FileAdvanced Java - Practical File
Advanced Java - Practical File
 
A Standard Dictionary of Muslim Names
A Standard Dictionary of Muslim NamesA Standard Dictionary of Muslim Names
A Standard Dictionary of Muslim Names
 
Islamic Names
Islamic NamesIslamic Names
Islamic Names
 
Cryptography & Network Security
Cryptography & Network SecurityCryptography & Network Security
Cryptography & Network Security
 
Booting Process OS
Booting Process OSBooting Process OS
Booting Process OS
 
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...
SOLUTION MANUAL OF OPERATING SYSTEM CONCEPTS BY ABRAHAM SILBERSCHATZ, PETER B...
 

Similar to Operating Systems

Similar to Operating Systems (20)

OS UNIT1.pptx
OS UNIT1.pptxOS UNIT1.pptx
OS UNIT1.pptx
 
Unit 1os processes and threads
Unit 1os processes and threadsUnit 1os processes and threads
Unit 1os processes and threads
 
LM1 - Computer System Overview, system calls
LM1 - Computer System Overview, system callsLM1 - Computer System Overview, system calls
LM1 - Computer System Overview, system calls
 
Operating Systems
Operating SystemsOperating Systems
Operating Systems
 
Os notes
Os notesOs notes
Os notes
 
Os
OsOs
Os
 
Operting system
Operting systemOperting system
Operting system
 
Ch1 OS
Ch1 OSCh1 OS
Ch1 OS
 
OS_Ch1
OS_Ch1OS_Ch1
OS_Ch1
 
OSCh1
OSCh1OSCh1
OSCh1
 
An Introduction to Operating Systems
An Introduction to Operating SystemsAn Introduction to Operating Systems
An Introduction to Operating Systems
 
Os unit 1
Os unit 1Os unit 1
Os unit 1
 
Operating System Simple Introduction
Operating System Simple IntroductionOperating System Simple Introduction
Operating System Simple Introduction
 
Operating System
Operating SystemOperating System
Operating System
 
ITM(2).ppt
ITM(2).pptITM(2).ppt
ITM(2).ppt
 
MYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptxMYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptx
 
Ch3 OS
Ch3 OSCh3 OS
Ch3 OS
 
OSCh3
OSCh3OSCh3
OSCh3
 
OS_Ch3
OS_Ch3OS_Ch3
OS_Ch3
 
Demo.pptx
Demo.pptxDemo.pptx
Demo.pptx
 

Recently uploaded

Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
2) Desktop Systems:
Desktop systems in their initial stages did not have the hardware features needed to protect the operating system from user programs. Hence early PC operating systems were neither multiuser nor multitasking. Since a single user has all the resources of the machine, efficiency is not the main concern; the chief goal of such systems was to maximize user convenience and responsiveness. With the advent of networking these systems needed protection, so this feature was added later.

3) Multiprocessor Systems:
Multiprocessor systems (also called parallel systems or tightly coupled systems) have more than one processor in close communication, sharing the computer bus, the clock, and sometimes the memory and peripheral devices. Multiprocessor systems have three main advantages:
I) Increased Throughput: By increasing the number of processors, more work is done in less time. However, the speed-up ratio with N processors is less than N, because of the overhead of keeping all processors working correctly and because the sharing of resources reduces the speed-up further.
II) Economy: Multiprocessor systems are more economical than multiple single-processor systems because the processors share peripherals, memory and power supplies.
III) Increased Reliability: If functions are distributed properly among several processors, the failure of one processor does not halt the system but only slows it down. Suppose we have 10 processors and one fails; each of the remaining nine takes over a share of the work of the failed processor, and overall performance degrades by only about 10%. The ability of a system to provide service proportional to the level of surviving hardware is called graceful degradation. Systems designed for graceful degradation are also called fault tolerant.

Multiprocessor systems can be realized in the following ways:

- Tandem system: This system uses both hardware and software duplication to ensure continuous operation even in the case of failures. It consists of two identical processors, each with its own local memory, connected by a bus. One processor is the primary and the other is the backup. Two copies of each process are kept, one on the primary and one on the backup. At fixed intervals (checkpoints) the state of each process is copied from the primary to the backup. If a failure is detected, the backup copy is activated and restarted from the most recent checkpoint. The drawback of this system is that it is expensive.

- Symmetric multiprocessing (SMP): In symmetric multiprocessing each processor runs an identical copy of the operating system, and these copies communicate with one another as needed. There is no master-slave relationship between processors; all processors are peers. The benefit of this model is that many processes can run simultaneously. The problem is that one processor may sit idle while another is overloaded; this can be avoided if the processors share certain data structures. A multiprocessor system of this form allows processes and resources to be shared properly among the various processors.

- Asymmetric multiprocessing: In asymmetric multiprocessing each processor is assigned a specific task. A master processor controls the system; the other processors either look to the master for instructions or have predefined tasks. This scheme defines a master-slave relationship: the master processor schedules and allocates work to the slave processors. The difference between symmetric and asymmetric multiprocessing may be the result of either hardware or software: special hardware can differentiate the processors, or the software can be written to allow only one master and multiple slaves.

4) Distributed Systems:
Distributed systems depend on networking for their functionality. By sharing computational tasks across machines, they provide a rich set of features to users. Networks vary by the protocols used, the distance between the nodes and the transport media. TCP/IP is the most common network protocol, and most operating systems support it. A LAN exists within a room or a building, a WAN exists between cities and countries, and so on. The different transport media include copper wire, fibre optics, satellite and radio links. The most common types of distributed system are:
1) Client-server systems
2) Peer-to-peer systems

i) Client-server systems: In a client-server system, server machines satisfy requests generated by client machines over the network. Server systems can be broadly categorized as compute servers and file servers. A compute-server system provides an interface to which clients can send requests to perform an action; in response, the server executes the action and sends the results back to the client. A file-server system provides a file-system interface where clients can create, update, read and delete files.

ii) Peer-to-peer systems: In this system the processors communicate with one another through various communication lines such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems. The operating system designed for such a system is called a network operating system, which provides features such as file sharing across the network and mechanisms that allow different processes on different computers to exchange messages.

5) Real-Time Systems:
A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail; a real-time system functions correctly only if it returns the correct result within its time limit. Real-time systems are of two types:
i) Hard real-time systems
ii) Soft real-time systems
A hard real-time system guarantees that critical tasks complete on time. In such a system all delays must be bounded, the use of secondary storage is extremely limited, and most advanced operating-system features are absent. Soft real-time systems are less restrictive than hard real-time systems: a critical real-time task gets priority over other tasks. Soft real time is easier to achieve and can be mixed with other types of systems, but such systems have limited applications compared with hard real-time systems. They are useful in multimedia, virtual reality and advanced scientific projects.

6) Handheld Systems:
Handheld systems include personal digital assistants (PDAs) as well as cellular phones. These devices have small size, little memory, slow processors and small screens. Because of the small amount of memory, the operating system and the applications must manage memory efficiently. Faster processors are not used in these devices because they would require more power and hence more frequent recharging or replacement of the battery, so the system is designed to use the processor efficiently. Since the displays of these devices are very small, reading or browsing web pages is difficult.

7) Clustered Systems:
Clustered systems are composed of two or more individual systems coupled together. Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes, and each node can monitor one or more of the other nodes. If a machine fails, a monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The most common forms of clustering are asymmetric clustering and symmetric clustering. In asymmetric clustering one machine is in hot-standby mode while the other runs the applications; the hot-standby machine only monitors the active server, and if that server fails the hot-standby machine becomes the active server.
In symmetric clustering two or more hosts run applications and also monitor each other. This mode is more efficient than asymmetric clustering.

Unit - II
OPERATING SYSTEM STRUCTURE

Process Management:
- A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task.
- The operating system is responsible for the following activities in connection with process management:
I. Process creation and deletion.
II. Process suspension and resumption.
III. Provision of mechanisms for:
    a) Process synchronization
    b) Process communication and handling of deadlock
A process is the unit of work in a system. Such a system consists of a collection of processes, some of which are operating-system processes and the rest of which are user processes; all these processes can execute concurrently by multiplexing the CPU among them.

Main Memory Management:
- Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and the I/O devices.
- Main memory is a volatile storage device; it loses its contents in case of system failure.
- The operating system is responsible for the following activities in connection with memory management:
I. Keeping track of which parts of memory are currently being used and by whom.
II. Deciding which processes to load when memory space becomes available.
III. Allocating and de-allocating memory space as needed.

File Management:
File management is one of the most visible components of an operating system. For convenient use of the computer system, the operating system provides a uniform logical view of information storage: it hides the physical properties of its storage devices behind a logical storage unit called a file.
- A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.
- The operating system is responsible for the following activities in connection with file management:
I. File creation and deletion.
II. Directory creation and deletion.
III. Support of primitives for manipulating files and directories.
IV. Mapping files onto secondary storage.
V. File backup on stable (non-volatile) storage media.

Input Output System Management:
One of the purposes of the operating system is to hide the peculiarities of the hardware devices from the user; only the device driver knows the specifics of the device to which it is assigned. In UNIX the specifics of I/O devices are hidden from the bulk of the operating system by the I/O subsystem.
- The I/O system consists of:
I. A buffer-caching system.
II. A general device-driver interface.
III. Drivers for specific hardware devices.

Secondary Storage Management:
- Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.
- Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.
- The operating system is responsible for the following activities in connection with disk management:
I. Free-space management.
II. Storage allocation.
III. Disk scheduling.

Command Interpreter System:
One of the most important system programs for an operating system is the command interpreter, which is the interface between the user and the operating system. Some operating systems include the command interpreter in the kernel; others, such as UNIX and MS-DOS, treat the command interpreter as a special program that runs when a job is initiated. When a new job is started in a batch system, a program that reads and interprets control statements is executed automatically. These programs are called command-line interpreters or shells. Many commands are given to the operating system through control statements, which deal with:
I. Process creation and management.
II. Input/output handling.
III. Secondary-storage management.
IV. Main-memory management.
V. File-system access.
VI. Protection.
VII. Networking.

Operating System Services:
An operating system provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs. The services are:
1. Program Execution: The system must be able to load a program into memory and run it. The program must be able to end its execution, either normally or abnormally.
2. Input Output Operations: A running program may require I/O, which may involve a file or an I/O device.
3. File System Manipulation: Programs need to create and delete files, and to read and write files.
4. Communication: One process may need to exchange information with another process, either on the same computer or on a different computer system. This is handled by the operating system through shared memory or message passing.
5. Error Detection: While a program is executing, errors may occur in the CPU, memory, I/O devices or the user program itself. For each type of error the operating system should take the proper action to ensure correct functioning of the system.
6. Resource Allocation: When multiple users are using a system, it is the responsibility of the operating system to allocate and de-allocate the various resources of the system.
7. Accounting: The operating system keeps track of the use of computer resources by each user. This record may be used for accounting.
8. Protection: Protection involves ensuring that all access to system resources is controlled. The system should also provide security from outsiders.

System Calls:
System calls provide an interface between a process and the operating system. These calls are generally available as assembly-language instructions, although higher-level languages such as C and C++ can replace assembly language for writing system calls. System calls arise in the following way. Consider writing a simple program that reads data from one file and copies it to another file. Once the file names are obtained, the program must open the input file and create the output file; each of these operations requires a system call. When the program tries to open the input file, it may find that no such file exists; it then displays an error message through a system call and terminates abnormally (another system call).
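The file-copy scenario above can be written almost directly in terms of system calls. The following is a minimal sketch for a POSIX-style system (open, read, write and close are the POSIX calls; the file names are only placeholders chosen for the sketch); each call shown crosses the user/kernel boundary.

    /* copyfile.c - minimal sketch of the file-copy example using POSIX system calls */
    #include <fcntl.h>     /* open, O_* flags    */
    #include <unistd.h>    /* read, write, close */
    #include <stdio.h>     /* perror             */
    #include <stdlib.h>    /* exit               */

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        int in = open("input.txt", O_RDONLY);            /* system call: open the input file   */
        if (in < 0) { perror("open input"); exit(1); }   /* error message + abnormal exit      */

        int out = open("output.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create output file */
        if (out < 0) { perror("create output"); exit(1); }

        while ((n = read(in, buf, sizeof buf)) > 0)      /* system call: read a block          */
            write(out, buf, (size_t)n);                  /* system call: write the block       */

        close(in);                                       /* system call: close both files      */
        close(out);
        return 0;                                        /* normal termination                 */
    }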
Three general methods are used to pass parameters between a running program and the operating system:
- Pass the parameters in registers.
- Store the parameters in a table in memory and pass the address of the table as a parameter in a register.
- Push (store) the parameters onto the stack by the program and pop them off the stack by the operating system.

System calls can be grouped into five categories:
1. Process control
2. File management
3. Device management
4. Information maintenance
5. Communications

1. Process Control: The following are the system calls with respect to process control:
i. End, Abort: A running process may end normally, or due to an error condition the process may be aborted.
ii. Load, Execute: A process may want to load and execute another program.
iii. Create process, Terminate process: In a multiprogramming environment new processes are created as well as terminated.
iv. Get process attributes, Set process attributes: When several processes are executing we may want to control their execution. This control requires the ability to determine and reset the attributes of a process.
v. Wait for time: After creating new processes, the parent process may need to wait for them (the child processes) to finish their execution.
vi. Wait event, Signal event: When processes share data, a particular process may wait for a certain amount of time or for some specific event to be signalled.
vii. Allocate memory, Free memory: When a process is created or loaded it is allocated memory space. When the process completes its execution it is destroyed by the operating system and its memory space is freed.
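On a UNIX-like system several of these process-control calls map onto fork, exec, wait and exit. The sketch below is illustrative only; the program being launched (/bin/ls) is just an example.

    /* spawn.c - sketch of create-process / load-execute / wait / end */
    #include <sys/wait.h>   /* wait          */
    #include <unistd.h>     /* fork, execlp  */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        pid_t pid = fork();                     /* create process (child is a duplicate)    */
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {                         /* child: load and execute another program  */
            execlp("/bin/ls", "ls", (char *)NULL);
            perror("execlp");                   /* reached only if exec fails               */
            exit(1);                            /* abort                                    */
        }

        int status;
        wait(&status);                          /* parent waits for the child to finish     */
        printf("child finished with status %d\n", WEXITSTATUS(status));
        return 0;                               /* normal end                               */
    }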
2. File Management: The system calls with respect to file management are:
i. Create file, Delete file
ii. Open file, Close file
iii. Read, Write, Reposition the file
iv. Get file attributes, Set file attributes
We need to create and delete files, for which system calls are generated. After creating a file we need to open it, perform read or write operations, or reposition it; finally we need to close the file. Each of these operations requires a system call. The various file attributes, such as the file name, file type, protection codes and accounting information, can be read and changed using the get-file-attributes and set-file-attributes system calls.

3. Device Management: The system calls related to device management are:
i. Request device, Release device
ii. Read, Write, Reposition
iii. Get device attributes, Set device attributes
iv. Logically attach or detach devices
A process may need some resources; if the resources are available they can be granted, otherwise the request fails. After getting a resource the process uses it and finally releases it. We can also get and set device attributes through system calls.

4. Information Maintenance: The system calls related to information maintenance are:
i. Get time or date, Set time or date
ii. Get system data, Set system data
iii. Get process, file or device attributes
iv. Set process, file or device attributes
We can get the current time and date and set them through system calls. We can also obtain information such as the number of current users, the operating-system version and the amount of free memory, and we can get and set the attributes of processes, through system calls.
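On a POSIX system a few of the calls listed above have direct counterparts; for example stat returns file attributes, time returns the current time, and getpid returns a process attribute. A minimal sketch, assuming such a system (the file name is a placeholder):

    /* attrs.c - sketch of "get file attributes" and information-maintenance calls (POSIX) */
    #include <sys/stat.h>   /* stat        */
    #include <unistd.h>     /* getpid      */
    #include <time.h>       /* time, ctime */
    #include <stdio.h>

    int main(void)
    {
        struct stat st;
        if (stat("input.txt", &st) == 0)                     /* get file attributes   */
            printf("size=%lld bytes  mode=%o\n",
                   (long long)st.st_size, (unsigned)(st.st_mode & 0777));

        time_t now = time(NULL);                             /* get time or date      */
        printf("time: %s", ctime(&now));

        printf("pid : %d\n", (int)getpid());                 /* get process attribute */
        return 0;
    }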
5. Communication: The system calls related to communication are:
I. Create, Delete communication connection
II. Send, Receive messages
III. Transfer status information
IV. Attach or Detach remote devices

There are two common models of communication:
1) The message-passing model
2) The shared-memory model

In the message-passing model the two processes exchange messages through the kernel; in the shared-memory model the processes establish a region of memory that both of them can read and write directly.

In the message-passing model, information is exchanged through an inter-process communication facility provided by the operating system. Before communication can take place, a connection must be opened, and the name of the other communicator must be known. After identification, the identifiers are passed to the general-purpose open and close calls provided by the file system, or to specific open-connection and close-connection system calls, depending on the system. The receiving process must give its permission for communication to take place. Once the connection is established, the processes exchange messages using the read-message and write-message system calls. The close-connection call terminates the communication.
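As one concrete illustration of message passing, the sketch below uses a POSIX pipe between a parent and a child process; the pipe plays the role of the communication link, and write/read play the role of send-message/receive-message. This is only one possible realization, not the only one.

    /* msgpipe.c - message passing between related processes via a POSIX pipe */
    #include <unistd.h>    /* pipe, fork, read, write, close */
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        int fd[2];
        pipe(fd);                                  /* create the communication link */

        if (fork() == 0) {                         /* child: the sender             */
            const char *msg = "hello from child";
            close(fd[0]);
            write(fd[1], msg, strlen(msg) + 1);    /* send message                  */
            close(fd[1]);
            _exit(0);
        }

        char buf[64];                              /* parent: the receiver          */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);              /* blocking receive              */
        printf("parent received: %s\n", buf);
        close(fd[0]);
        return 0;
    }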
In the shared-memory model, processes use memory-map system calls to gain access to a region of memory owned by another process. The processes may then exchange information by reading and writing data in these shared areas; the form of the data and the locations are determined by the processes themselves and are not under the operating system's control. Message passing is useful when smaller amounts of data need to be exchanged. Shared memory allows maximum speed and convenience of communication, but it raises problems such as protection and synchronization. (A minimal sketch of a shared-memory region appears after the list of system-program categories below.)

System Programs:
System programs provide a convenient environment for program development and execution. They can be divided into the following categories:
1. File Management: These programs create, delete, copy, rename and print files, and otherwise manipulate files and directories.
2. Status Information: Some programs provide information about the system, such as the date, time, available memory, number of users, and so on.
3. File Modification: Several text editors may be available to create and modify the contents of files stored on disk or tape.
4. Programming Language Support: Compilers, assemblers and interpreters are provided to the user along with the operating system.
5. Program Loading and Execution: Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors and overlay loaders.
6. Communication: These programs provide the mechanism for creating virtual connections among processes, users and computer systems.
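The shared-memory sketch promised above: it maps an anonymous shared region with mmap on a Linux/POSIX system and lets a parent and child communicate through it. This is a simplified, related-process variant (unrelated processes would typically use a named region instead); the message text is arbitrary.

    /* shmdemo.c - parent and child communicating through a shared memory region */
    #include <sys/mman.h>   /* mmap, munmap */
    #include <sys/wait.h>   /* wait         */
    #include <unistd.h>     /* fork         */
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        /* region visible to both parent and child after fork() */
        char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_SHARED | MAP_ANONYMOUS, -1, 0);

        if (fork() == 0) {                       /* child writes into the shared area */
            strcpy(shared, "data placed in shared memory by the child");
            _exit(0);
        }

        wait(NULL);                              /* parent waits, then reads directly */
        printf("parent read: %s\n", shared);
        munmap(shared, 4096);
        return 0;
    }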
System Structure:
1. Layered Approach: In the layered approach the operating system is broken up into a number of layers, each built on top of lower layers. The bottom layer is the hardware and the highest layer is the user interface. A typical operating-system layer consists of data structures and a set of routines that can be invoked by higher-level layers. The main advantage of the layered approach is modularity: the layers are selected so that each layer uses the functions and services of only lower-level layers, which makes debugging much easier. The design and implementation of the system are simplified when the system is broken down into layers. The major difficulty with the layered approach is the careful definition of the layers, because a layer can use only the layers below it. Another difficulty is that layered implementations tend to be less efficient than other approaches.

2. Microkernel Approach: As the UNIX operating system expanded, the kernel became large and difficult to manage. Hence an approach called the microkernel approach was introduced, which modularizes the kernel. This method structures the operating system by removing all non-essential components from the kernel and implementing them as system- and user-level programs, resulting in a smaller kernel. The main function of the microkernel is to provide a communication facility between the client program and the various services that run in user space; this communication is provided by message passing. The benefits of the microkernel approach are that the operating system can be easily extended: new services are added in user space, so the kernel need not be modified, and because the microkernel is small, the resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, because most services run as user processes rather than kernel processes; if a service fails, the rest of the operating system remains intact.

3. System Design and Implementation:
i. Design Goals: The first problem in designing a system is to define its goals and specifications. The requirements can be divided into two basic groups. From the user's point of view the system should be convenient to use, easy to learn, reliable, safe and fast. From the designer's point of view the system should be easy to design, implement and maintain.
ii. Mechanisms and Policies: Mechanisms determine how to do something, while policies determine what will be done. Policies may change from place to place and over time. A general mechanism is more desirable, because a change in policy then requires redefining only certain parameters of the system.
iii. Implementation: After an operating system is designed, it must be implemented, either in assembly language or in a higher-level language such as C or C++. The advantages of using a higher-level language are:
- The code can be written faster and is more compact.
- It is easier to port to other hardware. For example, MS-DOS was written in assembly language and hence is available only for the Intel family of processors, while the UNIX operating system, which was written in C, is available on different processors such as Intel, Motorola and UltraSPARC.
Unit - III
PROCESS MANAGEMENT

A process can be defined as a program in execution. The two essential elements of a process are the program code and a set of data associated with that code. At any given point in time, while the program is executing, the process can be characterized by a number of elements, recorded in its process control block (PCB). The elements of the PCB are:
I. Identifier: Every process has a unique identifier to differentiate it from other processes.
II. State: It records the current state of the process.
III. Priority: It gives the priority level of the process.
IV. Program Counter: It holds the address of the next instruction to be executed.
V. Memory Pointers: These point to the memory locations containing the program code and data.
VI. Context Data: These are the data present in the registers of the processor while the process is executing.
VII. Input Output Status Information: It includes pending I/O requests, the I/O devices assigned to the process, and so on.
VIII. Accounting Information: It includes the amount of processor time and clock time used, time limits, and so on.
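Purely as an illustration (this is not the layout used by any particular operating system), the PCB elements listed above could be collected into a C structure along the following lines; the field sizes are arbitrary choices for the sketch.

    /* pcb.h - illustrative process control block, mirroring the elements listed above */
    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, BLOCKED, EXIT };

    struct pcb {
        int              pid;             /* I.   identifier                       */
        enum proc_state  state;           /* II.  current state of the process     */
        int              priority;        /* III. priority level                   */
        uint64_t         program_counter; /* IV.  address of the next instruction  */
        void            *code, *data;     /* V.   memory pointers                  */
        uint64_t         registers[16];   /* VI.  context data (saved registers)   */
        int              open_devices[8]; /* VII. I/O status information           */
        uint64_t         cpu_time_used;   /* VIII. accounting information          */
    };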
Process States:

Two-State Process Model
In the simplest model a process is either being executed by the processor or it is not; a process may therefore be in one of two states, Running or Not Running. When the operating system creates a new process, it enters that process into the system in the Not Running state. From time to time the currently running process is interrupted and the operating system selects another process to run; the first process switches from the Running state to the Not Running state, while the other moves from Not Running to Running.

Reasons for process creation:
1) New batch job: The operating system is provided with a batch job-control stream; when it is ready to take on new work, it reads the next sequence of job-control commands.
2) Interactive login: A user at a terminal logs on to the system.
3) Created by the operating system to provide a service: The operating system can create a process on behalf of a user program to perform a function for it.
4) Spawned by an existing process: A user program can create a number of processes, for example for the purpose of modularity.

Reasons for process termination:
1. Normal completion.
2. Time limit exceeded.
3. Memory unavailable.
4. Bounds violation.
5. Protection error.
6. Arithmetic error.
7. Time overrun.
8. Input/output failure.
9. Invalid instruction.
10. Privileged instruction.
11. Data misuse.
12. Operating system intervention.
13. Parent termination.
14. Parent request.

Five-State Process Model
The five states of a process in this model are:
1. New: The process has just been created and has not yet been admitted to the pool (queue) of executable processes by the operating system.
2. Ready: The process is prepared to execute and is waiting for the processor.
3. Running: The process is currently being executed.
4. Blocked: The process cannot execute until some event occurs (such as the completion of an I/O operation).
5. Exit: The process has been released by the operating system, either because it halted or because it was aborted for some reason.

POSSIBLE TRANSITIONS: The following are the possible transitions from one state to another:
I. Null -> New: A new process is created to execute a program.
II. New -> Ready: The operating system moves a process from the New state to the Ready state when memory space is available, or when there is room for a new process so as to keep the number of processes roughly constant.
III. Ready -> Running: A process moves from the Ready state to the Running state when it is dispatched to the processor; the operating system selects a particular process to run.
IV. Running -> Exit: The currently running process is terminated or aborted.
V. Running -> Ready: The most common reasons for this transition are:
a. The process exceeds its time limit.
b. The currently running process is preempted due to the arrival of a higher-priority process in the ready queue.
c. The process may itself release control of the processor.
VI. Running -> Blocked: A process is put in the Blocked state if it requests something for which it must wait. For example, the process may request a service from the operating system that the operating system is not prepared to provide immediately, or it may wait for some I/O operation to complete.
VII. Blocked -> Ready: A process in the Blocked state is moved to the Ready state when the event for which it has been waiting occurs.
VIII. Ready -> Exit: A parent may terminate a child process at any time; also, if a parent terminates, all child processes of that parent terminate.
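Purely as an illustration of the model (not any real scheduler's code), the five states and the legal transitions listed above can be captured in a small C table; the Null pseudo-state that precedes creation is omitted from the table.

    /* states.c - five-state process model: states and an is-transition-legal check */
    #include <stdio.h>
    #include <stdbool.h>

    enum state { S_NEW, S_READY, S_RUNNING, S_BLOCKED, S_EXIT, N_STATES };

    /* legal[from][to] is true exactly for the transitions listed above */
    static const bool legal[N_STATES][N_STATES] = {
        /*               NEW    READY  RUN    BLOCK  EXIT  */
        /* NEW     */ { false, true,  false, false, false },  /* II: New -> Ready      */
        /* READY   */ { false, false, true,  false, true  },  /* III, VIII             */
        /* RUNNING */ { false, true,  false, true,  true  },  /* V, VI, IV             */
        /* BLOCKED */ { false, true,  false, false, false },  /* VII: Blocked -> Ready */
        /* EXIT    */ { false, false, false, false, false },
    };

    int main(void)
    {
        printf("Running -> Blocked legal? %s\n",
               legal[S_RUNNING][S_BLOCKED] ? "yes" : "no");   /* prints: yes */
        printf("Blocked -> Running legal? %s\n",
               legal[S_BLOCKED][S_RUNNING] ? "yes" : "no");   /* prints: no  */
        return 0;
    }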
Process Description
Consider a system with a number of processes, each of which needs certain resources for its execution. For example, process P1 is running and has control of two I/O devices and part of main memory; process P2 is also in main memory but is blocked, waiting for an I/O device; process Pn has been swapped out and is suspended. The operating system controls the processes and manages the resources on their behalf using a set of control structures, which are divided into four categories: memory tables, I/O tables, file tables and process tables.

Memory Tables: Memory tables are used to keep track of both main memory and virtual memory. They must include the following information:
i. The allocation of main memory to processes.
ii. The allocation of secondary memory to processes.
iii. Any protection attributes of blocks of main or virtual memory.
iv. Any information needed to manage virtual memory.

Input Output Tables: I/O tables are used by the operating system to manage the I/O devices of the computer system. At any given time an I/O device may be available or assigned; the operating system must know the status of each I/O operation and the location in main memory being used for the transfer.

File Tables: The operating system also maintains file tables, which provide information about the existence of files, their location on secondary memory, their current status and other attributes.

Process Tables: The operating system must maintain process tables in order to manage and control processes. To do so it must know where each process is located and the attributes of the process. The process attributes, which together form the process control block, are grouped into three categories:
1) Process identification.
2) Processor state information.
3) Process control information.

1. Process Identification:
Identifiers: Numeric identifiers stored in the process control block include:
i) The identifier of this process.
ii) The identifier of the parent process.
iii) The user identifier.

2. Processor State Information:
i) User-visible registers: registers available to the user program; there may be about 8 to 32 of these.
ii) Control and status registers: registers used to control the operation of the processor, including the program counter, condition codes and status information.
iii) Stack pointers: each process has one or more LIFO system stacks associated with it, and a pointer to the top of each stack.

3. Process Control Information:
i) Scheduling and state information: information needed by the operating system to perform scheduling, such as the process state, priority, scheduling-related information and the event being awaited.
ii) Data structuring: a process may be linked to other processes in a queue, ring or some other structure.
iii) Inter-process communication: various flags, signals and messages may be associated with communication between two independent processes.
iv) Process privileges: processes are granted privileges in terms of the memory that may be accessed and the types of instructions that may be executed.
v) Memory management: pointers to segment and page tables.
vi) Resource ownership and utilization: the resources controlled by the process may be indicated.

Operations on Processes:
The processes in a system can execute concurrently, and they must be created and deleted dynamically. Hence the operating system must provide mechanisms for process creation and termination.

Process Creation: A process may create several new processes through a create-process system call. The creating process is called the parent process, while the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes. When a process creates a new process, two possibilities exist in terms of execution:
i) The parent continues to execute concurrently with its children.
ii) The parent waits until some or all of its children have terminated.
There are also two possibilities for the new process's program:
(1) The child process is a duplicate of the parent process.
(2) The child process has a new program loaded into it.

Process Termination: A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit system call. At that point the process may return data to its parent process through the wait system call. All the resources of the process, including memory, open files and I/O buffers, are de-allocated by the operating system. A parent may terminate its children for the following reasons:
1) The child has exceeded its usage of some of the resources allocated to it.
2) The task assigned to the child is no longer required.
3) The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

Co-operating Processes:
A process is a co-operating process if it can affect or be affected by other processes executing in the system. Co-operating processes offer several advantages:
- Information sharing: Several users may be interested in the same piece of information; we must provide an environment that allows concurrent access to such resources.
- Computation speed-up: We can break a task into smaller subtasks, each of which executes in parallel with the others.
- Modularity: We can construct the system by dividing the system functions into separate processes.
- Convenience: A user may have many tasks on which to work at one time (for example editing, printing and compiling).

Inter-Process Communication (IPC):
An inter-process communication facility is the means by which processes communicate among themselves. It provides a mechanism that allows processes to synchronize their actions without sharing the same address space, and it is particularly useful in a distributed environment. Inter-process communication is best provided by a message-passing system. The function of a message system is to allow processes to communicate with one another without any shared memory; communication among user processes is achieved through the passing of messages. An IPC facility provides at least two operations, send and receive. If processes P and Q want to communicate, they must send messages to and receive messages from each other through a communication link. There are several methods for logically implementing a link and the send/receive operations.

a) Direct or Indirect Communication:
Direct communication: With direct communication each process that wants to communicate must name the receiver or sender of the communication, as follows:
    send(P, message)    - send a message to process P
    receive(Q, message) - receive a message from process Q
With direct communication exactly one link exists between each pair of processes.

Indirect communication: With indirect communication, messages are sent to and received from mailboxes (also called ports), into which messages can be placed and from which they can be removed. Two processes can communicate only if they share a mailbox. Communication is done in the following way:
    send(A, message)    - send a message to mailbox A
    receive(A, message) - receive a message from mailbox A
With indirect communication a link may be associated with more than two processes, and more than one link may exist between each pair of communicating processes. A mailbox may be owned either by a process or by the operating system.

b) Synchronization: Communication between processes through message passing may be either blocking or non-blocking (synchronous or asynchronous):
i. Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
ii. Non-blocking send: The sending process sends the message and resumes operation.
iii. Blocking receive: The receiver blocks until a message is available.
iv. Non-blocking receive: The receiver retrieves either a valid message or a null.

c) Buffering: Buffering can be implemented in three ways:
i. Zero capacity: The queue has maximum length zero, so the link cannot have any messages waiting in it.
ii. Bounded capacity: The queue has finite length. If the link is full, the sender must block until space is available in the queue.
iii. Unbounded capacity: The queue has effectively infinite length, so the sender never blocks.

Mutual Exclusion Using Messages:

    const int n = /* number of processes */;

    void P(int i)
    {
        message msg;
        while (true) {
            receive(box, msg);   /* take the token message from the mailbox */
            /* critical section */
            send(box, msg);      /* put the token back into the mailbox     */
            /* remainder */
        }
    }

    void main()
    {
        create_mailbox(box);
        send(box, null);         /* the mailbox initially holds one message */
        parbegin(P(1), P(2), ..., P(n));
    }

The algorithm above shows how message passing can be used to obtain mutual exclusion. A set of concurrent processes share a mailbox, box, which all of them can use to send and receive messages. The mailbox is initialized to contain a single message with null content. A process wishing to enter its critical section first attempts to receive a message; if the mailbox is empty, the process is blocked. Once a process gets the message, it performs its critical section and then places the message back into the mailbox. If more than one process performs the receive operation concurrently, two cases arise:
- If there is a message, it is delivered to only one process and the others are blocked.
- If the message queue is empty, all the processes are blocked; when a message becomes available, only one blocked process is activated and given the message.
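The same token-in-a-mailbox idea can be sketched with POSIX message queues (mq_open, mq_send, mq_receive). The queue name and sizes below are arbitrary choices for the sketch, and on Linux the program would typically be linked with -lrt; this is an illustration of the technique, not the only way to realize it.

    /* mutex_mq.c - mutual exclusion via a one-message POSIX message queue (the "mailbox") */
    #include <mqueue.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static mqd_t box;

    static void enter_critical(void)
    {
        char token;
        mq_receive(box, &token, 1, NULL);      /* blocks until the token is available  */
    }

    static void leave_critical(void)
    {
        mq_send(box, "T", 1, 0);               /* return the token to the mailbox      */
    }

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 1, .mq_msgsize = 1 };
        box = mq_open("/mutex_box", O_CREAT | O_RDWR, 0600, &attr);
        mq_send(box, "T", 1, 0);               /* mailbox starts with one token message */

        enter_critical();
        printf("pid %d is inside its critical section\n", (int)getpid());
        leave_critical();

        mq_close(box);
        mq_unlink("/mutex_box");
        return 0;
    }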
THREADS:
A thread is a lightweight process; it is a basic unit of CPU utilization and consists of a thread ID, a program counter, a register set and a stack. All the threads belonging to a process may share its code section, data section and other operating-system resources. If a process has multiple threads of control, it can perform more than one task at a time. The advantages of multithreading are:
I. Responsiveness: Multithreading may allow a program to continue running even if part of it is blocked performing a lengthy operation.
II. Resource Sharing: Threads share the memory and the resources of the process to which they belong.
III. Economy: Creating, maintaining and switching a process is costly compared with creating, maintaining and switching a thread. For example, in Solaris creating a process is about 30 times slower than creating a thread.
IV. Utilization of Multiprocessor Architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor.
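As a small illustration of threads sharing the data of their process, the sketch below creates two POSIX threads that both read the same variable owned by the process; the thread bodies are, of course, only an example.

    /* threads.c - two POSIX threads sharing the data of one process */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_value = 42;               /* data section, visible to all threads */

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("thread %ld sees shared_value = %d\n", id, shared_value);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);   /* cheap compared with fork() */
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }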
Types of Threads:
1. User threads.
2. Kernel threads.

User Threads: User threads are supported above the kernel and are implemented at the user level. All thread creation and scheduling are done in user space without kernel intervention; hence user-level threads are fast to create and easy to manage. The drawback is that if the kernel is single-threaded, any user-level thread performing a blocking system call will cause the entire process to block.

Kernel Threads: Kernel threads are supported directly by the operating system. The kernel performs thread creation, scheduling and management in kernel space. For this reason they are generally slower to create and harder to manage than user threads. However, since the kernel is managing the threads, if one thread performs a blocking system call the kernel can schedule another thread for execution.

Multithreading Models: The three common multithreading models are:
1. Many to One.
2. One to One.
3. Many to Many.

Many to One: The many-to-one model maps many user-level threads to one kernel thread. Thread management is done in user space, so it is efficient, but the disadvantage is that the entire process will block if a thread makes a blocking system call. Also, since only one thread can access the kernel at a time, multiple threads are unable to run in parallel on a multiprocessor.

One to One: The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model: if a thread makes a blocking system call, only that thread is blocked while the others continue to execute, and in a multiprocessor environment multiple threads can run in parallel. The only drawback of this model is that creating a user thread requires creating the corresponding kernel thread.

Many to Many: The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may depend on the particular application or the particular machine. This model combines the advantages of the other two: developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

Threading Issues:
I. The fork and exec system calls: If one thread in a program calls fork, the new process may duplicate all the threads, or the new process may be single-threaded. If a thread invokes the exec system call, the program specified as its parameter replaces the entire process, including all its threads.
II. Cancellation: Thread cancellation is the task of terminating a thread before it has completed. A thread that is to be cancelled is called the target thread. Thread cancellation occurs in two different ways:
i) Asynchronous cancellation: One thread immediately terminates the target thread.
ii) Deferred cancellation: The target thread periodically checks whether it should terminate.
III. Signal handling: A signal is used to notify a process that a particular event has occurred. A signal is generated by the occurrence of a particular event; whenever it is generated, it must be delivered to a process, and once it is delivered, it must be handled. A signal can be handled by one of two handlers:
I. A default signal handler.
II. A user-defined signal handler.
In a multithreaded program there are several options for delivering a signal:
a) Deliver the signal to the thread to which the signal applies.
b) Deliver the signal to every thread in the process.
c) Deliver the signal to certain threads in the process.
d) Assign a specific thread to receive all signals for the process.
IV. Thread pools: The general idea behind a thread pool is to create a number of threads at process start-up and place them in a pool, where they sit and wait for work. The benefits of thread pools are:
i) Servicing a request with an existing thread is faster than creating a new thread for it.
ii) A thread pool limits the number of threads that exist at any one time.
V. Thread-specific data: Threads belonging to a process share the data of the process. However, each thread may need its own copy of certain data; such data is called thread-specific data.
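POSIX threads expose thread-specific data through keys (pthread_key_create, pthread_setspecific, pthread_getspecific). The sketch below gives each thread its own private integer; the particular values are arbitrary.

    /* tsd.c - thread-specific data with POSIX keys: each thread keeps its own copy */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static pthread_key_t key;                    /* one key, one private value per thread */

    static void *worker(void *arg)
    {
        int *mine = malloc(sizeof *mine);
        *mine = (int)(long)arg * 100;            /* this thread's private data            */
        pthread_setspecific(key, mine);

        int *back = pthread_getspecific(key);    /* always returns this thread's copy     */
        printf("thread %ld: thread-specific value = %d\n", (long)arg, *back);
        free(mine);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_key_create(&key, NULL);
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(key);
        return 0;
    }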
CPU SCHEDULING
- Basic concepts.
- Scheduling criteria.
- Scheduling algorithms.
- Multiprogramming algorithms.

CPU Scheduler:
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the CPU scheduler (short-term scheduler). CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.
Scheduling under circumstances 1 and 4 is called non-preemptive; scheduling under circumstances 2 and 3 is called preemptive.

Dispatcher:
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves:
i. Switching context.
ii. Switching to user mode.
iii. Jumping to the proper location in the user program to restart that program.
The dispatcher should be as fast as possible because it is invoked during every process switch. The time it takes the dispatcher to stop one process and start another is known as dispatch latency.

Scheduling Criteria:
The criteria used to compare the various scheduling algorithms are:
1. CPU utilization: The CPU must be kept as busy as possible; CPU utilization may range from 0 to 100%.
2. Throughput: The number of processes that are completed per unit time.
3. Turnaround time: The total time from submission to completion of a process: the time spent waiting to get into memory, waiting in the ready queue, executing, and doing I/O.
4. Waiting time: The amount of time a process has spent waiting in the ready queue.
5. Response time: The amount of time from when a request is submitted until the first response is produced, not until the output is complete (used for time-sharing environments).
Scheduling Algorithms:

1) First Come First Served (FCFS) Scheduling:
FCFS is a purely non-preemptive algorithm. In this scheme the process that requests the CPU first is allocated the CPU first. The FCFS policy is easily implemented with a FIFO queue, and the code for FCFS scheduling is simple to write and understand. The disadvantage of the FCFS policy is that the average waiting time is often quite high compared with other algorithms.

Example: Consider the following processes, which arrive in the order P1, P2, P3, with CPU burst times given in milliseconds:

    Process    Burst Time
    P1         24
    P2         3
    P3         3

The Gantt chart for the schedule is:

    P1 (0-24) | P2 (24-27) | P3 (27-30)

Waiting time for P1 = 0; P2 = 24; P3 = 27.
Average waiting time = (0 + 24 + 27)/3 = 17 milliseconds.

Convoy effect: In the FCFS scheme, if we have one big CPU-bound process and many small I/O-bound processes, all the small processes end up waiting for the one big process to release the CPU. This results in lower CPU utilization and is called the convoy effect. The FCFS algorithm is not suitable for time-sharing systems because it is non-preemptive.
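The FCFS waiting-time arithmetic from the example above is easy to automate. The sketch below assumes all processes arrive at time 0 in the given order and reproduces the average of 17 ms:

    /* fcfs.c - average waiting time under FCFS for processes arriving at time 0 */
    #include <stdio.h>

    int main(void)
    {
        int burst[] = { 24, 3, 3 };                    /* P1, P2, P3 from the example */
        int n = sizeof burst / sizeof burst[0];
        int elapsed = 0, total_wait = 0;

        for (int i = 0; i < n; i++) {
            printf("P%d waits %d ms\n", i + 1, elapsed);
            total_wait += elapsed;                     /* waiting time = start time   */
            elapsed += burst[i];                       /* process runs to completion  */
        }
        printf("average waiting time = %.2f ms\n", (double)total_wait / n);  /* 17.00 */
        return 0;
    }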
2) Shortest Job First (SJF) Scheduling:
In this scheme, when the CPU is available it is assigned to the process that has the smallest next CPU burst. If two processes have the same next CPU burst, FCFS is used to break the tie. The SJF algorithm is optimal in the sense that it gives the minimum average waiting time for a given set of processes. The real difficulty with SJF is knowing the length of the next CPU request. SJF may be either preemptive or non-preemptive: a preemptive SJF algorithm preempts the currently executing process if a new process arrives in the ready queue with a CPU burst shorter than what is left of the currently executing process, whereas a non-preemptive SJF algorithm allows the currently running process to finish its CPU burst.

Example: Consider the following processes:

    Process    Arrival Time    Burst Time
    P1         0.0             7
    P2         2.0             4
    P3         4.0             1
    P4         5.0             4

Non-preemptive SJF Gantt chart:

    P1 (0-7) | P3 (7-8) | P2 (8-12) | P4 (12-16)
    Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Preemptive SJF Gantt chart:

    P1 (0-2) | P2 (2-4) | P3 (4-5) | P2 (5-7) | P4 (7-11) | P1 (11-16)
    Average waiting time = (9 + 1 + 0 + 2)/4 = 3

3) Priority Scheduling:
In priority scheduling a priority is associated with each process and the CPU is allocated to the process with the highest priority. Priorities are generally drawn from some fixed range of numbers. They can be defined either internally or externally: internally defined priorities use some measurable quantity to compute the priority of a process (for example time limits, memory requirements or the number of open files), while external priorities are set by criteria such as the importance of the process, the type of the process, or the department sponsoring the work. Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with that of the running process: a preemptive priority scheduling algorithm preempts the running process if the newly arrived process has a higher priority, whereas a non-preemptive priority algorithm does not preempt the running process. The major drawback of priority scheduling is indefinite blocking (starvation). A solution to the problem of starvation is aging: a technique of gradually increasing the priority of processes that wait in the system for a long time.
• 38. Operating Systems SY-IT
3) Priority Scheduling:
In this scheme a priority is associated with each process and the CPU is allocated to the process with the highest priority. Priorities are generally drawn from some fixed range of numbers. Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity to compute the priority of a process (for example time limits, memory requirements, number of open files, etc.). Externally defined priorities are set by criteria such as the importance of the process, the type of the process, the department sponsoring the process, etc.
Priority scheduling can be either preemptive or non-preemptive. When a process arrives at the ready queue its priority is compared with that of the running process. A preemptive priority scheduling algorithm will preempt the running process if the newly arrived process has a higher priority. A non-preemptive priority scheduling algorithm will not preempt the running process.
The major drawback of priority scheduling is indefinite blocking (starvation). A solution to the problem of starvation is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.
Q. For the following set of processes find the average waiting time, considering a smaller number to mean a higher priority.
Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
Gantt chart:
P2 | P5 | P1 | P3 | P4
0    1    6    16   18   19
Waiting time of P1 = 6
Waiting time of P2 = 0
Waiting time of P3 = 16
Waiting time of P4 = 18
Waiting time of P5 = 1
Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2
4) Round Robin (RR):
The round robin scheduling algorithm is designed specially for time sharing systems. It is a purely preemptive algorithm. In this scheme every process is given a time slice (time quantum); if the process is unable to complete within the given time slice, it is preempted and another process is executed. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of one time slice. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Example of round robin with time quantum = 20:
Process   Burst Time
P1        53
P2        17
P3        68
P4        24
Gantt chart:
P1 | P2 | P3 | P4 | P1 | P3 | P4 | P1 | P3 | P3
0    20   37   57   77   97   117  121  134  154  162
Fahad Shaikh (System Administrator) Page 38
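A minimal C sketch of round robin time slicing (illustrative only), using the burst times and the 20 ms quantum from the example above; every process is assumed to arrive at time 0, as in the example.

#include <stdio.h>

int main(void)
{
    /* Burst times from the example above; all processes arrive at time 0. */
    int burst[4] = { 53, 17, 68, 24 };
    int remaining[4], n = 4, quantum = 20, time = 0, left = 4;

    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];

    /* Cycle through the ready queue, giving each process at most one quantum. */
    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("P%d runs from %d to %d\n", i + 1, time, time + slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                left--;           /* process has finished its burst */
        }
    }
    return 0;   /* the printed schedule matches the Gantt chart above */
}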
• 39. Operating Systems SY-IT
Multilevel Queue Scheduling:
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to a particular queue. Each queue has its own scheduling algorithm. For example, the interactive processes queue may use the round robin algorithm while the batch processes queue uses the FCFS algorithm.
Apart from this, there must also be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. Each queue has absolute priority over lower priority queues. For example, no process in the batch queue can run unless the system process queue, the interactive process queue and the interactive editing process queue are all empty.
Multilevel Feedback Queue Scheduling:
Fahad Shaikh (System Administrator) Page 39
  • 40. Operating Systems SY-IT Multilevel feedback queue scheduling allows a process to move between the queues. Separate queues are formed on the basis of CPU burst time. If a process uses too much CPU time, it is moved to a low priority queue. If a process waits too long in a low priority queue it is moved to a higher priority queue. For example: Consider the above diagram, a process entering the ready queue, it is put in queue 0. A process in queue 0 is given in a time slice (quantum) of 8ms. If it does not finishes within the given time, it is moved at the end of queue 1 and so on. A multilevel feedback queue scheduling is defined by the following parameters.  Number of queues.  Scheduling algorithms for each queue.  Method used to determine when to upgrade a process.  Method used to determine when to demote a process.  Method used to determine which queue a process will enter when that process needs service. PROCESS SYNCHRONIZATION Definitions: 1. Critical section: A section of code within a process which cannot be executed while one process is executing it is called as critical section. 2. Deadlock: A situation in which two or more processes are unable to proceed and wait for each other is called as deadlock. 3. Live lock: A situation in which two or more processes continuously change their state in response to changes in the other processes without doing any useful work. 4. Mutual exclusion: The requirement that only one process execute the critical section. 5. Race condition: A situation in which processes read and write a shared data item and the final result depend on the relative timing of their execution. 6. Starvation: A situation in which a run able process is not executed for an indefinite period. Fahad Shaikh (System Administrator) Page 40
• 41. Operating Systems SY-IT
Principles of concurrency: Concurrency has to deal with the following issues: communication among processes, sharing of resources, synchronization of multiple processes and allocation of processor time. Concurrency arises in three different contexts.
i. Multiple applications.
ii. Structured applications.
iii. Operating system structure.
In a single processor multiprogramming system, processes are interleaved in time but still appear to be executing simultaneously. In a multiprocessor system the processes are overlapped, that is, two or more processes execute at the same time. In both situations the following difficulties arise.
a) Sharing of global resources.
b) Allocation of resources.
c) Locating a programming error.
Consider a simple example in a uni-processor environment, where chin and chout are global variables shared by all callers:
char chin, chout;
void echo()
{
    chin = getchar();
    chout = chin;
    putchar(chout);
}
Operating System Concerns: The following are the issues which arise for the operating system in the presence of concurrency.
1. The operating system must be able to keep track of the various processes.
Fahad Shaikh (System Administrator) Page 41
• 42. Operating Systems SY-IT
2. The operating system must allocate and de-allocate various resources for each process. The resources include processor time, memory, input output devices and files.
3. The operating system must protect the data and critical resources of each process.
4. The functioning of a process and its results must be independent of the speed at which its execution is carried out relative to other processes.
Process Interaction: Processes can interact in the following ways.
• Processes unaware of each other.
• Processes indirectly aware of each other.
• Processes directly aware of each other.
Process interaction:
1. Processes unaware of each other
   Relationship: competition.
   Influence that one process has on the other: the results of one process are independent of the actions of the others; the timing of a process may be affected.
   Potential control problems: mutual exclusion, deadlock (renewable resources), starvation.
2. Processes indirectly aware of each other (e.g. through a shared object)
   Relationship: cooperation by sharing.
   Influence that one process has on the other: the results of one process may depend on information obtained from others; the timing of a process may be affected.
   Potential control problems: mutual exclusion, deadlock (renewable resources), starvation, data coherence.
3. Processes directly aware of each other (have communication primitives available to them)
   Relationship: cooperation by communication.
   Influence that one process has on the other: the results of one process may depend on information obtained from others; the timing of a process may be affected.
   Potential control problems: deadlock (consumable resources), starvation.
Fahad Shaikh (System Administrator) Page 42
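To see why the echo() routine shown on the previous page is a concern, here is a hedged illustration (not from the notes) that uses two POSIX threads to stand in for two concurrent processes sharing the global variables chin and chout. If the threads are interleaved between the assignment to chin and the call to putchar, one thread's character can overwrite the other's.

#include <stdio.h>
#include <pthread.h>

/* Shared globals, as in the echo() example above. */
char chin, chout;

void *echo_thread(void *arg)
{
    char my_char = *(char *)arg;   /* stands in for the character read by getchar() */

    chin = my_char;    /* step 1: store the "input" in the shared variable          */
    /* If the other thread runs here, it overwrites chin before we copy it.         */
    chout = chin;      /* step 2: copy to the shared output variable                */
    putchar(chout);    /* step 3: echo -- may now print the wrong character         */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    char a = 'x', b = 'y';

    pthread_create(&t1, NULL, echo_thread, &a);
    pthread_create(&t2, NULL, echo_thread, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    putchar('\n');
    /* Expected output is "xy" or "yx"; an unlucky interleaving can print "xx" or "yy". */
    return 0;
}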
• 43. Operating Systems SY-IT
Requirements For Mutual Exclusion: Any solution for enforcing mutual exclusion must satisfy the following requirements.
1. Mutual exclusion must be enforced, that is, when one process is in its critical section no other process may be in its critical section.
2. A process that halts in its non critical section must not interfere with other processes.
3. A process should not wait indefinitely for entry into its critical section.
4. When no process is in its critical section, any process which requests entry to its critical section must be granted entry without delay.
5. A process must remain inside its critical section for a finite time only.
Mutual Exclusion: Algorithmic Approaches
Algorithm 1
/* process 0 */              /* process 1 */
while (turn != 0)            while (turn != 1)
    /* do nothing */;            /* do nothing */;
/* critical section */       /* critical section */
turn = 1;                    turn = 0;
In this case the two processes share a variable turn. A process which wants to enter the critical section checks the turn variable; if the value of turn is equal to the number of the process, the process may go into the critical section. The drawbacks of this algorithm are that it relies on busy waiting and that the processes must strictly take turns; moreover, if one process fails the other is permanently blocked.
Algorithm 2
/* process 0 */
while (flag[1])
    /* do nothing */;
flag[0] = true;
/* critical section */
flag[0] = false;
/* process 1 */
while (flag[0])
    /* do nothing */;
flag[1] = true;
Fahad Shaikh (System Administrator) Page 43
• 44. Operating Systems SY-IT
/* critical section */
flag[1] = false;
In this algorithm we use a Boolean array flag. When a process wants to enter its critical section, it checks the other process's flag; if it is false, it indicates that the other process is not in its critical section. The checking process then sets its own flag to true and goes into the critical section. After leaving its critical section it resets its flag to false. This algorithm does not ensure mutual exclusion: it can happen that both processes check each other's flag, find it false, set their own flags to true and enter the critical section simultaneously.
Algorithm 3
/* process 0 */
flag[0] = true;
while (flag[1])
    /* do nothing */;
/* critical section */
flag[0] = false;
/* process 1 */
flag[1] = true;
while (flag[0])
    /* do nothing */;
/* critical section */
flag[1] = false;
In this algorithm a process which wants to enter the critical section first sets its flag to true and then checks the other process's flag; if it is not set, the process enters the critical section, and if it is set, the process waits. The drawback of this algorithm is that both processes may set their flags to true and then check each other's flag, causing deadlock. Also, if a process fails inside the critical section, the other process is blocked.
Algorithm 4
/* process 0 */
flag[0] = true;
while (flag[1]) {
    flag[0] = false;
Fahad Shaikh (System Administrator) Page 44
• 45. Operating Systems SY-IT
    /* delay */
    flag[0] = true;
}
/* critical section */
flag[0] = false;
/* process 1 */
flag[1] = true;
while (flag[0]) {
    flag[1] = false;
    /* delay */
    flag[1] = true;
}
/* critical section */
flag[1] = false;
In this algorithm it can be shown that a livelock situation may occur, in which the two processes continuously set and reset their flags without doing any useful work. If one of the processes slows down, the livelock is broken and one of the processes enters the critical section.
Peterson's Algorithm:
boolean flag[2];
int turn;
void p0()
{
    while (true) {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            /* do nothing */;
        /* critical section */
        flag[0] = false;
        /* remainder section */
    }
}
void p1()
{
    while (true) {
        flag[1] = true;
        turn = 0;
Fahad Shaikh (System Administrator) Page 45
• 46. Operating Systems SY-IT
        while (flag[0] && turn == 0)
            /* do nothing */;
        /* critical section */
        flag[1] = false;
        /* remainder section */
    }
}
void main()
{
    flag[0] = false;
    flag[1] = false;
    parbegin(p0, p1);
}
Peterson's algorithm gives a simple solution to the problem of mutual exclusion. Each process raises its own flag when it wants to enter, and the shared variable turn decides which process goes in when both want to enter at the same time. Mutual exclusion is preserved: if p0 is inside its critical section then flag[0] is true, so p1, after setting turn to 0, is held in its while loop until p0 leaves. It can be shown that Peterson's algorithm is free from deadlock and livelock, and it can easily be generalized to n processes.
Semaphores:
Two or more processes can co-operate by means of simple signals, such that a process can be forced to stop at a specific place until it has received a specific signal. Any co-ordination requirement can be satisfied by proper signals. For signaling, special variables called semaphores are used. To transmit a signal through semaphore S a process executes the primitive semSignal(S); to receive a signal through semaphore S a process executes the primitive semWait(S). To achieve the desired effect we can view the semaphore as a variable that has an integer value on which three operations are defined.
1) A semaphore may be initialized to a non negative value.
2) The semWait operation: semWait decrements the semaphore value. If the value becomes negative then the process executing the semWait is blocked, otherwise the process continues.
Fahad Shaikh (System Administrator) Page 46
• 47. Operating Systems SY-IT
3) The semSignal operation: semSignal increments the semaphore value. If the resulting value is less than or equal to 0, a process blocked by a semWait operation is unblocked.
Semaphore primitives are defined as follows (pseudocode):
struct semaphore {
    int count;
    queueType queue;
};
void semWait(semaphore s)
{
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */
        /* block this process */
    }
}
void semSignal(semaphore s)
{
    s.count++;
    if (s.count <= 0) {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}
Binary Semaphores: A binary semaphore may only take the values 0 and 1 and can be defined by the following three operations.
1. Initialization: A binary semaphore may be initialized to 0 or 1.
2. The semWaitB operation: semWaitB checks the semaphore value. If the value is 0, the process executing the semWaitB is blocked. If it is 1, the value is changed to 0 and the process continues.
3. The semSignalB operation: semSignalB checks to see whether any processes are blocked on this semaphore. If so, a process blocked by a semWaitB operation is unblocked. If no processes are blocked, the value of the semaphore is set to 1.
Fahad Shaikh (System Administrator) Page 47
• 48. Operating Systems SY-IT
Binary semaphore primitives are defined as follows (pseudocode):
struct binarySemaphore {
    enum { zero, one } value;
    queueType queue;
};
void semWaitB(binarySemaphore s)
{
    if (s.value == one) {
        s.value = zero;
    } else {
        /* place this process in s.queue */
        /* block this process */
    }
}
void semSignalB(binarySemaphore s)
{
    if (s.queue is empty()) {
        s.value = one;
    } else {
        /* remove a process P from s.queue */
        /* place process P on the ready list */
    }
}
Fahad Shaikh (System Administrator) Page 48
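As a usage sketch (added here for illustration), the standard way to enforce mutual exclusion with a semaphore initialised to 1 is to bracket the critical section with semWait and semSignal. The runnable C version below uses POSIX semaphores, where sem_wait and sem_post play the roles of semWait and semSignal; the shared counter is just a stand-in for any shared resource.

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t s;                 /* semaphore used for mutual exclusion, initialised to 1 */
int shared_counter = 0;  /* the resource the critical section protects            */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);        /* semWait: blocks while another thread is inside */
        shared_counter++;    /* critical section                               */
        sem_post(&s);        /* semSignal: wake one waiting thread, if any     */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    sem_init(&s, 0, 1);      /* initial value 1 -> binary-semaphore behaviour */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (always 200000 with the semaphore in place)\n", shared_counter);
    sem_destroy(&s);
    return 0;
}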
  • 49. Operating Systems SY-IT Unit - IV MEMORY MANAGEMENT Memory consists of a large array of words or bytes each having its own address.The CPU fetches instructions from the memory according to the value of the program counter.To improvethe utilization of CPU the computer must keep several processes in the memory.To utilize the memory many memory management schemes are proposed.Selection of a memory management schemes depends on many factors such as hardware of the system. ADDRESS BINDING A user program goes through several steps such as compiling ,loading ,linking.Addresses may be represented in different ways during these steps.Addresses in the source program are generally symbolic.A compiler will bind these addresses to relocatable addresses.The loader will inturn bind these addresses to absolute addresses.Each binding is a mapping from one address space to another. The binding of instructions and data can be done in the following ways. 1..Compile time-If it is known at compile time where the process will reside in memory ,then absolute code can be generated.If address changes at compile time the program is recompiled. 2..Load time-If it is not known at compile time where the process will reside in the memory,than the compiler must generate relocatable code. 3..Execution time-Binding delayed until run time if the process can be moved during its execution from one memory segment to another.Need hardware support for address maps.(eg.base and limit registers). LOGICAL V/S PHYSICAL ADDRESS SPACE Logical address-generated by the CPU,also referred to as virtual address. Physical address-address seen by the memory unit. Logical and physical addresses are the same in compile time and load time address binding schemes,logical(virtual)and physical addresses differ in execution time address binding scheme. Fahad Shaikh (System Administrator) Page 49
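As a rough sketch of execution-time binding (not from the notes): with dynamic relocation the hardware checks every CPU-generated logical address against a limit register and then adds a relocation register to form the physical address, as described later under contiguous allocation. The register values below are purely hypothetical.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical register values, chosen only for illustration. */
static const unsigned LIMIT_REGISTER      = 4096;   /* size of the process's address space */
static const unsigned RELOCATION_REGISTER = 14000;  /* where the process was loaded        */

/* Every CPU-generated (logical) address passes through this mapping. */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT_REGISTER) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(1);
    }
    return RELOCATION_REGISTER + logical;   /* dynamic relocation */
}

int main(void)
{
    printf("logical 0   -> physical %u\n", translate(0));    /* 14000 */
    printf("logical 346 -> physical %u\n", translate(346));  /* 14346 */
    return 0;
}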
• 50. Operating Systems SY-IT
DYNAMIC LOADING
• A routine is not loaded until it is called.
• Better memory space utilization; an unused routine is never loaded.
• Useful when large amounts of code are needed to handle infrequently occurring cases.
• No special support from the OS is required; dynamic loading is implemented through program design.
OVERLAYS
(Figure: overlays for a two-pass assembler.)
• Keep in memory only those instructions and data that are needed at any given time.
• Needed when a process is larger than the amount of memory allocated to it.
• Implemented by the user; no special support is needed from the OS, but the programming design of the overlay structure is complex.
SWAPPING
Fahad Shaikh (System Administrator) Page 50
  • 51. Operating Systems SY-IT A process needs to be in memory to be executed. A process can be swapped out temporarily from the main memory to a backing store and then brought back into main memory for continued execution. When a process completes its time slice it will be swapped with another process (in case of RR scheduling algorithm) Another type of swapping policy which is used for priority based algorithm is that is that a higher priority process arrives and wants service,the memory manager can swap out the lower priority process and swap in the higher priority process.This type of swapping is called as roll out –roll in. Swapping requires a backing store.The backing store is commonly a fast disk and is large enough to accommodate copies of all memory images for all users,and it must provide direct access to these memory images. Major part of swap time is transfer time,total transfer time is directly proportional to the amount of memory swapped. SCHEMATIC VIEW OF SWAPPING Fahad Shaikh (System Administrator) Page 51
• 52. Operating Systems SY-IT
CONTIGUOUS ALLOCATION:
The memory is usually divided into two partitions, one for the OS and the other for the user processes. We want several user processes to reside in memory at the same time, hence we need to consider how to allocate the available memory to the processes. In contiguous memory allocation each process is contained in a single contiguous section of memory.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers, and every address generated by the CPU is checked against these registers, so that the OS and other user programs cannot be modified by the running process.
(Figure: hardware support for relocation and limit registers. The logical address generated by the CPU is compared with the limit register; if it is smaller, it is added to the relocation register to form the physical address sent to memory, otherwise the hardware traps with an addressing error.)
MEMORY ALLOCATION
One of the simplest methods for memory allocation is to divide memory into several fixed sized partitions. Each partition may contain exactly one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
The OS keeps a table indicating which parts of memory are available and which are occupied. Initially all memory is available for user processes and is considered one large block of available memory, called a hole. When a process arrives and needs memory, we search for a hole large enough for this process. The set of holes is searched to determine which hole is best to allocate. The following strategies are used to select a free hole.
Fahad Shaikh (System Administrator) Page 52
• 53. Operating Systems SY-IT
1..First-fit: Allocate the first hole that is big enough. We can start searching the set of holes either from the beginning of the list or from where the previous first-fit search ended, and we can stop searching as soon as we find a free hole that is large enough.
2..Best fit: Allocate the smallest hole that is big enough. For this purpose we must search the entire list. It produces the smallest left over hole.
3..Worst fit: Allocate the largest hole. For this purpose we must search the entire list. It produces the largest left over hole. First fit and best fit are better than worst fit in terms of speed and storage utilization.
All the above strategies suffer from external fragmentation. Memory fragmentation can be internal or external.
• External fragmentation – enough total memory space exists to satisfy a request, but it is not contiguous.
• Internal fragmentation – allocated memory may be slightly larger than the requested memory; the size difference is memory internal to a partition which is not being used.
PAGING:
Paging is a memory management scheme which permits the physical address space of a process to be non contiguous. Paging avoids the problem of fitting memory chunks of different sizes into memory. In this scheme physical memory is broken into fixed sized blocks called frames, while logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store.
The following diagram gives the required paging hardware.
Fahad Shaikh (System Administrator) Page 53
  • 54. Operating Systems SY-IT Every address generated by the CPU is divided into two parts –a page number p and page offset d.The page number is used as index into a page table. The page tables contains the base address of each page in physical memory .This base address is combined with the page offset to define the physical memory address which is sent to the memory unit. The paging model of a memory is as shown below Fahad Shaikh (System Administrator) Page 54
  • 55. Operating Systems SY-IT The size of a page is a power of 2 and lies between 512 bytes and 16mb per page depending on the computer architecture. The selection of power of 2 as the page size makes the translation of logical address into a page number and page offset easy. When we use the paging scheme we have no external fragmentation. However there may be some internal fragmentations. When a process arrives in a system to be executed its size in terms of pages is examined. Each page of the process needs one frame. The first page of the process is loaded into one of the allocated frame and the frame number is put in the page table for this process. This is done for all the pages. The OS maintains a list of free frames in a data structure called the frame table as given in the diagram below. SEGMENTATION : A program may consists of a main program, subroutines, procedures, functions, etc. Each of which can be considered as a segment of variable length. Elements within a segment are identified by their offset from the beginning of the segment. Segmentation is a memory management scheme which supports this view of the memory. A logical address space is a collection of segments. Each segment has a name and a length. The addresses specify the segment name and the offset within the segment. A logical address consists of segment number and offset. Fahad Shaikh (System Administrator) Page 55
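Because the page size is a power of 2, the split of a logical address into a page number p and an offset d described in the paging section above reduces to a shift and a mask. The small C sketch below is an added illustration with made-up sizes and a hypothetical page table, not part of the original notes.

#include <stdio.h>

#define PAGE_SIZE   4096u   /* 2^12 bytes per page (illustrative) */
#define OFFSET_BITS 12u

int main(void)
{
    unsigned logical = 20503;                        /* any CPU-generated address          */
    unsigned p = logical >> OFFSET_BITS;             /* page number = address / page size  */
    unsigned d = logical & (PAGE_SIZE - 1);          /* offset      = address mod page size */

    /* A hypothetical page table: page_table[p] holds the frame number. */
    unsigned page_table[8] = { 3, 7, 0, 2, 5, 1, 6, 4 };
    unsigned frame = page_table[p];
    unsigned physical = (frame << OFFSET_BITS) | d;  /* frame base + offset */

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, p, d, physical);                 /* page 5, offset 23, physical 4119 */
    return 0;
}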
  • 56. Operating Systems SY-IT SEGMENTATION HARDWARE: A logical address consists of two parts a segment number s and an offset d. The segment number is used as an index into the segment table. The offset d of the logical address must be between O and the segment limit. If it is not we trap to the OS. If this offset is valid it is added to the segment base to produce the address in the physical memory. A particular advantage of segmentation is the association of protection with the segments. Another advantage of segmentation involves the sharing of code or data. Segmentation may cause external fragmentation when all blocks of free memory are too small to accommodate a segment. SEGMENTATION WITH PAGING: By combining segmentation and paging we can get best of both. In this model the logical address space of a process is divided into two partitions. The first partition consist of upto 8kb segments that are private to that process. The second partition consist of upto 8 kb segments which are shared among all the processes. Information about the first partition is kept in the local description table(LDT) while the information about the second partition is kept in the global descriptor table (GDT). Each entry in the LDT and GTD cosist of 8 bytes with detailed information about a particular Fahad Shaikh (System Administrator) Page 56
  • 57. Operating Systems SY-IT segment including the base location and length of that segment. The logical address consist of selector and offset, the selector is 16 bit number given as S G P 13 1 2 Where s = segment number. G= indicates whether segment is LDT of GDT P = deals with protection / for protection The offset is a 32 bit number specifying the location of the byte within the segment. It is given as Page number page offset P1 P2 D 10 10 12 VIRTUAL MEMORY Virtual memory is a technique that allows the execution of processes which may not be completely in memory. One major advantage of this scheme is that we can have programs which are larger than the physical memory. It is seen that many programs have code to handle unusual error conditions and these errors never occur hence these codes are never executed. Also programs may contains arrays, list and tables which may be allocated more memory than required. Apart from this certain options and features of a program are rarely used. Virtual memory makes the task of programming much easier because the programmer no longer needs to worry about the amount of physical memory. Virtual memory is implemented by demand paging as well as demand segmentation DEMAND PAGING A Demand paging system is similar to paging system with swapping in which we have a swapper which swaps only those pages which are needed and does not swaps the entire process. When a process is to be swapped in the swapper(pager) guesses which pages will be used before the process is swapped out again, it brings only those pages. Hence the pager decreased the swap time and the amount of memory needed. To implement demand paging we need some hardware to differentiate between those pages which are in the memory and those pages that are on the disk. Fahad Shaikh (System Administrator) Page 57
  • 58. Operating Systems SY-IT When the valid bit is set it means that the page is legal and is in the memory. If the bit is set to invalid it means that either the page is not valid or is valid but is currently on the disk. Fahad Shaikh (System Administrator) Page 58
  • 59. Operating Systems SY-IT STEPS FOR HANDLING PAGE FAULT: The procedure for handling page fault is as follows. 1. We check an internal table to determine whether the reference was valid or invalid. 2. If it was valid but is not in the physical memory we now bring it. 3. We find a free frame. 4. Bring the desired page from disk into the frame. 5. Modify the table. 6. Restart the instruction. PAGE REPLACEMENT If no frame is free, we find a frame that is not currently being used and free it, we can free a frame by writing the contents to swap space and change the page table entries. It is done by the following steps. 1. Find the location of the desired page on disk. 2. Find a free frame: Fahad Shaikh (System Administrator) Page 59
• 60. Operating Systems SY-IT
• If there is a free frame, use it.
• If there is no free frame, use a page replacement algorithm to select a victim frame.
3. Read the desired page into the (newly) free frame. Update the page and frame tables.
4. Restart the process.
If no frames are free, two page transfers are required (one page out, one page in). We can reduce this overhead by using a modify bit (dirty bit). The dirty bit is set for a page whenever the page is modified. If the dirty bit of the victim page is set, that page must be written back to the disk before its frame is reused. If the dirty bit is not set, the page has not been modified since it was read in, so it need not be written out and its frame can simply be overwritten by the new page.
PAGE REPLACEMENT ALGORITHMS
First-In First-Out (FIFO) Algorithm:
The simplest page replacement algorithm is the FIFO algorithm. A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. The FIFO algorithm is easy to understand and program; however its performance is not always good.
Fahad Shaikh (System Administrator) Page 60
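A minimal C sketch of FIFO replacement (an added illustration, not the notes' code): frames are filled in order, and on a fault the page that has been resident longest is evicted, tracked here with a simple circular victim pointer. Run on the reference string of the problem on the next page with three frames, it reports 15 page faults.

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    /* Reference string from the worked problem below. */
    int ref[] = { 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 };
    int nref = sizeof ref / sizeof ref[0];

    int frame[NFRAMES];
    int next = 0;          /* frame loaded longest ago, i.e. the FIFO victim */
    int used = 0, faults = 0;

    for (int i = 0; i < nref; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == ref[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < NFRAMES) {
            frame[used++] = ref[i];       /* a free frame is still available */
        } else {
            frame[next] = ref[i];         /* replace the oldest resident page */
            next = (next + 1) % NFRAMES;
        }
    }
    printf("page faults = %d\n", faults); /* prints 15 for this string */
    return 0;
}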
• 61. Operating Systems SY-IT
Prob: Consider the reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1. Find the number of page faults using the FIFO page replacement algorithm with three frames.
Belady's Anomaly: In some page replacement algorithms, the number of page faults may increase as the number of allocated frames increases. Consider the following curve showing Belady's Anomaly.
Fahad Shaikh (System Administrator) Page 61
• 62. Operating Systems SY-IT
From the above graph we see that with three frames we get 9 page faults, while with four frames we get 10 page faults.
OPTIMAL PAGE REPLACEMENT ALGORITHM:
In this algorithm we replace the page which will not be used for the longest period of time. This algorithm has the lowest page fault rate of all the algorithms and does not suffer from Belady's Anomaly. It is difficult to implement because it requires future knowledge of the reference string, so it is mainly used for comparison studies.
Eg: For the following reference string find the number of page faults using the optimal page replacement algorithm with 3 frames.
RS: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1.
Answer: 9 page faults.
LEAST RECENTLY USED ALGORITHM (LRU):
In this algorithm, when we need to replace a page we replace the page that has not been used for the longest period of time. LRU replacement associates with each page the time of that page's last use. When a page must be replaced we choose the page which has not been used for the longest period of time; hence we look into the past. The major problem with this algorithm is implementing it, for which additional hardware is required. The two types of implementation are:
1. Counter implementation
• Every page table entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter.
Fahad Shaikh (System Administrator) Page 62
• 63. Operating Systems SY-IT
• When a page needs to be replaced, we look at the counters to determine which page has the oldest time of last use.
2. Stack implementation
Another approach is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on the top. Hence the top of the stack is always the most recently used page and the bottom of the stack is the least recently used page.
Eg: For the following reference string find the number of page faults using the LRU algorithm with 3 frames.
RS: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
Answer: 12 page faults.
THRASHING:
For execution a process needs frames; if the process does not have the required number of frames, we get a page fault. Hence we must replace some page, but since all its pages are in active use, we replace a page which will be needed again almost immediately. Hence we get a page fault again, and again, and again. The process continues to fault; this high paging activity is called thrashing.
CAUSE OF THRASHING:
The OS monitors the CPU utilization; if it is low we increase the degree of multiprogramming by introducing a new process. A global page replacement algorithm is used, which replaces pages without regard to the process to which they belong. Suppose a process needs more frames; it starts faulting and takes frames away from the other processes. Those processes need the pages just taken, hence they also fault, taking frames away from still other processes. In this way we get a high level of paging activity, due to which CPU utilization decreases.
Fahad Shaikh (System Administrator) Page 63
  • 64. Operating Systems SY-IT As the CPU utilization decreases, the CPU scheduler brings in more new processes causing more page faults as a result CPU utilization drops even further. The CPU schedule again brings in more processes. At this stage thrashing has occurred. Fahad Shaikh (System Administrator) Page 64
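The feedback loop described above can be made concrete with a very rough, hypothetical model (illustrative only, not from the notes): a fixed pool of frames is shared equally by the admitted processes, and the scheduler keeps admitting more of them, which is exactly the response that drives utilization down further. The numbers are invented for the sketch.

#include <stdio.h>

int main(void)
{
    /* Crude model: once a process has fewer frames than it actively needs,
       it starts faulting and CPU utilization drops roughly in proportion.  */
    const int total_frames  = 100;
    const int frames_needed = 25;   /* frames each process needs to run well */
    int degree = 1;                 /* degree of multiprogramming            */

    for (int step = 0; step < 8; step++) {
        int frames_each = total_frames / degree;
        double utilization = (frames_each >= frames_needed)
                                 ? 0.90
                                 : 0.90 * frames_each / frames_needed;
        printf("degree=%d  frames/process=%d  utilization=%.2f\n",
               degree, frames_each, utilization);

        /* The policy from the text: utilization looks low, so admit another
           process -- which takes frames away from everyone else and drives
           utilization down even further.  This is the thrashing loop.       */
        degree++;
    }
    return 0;
}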
  • 65. Operating Systems SY-IT Unit - V FILE SYSTEM. A file is a collection of related information which is recorded on the secondary storage.From users point of view data cannot be written to secondary storage unless they are within a file.Files represent programs and data.A file has a certain designed structure according to its type.A text file is a sequence of characters.A source file is a sequence of subroutines and functions.An object file is a sequence of bytes organize into blocks.An executable file is a series of codes. FILE ATTRIBUTES: A file has certain attributes which may change from OS to another and consists of the following 1..Name – The symbolic file name is the only information through which the user can identify the file. 2..Identifier – It consists of a number which identifies the file within the file system. 3..Type – It is required for those systems which support different file types. 4..Location – It provides information regarding the location of the file and to the device on which the file resides. 5..Size – It gives the information regarding the size of the file. 6..Protection – It gives the access control information so as to decide who can do reading writing and executing. 7..Time,date and user identification – This information may be kept for creation,last modification and last use.The data can be useful for protection,security as well as monitoring the usage. FILE OPERATIONS: The various file operations are: 1..Creating a file – To create a file two steps are necessary. Fahad Shaikh (System Administrator) Page 65
  • 66. Operating Systems SY-IT  Space in the file system must be found.  An entry for the new file must be made in the directory. 2..Writing a file – To write a file we make a system call specifying the name of the file and the information to be written to the file.The system must keep a write pointer to the location in the file where the next write is to take place. 3..Reading a file – To read from a file we make a system call specifying the name of the file and the blocks of information which is to be read.The system searches the directory to find the file and maintains a read pointer from where the next read is to take place. 4..Repositioning within a file – The directory is searched for a particular entry and the current file position is set to the given value. 5..Deleting a file – To delete a file we search the directory for the given file and release all file space so that it can be reused by other files and erase the directory entry. 6..Truncating a file – This operation allows a user to retain the file attribute and erase the contents of the file. The other common attribute include file appending and file renaming. The following information are associated with an open file. 1..File pointer – It keeps the track of last read-write operation. 2..File open count – It keeps a tarck of the counter giving the number of open and closes of a file and becomes zero on the last close. 3..Disk location of the file – It is required for modifying the data within the file. 4..Access rights – This information can be used to allow or deny any request. FILE TYPES: A common technique for implementing file types is to include the type as part of the file name .The name is split into 2 parts that is name and extension.The common file types are File type usual extension function 1..executable exe,com,bin,or none read to run machine language program 2..object obj,o compiled,machine language,not linked Fahad Shaikh (System Administrator) Page 66
• 67. Operating Systems SY-IT
3..text (txt, doc) – textual data, documents.
4..batch (bat, sh) – commands to the command interpreter.
5..word processor (wp, tex, rtf, doc) – various word processor formats.
FILE STRUCTURE:
Since there are many kinds of files, it might seem that the OS should support multiple file structures; however, the resulting size of the OS would then become very large. If the OS defines five different file structures, it needs to contain the code to support all five, and severe problems arise if a new application requires a file structure not supported by the OS. UNIX considers each file to be a sequence of 8-bit bytes; no interpretation of these bits is made by the OS. Each application program must include its own code to interpret an input file into the proper structure. However, every OS must support at least one file structure, that of an executable file.
ACCESS METHODS
The various access methods are:
1..Sequential access
2..Direct access
3..Indexed access
1..Sequential access – The simplest access method is sequential access, which is used by editors, compilers, etc. Information in the file is processed in order, one record after the other. The basic operations on the file are reads and writes. A read operation reads the next portion of the file and advances the file pointer. A write operation appends to the end of the file and advances the pointer. Sequential access is based on a tape model of a file.
2..Direct access – We consider the file to be made up of fixed length logical records, which allows programs to read and write records without following any particular order. The direct access method is based on a disk model of a file, since a disk allows random access to any file block. For direct access the file is viewed as a numbered sequence of blocks or records. Since there is no restriction on the order, we may read block 14, then block 54, and then write block 6.
Fahad Shaikh (System Administrator) Page 67
  • 68. Operating Systems SY-IT For the direct access method ,the file operations are Read n and Write n. 3..Indexed access- With large files we see that the indexed file become too large to be kept in the memory.Hence we create an indexed for the index file.The primary index file would contain pointers to secondary index files which would point to the actual data items. DIRECTORY STRUCTURES The directory can be viewed as a symbol table which translates file names into their directory entries.A directory can be organize in many ways .The following are the various operations perform on a directory. 1..Search for a file 2..Create a file 3..Delete a file 4..List a directory 5..Rename a file Fahad Shaikh (System Administrator) Page 68
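Treating the directory as a symbol table, as described above, the search operation is essentially a lookup from a file name to its directory entry. The C sketch below is an added illustration with hypothetical entry fields; a real directory entry would carry the full set of attributes listed earlier.

#include <stdio.h>
#include <string.h>

/* Hypothetical directory entry: file name plus a couple of attributes. */
struct dir_entry {
    char name[32];
    unsigned start_block;   /* location information */
    unsigned size;          /* size in bytes        */
};

/* Search for a file: return its entry, or NULL if the name is not present. */
struct dir_entry *dir_search(struct dir_entry *dir, int count, const char *name)
{
    for (int i = 0; i < count; i++)
        if (strcmp(dir[i].name, name) == 0)
            return &dir[i];
    return NULL;
}

int main(void)
{
    struct dir_entry dir[] = {
        { "notes.txt", 10, 2048 },
        { "a.out",     14, 9216 },
    };
    struct dir_entry *e = dir_search(dir, 2, "a.out");
    if (e)
        printf("%s starts at block %u, size %u bytes\n",
               e->name, e->start_block, e->size);
    return 0;
}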
  • 69. Operating Systems SY-IT A directory has the following logical structures 1..Single level directory The simplest directory structure is the single level directory.Since all files are stored in the same directory.The advantage of this structure is that it is easy to support and understand. The drawback of this implementation is that if the number of files increases the user may find it difficult to remember the names of all the files.If thesystem has more than one user the file naming issue would also arise because each file must have a unique name. 2..Two level directory In two level directory structure each user has his own user file directory (UFD).Each UFD has a similar structure and contains the files of a single user.When a user logs in the systems master file directory(MFD) is searched.When a user refers to a particular file,only his own UFD is searched.Hence different users may have files with the same name as long as all the file names within each UFD are unique. Fahad Shaikh (System Administrator) Page 69
  • 70. Operating Systems SY-IT To create a file for a user ,the OS searches only that users UFD to confirm whether another file of that name exits.To delete a file,the OS confines its search to the local UFD,hence it cannot accidentally delete another users file which has the same name. UFD are created by a special system program through proper user name and account information.The program creates a new UFD and adds entry in the MFD. The disadvantage of two level directory structure is that it isolates one user from another.In some systems if the path name is given and access is possible to the file residing in other UFD.A two level directory can be thought as a tree of height 2.The root of the tree is the MFD,UFD’s are the branches and the files are the leaves. 3..TREE DIRECTORY DIRECTORY The tree structured directory is the most common directory structure.It contains a root directory.Every file in the system has a unique path name.A directory (or subdirectory) contains a set of files or subdirectories.A directory is simply another file treated in a special way.All directories have the same internal format.One bit in each directory entry defines the entry as a file(0) or as a subdirectory(1).Special system calls are used to create and delete directories. Each user has a current directory,when a reference is made to a file the current directory is searched.If the file is not in the current directory than the user must specify a path name or change the current Fahad Shaikh (System Administrator) Page 70
  • 71. Operating Systems SY-IT directory to the directory containing that file.The user can change his current directory whenever required. In this structure path names can be of two types.Absolute path names or Relative path names. There are two approaches to handle the deletion of a directory. 1..Some systems will not delete a directory unless it is empty. 2..Some systems delete a directory even if it contains subdirectories or files. 4..ACYCLIC – GRAPH DIRECTORIES An Acyclic graph directory structure allows directories to have shared subdirectories and files such that same file or subdirectories may be in two different directories.A shared fileor directory is not the same as two copies,it is such that any changes made by one user are immediately visible to the othe.A new file created by one person will automatically apper in all the shared sub directories. This structure is more complex since a file may have multiple absolute path names.Another problem involves deletion of aq shared directory or a file. Fahad Shaikh (System Administrator) Page 71
• 72. Operating Systems SY-IT
ALLOCATION METHODS:
1..Contiguous Allocation – The contiguous allocation method requires each file to occupy a set of contiguous blocks on the disk. Contiguous allocation of a file is therefore defined by the disk address of its first block and its length. The directory entry for each file indicates the address of the starting block and the length of the area allocated for the file. Accessing a file in this method is very easy; it supports both the sequential and the direct access method. The main drawback of this method is that it suffers from external fragmentation; another problem is finding space for a new file. Also, with contiguous allocation there is the problem of determining in advance how much space is needed for a file.
2..Linked Allocation – In linked allocation each file is a linked list of disk blocks; the disk blocks may be scattered anywhere on the disk. The directory contains a pointer to the first and last block of the file. Each block contains a pointer to the next block. These pointers are not made available to the user. In this allocation there is no external fragmentation and any free block can be used for a file. A file can continue to grow as long as free blocks are available. The disadvantage of linked allocation is that it does not support direct access. Another disadvantage is the space required for the pointers. A problem may also arise if a pointer is damaged or lost, hence this type of allocation can be unreliable.
3..Indexed Allocation – In this allocation all the pointers are brought together into one block called the index block. The directory contains the address of the index block. When a file is created, all pointers in the index block are set to nil. When a block is first written, its address is put into the index block. Indexed allocation supports direct access and does not suffer from external fragmentation. The main drawback of this allocation is the wasted space used to store the index block information.
FREE SPACE MANAGEMENT
1..Bit vector – In this approach each block is represented by 1 bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0. The main advantage of this approach is its relative simplicity and efficiency in finding free blocks. The disadvantage is that, for fast access, these bit vectors must be kept in main memory, where they can occupy a large amount of space.
2..Linked list – In this approach we keep a pointer to the first free block. This free block contains a pointer to the next free block, and so on. This scheme is not efficient because to go through the list of free blocks we must read each block, which takes much time.
Fahad Shaikh (System Administrator) Page 72
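A small C sketch of the bit vector approach described above (an added illustration): one bit per block, with 1 meaning free and 0 meaning allocated, so finding a free block is a scan for the first set bit.

#include <stdio.h>

#define NBLOCKS 64

/* One bit per disk block: 1 = free, 0 = allocated. */
static unsigned char bitmap[NBLOCKS / 8];

void mark_free(int b)      { bitmap[b / 8] |=  (1u << (b % 8)); }
void mark_allocated(int b) { bitmap[b / 8] &= ~(1u << (b % 8)); }

/* Return the number of the first free block, or -1 if none is free. */
int first_free_block(void)
{
    for (int b = 0; b < NBLOCKS; b++)
        if (bitmap[b / 8] & (1u << (b % 8)))
            return b;
    return -1;
}

int main(void)
{
    mark_free(5);
    mark_free(9);
    printf("first free block: %d\n", first_free_block());  /* prints 5 */
    mark_allocated(5);
    printf("first free block: %d\n", first_free_block());  /* prints 9 */
    return 0;
}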