INTRODUCTION TO OPERATING SYSTEMS
Solved Question Bank
Q.1) Write Short Notes
a) What is Context Switch?
Ans. Context switching is the procedure of storing the state of an active process for the
CPU when it has to start executing a new one. For example, process A with its
address space and stack is currently being executed by the CPU and there is a
system call to jump to a higher priority process B; the CPU needs to remember the
current state of the process A so that it can suspend its operation, begin executing
the new process B and when done, return to its previously executing process A.
Context switches are resource intensive and most operating system designers try to
reduce the need for a context switch. They can be software or hardware governed
depending upon the CPU architecture.
Context switches can relate to either a process switch, a thread switch within a
process or a register switch. The major need for a context switch arises when CPU
has to switch between user mode and kernel mode but some OS designs may
obviate it.
A common approach to context switching is making use of a separate stack per
switchable entity (thread/process), and using the stack to store the context itself.
This way the context itself is merely the stack pointer.
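The stack-per-entity approach can be sketched at user level. The example below is a minimal illustration using the POSIX <ucontext.h> API as available on Linux/glibc; the function names, stack size and messages are illustrative, and a real kernel performs the equivalent save/restore of registers and stack pointer in privileged mode.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];        /* a separate stack for the switchable entity */

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&task_ctx, &main_ctx);    /* save task's context, resume main */
    printf("task: resumed, finishing\n");
}

int main(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;         /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);    /* save main's context, run task */
    printf("main: back, switching to task again\n");
    swapcontext(&main_ctx, &task_ctx);
    printf("main: done\n");
    return 0;
}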
b) Define Rollback
Ans. The process of restoring a database or program to a previously defined state,
typically to recover from an error.
c) What is System Call?
Ans.
• System calls provide the interface between a process and the operating
system.
• System calls are instructions that generate an interrupt that causes the
operating system to gain control of the processor.
• The operating system then determines what kind of system call it is and
performs the appropriate services for the system caller.
A system call is made using the system call machine language instruction. These
calls are generally available as assembly language instructions and are usually listed
in the manuals used by assembly – language programmers. Certain systems allow
system calls to be made directly from a higher language program, in which case the
calls normally resemble predefined function or subroutine calls. They may generate
a call to a special run-time routine that makes the system call.
i) File and I/O System Calls:
open Get ready to read or write a file.
create Create a new file and open it.
read Read bytes from an open file.
write Write bytes to an open file.
close Indicate that you are done reading or writing a file.
ii) Process Management System Calls:
create process Create a new process
exit Terminate the process making the system call
wait Wait for another process to exit
fork Create a duplicate of the process making the system call
execv Run a new program in the process making the system call
iii) Interprocess Communication System Calls:
createMessageQueue Create a queue to hold messages
SendMessage Send a message to a message queue
ReceiveMessage Receive a message from a message queue
System calls can be roughly grouped into following major categories:
1) Process or Job Control
2) File Management
3) Device Management
4) Information Maintenance
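The calls listed above can be illustrated with their POSIX counterparts. The sketch below (file name and error handling are illustrative) creates a file, writes to it, then reopens and reads it back:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[32];
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644); /* create a new file and open it */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, "hello\n", 6);                  /* write bytes to the open file */
    close(fd);                                /* done writing */

    fd = open("demo.txt", O_RDONLY);          /* get ready to read the file */
    ssize_t n = read(fd, buf, sizeof buf);    /* read bytes from the open file */
    close(fd);
    printf("read %zd bytes\n", n);
    return 0;
}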
d) Define OS
Ans. An Operating System is a computer program that manages the resources of a
computer. It accepts keyboard or mouse inputs from users and displays the results
of the actions and allows the user to run applications, or communicate with other
computers via networked connections.
e) What is Swapping?
Ans. Swapping is a mechanism in which a process can be swapped temporarily out of
main memory to a backing store, and then brought back into memory for continued
execution. Lifting the program from the memory and placing it on the disk is called as
“swapping out”. To bring the program again from the disk to main memory is called
as “swapping in”.
Normally, a blocked process is swapped out to make room for a ready process to
improve the CPU utilization. If more than one process is blocked, the swapper
chooses a process with the lowest priority or a process waiting for a slow I/O event for swapping
out. The operating system has to find a place on the disk for the swapped out
process image. There are two alternatives:
a) To create a separate swap file for each process.
b) To keep a common swap file on the disk and note the location of each
swapped out process image within that file.
f) What is Semaphore?
Ans. A semaphore is a shared integer variable with non-negative values which can only
be subjected to the following two operations:
1) Initialization and
2) Indivisible (atomic) operations
A Semaphore mechanism basically consists of two primitive operations SIGNAL and
WAIT, which operate on a special type of semaphore variable s.
Semaphore variable can assume integer values, and except possibly for initialization
may be accessed and manipulated only by means of the SIGNAL and WAIT
operations.
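The WAIT and SIGNAL primitives correspond to sem_wait and sem_post in POSIX. A minimal sketch, assuming POSIX threads and unnamed semaphores (Linux, compiled with -pthread); the variable names are illustrative:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t s;                 /* the semaphore variable */
static int shared = 0;          /* data protected by s */

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&s);               /* WAIT(s): decrement, block while s == 0 */
    shared++;                   /* critical section */
    sem_post(&s);               /* SIGNAL(s): increment, wake one waiter */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);         /* initialization: s = 1 (binary semaphore) */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&s);
    return 0;
}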
g) Explain Belady's Anomaly
Ans. Belady’s anomaly demonstrates that increasing the number of page frames may also
increase the number of page faults.
h) Define Waiting Time.
Ans. The CPU scheduling algorithm does not affect the amount of time during which a
process executes or does I/O. The CPU – scheduling algorithm affects only the
amount of time during which a process spends waiting in the ready queue. Waiting
time is the sum of the periods spent waiting in the ready queue.
i) What is claim edge in Resource Allocation Graph?
Ans. A claim edge from Pi to Rj indicates that process Pi may
require Rj sometime in the future (a future request
edge). It is represented in the graph by a dashed line.
Before a process starts executing, it must declare all its
claim edges.
The sequence of operations is:
claim -> request -> assignment -> claim
j) Is the Round Robin Algorithm non-preemptive? Comment and Justify
Ans. Round Robin scheduling is designed for time-sharing systems. RR scheduling is
essentially FCFS scheduling with pre-emption added to switch between
processes. RR Scheduling algorithm is preemptive because no process is allocated
to the CPU for more than one time quantum in a row. If a process's CPU burst
exceeds one time quantum, that process is preempted and put back in the ready queue.
If a process does not complete before its CPU-time expires, the CPU is preempted
and given to the next process waiting in a queue. The preempted process is then
placed at the back of the ready list. Round Robin Scheduling is preemptive (at the
end of time-slice) therefore it is effective in time-sharing environments in which the
system needs to guarantee reasonable response times for interactive users.
k) Define the Term Editor.
Ans. An editor is a software program that allows users to create or manipulate plain
text computer files. An editor may also refer to any other program capable of editing
any other file. For example, an image editor is a program capable of editing any
number of different image files.
l) What is meant by Fragmentation?
Ans. Fragmentation is the process in which files are divided into pieces scattered around the disk.
It occurs naturally when you use a disk frequently, creating, deleting,
and modifying files. At some point, the operating system needs to store parts of a file
in noncontiguous clusters.
Fragmentation is categorised into:
a) External Fragmentation: It occurs when a region is unused and available, but
too small for any waiting job.
b) Internal Fragmentation: A job which needs m words of memory; may be run
in a region of n words where n >= m. The difference between those two
numbers (n-m) is Internal Fragmentation, memory which is internal to a
region, but is not being used.
m) What is file? List any two attributes of a file.
Ans. A file is a named collection of related information that is recorded on a secondary
storage. Commonly, files represent programs (source and object forms) and data. It
is a sequence of bits, bytes, lines or records whose meaning is defined by the file’s
creator and user.
There are various attributes for a file. Some of them are listed as below:
a) Name: The symbolic file name is the only information kept in human readable
form.
b) Type: This information is needed for those systems that support different file
types.
n) What is page fault?
Ans. A page is a fixed length memory block used as a transferring unit between physical
memory and an external storage. A page fault occurs when a program accesses a
page that has been mapped in address space, but has not been loaded in the
physical memory. When the page (data) requested by a program is not available in
the memory, it is called as a page fault. The operating system then brings the required
page in from secondary storage so the program can continue; only an invalid reference
causes the program to be shut down.
In other words, a page fault is a hardware or software interrupt, it occurs when an
access to a page that has not been brought into main memory takes place.
o) What do you mean by Turnaround Time?
Ans. The amount of time to execute a particular process is called as ‘Turnaround Time’. It
is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU and doing I/O.
p) What is meant by multiprogramming?
Ans. Multiprogramming is a form of parallel processing in which several programs are run
at the same time on a single processor. Since there is only one processor, there can
be no true simultaneous execution of different programs. Instead, the operating
system executes part of one program, then part of another, and so on. To the user it
appears that all programs are executing at the same time.
In multiprogramming system, when one program is waiting for I/O transfer; there is
another program ready to utilize the CPU. So it is possible for several jobs to share
the time of the CPU.
q) What is the use of overlays in Memory Management?
Ans. The process of transferring a block of program code or other data into internal
memory, replacing what is already stored is called as Overlay. The entire program
and data of a process must be in the physical memory for the process to execute. If
a process is larger than the amount of memory, then overlay technique can be used.
The idea of overlays is to keep in memory only those instructions and data that are needed at
any given time. Overlays are implemented by the user, with no special support needed from the
operating system; as a result, the programming design of an overlay structure is complex.
r) Define Process
Ans. A process is a program in execution. As the program executes, the process
changes state. The state of a process is defined by its current activity. Process
execution is an alternating sequence of CPU and I/O bursts, beginning and ending
with a CPU burst. Thus, each process may be in one of the following states: New,
Active, Waiting or Halted.
s) Define the term Compile Time
Ans. The period of time during which a program's source code is being translated
into executable code, is called as ‘Compile Time’. In other words, Compile time is the
amount of time required for compilation. The operations performed at compile time
usually include syntax analysis, various kinds of semantic analysis and code
generation.
t) What is a Dead Lock?
Ans. The permanent blocking of a set of processes that either compete for system
resources or communicate with each other, is called as a ‘Dead Lock’. In other
words, when a process requests resources, if the resources are not available at that
time, the process enters a wait state. All deadlocks involve conflicting needs for
resources by two or more processes. A common example is the traffic deadlock.
u) List basic operations of file
Ans. A file is an abstract data type. The following operations can be
performed on a file.
a) Creating a file: To create a file, space in the file system must be found for the
file, and an entry for the new file must be made in the directory.
b) Writing a file: To write a file, we make a system call specifying both the name
of the file and the information to be written to the file. The system must keep
a write pointer to the location in the file where the next write is to take place.
The write pointer must be updated whenever a write occurs.
c) Reading a file: To read from a file, we use a system call that specifies the
name of the file and where (in memory) the next block of the file should be
put. The system needs to keep a read pointer to the location in the file where
the next read is to take place. Once the read has taken place, the read pointer
is updated.
d) Repositioning within a file: The directory is searched for the appropriate
entry, and the current-file-position pointer is repositioned to a given value.
Repositioning within a file need not involve any actual I/O. This file operation
is also known as a file seek.
e) Deleting a file: To delete a file, we search the directory for the named file.
Having found the associated directory entry, we release all file space, so that
it can be reused by other files, and erase the directory entry.
f) Truncating a file: The user may want to erase the contents of a file but keep
its attributes. Rather than forcing the user to delete the file and then recreate
it, this function allows all attributes to remain unchanged (except for file
length) but lets the file be reset to length zero and its file space released.
These six basic operations comprise the minimal set of required file operations.
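These operations map directly onto POSIX calls. The sketch below (file name and sizes are illustrative) exercises create, write, reposition, read, truncate and delete:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[16];
    int fd = open("notes.txt", O_CREAT | O_RDWR | O_TRUNC, 0644); /* create the file */
    write(fd, "operating systems", 17);       /* write; the write pointer advances */
    lseek(fd, 10, SEEK_SET);                  /* reposition within the file (a "file seek") */
    ssize_t n = read(fd, buf, sizeof buf);    /* read from the new position */
    printf("read %zd bytes: %.*s\n", n, (int)n, buf);
    ftruncate(fd, 0);                         /* truncate: keep the file, reset length to zero */
    close(fd);
    unlink("notes.txt");                      /* delete: remove the directory entry */
    return 0;
}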
v) What is Dispatcher?
Ans. A Dispatcher is a module which connects the CPU to the process selected by the
short-term scheduler. The main function of the dispatcher is switching, it means
switching the CPU from one process to another process. The function of the
dispatcher also includes jumping to the proper location in the user program so it is ready to start
execution. The dispatcher should be fast, because it is invoked during each and
every process switch.
w) List the Classic Synchronization Problems.
Ans. One of the biggest challenges that the programmer must solve is to correctly identify
their problem as an instance of one of the classic problems. It may require thinking
about the problem or framing it in a less than obvious way, so that a known solution
may be used. The advantage of using a known solution is the assurance that it is
correct.
The problems are listed as below:
a) Bounded Buffer Problem: This problem is also called the Producers and
Consumers problem. A finite supply of containers is available. Producers take
an empty container and fill it with a product. Consumers take a full container,
consume the product and leave an empty container. The main complexity of
this problem is that we must maintain the count for both the number of empty
and full containers that are available (a semaphore-based sketch is given after this list).
b) Readers and Writers Problems: It is another classical problem in concurrent
programming. It basically revolves around a number of processes using a
shared global data structure. The processes are categorized depending on
their usage of the resource, as either readers or writers.
If one notebook exists where writers may write information to, only one writer
may write at a time. Confusion may arise if a reader is trying to read at the same
time as a writer is writing. Since readers only look at the data, but do not modify
the data, we can allow more than one reader to read at the same time.
The main complexity with this problem stems from allowing more than one
reader to access the data at the same time.
c) Dining Philosopher’s Problem: Let us consider five philosophers (the tasks)
spend their time thinking and eating spaghetti. They eat at a round table with
five individual seats. To eat, each philosopher needs two forks (the
resources). There are five forks on the table, one to the left and one to the
right of each seat. When a philosopher cannot grab both forks, he sits and
waits. Eating takes random time, and then the philosopher puts the forks
down and leaves the dining room. After spending some random time thinking
about the nature of the universe, he again becomes hungry, and the circle
repeats itself.
It can be observed that a straightforward solution, when forks are
implemented by semaphores, is exposed to deadlock. There exist two
deadlock states when all five philosophers are sitting at the table holding one
fork each. One deadlock state is when each philosopher has grabbed the fork
left of him, and another is when each has the fork on his right.
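The bounded-buffer problem mentioned in (a) is usually solved with two counting semaphores and a mutex. A minimal sketch assuming POSIX threads and unnamed semaphores; the buffer size and item count are illustrative (compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 4                                  /* number of containers (buffer slots) */
static int buffer[N], in = 0, out = 0;
static sem_t empty, full;                    /* counts of empty and full containers */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int i = 0; i < 8; i++) {
        sem_wait(&empty);                    /* wait for an empty container */
        pthread_mutex_lock(&lock);
        buffer[in] = i; in = (in + 1) % N;   /* fill it with a product */
        pthread_mutex_unlock(&lock);
        sem_post(&full);                     /* one more full container */
    }
    return arg;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 8; i++) {
        sem_wait(&full);                     /* wait for a full container */
        pthread_mutex_lock(&lock);
        int item = buffer[out]; out = (out + 1) % N;
        pthread_mutex_unlock(&lock);
        sem_post(&empty);                    /* return the empty container */
        printf("consumed %d\n", item);
    }
    return arg;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);                  /* N empty containers initially */
    sem_init(&full, 0, 0);                   /* no full containers initially */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}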
x) Define pages & frames in memory management.
Ans. Pages: A page is a fixed-size block of a process's logical (virtual) address space; it is
the unit in which a program is divided for transfer between backing store and main memory.
Frames: A frame is a fixed-size block of physical memory into which a page is loaded.
Page size and frame size are equal, so any page can be placed in any free frame.
Q.2) Explain Deadlock Prevention Strategies in Detail.
Ans. For deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
a) Mutual Exclusion:
• The mutual-exclusion condition must hold for non-sharable resources.
• Sharable resources, on the other hand, do not require mutually
exclusive access, and thus, cannot be involved in a deadlock.
• In general, we cannot prevent deadlocks by denying the mutual
exclusion condition because some resources are intrinsically non-
sharable.
b) Hold and Wait:
• One protocol requires each process to request and be allocated all its
resources before it begins execution. We can implement this provision
by requiring that system calls requesting resources for a process
precede all other system calls.
• An alternative protocol allows a process to request resources only
when it has none. A process may request some resources and use
them. Before it can request any additional resources, however, it must
release all the resources that it is currently allocated.
c) No Preemption:
• If a process that is holding some resources requests another resource
that cannot be immediately allocated to it, then all resources currently
being held are released implicitly. Then the preempted resources are
added to the list of resources for which the process is waiting.
• This makes pre-emption of resources even more difficult than voluntary
release and resumption of resources.
d) Circular Wait:
• One way to prevent the circular wait condition is by linear ordering of
different types of system resources. In this approach, system resources
are divided into different classes Cj where j = 1,.., n. Each process can then
request resources only in increasing order of class, so a circular chain of
waiting processes cannot form (a small lock-ordering sketch follows).
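A small user-level sketch of the linear-ordering idea, using two pthread mutexes as the resources; the ordering rule here ("lower address first") and the names are purely illustrative:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Always acquire the two resources in one global order. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a > (uintptr_t)b) { pthread_mutex_t *t = a; a = b; b = t; }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

static void *worker(void *arg) {
    lock_pair(&r2, &r1);        /* whichever order is named, locks are taken in the same global order */
    printf("thread %ld holds both resources\n", (long)arg);
    pthread_mutex_unlock(&r1);
    pthread_mutex_unlock(&r2);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}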
Q.3) State the role of Short Term Process Scheduler.
Ans. Schedulers are special system software which handles process scheduling in
various ways. Their main task is to select the jobs to be submitted into the system
and to decide which process to run. Schedulers are of three types.
Short Term Scheduler: It is also called CPU scheduler. Main objective is increasing
system performance in accordance with the chosen set of criteria. It is the change of
ready state to running state of the process. CPU scheduler selects process among
the processes that are ready to execute and allocates CPU to one of them.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes
the fine-grained decision of which process to execute next. The short-term scheduler is
faster than the long-term scheduler.
The dispatcher is the module that gives control of the CPU
to the process selected by the short-term scheduler. This function involves:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program.
Q.4) Explain operation on process.
Ans.
a) Process Creation:
• Parent process creates children processes, which, in turn, create other
processes, forming a tree of processes.
• Resource sharing
- Parent and children share all resources
- Children share a subset of the parent's resources
- Parent and child share no resources
• Execution
- Parent and children execute concurrently
- Parent waits until children terminate
• Address space
- Child is a duplicate of the parent
- Child has a program loaded into it
b) Process Termination
• Process executes its last statement and asks the operating system to delete it
(exit)
- Output data from child to parent (via wait)
- Process' resources are deallocated by the operating system
• Parent may terminate execution of children processes (abort)
- Child has exceeded allocated resources
- Task assigned to child is no longer required
- If parent is exiting
- Some operating systems do not allow a child to continue if its parent
terminates
- All children terminated - cascading termination
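On POSIX systems the operations above correspond to fork, execv, exit and wait. A minimal sketch (the program path /bin/ls is only an example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                        /* create a child: a duplicate of the parent */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                            /* child: load a new program into it */
        char *argv[] = { "/bin/ls", "-l", NULL };
        execv(argv[0], argv);
        perror("execv");                       /* reached only if execv fails */
        exit(1);
    }

    int status;
    waitpid(pid, &status, 0);                  /* parent waits until the child terminates */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}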
Q.5) Explain Indexed Allocation Method in detail.
Ans. From the user’s point of view, a file is an abstract data type. It can be created,
opened, written, read, closed and deleted without any real concern for its
implementation. The implementation of a file is a problem for the operating system.
There are three major methods of allocating disk space widely in use. One of them is
Indexed Allocation Method.
• Chained allocation cannot support efficient direct access, since pointers are
scattered with the blocks themselves all over the disk and need to be retrieved in
order.
• Indexed allocation solves this problem by bringing all the pointers together into
one location: the index block. In this case the FAT contains a separate one-level
index for each file; the index has one entry for each portion allocated to the file.
• File indexes are not physically stored as part of the FAT, but are kept in a
separate block, and the entry for the file in the FAT points to that block.
• Allocation may be on the basis of either fixed-size blocks or variable-size
portions. Allocation by blocks eliminates external fragmentation, whereas
allocation by variable-size portions improves locality.
• Indexed allocation supports both sequential and direct access to the file and thus
is the most popular form of file allocation.
Advantages:
• Does not suffer from external fragmentation.
• Supports both sequential and direct access to the file.
Q.6) List and explain types of scheduling.
Ans. The aim of processor scheduling is to assign processes to be executed by the
processor, in a way that meets system objectives, such as response time,
throughput and processor efficiency. In many systems, this scheduling activity is
broken down into three separate functions:
a) Long Term Scheduling
b) Medium Term Scheduling
c) Short Term Scheduling.
a) Long Term Scheduling:
• Long term scheduling is performed when a new process is created.
• If the number of ready processes in the ready queue becomes very
high, then there is an overhead on the operating system (i.e., the processor)
for maintaining long lists, and the cost of context switching and dispatching increases.
• The long-term scheduler limits the number of processes to allow for
processing by taking the decision to add one or more new jobs, based
on an FCFS (First-Come, First-Served) basis, or on priority, execution time or
Input/Output requirements. Long-term scheduler executes relatively
infrequently. Long-term scheduler determines which programs are
admitted into the system for processing.
• Once a job is admitted, it becomes a process and is added to
the queue for the short-term scheduler.
• In some systems, a newly created process begins in a swapped-out
condition, in which case it is added to a queue for the medium-term
scheduler. Schedulers manage these queues to minimize queueing delay
and to optimize performance.
b) Medium-term Scheduling
• Medium-term scheduling is a part of the swapping function.
• When part of the main memory gets freed, the operating system looks at
the list of suspended ready processes and decides which one is to be swapped
in (depending on priority, memory and other resources required, etc.).
• This scheduler works in close conjunction with the long-term scheduler.
• It will perform the swapping-in function among the swapped-out
processes.
• The medium-term scheduler executes somewhat more frequently than the long-term scheduler.
c) Short-term Scheduling
• Short-term scheduler is also called as dispatcher.
• Short-term scheduler is invoked whenever an event occurs, that may lead
to the interruption of the current running process.
• For example clock interrupts, I/O interrupts, operating system calls,
signals, etc. Short-term scheduler executes most frequently.
• It selects from among the processes that are ready to execute and
allocates the CPU to one of them.
• It must select a new process for the CPU frequently. It must be very fast.
Q.7) What is virtual memory? How it is achieved by using Demand Paging?
Ans. Virtual Memory:
• Virtual memory is the separation of user logical memory from physical memory.
• This separation allows an extremely large virtual memory to be provided for
programmers when only a smaller physical memory is available.
• Virtual memory also allows files and memory to be shared by several different
processes through page sharing.
Implementation of Virtual Memory using Demand Paging:
• Demand paging is a process which involves copying data from a secondary
storage system into random access memory (RAM), the main memory, only when
that data is actually needed.
• When we want to execute a process, we do not load the whole process; the pager
swaps in only those pages that are actually referenced.
• If the process touches a page that is not yet in memory, a page fault occurs and
the operating system loads the required page from the backing store before the
process continues.
• In this way a large virtual address space can be supported by a smaller physical
memory, since only the pages in use need to be resident.
Q.8) Define Dynamic Loading and Dynamic Linking?
Ans. Dynamic Loading: Dynamic loading is the process in which one can attach a shared
library to the address space of the process during execution, look up the address of
a function in the library, call that function and then detach the shared library when it
is no longer needed.
Dynamic Linking: Dynamic linking refers to linking that is done at load time or
run time, rather than when the executable is created.
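Dynamic loading as described above is what the POSIX dlopen family provides. A minimal sketch (the library name libm.so.6 is Linux-specific and illustrative; link with -ldl on older glibc versions):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *lib = dlopen("libm.so.6", RTLD_LAZY);                      /* attach the shared library */
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos"); /* look up a symbol */
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(lib); return 1; }

    printf("cos(0) = %f\n", cosine(0.0));
    dlclose(lib);                                                    /* detach when no longer needed */
    return 0;
}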
Q.9) Define Banker’s Algorithm
Ans. The algorithm which avoids deadlock by denying or postponing the request if it
determines that accepting the request could put the system in an unsafe state is
called as Banker’s Algorithm.
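The core of the algorithm is a safety check: a request is granted only if, after pretending to allocate it, some ordering still lets every process finish. A compact sketch of that check (the matrices below are made-up illustrative values, not from the text):

#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes */
#define R 2   /* resource types */

static bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finish[P] = { false };
    for (int j = 0; j < R; j++) work[j] = avail[j];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                       /* pretend Pi runs to completion */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;           /* nobody can finish: unsafe state */
    }
    return true;                                 /* a safe sequence exists */
}

int main(void) {
    int avail[R] = { 3, 2 };
    int alloc[P][R] = { {1, 0}, {2, 1}, {0, 1} };
    int need[P][R]  = { {2, 2}, {1, 1}, {3, 1} };
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}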
Q.10) What is polling? How it is achieved to control more than one device?
Ans. Polling: It is the continuous checking of other programs or devices by one program or
device to see what state they are in, usually to see whether they are still connected
or want to communicate.
The processor continuously polls or tests every device in turn as to whether it
requires attention. Polling is the process where the computer repeatedly checks whether an
external device is ready; while polling, the computer does nothing other than check
the status of the device. In layman's terms, “Polling is like picking up your phone
every few seconds to see if you have a call”.
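Controlling several devices by polling simply means testing each device's status in turn inside a loop. A small sketch in which the device status registers are simulated by plain flags (all names and values are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define NDEV 3

static bool device_ready[NDEV] = { false, true, false };   /* simulated status bits */

static void service(int dev) {
    printf("servicing device %d\n", dev);
    device_ready[dev] = false;                  /* request handled */
}

int main(void) {
    int pending = 1;                            /* stop once the one pending request is served */
    while (pending > 0) {
        for (int dev = 0; dev < NDEV; dev++) {  /* poll every device in turn */
            if (device_ready[dev]) {
                service(dev);
                pending--;
            }
        }
    }
    return 0;
}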
Q.11) Explain the strategies First Fit, Best Fit and Worst Fit used to select a free hole from
the set of available holes.
Ans. BEST - FIT: Best-fit memory allocation makes the best use of memory space but is
slower in making allocations. In the illustrated example, on the first processing cycle,
jobs 1 to 5 are submitted and processed first. After the first cycle, job 2 and job 4,
located in block 5 and block 3 respectively and both having a turnaround of one, are
replaced by jobs 6 and 7, while job 1, job 3 and job 5 remain in their designated blocks.
In the third cycle, job 1 remains in block 4, while job 8 and job 9 replace job 7 and job
5 respectively (both having a turnaround of 2). On the next cycle, job 9 and job 8 remain
in their blocks while job 10 replaces job 1 (which has a turnaround of 3). On the fifth cycle
only jobs 9 and 10 remain to be processed and there are 3 free memory blocks for
incoming jobs; but since there are only 10 jobs, these blocks remain free.
On the sixth cycle, job 10 is the only remaining job, and finally on the seventh cycle
all jobs have been successfully processed and executed and all the memory blocks
are free again.
FIRST - FIT: First-fit memory allocation is faster in making allocations but leads to
memory waste. In the illustrated example, on the first cycle jobs 1 to 4 are submitted
first, while job 6 occupies block 5 because the remaining memory space there is
enough for its required memory size. Job 5 waits in the queue because the memory
in block 5 is not enough for job 5 to be processed. On the next cycle, job 5 replaces
job 2 in block 1 and job 7 replaces job 4 in block 4 after both job 2 and job 4 finish
their processing. Job 8 waits in the queue because no remaining block is large
enough to accommodate its memory size. On the third cycle, job 8 replaces job 3
and job 9 occupies block 4 after job 7 is processed, while job 1 and job 5 remain in
their designated blocks. After the third cycle, block 1 and block 5 are free to serve
incoming jobs, but since there are only 10 jobs they remain free. Job 10 occupies
block 2 after job 1 finishes its turn, while job 8 and job 9 remain in their blocks. On
the fifth cycle, only job 9 and job 10 remain to be processed while 3 memory blocks
are free. In the sixth cycle, job 10 is the only remaining job, and lastly in the seventh
cycle all jobs have been successfully processed and executed and all the memory
blocks are free again.
WORST - FIT
Worst-fit memory allocation is the opposite of best-fit: it allocates the largest
available free block to the new job, and it is not the best choice for an actual system.
In the illustrated example, on the first cycle job 5 waits in the queue while jobs 1 to 4
and job 6 are processed first. Afterwards, job 5 occupies the freed block, replacing
job 2. Block 5 is now free to accommodate the next job, which is job 8, but since
block 5 is not large enough for job 8, job 8 waits in the queue. On the next cycle,
block 3 accommodates job 8 while job 1 and job 5 remain in their memory blocks; in
this cycle, 2 memory blocks are free. In the fourth cycle, only job 8 remains in block
3, while job 1 and job 5 are replaced by job 9 and job 10 respectively; as in the
previous cycle, there are still two free memory blocks. In the fifth cycle, job 8 finishes
while job 9 and job 10 are still in block 2 and block 4 respectively, and one more
memory block becomes free. The same happens in the sixth cycle. Lastly, in the
seventh cycle, both job 9 and job 10 finish their processing; all jobs have been
successfully processed and executed, and all the memory blocks are now free.
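The three strategies differ only in which hole they pick from the free list. A sketch over an array of hole sizes (the sizes and the 212-unit request are illustrative):

#include <stdio.h>

#define NHOLES 5

static int first_fit(const int hole[], int n, int req) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= req) return i;            /* first hole that is big enough */
    return -1;
}

static int best_fit(const int hole[], int n, int req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= req && (best == -1 || hole[i] < hole[best]))
            best = i;                            /* smallest adequate hole */
    return best;
}

static int worst_fit(const int hole[], int n, int req) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= req && (worst == -1 || hole[i] > hole[worst]))
            worst = i;                           /* largest adequate hole */
    return worst;
}

int main(void) {
    int hole[NHOLES] = { 100, 500, 200, 300, 600 };
    int req = 212;
    printf("first fit -> hole %d\n", first_fit(hole, NHOLES, req)); /* index 1 (size 500) */
    printf("best fit  -> hole %d\n", best_fit(hole, NHOLES, req));  /* index 3 (size 300) */
    printf("worst fit -> hole %d\n", worst_fit(hole, NHOLES, req)); /* index 4 (size 600) */
    return 0;
}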
Q.12) Explain PCB with the help of diagram.
Ans. Process Control Block (PCB): Each process is represented in the operating system
by a Process Control Block (PCB) also called as task control block. The operating
system groups all information that it needs about a particular process into a data
structure called a PCB or process descriptor. When a process is created, the
operating system creates a corresponding PCB and releases it when the process
terminates. The information stored in a PCB includes: Process name (ID) & Priority.
• Process State: The state may be new, ready, running,
waiting, halted and so on.
• Program Counter: The counter indicates the address
of the next instruction to be executed for this process.
• CPU Registers: The registers vary in number and
type, depending on the computer architecture.
• CPU Scheduling Information: This information
includes a process priority, pointers to scheduling
queues, and any other scheduling parameters.
• Memory Management Information: This information
may include such information as the value of the
base and limit registers, the page tables, or the
segment tables, depending on the memory system
used by the OS.
• Accounting information: This information includes
the amount of CPU and real time used, time limits,
account numbers, job or process numbers, and so
on.
• I/O status information: This information includes the list of I/O devices allocated
to the process, a list of open files, and so on.
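The fields above can be pictured as a C structure. This is a simplified sketch; real kernels keep far more state (Linux's task_struct, for instance), and every name here is illustrative:

#include <stdint.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process name / ID */
    int             priority;
    enum proc_state state;            /* process state */
    uint64_t        program_counter;  /* address of the next instruction */
    uint64_t        registers[16];    /* saved CPU registers */
    void           *page_table;       /* memory-management information */
    unsigned long   cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status: open file descriptors */
    struct pcb     *next;             /* link in a scheduling queue */
};

int main(void) {
    struct pcb p = { .pid = 1, .priority = 5, .state = NEW };
    p.state = READY;                  /* admitted: new -> ready */
    printf("pid %d is in state %d\n", p.pid, p.state);
    return 0;
}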
Q.13) Define Process States in detail with diagram
Ans. A process is a program in execution which includes the current activity and this state
is depicted by the program counter and the contents of the processor’s register.
There is a process stack for storage of temporary data. A user can have several
programs running and all these programs may be of a similar nature but they must
have different processes.
Processes may be in one of 5 states:
• New - The process is in the stage of being
created.
• Ready - The process has all the resources
available that it needs to run, but the CPU
is not currently working on this process's
instructions.
• Running - The CPU is working on this
process's instructions.
• Waiting - The process cannot run at the
moment, because it is waiting for some
resource to become available or for some
event to occur. For example the process
may be waiting for keyboard input, disk access request, inter-process messages, a timer
to go off, or a child process to finish.
• Terminated - The process has completed.
Q.14) Explain Internal Fragmentation & External Fragmentation with the help of an
example.
Ans. External & Internal Fragmentation:
a) External Fragmentation: When enough total memory space exists to satisfy a
request, but it is not contiguous, storage is fragmented into a large number of
small holes. This wasted space, which is not allocated to any partition, is called
external fragmentation. It occurs when a region is unused and available, but too
small for any waiting job.
b) Internal Fragmentation: When memory allocated to a process is slightly larger
than the requested memory, the space at the end of the partition is unused and
wasted; this wasted space within a partition is called internal fragmentation. For
example, a job which needs m words of memory may be run in a region of n words
where n >= m. The difference between those two numbers (n-m) is internal
fragmentation, memory which is internal to a region but is not being used.
Fig.: Internal & External Fragmentation
More Related Content

What's hot

Multithreading computer architecture
 Multithreading computer architecture  Multithreading computer architecture
Multithreading computer architecture
Haris456
 
Process management in os
Process management in osProcess management in os
Process management in os
Miong Lazaro
 

What's hot (20)

Advanced computer architechture -Memory Hierarchies and its Properties and Type
Advanced computer architechture -Memory Hierarchies and its Properties and TypeAdvanced computer architechture -Memory Hierarchies and its Properties and Type
Advanced computer architechture -Memory Hierarchies and its Properties and Type
 
CS9222 ADVANCED OPERATING SYSTEMS
CS9222 ADVANCED OPERATING SYSTEMSCS9222 ADVANCED OPERATING SYSTEMS
CS9222 ADVANCED OPERATING SYSTEMS
 
Multi Processors And Multi Computers
 Multi Processors And Multi Computers Multi Processors And Multi Computers
Multi Processors And Multi Computers
 
Chapter 13 - I/O Systems
Chapter 13 - I/O SystemsChapter 13 - I/O Systems
Chapter 13 - I/O Systems
 
Multithreading computer architecture
 Multithreading computer architecture  Multithreading computer architecture
Multithreading computer architecture
 
Os Swapping, Paging, Segmentation and Virtual Memory
Os Swapping, Paging, Segmentation and Virtual MemoryOs Swapping, Paging, Segmentation and Virtual Memory
Os Swapping, Paging, Segmentation and Virtual Memory
 
Cs8493 unit 2
Cs8493 unit 2Cs8493 unit 2
Cs8493 unit 2
 
Processor allocation in Distributed Systems
Processor allocation in Distributed SystemsProcessor allocation in Distributed Systems
Processor allocation in Distributed Systems
 
Chapter 11 - File System Implementation
Chapter 11 - File System ImplementationChapter 11 - File System Implementation
Chapter 11 - File System Implementation
 
Process management in os
Process management in osProcess management in os
Process management in os
 
Cs8493 unit 4
Cs8493 unit 4Cs8493 unit 4
Cs8493 unit 4
 
Operating System-Process Scheduling
Operating System-Process SchedulingOperating System-Process Scheduling
Operating System-Process Scheduling
 
Memory management
Memory managementMemory management
Memory management
 
cpu scheduling
cpu schedulingcpu scheduling
cpu scheduling
 
Centralized shared memory architectures
Centralized shared memory architecturesCentralized shared memory architectures
Centralized shared memory architectures
 
Chapter 3 - Processes
Chapter 3 - ProcessesChapter 3 - Processes
Chapter 3 - Processes
 
File models and file accessing models
File models and file accessing modelsFile models and file accessing models
File models and file accessing models
 
OS Memory Management
OS Memory ManagementOS Memory Management
OS Memory Management
 
CS9222 ADVANCED OPERATING SYSTEMS
CS9222 ADVANCED OPERATING SYSTEMSCS9222 ADVANCED OPERATING SYSTEMS
CS9222 ADVANCED OPERATING SYSTEMS
 
1.prallelism
1.prallelism1.prallelism
1.prallelism
 

Viewers also liked

operating system question bank
operating system question bankoperating system question bank
operating system question bank
rajatdeep kaur
 
Operating system notes
Operating system notesOperating system notes
Operating system notes
SANTOSH RATH
 

Viewers also liked (10)

Operating System Notes
Operating System NotesOperating System Notes
Operating System Notes
 
Prim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning treePrim's Algorithm on minimum spanning tree
Prim's Algorithm on minimum spanning tree
 
Algorithms Lecture 5: Sorting Algorithms II
Algorithms Lecture 5: Sorting Algorithms IIAlgorithms Lecture 5: Sorting Algorithms II
Algorithms Lecture 5: Sorting Algorithms II
 
operating system lecture notes
operating system lecture notesoperating system lecture notes
operating system lecture notes
 
Kruskal Algorithm
Kruskal AlgorithmKruskal Algorithm
Kruskal Algorithm
 
Algorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms IAlgorithms Lecture 2: Analysis of Algorithms I
Algorithms Lecture 2: Analysis of Algorithms I
 
operating system question bank
operating system question bankoperating system question bank
operating system question bank
 
Operating system notes
Operating system notesOperating system notes
Operating system notes
 
Operating system notes pdf
Operating system notes pdfOperating system notes pdf
Operating system notes pdf
 
Algorithm Analysis and Design Class Notes
Algorithm Analysis and Design Class NotesAlgorithm Analysis and Design Class Notes
Algorithm Analysis and Design Class Notes
 

Similar to Introduction to Operating System (Important Notes)

operating system for computer engineering ch3.ppt
operating system for computer engineering ch3.pptoperating system for computer engineering ch3.ppt
operating system for computer engineering ch3.ppt
gezaegebre1
 
Os files 2
Os files 2Os files 2
Os files 2
Amit Pal
 
Operating system Q/A
Operating system Q/AOperating system Q/A
Operating system Q/A
Abdul Munam
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptx
krishnajoshi70
 
operating system over view.ppt operating sysyems
operating system over view.ppt operating sysyemsoperating system over view.ppt operating sysyems
operating system over view.ppt operating sysyems
JyoReddy9
 
OS - Ch1
OS - Ch1OS - Ch1
OS - Ch1
sphs
 
Chapter 1 - Introduction
Chapter 1 - IntroductionChapter 1 - Introduction
Chapter 1 - Introduction
Wayne Jones Jnr
 
Operating systems. replace ch1 with numbers for next chapters
Operating systems. replace ch1 with numbers for next chaptersOperating systems. replace ch1 with numbers for next chapters
Operating systems. replace ch1 with numbers for next chapters
sphs
 

Similar to Introduction to Operating System (Important Notes) (20)

Os
OsOs
Os
 
Os
OsOs
Os
 
operating system for computer engineering ch3.ppt
operating system for computer engineering ch3.pptoperating system for computer engineering ch3.ppt
operating system for computer engineering ch3.ppt
 
Os files 2
Os files 2Os files 2
Os files 2
 
Operating system - Process and its concepts
Operating system - Process and its conceptsOperating system - Process and its concepts
Operating system - Process and its concepts
 
Chapter 5
Chapter 5Chapter 5
Chapter 5
 
Operating system Q/A
Operating system Q/AOperating system Q/A
Operating system Q/A
 
Operating system
Operating systemOperating system
Operating system
 
Chapter 1 Introduction to Operating System Concepts
Chapter 1 Introduction to Operating System ConceptsChapter 1 Introduction to Operating System Concepts
Chapter 1 Introduction to Operating System Concepts
 
Bt0070
Bt0070Bt0070
Bt0070
 
operatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptxoperatinndnd jdj jjrg-system-1(1) (1).pptx
operatinndnd jdj jjrg-system-1(1) (1).pptx
 
operating system over view.ppt operating sysyems
operating system over view.ppt operating sysyemsoperating system over view.ppt operating sysyems
operating system over view.ppt operating sysyems
 
OS - Ch1
OS - Ch1OS - Ch1
OS - Ch1
 
Chapter 1 - Introduction
Chapter 1 - IntroductionChapter 1 - Introduction
Chapter 1 - Introduction
 
Operating systems. replace ch1 with numbers for next chapters
Operating systems. replace ch1 with numbers for next chaptersOperating systems. replace ch1 with numbers for next chapters
Operating systems. replace ch1 with numbers for next chapters
 
Chapter 3 chapter reading task
Chapter 3 chapter reading taskChapter 3 chapter reading task
Chapter 3 chapter reading task
 
Os
OsOs
Os
 
Basics of Operating System
Basics of Operating SystemBasics of Operating System
Basics of Operating System
 
Ch1
Ch1Ch1
Ch1
 
Operating system
Operating systemOperating system
Operating system
 

Recently uploaded

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 

Recently uploaded (20)

Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
Deploy with confidence: VMware Cloud Foundation 5.1 on next gen Dell PowerEdg...
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 

Introduction to Operating System (Important Notes)

  • 1. INTRODUCTION TO OPERATING SYSTEMS Solved Question Bank Q.1) Write Short Notes a) What is Context Switch? Ans. Context switching is the procedure of storing the state of an active process for the CPU when it has to start executing a new one. For example, process A with its address space and stack is currently being executed by the CPU and there is a system call to jump to a higher priority process B; the CPU needs to remember the current state of the process A so that it can suspend its operation, begin executing the new process B and when done, return to its previously executing process A. Context switches are resource intensive and most operating system designers try to reduce the need for a context switch. They can be software or hardware governed depending upon the CPU architecture. Context switches can relate to either a process switch, a thread switch within a process or a register switch. The major need for a context switch arises when CPU has to switch between user mode and kernel mode but some OS designs may obviate it. A common approach to context switching is making use of a separate stack per switchable entity (thread/process), and using the stack to store the context itself. This way the context itself is merely the stack pointer. b) Define Rollback Ans. The process of restoring a database or program to a previously defined state, typically to recover from an error.
  • 2. c) What is System Call? Ans. • System calls provide the interface between a process and the operating system. • System calls are instructions that generate an interrupt that causes the operating system to gain control of the processor. • The operating system then determines what kind of system call it is and performs the appropriate services for the system caller. A system call is made using the system call machine language instruction. These calls are generally available as assembly language instructions and are usually listed in the manuals used by assembly – language programmers. Certain systems allow system calls to be made directly from a higher language program, in which case the calls normally resemble predefined function or subroutine calls. They may generate a call to a special run-time routine that makes the system call. i) File and I/O System Calls: open Get reading to read or write a file. create Create a new file and open it. read Read bytes from an open file. write Write bytes to an open file. close Indicate that you are done reading or writing a file ii) Process Management System Calls: create process Create a new process exit Terminate the process making the system call wait Wait for another process to exit fork Create a duplicate of the process working the system call execv Run a new program in the process making the system call iii) Interprocess Communication System Calls: createMessageQueue Create a queue to hold messages SendMessage Send a message to a message queue ReceiveMessage Receive a message from a message queue System calls can be roughly grouped into following major categories: 1) Process or Job Control 2) File Management 3) Device Management 4) Information Maintenance d) Define OS Ans. An Operating System is a computer program that manages the resources of a computer. It accepts keyboard or mouse inputs from users and displays the results of the actions and allows the user to run applications, or communicate with other computers via networked connections.
  • 3. e) What is Swapping? Ans. Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. Lifting the program from the memory and placing it on the disk is called as “swapping out”. To bring the program again from the disk to main memory is called as “swapping in”. Normally, a blocked process is swapped out to make room for a ready process to improve the CPU utilization. If more than one process is blocked, the swapper chooses a process with lowest or a process waiting for a slow I/O event for swapping out. The operating system has to find a place on the disk for the swapped out process image. There are two alternatives: a) To create a separate swap file for each process. b) To keep a common swap file on the disk and not the location of each swapped out process image within that file. f) What is Semaphore? Ans. A semaphore is a shared integer variable with non-negative values which can only be subjected to following two operations: 1) Initialization and 2) Invisible operations A Semaphore mechanism basically consists of two primitive operations SIGNAL and WAIT, which operate on a special type of semaphore variable s. Semaphore variable can assume integer values, and except possibly for initialization may be accessed and manipulated only by means of the SIGNAL and WAIT operations. g) Explain Beledy’s Anomaly Ans. Belady’s anomaly demonstrates that increasing the number of page frames may also increase the number of page faults. h) Define Waiting Time. Ans. The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O. The CPU – scheduling algorithm affects only the amount of time during which a process spends waiting in the ready queue. Waiting time is the addition of the periods spends waiting in the ready queue. i) What is claim edge in Resource Allocation Graph? Ans. A claim edge from Pi to Rj indicates that process Pi may require Rj sometime in the future (a future request edge). It is represented in the graph by a dashed line. Before a process starts executing, it must declare all its claim edges.
  • 4. The sequence of operations is: claim -> request -> assignment -> claim
  • 5. j) Is Round Robin Algorithm is non-preemptive? Comment and Justify Ans. Round Robbin scheduling is designed for time-sharing system. RR scheduling is also called as FCFS scheduling along with pre-emption to switch between processes. RR Scheduling algorithm is preemptive because no process is allocated to the CPU for more than one time quantum in a row. If a process CPU burst exceeds 1 time quantum, that process is pre-empted and is put back in ready queue. If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next process waiting in a queue. The preempted process is then placed at the back of the ready list. Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users. k) Define the Term Editor. Ans. The editor is a software programme that allows users to create or manipulate plain text computer files. An editor may also refer to any other program capable of editing any other file. For example, an image editor is a program capable of editing any number of different image files. l) What is meant by Fragmentation? Ans. The process in which files are divided into pieces scattered around the disk Fragmentation occurs naturally when you use a disk frequently, creating, deleting, and modifying files. At some point, the operating system needs to store parts of a file in noncontiguous clusters. Fragmentation is categorised into: a) External Fragmentation: It occurs when a region is unused and available, but too small for any waiting job. b) Internal Fragmentation: A job which needs m words of memory; may be run in a region of n words where n >= m. The difference between those two numbers (n-m) is Internal Fragmentation, memory which is internal to a region, but is not being used. m) What is file? List any two attributes of a file. Ans. A file is a named collection of related information that is recorded on a secondary storage. Commonly, files represent programs (source and object forms) and data. It is a sequence of bits, bytes, lines or records whose meaning is defined by the file’s creator and user. There are various attributes for a file. Some of them are listed as below: a) Name: The symbolic file name is the only information kept in human readable form. b) Type: This information is needed for those systems that support different file types. n) What is page fault?
  • 6. Ans. A page is a fixed length memory block used as a transferring unit between physical memory and an external storage. A page fault occurs when a program accesses a page that has been mapped in address space, but has not been loaded in the physical memory. When the page (data) requested by a program is not available in the memory, it is called as a page fault. This usually results in the application being shut down. In other words, a page fault is a hardware or software interrupt, it occurs when an access to a page that has not been brought into main memory takes place. o) What do you mean by Turnaround Time? Ans. The amount of time to execute a particular process is called as ‘Turnaround Time’. It is the sum of the periods spends waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O. p) What is meant by multiprogramming? Ans. Multiprogramming is a form of parallel processing in which several programs are run at the same time on a single processor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time. In multiprogramming system, when one program is waiting for I/O transfer; there is another program ready to utilize the CPU. So it is possible for several jobs to share the time of the CPU. q) What is the use of overlays in Memory Management? Ans. The process of transferring a block of program code or other data into internal memory, replacing what is already stored is called as Overlay. The entire program and data of a process must be in the physical memory for the process to execute. If a process is larger than the amount of memory, then overlay technique can be used. Overlays is to keep in memory only those instructions and data that are needed at any given time. Overlays are implemented by user, no special support needed from operating system. Hence, programming design of overlay structure is complex. r) Define Process Ans. A process is a programme in execution. As the program executes the process changes state. The state of a process is defined by its current activitity. Process execution is an alternating sequence of CPU and I/O bursts, beginning and ending with a CPU burst. Thus, each process may be in one of the following states: New, Active, Waiting or Halted. s) Define the term Compile Time Ans. The period of time during which a program's source code is being translated into executable code, is called as ‘Compile Time’. In other words, Compile time is the amount of time required for compilation. The operations performed at compile time
t) What is a Deadlock?
Ans. The permanent blocking of a set of processes that either compete for system resources or communicate with each other is called a 'deadlock'. When a process requests resources that are not available at that time, the process enters a wait state; a deadlock arises when every process in a set is waiting for a resource held by another process in the same set, so none of them can ever proceed. All deadlocks involve conflicting needs for resources by two or more processes. A common everyday example is a traffic deadlock at a crossing.
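As a small illustration of this resource-conflict idea (a sketch, not part of the original answer): two threads that each need two mutexes can deadlock if they acquire them in opposite orders, each ending up holding one lock while waiting for the other. Acquiring the locks in one agreed global order, as below, prevents the circular wait. Compile with -lpthread; the thread names are illustrative.

/* Sketch: a consistent lock order avoids the circular wait described above.
   If thread 2 instead took B before A, both threads could block forever,
   each holding one mutex and waiting for the other. */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    const char *name = arg;
    pthread_mutex_lock(&A);   /* both threads take A first ...              */
    pthread_mutex_lock(&B);   /* ... then B, so no circular wait can form   */
    printf("%s holds both resources\n", name);
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}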
u) List the basic operations on a file.
Ans. A file is an abstract data type. The following operations can be performed on a file:
a) Creating a file: space for the file must be found in the file system, and an entry for the new file must be made in the directory.
b) Writing a file: to write a file, we make a system call specifying both the name of the file and the information to be written. The system keeps a write pointer to the location in the file where the next write is to take place; the write pointer is updated whenever a write occurs.
c) Reading a file: to read from a file, we use a system call that specifies the name of the file and where (in memory) the next block of the file should be put. The system keeps a read pointer to the location in the file where the next read is to take place; once the read has taken place, the read pointer is updated.
d) Repositioning within a file: the directory is searched for the appropriate entry, and the current-file-position pointer is set to a given value. Repositioning within a file need not involve any actual I/O; this operation is also known as a file seek.
e) Deleting a file: to delete a file, we search the directory for the named file. Having found the associated directory entry, we release all the file's space so that it can be reused by other files, and erase the directory entry.
f) Truncating a file: the user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, this operation leaves all attributes unchanged (except the file length), resets the file to length zero and releases its file space.
These six basic operations comprise the minimal set of required file operations.

v) What is a Dispatcher?
Ans. A dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Its main function is switching: it switches the CPU from one process to another and jumps to the proper location in the user program so that execution can resume. The dispatcher must be fast, because it is invoked during every process switch.

w) List the classic synchronization problems.
Ans. One of the biggest challenges for the programmer is to correctly identify a problem as an instance of one of the classic synchronization problems. This may require thinking about the problem, or framing it, in a less than obvious way so that a known solution can be used; the advantage of using a known solution is the assurance that it is correct. The classic problems are listed below:
a) Bounded Buffer Problem: also called the Producers and Consumers problem. A finite supply of containers is available. Producers take an empty container and fill it with a product; consumers take a full container, consume the product and leave an empty container. The main complexity of this problem is that we must maintain counts of both the empty and the full containers that are available. (A short semaphore-based sketch follows this list.)
b) Readers and Writers Problem: another classic problem in concurrent programming. It revolves around a number of processes using a shared global data structure. The processes are categorised, according to their usage of the resource, as either readers or writers. If only one notebook exists for writers to write in, only one writer may write at a time, and confusion may arise if a reader tries to read at the same time as a writer is writing. Since readers only look at the data and do not modify it, more than one reader may be allowed to read at the same time. The main complexity of this problem stems from allowing more than one reader to access the data simultaneously.
c) Dining Philosophers Problem: consider five philosophers (the tasks) who spend their time thinking and eating spaghetti. They eat at a round table with five individual seats. To eat, each philosopher needs two forks (the resources); there are five forks on the table, one to the left and one to the right of each seat. When a philosopher cannot grab both forks, he sits and waits. Eating takes a random time, after which the philosopher puts the forks down and leaves the dining room; after spending some random time thinking about the nature of the universe, he becomes hungry again and the cycle repeats. A straightforward solution in which the forks are implemented as semaphores is exposed to deadlock: there are two deadlock states in which all five philosophers sit at the table holding one fork each — one in which each philosopher has grabbed the fork on his left, and one in which each has the fork on his right.
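The following is a minimal sketch of the bounded-buffer problem from item (a), using POSIX semaphores to count empty and full slots and a mutex to protect the buffer. The buffer size and item count are illustrative assumptions; compile with -lpthread.

/* Minimal producer/consumer sketch with a bounded buffer. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define SIZE 4
#define ITEMS 8

int buffer[SIZE];
int in = 0, out = 0;
sem_t empty, full;                    /* counts of empty and full slots */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);             /* wait for an empty container */
        pthread_mutex_lock(&lock);
        buffer[in] = i;               /* fill it with a product      */
        in = (in + 1) % SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full);              /* one more full container     */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);              /* wait for a full container   */
        pthread_mutex_lock(&lock);
        int item = buffer[out];       /* consume the product         */
        out = (out + 1) % SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty);             /* container is empty again    */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, SIZE);
    sem_init(&full, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}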
x) Define pages and frames in memory management.
Ans.
Pages: a page is a fixed-size block of a process's logical (virtual) address space. Logical memory is divided into pages; the most frequently accessed pages are kept in main memory while the rest reside on secondary storage and are brought in when needed.
Frames: a frame is a fixed-size block of physical main memory, the same size as a page. Pages are loaded into free frames, and the page table records which frame holds each page.
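The relationship between pages and frames can be shown with the standard address-translation arithmetic. The page size, the hand-filled page table and the sample logical address below are illustrative assumptions.

/* Sketch: translating a logical address to a physical address via a page table. */
#include <stdio.h>

int main(void) {
    unsigned page_size = 4096;                     /* assumed 4 KB pages             */
    unsigned page_table[] = {5, 2, 7, 1};          /* hypothetical: page i -> frame  */

    unsigned logical  = 6200;                      /* some logical address           */
    unsigned page     = logical / page_size;       /* which page?                    */
    unsigned offset   = logical % page_size;       /* where inside the page?         */
    unsigned frame    = page_table[page];          /* which physical frame holds it? */
    unsigned physical = frame * page_size + offset;

    printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
           logical, page, offset, frame, physical);
    return 0;
}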
Q.2) Explain Deadlock Prevention Strategies in detail.
Ans. For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
a) Mutual Exclusion:
• The mutual-exclusion condition must hold for non-sharable resources.
• Sharable resources, on the other hand, do not require mutually exclusive access and thus cannot be involved in a deadlock.
• In general, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
b) Hold and Wait:
• One protocol requires each process to request and be allocated all its resources before it begins execution. This can be implemented by requiring that system calls requesting resources for a process precede all other system calls.
• An alternative protocol allows a process to request resources only when it has none: a process may request some resources and use them, but before it can request any additional resources it must release all the resources it currently holds.
c) No Preemption:
• If a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources it currently holds are released (preempted) implicitly. The preempted resources are added to the list of resources for which the process is waiting.
• Preemption of resources is, however, more difficult to arrange than voluntary release and later re-acquisition of resources.
d) Circular Wait:
• One way to prevent the circular-wait condition is to impose a linear ordering on the different types of system resources. In this approach the resources are divided into classes Cj, where j = 1, ..., n, and every process must request resources in increasing order of class.

Q.3) State the role of the Short-Term Process Scheduler.
Ans. Schedulers are special system software that handle process scheduling in various ways; their main task is to select the jobs to be submitted to the system and to decide which process to run. Schedulers are of three types, one of which is the short-term scheduler.
Short-Term Scheduler: it is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It performs the change from the ready state to the running state of a process: the CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it. The short-term scheduler executes most frequently, makes the fine-grained decision of which process to execute next, and is much faster than the long-term scheduler.
The dispatcher is the module that actually gives control of the CPU to the process selected by the short-term scheduler. This function involves:
• switching context;
• switching to user mode;
• jumping to the proper location in the user program to restart that program.

Q.4) Explain the operations on a process.
Ans.
a) Process Creation:
• A parent process creates child processes which, in turn, can create other processes, forming a tree of processes.
• Resource sharing — three possibilities:
 - parent and children share all resources;
 - children share a subset of the parent's resources;
 - parent and child share no resources.
• Execution:
 - parent and children execute concurrently, or
 - the parent waits until the children terminate.
• Address space:
 - the child is a duplicate of the parent, or
 - the child has a new program loaded into it.
b) Process Termination:
• The process executes its last statement and asks the operating system to delete it (exit):
 - output data is passed from the child to the parent (via wait);
 - the process's resources are deallocated by the operating system.
• A parent may terminate execution of its child processes (abort) when:
 - the child has exceeded its allocated resources;
 - the task assigned to the child is no longer required;
 - the parent itself is exiting and the operating system does not allow the child to continue after its parent terminates (all children terminated — cascading termination).
(A minimal fork/wait sketch follows.)
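The following is a minimal sketch of the creation and termination operations above on a UNIX-like system, using the standard fork, exit and wait calls.

/* Sketch: a parent creates a child, the child exits, the parent waits for it. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* parent creates a child process              */
    if (pid == 0) {
        printf("child %d running\n", getpid());
        exit(0);                        /* child executes its last statement and exits */
    } else if (pid > 0) {
        int status;
        wait(&status);                  /* parent waits until the child terminates     */
        printf("parent %d: child finished\n", getpid());
    }
    return 0;
}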
Q.5) Explain the Indexed Allocation Method in detail.
Ans. From the user's point of view, a file is an abstract data type: it can be created, opened, written, read, closed and deleted without any real concern for its implementation. The implementation of a file is a problem for the operating system. There are three major methods of allocating disk space in wide use; one of them is the indexed allocation method.
• Chained allocation cannot support efficient direct access, since the pointers are scattered with the blocks themselves all over the disk and must be retrieved in order.
• Indexed allocation solves this problem by bringing all the pointers together into one location: the index block. In this case the FAT contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.
• File indexes are not physically stored as part of the FAT; the index is kept in a separate block, and the entry for the file in the FAT points to that block.
• Allocation may be on the basis of either fixed-size blocks or variable-size portions. Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size portions improves locality.
• Indexed allocation supports both sequential and direct access to the file and is thus the most popular form of file allocation.
Advantages:
• It does not suffer from external fragmentation.
• It supports both sequential and direct access to the file.

Q.6) List and explain the types of scheduling.
Ans. The aim of processor scheduling is to assign processes to be executed by the processor in a way that meets system objectives such as response time, throughput and processor efficiency. In many systems this scheduling activity is broken down into three separate functions: a) long-term scheduling, b) medium-term scheduling and c) short-term scheduling.
a) Long-Term Scheduling:
• Long-term scheduling is performed when a new process is created. The long-term scheduler determines which programs are admitted to the system for processing.
• If the number of ready processes in the ready queue becomes very high, the overhead on the operating system (for maintaining long lists, context switching and dispatching) increases.
• The long-term scheduler therefore limits the number of processes admitted for processing, deciding whether to add one or more new jobs on a first-come, first-served basis or according to priority, expected execution time or I/O requirements. The long-term scheduler executes relatively infrequently.
• Once a job is admitted, it becomes a process and is added to the queue for the short-term scheduler.
• In some systems a newly created process begins in a swapped-out condition, in which case it is added to a queue for the medium-term scheduler. In all cases the schedulers manage the queues so as to minimise queueing delay and optimise performance.
b) Medium-Term Scheduling:
• Medium-term scheduling is a part of the swapping function.
• When part of main memory is freed, the operating system looks at the list of suspended-ready processes and decides which one is to be swapped in (depending on priority, the memory and other resources required, and so on).
• This scheduler works in close conjunction with the long-term scheduler.
• It performs the swapping-in function among the swapped-out processes.
• The medium-term scheduler executes somewhat more frequently than the long-term scheduler.
c) Short-Term Scheduling:
• The short-term scheduler is also called the dispatcher.
• It is invoked whenever an event occurs that may lead to the interruption of the currently running process — for example clock interrupts, I/O interrupts, operating-system calls or signals. The short-term scheduler executes most frequently.
• It selects from among the processes that are ready to execute and allocates the CPU to one of them.
• Because it must select a new process for the CPU frequently, it must be very fast.

Q.7) What is virtual memory? How is it achieved using Demand Paging?
Ans. Virtual Memory:
• Virtual memory is the separation of user logical memory from physical memory.
• This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
• Virtual memory also allows files and memory to be shared by several different processes through page sharing.
Implementation of Virtual Memory using Demand Paging:
• Demand paging involves copying data from a secondary storage device into random access memory (RAM), the main memory, as it is needed.
• A page is brought into memory only when it is actually referenced; the reference triggers a page fault, which the operating system services by loading the required page so that the program can continue with the fastest possible access to that data.
• Thus, when we want to execute a process, only the pages it currently needs are swapped into memory; paging is performed on demand, that is, after a reference to specific data has been made.

Q.8) Define Dynamic Loading and Dynamic Linking.
Ans. Dynamic Loading: dynamic loading is the process by which a program can attach a shared library to its address space during execution, look up the address of a function in the library, call that function, and then detach the shared library when it is no longer needed. (A minimal sketch using the POSIX dlopen interface follows.)
Dynamic Linking: dynamic linking refers to linking that is done at load or run time, not when the executable is created.
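The following is a minimal sketch of dynamic loading as described in Q.8, using the POSIX dlopen/dlsym/dlclose interface. The library name "libm.so.6" is an assumption for a glibc-based Linux system; on other systems a different name would be needed. Compile with -ldl.

/* Sketch: attach a shared library at run time, look up a symbol, call it, detach. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    void *handle = dlopen("libm.so.6", RTLD_LAZY);    /* attach the math library       */
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));       /* call the dynamically found fn */

    dlclose(handle);                                  /* detach when no longer needed  */
    return 0;
}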
Q.9) Define the Banker's Algorithm.
Ans. The Banker's Algorithm is a deadlock-avoidance algorithm that denies or postpones a resource request if it determines that granting the request could put the system in an unsafe state. (A small sketch of the safety check at its core appears after Q.10 below.)

Q.10) What is polling? How is it used to control more than one device?
Ans. Polling is the continuous checking of other programs or devices by one program or device to see what state they are in — usually whether they are still connected or whether they want to communicate. The processor polls, or tests, every device in turn to see whether it requires attention. In polling, the computer waits for an external device by repeatedly checking its readiness; while doing so it does nothing other than check the status of the devices. In layman's terms, "polling is like picking up your phone every few seconds to see if you have a call".
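Referring back to Q.9, the following is a sketch of the safety check at the heart of the Banker's Algorithm. The small allocation and need matrices are illustrative assumptions; the loop repeatedly looks for a process whose remaining need can be satisfied from the currently available resources, and the state is safe only if every process can eventually finish.

/* Sketch of the Banker's algorithm safety check with illustrative matrices. */
#include <stdio.h>
#include <stdbool.h>

#define P 3   /* processes       */
#define R 2   /* resource types  */

int main(void) {
    int available[R]     = {3, 2};
    int allocation[P][R] = {{1, 0}, {2, 1}, {1, 1}};
    int need[P][R]       = {{2, 2}, {1, 1}, {3, 1}};   /* need = max - allocation */

    int work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++) work[j] = available[j];

    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) fits = false;
            if (fits) {                                /* process i can run to completion */
                for (int j = 0; j < R; j++)
                    work[j] += allocation[i][j];       /* it then returns its resources   */
                finish[i] = true;
                progress = true;
                printf("P%d can finish\n", i);
            }
        }
    }

    bool safe = true;
    for (int i = 0; i < P; i++)
        if (!finish[i]) safe = false;
    printf("state is %s\n", safe ? "safe" : "unsafe");
    return 0;
}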
Q.11) Explain the First-Fit, Best-Fit and Worst-Fit strategies used to select a free hole from the set of available holes.
Ans.
BEST-FIT: Best-fit memory allocation makes the best use of memory space but is slower in making an allocation. In the illustration below, on the first processing cycle jobs 1 to 5 are submitted and processed first. After the first cycle, job 2 and job 4 — located in block 5 and block 3 respectively, and both having a turnaround of one — are replaced by job 6 and job 7, while job 1, job 3 and job 5 remain in their designated blocks. In the third cycle job 1 remains in block 4, while job 8 and job 9 replace job 7 and job 5 respectively (both having a turnaround of two). In the next cycle jobs 9 and 8 remain in their blocks, while job 10 replaces job 1 (which has a turnaround of three). In the fifth cycle only jobs 9 and 10 remain to be processed, and three memory blocks are free for incoming jobs; since there are only ten jobs, these blocks remain free. In the sixth cycle job 10 is the only remaining job, and finally in the seventh cycle all jobs have been successfully processed and executed, and all the memory blocks are free again.

FIRST-FIT: First-fit memory allocation is faster in making an allocation but can waste memory. The illustration below shows that in the first cycle jobs 1 to 4 are submitted first, and job 6 occupies block 5 because the remaining memory space there is large enough for its required memory size; job 5 waits in the queue because block 5 is not large enough for it. In the next cycle job 5 replaces job 2 in block 1 and job 7 replaces job 4 in block 4, after jobs 2 and 4 finish processing; job 8 waits in the queue because no remaining block can accommodate its memory size. In the third cycle job 8 replaces job 3, and job 9 occupies block 4 after job 7 finishes, while jobs 1 and 5 remain in their designated blocks. After the third cycle, blocks 1 and 5 are free to serve incoming jobs, but since there are only ten jobs they remain free. Job 10 occupies block 2 after job 1 finishes its turn, while jobs 8 and 9 remain in their blocks. In the fifth cycle only jobs 9 and 10 are left to be processed, with three memory blocks free. In the sixth cycle job 10 is the only remaining job, and in the seventh cycle all jobs have been successfully processed and executed, and all the memory blocks are free.

WORST-FIT: Worst-fit memory allocation is the opposite of best-fit: it allocates the largest available free block to the new job, and it is not the best choice for an actual system. In the illustration, in the first cycle job 5 waits in the queue while jobs 1 to 4 and job 6 are processed first. Afterwards, job 5 occupies the free block, replacing job 2. Block 5 is now free to accommodate the next job, job 8, but since block 5 is not large enough for job 8, job 8 waits in the queue. In the next cycle block 3 accommodates job 8, while jobs 1 and 5 remain in their memory blocks; in this cycle two memory blocks are free. In the fourth cycle only job 8 remains in block 3, while jobs 1 and 5 are replaced by job 9 and job 10 respectively; as in the previous cycle, two memory blocks are still free. In the fifth cycle job 8 finishes, while jobs 9 and 10 are still in block 2 and block 4 respectively, and one more memory block becomes free.
The same situation holds in the sixth cycle. Lastly, in the seventh cycle, both job 9 and job 10 finish processing; all jobs have then been successfully processed and executed, and all the memory blocks are free again. (A short sketch contrasting how the three strategies choose a hole follows.)
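The following is a minimal sketch of how the three strategies pick a hole for a single request. The hole sizes and the request are illustrative assumptions, not the job tables from the walkthrough above: first-fit takes the first hole that is large enough, best-fit the smallest adequate hole, and worst-fit the largest adequate hole.

/* Sketch: choosing a free hole under first-fit, best-fit and worst-fit. */
#include <stdio.h>

#define HOLES 5

int first_fit(const int hole[], int n, int req) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= req) return i;          /* first hole that is big enough */
    return -1;
}

int best_fit(const int hole[], int n, int req) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= req && (best == -1 || hole[i] < hole[best]))
            best = i;                          /* smallest hole that still fits */
    return best;
}

int worst_fit(const int hole[], int n, int req) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (hole[i] >= req && (worst == -1 || hole[i] > hole[worst]))
            worst = i;                         /* largest hole that fits        */
    return worst;
}

int main(void) {
    int hole[HOLES] = {100, 500, 200, 300, 600};   /* free block sizes in KB */
    int request = 212;
    printf("first-fit -> hole %d\n", first_fit(hole, HOLES, request));
    printf("best-fit  -> hole %d\n", best_fit(hole, HOLES, request));
    printf("worst-fit -> hole %d\n", worst_fit(hole, HOLES, request));
    return 0;
}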
Q.12) Explain the PCB with the help of a diagram.
Ans. Process Control Block (PCB): each process is represented in the operating system by a Process Control Block (PCB), also called a task control block. The operating system groups all the information it needs about a particular process into a data structure called a PCB, or process descriptor. When a process is created, the operating system creates a corresponding PCB, and it releases the PCB when the process terminates. The information stored in a PCB includes the process name (ID) and priority, together with:
• Process State: the state may be new, ready, running, waiting, halted and so on.
• Program Counter: the counter indicates the address of the next instruction to be executed for this process.
• CPU Registers: the registers vary in number and type, depending on the computer architecture.
• CPU Scheduling Information: this includes the process priority, pointers to scheduling queues and any other scheduling parameters.
• Memory-Management Information: this may include the values of the base and limit registers, the page tables or the segment tables, depending on the memory system used by the OS.
• Accounting Information: this includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
• I/O Status Information: this includes the list of I/O devices allocated to the process, a list of open files, and so on.
(A minimal sketch of such a structure follows.)
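The following is a minimal sketch of what a PCB might contain, mirroring the fields listed above. The field names and sizes are illustrative assumptions; a real kernel's process descriptor is far larger.

/* Sketch of a Process Control Block as a C structure. */
#include <stdio.h>
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process name / ID               */
    int             priority;       /* scheduling priority             */
    enum proc_state state;           /* current process state           */
    uint64_t        program_counter; /* next instruction to execute     */
    uint64_t        registers[16];   /* saved CPU register contents     */
    uint64_t        base, limit;     /* memory-management information   */
    unsigned long   cpu_time_used;   /* accounting information          */
    int             open_files[16];  /* I/O status: open file handles   */
    struct pcb     *next;            /* link for a scheduling queue     */
};

int main(void) {
    struct pcb p = { .pid = 1, .priority = 5, .state = READY };
    printf("pid %d, priority %d, state %d\n", p.pid, p.priority, p.state);
    return 0;
}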
Q.13) Define the process states in detail with a diagram.
Ans. A process is a program in execution; its state is depicted by the current activity, the program counter and the contents of the processor's registers, and it has a process stack for the storage of temporary data. A user can have several programs running, and these programs may be similar in nature, but each must be a separate process. A process may be in one of five states:
• New — the process is in the stage of being created.
• Ready — the process has all the resources it needs to run, but the CPU is not currently working on its instructions.
• Running — the CPU is working on this process's instructions.
• Waiting — the process cannot run at the moment because it is waiting for some resource to become available or for some event to occur; for example, it may be waiting for keyboard input, a disk access request, inter-process messages, a timer to go off, or a child process to finish.
• Terminated — the process has completed.

Q.14) Explain Internal Fragmentation and External Fragmentation with the help of an example.
Ans.
a) Internal Fragmentation: when the memory allocated to a process is slightly larger than the requested memory, the space at the end of the partition is unused and wasted; this wasted space within a partition is called internal fragmentation. More precisely, a job that needs m words of memory may be run in a region of n words, where n >= m; the difference between those two numbers (n - m) is internal fragmentation — memory that is internal to a region but is not being used.
b) External Fragmentation: when enough total memory space exists to satisfy a request but it is not contiguous, the free storage is fragmented into a large number of small holes. This wasted space, not allocated to any partition, is called external fragmentation; it occurs when a region is unused and available but too small for any waiting job.
Fig.: Internal & External Fragmentation
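As a tiny worked example of the n - m arithmetic in Q.14 (the region and job sizes below are illustrative assumptions): a job of 3700 words placed in a fixed 4096-word region wastes 396 words inside the region.

/* Sketch: internal fragmentation = region size (n) - job size (m). */
#include <stdio.h>

int main(void) {
    int region = 4096;   /* n: size of the allocated region, e.g. one fixed partition */
    int job    = 3700;   /* m: memory the job actually needs                          */
    printf("internal fragmentation = %d words\n", region - job);
    return 0;
}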