M.TECH (HPTU) UNIT 1 Operating System and Case Study (Pee Game)
Operating System
Figure 1: Block diagram of an operating system
An operating system is a program that acts as an interface between the user and the computer hardware and
controls the execution of all kinds of programs. It is software that performs all the basic tasks such as file
management, memory management, process management, handling input and output, and controlling
peripheral devices such as disk drives and printers.
An operating system performs these services for applications:
• In a multitasking operating system, where multiple programs can be running at the same time, the
operating system determines which applications should run in what order and how much time should
be allowed for each application before giving another application a turn.
• It manages the sharing of internal memory (such as RAM) among multiple applications.
• It handles input and output to and from attached hardware devices, such as hard disks, printers, and
dial-up ports.
• It sends messages to each application or interactive user (or to a system operator) about the status of
operation and any errors that may have occurred.
• It can offload the management of batch jobs (for example, printing) so that the
initiating application is freed from this work.
• On computers that can provide parallel processing, an operating system can manage how to divide
the program so that it runs on more than one processor at a time.
All major computer platforms (hardware and software) require and sometimes include an operating system,
and operating systems must be developed with different features to meet the specific needs of various form
factors.
Common desktop operating systems:
• Windows is Microsoft’s flagship operating system, the de facto standard for home and business
computers. Introduced in 1985, the GUI-based OS has been released in many versions since then.
The user-friendly Windows 95 was largely responsible for the rapid development of personal
computing.
• Mac OS is the operating system for Apple's Macintosh line of personal computers and workstations.
• Linux is a Unix-like operating system that was designed to provide personal computer users a free or
very low-cost alternative. Linux has a reputation as a very efficient and fast-performing system.
A Mobile OS allows smartphones, tablet PCs and other mobile devices to run applications and programs.
Mobile operating systems include Apple iOS, Google Android, BlackBerry OS and Windows 10 Mobile.
Functions of Operating System
1. Booting:
Booting is the process of starting the computer: the operating system checks the computer's hardware and
makes it ready to work.
2. Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main memory is a
large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be
executed, it must be in main memory. An operating system performs the following activities for memory
management:
• Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are free.
• In multiprogramming, the OS decides which process will get memory, when, and how much.
• Allocates memory when a process requests it.
• De-allocates memory when a process no longer needs it or has been terminated.
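The toy C sketch below (all names, such as frame_t and allocate_frame, are invented for illustration) shows the flavor of this bookkeeping: a table records which parts of memory are in use and by which process, and allocation and de-allocation simply update the table. It is a minimal sketch of the idea, not how a real kernel does it.

#include <stdio.h>

/* Toy memory table: which frames are in use, and by whom. */
#define NUM_FRAMES 8

typedef struct {
    int in_use;     /* 1 if allocated, 0 if free */
    int owner_pid;  /* process that owns this frame */
} frame_t;

static frame_t memory_table[NUM_FRAMES];

/* Allocate the first free frame to a process; return frame number or -1. */
int allocate_frame(int pid) {
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (!memory_table[i].in_use) {
            memory_table[i].in_use = 1;
            memory_table[i].owner_pid = pid;
            return i;
        }
    }
    return -1;  /* no free memory */
}

/* Free every frame owned by a terminating process. */
void release_frames(int pid) {
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (memory_table[i].in_use && memory_table[i].owner_pid == pid) {
            memory_table[i].in_use = 0;
        }
    }
}

int main(void) {
    int f = allocate_frame(42);   /* process 42 requests memory */
    printf("PID 42 got frame %d\n", f);
    release_frames(42);           /* process 42 terminates */
    return 0;
}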
3. File Management
A file system is normally organized into directories for easy navigation and usage. These directories may
contain files and other directories.
An operating system performs the following activities for file management −
• Keeps track of information, location, usage, status, etc. These collective facilities are often known as
the file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
4. Security
By means of passwords and similar techniques, the operating system prevents unauthorized access to
programs and data.
5. Disk Management
The operating system manages disk space and keeps the stored files and folders organized.
6. Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how
much time. This function is called process scheduling. An operating system performs the following
activities for processor management:
• Keeps track of the processor and the status of processes. The program responsible for this task is
known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates the processor when a process no longer requires it.
7. Device Management
The operating system also controls all devices attached to the computer. The hardware devices are
controlled with the help of small pieces of software called device drivers; the OS manages device
communication via these drivers. It performs the following activities for device management:
• Keeps track of all devices. The program responsible for this task is known as the I/O controller.
• Decides which process gets the device, when, and for how much time.
• Allocates devices in an efficient way.
• De-allocates devices.
8. Printing control
The operating system also controls printing. If a user issues two print commands at a time, it does
not mix the data of the two files but prints them separately.
9. Providing an interface
The operating system provides a user interface so that the user can interact with the computer. The user
interface controls how you input data and instructions and how information is displayed on the screen. The
operating system offers two types of interface to the user:
a. Graphical user interface: interacts through a visual environment, using windows, icons, menus, and
other graphical objects to issue commands.
b. Command-line interface: provides an interface for communicating with the computer by typing
commands.
Different Types Of System
Simple Batch Systems
Early computers were (physically) enormously large machines run from a console. The common input
devices were card readers and tape drives. The common output devices were line printers, tape drives, and
card punches. The users of such systems did not interact directly with the computer systems. Rather, the user
prepared a job—which consisted of the program, the data, and some control information about the nature of
the job (control cards)—and submitted it to the computer operator. The job would usually be in the form of
punch cards. At some later time (perhaps minutes, hours, or days), the output appeared. The output consisted
of the result of the program, as well as a dump of memory and registers in case of program error.
The operating system in these early computers was fairly simple. Its major task was to transfer control
automatically from one job to the next. The operating system was always resident in memory (Figure 2).
To speed up processing, jobs with similar needs were batched together and were run through the computer
as a group. Thus, the programmers would leave their programs with the operator. The operator would sort
programs into batches with similar requirements and, as the computer became available, would run each
batch. The output from each job would be sent back to the appropriate programmer.
A batch operating system, thus, normally reads a stream of separate jobs (from a card reader, for example),
each with its own control cards that predefine what the job does.
Figure 2: Memory layout for a simple batch system
When the job is complete, its output is usually printed (on a line printer, for example). The definitive feature
of a batch system is the lack of interaction between the user and the job while that job is executing. The job
is prepared and submitted, and at some later time, the output appears. The delay between job submission and
job completion (called turnaround time) may result from the amount of computing needed or from delays
before the operating system starts to process the job.
In this execution environment, the CPU is often idle. This idleness occurs because the speeds of the
mechanical I/O devices are intrinsically slower than those of electronic devices. Even a slow CPU works in
the microsecond range, with millions of instructions executed per second. A fast card reader, on the other
hand, might read 1200 cards per minute (20 cards per second). Thus, the difference in speed between the
CPU and its I/O devices may be three orders of magnitude or more. Over time, of course, improvements in
technology resulted in faster I/O devices. Unfortunately, CPU speeds increased even faster, so that the
problem was not only unresolved, but also exacerbated.
The introduction of disk technology has helped in this regard. Rather than the cards being read from the card
reader directly into memory, and then the job being processed, cards are read directly from the card reader
onto the disk. The location of card images is recorded in a table kept by the operating system. When a job is
executed, the operating system satisfies its requests for card-reader input by reading from the disk. Similarly,
when the job requests the printer to output a line, that line is copied into a system buffer and is written to the
disk. When the job is completed, the output is actually printed. This form of processing is called spooling;
the name is an acronym for simultaneous peripheral operation on-line. Spooling, in essence, uses the disk as
a huge buffer, for reading as far ahead as possible on input devices and for storing output files until the
output devices are able to accept them.
Spooling is also used for processing data at remote sites. The CPU sends the data via communications paths
to a remote printer (or accepts an entire input job from a remote card reader). The remote processing is done
at its own speed, with no CPU intervention. The CPU just needs to be notified when the processing is
completed, so that it can spool the next batch of data.
Spooling overlaps the I/O of one job with the computation of other jobs. Even in a simple system, the
spooler may be reading the input of one job while printing the output of a different job. During this time,
still another job (or jobs) may be executed, reading their "cards" from disk and "printing" their output lines
onto the disk.
Spooling has a direct beneficial effect on the performance of the system. For the cost of some disk space and
a few tables, the computation of one job can overlap with the I/O of other jobs. Thus, spooling can keep both
the CPU and the I/O devices working at much higher rates.
Advantages:
• Maximum processor utilization.
• The setup time for jobs is saved.
• Performance increases, since jobs with similar needs are sequenced together.
Disadvantages:
• Difficult to debug.
• One job can affect all the pending jobs.
• A job could enter an infinite loop, and the others would never be processed.
• Lack of interaction between the user and the job.
• The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
• Difficult to provide the desired priority.
Simple Batch System
• Use of high-level languages and magnetic tapes.
• Jobs are batched together by type of language.
• An operator was hired to perform the repetitive tasks of loading jobs, starting the computer, and
collecting the output (operator-driven shop).
• It was not feasible for users to inspect memory or patch programs directly.
Operation of Simple Batch Systems:
• The user submits a job (written on cards or tape) to a computer operator.
• The computer operator places a batch of several jobs on an input device.
• A special program, the monitor, manages the execution of each program in the batch.
• Monitor utilities are loaded when needed.
• The resident monitor is always in main memory and available for execution.
Multiprogrammed Batch System
In a multiprogrammed batch system, multiple programs (or jobs) of different users can be executed
simultaneously (i.e., at the same time). The multiple jobs that are to run simultaneously must be kept in
main memory, and the operating system must manage them properly. If several of these jobs are ready to
run, the processor must decide which one to run.
In a multiprogrammed batch system, the operating system keeps multiple jobs in main memory at a time.
Many jobs may enter the system, and since main memory is in general too small to accommodate them all,
the jobs that enter the system for execution are initially kept on disk in the job pool. In other words, the job
pool consists of all jobs residing on disk awaiting allocation of main memory. When the operating system
selects a job from the job pool, it loads that job into memory for execution.
Normally, the number of jobs in main memory is smaller than the number of jobs in the job pool, and the
jobs in the job pool are awaiting allocation of main memory. If several jobs are ready to be brought into
memory and there is not enough room for all of them, the system requires memory management. Similarly,
if many jobs are ready to run at the same time, the system must schedule these jobs.
The processor picks one of the jobs in main memory and begins to execute it. Some jobs may have to wait
for certain tasks (such as an I/O operation) to complete. In a simple batch, non-multiprogrammed system,
the processor would sit idle during such waits. In a multiprogrammed system, the CPU switches to a second
job and begins to execute it; similarly, when the second job needs to wait, the processor switches to a third
job, and so on. The processor also checks the status of the previous jobs, i.e., whether they have completed
or not.
A multiprogrammed system takes less time to complete the same jobs than a simple batch system, although
multiprogrammed systems do not allow interaction between the processes (or jobs) while they are running
on the computer.
Multiprogramming increases the CPU's utilization and provides an environment in which the various
computer resources are used effectively. The CPU always remains busy running one of the jobs until all
jobs complete their execution.
In a multiprogrammed system, the hardware must provide facilities to support multiprogramming.
Since several jobs are available on disk, the OS can select which job to run; this leads to job scheduling.
The idea is to ensure that the CPU is always busy, since a single job may not be able to keep it busy.
Multiprogramming keeps the CPU busier as follows: there are several jobs in the job pool, and some are
selected (under various policies) to be loaded into memory, since memory is much smaller than the job
pool. A job is picked from memory and the CPU executes it as long as it can, until the job reaches an I/O
operation. Since the CPU would otherwise have to wait, it loads another job and executes that one as long
as it can, and so on. The traces of these jobs interleave like spaghetti, and the task of the OS is to manage
them neatly. There are several issues: which jobs should sit in memory, which job the CPU should pick up,
how to manage memory, what happens to a job that is suspended, what happens when it has to be activated
again, and how to ensure that multiple running jobs affect each other only in a limited way.
• The I/O devices are much slower than the processor, leaving the processor idle most of the time, waiting
for the I/O devices to finish their operations.
• Uniprogramming: the processor starts executing a certain program, and when it reaches an I/O
instruction it must wait until that I/O instruction is fully executed before proceeding.
• Multiprogramming: in contrast to uniprogramming, when a job needs to wait for an I/O instruction, the
processor switches to another job and executes it until the first job finishes its I/O; the processor
continues to swap between jobs as each reaches an I/O operation.
• A multiprogrammed batch system must rely on certain hardware capabilities, such as process switching,
when swapping between program executions.
• Interrupt-driven I/O or DMA helps a lot in multiprogramming environments, allowing the processor to
issue an I/O command and proceed to execute another program.
A simple batch operating system provides automatic job sequencing so that the processor can work more
effectively, but the processor still sometimes becomes idle, and the input and output devices are responsible
for this problem, as they are slow in comparison with the processor. Consider, for example, a program that
processes a file of records, executing roughly 100 machine instructions per record. If reading a record takes
about 15 µs, executing the 100 instructions about 1 µs, and writing a record about 15 µs (figures in the style
of Stallings' classic illustration), then the CPU is busy for only 1 µs out of every 31 µs, and the system
wastes more than 96% of its time waiting for the I/O devices to transfer data to and from the file. The figure
with a single program present illustrates this situation, known as uniprogramming: the processor executes
until it reaches an I/O instruction, and must then wait until that I/O instruction concludes before it can
proceed.
This inefficiency is not necessary. Memory can hold the resident monitor (OS) and one user program;
suppose instead that there is room for the resident monitor and two user programs. Then, while one job
waits for the input and output devices, the processor can switch to the other job, which is likely not waiting
for I/O. Memory can also be expanded to hold more than two programs. This approach is termed
multiprogramming, or multitasking, and it is the central theme of modern operating systems.
It is easier to understand the concept of multiprogramming with an example. Consider a system with 250
Mbytes of unused memory, a disk, a terminal, and a printer, on which three programs, JOB1, JOB2, and
JOB3, are submitted for execution at the same time. JOB2 and JOB3 have minimal processor requirements,
and JOB3 uses the disk and the printer continuously. In a simple batch environment, the jobs are executed
sequentially: JOB1 completes within five minutes; JOB2 must wait those 5 minutes and then takes a further
15 minutes to complete; JOB3 can start after 20 minutes and completes at the 30-minute mark. Tabulating
the average resource utilization and response times shows gross underutilization of every resource,
averaged over the required 30-minute period.
Time Sharing System
A time sharing system allows many users to share the computer resources simultaneously. In other words,
time sharing refers to the allocation of computer resources in time slots to several programs simultaneously.
For example a mainframe computer that has many users logged on to it. Each user uses the resources of the
mainframe -i.e. memory, CPU etc. The users feel that they are exclusive user of the CPU, even though this is
not possible with one CPU i.e. shared among different users.
The time sharing systems were developed to provide an interactive use of the computer system. A time
shared system uses CPU scheduling and multiprogramming to provide each user with a small portion of a
time-shared computer. It allows many users to share the computer resources simultaneously. As the system
switches rapidly from one user to the other, a short time slot is given to each user for their executions.
A time-sharing system provides direct access to a large number of users, with CPU time divided among all
the users on a scheduled basis. The OS allocates a slot of time to each user; when this time expires, it passes
control to the next user on the system. The time allowed is extremely small, so the users get the impression
that they each have their own CPU and are its sole owner. This short period of time during which a user
gets the attention of the CPU is known as a time slice or a quantum. The concept of a time-sharing system
is shown in the figure.
In the figure above, user 5 is active while user 1, user 2, user 3, and user 4 are in the waiting state, and user
6 is in the ready state.
As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e., user 6. At
that point user 2, user 3, user 4, and user 5 are in the waiting state and user 1 is in the ready state. The
process continues in the same way, and so on.
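The following minimal C sketch illustrates this rotation (the number of users, the quantum, and the workloads are invented for illustration): each user in turn receives a fixed time slice until all work is done.

#include <stdio.h>

#define NUM_USERS 6
#define QUANTUM_MS 100  /* illustrative time slice */

int main(void) {
    int remaining_ms[NUM_USERS] = {250, 100, 300, 50, 150, 200};
    int active = NUM_USERS;
    int u = 0;

    while (active > 0) {
        if (remaining_ms[u] > 0) {
            int run = remaining_ms[u] < QUANTUM_MS ? remaining_ms[u] : QUANTUM_MS;
            remaining_ms[u] -= run;           /* user u gets the CPU for one slice */
            printf("user %d runs for %d ms\n", u + 1, run);
            if (remaining_ms[u] == 0)
                active--;                     /* this user's work is finished */
        }
        u = (u + 1) % NUM_USERS;              /* control passes to the next user */
    }
    return 0;
}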
Time-shared systems are more complex than multiprogrammed systems. In time-shared systems, multiple
processes are managed simultaneously, which requires adequate management of main memory so that
processes can be swapped in or swapped out within a short time.
Advantages of time-sharing operating systems are as follows −
• Provide the advantage of quick response.
• Avoid duplication of software.
• Reduce CPU idle time.
Disadvantages of time-sharing operating systems are as follows −
• Problems of reliability.
• Questions of security and integrity of user programs and data.
• Problems of data communication.
Parallel Operating Systems
Parallel operating systems are a type of computer processing platform that breaks large tasks into smaller
pieces that are done at the same time in different places and by different mechanisms; they typically run on
“multi-core” processors. This type of system is usually very efficient at handling very large
files and complex numerical codes. It’s most commonly seen in research settings where central server
systems are handling a lot of different jobs at once, but can be useful any time multiple computers are doing
similar jobs and connecting to shared infrastructures simultaneously. They can be difficult to set up at first
and can require a bit of expertise, but most technology experts agree that, over the long term, they’re much
more cost effective and efficient than their single-computer counterparts.
A parallel operating system works by dividing sets of calculations into smaller parts and distributing them
between the machines on a network. To facilitate communication between the processor cores and memory
arrays, routing software has to either share its memory by assigning the same address space to all of the
networked computers, or distribute its memory by assigning a different address space to each processing
core. Sharing memory allows the operating system to run very quickly, but it is usually not as powerful.
When using distributed shared memory, processors have access to both their own local memory and the
memory of other processors; this distribution may slow the operating system, but it is often more flexible
and efficient.
The architecture of the software is typically built around a UNIX-based platform, which allows it to
coordinate distributed loads between multiple computers in a network. Parallel systems are able to use
software to manage all of the different resources of the computers running in parallel, such as memory,
caches, storage space, and processing power. These systems also allow a user to directly interface with all of
the computers in the network.
Parallel operating systems are the interface between parallel computers and the applications (parallel or not)
that are executed on them. They translate the hardware’s capabilities into concepts usable by programming
languages.
Distributed Operating Systems
A distributed operating system is a model in which distributed applications run on multiple computers
linked by communications. A distributed operating system is an extension of the network operating system
that supports higher levels of communication and integration of the machines on the network.
These systems are referred to as loosely coupled systems, where each processor has its own local memory
and processors communicate with one another through various communication lines, such as high-speed
buses or telephone lines.
This system looks to its users like an ordinary centralized operating system but runs on multiple,
independent central processing units (CPUs).
By loosely coupled systems, we mean that such computers possess no hardware connections at the
CPU-memory bus level, but are connected by external interfaces that run under the control of software.
A distributed OS involves a collection of autonomous computer systems, capable of communicating and
cooperating with each other through a LAN or WAN. A distributed OS provides a virtual machine
abstraction to its users and wide sharing of resources such as computational capacity, I/O, files, etc.
The structure shown in the figure contains a set of individual computer systems and workstations connected
via communication systems, but this structure alone does not make a distributed system: it is the software,
not the hardware, that determines whether a system is distributed.
The users of a true distributed system should not know on which machine their programs are running or
where their files are stored. LOCUS and MICROS are good examples of distributed operating systems.
Using the LOCUS operating system, it was possible to access local and distant files in a uniform manner.
This feature enabled a user to log on to any node of the network and utilize the resources of the network
regardless of his or her location. MICROS provided sharing of resources in an automatic manner: jobs were
assigned to different nodes of the whole system to balance the load on the different nodes.
Advantages:
1. Sharing of resources.
2. Reliability.
3. Communication.
4. Computation speedup.
5. Better performance than a single system.
6. If one PC in the distributed system malfunctions or is corrupted, another node or PC takes over its work.
7. More resources can be added easily.
8. Resources such as printers can be shared among multiple PCs.
Disadvantages of distributed operating systems −
• Security problems due to sharing.
• Some messages can be lost in the network.
• Bandwidth is another problem: if the data is large, all the network wires may need to be replaced, which
tends to be expensive.
• Overloading is another problem in distributed operating systems.
• If a database is connected on a local system and many users access it in a remote or distributed way,
performance becomes slow.
• Databases in a networked operating system are more difficult to administer than in a single-user system.
Examples of distributed operating systems −
• Windows Server 2003
• Windows Server 2008
• Windows Server 2012
Real Time Operating Systems
Real-time operating systems are very fast, quickly responding systems. These systems are used in
environments where a large number of events (generally external) must be accepted and processed in a
short time. Real-time processing requires quick transactions and is characterized by supplying an immediate
response. For example, a measurement from a petroleum refinery indicating that the temperature is getting
too high may demand immediate attention to avoid an explosion.
In a real-time operating system there is little swapping of programs between primary and secondary
memory. Most of the time, processes remain in primary memory in order to provide quick response;
therefore, memory management in real-time systems is less demanding compared to other systems.
The primary functions of a real-time operating system are to:
1. Manage the processor and other system resources to meet the requirements of an application.
2. Synchronize with and respond to system events.
3. Move data efficiently among processes and coordinate these processes.
Real-time systems are used in environments where a large number of events (generally external to the
computer system) must be accepted and processed with a quick response. Such systems have to be
multitasking, so a primary function of the real-time operating system is to manage certain system resources,
such as the CPU, memory, and time. Each resource must be shared among the competing processes to
accomplish the overall function of the system. Apart from these primary functions, a real-time operating
system has certain secondary functions that are not mandatory but are included to enhance performance:
1. To provide efficient management of RAM.
2. To provide exclusive access to computer resources.
The term real time refers to the technique of updating files with transaction data immediately after the
event to which the data relates.
A few more examples of real-time processing:
1. Airline reservation systems.
2. Air traffic control systems.
3. Systems that provide immediate updating.
4. Systems that provide up-to-the-minute information on stock prices.
5. Defense applications such as RADAR.
Real-time operating systems mostly use preemptive priority scheduling. They support more than one
scheduling policy and often allow the user to set parameters associated with such policies, such as the time
slice in round-robin scheduling, where each task in the task queue is scheduled for up to a maximum time,
set by the time-slice parameter, in a round-robin manner. Hundreds of priority levels are commonly
available for scheduling, and some specific tasks can also be marked as non-preemptive.
Real-time systems are divided into two types:
• Hard real-time systems.
• Soft real-time systems.
Hard Real Time Systems:
A hard real-time system is purely deterministic and time-constrained. For example, if the user expects the
output for a given input in 10 seconds, then the system should process the input data and give the output
exactly by the 10th second; the 10 seconds are the deadline for processing the given data. A hard real-time
system should not give the output by the 11th second or by the 9th second: exactly by the 10th second it
should give the output. In a hard real-time system, meeting the deadline is very important; if the deadline is
not met, the system has failed. Another example is a defense system: if a country launches a missile that
should reach its destination and touch the ground at 4:00, and the missile is launched at the correct time but
reaches the destination ground at 4:05 because of the performance of the system, then with those 5 minutes
of difference the point of impact may shift from one place to another, or even to another country. Here the
system must meet the deadline.
Soft Real Time System:
In a soft real-time system, meeting the deadline is not compulsory every time for every task, but each
process should still get processed and give a result. Even a soft real-time system cannot miss the deadline
for every task or process; according to priority, it should meet the deadline or may occasionally miss it. If
the system misses the deadline every time, its performance becomes worse and it cannot be used by the
users. Good examples of soft real-time systems are personal computers, audio and video systems, etc.
Memory Management: In simple words, this is how memory is allocated for every program that is to be run
and processed in memory (RAM or ROM). Schemes like demand paging, virtual memory, and
segmentation fall under this management.
Segmentation: A memory management scheme in which physical memory is divided into logical segments
according to the lengths of the parts of the program. Segmentation avoids unused memory, makes sharing
easy, and provides protection for the program. Sometimes, however, main memory cannot allocate space
for a segment because of its variable length, or because the segment is large.
Paging: In this scheme, physical memory is divided into fixed-size pages. Paging provides the functions of
segmentation and also addresses its disadvantages. Virtual memory is a memory management scheme in
which some part of a secondary storage device is used as physical memory when a program lacks enough
physical memory to run.
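As a small worked illustration of the paging idea (the 4 KB page size and the page-to-frame mapping are assumptions for the example), a logical address is split into a page number and an offset, and a page table maps the page to a physical frame:

#include <stdio.h>

#define PAGE_SIZE 4096  /* assumed 4 KB pages */

int main(void) {
    unsigned int logical = 20000;                /* example logical address */
    unsigned int page    = logical / PAGE_SIZE;  /* page number = 4 */
    unsigned int offset  = logical % PAGE_SIZE;  /* offset = 3616 */

    unsigned int frame = 9;  /* hypothetical page-table entry: page 4 -> frame 9 */
    unsigned int physical = frame * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical);
    return 0;
}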
Process Management: A thread contains a set of instructions that can execute independently of other
program flows. A collection of threads is called a process; in other words, a process comprises the
sequential execution of a program and the state control of the operating system. Every operating system
works by executing a series of processes on the processor and giving the results back to main memory.
Operating systems contain two types of processes:
System processes: these are mainly responsible for the working of the operating system.
Application processes: these are invoked when a particular application is started, and they execute with the
help of other system processes.
The operating system should process each and every process given by the user and give results back,
processing them according to priority. A scheduling algorithm takes care of scheduling the processes, and
inter-process communication (IPC) mechanisms (semaphores, message queues, shared memory, pipes, and
FIFOs) take care of resource sharing among the processes.
File Management: How files are placed in memory, which file may be used by which user, file permissions
(read, write, and execute), and the arrangement of files in secondary and primary memory using a file
system: all of these functions are handled by file management.
Device Management: management of devices such as tape drives, hard drives, optical drives, and memory
devices is done by the operating system.
Some of the facilities an RTOS provides:
• Priority-based scheduler
• System clock interrupt routine
• Deterministic behavior
Priority-based Scheduler
Most RTOSes have between 32 and 256 possible priorities for individual tasks/processes. The scheduler
will run the task with the highest priority. When a running task gives up the CPU, the next highest-priority
task runs, and so on.
The highest-priority task in the system will have the CPU until:
• it runs to completion (i.e., it voluntarily gives up the CPU), or
• a higher-priority task is made ready, in which case the original task is pre-empted by the new (higher-
priority) task.
As a developer, it is your job to assign the task priorities such that your deadlines will be met.
• Ready to run: when a task has all the resources it needs to run but is not in the running state, it is called
a ready-to-run task. This is the state before running.
• Running: when a task is executing, it is known as running.
• Blocked: when a task doesn't have enough resources to run, it is sent to the blocked state.
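A minimal C sketch of this rule (task names, priority values, and the convention that a larger number means higher priority are all assumptions for illustration): among the tasks that are ready to run, the scheduler dispatches the one with the highest priority.

#include <stdio.h>

typedef enum { READY, RUNNING, BLOCKED } task_state_t;

typedef struct {
    const char   *name;
    int           priority;  /* assumed: higher number = higher priority */
    task_state_t  state;
} task_t;

/* Return the index of the highest-priority READY task, or -1 if none. */
int pick_next(task_t tasks[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (tasks[i].state == READY &&
            (best == -1 || tasks[i].priority > tasks[best].priority))
            best = i;
    }
    return best;
}

int main(void) {
    task_t tasks[] = {
        {"sensor",  10, READY},
        {"logger",   2, READY},
        {"actuator", 7, BLOCKED},  /* blocked: lacks a resource, not runnable */
    };
    int next = pick_next(tasks, 3);
    if (next >= 0)
        printf("dispatching %s\n", tasks[next].name);  /* prints "sensor" */
    return 0;
}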
System Clock Interrupt routines
The RTOS will typically provide some sort of system clock (anywhere from 500 µs to 100 ms) that allows
you to perform time-sensitive operations. If you have a 1 ms system clock and you need to do a task every
50 ms, there is usually an API that allows you to say "in 50 ms, wake me up". At that point, the task will be
sleeping until the RTOS wakes it up.
Note that just being woken up does not ensure you will run exactly at that time; it depends on the priority.
If a task with a higher priority is currently running, you could be delayed.
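The "in 50 ms, wake me up" idea can be sketched with the POSIX nanosleep() call as a stand-in for an RTOS delay API (actual RTOS calls differ and are driven by the system clock tick; as noted above, the wake-up may still be delayed by a higher-priority task):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec period = { 0, 50 * 1000000L };  /* 50 ms in nanoseconds */

    for (int i = 0; i < 5; i++) {
        printf("tick %d\n", i);     /* ... do the periodic work here ... */
        nanosleep(&period, NULL);   /* sleep until roughly 50 ms have passed */
    }
    return 0;
}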
Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100 tasks, it does not take any
longer to switch context, determine what the next highest-priority task is, and so on.
In general, RTOS operations try to be O(1).
One of the prime areas for deterministic behavior in an RTOS is interrupt handling. When an interrupt
line is signaled, the RTOS immediately switches to the correct interrupt service routine and handles the
interrupt without delay (regardless of the priority of any task currently running).
Note that most hardware specific ISRs would be written by the developers on the project. The RTOS might
already provide ISRs for serial ports, system clock, maybe networking hardware but anything specialized
(pacemaker signals, actuators, etc...) would not be part of the RTOS.
This is a gross generalization and as with everything else, there is a large variety of RTOS implementations.
Some RTOS do things differently, but the description above should be applicable to a large portion of
existing RTOSes.
Process Management:
Process Concept
A process is what a program becomes when it is loaded into memory from a secondary storage medium like
a hard disk drive or an optical drive. Each process has its own address space, which typically contains both
program instructions and data. Despite the fact that an individual processor or processor core can only
execute one program instruction at a time, a large number of processes can be executed over a relatively
short period of time by briefly assigning each process to the processor in turn. While a process is executing
it has complete control of the processor, but at some point the operating system needs to regain control, such
as when it must assign the processor to the next process. Execution of a particular process will be suspended
if that process requests an I/O operation, if an interrupt occurs, or if the process times out.
When a user starts an application program, the operating system's high-level scheduler (HLS) loads all or
part of the program code from secondary storage into memory. It then creates a data structure in memory
called a process control block (PCB) that will be used to hold information about the process, such as its
current status and where in memory it is located. The operating system also maintains a separate process
table in memory that lists all the user processes currently loaded. When a new process is created, it is given a
unique process identification number (PID) and a new record is created for it in the process table which
includes the address of the process control block in memory. As well as allocating memory space, loading
the process, and creating the necessary data structures, the operating system must also allocate resources
such as access to I/O devices and disk space if the process requires them. Information about the resources
allocated to a process is also held within the process control block. The operating system's low-level
scheduler (LLS) is responsible for allocating CPU time to each process in turn.
A process is basically a program in execution. The execution of a process must progress in a sequential
fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we execute this program,
it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections ─
stack, heap, text, and data. The following image shows a simplified layout of a process inside main memory.
1. Stack: The process stack contains temporary data such as method/function parameters, return addresses,
and local variables.
2. Heap: Memory that is dynamically allocated to the process during its run time.
3. Text: The current activity, represented by the value of the program counter and the contents of the
processor's registers.
4. Data: This section contains the global and static variables.
Process Life Cycle
When a process executes, it passes through different states. These states may differ across operating
systems, and their names are also not standardized.
In general, a process can have one of the following five states at a time.
1. Start: This is the initial state when a process is first started/created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the
processor to be allocated to them by the operating system so that they can run. A process may come into
this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be
assigned to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set
to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input or for a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or it is terminated by the operating system,
it is moved to the terminated state, where it waits to be removed from main memory.
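These five states can be sketched as a C enumeration (the names are illustrative); the sequence of assignments in main() mirrors one possible walk through the table above.

#include <stdio.h>

typedef enum { START, READY, RUNNING, WAITING, TERMINATED } pstate_t;

const char *state_name(pstate_t s) {
    static const char *names[] = { "start", "ready", "running",
                                   "waiting", "terminated" };
    return names[s];
}

int main(void) {
    pstate_t s = START;  /* process created */
    s = READY;           /* admitted by the OS */
    s = RUNNING;         /* dispatched by the scheduler */
    s = WAITING;         /* requested I/O */
    s = READY;           /* I/O completed */
    s = RUNNING;         /* dispatched again */
    s = TERMINATED;      /* finished execution */
    printf("final state: %s\n", state_name(s));
    return 0;
}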
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the Operating System for every process. The PCB
is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a
process as listed below in the table −
1. Process state: The current state of the process, i.e., whether it is ready, running, waiting, or whatever.
2. Process privileges: Required to allow/disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU registers: The various CPU registers whose contents must be saved for the process when it leaves
the running state.
7. CPU scheduling information: Process priority and other scheduling information required to schedule
the process.
8. Memory management information: Information such as the page table, memory limits, and segment
table, depending on the memory scheme used by the operating system.
9. Accounting information: The amount of CPU time used for process execution, time limits, execution
ID, etc.
10. I/O status information: A list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain different
information in different operating systems. Here is a simplified diagram of a PCB −
The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
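A PCB can be sketched as a C struct holding the fields from the table above. The field types and sizes here are illustrative assumptions, since every real OS defines its own layout (Linux's task_struct, for example, has hundreds of fields).

/* Illustrative PCB layout; field types and sizes are assumptions. */
typedef enum { PS_READY, PS_RUNNING, PS_WAITING, PS_TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;             /* unique process ID */
    proc_state_t   state;           /* current process state */
    struct pcb    *parent;          /* pointer to the parent process */
    unsigned long  program_counter; /* address of the next instruction */
    unsigned long  registers[16];   /* saved CPU registers (count assumed) */
    int            priority;        /* CPU scheduling information */
    void          *page_table;      /* memory management information */
    unsigned long  cpu_time_used;   /* accounting information */
    int            open_devices[8]; /* I/O status information (illustrative) */
} pcb_t;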
Process Scheduling
Process scheduling is a major element in process management, since the efficiency with which processes are
assigned to the processor will affect the overall performance of the system. It is essentially a matter of
managing queues, with the aim of minimizing delay while making the most effective use of the processor's
time. The operating system carries out four types of process scheduling:
• Long-term (high-level) scheduling
• Medium-term scheduling
• Short-term (low-level) scheduling
• I/O scheduling
The long-term scheduler determines which programs are admitted to the system for processing, and as such
controls the degree of multiprogramming. Before accepting a new program, the long-term scheduler must
first decide whether the processor is able to cope effectively with another process. The more active
processes there are, the smaller the percentage of the processor's time that can be allocated to each process.
The long-term scheduler may limit the total number of active processes on the system in order to ensure that
each process receives adequate processor time. New processes may subsequently be created, as existing
processes are terminated or suspended. If several programs are waiting for the long-term scheduler the
decision as to which job to admit first might be done on a first-come-first-served basis, or by using some
other criteria such as priority, expected execution time, or I/O requirements.
Medium-term scheduling is part of the swapping function. The term "swapping" refers to transferring a
process out of main memory and into virtual memory (secondary storage) or vice-versa. This may occur
when the operating system needs to make space for a new process, or in order to restore a process to main
memory that has previously been swapped out. Any process that is inactive or blocked may be swapped into
virtual memory and placed in a suspend queue until it is needed again, or until space becomes available. The
swapped-out process is replaced in memory either by a new process or by one of the previously suspended
processes.
The task of the short-term scheduler (sometimes referred to as the dispatcher) is to determine which process
to execute next. This will occur each time the currently running process is halted. A process may cease
execution because it requests an I/O operation, or because it times out, or because a hardware interrupt has
occurred. The objectives of short-term scheduling are to ensure efficient utilization of the processor and to
provide an acceptable response time to users. Note that these objectives are not always completely
compatible with one another. On most systems, a good user response time is more important than efficient
processor utilization, and may necessitate switching between processes frequently, which will increase
system overhead and reduce overall processor throughput.
Figure 3: Queuing diagram for scheduling
Operations On Processes
The operations on processes carried out by an operating system are primarily of two types:
1. Process creation
2. Process termination
1. Process Creation
Process creation is the task of creating new processes. There are different situations in which a new process
is created, and different ways to create one. A new process can be created at the time of initialization of the
operating system, or when system calls such as fork() are issued by other processes. The process that
creates a new process using a system call is called the parent process, while the new process that is created
is called the child process. Child processes can themselves create new processes using system calls. A new
process can also be created by the operating system based on a request received from the user.
Process creation is very common in a running computer system, because corresponding to every task that is
performed there is a process associated with it. For instance, a new process is created every time a user logs
on to a computer system, an application program such as MS Word is started, or a document is printed.
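A minimal C sketch of process creation with the fork() system call mentioned above: fork() returns 0 in the child and the child's PID in the parent, so the two processes can tell themselves apart.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();  /* create a child process */

    if (pid < 0) {
        perror("fork failed");
    } else if (pid == 0) {
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
    } else {
        printf("parent: pid=%d, created child=%d\n", getpid(), pid);
    }
    return 0;
}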
2. Process Termination
Process termination is an operation in which a process is terminated after the execution of its last
instruction. This operation is used to terminate or end any process. When a process is terminated, the
resources that were being utilized by it are released by the operating system. When a child process
terminates, it sends status information back to the parent process before terminating. The child process can
also be terminated by the parent process if the task performed by the child process is no longer needed. In
addition, when a parent process terminates, it has to terminate its child processes as well, because a child
process cannot run when its parent process has been terminated.
The figure above shows the hierarchical structure of processes.
The termination of a process when all its instructions have been executed successfully is called normal
termination. However, there are instances when a process terminates due to some error; this is called
abnormal termination of a process.
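The status hand-off described above can be sketched with the POSIX exit() and wait() calls: the child terminates with a status value, and the parent collects it (the status value 7 is arbitrary).

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(7);             /* child terminates, sending status 7 back */
    } else if (pid > 0) {
        int status;
        wait(&status);       /* parent waits for the child to terminate */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    }
    return 0;
}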
1. Process creation: At a user's request, an already running process can create new processes. A parent
process creates child processes using a system call; these, in turn, can create other processes, forming a
tree of processes.
2. Process preemption: A process is preempted if an I/O event or timeout occurs. The process then moves
from the running state to the ready state, and the CPU loads another process from the ready state into the
running state, if one is available.
3. Process blocking: When a process needs an I/O event during its execution, it moves from the running
state to the waiting state and another process is dispatched to the CPU.
4. Process termination: A process is terminated when it completes its execution. Events such as OS,
hardware, and software interrupts can also cause the termination of a process.
Cooperating Process
A process is said to be a cooperating process if it can affect or be affected by other processes in the system.
A process that shares data with other processes is known as cooperating.
Cooperation is done to provide information sharing, computation speedup, modularity, and convenience.
To allow cooperation, there must be some mechanism for the processes to communicate (called IPC: inter-
process communication) and to synchronize their actions.
• Cooperating processes are those that share state. (They may or may not actually be "cooperating".)
• Their behavior is nondeterministic: it depends on the relative execution sequence and cannot be
predicted a priori.
• Their behavior is irreproducible.
• Example: one process writes "ABC" while another writes "CBA".
When discussing concurrent processes, multiprogramming is as dangerous as multiprocessing unless you
have tight control over the multiprogramming. Also bear in mind that smart I/O devices are as bad as
cooperating processes (they share the memory).
Why permit processes to cooperate?
• Want to share resources:
o One computer, many users.
o One file of checking account records, many tellers.
• Want to do things faster:
o Read the next block while processing the current one.
o Divide a job into sub-jobs and execute them in parallel.
Advantages of Cooperating Processes:
There are some advantages of cooperating processes:
Information sharing: Several users may wish to share the same information, e.g., a shared file. The OS
needs to provide a way of allowing concurrent access.
Computation speedup: Some problems can be solved more quickly by sub-dividing them into smaller tasks
that can be executed in parallel on several processors.
Modularity: The solution to a problem is structured into parts with well-defined interfaces, where the
parts run in parallel.
Convenience: A user may be running multiple processes to achieve a single goal, or a utility may invoke
multiple components that interconnect via a pipe structure attaching the stdout of one stage to the stdin of
the next, etc.
If we allow processes to execute concurrently and share data, then we must provide mechanisms to handle
conflicts, e.g., writing and reading the same piece of data, and we must also be prepared to handle
inconsistent or corrupted data.
Threads
A thread is a flow of execution through the process code, with its own program counter that keeps track of
which instruction to execute next, system registers which hold its current working variables, and a stack
which contains the execution history.
A thread shares information such as the code segment, data segment, and open files with its peer threads.
When one thread alters a code-segment memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance
through parallelism, and they represent a software approach to improving operating system performance by
reducing the overhead of process switching; in most other respects, a thread is equivalent to a classical
process.
Each thread belongs to exactly one process and no thread can exist outside a process. Each thread represents
a separate flow of control. Threads have been successfully used in implementing network servers and web
servers. They also provide a suitable foundation for parallel execution of applications on shared memory
multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread
1. A process is heavyweight and resource-intensive; a thread is lightweight, taking fewer resources than a
process.
2. Process switching needs interaction with the operating system; thread switching does not need to
interact with the operating system.
3. In multiple processing environments, each process executes the same code but has its own memory and
file resources; all threads can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first process is unblocked; while one
thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without using threads use more resources; multithreaded processes use fewer
resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write, or
change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context-switch threads.
• Threads allow utilization of multiprocessor architectures at a greater scale and efficiency.
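A minimal sketch of threads sharing one address space, using POSIX threads (compile with -pthread; the counter and iteration count are illustrative): two threads update the same global counter, each with its own flow of control, while a mutex keeps the shared data consistent.

#include <stdio.h>
#include <pthread.h>

static int shared_counter = 0;  /* shared data segment, visible to all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize access to shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);  /* prints 200000 */
    return 0;
}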
Types of Thread
Threads are implemented in the following two ways −
• User-level threads − threads managed by the user.
• Kernel-level threads − threads managed by the operating system, acting on the kernel, the operating
system core.
User Level Threads
In this case, the kernel is not aware of the existence of threads. The thread library contains code for
creating and destroying threads, for passing messages and data between threads, for scheduling thread
execution, and for saving and restoring thread contexts. The application starts with a single thread.
Advantages
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application-specific with user-level threads.
• User-level threads are fast to create and manage.
Disadvantages
• In a typical operating system, most system calls are blocking.
• A multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread management code in the
application area. Kernel threads are supported directly by the operating system. Any application can be
programmed to be multithreaded. All of the threads within an application are supported within a single
process.
The kernel maintains context information for the process as a whole and for individual threads within the
process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation,
scheduling, and management in kernel space. Kernel threads are generally slower to create and manage
than user threads.
Advantages
• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the
kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a
good example of this combined approach. In a combined system, multiple threads within the same
application can run in parallel on multiple processors, and a blocking system call need not block the entire
process. There are three types of multithreading models:
• Many-to-many relationship.
• Many-to-one relationship.
• One-to-one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides the best level of concurrency: when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If the user-level thread library is implemented on an operating system whose kernel does not support kernel threads, the many-to-one model is used.
One to One Model
There is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors.
The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
Difference between User-Level & Kernel-Level Thread
S.N. User-Level Threads Kernel-Level Threads
1 User-level threads are faster to create and manage. Kernel-level threads are slower to create and manage.
2 Implementation is by a thread library at the user level. The operating system supports creation of kernel threads.
3 A user-level thread is generic and can run on any operating system. A kernel-level thread is specific to the operating system.
4 Multithreaded applications cannot take advantage of multiprocessing. Kernel routines themselves can be multithreaded.
Inter-Process Communication
A mechanism through which data is shared among the processes in a system is referred to as inter-process communication. Multiple processes communicate with each other to share data and resources. A set of functions is required for processes to communicate with each other. In multiprogramming systems, some common storage is used where processes can share data. The shared storage may be main memory or a shared file. Files are the most commonly used mechanism for data sharing between processes: one process can write to the file while another process reads data from the same file.
Various techniques can be used to implement the Inter-Process Communication. There are two fundamental
models of Inter-Process communication that are commonly used, these are:
1. Shared Memory Model
2. Message Passing Model
Shared Memory Model
In the shared memory model, the cooperating processes share a region of memory for exchanging information. Some operating systems use a supervisor call to create a shared memory space. Similarly, some operating systems use the file system to create a RAM disk, which is a virtual disk created in RAM. Shared files are stored in the RAM disk to share information between processes; although they appear as files, they actually reside in memory. Processes share information by writing and reading data in the shared memory location or RAM disk.
Message Passing Model
In this model, data is shared by sending and receiving messages between cooperating processes. The message passing mechanism is easier to implement than shared memory, but it is suited to exchanging smaller amounts of data. In message passing, data is exchanged between processes through the kernel of the operating system using system calls. Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different machines connected by a network. For example, a chat program used on the Internet could be designed so that participants communicate with each other by exchanging messages. It must be noted that the message passing technique is slower than the shared memory technique.
A message contains the following information:
 Header that identifies the sending and receiving processes
 Block of data
 Pointer to the block of data
 Some control information about the process
Typically, inter-process communication is based on ports associated with processes. A port represents a queue of messages. Ports are controlled and managed by the kernel, and processes communicate with each other through the kernel.
In the message passing mechanism, two operations are performed: sending a message and receiving a message. The functions send() and receive() are used to implement these operations. Suppose P1 and P2 want to communicate with each other. A communication link must be created between them to send and receive messages. The communication link can be created in different ways; the most important methods are:
1. Direct model
2. Indirect model
3. Buffering
Inter-process communication techniques can be divided into various types. These are:
1. Pipes
2. FIFO
3. Shared memory
4. Mapped memory
5. Message queues
6. Sockets
Pipes:
Pipes originated in the earliest versions of the UNIX operating system. They were used to facilitate one-directional communication between processes on a single system. We can create a pipe by using the pipe system call, which creates a pair of file descriptors.
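A minimal sketch on a POSIX system (Python): pipe() hands back a read descriptor and a write descriptor, and a forked child writes while the parent reads.

```python
import os

r, w = os.pipe()            # pair of file descriptors: (read end, write end)

pid = os.fork()             # POSIX only; the child inherits both descriptors
if pid == 0:                # child: writes into the pipe
    os.close(r)
    os.write(w, b"hello from the child")
    os.close(w)
    os._exit(0)
else:                       # parent: reads from the pipe
    os.close(w)
    print(os.read(r, 1024).decode())
    os.close(r)
    os.wait()
```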
FIFO:
A FIFO or 'first in, first out' is a one-way flow of data. FIFOs are similar to pipes, the only difference being
that FIFOs are identified in the file system with a name. In simple terms, FIFOs are 'named pipes'.
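A short sketch (POSIX, Python; the path is hypothetical): once created, the FIFO is opened by name like an ordinary file.

```python
import os

PATH = "/tmp/demo_fifo"     # hypothetical path for the named pipe
os.mkfifo(PATH)             # the FIFO now appears as a file-system entry

# One process can open it for writing while another opens it for reading:
#   writer: open(PATH, "wb").write(b"data")
#   reader: open(PATH, "rb").read()

os.unlink(PATH)             # remove the entry when it is no longer needed
```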
Shared memory:
Shared memory is an efficient means of passing data between programs. An area is created in memory by a
process, which is accessible by another process. Therefore, processes communicate by reading and writing
to that memory space.
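As a sketch of the idea (Python 3.8+, using multiprocessing.shared_memory; the region name is illustrative), one side creates a named region and another attaches to it by name; both then read and write the same bytes. For brevity both handles appear in one script, where a real use would attach from a second process.

```python
from multiprocessing import shared_memory

# Creating side: allocate a named region other processes can attach to
shm = shared_memory.SharedMemory(create=True, size=16, name="demo_region")
shm.buf[:5] = b"hello"

# Attaching side: open the same region by name and see the same bytes
other = shared_memory.SharedMemory(name="demo_region")
print(bytes(other.buf[:5]))    # b'hello'

other.close()
shm.close()
shm.unlink()                   # the creator removes the region
```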
Mapped memory:
This method can be used to share memory or files between different processes in a Windows environment. A 32-bit API is available for this purpose on Windows. This mechanism speeds up file access and also facilitates inter-process communication.
Message queues:
By using this method, a developer can pass messages between processes via a single queue or a number of message queues. The system kernel manages this mechanism, and an application program interface (API) is used to coordinate the messages.
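As an analogue in Python (multiprocessing.Queue stands in for a kernel-managed message queue; names are illustrative), put() and get() play the roles of the send() and receive() operations described earlier:

```python
from multiprocessing import Process, Queue

def producer(q):
    q.put({"type": "greeting", "body": "hello"})   # the send() operation

def consumer(q):
    msg = q.get()               # receive(): blocks until a message arrives
    print(msg["body"])

if __name__ == "__main__":
    q = Queue()
    p1 = Process(target=producer, args=(q,))
    p2 = Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```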
Sockets:
We use this mechanism to communicate over a network, between a client and a server. This method
facilitates a standard connection that is independent of the type of computer and the type of operating system
used.
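A minimal sketch (Python; the loopback address and port are illustrative): a TCP server echoes back whatever a client sends, and the same pattern works unchanged between different machines and operating systems.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007          # illustrative loopback address/port
ready = threading.Event()

def server():
    with socket.socket() as srv:         # defaults to TCP over IPv4
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))   # echo the message back

t = threading.Thread(target=server)
t.start()
ready.wait()                             # avoid connecting before listen()

with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"ping")
    print(cli.recv(1024))                # b'ping'
t.join()
```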
CPU Scheduling
CPU scheduling is the mechanism that allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Why do we need scheduling?
A typical process involves both I/O time and CPU time. In a uniprogramming system like MS-DOS, time
spent waiting for I/O is wasted and CPU is free during this time. In multiprogramming systems, one process
can use CPU while another is waiting for I/O. This is possible only with process scheduling.
Scheduling Criteria
There are many different criteria to consider when choosing the "best" scheduling algorithm:
 CPU utilization
To make the best use of the CPU and not waste any CPU cycles, the CPU should be kept busy most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
 Throughput
It is the total number of processes completed per unit time, that is, the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour depending on the specific processes.
 Turnaround time
It is the amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion (wall-clock time).
 Waiting time
The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
 Load average
It is the average number of processes residing in the ready queue waiting for their turn to get into the
CPU.
 Response time
Amount of time it takes from when a request was submitted until the first response is produced. Remember, it is the time until the first response, not the completion of process execution (the final response).
In general, CPU utilization and throughput are maximized while the other measures are minimized for proper optimization.
1. CPU Utilization
We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100
percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
2. Throughput
If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.
3. Turnaround Time
From the point of view of a particular process, the important criterion is how long it takes to execute that
process. The interval from the time of submission of a process to the time of completion is
the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory,
waiting in the ready queue, executing on the CPU, and doing input/output.
4. Waiting Time
The CPU scheduling algorithm does not affect the amount of time during which a process executes or does input/output; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response Time
Often, a process can produce some output fairly early and can continue computing new results while
previous results are being output to the user. Thus, another measure is the time from the submission of a
request until the first response is produced. This measure, called response time, is the time it takes to
start responding, not the time it takes to output the response.
Scheduling criteria vary from one scheduler to another. There are many scheduling algorithms, and different scheduling algorithms have different properties. The selection of a proper scheduling algorithm may improve system performance. We must consider the properties of the various scheduling algorithms and of the computer system when selecting a particular scheduling algorithm.
Many criteria have been suggested for evaluating scheduling algorithms. Some commonly used scheduling criteria are described below.
 CPU Utilization Scheduling Criteria:
The CPU must be kept as busy as possible performing different activities. The percentage of time the CPU spends executing a process may range from 0 to 100 percent. CPU utilization is very important in real-time and multiprogramming systems. In a real-time system the CPU utilization should be 50 percent (lightly loaded system) to 95 percent (heavily loaded system). This means that the load on a system affects CPU utilization: high CPU utilization is achieved on a heavily loaded system.
 Balanced Utilization Scheduling Criteria:
Balanced utilization represents the percentage of time all the resources are utilized. In addition to CPU utilization, the utilization of memory, I/O devices and other system resources is also considered.
 Throughput Scheduling Criteria:
The number of processes executed by the system per unit of time is called throughput. For long processes this rate may be one process per minute; for short processes it may be 100 processes per minute. Throughput must be evaluated on the basis of average process length.
 Turnaround Time Scheduling Criteria:
Turnaround time represents the average period of time a process takes to execute. It is computed by subtracting the time at which the process was created from the time at which it terminated. Turnaround time is inversely proportional to throughput.
 Waiting Time Scheduling Criteria:
Waiting time represents the average period of time a process spends waiting in the ready queue to get a chance for execution. It does not include the time a process spends executing on the CPU or performing I/O. Waiting time is also a very important factor in measuring the performance of a system.
 Response Time Scheduling Criteria:
Response time represents the average time taken by the system to start responding to a user request. Response time matters most in interactive systems. For example, an ATM is an interactive system used in banks for the withdrawal of money; the user expects the system to respond quickly. In an interactive system, turnaround time is not the best criterion, since it mostly depends on the speed of the user's responses; instead, the response time should be as short as possible.
 Predictability Scheduling Criteria:
Predictability represents the consistency of the average response time in an interactive system. It is another measure of system performance, because users prefer consistency. Suppose an interactive system normally responds within a microsecond, but on some occasions takes 5 to 15 milliseconds or more; in this case the user may be confused. Users mostly prefer a system with a reasonable and predictable response time over a system that is faster on average but highly variable in its response time.
 Fairness Scheduling Criteria:
Fairness represents the degree to which all processes are given an equal opportunity to execute. This criterion is especially important in time-shared systems.
 Priorities Scheduling Criteria:
Processes with higher priorities must be given preference for execution.
Disk Scheduling Algorithms
First Come -First Serve (FCFS)
FCFS is the simplest of all the Disk Scheduling Algorithms. In FCFS, the requests are addressed in the order
they arrive in the disk queue.
All incoming requests are placed at the end of the queue, and whichever request is next in the queue is served next. This algorithm does not provide the best results. To determine the number of head movements, you simply count the number of tracks it took to move from one request to the next. In this case the head went from 50 to 95 to 180 and so on; from 50 to 95 it moved 45 tracks. Tallying up the tracks crossed over the entire request sequence gives the total head movement; in this example, it was 640 tracks. The disadvantage of this algorithm is evident in the oscillation from track 50 to track 180 and then back to track 11, then to 123, then to 64. As you will see, this is the worst algorithm one can use.
Figure 4: FCFS
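The accounting itself is simple, as this sketch shows (Python; the request queue here is assumed for illustration, since the figure's exact queue is not reproduced, so its total differs from the 640 tracks quoted above):

```python
start = 50
requests = [95, 180, 34, 119, 11, 123, 62, 64]   # assumed request queue

total, head = 0, start
for track in requests:              # FCFS: serve strictly in arrival order
    total += abs(track - head)      # tracks crossed for this request
    head = track

print(total)                        # 644 for this assumed queue
```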
 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.
Wait time of each process is as follows −
Process Wait Time : Service Time - Arrival Time
P0 0 - 0 = 0
P1 5 - 1 = 4
P2 8 - 2 = 6
P3 16 - 3 = 13
Average Wait Time: (0+4+6+13) / 4 = 5.75
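The wait times above can be reproduced with a short simulation (Python; the burst times 5, 3, 8 and 6 are inferred from the table's service times):

```python
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]  # (name, arrival, burst)

clock, waits = 0, {}
for name, arrival, burst in procs:      # FCFS: serve in arrival order
    clock = max(clock, arrival)         # CPU may sit idle until arrival
    waits[name] = clock - arrival       # time spent in the ready queue
    clock += burst                      # run the process to completion

print(waits)                            # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits)) # 5.75
```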
Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service
Shortest Job Next (SJN)
 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The processor should know in advance how much time the process will take.
Wait time of each process is as follows −
Process Wait Time : Service Time - Arrival Time
P0 3 - 0 = 3
P1 0 - 0 = 0
P2 16 - 2 = 14
P3 8 - 3 = 5
Average Wait Time: (3+0+14+5) / 4 = 5.50
Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority. The process with the highest priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.
Wait time of each process is as follows −
Process Wait Time : Service Time - Arrival Time
P0 9 - 0 = 9
P1 6 - 1 = 5
P2 14 - 2 = 12
P3 0 - 0 = 0
Average Wait Time: (9+5+12+0) / 4 = 6.5
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for a given time period, it is preempted and another process executes for a
given time period.
 Context switching is used to save states of preempted processes.
Wait time of each process is as follows −
Process Wait Time : Service Time - Arrival Time
P0 (0 - 0) + (12 - 3) = 9
P1 (3 - 1) = 2
P2 (6 - 2) + (14 - 9) + (20 - 17) = 12
P3 (9 - 3) + (17 - 12) = 11
Average Wait Time: (9+2+12+11) / 4 = 8.5
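The table can be reproduced with a small simulation (Python; the burst times 5, 3, 8 and 6 and a quantum of 3 are assumed, as they yield exactly the wait times shown):

```python
from collections import deque

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]  # (name, arrival, burst)
QUANTUM = 3

remaining = {name: burst for name, _, burst in procs}
waits = {name: 0 for name, _, _ in procs}
last_ready = {name: arr for name, arr, _ in procs}  # when each last became ready
pending = deque(sorted(procs, key=lambda p: p[1]))  # not yet arrived
ready, clock = deque(), 0

while pending or ready:
    while pending and pending[0][1] <= clock:       # admit new arrivals
        ready.append(pending.popleft()[0])
    if not ready:
        clock = pending[0][1]                       # idle until next arrival
        continue
    name = ready.popleft()
    waits[name] += clock - last_ready[name]         # time waited this round
    run = min(QUANTUM, remaining[name])
    clock += run
    remaining[name] -= run
    while pending and pending[0][1] <= clock:       # arrivals during the slice
        ready.append(pending.popleft()[0])
    if remaining[name] > 0:                         # preempted: back to the tail
        last_ready[name] = clock
        ready.append(name)

print(waits)                             # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
print(sum(waits.values()) / len(waits))  # 8.5
```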
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The
Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the
algorithm assigned to the queue.
Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate
processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to
a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority
queues. Similarly, a process that waits too long in a lower- priority queue may be moved to a higher-priority
queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will only be executed if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2. A process in queue 1 will in turn be
preempted by a process arriving for queue 0.
A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds.
If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, and only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less.
Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that
need more than 8, but less than 24, milliseconds are also served quickly, although with lower priority than
shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU
cycles left over from queues 0 and 1.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
 The number of queues
 The scheduling algorithm for each queue
 The method used to determine when to upgrade a process to a higher- priority queue
 The method used to determine when to demote a process to a lower-priority queue
 The method used to determine which queue a process will enter when that process needs service
The definition of a multilevel feedback queue scheduler makes it the most general CPU scheduling
algorithm. It can be configured to match a specific system under design. Unfortunately, it also requires some
means of selecting values for all the parameters to define the best scheduler. Although a multilevel feedback
queue is the most general scheme, it is also the most complex.
Advantages
A process that waits too long in a lower priority queue may be moved to a higher priority queue.
Disadvantage
Moving processes between queues produces more CPU overhead.
Algorithm
Multiple FIFO queues are used and the operation is as follows:
1. A new process is inserted at the end (tail) of the top-level FIFO queue.
2. At some stage the process reaches the head of the queue and is assigned the CPU.
3. If the process is completed within the time quantum of the given queue, it leaves the system.
4. If the process voluntarily relinquishes control of the CPU, it leaves the queuing network, and when
the process becomes ready again it is inserted at the tail of the same queue which it relinquished
earlier.
5. If the process uses all the quantum time, it is pre-empted and inserted at the end of the next lower
level queue. This next lower level queue will have a time quantum which is more than that of the
previous higher level queue.
6. This scheme will continue until the process completes or it reaches the base level queue.
 At the base level queue the processes circulate in round robin fashion until they complete and
leave the system. Processes in the base level queue can also be scheduled on a first come first
served basis.
 Optionally, if a process blocks for I/O, it is 'promoted' one level, and placed at the end of the
next-higher queue. This allows I/O bound processes to be favored by the scheduler and allows
processes to 'escape' the base level queue.
For scheduling, the scheduler always starts picking up processes from the head of the highest level queue.
Only if the highest level queue has become empty will the scheduler take up a process from the next lower
level queue. The same policy is implemented for picking up in the subsequent lower level queues.
Meanwhile, if a process comes into any of the higher level queues, it will preempt a process in the lower
level queue.
Also, a new process is always inserted at the tail of the top level queue with the assumption that it will
complete in a short amount of time. Long processes will automatically sink to lower level queues based on
their time consumption and interactivity level. In the multilevel feedback queue a process is given just one
chance to complete at a given queue level before it is forced down to a lower level queue.
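The mechanics of demotion can be sketched compactly (Python; the three queues and quanta of 8 and 16 follow the worked example above, with the base queue's FCFS behavior modeled as an unbounded quantum, and the task names are illustrative):

```python
from collections import deque

queues = [deque(), deque(), deque()]
quanta = [8, 16, float("inf")]      # base queue is effectively FCFS

def enqueue(process):
    queues[0].append(process)       # new work enters the top-level queue

def schedule_once():
    """Run one slice of the highest-priority non-empty queue."""
    for level, q in enumerate(queues):
        if not q:
            continue
        name, remaining = q.popleft()
        slice_ = min(quanta[level], remaining)
        remaining -= slice_         # "run" the process for its slice
        if remaining > 0:           # exhausted its quantum: demote one level
            queues[min(level + 1, 2)].append((name, remaining))
        return name, slice_
    return None

enqueue(("editor", 5))              # short burst: finishes in queue 0
enqueue(("compile", 40))            # long job: sinks to queue 2
while (step := schedule_once()):
    print(step)
```

The sketch models only the demotion path; a fuller model would also implement the aging (promotion) of long-waiting processes described above.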
Multiple-Processor Scheduling
 If multiple CPUs are available, the scheduling problem is correspondingly more complex. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling. Consider a system with an I/O device attached to the private bus of one processor. Processes wishing to use that device must be scheduled to run on that processor; otherwise the device would not be available.
 If several identical processors are available, then load sharing can occur. It would be possible to
provide a separate queue for each processor.
 In this case, however, one processor could be idle, with an empty queue, while another processor was
very busy.
 To prevent this situation, we use a common ready queue. All processes go into one queue and are
scheduled onto any available processor.
 In such a scheme, one of two scheduling approaches may be used. In one approach, each processor is
self-scheduling.
 Each processor examines the common ready queue and selects a process to execute. If we have
multiple processors trying to access and update a common data structure, each processor must be
programmed very carefully.
 We must ensure that two processors do not choose the same process, and that processes are not lost from the queue. The other approach avoids this problem by appointing one processor as scheduler for the other processors, thus creating a master-slave structure.
 Some systems carry this structure one step further, by having all scheduling decisions, I/O processing, and other system activities handled by one single processor, the master server.
 The other processors only execute user code. This asymmetric multiprocessing is far simpler than
symmetric multiprocessing, because only one processor accesses the system data structures,
alleviating the need for data sharing.
Real-Time Scheduling
 Real-time computing is divided into two types. Hard real-time systems are required to complete a
critical task within a guaranteed amount of time.
 Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler then either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as resource reservation. Such a guarantee requires that the scheduler know exactly how long each type of operating-system function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time.
 Such a guarantee is impossible in a system with secondary storage or virtual memory because these
subsystems cause unavoidable and unforeseeable variation in the amount of time to execute a
particular process.
 Therefore, hard real-time systems are composed of special-purpose software running on hardware
dedicated to their critical process, and lack the full functionality of modern computers and operating
systems.
 Soft real-time computing is less restrictive.
 It requires that critical processes receive priority over less fortunate ones.
 Although adding soft real-time functionality to a time-sharing system may cause an unfair allocation
of resources and may result in longer delays, or even starvation, for some processes, it is at least
possible to achieve.
 The result is a general-purpose system that can also support multimedia, high-speed interactive
graphics, and a variety of tasks that would not function acceptably in an environment that does not
support soft real-time computing.
 Implementing soft real-time functionality requires careful design of the scheduler and related aspects
of the operating system.
 First, the system must have priority scheduling, and real-time processes must have the highest
priority.
 The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
 Second, the dispatch latency must be small. The smaller the latency, the faster a real-time process can start executing once it is runnable.
 It is relatively simple to ensure that the former property holds. However, ensuring the latter property
is much more involved.
 The problem is that many operating systems, including most versions of UNIX, are forced to wait for
either a system call to complete or for an I/O block to take place before doing a context switch. The
dispatch latency in such systems can be long, since some system calls are complex and some I/O
devices are slow.
 To keep dispatch latency low, we need to allow system calls to be preemptible.
 There are several ways to achieve this goal.
 One is to insert preemption points in long-duration system calls, which check to see whether a high-
priority process needs to be run. If so, a context switch takes place and, when the high priority
process terminates, the interrupted process continues with the system call.
 Preemption points can be placed at only “safe” locations in the kernel, that is, only where kernel data structures are not being modified. Even with preemption points, dispatch latency can be large, because only a few preemption points can be practically added to a kernel.
 Another method for dealing with preemption is to make the entire kernel preemptible. To ensure correct operation, all kernel data structures must be protected through the use of various synchronization mechanisms.
 With this method, the kernel can always be preemptible, because any kernel data being updated are protected from modification by the high-priority process. This is the method used in Solaris 2.
 But what happens if the higher-priority process needs to read or modify kernel data that are currently
being accessed by another, lower-priority process? The high priority process would be waiting for a
lower-priority one to finish. This situation is known as priority inversion. In fact, there could be a
chain of processes, all accessing resources that the high-priority process needs.
 This problem can be solved via the priority-inheritance protocol, in which all these processes (the
processes that are accessing resources that the high-priority process needs) inherit the high priority
until they are done with the resource in question.
 When they are finished, their priority reverts to its natural value.
 The conflict phase of dispatch latency has three components:
1. Preemption of any process running in the kernel
2. Low-priority processes releasing resources needed by the high-priority process
3. Context switching from the current process to the high-priority process.
 As an example, in Solaris 2, the dispatch latency with preemption disabled is over 100 milliseconds.
However, the dispatch latency with preemption enabled is usually reduced to 2 milliseconds.
What are the options for real-time scheduling?
A number of scheduling concepts have been developed for implementation in a real-time operating system
(RTOS). The most commonly encountered is the pre-emptive scheduler even though it is not inherently a
real-time algorithm in contrast to, for example, deadline scheduling, which aims to ensure that critical
threads are executed within a given timeframe.
Desktop operating systems are designed around the concept of fairness – that no application should be
starved of processing cycles by another. These systems tend to use round-robin scheduling, in which each
task will run for a set period of time before being forced to yield access to the processor so that execution
can switch to a different task that is ready to run. Once all tasks that are not blocked from running have been
allotted a time-slice, execution resumes with the first task and the cycle continues.
In a real-time system, it is generally acceptable to starve less important tasks of processor cycles if there are
critical tasks with work to do – although determining how 'unimportant' a task really is can be problematic
for guaranteeing overall system stability.
How does the typical scheduler operate?
The simplest possible scheduler conceptually is the main() loop – it simply cycles through a series of
functions. As long as the critical functions execute within the maximum allowable processing latency of the
system, the loop will provide satisfactory performance. However, every logical task within the system is given the same execution priority and will consume processor cycles even if it has no work to do. It becomes very difficult to guarantee that the loop will finish execution within the maximum allowable
latency for all situations. Applications also become difficult to maintain beyond a certain size. At this point,
it makes sense to break the application down into discrete tasks and use an RTOS scheduler to control their
execution.
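A bare-bones version of such a loop might look like this (Python; the task names and the 10 ms period are illustrative): every task runs each cycle at the same effective priority, whether or not it has work to do.

```python
import time

def read_sensors():   pass      # illustrative tasks standing in for real work
def update_control(): pass
def log_telemetry():  pass

TASKS = [read_sensors, update_control, log_telemetry]
PERIOD = 0.010                  # assumed maximum allowable loop latency: 10 ms

while True:
    start = time.monotonic()
    for task in TASKS:          # every task runs every cycle, at one priority
        task()
    elapsed = time.monotonic() - start
    time.sleep(max(0.0, PERIOD - elapsed))   # pad the cycle to a fixed period
```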
A pre-emptive RTOS works on the basis that the task with the highest priority and which is ready to run will
be the one that is scheduled for execution. Typically, the RTOS will examine the list of tasks after any
change of task status – usually after a system call or an interrupt. For example, a task may relinquish control
of a mutual-exclusion semaphore (mutex) on which a higher-priority task is blocked. The RTOS will note
that the high-priority task is now ready to run and pick it for scheduling. That task will continue execution
until it is replaced by a higher-priority task, yields the processor, or becomes blocked again. Because the
task can remain running, it is possible that it could starve other tasks of execution time – a risk that system
designers need to take into account. Conversely, the RTOS guarantees that the most critical thread that is
ready to run will be able to access the processor as soon as it requires it.
What are the common pitfalls in scheduling?
In principle, it is possible to analyze a system for potential scheduling problems and to ensure that the
system will meet its deadlines. However, the analysis is greatly complicated by any inter-processor
communication. Basic rate-monotonic analysis, one of the earlier theories used for determining
schedulability – and the subject of one of the 20 most commonly cited papers in computer science – can
only guarantee schedulability for tasks that do not share resources. In practice, most systems demand shared
access to memory objects and peripherals that make schedulability, as well as the tendency to deadlock,
difficult to predict.
One problem encountered with conventional pre-emptive RTOS schedulers is that of priority inversion. In
this situation, a low-priority task obtains access to a shared resource but is pre-empted by a higher priority
task, blocking all other tasks that need that resource. If a critical task requires that resource, it cannot run
until the low-priority task has released the mutex. But until activity has subsided far enough to allow the
low-priority task to run, it will be unable to continue far enough to release the mutex. During this time, the
effective priority of the critical task is reduced to that of the low-priority thread: hence priority inversion.
One workaround, although it can introduce other schedulability problems if implemented without
safeguards, is to use the priority-inheritance protocol. This mode provides any thread that owns a mutex with
the same priority as a more important task that is blocked on it until the semaphore is released.
Many RTOS implementations support priority inheritance or a close relative of the technique, the priority
ceiling protocol, which prevents a low-priority task from being elevated to the highest possible priority in
the system. There are dangers in using the protocol: designers need to ensure that a normally low-priority
task will not simply hog a resource and keep running indefinitely in a state in which it cannot easily be pre-
empted.
There are also subtleties in implementation. If an application is moved from a single-core to a dual-core processor, the priority-ceiling protocol alone cannot guarantee mutual exclusion, so a distributed priority-ceiling protocol has to be used instead.
Because of the problems of analyzing schedulability in asynchronous, interrupt-driven real-time systems,
many systems that have to guarantee dependable behavior resort to some form of strict time-sharing. In this scenario, important tasks are guaranteed a number of cycles within a period of time, even if they have nothing to do, just in case they need to respond to a problem. ARINC 653 avionics systems have used this approach for years, and a number of automotive systems have adopted the FlexRay architecture, which is based on a similar time-triggered approach.
Each partition in an ARINC 653 system has its own dedicated, protected memory space and each partition
can run a multitasking system. Vital functions usually have dedicated partitions. Even with such rigidly
enforced partitions, timing problems can still arise through interactions with hardware. One problem that has
been identified in a paper by GE Aviation and Wind River Systems lies in the use of direct memory access
(DMA). If a partition towards the end of its time-slice decides to initiate a long DMA transfer, the partition
that runs immediately afterwards can stall because the DMA hardware has exclusive access to the memory
bus – effectively shortening the new partition’s time-slice and creating the potential for it to miss its own
deadline.
The recommendation in this case is to transfer the responsibility for setting up DMA transfers to a system-
level task that takes into account the amount of time a partition has remaining before it is forced to
relinquish the processor.
Similarly, interrupt handling can upset the operation of an otherwise strictly time-sliced system. A number
of systems prevent all but the system timer, which is used to help manage scheduling, from being able to
assert an interrupt. Others may record the interrupt and then allow the affected system to poll for the
associated data when it next runs.

More Related Content

What's hot

Operating systems basics (Graphical User Interfaces (GUIs) GUI Tools Applic...
Operating systems basics (Graphical User Interfaces (GUIs)  GUI Tools  Applic...Operating systems basics (Graphical User Interfaces (GUIs)  GUI Tools  Applic...
Operating systems basics (Graphical User Interfaces (GUIs) GUI Tools Applic...Maryam Fida
 
Computer Science Class 11 India PPT
Computer Science Class 11 India PPTComputer Science Class 11 India PPT
Computer Science Class 11 India PPTRat Devil
 
operating systems By ZAK
operating systems By ZAKoperating systems By ZAK
operating systems By ZAKTabsheer Hasan
 
Operating Systems Basics
Operating Systems BasicsOperating Systems Basics
Operating Systems Basicsnishantsri
 
Computer skills vocabulary
Computer skills vocabularyComputer skills vocabulary
Computer skills vocabularyEllie Simons
 
Basic computer and_intenet_terminologies
Basic computer and_intenet_terminologiesBasic computer and_intenet_terminologies
Basic computer and_intenet_terminologiesaticar
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating SystemDivya S
 
Bba i-introduction to computer-u-3-functions operating systems
Bba  i-introduction to computer-u-3-functions operating systemsBba  i-introduction to computer-u-3-functions operating systems
Bba i-introduction to computer-u-3-functions operating systemsRai University
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecturevenkateswarlu G
 
Principles of operating system
Principles of operating systemPrinciples of operating system
Principles of operating systemAnil Dharmapuri
 

What's hot (20)

Introduction to computer system
Introduction to computer systemIntroduction to computer system
Introduction to computer system
 
Operating systems basics (Graphical User Interfaces (GUIs) GUI Tools Applic...
Operating systems basics (Graphical User Interfaces (GUIs)  GUI Tools  Applic...Operating systems basics (Graphical User Interfaces (GUIs)  GUI Tools  Applic...
Operating systems basics (Graphical User Interfaces (GUIs) GUI Tools Applic...
 
Computer Science Class 11 India PPT
Computer Science Class 11 India PPTComputer Science Class 11 India PPT
Computer Science Class 11 India PPT
 
Os by nishant raghav
Os by nishant raghavOs by nishant raghav
Os by nishant raghav
 
operating systems By ZAK
operating systems By ZAKoperating systems By ZAK
operating systems By ZAK
 
Chapter 2 operating systems
Chapter 2 operating systemsChapter 2 operating systems
Chapter 2 operating systems
 
Organization of a computer
Organization of a computerOrganization of a computer
Organization of a computer
 
OPERATING SYSTEM
OPERATING SYSTEMOPERATING SYSTEM
OPERATING SYSTEM
 
computer Unit 7
computer Unit 7computer Unit 7
computer Unit 7
 
Introduction of operating system
Introduction of operating systemIntroduction of operating system
Introduction of operating system
 
Operating Systems Basics
Operating Systems BasicsOperating Systems Basics
Operating Systems Basics
 
operating system
operating systemoperating system
operating system
 
operating system
operating systemoperating system
operating system
 
Pankaj kumar
Pankaj kumar Pankaj kumar
Pankaj kumar
 
Computer skills vocabulary
Computer skills vocabularyComputer skills vocabulary
Computer skills vocabulary
 
Basic computer and_intenet_terminologies
Basic computer and_intenet_terminologiesBasic computer and_intenet_terminologies
Basic computer and_intenet_terminologies
 
Introduction to Operating System
Introduction to Operating SystemIntroduction to Operating System
Introduction to Operating System
 
Bba i-introduction to computer-u-3-functions operating systems
Bba  i-introduction to computer-u-3-functions operating systemsBba  i-introduction to computer-u-3-functions operating systems
Bba i-introduction to computer-u-3-functions operating systems
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecture
 
Principles of operating system
Principles of operating systemPrinciples of operating system
Principles of operating system
 

Similar to Operating system

Similar to Operating system (20)

introduce computer .pptx
introduce computer .pptxintroduce computer .pptx
introduce computer .pptx
 
Unit 1 q&a
Unit  1 q&aUnit  1 q&a
Unit 1 q&a
 
Os unit 1
Os unit 1Os unit 1
Os unit 1
 
Ch1
Ch1Ch1
Ch1
 
Application software and system software
Application software and system softwareApplication software and system software
Application software and system software
 
Computer Fundamental
Computer Fundamental Computer Fundamental
Computer Fundamental
 
Cs1 3-operating systems
Cs1 3-operating systemsCs1 3-operating systems
Cs1 3-operating systems
 
Fundamentals of Computers & Information System
Fundamentals of Computers & Information System  Fundamentals of Computers & Information System
Fundamentals of Computers & Information System
 
Operating System Unit 1
Operating System Unit 1Operating System Unit 1
Operating System Unit 1
 
LEC 1.pptx
LEC 1.pptxLEC 1.pptx
LEC 1.pptx
 
Operating systems
Operating systems Operating systems
Operating systems
 
Ch1 - OS.pdf
Ch1 - OS.pdfCh1 - OS.pdf
Ch1 - OS.pdf
 
209979479 study-material
209979479 study-material209979479 study-material
209979479 study-material
 
chapter 1 intoduction to operating system
chapter 1 intoduction to operating systemchapter 1 intoduction to operating system
chapter 1 intoduction to operating system
 
Operating System Lecture Notes
Operating System Lecture NotesOperating System Lecture Notes
Operating System Lecture Notes
 
Introduction to OS 1.ppt
Introduction to OS 1.pptIntroduction to OS 1.ppt
Introduction to OS 1.ppt
 
MYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptxMYSQL DATABASE Operating System Part2 (1).pptx
MYSQL DATABASE Operating System Part2 (1).pptx
 
Ch1
Ch1Ch1
Ch1
 
operating systems
operating systemsoperating systems
operating systems
 
L-3 BCE OS FINAL.ppt
L-3 BCE OS FINAL.pptL-3 BCE OS FINAL.ppt
L-3 BCE OS FINAL.ppt
 

Recently uploaded

High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...RajaP95
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAbhinavSharma374939
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 

Recently uploaded (20)

High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog Converter
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 

Operating system

  • 1. M.TECH (HPTU) UNIT 1 Operating System and Case Study (Pee Game) 1 | P a g e Operating System Figure 1: Block Diagram OS An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs. It is a software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers. An operating system performs these services for applications:  In a multitasking operating system where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn.  It manages the sharing of internal memory (like RAM) among multiple applications.  It handles input and output to and from attached hardware devices, such as hard disks, printers, and dial-up ports.  It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.  It can offload the management of what are called batch jobs (for example, printing) so that the initiating application is freed from this work.  On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time. All major computer platforms (hardware and software) require and sometimes include an operating system, and operating systems must be developed with different features to meet the specific needs of various form factors. Common desktop operating systems:  Windows is Microsoft’s flagship operating system, the de facto standard for home and business computers. Introduced in 1985, the GUI-based OS has been released in many versions since then. The user-friendly Windows 95 was largely responsible for the rapid development of personal computing.
  • 2. M.TECH (HPTU) UNIT 1 Operating System and Case Study (Pee Game) 2 | P a g e  Mac OS is the operating system for Apple's Macintosh line of personal computers and workstations.  Linux is a Unix-like operating system that was designed to provide personal computer users a free or very low-cost alternative. Linux has a reputation as a very efficient and fast-performing system. A Mobile OS allows smartphones, tablet PCs and other mobile devices to run applications and programs. Mobile operating systems include Apple iOS, Google Android, BlackBerry OS and Windows 10 Mobile. Functions of Operating System 1. Booting: Booting is a process of starting the computer operating system starts the computer to work. It checks the computer and makes it ready to work. 2. Memory Management Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array of words or bytes where each word or byte has its own address. Main memory provides a fast storage that can be accessed directly by the CPU. For a program to be executed, it must in the main memory. An Operating System does the following activities for memory management  Keeps tracks of primary memory, i.e., what part of it are in use by whom, what part are not in use.  In multiprogramming, the OS decides which process will get memory when and how much.  Allocates the memory when a process requests it to do so.  De-allocates the memory when a process no longer needs it or has been terminated. 3. File Management A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directions. An Operating System does the following activities for file management −  Keeps track of information, location, uses, status etc. The collective facilities are often known as file system.  Decides who gets the resources.  Allocates the resources.  De-allocates the resources. 4. Security By means of password and similar other techniques, it prevents unauthorized access to programs and data. 5. Disk Management Operating system manages the disk space. It manages the stored files and folders in a proper way. 6. Processor Management In multiprogramming environment, the OS decides which process gets the processor when and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management
 Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process no longer requires it.

7. Device Management
The operating system also controls all devices attached to the computer. The hardware devices are controlled with the help of small pieces of software called device drivers. The OS manages device communication via their respective drivers. It does the following activities for device management:
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates each device in an efficient way.
 De-allocates devices.

8. Print Control
The operating system also controls printing. If a user issues two print commands at a time, it does not mix the data of the two files; it prints them separately.

9. Providing an Interface
The user interface controls how you input data and instructions and how information is displayed on the screen. The operating system offers two types of interface to the user:
a. Graphical user interface (GUI): provides a visual environment for communicating with the computer. It uses windows, icons, menus and other graphical objects to issue commands.
b. Command-line interface (CLI): provides an interface for communicating with the computer by typing commands.

Different Types of System

Simple Batch Systems
Early computers were (physically) enormously large machines run from a console. The common input devices were card readers and tape drives. The common output devices were line printers, tape drives, and card punches. The users of such systems did not interact directly with the computer. Rather, the user prepared a job—which consisted of the program, the data, and some control information about the nature of the job (control cards)—and submitted it to the computer operator. The job would usually be in the form of punch cards. At some later time (perhaps minutes, hours, or days), the output appeared. The output consisted of the result of the program, as well as a dump of memory and registers in case of a program error.

The operating system in these early computers was fairly simple. Its major task was to transfer control automatically from one job to the next. The operating system was always resident in memory (Figure 1.1). To speed up processing, jobs with similar needs were batched together and were run through the computer as a group. Thus, the programmers would leave their programs with the operator.
The operator would sort programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer. A batch operating system thus normally reads a stream of separate jobs (from a card reader, for example), each with its own control cards that predefine what the job does.

Figure 2: Memory Layout for a Simple Batch System

When a job is complete, its output is usually printed (on a line printer, for example). The defining feature of a batch system is the lack of interaction between the user and the job while that job is executing. The job is prepared and submitted, and at some later time the output appears. The delay between job submission and job completion (called turnaround time) may result from the amount of computing needed or from delays before the operating system starts to process the job.

In this execution environment, the CPU is often idle. This idleness occurs because mechanical I/O devices are intrinsically slower than electronic devices. Even a slow CPU works in the microsecond range, with thousands of instructions executed per second. A fast card reader, on the other hand, might read 1200 cards per minute (20 cards per second). Thus, the difference in speed between the CPU and its I/O devices may be three orders of magnitude or more. Over time, of course, improvements in technology resulted in faster I/O devices. Unfortunately, CPU speeds increased even faster, so the problem was not only unresolved but exacerbated.

The introduction of disk technology helped in this regard. Rather than the cards being read from the card reader directly into memory and the job then being processed, cards are read directly from the card reader onto the disk. The location of the card images is recorded in a table kept by the operating system. When a job is executed, the operating system satisfies its requests for card-reader input by reading from the disk. Similarly, when the job asks the printer to output a line, that line is copied into a system buffer and written to the disk. When the job is completed, the output is actually printed. This form of processing is called spooling; the name is an acronym for simultaneous peripheral operation on-line. Spooling, in essence, uses the disk as a huge buffer, reading as far ahead as possible on input devices and storing output files until the output devices are able to accept them.
Spooling is also used for processing data at remote sites. The CPU sends the data via communication paths to a remote printer (or accepts an entire input job from a remote card reader). The remote processing is done at its own speed, with no CPU intervention. The CPU just needs to be notified when the processing is completed, so that it can spool the next batch of data.

Spooling overlaps the I/O of one job with the computation of other jobs. Even in a simple system, the spooler may be reading the input of one job while printing the output of a different job. During this time, still another job (or jobs) may be executed, reading its "cards" from disk and "printing" its output lines onto the disk. Spooling has a direct beneficial effect on the performance of the system. For the cost of some disk space and a few tables, the computation of one job can overlap with the I/O of other jobs. Thus, spooling can keep both the CPU and the I/O devices working at much higher rates.

Advantages:
 Maximum processor utilization.
 The setup time for jobs is saved.
 Performance increases, since jobs are sequenced together.

Disadvantages:
 Difficult to debug.
 One job affects all the pending jobs.
 A job could enter an infinite loop, and the others would never be processed.
 Lack of interaction between the user and the job.
 The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
 Difficult to provide the desired priority.

Simple Batch System
 Use of high-level languages and magnetic tapes.
 Jobs are batched together by type of language.
 An operator was hired to perform the repetitive tasks of loading jobs, starting the computer, and collecting the output (operator-driven shop).
 It was not feasible for users to inspect memory or patch programs directly.

Operation of Simple Batch Systems:
 The user submits a job (written on cards or tape) to a computer operator.
 The computer operator places a batch of several jobs on an input device.
 A special program, the monitor, manages the execution of each program in the batch.
 Monitor utilities are loaded when needed.
 The resident monitor is always in main memory and available for execution.
Multiprogrammed Batch Systems
In a multiprogrammed batch system, multiple programs (or jobs) of different users can be executed simultaneously. The multiple jobs that are to run simultaneously must be kept in main memory, and the operating system must manage them properly. If several jobs are ready to run, the processor must decide which one to run.

In a multiprogrammed batch system, the operating system keeps multiple jobs in main memory at a time. Many jobs may enter the system, and in general main memory is too small to accommodate all of them. So the jobs that enter the system are kept initially on the disk in the job pool. In other words, the job pool consists of all jobs residing on disk awaiting allocation of main memory. When the operating system selects a job from the job pool, it loads that job into memory for execution. Normally, the number of jobs in main memory is smaller than the number of jobs in the job pool. If several jobs are ready to be brought into memory and there is not enough room for all of them, then the system requires memory management. Similarly, if many jobs are ready to run at the same time, the system must schedule these jobs.

The processor picks and begins to execute one of the jobs in main memory. Some jobs may have to wait for certain tasks (such as an I/O operation) to complete. In a simple batch (non-multiprogrammed) system, the processor would sit idle. In a multiprogrammed system, the CPU switches to a second job and begins to execute it. Similarly, when the second job needs to wait, the processor is switched to a third job, and so on. The processor also checks the status of previous jobs, i.e., whether they are completed or not. A multiprogrammed system takes less time to complete the same set of jobs than a simple batch system.

Multiprogrammed systems do not allow interaction between processes (or jobs) while they are running on the computer. Multiprogramming increases CPU utilization and provides an environment in which the various computer resources are utilized effectively. The CPU always remains busy running one of the jobs until all jobs complete their execution. The hardware must have facilities to support multiprogramming.

Since several jobs are available on disk, the OS can select which job to run; this leads to job scheduling. The idea is to ensure that the CPU is always busy, since a single job may not be able to keep it busy on its own. Multiprogramming keeps the CPU busy as follows. There are several jobs in the job pool; some are selected (by one of many possible policies) to be loaded into memory, since memory is much smaller than the job pool. A job is picked from memory, and the CPU executes it until it must wait, for example on an I/O operation. Since the CPU would otherwise sit waiting, it loads another job and executes it until that one must wait, and so on. The interleaved traces of these jobs look like spaghetti, and the task of the OS is to manage them neatly.
There are several issues to resolve: which jobs should sit in memory, which job the CPU should pick up, how memory is to be managed, what happens to a job that is suspended, what happens when it must be activated again, and how to ensure that the multiple running jobs affect each other only in a limited way.

 The I/O devices are much slower than the processor, leaving the processor idle most of the time, waiting for the I/O devices to finish their operations.
 Uniprogramming: the processor starts executing a program, and when it reaches an I/O instruction it must wait until that I/O operation has fully completed before proceeding.
 Multiprogramming: in contrast, when a job needs to wait for an I/O operation, the processor switches to another job and executes it; when that job in turn reaches an I/O operation, the processor swaps again, and so on.
 A multiprogrammed batch system must rely on certain hardware capabilities, such as process switching, to swap between program executions.
 Interrupt-driven I/O or DMA helps a great deal in multiprogramming environments, allowing the processor to issue an I/O command and proceed with executing another program.

A simple batch operating system provides automatic job sequencing so that the processor can work more effectively, but the processor still sometimes becomes idle. The input and output devices are responsible for this problem, because they are slow in comparison to the processor. For example, for a program that processes a file of records, executing roughly a hundred machine instructions per record, the system could waste more than 96% of its time waiting for the input and output devices to transfer data to and from the file. The figure shows this situation for a single program, known as uniprogramming: the processor spends much of its time idle, waiting each time an I/O instruction must complete before the program can proceed. This inefficiency is avoidable.

The memory is capable of holding the resident monitor (OS) and one user program. Suppose instead that there is room for the resident monitor and two user programs. Then, even while one job waits for its input and output devices, the processor can switch to the other job, which is likely not waiting for I/O. There is also the option of expanding the memory to hold more than two programs. This approach is termed multiprogramming, or multitasking, and it is the basis of modern operating systems.
It is easier to understand the concept of multiprogramming with an example. Suppose a system has 250 Mbytes of unused memory, a disk, a terminal, and a printer. Three programs, JOB1, JOB2 and JOB3, are submitted for execution at the same time. JOB2 and JOB3 place only light demands on the processor, and JOB3 uses the disk and printer continuously. In a simple batch environment the jobs would be executed sequentially: JOB1 would be done within five minutes; JOB2 would have to wait those 5 minutes and then run for the next 15 minutes to complete; JOB3 could then start after 20 minutes and would complete at the 30-minute mark. Comparing the average resource utilization and response times (the accompanying table is not reproduced here), it is easy to see that gross underutilization occurs for every resource when averaged over the required 30-minute period.

Time Sharing Systems
A time sharing system allows many users to share the computer resources simultaneously. In other words, time sharing refers to the allocation of computer resources in time slots to several programs at once. For example, a mainframe computer may have many users logged on to it, each using the mainframe's resources (memory, CPU, and so on). The users feel that they have exclusive use of the CPU, even though in reality one CPU is being shared among all of them.

Time sharing systems were developed to provide interactive use of a computer system. A time-shared system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. As the system switches rapidly from one user to the next, a short time slot is given to each user for execution.

A time sharing system provides direct access to a large number of users, with CPU time divided among all the users on a scheduled basis. The OS allocates a slice of time to each user; when this time expires, it passes control to the next user on the system. The time allowed is extremely small, so the users get the impression that each of them is the sole owner of the CPU. This short period of time, during which a user gets the attention of the CPU, is known as a time slice or a quantum. The concept of a time sharing system is shown in the figure.
In the figure, user 5 is active while user 1, user 2, user 3, and user 4 are in a waiting state, and user 6 is in a ready state. As soon as the time slice of user 5 is completed, control moves on to the next ready user, i.e. user 6. At that point user 2, user 3, user 4, and user 5 are in a waiting state and user 1 is in a ready state. The process continues in the same way, and so on.

Time-shared systems are more complex than multiprogrammed systems. In time-shared systems, multiple processes are managed simultaneously, which requires adequate management of main memory so that processes can be swapped in or out within a short time.

Advantages of time-sharing operating systems:
 Provide the advantage of quick response.
 Avoid duplication of software.
 Reduce CPU idle time.

Disadvantages of time-sharing operating systems:
 Problems of reliability.
 Questions of security and integrity of user programs and data.
 Problems of data communication.
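To make the time slice (quantum) idea concrete, the following small C program simulates round-robin rotation among a few jobs. It is only an illustrative sketch: the three burst times and the 4-unit quantum are made-up values, and a real time-sharing kernel is driven by clock interrupts rather than by a loop.

/* Minimal round-robin time-slice simulation (illustrative only). */
#include <stdio.h>

#define NPROC   3
#define QUANTUM 4   /* time slice, in arbitrary time units */

int main(void) {
    int burst[NPROC] = {10, 5, 8};   /* remaining CPU time per job */
    int clock = 0, remaining = NPROC;

    while (remaining > 0) {
        for (int i = 0; i < NPROC; i++) {
            if (burst[i] == 0) continue;            /* already finished */
            int run = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            printf("t=%2d: P%d runs for %d units\n", clock, i + 1, run);
            clock += run;
            burst[i] -= run;
            if (burst[i] == 0) {                    /* job completes */
                printf("t=%2d: P%d terminates\n", clock, i + 1);
                remaining--;
            }
        }
    }
    return 0;
}

Each job runs for at most one quantum before control rotates to the next ready job, which is exactly the impression-of-exclusive-CPU effect described above.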
Parallel Operating Systems
Parallel operating systems are a type of computer processing platform that breaks large tasks into smaller pieces that are done at the same time, in different places, and by different mechanisms. Such systems are sometimes also described as "multi-core" platforms. This type of system is usually very efficient at handling very large files and complex numerical codes. It is most commonly seen in research settings, where central server systems handle many different jobs at once, but it can be useful any time multiple computers are doing similar jobs and connecting to shared infrastructure simultaneously. Parallel systems can be difficult to set up at first and can require a degree of expertise, but most technology experts agree that, over the long term, they are much more cost-effective and efficient than their single-computer counterparts.

A parallel operating system works by dividing sets of calculations into smaller parts and distributing them between the machines on a network. To facilitate communication between the processor cores and memory arrays, routing software has to either share its memory, by assigning the same address space to all of the networked computers, or distribute its memory, by assigning a different address space to each processing core. Sharing memory allows the operating system to run very quickly, but it is usually not as powerful. When using distributed shared memory, processors have access both to their own local memory and to the memory of other processors; this distribution may slow the operating system down, but it is often more flexible and efficient.

The architecture of the software is typically built around a UNIX-based platform, which allows it to coordinate distributed loads between multiple computers in a network. Parallel systems are able to use software to manage all of the different resources of the computers running in parallel, such as memory, caches, storage space, and processing power. These systems also allow a user to directly interface with all of the computers in the network. Parallel operating systems are the interface between parallel computers and the applications (parallel or not) that are executed on them; they translate the hardware's capabilities into concepts usable by programming languages.

Distributed Operating Systems
A distributed operating system is a model in which distributed applications run on multiple computers linked by communications. A distributed operating system is an extension of the network operating system that supports higher levels of communication and integration among the machines on the network. These systems are referred to as loosely coupled systems: each processor has its own local memory, and processors communicate with one another through various communication lines, such as high-speed buses or telephone lines.
This system looks to its users like an ordinary centralized operating system, but it runs on multiple, independent central processing units (CPUs). By loosely coupled systems we mean that such computers possess no hardware connections at the CPU–memory bus level; they are connected by external interfaces that run under the control of software.

A distributed OS involves a collection of autonomous computer systems, capable of communicating and cooperating with each other through a LAN or WAN. A distributed OS provides a virtual machine abstraction to its users, along with wide sharing of resources such as computational capacity, I/O, files, etc. The structure shown in the figure contains a set of individual computer systems and workstations connected via communication systems; but from this structure alone we cannot say whether it is a distributed system, because it is the software, not the hardware, that determines whether a system is distributed. The users of a true distributed system should not know on which machine their programs are running or where their files are stored.

LOCUS and MICROS are classic examples of distributed operating systems. Using the LOCUS operating system, it was possible to access local and distant files in a uniform manner. This feature enabled a user to log on to any node of the network and utilize the resources of the network without reference to his or her location. MICROS provided sharing of resources in an automatic manner: jobs were assigned to different nodes of the whole system to balance the load across the nodes.

Advantages:
1. Sharing of resources.
2. Reliability.
3. Communication.
4. Computation speedup.
5. Better performance than a single system.
6. If one computer in the distributed system malfunctions or becomes corrupted, another node can take over its work.
7. More resources can be added easily.
8. Resources such as printers can be shared among multiple computers.

Disadvantages of distributed operating systems:
 Security problems due to sharing.
 Some messages can be lost in the network.
 Bandwidth is another problem: if data volume grows, the network wiring may have to be replaced, which tends to become expensive.
 Overloading is another problem in distributed operating systems.
 If a database is hosted on one local system and many users access it remotely in a distributed way, performance becomes slow.
 Databases in a distributed or network operating system are more difficult to administer than in a single-user system.

Examples of distributed operating systems:
 Windows Server 2003
 Windows Server 2008
 Windows Server 2012

Real-Time Operating Systems
Real-time operating systems are very fast, quick-response systems. These systems are used in environments where a large number of events (generally external) must be accepted and processed in a short time. Real-time processing requires quick transactions and is characterized by supplying an immediate response. For example, a measurement from a petroleum refinery indicating that the temperature is getting too high may demand immediate attention to avoid an explosion. (BSP: board support package)

In a real-time operating system there is little swapping of programs between primary and secondary memory. Most of the time, processes remain in primary memory in order to provide quick response; therefore, memory management in a real-time system is less demanding compared to other systems.

The primary functions of the real-time operating system are to:
1. Manage the processor and other system resources to meet the requirements of an application.
2. Synchronize with and respond to system events.
3. Move data efficiently among processes and coordinate those processes.

Real-time systems are used in environments where a large number of events (generally external to the computer system) must be accepted and processed with a quick response. Such systems have to be multitasking. So the primary function of the real-time operating system is to manage certain system resources, such as the CPU, memory, and time. Each resource must be shared among the competing processes to accomplish the overall function of the system. Apart from these primary functions, there are certain secondary functions that are not mandatory but are included to enhance performance:
1. To provide efficient management of RAM.
2. To provide exclusive access to computer resources.

The term real time refers to the technique of updating files with transaction data immediately after the event to which the data relates. A few more examples of real-time processing are:
1. Airline reservation systems.
2. Air traffic control systems.
3. Systems that provide immediate updating.
4. Systems that provide up-to-the-minute information on stock prices.
5. Defence applications such as RADAR.

Real-time operating systems mostly use preemptive priority scheduling. They support more than one scheduling policy and often allow the user to set parameters associated with such policies, such as the time slice in Round Robin scheduling, where each task in the task queue is scheduled for up to a maximum time, set by the time-slice parameter, in a round-robin manner. Hundreds of priority levels are commonly available for scheduling, and some specific tasks can be marked as non-preemptive.

Real-time systems are divided into two kinds:
 Hard Real-Time Systems
 Soft Real-Time Systems

Hard Real-Time Systems: A hard real-time system is a purely deterministic, time-constrained system. For example, if the user expects the output for a given input within 10 seconds, the system should process the input data and deliver the output exactly by the 10th second. Here 10 seconds is the deadline to complete processing of the given data: the system should not deliver the output at the 11th second, nor at the 9th, but exactly at the 10th. In a hard real-time system, meeting the deadline is critical; if the deadline is not met, the system has failed. Another example is a defence system: if a missile is launched and is supposed to reach its destination at 4:00, but because of system performance it arrives at 4:05, those 5 minutes of difference could shift the point of impact from one place to another, or even to another country. Here the system must meet the deadline.

Soft Real-Time Systems: In a soft real-time system, meeting the deadline is not compulsory for every task every time, but every process should still be processed and produce a result. Even soft real-time systems cannot miss the deadline for every task; depending on priority, a task should meet its deadline or may occasionally miss it. If the system misses its deadlines every time, its performance becomes so poor that users cannot use it. Good examples of soft real-time systems are personal computers, audio and video systems, etc.
Memory Management: In simple words, this concerns how memory is allocated for every program that is to be run and processed in memory (RAM or ROM). Schemes such as demand paging, virtual memory, and segmentation fall under this management.

Segmentation: A memory management scheme in which physical memory is divided into logical segments according to the structure and length of the program. Segmentation avoids wasting unused memory, makes sharing easy, and provides protection for the program. However, main memory sometimes cannot allocate space for a segment, because segments are of variable length and can be large.

Paging: In this scheme, physical memory is divided into fixed-size pages. Paging provides the functions of segmentation while also solving its disadvantages. (A small address-translation sketch appears at the end of this section.)

Virtual memory is a memory management scheme in which part of a secondary storage device is used as if it were physical memory, when a program lacks enough physical memory to run.

Process Management: A thread contains a set of instructions that can execute independently of other flows of control. A collection of threads is called a process; in other words, a process comprises the sequential execution of a program and its state, under the control of the operating system. Every operating system works by executing a series of processes on the processor and giving results back to main memory. Operating systems contain two types of processes:
System processes: mainly responsible for the working of the operating system itself.
Application processes: invoked when a particular application is started; they execute with the help of other system processes.
The operating system should handle each process given by the user and return results, processing them according to priority. A scheduling algorithm takes care of deciding which process runs, and inter-process communication (IPC) mechanisms (semaphores, message queues, shared memory, pipes, FIFOs) take care of coordination and resource sharing among processes.

File Management: How files are placed in memory, which file may be used by which user, file permissions (read, write and execute), and the arrangement of files in secondary and primary memory using a file system: all of these functions are performed by file management.

Device Management: Management of devices such as tape drives, hard drives, optical drives, and memory devices, as well as processor speed, is done by the operating system.
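To make the paging scheme above concrete, here is a minimal address-translation sketch in C. It assumes, purely for illustration, 4 KB pages and a single-level page table with made-up frame numbers; real systems use multi-level tables and hardware TLBs.

/* Minimal single-level paging sketch: translate a virtual address to a
   physical address. Page size and frame numbers are illustrative. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u     /* 4 KB pages */
#define NUM_PAGES 8

/* page_table[p] holds the frame that virtual page p maps to */
static uint32_t page_table[NUM_PAGES] = {3, 7, 0, 5, 2, 6, 1, 4};

uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;   /* which virtual page */
    uint32_t offset = vaddr % PAGE_SIZE;   /* offset is unchanged */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    uint32_t v = 2 * PAGE_SIZE + 123;      /* an address in page 2 */
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));
    return 0;
}

The key point the sketch shows is that only the page number is remapped; the offset within the page is carried over unchanged.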
Some of the facilities an RTOS provides:
 Priority-based scheduler
 System clock interrupt routine
 Deterministic behavior

Priority-based Scheduler
Most RTOSes have between 32 and 256 possible priorities for individual tasks/processes. The scheduler will run the task with the highest priority. When a running task gives up the CPU, the next highest priority task runs, and so on. The highest priority task in the system will have the CPU until:
 it runs to completion (i.e. it voluntarily gives up the CPU), or
 a higher priority task is made ready, in which case the original task is pre-empted by the new (higher priority) task.
As a developer, it is your job to assign task priorities such that your deadlines will be met.

 Ready to run: when a task has all the resources it needs to run but is not yet in the running state, it is called ready to run. This is the state just before running.
 Running: when a task is executing, it is said to be running.
 Blocked: when a task does not have enough resources to run, it is sent to the blocked state.

System Clock Interrupt Routines
The RTOS will typically provide some sort of system clock (anywhere from 500 µs to 100 ms) that allows you to perform time-sensitive operations. If you have a 1 ms system clock and you need to do a task every 50 ms, there is usually an API that allows you to say "in 50 ms, wake me up". The task then sleeps until the RTOS wakes it up. Note that being woken up does not ensure the task runs exactly at that time; that depends on its priority. If a task with a higher priority is currently running, it could be delayed.

Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100 tasks, it does not take any longer to switch context, determine the next highest priority task, and so on; in general, RTOS operations try to be O(1). One of the prime areas for deterministic behavior in an RTOS is interrupt handling: when an interrupt line is signalled, the RTOS immediately switches to the correct interrupt service routine and handles the interrupt without delay (regardless of the priority of the task currently running). Note that most hardware-specific ISRs are written by the developers on the project; the RTOS might already provide ISRs for serial ports, the system clock, and perhaps networking hardware, but anything specialized (pacemaker signals, actuators, etc.) would not be part of the RTOS. This is a gross generalization, and as with everything else there is a large variety of RTOS implementations; some RTOSes do things differently, but the description above should be applicable to a large portion of existing RTOSes.
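As a rough illustration of the priority-based scheduling just described, the sketch below picks the highest-priority ready task from a task table. The task names, priorities and states are invented for the example, and a real RTOS keeps per-priority ready queues so that the choice is O(1) rather than a linear scan.

/* Fixed-priority scheduling sketch: pick the highest-priority READY task.
   Task names, priorities and states are illustrative only. */
#include <stdio.h>

enum state { READY, RUNNING, BLOCKED };

struct task {
    const char *name;
    int priority;            /* higher number = higher priority */
    enum state st;
};

struct task *pick_next(struct task *t, int n) {
    struct task *best = NULL;
    for (int i = 0; i < n; i++)
        if (t[i].st == READY && (!best || t[i].priority > best->priority))
            best = &t[i];
    return best;              /* NULL means nothing is ready (idle) */
}

int main(void) {
    struct task tasks[] = {
        {"logger",   10, READY},
        {"sensor",   50, BLOCKED},   /* waiting on I/O */
        {"actuator", 40, READY},
    };
    struct task *next = pick_next(tasks, 3);
    printf("next task: %s\n", next ? next->name : "(idle)");
    return 0;
}

Here "sensor" has the highest priority but is blocked, so "actuator" is chosen; the moment "sensor" becomes ready it would pre-empt the lower-priority task.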
Process Management: Process Concept
A process is what a program becomes when it is loaded into memory from a secondary storage medium such as a hard disk drive or an optical drive. Each process has its own address space, which typically contains both program instructions and data. Even though an individual processor or processor core can only execute one program instruction at a time, a large number of processes can be executed over a relatively short period of time by briefly assigning each process to the processor in turn.

While a process is executing, it has complete control of the processor, but at some point the operating system needs to regain control, such as when it must assign the processor to the next process. Execution of a particular process will be suspended if that process requests an I/O operation, if an interrupt occurs, or if the process times out.

When a user starts an application program, the operating system's high-level scheduler (HLS) loads all or part of the program code from secondary storage into memory. It then creates a data structure in memory called a process control block (PCB) that will be used to hold information about the process, such as its current status and where in memory it is located. The operating system also maintains a separate process table in memory that lists all the user processes currently loaded. When a new process is created, it is given a unique process identification number (PID), and a new record is created for it in the process table which includes the address of the process control block in memory. As well as allocating memory space, loading the process, and creating the necessary data structures, the operating system must also allocate resources such as access to I/O devices and disk space if the process requires them. Information about the resources allocated to a process is also held within the process control block. The operating system's low-level scheduler (LLS) is responsible for allocating CPU time to each process in turn.

A process is basically a program in execution. The execution of a process must progress in a sequential fashion. A process is defined as an entity which represents the basic unit of work to be implemented in the system. To put it in simple terms, we write our computer programs in a text file, and when we execute the program it becomes a process, which performs all the tasks mentioned in the program. When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text and data. The figure shows a simplified layout of a process inside main memory.

1. Stack: The process stack contains temporary data such as method/function parameters, return addresses, and local variables.
2. Heap: Memory that is dynamically allocated to the process during its run time.
3. Text: The compiled program code; the current activity is represented by the value of the program counter and the contents of the processor's registers.
4. Data: The global and static variables.
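The four sections can be observed from a running C program by printing an address from each region. The exact addresses vary by OS and from run to run (with address-space randomization), so the output is illustrative only.

/* Print addresses from the four regions of a process image. */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                  /* data section */

int main(void) {                      /* main's code lives in the text section */
    int local_var = 7;                /* stack */
    int *heap_var = malloc(sizeof *heap_var);   /* heap */

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}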
Process Life Cycle
When a process executes, it passes through different states. These stages may differ between operating systems, and the names of the states are not standardized. In general, a process can be in one of the following five states at a time:

1. Start: The initial state, when a process is first started/created.
2. Ready: The process is waiting to be assigned to a processor. Ready processes are waiting for the operating system to allocate them the processor so that they can run. A process may come into this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU can be assigned to some other process.
3. Running: Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4. Waiting: The process moves into the waiting state if it needs to wait for a resource, such as user input, or for a file to become available.
5. Terminated or Exit: Once the process finishes its execution, or is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.

Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep track of a process, as listed below:
1. Process state: The current state of the process, i.e. whether it is ready, running, waiting, and so on.
2. Process privileges: Required to allow or disallow access to system resources.
3. Process ID: Unique identification for each process in the operating system.
4. Pointer: A pointer to the parent process.
5. Program counter: A pointer to the address of the next instruction to be executed for this process.
6. CPU registers: The various CPU registers whose contents must be saved for the process when it leaves the running state.
7. CPU scheduling information: Process priority and other scheduling information required to schedule the process.
8. Memory management information: Information such as the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.
9. Accounting information: The amount of CPU time used for process execution, time limits, execution ID, etc.
10. I/O status information: A list of the I/O devices allocated to the process.

The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems; a simplified diagram of a PCB is shown in the figure. The PCB is maintained for a process throughout its lifetime, and is deleted once the process terminates.
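As a rough sketch, a PCB mirroring the table above could be declared as the following C structure. The field set is hypothetical and heavily simplified; a real kernel's PCB (for example, Linux's task_struct) contains far more information.

/* A simplified, hypothetical PCB as a C struct, mirroring the table above. */
#include <stdint.h>

enum proc_state { P_START, P_READY, P_RUNNING, P_WAITING, P_TERMINATED };

struct pcb {
    int             pid;              /* process ID */
    enum proc_state state;            /* current process state */
    struct pcb     *parent;           /* pointer to parent process */
    uint64_t        program_counter;  /* address of next instruction */
    uint64_t        registers[16];    /* saved CPU register contents */
    int             priority;         /* CPU scheduling information */
    void           *page_table;       /* memory management information */
    uint64_t        cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status information */
};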
Process Scheduling
Process scheduling is a major element of process management, since the efficiency with which processes are assigned to the processor affects the overall performance of the system. It is essentially a matter of managing queues, with the aim of minimizing delay while making the most effective use of the processor's time. The operating system carries out four types of process scheduling:
 Long-term (high-level) scheduling
 Medium-term scheduling
 Short-term (low-level) scheduling
 I/O scheduling

The long-term scheduler determines which programs are admitted to the system for processing and, as such, controls the degree of multiprogramming. Before accepting a new program, the long-term scheduler must first decide whether the processor is able to cope effectively with another process. The more active processes there are, the smaller the percentage of the processor's time that can be allocated to each process. The long-term scheduler may limit the total number of active processes on the system in order to ensure that each process receives adequate processor time. New processes may subsequently be admitted as existing processes terminate or are suspended. If several programs are waiting for the long-term scheduler, the decision as to which job to admit first might be made on a first-come, first-served basis, or by using other criteria such as priority, expected execution time, or I/O requirements.

Medium-term scheduling is part of the swapping function. The term "swapping" refers to transferring a process out of main memory into virtual memory (secondary storage), or vice versa. This may occur when the operating system needs to make space for a new process, or in order to restore to main memory a process that was previously swapped out. Any process that is inactive or blocked may be swapped into virtual memory and placed in a suspend queue until it is needed again or until space becomes available. The swapped-out process is replaced in memory either by a new process or by one of the previously suspended processes.

The task of the short-term scheduler (sometimes referred to as the dispatcher) is to determine which process to execute next. This occurs each time the currently running process is halted. A process may cease execution because it requests an I/O operation, because it times out, or because a hardware interrupt has occurred. The objectives of short-term scheduling are to ensure efficient utilization of the processor and to provide an acceptable response time to users. Note that these objectives are not always completely compatible: on most systems a good user response time is more important than efficient processor utilization, and it may necessitate frequent switching between processes, which increases system overhead and reduces overall processor throughput.
Figure 3: Queuing Diagram for Scheduling

Operations on Processes
The operations on processes carried out by an operating system are primarily of two types:
1. Process creation
2. Process termination

1. Process Creation
Process creation is the task of creating new processes. There are different situations in which a new process is created, and different ways to create one. A new process can be created at the time of initialization of the operating system, or when system calls such as fork() are issued by other processes. The process that creates a new process using a system call is called the parent process, while the newly created process is called the child process. Child processes can themselves create new processes using system calls. A new process can also be created by the operating system based on a request received from the user. Process creation is very common in a running computer system, because corresponding to every task that is performed there is an associated process. For instance, a new process is created every time a user logs on to a computer system, an application program such as MS Word is started, or a document is printed.

2. Process Termination
Process termination is an operation in which a process is terminated after the execution of its last instruction. This operation is used to end any process. When a process is terminated, the resources that were being utilized by it are released by the operating system. When a child process terminates, it sends status information back to the parent process before terminating. A child process can also be terminated by the parent process if the task performed by the child is no longer needed. In addition, when a parent process terminates, it has to terminate its child processes as well, because a child process cannot run once its parent process has been terminated.
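The standard POSIX way to create and terminate a process is the fork()/exit()/wait() family of system calls, as in the following minimal example; the relative order of the parent's and child's output may vary between runs.

/* Process creation and termination with fork(), exit() and wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {            /* child */
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        exit(0);                      /* normal termination */
    } else {                          /* parent */
        int status;
        waitpid(pid, &status, 0);     /* collect child's exit status */
        printf("parent: child %d exited with status %d\n",
               pid, WEXITSTATUS(status));
    }
    return 0;
}

Note how the parent collects the child's status with waitpid(), which is exactly the "child sends status information back to the parent" step described above.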
Processes thus form the hierarchical (tree) structure shown in the figure. The termination of a process after all its instructions have executed successfully is called normal termination. However, there are instances when a process terminates due to some error; this is called abnormal termination of a process.

1. Process creation: A user request, or an already-running process, can create new processes. A parent process creates child processes using a system call; these, in turn, may create other processes, forming a tree of processes.
2. Process preemption: A process is preempted if an I/O event or a timeout occurs. The process then moves from the running state to the ready state, and the CPU loads another process from the ready state into the running state, if one is available.
3. Process blocking: When a process needs an I/O event during its execution, it moves from the running state to the waiting state, and another process is dispatched to the CPU.
4. Process termination: A process is terminated when it completes its execution. The following events can also cause termination of a process: OS action, a hardware interrupt, or a software interrupt.

Cooperating Processes
A process is said to be a cooperating process if it can affect, or be affected by, other processes in the system. A process that shares data with other processes is known as cooperating. Cooperation is done to provide information sharing, computational speedup, modularity, and convenience. To allow cooperation there must be some mechanism for communication (called IPC: inter-process communication) and for synchronizing the processes' actions.
 Cooperating processes are those that share state (they may or may not actually be "cooperating").
 Their behavior is nondeterministic: it depends on the relative execution sequence and cannot be predicted a priori.
 Their behavior may be irreproducible.
 Example: one process writes "ABC", another writes "CBA"; the interleaved output depends on scheduling.
When discussing concurrent processes, multiprogramming is as dangerous as multiprocessing unless you have tight control over the multiprogramming. Also bear in mind that smart I/O devices are as problematic as cooperating processes, since they share the memory.

Why permit processes to cooperate?
 To share resources:
o One computer, many users.
o One file of checking account records, many tellers.
 To do things faster:
o Read the next block while processing the current one.
o Divide a job into sub-jobs and execute them in parallel.

Advantages of Cooperating Processes:
Information sharing: Several users may wish to share the same information, e.g. a shared file. The OS needs to provide a way of allowing concurrent access.
Computation speedup: Some problems can be solved more quickly by subdividing them into smaller tasks that can be executed in parallel on several processors.
Modularity: The solution of a problem is structured into parts with well-defined interfaces, where the parts run in parallel.
Convenience: A user may run multiple processes to achieve a single goal, or a utility may invoke multiple components that interconnect via a pipe structure attaching the stdout of one stage to the stdin of the next, etc.

If we allow processes to execute concurrently and share data, then we must provide mechanisms to handle conflicts, e.g. writing and reading the same piece of data, and we must be prepared to handle inconsistent or corrupted data.

Threads
A thread is a flow of execution through the process's code, with its own program counter that keeps track of which instruction to execute next, its own system registers which hold its current working variables, and its own stack which contains its execution history. A thread shares with its peer threads some information, such as the code segment, data segment and open files. When one thread alters a memory item in a shared segment, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism; they represent a software approach to improving operating system performance by reducing overhead, since in other respects a thread is equivalent to a classical process. Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The figure shows the working of a single-threaded and a multithreaded process.

Difference between Process and Thread
1. A process is heavyweight, or resource intensive; a thread is lightweight, taking fewer resources than a process.
2. Process switching needs interaction with the operating system; thread switching does not.
3. In multiple processing environments, each process executes the same code but has its own memory and file resources; all threads of a process can share the same set of open files and child processes.
4. If one process is blocked, no other process can execute until the first process is unblocked; while one thread is blocked and waiting, a second thread in the same task can run.
5. Multiple processes without threads use more resources; multithreaded processes use fewer resources.
6. In multiple processes, each process operates independently of the others; one thread can read, write or change another thread's data.
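The sharing described above can be seen with POSIX threads: both threads in the minimal sketch below update one global counter, and a mutex keeps the shared update consistent. Compile with the -pthread option.

/* Two POSIX threads sharing one global counter. Without the mutex the
   result would be nondeterministic, as described above. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                        /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;                                  /* unused */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        counter++;                              /* protected shared update */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);         /* always 200000 */
    return 0;
}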
Advantages of Threads
 Threads minimize the context-switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 It is more economical to create and context-switch threads.
 Threads allow utilization of multiprocessor architectures on a greater scale and with greater efficiency.

Types of Thread
Threads are implemented in the following two ways:
 User-Level Threads − user-managed threads.
 Kernel-Level Threads − operating-system-managed threads acting on the kernel, the operating system core.

User-Level Threads
In this case, the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.

Advantages
 Thread switching does not require kernel-mode privileges.
 User-level threads can run on any operating system.
 Scheduling can be application-specific with user-level threads.
 User-level threads are fast to create and manage.

Disadvantages
 In a typical operating system, most system calls are blocking, so a blocking call by one thread blocks the whole process.
 A multithreaded application cannot take advantage of multiprocessing.
Kernel-Level Threads
In this case, thread management is done by the kernel; there is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process.

The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages
 The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
 If one thread in a process is blocked, the kernel can schedule another thread of the same process.
 Kernel routines themselves can be multithreaded.

Disadvantages
 Kernel threads are generally slower to create and manage than user threads.
 Transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Multithreading Models
Some operating systems provide a combined user-level thread and kernel-level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. There are three multithreading models:
 Many-to-many relationship
 Many-to-one relationship
 One-to-one relationship

Many-to-Many Model
The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads. The diagram shows a many-to-many threading model in which six user-level threads are multiplexed onto six kernel-level threads. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor machine. This model provides good concurrency, and when a thread performs a blocking system call, the kernel can schedule another thread for execution.
Many-to-One Model
The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors. If user-level thread libraries are implemented on an operating system whose kernel does not support threads, the many-to-one model is used.

One-to-One Model
In the one-to-one model there is a one-to-one relationship between user-level threads and kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one-to-one model.
Difference between User-Level and Kernel-Level Threads
1. User-level threads are faster to create and manage; kernel-level threads are slower to create and manage.
2. User-level threads are implemented by a thread library at the user level; kernel-level thread creation is supported by the operating system.
3. A user-level thread is generic and can run on any operating system; a kernel-level thread is specific to the operating system.
4. Multithreaded applications built on user-level threads cannot take advantage of multiprocessing; kernel routines themselves can be multithreaded.

Inter-Process Communication
A mechanism through which data is shared among the processes in a system is referred to as inter-process communication. Multiple processes communicate with each other to share data and resources. A set of functions is required for processes to communicate with each other. In multiprogramming systems, some common storage is used where processes can share data; the shared storage may be main memory or a shared file. Files are the most commonly used mechanism for data sharing between processes: one process can write to a file while another process reads data from the same file.

Various techniques can be used to implement inter-process communication. There are two fundamental models of inter-process communication in common use:
1. Shared Memory Model
2. Message Passing Model

Shared Memory Model
In the shared memory model, cooperating processes share a region of memory for exchanging information. Some operating systems use a supervisor call to create a shared memory space. Similarly, some operating systems use the file system to create a RAM disk, which is a virtual disk created in RAM; shared files placed on the RAM disk are actually stored in memory. Processes can share information by writing and reading data in the shared memory location or RAM disk.

Message Passing Model
In this model, data is shared between processes by sending and receiving messages between cooperating processes. The message passing mechanism is easier to implement than shared memory, but it is best suited to exchanging smaller amounts of data. In message passing, data is exchanged between processes through the kernel of the operating system, using system calls. Message passing is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the Internet could be designed so that chat participants communicate with each other by exchanging messages. It must be noted that the message passing technique is slower than the shared memory technique.
A message contains the following information:
 a header that identifies the sending and receiving processes
 a block of data
 a pointer to the block of data
 some control information about the process
Typically, inter-process communication is based on ports associated with processes. A port represents a queue of messages; ports are created and managed by the kernel, and processes communicate with each other through it. In the message passing mechanism, two operations are performed: sending a message and receiving a message. The functions send() and receive() are used to implement these operations.
Suppose P1 and P2 want to communicate with each other. A communication link must be created between them to send and receive messages. The communication link can be created in different ways; the most important methods are:
1. Direct model
2. Indirect model
3. Buffering
Inter-process communication techniques can be divided into various types:
1. Pipes
2. FIFO
3. Shared memory
4. Mapped memory
5. Message queues
6. Sockets
Pipes: Pipes date back to the earliest versions of the UNIX operating system and provide one-directional communication between processes on a single system. A pipe is created with the pipe system call, which returns a pair of file descriptors.
FIFO: A FIFO, or 'first in, first out', is a one-way flow of data. FIFOs are similar to pipes, the only difference being that a FIFO is identified in the file system by a name; in simple terms, FIFOs are 'named pipes'.
Shared memory: Shared memory is an efficient means of passing data between programs. A process creates an area in memory that is accessible by another process, and the processes then communicate by reading and writing to that memory space.
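The pipe mechanism described above is easy to demonstrate. Below is a minimal sketch in C: pipe() yields two file descriptors, the parent writes into one end after fork(), and the child reads from the other. The message is illustrative only.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];
    pipe(fd);                 /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {        /* child: the reader */
        close(fd[1]);
        read(fd[0], buf, sizeof buf);
        printf("child read: %s\n", buf);
        return 0;
    }
    close(fd[0]);             /* parent: the writer */
    write(fd[1], "ping", 5);  /* 5 bytes: "ping" plus its terminator */
    close(fd[1]);
    wait(NULL);               /* reap the child */
    return 0;
}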
Mapped memory: This method can be used to share memory or files between different processes in a Windows environment, through the 32-bit (Win32) API. This mechanism speeds up file access and also facilitates inter-process communication.
Message queues: With this method, a developer can pass messages between processes via a single queue or a number of message queues. The mechanism is managed by the system kernel, and an application program interface (API) coordinates the messages.
Sockets: We use this mechanism to communicate over a network, between a client and a server. This method provides a standard connection that is independent of the type of computer and the type of operating system used.
CPU Scheduling
CPU scheduling is the mechanism that allows one process to use the CPU while the execution of another process is on hold (in the waiting state), for example due to the unavailability of an I/O resource, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
Why do we need scheduling?
A typical process involves both I/O time and CPU time. In a uniprogramming system such as MS-DOS, time spent waiting for I/O is wasted and the CPU is idle during this time. In multiprogramming systems, one process can use the CPU while another is waiting for I/O. This is possible only with process scheduling.
Scheduling Criteria
There are many different criteria to consider when choosing the "best" scheduling algorithm:
 CPU utilization: To make the best use of the CPU and not waste any CPU cycles, the CPU should be working most of the time (ideally 100% of the time). In a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
 Throughput: The total number of processes completed per unit time, or the total amount of work done in a unit of time. This may range from 10 per second to 1 per hour, depending on the specific processes.
 Turnaround time: The amount of time taken to execute a particular process, i.e. the interval from the time of submission of the process to the time of its completion (wall-clock time).
 Waiting time: The sum of the periods a process spends waiting in the ready queue to acquire control of the CPU.
 Load average: The average number of processes residing in the ready queue, waiting for their turn on the CPU.
 Response time: The amount of time from when a request is submitted until the first response is produced. Note that this is the time until the first response, not until the completion of process execution (the final response).
In general, CPU utilization and throughput are maximized while the other factors are minimized for proper optimization.
1. CPU Utilization: We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
2. Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be 10 processes per second.
3. Turnaround Time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing input/output.
4. Waiting Time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does input/output; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response Time: Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the full response.
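As a quick worked example (the numbers are made up, and we assume the process does no I/O): suppose a process arrives at time 2, first gets the CPU at time 5, and completes at time 11 after a CPU burst of 6 time units. Then:

turnaround time = completion time - arrival time = 11 - 2 = 9
waiting time = turnaround time - burst time = 9 - 6 = 3
response time = first start time - arrival time = 5 - 2 = 3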
Scheduling criteria vary from one scheduler to another. There are many scheduling algorithms, and different scheduling algorithms have different properties; the selection of a proper scheduling algorithm can improve system performance. We must consider the properties of the various scheduling algorithms and of the computer system when selecting a particular algorithm. Many criteria have been suggested for evaluating scheduling algorithms. Some commonly used scheduling criteria are described below.
 CPU Utilization: The CPU must be kept as busy as possible performing different activities. The percentage of time the CPU is executing a process may range from 0 to 100 percent. CPU utilization is very important in real-time and multiprogramming systems; in a real-time system, CPU utilization should range from about 50 percent (lightly loaded system) to 95 percent (heavily loaded system). The load on a system thus affects CPU utilization: high CPU utilization is achieved on a heavily loaded system.
 Balanced Utilization: Balanced utilization represents the percentage of time all the resources are utilized. In addition to CPU utilization, the utilization of memory, I/O devices and other system resources is also considered.
 Throughput: The number of processes executed by the system in a specific period of time is called throughput. For long processes this rate may be one process per minute; similarly, for short processes it may be 100 processes per minute. Throughput must be evaluated on the basis of average process length.
 Turnaround Time: Turnaround time represents the average period of time a process takes to execute. The turnaround time of a process is computed by subtracting the time the process was created from the time it terminated. Turnaround time is inversely proportional to throughput.
 Waiting Time: Waiting time represents the average period of time a process spends waiting in the ready queue for a chance to execute. It does not include the time a process is executing on the CPU or performing I/O. Waiting time is also a very important factor in measuring system performance.
 Response Time: Response time represents the average time taken by the system to start responding to a user request. Response time is the criterion considered in interactive systems. For example, an ATM is an interactive system, used in banks for withdrawal of money, and the user expects the system to respond quickly. In an interactive system, turnaround time is not the best criterion, since it mostly depends on the speed of the user's responses; turnaround time therefore has little importance in interactive systems, and the response time should be as low as possible.
 Predictability: Predictability represents the consistency of the average response time in an interactive system. It is another measure of system performance, because users prefer consistency. Suppose an interactive system normally responds within a millisecond, but on some occasions takes 5 to 15 milliseconds or more; the user may be confused. Most users prefer a system with a reasonable and predictable response time to a system that is faster on average but highly variable in response time.
 Fairness: Fairness represents the degree to which all processes are given an equal opportunity to execute. This criterion is especially important in time-shared systems.
 Priorities: Processes with higher priorities must be given preference for execution.
Disk Scheduling Algorithms
First Come, First Served (FCFS)
FCFS is the simplest of all the disk scheduling algorithms. In FCFS, requests are addressed in the order they arrive in the disk queue.
All incoming requests are placed at the end of the queue, and whatever request is next in the queue is served next. This algorithm does not provide the best results. To determine the total head movement, you simply add up the number of tracks the head crossed in moving from one request to the next. In this case the head went from track 50 to 95 to 180 and so on; from 50 to 95 it moved 45 tracks. If you tally up the tracks crossed for the whole queue, you find the total head movement needed to finish the entire request sequence: in this example, a total head movement of 640 tracks. The disadvantage of this algorithm is shown by the oscillation from track 50 up to track 180, back down to track 11, up to 123 and then to 64. As you will soon see, this is the worst-performing algorithm one can use.
Figure 4: FCFS
FCFS for CPU scheduling:
 Jobs are executed on a first come, first served basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average wait time is high.
The wait time of each process is as follows (wait time = service time - arrival time):
Process P0: 0 - 0 = 0
Process P1: 5 - 1 = 4
Process P2: 8 - 2 = 6
Process P3: 16 - 3 = 13
Average wait time: (0 + 4 + 6 + 13) / 4 = 5.75
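The table above is easy to reproduce in code. In the minimal sketch below, the arrival times come from the table, and the burst times are inferred from the service (start) times; P3's burst is a guess, but it does not affect the waiting times.

#include <stdio.h>

#define N 4

int main(void) {
    /* arrival times from the table; bursts inferred from start times */
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {5, 3, 8, 6};
    int time = 0, total_wait = 0;

    for (int i = 0; i < N; i++) {      /* FCFS: run in arrival order */
        int wait = time - arrival[i];  /* wait = service - arrival */
        printf("P%d waits %d\n", i, wait);
        total_wait += wait;
        time += burst[i];              /* run to completion */
    }
    printf("average wait time: %.2f\n", (double)total_wait / N);
    return 0;
}

Run as written, this prints the waits 0, 4, 6 and 13 and the average 5.75 from the table above.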
Advantages:
 Every request gets a fair chance.
 No indefinite postponement.
Disadvantages:
 Does not try to optimize seek time.
 May not provide the best possible service.
Shortest Job Next (SJN)
 Also known as shortest job first, or SJF.
 A non-preemptive scheduling algorithm (a preemptive variant, shortest remaining time first, also exists).
 Best approach to minimize waiting time.
 Easy to implement in batch systems, where the required CPU time is known in advance.
 Impossible to implement in interactive systems, where the required CPU time is not known.
 The processor must know in advance how much time each process will take.
The wait time of each process is as follows (wait time = service time - arrival time):
Process P0: 3 - 0 = 3
Process P1: 0 - 0 = 0
Process P2: 16 - 2 = 14
Process P3: 8 - 3 = 5
Average wait time: (3 + 0 + 14 + 5) / 4 = 5.50
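Because SJN is non-preemptive, it is straightforward to simulate: at each scheduling decision, pick the shortest burst among the processes that have already arrived. In the sketch below, the arrival times come from the table above and the bursts are inferred from its start times (P2's exact burst is a guess; any value over 8 gives the same waits).

#include <stdio.h>

#define N 4

int main(void) {
    int arrival[N] = {0, 0, 2, 3};   /* from the table above */
    int burst[N]   = {5, 3, 10, 8};  /* inferred; P2's burst is a guess */
    int done[N] = {0};
    int time = 0, finished = 0, total_wait = 0;

    while (finished < N) {
        int pick = -1;   /* shortest burst among arrived, unfinished */
        for (int i = 0; i < N; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick < 0 || burst[i] < burst[pick]))
                pick = i;
        if (pick < 0) { time++; continue; }  /* CPU idle: advance clock */
        total_wait += time - arrival[pick];  /* wait = start - arrival */
        time += burst[pick];                 /* run to completion */
        done[pick] = 1;
        finished++;
    }
    printf("average waiting time: %.2f\n", (double)total_wait / N);
    return 0;
}

With these inputs the simulation reproduces the average waiting time of 5.50 shown above.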
Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems.
 Each process is assigned a priority; the process with the highest priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource requirement.
The wait time of each process is as follows (wait time = service time - arrival time):
Process P0: 9 - 0 = 9
Process P1: 6 - 1 = 5
Process P2: 14 - 2 = 12
Process P3: 0 - 0 = 0
Average wait time: (9 + 5 + 12 + 0) / 4 = 6.5
Round Robin Scheduling
 Round robin is a preemptive process scheduling algorithm.
 Each process is given a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted, and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
The wait time of each process is as follows (wait time = service time - arrival time):
Process P0: (0 - 0) + (12 - 3) = 9
Process P1: (3 - 1) = 2
Process P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
Process P3: (9 - 3) + (17 - 12) = 11
Average wait time: (9 + 2 + 12 + 11) / 4 = 8.5
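Round robin can be sketched with a simple cycle over the processes: each pass gives every unfinished process at most one quantum. To keep the sketch short, the burst times are hypothetical and all processes are assumed to arrive at time 0, so it does not reproduce the table above.

#include <stdio.h>

#define N 4
#define QUANTUM 3

int main(void) {
    int remaining[N] = {5, 3, 8, 6};  /* hypothetical burst times */
    int time = 0, finished = 0;

    while (finished < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;          /* already done */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                    /* i runs for 'slice' */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at t=%d\n", i, time);
                finished++;
            }
            /* otherwise i is preempted and waits for the next pass */
        }
    }
    return 0;
}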
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm; they make use of other existing algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithm.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The process scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to that queue.
Multilevel Feedback Queue Scheduling
Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
For example, consider a multilevel feedback queue scheduler with three queues, numbered 0 to 2. The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will only be executed if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2; a process in queue 1 will in turn be preempted by a process arriving for queue 0.
A process entering the ready queue is put in queue 0, where it is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and put into queue 2. Processes in queue 2 are run on an FCFS basis, only when queues 0 and 1 are empty.
This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes.
Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1.
In general, a multilevel feedback queue scheduler is defined by the following parameters:
 the number of queues;
 the scheduling algorithm for each queue;
 the method used to determine when to upgrade a process to a higher-priority queue;
 the method used to determine when to demote a process to a lower-priority queue;
 the method used to determine which queue a process will enter when it needs service.
The definition of a multilevel feedback queue scheduler makes it the most general CPU scheduling algorithm: it can be configured to match a specific system under design. Unfortunately, it also requires some means of selecting values for all the parameters to define the best scheduler. Although the multilevel feedback queue is the most general scheme, it is also the most complex.
Advantages: A process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
Disadvantage: Moving processes around queues produces more CPU overhead.
Algorithm (a sketch of the demotion rule follows this list). Multiple FIFO queues are used, and the operation is as follows:
1. A new process is inserted at the end (tail) of the top-level FIFO queue.
2. At some stage the process reaches the head of the queue and is assigned the CPU.
3. If the process completes within the time quantum of the given queue, it leaves the system.
4. If the process voluntarily relinquishes control of the CPU, it leaves the queuing network, and when the process becomes ready again it is inserted at the tail of the same queue it relinquished earlier.
5. If the process uses all of its quantum, it is preempted and inserted at the end of the next lower-level queue. This next lower-level queue has a time quantum larger than that of the previous higher-level queue.
6. This scheme continues until the process completes or reaches the base-level queue.
 At the base-level queue, processes circulate in round-robin fashion until they complete and leave the system. Processes in the base-level queue can also be scheduled on a first come, first served basis.
 Optionally, if a process blocks for I/O, it is 'promoted' one level and placed at the end of the next-higher queue. This allows I/O-bound processes to be favored by the scheduler and allows processes to 'escape' the base-level queue.
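As promised above, here is a minimal sketch of the demotion rule for the three-queue example (quanta of 8 ms and 16 ms, with FCFS at the base level). The burst times are hypothetical; the sketch only traces which queue each process ends up completing in, not the full interleaving of processes.

#include <stdio.h>

#define N 4

int main(void) {
    int burst[N] = {5, 12, 30, 8};   /* hypothetical CPU bursts, ms */
    int quantum[2] = {8, 16};        /* quanta for queue 0 and queue 1 */

    for (int i = 0; i < N; i++) {
        int left = burst[i], level = 0;
        /* a process that uses its whole quantum is demoted one level;
           queue 2 (the base level) runs processes to completion */
        while (level < 2 && left > quantum[level]) {
            left -= quantum[level];
            level++;
        }
        printf("P%d (burst %2d ms) completes in queue %d\n",
               i, burst[i], level);
    }
    return 0;
}

With these inputs, bursts of 8 ms or less finish in queue 0, bursts between 8 and 24 ms finish in queue 1, and the 30 ms burst sinks to queue 2, matching the thresholds described above.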
For scheduling, the scheduler always starts picking processes from the head of the highest-level queue. Only if the highest-level queue is empty will the scheduler take up a process from the next lower-level queue, and the same policy applies for the subsequent lower-level queues. Meanwhile, if a process arrives in any of the higher-level queues, it will preempt a process in a lower-level queue. Also, a new process is always inserted at the tail of the top-level queue, on the assumption that it will complete in a short amount of time. Long processes automatically sink to lower-level queues based on their time consumption and interactivity level. In the multilevel feedback queue, a process is given just one chance to complete at a given queue level before it is forced down to a lower-level queue.
Multiple-Processor Scheduling
 If multiple CPUs are available, the scheduling problem is correspondingly more complex. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling. Consider a system with an I/O device attached to the private bus of one processor: processes wishing to use that device must be scheduled to run on that processor, otherwise the device would not be available.
 If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor. In that case, however, one processor could sit idle with an empty queue while another processor was very busy.
 To prevent this situation, we use a common ready queue: all processes go into one queue and are scheduled onto any available processor.
 In such a scheme, one of two scheduling approaches may be used. In the first approach, each processor is self-scheduling: it examines the common ready queue and selects a process to execute. With multiple processors trying to access and update a common data structure, each processor must be programmed very carefully; we must ensure that two processors do not choose the same process and that processes are not lost from the queue.
 The other approach avoids this problem by appointing one processor as scheduler for the other processors, creating a master-slave structure.
 Some systems carry this structure one step further, having all scheduling decisions, I/O processing, and other system activities handled by one single processor, the master server. The other processors only execute user code. This asymmetric multiprocessing is far simpler than symmetric multiprocessing, because only one processor accesses the system data structures, alleviating the need for data sharing.
Real-Time Scheduling
 Real-time computing is divided into two types. Hard real-time systems are required to complete a critical task within a guaranteed amount of time.
 Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler then either admits the process, guaranteeing that the process will complete on time, or rejects the request as impossible. This is known as resource reservation.
 Such a guarantee requires that the scheduler know exactly how long each type of operating-system function takes to perform, and therefore each operation must be guaranteed to take a maximum amount of time.
 Such a guarantee is impossible in a system with secondary storage or virtual memory, because these subsystems cause unavoidable and unforeseeable variation in the amount of time needed to execute a particular process.
 Therefore, hard real-time systems are composed of special-purpose software running on hardware dedicated to their critical process, and they lack the full functionality of modern computers and operating systems.
 Soft real-time computing is less restrictive: it only requires that critical processes receive priority over less critical ones.
 Although adding soft real-time functionality to a time-sharing system may cause an unfair allocation of resources and may result in longer delays, or even starvation, for some processes, it is at least achievable. The result is a general-purpose system that can also support multimedia, high-speed interactive graphics, and a variety of tasks that would not function acceptably in an environment without soft real-time support.
 Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the operating system. First, the system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may.
 Second, the dispatch latency must be small. The smaller the latency, the faster a real-time process can start executing once it is runnable.
 It is relatively simple to ensure that the former property holds; ensuring the latter is much more involved. The problem is that many operating systems, including most versions of UNIX, are forced to wait either for a system call to complete or for an I/O block to take place before doing a context switch. The dispatch latency in such systems can be long, since some system calls are complex and some I/O devices are slow.
 To keep dispatch latency low, we need to allow system calls to be preemptible. There are several ways to achieve this goal. One is to insert preemption points in long-duration system calls, which check whether a high-priority process needs to run. If so, a context switch takes place and, when the high-priority process terminates, the interrupted process continues with the system call.
 Preemption points can be placed only at "safe" locations in the kernel, where kernel data structures are not being modified. Even with preemption points, dispatch latency can be large, because only a few preemption points can practically be added to a kernel.
 Another method for dealing with preemption is to make the entire kernel preemptible. To ensure correct operation, all kernel data structures must then be protected through the use of various synchronization mechanisms. With this method, the kernel can always be preemptible, because any kernel data being updated is protected from modification by the high-priority process. This is the method used in Solaris 2.
 But what happens if the higher-priority process needs to read or modify kernel data that is currently being accessed by another, lower-priority process? The high-priority process must wait for the lower-priority one to finish. This situation is known as priority inversion. In fact, there could be a chain of processes, all accessing resources that the high-priority process needs.
 This problem can be solved via the priority-inheritance protocol, in which all these processes (the processes accessing resources that the high-priority process needs) inherit the high priority until they are done with the resource in question. When they are finished, their priority reverts to its natural value.
 The conflict phase of dispatch latency has three components:
1. preemption of any process running in the kernel;
2. low-priority processes releasing resources needed by the high-priority process;
3. context switching from the current process to the high-priority process.
 As an example, in Solaris 2 the dispatch latency with preemption disabled is over 100 milliseconds, while with preemption enabled it is usually reduced to 2 milliseconds.
What are the options for real-time scheduling?
A number of scheduling concepts have been developed for implementation in a real-time operating system (RTOS). The most commonly encountered is the pre-emptive scheduler, even though it is not inherently a real-time algorithm, in contrast to, for example, deadline scheduling, which aims to ensure that critical threads are executed within a given timeframe.
Desktop operating systems are designed around the concept of fairness: no application should be starved of processing cycles by another. These systems tend to use round-robin scheduling, in which each task runs for a set period of time before being forced to yield the processor so that execution can switch to a different task that is ready to run. Once all tasks that are not blocked have been allotted a time-slice, execution resumes with the first task and the cycle continues.
In a real-time system, it is generally acceptable to starve less important tasks of processor cycles if there are critical tasks with work to do, although determining how 'unimportant' a task really is can be problematic for guaranteeing overall system stability.
How does the typical scheduler operate?
The simplest possible scheduler, conceptually, is the main() loop: it simply cycles through a series of functions. As long as the critical functions execute within the maximum allowable processing latency of the system, the loop provides satisfactory performance. However, every logical task within the system has the same execution priority and consumes processor cycles even if it has no work to do. It becomes very difficult to guarantee that the loop will finish execution within the maximum allowable latency in all situations, and applications become difficult to maintain beyond a certain size. At this point, it makes sense to break the application down into discrete tasks and use an RTOS scheduler to control their execution.
A pre-emptive RTOS works on the basis that the task with the highest priority that is ready to run is the one scheduled for execution. Typically, the RTOS examines the list of tasks after any change of task status, usually after a system call or an interrupt. For example, a task may relinquish control of a mutual-exclusion semaphore (mutex) on which a higher-priority task is blocked. The RTOS notes that the high-priority task is now ready to run and picks it for scheduling. That task continues executing until it is replaced by a higher-priority task, yields the processor, or becomes blocked again. Because the task can remain running, it could starve other tasks of execution time, a risk that system designers need to take into account. Conversely, the RTOS guarantees that the most critical thread that is ready to run will be able to access the processor as soon as it requires it.
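A minimal sketch of that scheduling rule follows; the task names, priorities and the two-step scenario are made up. After each status change, the scheduler simply picks the highest-priority task that is ready to run.

#include <stdio.h>

#define N 3

typedef struct {
    const char *name;
    int priority;        /* higher number = more important */
    int ready;           /* 0 = blocked, 1 = ready to run */
} task_t;

static task_t tasks[N] = {
    {"logger",  1, 1},
    {"control", 5, 0},   /* blocked, e.g. waiting on a mutex */
    {"comms",   3, 1},
};

/* called after every change of task status (system call, interrupt) */
static task_t *schedule(void) {
    task_t *best = NULL;
    for (int i = 0; i < N; i++)
        if (tasks[i].ready && (!best || tasks[i].priority > best->priority))
            best = &tasks[i];
    return best;
}

int main(void) {
    printf("running: %s\n", schedule()->name);   /* -> comms */
    tasks[1].ready = 1;       /* mutex released: control unblocks */
    printf("running: %s\n", schedule()->name);   /* -> control */
    return 0;
}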
What are the common pitfalls in scheduling?
In principle, it is possible to analyze a system for potential scheduling problems and to ensure that the system will meet its deadlines. However, the analysis is greatly complicated by any inter-process communication. Basic rate-monotonic analysis, one of the earlier theories used for determining schedulability (and the subject of one of the 20 most commonly cited papers in computer science), can only guarantee schedulability for tasks that do not share resources. In practice, most systems demand shared access to memory objects and peripherals, which makes schedulability, as well as the tendency to deadlock, difficult to predict.
One problem encountered with conventional pre-emptive RTOS schedulers is that of priority inversion. In this situation, a low-priority task obtains access to a shared resource but is pre-empted by a higher-priority task, blocking all other tasks that need that resource. If a critical task requires that resource, it cannot run until the low-priority task has released the mutex. But until activity has subsided far enough to allow the low-priority task to run, that task will be unable to proceed far enough to release the mutex. During this time, the effective priority of the critical task is reduced to that of the low-priority thread: hence priority inversion.
One workaround, although it can introduce other schedulability problems if implemented without safeguards, is to use the priority-inheritance protocol. This mode gives any thread that owns a mutex the same priority as the most important task blocked on it, until the semaphore is released. Many RTOS implementations support priority inheritance or a close relative of the technique, the priority-ceiling protocol, which prevents a low-priority task from being elevated to the highest possible priority in the system. There are dangers in using the protocol: designers need to ensure that a normally low-priority task does not simply hog a resource and keep running indefinitely in a state in which it cannot easily be pre-empted. There are also subtleties in implementation: if an application is moved from a single-core to a dual-core processor, the priority-ceiling protocol cannot guarantee mutual exclusion, so a distributed priority-ceiling protocol has to be used instead.
Because of the difficulty of analyzing schedulability in asynchronous, interrupt-driven real-time systems, many systems that have to guarantee dependable behavior resort to some form of strict time-sharing. In this scenario, important tasks are guaranteed a number of cycles within a period of time in which to run, even if they have nothing to do, just in case they do need to respond to a problem. ARINC 653 avionics systems have used this approach for years, and a number of automotive systems have adopted the FlexRay architecture, which is based on a similar time-triggered approach. Each partition in an ARINC 653 system has its own dedicated, protected memory space, and each partition can run a multitasking system. Vital functions usually have dedicated partitions.
Even with such rigidly enforced partitions, timing problems can still arise through interactions with hardware. One problem that has been identified in a paper by GE Aviation and Wind River Systems lies in the use of direct memory access (DMA). If a partition towards the end of its time-slice decides to initiate a long DMA transfer, the partition that runs immediately afterwards can stall because the DMA hardware has exclusive access to the memory bus, effectively shortening the new partition's time-slice and creating the potential for it to miss its own deadline.
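POSIX exposes the priority-inheritance protocol described above through mutex attributes. The minimal sketch below requests it on a single mutex; support depends on the platform, and error checking is omitted.

#include <pthread.h>

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t lock;

    pthread_mutexattr_init(&attr);
    /* ask for priority inheritance: a low-priority owner of 'lock' is
       boosted to the priority of the highest-priority blocked waiter */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);

    pthread_mutex_lock(&lock);     /* the owner may now inherit priority */
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&lock);   /* priority reverts to its natural value */

    pthread_mutex_destroy(&lock);
    pthread_mutexattr_destroy(&attr);
    return 0;
}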
The recommendation in this case is to transfer the responsibility for setting up DMA transfers to a system-level task that takes into account the amount of time a partition has remaining before it is forced to relinquish the processor.
Similarly, interrupt handling can upset the operation of an otherwise strictly time-sliced system. A number of systems prevent all but the system timer, which is used to help manage scheduling, from being able to assert an interrupt. Others may record the interrupt and then allow the affected system to poll for the associated data when it next runs.