An operating system is a software component of a computer system that is responsible
for the management of various activities of the computer and the sharing of computer
resources. It hosts the several applications that run on a computer and handles the
operations of computer hardware. Users and application programs access the services
offered by the operating system by means of system calls and application
programming interfaces. Users interact with operating systems through Command
Line Interfaces (CLIs) or Graphical User Interfaces (GUIs). In short, an
operating system enables user interaction with computer systems by acting as an
interface between users or application programs and the computer hardware. What
follows is an overview of operating systems, their evolution, and their main types.
Operating systems offer a number of services to application programs and users.
Applications access these services through application programming interfaces (APIs)
or system calls. By invoking these interfaces, the application can request a service
from the operating system, pass parameters, and receive the results of the operation.
Users may also interact with the operating system through a software user
interface, either by typing commands at a command line interface (CLI) or by using a
graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and
desktop computers, the user interface is generally considered part of the operating
system. On large multi-user systems like Unix and Unix-like systems, the user
interface is generally implemented as an application program that runs outside the
operating system. (Whether the user interface should be included as part of the
operating system is a point of contention.)
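To make the system-call idea above concrete, here is a minimal Python sketch in which a program requests services from the operating system, passes parameters, and receives results. The file name and message are invented for the example; `os.open`, `os.write` and `os.close` are thin wrappers over the corresponding POSIX system calls.

```python
import os
import tempfile

# Ask the operating system, via system calls, to create a file and write to it.
# We pass parameters in, the kernel does the work, and we receive the result
# of the operation back.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)   # request: create/open a file
written = os.write(fd, b"hello, kernel\n")            # request: write bytes
os.close(fd)                                          # request: release the file

print(written)  # the kernel reports how many bytes it wrote
```

Each call crosses from the application into the operating system and back, exactly the request/parameters/result pattern described above.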
EVOLUTION OF OPERATING SYSTEMS
THE MAINFRAME ERA
It is generally thought that the first operating system used for real work was GM-
NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. 
Most other early operating systems for IBM mainframes were also produced by
customers.
Early operating systems were very diverse, with each vendor or customer producing
one or more operating systems specific to their particular mainframe computer.
Every operating system, even from the same vendor, could have radically different
models of commands, operating procedures, and such facilities as debugging aids.
Typically, each time the manufacturer brought out a new machine, there would be a
new operating system, and most applications would have to be manually adjusted,
recompiled, and retested.
This state of affairs continued until the 1960s, when IBM, already a leading hardware
vendor, stopped work on its existing systems and put all its effort into developing
the System/360 series of machines, all of which used the same instruction set
architecture. IBM also intended to develop a single operating system for the new
hardware, the OS/360. The problems encountered in the development of the OS/360
are legendary, and are described by Fred Brooks in The Mythical Man-Month—a
book that has become a classic of software engineering. Because of performance
differences across the hardware range and delays with software development, a whole
family of operating systems was introduced instead of a single OS/360.
IBM wound up releasing a series of stop-gaps followed by three longer-lived
operating systems:
• OS/MFT for mid-range systems. This had one successor, OS/VS1, which was
discontinued in the 1980s.
• OS/MVT for large systems. This was similar in most ways to OS/MFT
(programs could be ported between the two without being re-compiled), but
had more sophisticated memory management and a time-sharing facility, TSO.
MVT had several successors including the current z/OS.
• DOS/360 for small System/360 models had several successors including the
current z/VSE. It was significantly different from OS/MFT and OS/MVT.
IBM maintained full compatibility with the past, so that programs developed in the
sixties can still run under z/VSE (if developed for DOS/360) or z/OS (if developed
for OS/MFT or OS/MVT) with no change.
OTHER MAINFRAME OPERATING SYSTEMS
Control Data Corporation developed the SCOPE operating system in the 1960s, for
batch processing. In cooperation with the University of Minnesota, the KRONOS and
later the NOS operating systems were developed during the 1970s, which supported
simultaneous batch and timesharing use. Like many commercial timesharing systems,
their interface was an extension of the DTSS time-sharing system, one of the
pioneering efforts in timesharing and programming languages.
In the late 1970s, Control Data and the University of Illinois developed the PLATO
system, which used plasma panel displays and long-distance time sharing networks.
PLATO was remarkably innovative for its time; the shared memory model of
PLATO's TUTOR programming language allowed applications such as real-time chat
and multi-user graphical games.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC
operating systems. Like all early mainframe systems, these were batch-oriented
systems that managed magnetic drums, disks, card readers and line printers. In the
1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale
time sharing, also patterned after the Dartmouth BASIC system.
Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control
Program) operating system. The B5000 was a stack machine designed to exclusively
support high-level languages with no machine language or assembler and indeed the
MCP was the first OS to be written exclusively in a high-level language (ESPOL, a
dialect of ALGOL). MCP also introduced many other ground-breaking innovations,
such as being the first commercial implementation of virtual memory. MCP is still in
use today in the Unisys ClearPath/MCP line of computers.
Project MAC at MIT, working with GE, developed Multics and General Electric
Comprehensive Operating Supervisor (GECOS), which introduced the concept of
ringed security privilege levels. After Honeywell acquired GE's computer business, it
was renamed to General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various
computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit
PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a
particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that
allowed similar or ported software to run on more than one system. Early systems
had utilized microprogramming to implement features on their systems in order to
permit different underlying architecture to appear to be the same as others in a
series. In fact most 360's after the 360/40 (except the 360/165 and 360/168) were
microprogrammed implementations. But soon other means of achieving application
compatibility proved more significant.
MINICOMPUTERS AND UNIX
The UNIX operating system was developed at AT&T Bell Laboratories in the late
1960s. Because it was essentially free in early editions, easily obtainable, and
easily modified, it achieved wide acceptance. It also became a requirement within
the Bell System operating companies. Since it was written in C, a high-level
language, UNIX could be ported to any machine architecture to which the C language
itself had been ported. This portability permitted it to become the choice
for a second generation of minicomputers and the first generation of workstations. By
widespread use it exemplified the idea of an operating system that was conceptually
the same across various hardware platforms. It was still owned by AT&T, however, and
that limited its use to groups or corporations who could afford to license it. It became one
of the roots of the open source movement.
Meanwhile, Digital Equipment Corporation created the simple RT-11 system for
its 16-bit PDP-11 class machines, and the VMS system for the 32-bit VAX computer.
Another system which evolved in this time frame was the Pick operating system. The
Pick system was developed and sold by Microdata Corporation who created the
precursors of the system. The system is an example of a system which started as a
database application support program and graduated to system work.
Although most small 8-bit home computers of the 1980s, such as the Commodore 64,
the Atari 8-bit, the Amstrad CPC, the ZX Spectrum series and others, could use a
disk-loading operating system such as CP/M or GEOS, they could generally work
without one. In fact, most if not all of these computers shipped with a built-in
BASIC interpreter in ROM, which also served as a crude operating system, allowing
minimal file management operations (such as deletion, copying, etc.) to be
performed, sometimes disk formatting, and of course application loading and
execution, which sometimes required a non-trivial command sequence.
The fact that the majority of these machines were bought for entertainment and
educational purposes and were seldom used for more "serious" or business/science
oriented applications, partly explains why a "true" operating system was not needed.
Another reason is that they were usually single-task and single-user machines and
shipped with minimal amounts of RAM, usually between 4 and 256 kilobytes, with 64
and 128 being common figures, and 8-bit processors, so an operating system's
overhead would likely compromise the performance of the machine without really
being necessary. Even the available word processor and integrated software
applications were mostly self-contained programs which took over the machine
completely, as also did video games.
GAME CONSOLES AND VIDEO GAMES
Since virtually all video game consoles and arcade cabinets designed and built after
1980 were true digital machines (unlike the analog Pong clones and derivatives),
some of them carried a minimal form of BIOS or built-in game, such as the
ColecoVision, the Sega Master System and the SNK Neo Geo. There were however
successful designs where a BIOS was not necessary, such as the Nintendo NES and its
clones.
Modern game consoles and videogames, starting with the PC-Engine, all have a
minimal BIOS that also provides some interactive utilities such as memory card
management, Audio or Video CD playback and copy protection, and sometimes carry
libraries for developers to use. Few of these cases, however, would qualify as a
"true" operating system.
The most notable exceptions are probably the Dreamcast game console which includes
a minimal BIOS, like the PlayStation, but can load the Windows CE operating system
from the game disk allowing easily porting of games from the PC world, and the
Xbox game console, which is little more than a disguised Intel-based PC running a
secret, modified version of Microsoft Windows in the background. Furthermore,
there are Linux versions that will run on a Dreamcast and later game consoles as
well.
Long before that, Sony had released a kind of development kit called the Net Yaroze
for its first PlayStation platform, which provided a series of programming and
developing tools to be used with a normal PC and a specially modified "Black
PlayStation" that could be interfaced with a PC and download programs from it.
These operations require in general a functional OS on both platforms involved.
In general, it can be said that videogame consoles and arcade coin operated machines
used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while
from the PlayStation era and beyond they started getting more and more
sophisticated, to the point of requiring a generic or custom-built OS for aiding in
development and expandability.
THE PERSONAL COMPUTER ERA
The development of microprocessors made inexpensive computing available for the
small business and hobbyist, which in turn led to the widespread use of
interchangeable hardware components using a common interconnection (such as the
S-100, SS-50, Apple II, ISA, and PCI buses), and an increasing need for 'standard'
operating systems to control them. The most important of the early OSes on these
machines was Digital Research's CP/M-80 for the 8080 / 8085 / Z-80 CPUs. It was
based on several Digital Equipment Corporation operating systems, mostly for the
PDP-11 architecture. Microsoft's first operating system, M-DOS, was designed along
the lines of many PDP-11 features, but for microprocessor-based systems. MS-DOS (or
PC-DOS when supplied by IBM) was based originally on CP/M-80. Each of these
machines had a small boot program in ROM which loaded the OS itself from disk.
The BIOS on the IBM-PC class machines was an extension of this idea and has
accreted more features and functions in the 20 years since the first IBM-PC was
introduced in 1981.
The decreasing cost of display equipment and processors made it practical to provide
graphical user interfaces for many operating systems, such as the generic X Window
System that is provided with many UNIX systems, or other graphical systems such as
Microsoft Windows, the RadioShack Color Computer's OS-9 Level II/MultiVue,
Commodore's AmigaOS, Apple's Mac OS, or even IBM's OS/2. The original GUI was
developed at Xerox Palo Alto Research Center in the early '70s (the Alto computer
system) and imitated by many vendors.
Operating systems originally ran directly on the hardware itself and provided
services to applications. With VM/CMS on System/370, IBM introduced the notion of
the virtual machine, where the operating system itself runs under the control of a
hypervisor instead of being in direct control of the hardware. VMware popularized
this technology on personal computers. Over time, the line between virtual machine
monitors and operating systems has blurred:
• Hypervisors grew more complex, gaining their own application programming
interface, memory management or file system 
• Virtualization became a key feature of operating systems, as exemplified by
Hyper-V in Windows Server 2008 or HP Integrity Virtual Machines in HP-UX
• In some systems, such as POWER5 and POWER6-based servers from IBM,
the hypervisor is no longer optional
• Applications have been re-designed to run directly on a virtual machine
In many ways, virtual machine software today plays the role formerly held by the
operating system, including managing the hardware resources (processor, memory, I/O
devices), applying scheduling policies, and allowing system administrators to manage
the system.
BASIC FUNCTIONS OF ANY OPERATING SYSTEM
Every computer has an operating system and, regardless of the size and complexity of
the computer and its operating system, all operating systems perform the same basic
functions.
MANAGING RESOURCES
These programs coordinate all the computer’s resources including keyboard, mouse,
printer, monitor, storage devices and memory. An operating system creates a file
structure on the computer hard drive where user data can be stored and retrieved.
When a file is saved, the operating system saves it, attaches a name to it, and
remembers where it put the file for future use. The way an operating system
organizes information into files is called the file system. Most operating systems use a
hierarchical file system, which organizes files into directories (folders) under a tree
structure. The beginning of the directory system is called the root directory.
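The hierarchical structure described above can be illustrated with a short Python sketch. The directory and file names here are invented, and a temporary directory stands in for the root directory:

```python
import os
import tempfile

# A minimal sketch of a hierarchical file system: everything hangs off a
# single root directory, with files grouped into nested directories (folders).
root = tempfile.mkdtemp()                      # stands in for the root directory
os.makedirs(os.path.join(root, "docs", "letters"))
os.makedirs(os.path.join(root, "music"))
with open(os.path.join(root, "docs", "letters", "note.txt"), "w") as f:
    f.write("stored and retrievable by name")

listing = []
for dirpath, dirnames, filenames in os.walk(root):
    dirnames.sort()                            # deterministic traversal order
    rel = os.path.relpath(dirpath, root)
    depth = 0 if rel == "." else rel.count(os.sep) + 1
    listing.append("  " * depth + ("/" if rel == "." else os.path.basename(rel)))
    for name in sorted(filenames):
        listing.append("  " * (depth + 1) + name)

print("\n".join(listing))
```

The printed indentation mirrors the tree structure: the root at the top, directories nested under it, and each file stored under a path the operating system remembers for later retrieval.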
PROVIDING A USER INTERFACE
Users interact with application programs and computer hardware through a user
interface. Almost all operating systems today provide a windows-like Graphical User
Interface (GUI) in which graphic objects called icons are used to represent commonly
used features.
RUNNING APPLICATIONS
These programs load and run applications such as word processors and spreadsheets.
Most operating systems support multitasking, or the ability to run more than one
application at a time. When a user requests a program, the operating system locates
the application and loads it into the primary memory or RAM of the computer. As
more programs are loaded, the operating system must allocate the computer’s memory
among them.
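As a toy illustration of this allocation problem (not any real operating system's algorithm; the program names and sizes are invented), consider:

```python
# A toy sketch of the bookkeeping an operating system does as programs load:
# each load request either fits in the remaining RAM or is refused.
class Memory:
    def __init__(self, total_kb):
        self.free_kb = total_kb
        self.loaded = {}            # program name -> size in KB

    def load(self, name, size_kb):
        """Load a program if enough free memory remains."""
        if size_kb > self.free_kb:
            return False            # not enough RAM left for this program
        self.loaded[name] = size_kb
        self.free_kb -= size_kb
        return True

ram = Memory(256)                       # a small 256 KB machine
print(ram.load("word_processor", 128))  # fits
print(ram.load("spreadsheet", 96))      # fits
print(ram.load("game", 64))             # refused: only 32 KB remain
print(ram.free_kb)
```

Real operating systems go much further (virtual memory, swapping, protection), but the core task is the same: tracking which program owns which part of memory.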
SUPPORT FOR BUILT-IN UTILITY PROGRAMS
The operating system uses utility programs for maintenance and repairs. Utility
programs help identify problems, locate lost files, repair damaged files, and back up
data. An example is the Disk Defragmenter utility, which in Windows is found under
Programs > Accessories > System Tools.
· Control of the computer hardware – The operating system sits between the
programs and the Basic Input Output System (BIOS). The BIOS controls the
hardware. All programs that need hardware resources must go through the operating
system. The operating system can access the hardware either through the BIOS or
through device drivers. Utility programs are specialized programs that make
computing easier. All kinds of things can happen to a computer system – hard disks
can crash, viruses can invade a system, computers can freeze up, operations can slow
down, and so on. That’s where utility programs come in. Many operating systems
(such as Windows) have utility programs built in for common purposes – they are
also known as System Tools ( to find these tools, click on Start / Programs /
Accessories / System Tools ). Examples of utility programs are Format, Scan Disk,
Disk Cleanup, Disk Defragmenter, and Anti-Virus.
TYPES OF OPERATING SYSTEMS
The precise nature of an operating system will depend on the application in which it
is to be used. For example, the characteristics required for an airline seat reservation
system will differ from those required for the control of scientific experiments, or for
a desktop computer. Clearly, the operating system design must be strongly influenced
by the type of use for which the computer system is intended. Unfortunately it is often
the case with ‘general purpose machines’ that the type of use cannot easily be
identified; a common criticism of many systems is that, in attempting to be all things
to all individuals, they end up being totally satisfactory to no one. We shall examine
various types of system and identify some of their characteristics.
SINGLE USER SYSTEMS
Single user systems, as their name implies, provide a computer system for only one
user at a time. They are appropriate for computers dedicated to a single function, or
which are so inexpensive as to make sharing not worthwhile. Most microcomputer
operating systems (e.g. Microsoft Windows®, which runs on millions of computers
worldwide) are of the single user type. Single user systems generally provide a
simple computer system, which facilitates the running of a variety of software
packages (e.g. word processing or spreadsheet) as well as allowing users to develop
and execute programs of their own. The major emphasis is on the provision of an
easily used interface, a simple file system, and I/O facilities for keyboard, display,
disk and printer.
BATCH PROCESSING SYSTEM
Batch processing was the very first method of processing to be adopted. The
main purpose of this system was to enable the computer to move automatically from
one job to another, without the operator having to intervene. Jobs (that consist of
data and programs) are queued. The computer would then process the jobs one at a
time without further human intervention. Batch processing is still used nowadays –
e.g. printing thousands of mailing labels. In any computer system, the speed at which
the CPU can execute instructions is much higher than the speed that other peripheral
devices can reach. Thus, peripheral devices such as the printer, disk drives, and
others waste a lot of the CPU’s time, because they cannot process their part of the
job as quickly as the CPU. This results in a large percentage of CPU idle time.
Another inefficiency is that, when a small program is being run, most of main
memory remains unused. Therefore, the two most expensive resources of the system –
memory and CPU time – are wasted. As processing speed and memory size increased with
advances in technology, multiprogramming operating systems were developed, which
could manage resources more efficiently.
More than one user’s program can be resident in main memory at one time. In this
system, multiple jobs are loaded into the central memory, and each is allotted some
CPU-TIME, a tiny fraction of a second during which it receives the CPU’s attention.
When a job’s CPU-TIME is up, it is suspended and control passes to the next job,
which can continue from where it left off before. In simpler terms the CPU is
switched rapidly between the different programs. This means the system does not
have to wait for one job to be completed before starting the next. The simplest
multiprogramming systems used a round-robin method, where each job received the
same CPU-TIME. More sophisticated systems allowed priorities to be defined for
each job, such that the job with the highest priority received the longest CPU-TIME.
Such a system minimizes the amount of idle time of the CPU and the amount of
unused memory. A complication arises when printing. What happens when two jobs
are working concurrently (at the same time) and both are printing their results? A
technique known as SPOOLING is adopted, where each job sends any output to be
printed to a spool file on backing storage instead of directly to the printer. This spool
file joins a queue of other spool files to be printed. The general idea behind
multiprogramming is that the computer is working on several programs, which are in
memory at one time.
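The round-robin scheme described above can be sketched in a few lines of Python. Job names and work amounts are invented, and real schedulers are far more elaborate:

```python
from collections import deque

# A sketch of round-robin multiprogramming: every job gets the same small
# CPU-time quantum; when its quantum expires the job is suspended and the
# next job later resumes from where it left off.
def round_robin(jobs, quantum):
    """jobs maps a job name to the units of CPU work it needs.
    Returns the trace of (job, units run) in execution order."""
    queue = deque(jobs.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)        # run for at most one quantum
        trace.append((name, run))
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))  # suspended; rejoins the queue
    return trace

print(round_robin({"A": 5, "B": 2, "C": 4}, quantum=2))
```

A priority scheme, as mentioned above, would simply give higher-priority jobs a larger quantum or a better position in the queue.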
TIME-SHARING SYSTEM (MULTI-USER)
In an interactive system, the user sits at a terminal, which is hooked up to the
computer. The user can execute the job, and the output is expected to be reasonably
instantaneous, even if other users are executing their own jobs on the same computer.
To meet these situations, the principle of time-sharing was introduced in the design of
operating systems. The aim of a time-sharing operating system is to give each
terminal user a short response time (a few seconds at most) by allocating each user
in turn a brief slice of CPU time (a time-slice).
REAL-TIME SYSTEMS
Immediate processing and up-to-date information are major characteristics of a
real-time system. In such a system, the information in files has to be located very
quickly, and the updating of records must be fast. The system must be able to respond
quickly to an enquiry, otherwise it becomes impractical. In a real-time system a
transaction is processed to completion at the time it occurs, ensuring that the
information in the files reflects the true (real) situation. Examples of real-time
systems include a flight reservation system and a banking system.
NETWORKED SYSTEMS
As technology advanced, computers became cheap enough for all users of a system to
have a microcomputer of their own. Of course, a stand-alone, single-user
microcomputer can do a lot of work, but it has the following disadvantages:
· Sharing data between different users becomes difficult.
· Peripherals such as printers have to be bought for each microcomputer, and will lie
idle for most of the time.
Such considerations led to the development of networked systems, where many
computers are connected together to facilitate the sharing of data and peripherals.
A network operating system must handle the communication between the networked
computers, managing the data traffic and the sharing of the system’s resources.
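The kind of communication a network operating system must manage can be sketched, in very simplified form, with ordinary sockets; the "service" and the request here are invented for the example:

```python
import socket
import threading

# A toy sketch of networked sharing: one machine offers a service, another
# requests data from it. Here both ends run on the local machine.
def server(sock):
    conn, _ = sock.accept()                 # wait for a networked computer
    request = conn.recv(1024).decode()
    conn.sendall(f"shared data for {request}".encode())
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))             # the OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()

client = socket.socket()                    # the "other" computer on the network
client.connect(("127.0.0.1", port))
client.sendall(b"alice")
reply = client.recv(1024).decode()
client.close()
t.join()
listener.close()

print(reply)
```

Everything here rides on services the operating system provides: ports, connections, and the routing of data between programs that may be on different machines.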