1. What is an operating system?
An operating system is an interface between a
computer user and the computer hardware. It is
software that performs all the basic tasks, such as
file management, memory management, process
management, handling input and output, and
controlling peripheral devices such as disks and
printers.
Definition: “An operating system is a program that
acts as an interface between the user and the
computer hardware and controls the execution of all
kinds of programs.”
3. An operating system (OS) is the program that, after
being initially loaded into the computer by
a boot program, manages all the other programs in a
computer. The other programs are
called applications or application programs.
The application programs make use of the operating
system by making requests for services through a
defined application program interface (API).
In addition, users can interact directly with the
operating system through a user interface such as a
command line or a graphical user interface (GUI).
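As a concrete sketch (assuming a POSIX system with Python available), Python's `os` module acts as exactly this kind of API: the user program calls a library function, and the library issues the underlying system call on its behalf.

```python
import os

# The 'os' module is an API over the kernel's system calls:
# os.getpid() wraps the getpid() system call, and
# os.getcwd() wraps the getcwd() system call.
pid = os.getpid()   # ask the kernel for this process's ID
cwd = os.getcwd()   # ask the kernel for the current working directory

print(f"process {pid} running in {cwd}")
```

The program never traps into the kernel itself; the library routine does, which is what the slide means by requesting services "through a defined application program interface."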
4. Important Functions of OS:
• Process management
• Memory management
• Device management
• File management
• Security
• Control over system performance
• Job accounting
• Error detecting aids
• Coordination among different software and users.
5. Types of OS:
Operating systems have existed since the very first computer generation
and they keep evolving with time. The following are some of the
important types of operating systems which are most commonly
used.
1. Mainframe Systems
• Batch systems
• Multiprogrammed systems
• Time-sharing systems
2. Desktop systems
3. Multiprocessor systems
4. Distributed systems
• Client-server systems
• Peer-to-peer systems
5. Clustered systems
6. Real-time systems
• Hard real-time systems
• Soft real-time systems
7. Hand-held systems
6. • Batch operating system:
The users of a batch operating system do not interact
with the computer directly. Each user prepares a job on
an off-line device such as punch cards and submits it to the
computer operator. To speed up processing, jobs with
similar needs are batched together and run as a group.
The programmers leave their programs with the operator
and the operator then sorts the programs with similar
requirements into batches.
The problems with Batch Systems are as follows −
• Lack of interaction between the user and the job.
• CPU is often idle, because the speed of the mechanical
I/O devices is slower than the CPU.
• Difficult to provide the desired priority.
7. Time-sharing operating systems
• Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time. Time-sharing, or
multitasking, is a logical extension of multiprogramming. Processor time that is
shared among multiple users simultaneously is termed time-sharing.
• The main difference between multiprogrammed batch systems and time-sharing
systems is that the objective of multiprogrammed batch systems is to
maximize processor use, whereas the objective of time-sharing systems is to
minimize response time.
• Multiple jobs are executed by the CPU by switching between them, and the
switches occur so frequently that each user receives an immediate response.
The operating system uses CPU scheduling and multiprogramming to provide
each user with a small portion of processor time. Computer systems that were designed
primarily as batch systems have been modified into time-sharing systems.
Advantages:
• Provides the advantage of quick response.
• Avoids duplication of software.
• Reduces CPU idle time.
Disadvantages:
• Problem of reliability.
• Question of security and integrity of user programs and data.
• Problem of data communication.
8. Distributed operating System
• Distributed systems use multiple central processors to serve
multiple real-time applications and multiple users. Data processing
jobs are distributed among the processors accordingly.
• The processors communicate with one another through various
communication lines (such as high-speed buses or telephone lines).
These are referred to as loosely coupled systems or distributed
systems. Processors in a distributed system may vary in size and
function, and are referred to as sites, nodes, computers,
and so on.
Advantages:
• With resource sharing facility, a user at one site may be able to use
the resources available at another.
• Speeds up the exchange of data via electronic mail.
• If one site fails in a distributed system, the remaining sites can
potentially continue operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
9. Network operating System
• A Network Operating System runs on a server and provides the server the
capability to manage data, users, groups, security, applications, and other
networking functions. The primary purpose of the network operating
system is to allow shared file and printer access among multiple
computers in a network, typically a local area network (LAN), a private
network or to other networks.
• Ex: Microsoft Windows Server 2003, Microsoft Windows Server 2008,
UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
Advantages:
• Centralized servers are highly stable.
• Security is server managed.
• Upgrades to new technologies and hardware can be easily integrated into
the system.
• Remote access to servers is possible from different locations and types of
systems.
Disadvantages:
• High cost of buying and running a server.
• Dependency on a central location for most operations.
• Regular maintenance and updates are required.
10. Real Time operating System
• A real-time system is defined as a data processing system in which the time
interval required to process and respond to inputs is so small that it controls the
environment. The time taken by the system to respond to an input and display the
required updated information is termed the response time. In this method,
the response time is much shorter than in online processing.
• Real-time systems are used when there are rigid time requirements on the
operation of a processor or the flow of data and real-time systems can be used as a
control device in a dedicated application. A real-time operating system must have
well-defined, fixed time constraints, otherwise the system will fail. For example,
Scientific experiments, medical imaging systems, industrial control systems,
weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems:
1. Hard real-time systems:
Hard real-time systems guarantee that critical tasks complete on time. In hard real-
time systems, secondary storage is limited or missing and the data is stored in
ROM. In these systems, virtual memory is almost never found.
2. Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over
other tasks and retains that priority until it completes. Soft real-time systems have
less utility than hard real-time systems. Examples include multimedia, virtual
reality, and advanced scientific projects such as undersea exploration and planetary
rovers.
13. SYSTEM CALLS
• In computing, a system call is the programmatic way in which a computer
program requests a service from the kernel of the operating system it is executed
on. A system call is a way for programs to interact with the operating system.
• Application developers often do not have direct access to the system calls, but can
access them through an application programming interface (API). The functions
that are included in the API invoke the actual system calls. System
calls provide the services of the operating system to user programs via an
Application Program Interface (API).
• By using the API, certain benefits can be gained:
– Portability: as long as a system supports an API, any program using that API can compile
and run.
– Ease of use: using the API can be significantly easier than using the actual system call.
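A small sketch of the ease-of-use point, assuming a POSIX system: Python's built-in `open()` is the convenient API, while `os.open()`, `os.read()`, and `os.close()` are thin wrappers over the actual system calls that `open()` is itself built on.

```python
import os
import tempfile

# Ease of use: the high-level API hides descriptors, buffering, and modes.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:   # convenient API call
    f.write("portable")

# The thin wrappers closer to the raw system calls underneath:
fd = os.open(path, os.O_RDONLY)   # open() system call
raw = os.read(fd, 8)              # read() system call
os.close(fd)                      # close() system call
os.unlink(path)                   # unlink() system call
```

The same Python source runs on any platform with a conforming `os` module, which is the portability benefit in practice.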
14. System Call Parameters:
Three general methods exist for passing parameters to the OS:
1. Parameters can be passed in registers.
2. When there are more parameters than registers, the parameters can be stored in a block
in memory and the block's address passed in a register.
3. Parameters can also be pushed on or popped off the stack by the operating system.
Fig. Passing of parameters as a table
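Method 1 can be glimpsed from user space with `ctypes` (a sketch, not part of the original slides): C's `syscall()` wrapper places its arguments in registers per the calling convention, and the kernel reads them from those registers on entry. The syscall number 39 for `getpid` is an x86-64 Linux assumption; numbers differ per architecture.

```python
import ctypes
import os
import platform

# Load the C library; its syscall() passes arguments in registers.
libc = ctypes.CDLL(None, use_errno=True)

# Syscall numbers are architecture-specific: 39 is getpid on x86-64 Linux.
if platform.system() == "Linux" and platform.machine() == "x86_64":
    SYS_getpid = 39                  # assumption: x86-64 Linux constant
    pid = libc.syscall(SYS_getpid)   # the number itself travels in a register
    assert pid == os.getpid()        # matches the libc wrapper's answer
```

`getpid` needs no further parameters; a call like `write(fd, buf, count)` would place all three arguments in registers the same way.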
15. Types of System Calls: There are 5 different categories of system calls
1. Process Control: A running program needs to be able to stop execution either
normally or abnormally. When execution is stopped abnormally, often a dump of
memory is taken and can be examined with a debugger.
2. File Management: Some common system calls are create, delete, read, write, reposition,
or close. Also, there is a need to determine the file attributes – get and set file
attribute. Many times the OS provides an API to make these system calls.
3. Device Management: Processes usually require several resources to execute; if these
resources are available, they are granted and control is returned to the user
process. These resources can also be thought of as devices. Some are physical, such as a
video card, and others are abstract, such as a file.
• User programs request the device, and when finished they release the device. Similar
to files, we can read, write, and reposition the device.
4. Information Management: Some system calls exist purely for transferring
information between the user program and the operating system. An example of
this is time, or date.
• The OS also keeps information about all its processes and provides system calls to
report this information.
5. Communication: There are two models of interprocess communication, the message-
passing model and the shared memory model.
a. Message passing uses a common mailbox to pass messages between processes.
b. Shared memory uses certain system calls to create and gain
access to regions of memory owned by other processes. The two processes exchange
information by reading and writing the shared data.
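A minimal process-control sketch on a POSIX system, using the Python wrappers for `fork()`, `exit()`, and `wait()` from the first category above:

```python
import os

# Process control: create a process (fork), terminate it (exit),
# and wait for it -- three calls from the process-control category.
pid = os.fork()                  # fork() system call: create a child
if pid == 0:
    os._exit(7)                  # child: terminate with exit status 7
_, status = os.waitpid(pid, 0)   # parent: wait() for the child to finish
code = os.WEXITSTATUS(status)    # recover the child's exit status
print("child exited with", code)
```

Note the child uses `os._exit()` rather than `sys.exit()`: after a fork, the raw system call avoids flushing the parent's I/O buffers a second time.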
16. 1. Process Control:
– load, execute
– end, abort
– create process, terminate process
– get/set process attributes
– wait for time, wait event, signal event
– allocate/free memory
2. File Management
– create file, delete file
– open, close
– read, write, reposition
– get/set file attributes
3. Device Management
– request device, release device
– read, write, reposition
– get/set device attributes
– logically attach or detach
devices
4. Information Maintenance
– get/set time or date
– get/set system data
– get/set process, file, or device
attributes
5. Communication
– create, delete communication
connection
– send, receive messages
– transfer status information
– attach or detach remote devices
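The file-management calls listed above can be sketched through their Python `os` wrappers (POSIX assumed): create, write, reposition, read, get attributes, close, and delete.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()   # create file (open() with O_CREAT)
os.write(fd, b"hello, world")   # write() system call
os.lseek(fd, 7, os.SEEK_SET)    # reposition the file offset
data = os.read(fd, 5)           # read() from the new offset
size = os.fstat(fd).st_size     # get file attributes (fstat())
os.close(fd)                    # close() system call
os.unlink(path)                 # delete file (unlink())
```

After repositioning to offset 7, the read returns the last five bytes, `b"world"`, and the attribute call reports the full 12-byte size.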
17. SYSTEM CALLS
Category                  Windows                          UNIX
Process control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()
File manipulation         CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()
Device manipulation       SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()
Information maintenance   GetCurrentProcessID()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()
Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()
Protection                SetFileSecurity()                chmod()
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()
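The UNIX communication entries in the table can be exercised directly (POSIX assumed): `pipe()` creates a pair of connected descriptors, then `write()` and `read()` send and receive a message.

```python
import os

r, w = os.pipe()        # pipe() system call: connected read/write ends
os.write(w, b"ping")    # send a message into the write end
msg = os.read(r, 4)     # receive it from the read end
os.close(r)
os.close(w)
```

Across a `fork()`, the same pair lets a parent and child exchange messages, which is the message-passing model from slide 15.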
18. Services Provided by System Calls :
• Process creation and management
• Main memory management
• File Access, Directory and File system management
• Device handling(I/O)
• Protection
• Networking, etc.
20. System Bus Structure :
• Bus: “a common electrical pathway between multiple devices”
• Address lines (unidirectional, generated by CPU)
• Data lines (bidirectional)
• Control lines (individual lines that specify the size of a data transfer, its direction, timing,
interrupts, etc.).
24. I/O STRUCTURE
Direct Memory Access (DMA): For devices that transfer large quantities of data (such
as disk controllers), it is wasteful to tie up the CPU transferring data in and out of
registers one byte at a time. Instead, this work can be off-loaded to a special
processor known as the Direct Memory Access (DMA) controller.
• The host issues a command to the DMA controller indicating where the
data is located, where the data is to be transferred, and the number of
bytes to transfer. The DMA controller handles the data transfer and then
interrupts the CPU when the transfer is complete.
• A simple DMA controller is a standard component in modern PCs, and many bus-
mastering I/O cards contain their own DMA hardware. Handshaking between DMA
controllers and their devices is accomplished through two wires called the DMA-
request and DMA-acknowledge wires.
• While the DMA transfer is going on, the CPU does not have access to the PCI bus
(including main memory), but it does have access to its internal registers and
primary and secondary caches.
• DMA can be done in terms of either physical addresses or virtual addresses that are
mapped to physical addresses. The latter approach is known as Direct Virtual
Memory Access (DVMA), and allows direct data transfer from one memory-mapped
device to another without using the main memory chips.
Direct DMA access by user processes can speed up operations, but is generally
forbidden by modern systems for security and protection reasons (i.e., DMA is a
kernel-mode operation).
27. STORAGE STRUCTURE
• INTRODUCTION
In computing, memory refers to the physical devices used to store programs or data, on a
temporary or permanent basis, for use in a computer or other digital electronic device.
29. REGISTER:
• The register is the small set of data holding places that are part of a
computer processor.
• A register may hold a computer instruction, a storage address, or any kind
of data(such as a bit sequence or individual characters).
30. CACHE:
Cache memory is a smaller, faster, and more expensive memory that stores copies of
data that need to be accessed frequently from the main memory.
Two types of cache memory:
1. PRIMARY CACHE: known as the ‘L1 cache’ or ‘internal cache’, located inside the
CPU.
2. SECONDARY CACHE: known as the ‘L2 cache’ or ‘external cache’, located on the
motherboard of the computer.
31. Types of Cache Memory in a CPU
A cache is used by the CPU to access data from the main memory in a short
time. It is a small and very fast temporary storage memory, designed
to speed up the transfer of data and instructions. The CPU cache is located
inside or near the CPU chip. The data and instructions most
recently or frequently used by the CPU are stored in the cache. A copy of
data or instructions retrieved from RAM is placed in the cache the first
time the CPU uses them. The next time the CPU needs the
data or instructions, it looks in the cache; if they are
found there, they are retrieved from the cache memory instead of main
memory.
Fig. Three-level cache organization
32. Types/Levels of cache memory
• A computer has several different levels of cache memory. All levels of cache memory are
faster than the RAM. The cache that is closer to the CPU is always faster than the other
levels, but it costs more and stores less data. The types/levels of cache
memory are explained below.
Level 1 or L1 Cache Memory
• The L1 cache memory is built on the processor chip and is very fast because it runs at the speed
of the processor. It is also called the primary or internal cache. It has less capacity than the
other levels of cache, typically storing up to 64 KB, and is made of SRAM
(static RAM). Each time the processor requests information from memory, the cache
controller on the chip uses special circuitry to first check whether the data is already in the
cache. If it is present, the system is spared a time-consuming access to main
memory.
Level 2 or L2 Cache Memory
• The L2 cache memory is larger but slower than the L1 cache. It catches recent accesses that
are missed by the L1 cache and usually stores 64 KB to 2 MB. An L2
cache is also found on the CPU. If the L1 and L2 caches are used together, information
missing from the L1 cache can be retrieved quickly from the L2 cache.
Like L1 caches, L2 caches are composed of SRAM, but they are larger. The L2 cache is usually a
separate static RAM (SRAM) chip located between the CPU and DRAM (main memory).
Level 3 or L3 Cache Memory
• The L3 cache memory is an enhanced form of memory present on the motherboard of the
computer. It is an extra cache built in between the processor and main
memory to speed up processing operations. It narrows the gap between a request and the
retrieval of data and instructions, serving them much more quickly than main memory. L3
caches used with processors nowadays often have more than 3 MB of storage.
33. 1.8.3 Caching
Caching is an important principle of computer
systems. Information is normally kept in some storage system
(such as main memory). As it is used, it is copied into a faster
storage system—the cache—on a temporary basis. When we
need a particular piece of information, we first check whether it
is in the cache. If it is, we use the information directly from the
cache; if it is not, we use the information from the source, putting
a copy in the cache under the assumption that we will need it
again soon.
Main memory can be viewed as a fast cache for secondary
storage, since data in secondary storage must be copied into
main memory for use, and data must be in main memory before
being moved to secondary storage for safekeeping. At the highest
level, the operating system may maintain a cache of file-system
data in main memory. Also, electronic RAM disks (also known
as solid-state disks) may be used for high-speed storage that is
accessed through the file-system interface.
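The check-cache-then-source loop described above can be sketched in a few lines; `slow_source` and the hit/miss counters are illustrative names, not from the text.

```python
cache = {}
hits = misses = 0

def slow_source(key):
    return key * 2                # stands in for main/secondary storage

def lookup(key):
    global hits, misses
    if key in cache:              # hit: use the cached copy directly
        hits += 1
        return cache[key]
    misses += 1                   # miss: fetch from the source ...
    value = slow_source(key)
    cache[key] = value            # ... and keep a copy, assuming we
    return value                  # will need it again soon

lookup(3)    # first access: miss, fetched from the source
lookup(3)    # second access: hit, served from the cache
```

The assumption in the last comment is exactly the locality assumption the slide states: information used once will likely be needed again soon.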
35. SYSTEM DESIGN AND IMPLEMENTATION
• Design goals
• Mechanisms and policies
• Implementation
36. • An operating system is a construct that allows
user application programs to interact with
the system hardware. The operating system by
itself does not provide any function; rather, it
provides an environment in which different
applications and programs can do useful
work.
• There are many problems that can occur while
designing and implementing an operating
system. These are covered under operating system
design and implementation.
38. Operating System Design Goals
• It is quite complicated to define all the goals and specifications
of an operating system while designing it. The design
changes depending on the type of operating system, i.e., whether it
is a batch system, time-shared system, single-user system, multi-user
system, distributed system, etc.
• There are basically two types of goals while designing an
operating system:
1. User goals: The operating system should be convenient, easy
to use, reliable, safe, and fast according to the users. However,
these specifications are not very useful, as there is no set
method for achieving them.
2. System goals: The operating system should be easy to design,
implement, and maintain. These are the specifications required by
those who create, maintain, and operate the operating system.
But there is no specific method for achieving these goals either.
39. Operating System Mechanisms and Policies
• There is no specific way to design an operating system, as it
is a highly creative task. However, there are general
software principles that are applicable to all operating
systems.
• A subtle difference between mechanism and policy is that a
mechanism determines how to do something, while a policy determines
what to do. Policies may change over time, and this can
force changes to the mechanism. It is therefore better to have a
general mechanism that requires few changes even
when a policy changes.
Ex: If the mechanism and policy are independent, then few
changes to the mechanism are required when the policy changes. If a
policy favours I/O-intensive processes over CPU-intensive
processes, a policy change to prefer CPU-intensive
processes will not change the mechanism.
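The separation can be sketched as follows; all names here are hypothetical, not from a real scheduler. The mechanism is a generic "pick the highest-ranked process" routine, and the policy is the ranking function passed into it.

```python
def pick_next(processes, policy):
    """Mechanism: HOW to choose -- take the highest-scoring process."""
    return max(processes, key=policy)

# Illustrative process descriptors (hypothetical fields).
procs = [
    {"name": "editor",  "io_ratio": 0.9, "cpu_ratio": 0.1},
    {"name": "encoder", "io_ratio": 0.1, "cpu_ratio": 0.9},
]

def favor_io(p):       # Policy: WHAT to prefer (I/O-intensive work)
    return p["io_ratio"]

def favor_cpu(p):      # A different policy -- the mechanism is unchanged
    return p["cpu_ratio"]
```

Switching from `favor_io` to `favor_cpu` changes which process is chosen without touching `pick_next`, which is the slide's point about keeping the mechanism general.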
40. Operating System Implementation
• The operating system needs to be
implemented after it is designed. Early
operating systems were written in assembly
language, but now higher-level languages are used.
• The first system not written in assembly
language was the Master Control Program
(MCP) for Burroughs computers.