Unit – V
Device Management
Device Management: Techniques for Device Management, Dedicated
Devices, Shared Devices, Virtual Devices; Input/Output Devices,
Storage Devices, Buffering, Secondary-Storage Structure: Disk
Structure, Disk Scheduling, Disk Management, Swap-Space
Management, Disk Reliability.
Introduction
• The operating system has an important role in managing I/O
operations.
• Keeping track of the status of all devices, which requires special
mechanisms. One commonly used mechanism is to have a
database such as a Unit Control Block (UCB) associated with each
device.
• Deciding on a policy to determine which process gets a device, for how
long, and when. A wide range of techniques is available for
implementing these policies. There are three basic techniques for
implementing the policies of device management.
Introduction
• Allocation:- Physically assigning a device to a process. The
corresponding control unit and channel must likewise be assigned.
• Deallocation policy and techniques. Deallocation may be done at
either the process or the job level. At the job level, a device is assigned
for as long as the job exists in the system. At the process level, a device
may be assigned for as long as the process needs it.
• The module that keeps track of the status of devices is called the
I/O traffic controller.
Introduction
• I/O devices can be roughly grouped under three categories.
• Human readable: Devices that establish communication between the
computer and the user. For example: keyboard, mouse, printer, etc.
• Machine readable: Devices that are suitable for communication with
electronic equipment. For example: disks, sensors, controllers, etc.
• Communication: Those devices that are suitable for communication with
remote devices. For example: Modems, routers, switches, etc.
The main functions of device management in the
operating system
• Keeps track of all devices; the program responsible for this is called the
I/O traffic controller.
• Monitoring the status of each device, such as storage drives, printers and
other peripheral devices.
• Enforcing preset policies and deciding which process gets the device,
when, and for how long.
• Allocating and deallocating devices in an efficient way. Deallocation is
done at two levels: at the process level, when the I/O command has been
executed and the device is temporarily released, and at the job level,
when the job is finished and the device is permanently released.
• Optimizing the performance of individual devices.
Techniques for Device Management
• Three major techniques are used for managing and allocating
devices:
• Dedicated Devices
• Shared Devices
• Virtual devices
Dedicated Devices
• A dedicated device is allocated to a job for the job’s entire duration.
• Unfortunately, dedicated assignment may be inefficient if the job does
not fully and continually utilize the device.
• The other techniques, shared and virtual, are usually preferred
whenever they are applicable.
• Devices like printers, tape drives, plotters, etc. demand such an allocation
scheme, since it would be awkward if several users shared them at the
same time.
• The disadvantage of such devices is the inefficiency resulting from
allocating the device to a single user for the entire duration of job
execution, even though the device is not in use 100% of the time.
Shared Devices
• Some devices such as disks, drums, and most other Direct Access
Storage Devices (DASD) may be shared concurrently by several
processes.
• Several processes can read from a single disk at essentially the
same time.
• The management of a shared device can become quite
complicated, particularly if utmost efficiency is desired. For
example, if two processes simultaneously request a read from Disk
A, some mechanism must be employed to determine which
request should be handled first.
Virtual Devices
• Some devices that would normally have to be dedicated may be
converted into shared devices through techniques such as SPOOLING
(Simultaneous Peripheral Operations On-Line).
• Spooling refers to a process of transferring data by placing it in a
temporary working area where another program may access it for
processing at a later point in time.
• For example, a spooling program can read and copy all card input onto a
disk at high speed. Later, when a process tries to read a card, the
spooling program intercepts the request and converts it into a read from
the disk.
• Since a disk may be easily shared by several users, we have converted a
dedicated device, changing one card reader into many “virtual” card
readers. This technique is equally applicable to a large number of
peripheral devices, such as printers and most dedicated slow
input/output devices.
Ways to access a device in device
management in operating system
• Polling:-
The CPU continuously checks the device status to exchange data.
The advantage is simplicity; the drawback is busy-waiting.
• Interrupt-driven I/O:-
A device controller notifies the corresponding device driver about the
availability of the device. The advantage is more efficient use of CPU
cycles; the drawbacks are the data copying and movement involved, and
that it is slow for character devices (one interrupt per keyboard input).
Ways to access a device in device
management in operating system
• Direct memory access (DMA):- An additional controller is brought into
use to perform the data movement. The benefit of this method is that
the CPU is not involved in copying data; a drawback is that a process
cannot access in-transit data.
• Double buffering: This method makes use of two buffers so that while
one is being used, the other is being filled. It is popular in graphics and
animation so that the viewer does not see the line-by-line scanning; a
minimal sketch follows below.
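To make the double-buffering idea concrete, here is a minimal C sketch, not tied to any real device API: one buffer is consumed ("displayed") while the other is filled, and the two are swapped each frame. In a real system the producer (device) and consumer would run concurrently; this single-threaded sketch shows only the swap logic, and the buffer size and frame contents are invented for the example.

```c
#include <stdio.h>

#define BUF_SIZE 16
#define FRAMES   4

/* Fill the back buffer with the next "frame" of data (made-up content). */
static void produce_frame(char *buf, int frame) {
    snprintf(buf, BUF_SIZE, "frame %d", frame);
}

/* Consume (e.g. display) the front buffer. */
static void consume_frame(const char *buf) {
    printf("displaying: %s\n", buf);
}

int main(void) {
    char buffers[2][BUF_SIZE];
    int front = 0;                        /* buffer currently being consumed */

    produce_frame(buffers[front], 0);     /* prime the first buffer */
    for (int frame = 1; frame <= FRAMES; frame++) {
        int back = 1 - front;             /* the other buffer */
        produce_frame(buffers[back], frame); /* fill while the front is shown */
        consume_frame(buffers[front]);
        front = back;                     /* swap: back becomes front */
    }
    consume_frame(buffers[front]);        /* show the last frame */
    return 0;
}
```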
INPUT OUTPUT DEVICES
• An I/O system is required to take an application I/O request and send it
to the physical device, then take whatever response comes back from
the device and send it to the application. I/O devices can be divided
into two categories −
• Block devices − A block device is one with which the driver
communicates by sending entire blocks of data. For example, Hard disks,
USB cameras, Disk-On-Key etc.
• Character devices − A character device is one with which the driver
communicates by sending and receiving single characters (bytes, octets).
For example, serial ports, parallel ports, sound cards, etc.
Device Controllers
• Device drivers are software modules that can be plugged into an
OS to handle a particular device. Operating System takes help
from device drivers to handle all I/O devices.
• The device controller works as an interface between a device
and a device driver. I/O units (keyboard, mouse, printer, etc.)
typically consist of a mechanical component and an electronic
component, where the electronic component is called the device
controller.
Synchronous vs asynchronous I/O
• Synchronous I/O − In this scheme CPU execution waits while I/O
proceeds
• Asynchronous I/O − I/O proceeds concurrently with CPU execution
Communication to I/O Devices
• The CPU must have a way to pass information to and from an I/O
device. There are three approaches available for communication
between the CPU and a device:
• Special Instruction I/O
• Memory-mapped I/O
• Direct memory access (DMA)
• Special Instruction I/O
This uses CPU instructions that are specifically made for controlling I/O
devices. These instructions typically allow data to be sent to an I/O device
or read from an I/O device.
• Memory-mapped I/O
When using memory-mapped I/O, the same address space is shared by
memory and I/O devices. The device is connected directly to certain main
memory locations so that the I/O device can transfer blocks of data
to/from memory without going through the CPU.
Communication to I/O Devices
• When using memory-mapped I/O, the OS allocates a buffer in
memory and informs the I/O device to use that buffer to
send data to the CPU. The I/O device operates
asynchronously with the CPU and interrupts the CPU when finished.
• The advantage of this method is that every instruction
which can access memory can be used to manipulate an
I/O device. Memory-mapped I/O is used for most high-speed
I/O devices like disks and communication interfaces; a simulated
sketch of the idea follows below.
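Real memory-mapped I/O requires a device-specific physical address supplied by the platform; the portable C sketch below only simulates the idea by treating an ordinary struct as the device's register block and accessing it through a volatile pointer. The register names and bit layout are invented; the point it illustrates is that ordinary loads and stores, not special I/O instructions, program the device.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical register layout of a memory-mapped device.  On real
 * hardware this struct would be overlaid on a fixed physical address;
 * here it is simulated with an ordinary variable so the sketch runs. */
struct dev_regs {
    uint32_t data;      /* data register                        */
    uint32_t status;    /* bit 0 = busy, bit 1 = done (made up) */
    uint32_t control;   /* bit 0 = start (made up)              */
};

static struct dev_regs fake_device;                   /* simulated device */
static volatile struct dev_regs *dev = &fake_device;  /* driver's view    */

int main(void) {
    /* Because the registers live in the normal address space, ordinary
     * C assignments (ordinary load/store instructions) program the
     * device -- no special I/O instructions are needed. */
    dev->data    = 0x42;        /* place a value to transfer           */
    dev->control = 0x1;         /* set the (hypothetical) start bit    */

    fake_device.status = 0x2;   /* simulate the device finishing       */

    if (dev->status & 0x2)
        printf("device reports done, data=0x%x\n", (unsigned)dev->data);
    return 0;
}
```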
Direct Memory Access (DMA)
• Slow devices like keyboards will generate an interrupt to the main
CPU after each byte is transferred. If a fast device such as a disk
generated an interrupt for each byte, the operating system would
spend most of its time handling these interrupts. So a typical
computer uses direct memory access (DMA) hardware to reduce
this overhead.
• Direct Memory Access (DMA) means the CPU grants an I/O module
the authority to read from or write to memory without CPU involvement.
The DMA module itself controls the exchange of data between main
memory and the I/O device. The CPU is involved only at the beginning
and end of the transfer and is interrupted only after the entire block has
been transferred.
The operating system uses the DMA hardware as follows (a simulated
sketch follows the steps):
1. The device driver is instructed to transfer disk data to a buffer at address X.
2. The device driver then instructs the disk controller to transfer the data to the buffer.
3. The disk controller starts the DMA transfer.
4. The disk controller sends each byte to the DMA controller.
5. The DMA controller transfers the bytes to the buffer, increasing the memory address and decreasing the counter C, until C becomes zero.
6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
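The following C sketch simulates this sequence with a toy DMA-controller structure holding a target address and the counter C. The structure and the "interrupt" message are invented for illustration; a real controller is programmed through device registers.

```c
#include <stdio.h>

#define BLOCK 8

/* Simulated DMA controller state: target address and byte counter C. */
struct dma_ctrl {
    char *addr;     /* next memory address to write     */
    int   count;    /* counter C; transfer ends at zero */
};

/* Steps 5-6: the DMA controller moves one byte into the buffer,
 * advances the address, decrements C, and "interrupts" when C == 0. */
static void dma_byte(struct dma_ctrl *dma, char byte) {
    *dma->addr++ = byte;
    if (--dma->count == 0)
        printf("DMA controller: C == 0, interrupting CPU\n");
}

int main(void) {
    char disk_data[BLOCK] = "OSDATA!";   /* data held by the disk controller */
    char buffer[BLOCK] = {0};            /* buffer at "address X"            */

    /* Steps 1-3: the driver programs the transfer, the controller starts DMA. */
    struct dma_ctrl dma = { .addr = buffer, .count = BLOCK };

    /* Step 4: the disk controller hands each byte to the DMA controller. */
    for (int i = 0; i < BLOCK; i++)
        dma_byte(&dma, disk_data[i]);

    printf("buffer after transfer: %s\n", buffer);
    return 0;
}
```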
Polling I/O
• Polling is the simplest way for an I/O device to communicate with
the processor. The process of periodically checking status of the
device to see if it is time for the next I/O operation, is called
polling. The I/O device simply puts the information in a Status
register, and the processor must come and get the information.
• Most of the time, devices will not require attention, and when one
does, it will have to wait until it is next interrogated by the polling
program. This is an inefficient method, and much of the processor's
time is wasted on unnecessary polls; the busy-wait loop is sketched below.
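A minimal sketch of the busy-wait loop, with the device's status and data registers simulated by ordinary variables (the tick at which the device becomes ready is made up):

```c
#include <stdio.h>

/* Simulated device registers: status 0 = not ready, 1 = data ready. */
static int status_reg = 0;
static int data_reg   = 0;

/* Stand-in for the device eventually producing data. */
static void device_tick(int t) {
    if (t == 5) {              /* pretend the device finishes at tick 5 */
        data_reg = 99;
        status_reg = 1;
    }
}

int main(void) {
    int polls = 0;

    /* Busy-waiting: the CPU repeatedly interrogates the status register
     * and does nothing useful until the device signals readiness. */
    while (status_reg == 0) {
        device_tick(polls);
        polls++;
    }
    printf("got data %d after %d polls (all wasted CPU work)\n",
           data_reg, polls);
    return 0;
}
```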
Interrupts I/O
• An alternative scheme for dealing with I/O is the interrupt-driven
method. An interrupt is a signal to the microprocessor from a
device that requires attention.
• A device controller puts an interrupt signal on the bus when it
needs the CPU's attention. When the CPU receives an interrupt, it saves
its current state and invokes the appropriate interrupt handler using
the interrupt vector (a table of addresses of OS routines that handle
various events). When the interrupting device has been dealt with, the
CPU continues with its original task as if it had never been
interrupted; the sketch below mimics this dispatch.
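The sketch below imitates that flow in plain C: the interrupt vector is an array of handler function pointers indexed by interrupt number, and a helper "raises" an interrupt by saving state, dispatching through the vector, and resuming. On real hardware the CPU performs the lookup itself; the device names and vector numbers here are invented.

```c
#include <stdio.h>

#define NVECTORS 4

/* The interrupt vector: a table of addresses of handler routines,
 * indexed by interrupt number.  (On real hardware the CPU consults
 * this table automatically; here we dispatch by hand.) */
typedef void (*isr_t)(void);

static void keyboard_isr(void) { printf("keyboard interrupt handled\n"); }
static void disk_isr(void)     { printf("disk interrupt handled\n");     }
static void spurious_isr(void) { printf("unexpected interrupt\n");       }

static isr_t interrupt_vector[NVECTORS] = {
    spurious_isr, keyboard_isr, disk_isr, spurious_isr
};

/* Simulate the CPU receiving interrupt number `irq`: save state (here,
 * trivially), look up the handler in the vector, run it, then resume. */
static void raise_interrupt(int irq) {
    printf("saving CPU state, dispatching IRQ %d\n", irq);
    interrupt_vector[irq]();
    printf("restoring CPU state, resuming original task\n");
}

int main(void) {
    raise_interrupt(2);   /* pretend the disk controller signalled */
    raise_interrupt(1);   /* pretend a key was pressed             */
    return 0;
}
```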
Secondary Storage
• Secondary storage devices are those devices whose memory is non-volatile,
meaning the stored data remains intact even if the system is turned off. Here
are a few things worth noting about secondary storage.
• Secondary storage is also called auxiliary storage.
• Secondary storage is less expensive when compared to primary memory like
RAMs.
• The speed of secondary storage is also lower than that of primary storage.
• Hence, the data which is less frequently accessed is kept in the secondary
storage.
• A few examples are magnetic disks, magnetic tapes, removable thumb drives
etc.
Magnetic Disk Structure
• In modern computers, most of the secondary storage is in the form
of magnetic disks. Hence, knowing the structure of a magnetic
disk is necessary to understand how the data in the disk is
accessed by the computer.
Magnetic Disk Structure
• A magnetic disk contains several platters. Each platter is divided into circular
shaped tracks. The length of the tracks near the centre is less than the length of
the tracks farther from the centre. Each track is further divided into sectors, as
shown in the figure.
• Tracks of the same distance from centre form a cylinder. A read-write head is used
to read data from a sector of the magnetic disk.
• The speed of the disk is measured in two parts:
• Transfer rate: This is the rate at which the data moves from disk to the computer.
• Random access time: It is the sum of the seek time and rotational latency.
Magnetic Disk Structure
• Seek time is the time taken by the arm to move to the required
track. Rotational latency is the time taken for the required sector to
rotate under the read-write head.
• Even though the disk is physically arranged as sectors and tracks,
the data is logically arranged and addressed as an array of fixed-size
blocks. The size of a block can be 512 or 1024 bytes. Each
logical block is mapped to a sector on the disk, sequentially. In
this way, each sector on the disk has a logical address. A small
mapping sketch follows below.
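Assuming a made-up geometry (4 heads, 32 sectors per track) and sequential block numbering, the sketch below maps a logical block address to a cylinder/head/sector triple and adds up an example seek time and rotational latency; the specific numbers are illustrative only.

```c
#include <stdio.h>

/* Made-up disk geometry for illustration only. */
#define HEADS             4     /* read-write heads (surfaces) */
#define SECTORS_PER_TRACK 32    /* sectors on each track       */

/* Map a logical block number to (cylinder, head, sector), assuming blocks
 * are numbered sequentially: all sectors of a track, then the next head in
 * the same cylinder, then the next cylinder. */
static void lba_to_chs(long lba, long *cyl, long *head, long *sect) {
    *sect = lba % SECTORS_PER_TRACK;
    *head = (lba / SECTORS_PER_TRACK) % HEADS;
    *cyl  = lba / (SECTORS_PER_TRACK * HEADS);
}

int main(void) {
    long cyl, head, sect;
    long lba = 1000;

    lba_to_chs(lba, &cyl, &head, &sect);
    printf("logical block %ld -> cylinder %ld, head %ld, sector %ld\n",
           lba, cyl, head, sect);

    /* Random access time = seek time + rotational latency (example values). */
    double seek_ms = 8.0, latency_ms = 4.2;
    printf("random access time = %.1f ms + %.1f ms = %.1f ms\n",
           seek_ms, latency_ms, seek_ms + latency_ms);
    return 0;
}
```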
Disk Scheduling Algorithms
• First Come First Serve
• This algorithm services requests in the order in which they are issued by
the system. Let's take an example where the queue has requests with the
following cylinder numbers:
• 98, 183, 37, 122, 14, 124, 65, 67
• Assume the head is initially at cylinder 56. The head moves in the given
order in the queue, i.e., 56→98→183→...→67; the sketch below computes
the total head movement.
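A small C sketch that replays the example (start at cylinder 56, queue 98, 183, 37, 122, 14, 124, 65, 67) and totals the head movement under FCFS:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Request queue and starting head position from the example above. */
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(queue) / sizeof(queue[0]);
    int head = 56;
    int total = 0;

    /* FCFS simply services the requests in arrival order. */
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("FCFS total head movement: %d cylinders\n", total);  /* 637 */
    return 0;
}
```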
Shortest Seek Time First (SSTF)
• Here the position which is closest to the current head position is
chosen first. Consider the previous example where disk queue
looks like,
• 98, 183, 37, 122, 14, 124, 65, 67
• Assume the head is initially at cylinder 56. The next closest
cylinder to 56 is 65, then the next nearest one is 67, then 37,
14, and so on; the sketch below reproduces this order.
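The same example under SSTF, picking the closest pending cylinder at each step; the printed order matches the 65, 67, 37, 14, ... sequence described above:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Same request queue and starting position as the example above. */
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(queue) / sizeof(queue[0]);
    int done[8] = {0};
    int head = 56, total = 0;

    /* SSTF: repeatedly pick the pending request closest to the head. */
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++) {
            if (!done[i] &&
                (best < 0 || abs(queue[i] - head) < abs(queue[best] - head)))
                best = i;
        }
        total += abs(queue[best] - head);
        head = queue[best];
        done[best] = 1;
        printf("servicing cylinder %d\n", head); /* 65 67 37 14 98 122 124 183 */
    }
    printf("SSTF total head movement: %d cylinders\n", total);  /* 233 */
    return 0;
}
```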
SCAN algorithm
• This algorithm is also called the elevator algorithm because of its
behavior. Here, the head first moves in one direction (say backward) and
services all the requests in its path. It then moves in the opposite
direction and services the remaining requests in its path. This behavior is
similar to that of an elevator. Let's take the previous example,
• 98, 183, 37, 122, 14, 124, 65, 67
• Assume the head is initially at cylinder 56. The head moves in the backward
direction and accesses 37 and 14. Then it goes in the opposite direction
and accesses the cylinders as they come in the path; a sketch follows below.
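A sketch of the same example under SCAN. Note that this sketch reverses as soon as no requests remain in the current direction (strictly the LOOK variant); a SCAN that always runs to cylinder 0 before reversing would add a little extra head movement.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    /* Same request queue and starting position as the example above. */
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(queue) / sizeof(queue[0]);
    int head = 56, total = 0;

    qsort(queue, n, sizeof(int), cmp_int);   /* 14 37 65 67 98 122 124 183 */

    /* Find the first request at or above the head position. */
    int split = 0;
    while (split < n && queue[split] < head)
        split++;

    /* Move backward first (toward lower cylinders), servicing 37 then 14. */
    for (int i = split - 1; i >= 0; i--) {
        total += abs(queue[i] - head);
        head = queue[i];
    }
    /* Reverse direction and service the remaining requests on the way up. */
    for (int i = split; i < n; i++) {
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("SCAN (LOOK variant) total head movement: %d cylinders\n", total); /* 211 */
    return 0;
}
```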
Disk Management
• Low-level formatting, or physical formatting — Dividing a disk into
sectors that the disk controller can read and write.
• To use a disk to hold files, the operating system still needs to
record its own data structures on the disk
• Partition the disk into one or more groups of cylinders
• Logical formatting or “making a file system”
• Boot block initializes system
• The bootstrap is stored in ROM
• Bootstrap loader program
• Methods such as sector sparing are used to handle bad blocks
Disk Formatting
• A new magnetic disk is a blank slate. It is just platters of a
magnetic recording material. Before a disk can store data, it must
be divided into sectors that the disk controller can read and write.
This process is called low-level formatting (or physical
formatting).
• Low-level formatting fills the disk with a special data structure for
each sector. The data structure for a sector consists of a header, a
data area, and a trailer. The header and trailer contain
information used by the disk controller, such as a sector number
and an error-correcting code (ECC).
Disk Formatting
• To use a disk to hold files, the operating system still needs to
record its own data structures on the disk. It does so in two steps.
• The first step is to partition the disk into one or more groups of
cylinders. The operating system can treat each partition as though
it were a separate disk.
• For instance, one partition can hold a copy of the operating
system’s executable code, while another holds user files. After
partitioning, the second step is logical formatting (or creation of a
file system). In this step, the operating system stores the initial
file-system data structures onto the disk.
Boot Block
• When a computer is powered up or rebooted, it needs to have an
initial program to run. This initial program is called the bootstrap
program. It initializes all aspects of the system (i.e. from CPU
registers to device controllers and the contents of main memory)
and then starts the operating system.
• To do its job, the bootstrap program finds the operating system
kernel on disk, loads that kernel into memory, and jumps to an
initial address to begin the operating-system execution.
Boot Block
• For most computers, the bootstrap is stored in read-only memory
(ROM). This location is convenient because ROM needs no
initialization and is at a fixed location from which the processor can
start executing when powered up or reset. And since ROM is read-only,
it cannot be infected by a computer virus. The problem is that
changing this bootstrap code requires changing the ROM hardware
chips.
• For this reason, most systems store a tiny bootstrap loader
program in the boot ROM, whose only job is to bring in a full
bootstrap program from disk. The full bootstrap program can be
changed easily: a new version is simply written onto the disk. The
full bootstrap program is stored in a partition at a fixed location
on the disk, called the boot blocks. A disk that has a boot
partition is called a boot disk or system disk.
Bad Blocks
• Since disks have moving parts and small tolerances, they are prone
to failure. Sometimes the failure is complete, and the disk needs
to be replaced, and its contents restored from backup media to
the new disk.
• More frequently, one or more sectors become defective. Most
disks even come from the factory with bad blocks. Depending on
the disk and controller in use, these blocks are handled in a
variety of ways.
Bad Blocks
• The controller maintains a list of bad blocks on the disk. The list is
initialized during the low-level format at the factory and is
updated over the life of the disk. The controller can be told to
replace each bad sector logically with one of the spare sectors.
This scheme is known as sector sparing or forwarding; a small
lookup sketch follows below.
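A toy C sketch of the remapping idea: the controller's bad-block list maps each bad sector to a spare, and every access consults the list first. The sector numbers are invented.

```c
#include <stdio.h>

#define NBAD 2

/* The controller's bad-block list: each bad sector is forwarded to a
 * spare sector.  The numbers here are made up for illustration. */
struct remap { long bad, spare; };

static struct remap bad_list[NBAD] = {
    { 1043, 90001 },
    { 2777, 90002 },
};

/* Sector sparing: before accessing a sector, substitute its spare if it
 * appears on the bad-block list. */
static long resolve_sector(long sector) {
    for (int i = 0; i < NBAD; i++)
        if (bad_list[i].bad == sector)
            return bad_list[i].spare;
    return sector;
}

int main(void) {
    long requests[] = { 100, 1043, 2777 };
    for (int i = 0; i < 3; i++)
        printf("request for sector %ld -> physical sector %ld\n",
               requests[i], resolve_sector(requests[i]));
    return 0;
}
```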
Swap-Space Management
• Swapping is a memory management technique used in multi-
programming to increase the number of processes sharing the CPU.
It is a technique of removing a process from main memory,
storing it in secondary memory, and then bringing it back into
main memory for continued execution. Moving a process out of
main memory to secondary memory is called Swap Out, and moving a
process from secondary memory back to main memory is called Swap In.
Swap-Space Management
• Swap-Space :
• The area on the disk where the swapped out processes are stored
is called swap space.
• A swap space can reside in one of two places:
• Normal file system
• Separate disk partition
An Example –
• The traditional UNIX kernel started with an implementation of
swapping that copied entire processes between contiguous disk
regions and memory. UNIX later evolved to a combination of
swapping and paging as paging hardware became available. In
Solaris, the designers changed standard UNIX methods to improve
efficiency, and more changes were made in later versions of Solaris.
• Linux is similar to the Solaris system. In both systems the
swap space is used only for anonymous memory, that is, memory
which is not backed by any file. In the Linux system, one or
more swap areas are allowed to be established. A swap area may
be either a swap file on a regular file system or a dedicated
swap partition.
Redundant Array of Independent Disks (RAID)
• Redundant Array of Independent Disks (RAID) is a set of several
physical disk drives that the operating system sees as a single logical
unit. It has played a significant role in narrowing the gap between
increasingly fast processors and slow disk drives.
• The basic principle behind RAID is that several smaller-capacity
disk drives perform better than a few large-capacity disk drives,
because by distributing the data among several smaller disks, the
system can access data from them faster, resulting in improved I/O
performance and improved data recovery in case of disk failure.
Redundant Array of Independent Disks (RAID)
• A typical disk array configuration consists of small disk drives connected
to a controller that houses the software coordinating the transfer of
data between the disks and the I/O subsystem.
• Note that this whole configuration is viewed as a single large-capacity
disk by the OS.
• Data is divided into segments called strips, which are distributed across
the disks in the array.
• A set of consecutive strips across the disks is called a stripe.
• The whole process is called striping; a small sketch of the strip-to-disk
mapping follows below.
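A minimal sketch of the strip-to-disk mapping, assuming a 4-disk array and round-robin placement (strip i goes to disk i mod N, and every N consecutive strips form one stripe):

```c
#include <stdio.h>

#define NDISKS 4   /* disks in the array (example value) */

int main(void) {
    /* Strip i of the data goes to disk (i mod NDISKS); the strips sharing
     * the same (i / NDISKS) value form one stripe across the array. */
    for (int strip = 0; strip < 8; strip++) {
        int disk   = strip % NDISKS;
        int stripe = strip / NDISKS;
        printf("strip %d -> disk %d, stripe %d\n", strip, disk, stripe);
    }
    return 0;
}
```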
Redundant Array of Independent Disks (RAID)
• The whole RAID system is divided into seven levels, from level 0 to
level 6. Here, the level does not indicate a hierarchy, but indicates
different types of configurations and error-correction capabilities.
• Level 0:
RAID level 0 is the only level that cannot recover from hardware failure, as it
does not provide error correction or redundancy.
• Level 1:
RAID level 1 not only uses the process of striping, but also uses a mirrored
configuration to provide redundancy, i.e., it creates a duplicate set of all the
data in a mirrored array of disks, which acts as a backup in case of hardware
failure.
Redundant Array of Independent Disks (RAID)
• Level 2:
RAID level 2 makes use of very small strips (often the size of 1 byte) and
a Hamming code to provide redundancy (for the task of error detection,
correction, etc.).
Hamming code: an algorithm used for error detection and correction when
data is being transferred. It adds extra, redundant bits to the data. It is able to
correct single-bit errors and detect double-bit errors.
• Level 3:
RAID level 3 is a configuration that needs only one disk for redundancy. Only one
parity bit is computed for each strip and is stored in the designated redundant disk.
If a drive malfunctions, the RAID controller considers all the bits coming from that
disk to be 0 and notes the location of the malfunctioning disk. So, if the data
being read has a parity error, the controller knows that the bit should be 1
and corrects it. A small XOR-parity sketch follows below.
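Levels 3-5 all rely on the same XOR-parity idea (level 3 at bit/byte granularity, levels 4 and 5 per strip). The hedged sketch below computes a parity strip over three toy data strips and reconstructs one of them after a simulated disk failure; the strip size and contents are invented.

```c
#include <stdio.h>
#include <string.h>

#define NDATA 3    /* data disks in this toy array */
#define STRIP 4    /* bytes per strip (toy value)  */

/* Parity strip = XOR of the corresponding strips on every data disk. */
static void compute_parity(unsigned char data[NDATA][STRIP],
                           unsigned char parity[STRIP]) {
    memset(parity, 0, STRIP);
    for (int d = 0; d < NDATA; d++)
        for (int i = 0; i < STRIP; i++)
            parity[i] ^= data[d][i];
}

int main(void) {
    unsigned char data[NDATA][STRIP] = {
        { 'R', 'A', 'I', 'D' },
        { 'D', 'A', 'T', 'A' },
        { 'X', 'Y', 'Z', 'W' },
    };
    unsigned char parity[STRIP], rebuilt[STRIP];

    compute_parity(data, parity);

    /* Suppose disk 1 fails: XOR the parity with the surviving disks to
     * reconstruct its strip, which works exactly because a ^ a == 0. */
    memcpy(rebuilt, parity, STRIP);
    for (int d = 0; d < NDATA; d++)
        if (d != 1)
            for (int i = 0; i < STRIP; i++)
                rebuilt[i] ^= data[d][i];

    printf("reconstructed strip from failed disk: %.4s\n",
           (const char *)rebuilt);   /* prints DATA */
    return 0;
}
```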
Redundant Array of Independent Disks (RAID)
• Level 4:
RAID level 4 uses the same striping concept used in levels 0 & 1, but also
computes a parity for each strip and stores this parity in the corresponding
strip of the parity disk.
The advantage of this configuration is that if a disk fails, the data can still be
recovered using the parity disk.
• Level 5:
RAID level 5 is a modification of level 4. In level 4, only one disk is designated
for storing parities. In level 5, the parity strips are distributed across
the disks in the array.
Redundant Array of Independent Disks (RAID)
• Level 6:
RAID level 6 provides an extra degree of error detection and correction. It
requires two different parity calculations.
One calculation is the same as the one used in levels 4 and 5; the other is
an independent data-check algorithm. Both parities are stored on
separate disks across the array, corresponding to the data strips in the
array.
More Related Content

What's hot

Direct memory access
Direct memory accessDirect memory access
Direct memory accessshubham kuwar
 
Operating Systems
Operating SystemsOperating Systems
Operating Systemsvampugani
 
Kernel I/O subsystem
Kernel I/O subsystemKernel I/O subsystem
Kernel I/O subsystemAtiKa Bhatti
 
Presentation on Operating System & its Components
Presentation on Operating System & its ComponentsPresentation on Operating System & its Components
Presentation on Operating System & its ComponentsMahmuda Rahman
 
Principles of I/O Hardware and Software
Principles of I/O Hardware and SoftwarePrinciples of I/O Hardware and Software
Principles of I/O Hardware and SoftwareKarandeep Singh Sehgal
 
Storage management in operating system
Storage management in operating systemStorage management in operating system
Storage management in operating systemDeepikaT13
 
Operating Systems 1 (11/12) - Input / Output
Operating Systems 1 (11/12) - Input / OutputOperating Systems 1 (11/12) - Input / Output
Operating Systems 1 (11/12) - Input / OutputPeter Tröger
 
Input output in computer Orgranization and architecture
Input output in computer Orgranization and architectureInput output in computer Orgranization and architecture
Input output in computer Orgranization and architecturevikram patel
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OSKumar Pritam
 
Difference Program vs Process vs Thread
Difference Program vs Process vs ThreadDifference Program vs Process vs Thread
Difference Program vs Process vs Threadjeetendra mandal
 
Direct Memory Access(DMA)
Direct Memory Access(DMA)Direct Memory Access(DMA)
Direct Memory Access(DMA)Page Maker
 
Memory organization in computer architecture
Memory organization in computer architectureMemory organization in computer architecture
Memory organization in computer architectureFaisal Hussain
 
Computer memory
Computer memoryComputer memory
Computer memoryJayapal Jp
 

What's hot (20)

Direct memory access
Direct memory accessDirect memory access
Direct memory access
 
Operating Systems
Operating SystemsOperating Systems
Operating Systems
 
Kernel I/O subsystem
Kernel I/O subsystemKernel I/O subsystem
Kernel I/O subsystem
 
Presentation on Operating System & its Components
Presentation on Operating System & its ComponentsPresentation on Operating System & its Components
Presentation on Operating System & its Components
 
Principles of I/O Hardware and Software
Principles of I/O Hardware and SoftwarePrinciples of I/O Hardware and Software
Principles of I/O Hardware and Software
 
Direct Memory Access ppt
Direct Memory Access pptDirect Memory Access ppt
Direct Memory Access ppt
 
Bus interconnection
Bus interconnectionBus interconnection
Bus interconnection
 
Memory management
Memory managementMemory management
Memory management
 
Device management
Device managementDevice management
Device management
 
Storage management in operating system
Storage management in operating systemStorage management in operating system
Storage management in operating system
 
Operating Systems 1 (11/12) - Input / Output
Operating Systems 1 (11/12) - Input / OutputOperating Systems 1 (11/12) - Input / Output
Operating Systems 1 (11/12) - Input / Output
 
Direct access memory
Direct access memoryDirect access memory
Direct access memory
 
Modes of data transfer
Modes of data transferModes of data transfer
Modes of data transfer
 
Input output in computer Orgranization and architecture
Input output in computer Orgranization and architectureInput output in computer Orgranization and architecture
Input output in computer Orgranization and architecture
 
Memory Management in OS
Memory Management in OSMemory Management in OS
Memory Management in OS
 
Difference Program vs Process vs Thread
Difference Program vs Process vs ThreadDifference Program vs Process vs Thread
Difference Program vs Process vs Thread
 
Direct Memory Access(DMA)
Direct Memory Access(DMA)Direct Memory Access(DMA)
Direct Memory Access(DMA)
 
Memory organization in computer architecture
Memory organization in computer architectureMemory organization in computer architecture
Memory organization in computer architecture
 
Computer memory
Computer memoryComputer memory
Computer memory
 
Defragmentation
DefragmentationDefragmentation
Defragmentation
 

Similar to Unit v: Device Management

I/o management and disk scheduling .pptx
I/o management and disk scheduling .pptxI/o management and disk scheduling .pptx
I/o management and disk scheduling .pptxwebip34973
 
Ch 7 io_management & disk scheduling
Ch 7 io_management & disk schedulingCh 7 io_management & disk scheduling
Ch 7 io_management & disk schedulingmadhuributani
 
A transfer from I/O device to memory requires the execution of several instru...
A transfer from I/O device to memory requires the execution of several instru...A transfer from I/O device to memory requires the execution of several instru...
A transfer from I/O device to memory requires the execution of several instru...rsaravanakumar13
 
computer system structure
computer system structurecomputer system structure
computer system structureHAMZA AHMED
 
I/O systems chapter 12 OS
I/O systems chapter 12 OS I/O systems chapter 12 OS
I/O systems chapter 12 OS ssuser45ae56
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301cpjcollege
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.pptshreesha16
 
Io techniques & its types
Io techniques & its typesIo techniques & its types
Io techniques & its typesNehal Naik
 
IO SYSTEM AND CASE STUDY STRUCTURE
IO SYSTEM AND CASE STUDY STRUCTUREIO SYSTEM AND CASE STUDY STRUCTURE
IO SYSTEM AND CASE STUDY STRUCTUREHariharan Anand
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecturejeetesh036
 
COMPUTER ORGANIZATION NOTES Unit 3 4
COMPUTER ORGANIZATION NOTES  Unit 3 4COMPUTER ORGANIZATION NOTES  Unit 3 4
COMPUTER ORGANIZATION NOTES Unit 3 4Dr.MAYA NAYAK
 
Io management disk scheduling algorithm
Io management disk scheduling algorithmIo management disk scheduling algorithm
Io management disk scheduling algorithmlalithambiga kamaraj
 

Similar to Unit v: Device Management (20)

I/o management and disk scheduling .pptx
I/o management and disk scheduling .pptxI/o management and disk scheduling .pptx
I/o management and disk scheduling .pptx
 
Ch 7 io_management & disk scheduling
Ch 7 io_management & disk schedulingCh 7 io_management & disk scheduling
Ch 7 io_management & disk scheduling
 
I/O Organization
I/O OrganizationI/O Organization
I/O Organization
 
Io (2)
Io (2)Io (2)
Io (2)
 
A transfer from I/O device to memory requires the execution of several instru...
A transfer from I/O device to memory requires the execution of several instru...A transfer from I/O device to memory requires the execution of several instru...
A transfer from I/O device to memory requires the execution of several instru...
 
computer system structure
computer system structurecomputer system structure
computer system structure
 
I/O systems chapter 12 OS
I/O systems chapter 12 OS I/O systems chapter 12 OS
I/O systems chapter 12 OS
 
Lecture 9.pptx
Lecture 9.pptxLecture 9.pptx
Lecture 9.pptx
 
Operating System BCA 301
Operating System BCA 301Operating System BCA 301
Operating System BCA 301
 
Ch1 introduction
Ch1   introductionCh1   introduction
Ch1 introduction
 
Module 1 Introduction.ppt
Module 1 Introduction.pptModule 1 Introduction.ppt
Module 1 Introduction.ppt
 
Unit 6
Unit 6Unit 6
Unit 6
 
1_to_10.pdf
1_to_10.pdf1_to_10.pdf
1_to_10.pdf
 
Io techniques & its types
Io techniques & its typesIo techniques & its types
Io techniques & its types
 
IO SYSTEM AND CASE STUDY STRUCTURE
IO SYSTEM AND CASE STUDY STRUCTUREIO SYSTEM AND CASE STUDY STRUCTURE
IO SYSTEM AND CASE STUDY STRUCTURE
 
Computer system architecture
Computer system architectureComputer system architecture
Computer system architecture
 
Cao u1
Cao u1Cao u1
Cao u1
 
Computer Architecture
Computer ArchitectureComputer Architecture
Computer Architecture
 
COMPUTER ORGANIZATION NOTES Unit 3 4
COMPUTER ORGANIZATION NOTES  Unit 3 4COMPUTER ORGANIZATION NOTES  Unit 3 4
COMPUTER ORGANIZATION NOTES Unit 3 4
 
Io management disk scheduling algorithm
Io management disk scheduling algorithmIo management disk scheduling algorithm
Io management disk scheduling algorithm
 

More from Arnav Chowdhury

Startup Funding and Strategies for Future
Startup Funding and Strategies for FutureStartup Funding and Strategies for Future
Startup Funding and Strategies for FutureArnav Chowdhury
 
Marketing Management Introduction.pptx
Marketing Management Introduction.pptxMarketing Management Introduction.pptx
Marketing Management Introduction.pptxArnav Chowdhury
 
Marketing Management Product.pptx
Marketing Management Product.pptxMarketing Management Product.pptx
Marketing Management Product.pptxArnav Chowdhury
 
Institutional Support to Entrepreneurship
Institutional Support to EntrepreneurshipInstitutional Support to Entrepreneurship
Institutional Support to EntrepreneurshipArnav Chowdhury
 
New Venture Expansion and Exit Strategies
New Venture Expansion and Exit StrategiesNew Venture Expansion and Exit Strategies
New Venture Expansion and Exit StrategiesArnav Chowdhury
 
Creating a Business Plan
Creating a Business PlanCreating a Business Plan
Creating a Business PlanArnav Chowdhury
 
Business Research Methodology ( Data Collection)
Business Research Methodology ( Data Collection)Business Research Methodology ( Data Collection)
Business Research Methodology ( Data Collection)Arnav Chowdhury
 
Business Research Methods (Introduction)
Business Research Methods (Introduction)Business Research Methods (Introduction)
Business Research Methods (Introduction)Arnav Chowdhury
 
Planning and organizing Entrepreneurial Venture
Planning and organizing Entrepreneurial VenturePlanning and organizing Entrepreneurial Venture
Planning and organizing Entrepreneurial VentureArnav Chowdhury
 
Fundamentals of Entrepreneurship
Fundamentals of EntrepreneurshipFundamentals of Entrepreneurship
Fundamentals of EntrepreneurshipArnav Chowdhury
 
Unit v: Cyber Safety Mechanism
Unit v: Cyber Safety MechanismUnit v: Cyber Safety Mechanism
Unit v: Cyber Safety MechanismArnav Chowdhury
 
UNIT IV:Security Measurement Strategies
UNIT IV:Security Measurement StrategiesUNIT IV:Security Measurement Strategies
UNIT IV:Security Measurement StrategiesArnav Chowdhury
 
Unit iii: Common Hacking Techniques
Unit iii: Common Hacking TechniquesUnit iii: Common Hacking Techniques
Unit iii: Common Hacking TechniquesArnav Chowdhury
 
Information Technology and Modern Gadgets
Information Technology and Modern GadgetsInformation Technology and Modern Gadgets
Information Technology and Modern GadgetsArnav Chowdhury
 

More from Arnav Chowdhury (20)

Startup Funding and Strategies for Future
Startup Funding and Strategies for FutureStartup Funding and Strategies for Future
Startup Funding and Strategies for Future
 
Marketing Management Introduction.pptx
Marketing Management Introduction.pptxMarketing Management Introduction.pptx
Marketing Management Introduction.pptx
 
Marketing Management Product.pptx
Marketing Management Product.pptxMarketing Management Product.pptx
Marketing Management Product.pptx
 
Institutional Support to Entrepreneurship
Institutional Support to EntrepreneurshipInstitutional Support to Entrepreneurship
Institutional Support to Entrepreneurship
 
New Venture Expansion and Exit Strategies
New Venture Expansion and Exit StrategiesNew Venture Expansion and Exit Strategies
New Venture Expansion and Exit Strategies
 
Creating a Business Plan
Creating a Business PlanCreating a Business Plan
Creating a Business Plan
 
Business Research Methodology ( Data Collection)
Business Research Methodology ( Data Collection)Business Research Methodology ( Data Collection)
Business Research Methodology ( Data Collection)
 
Business Research Methods (Introduction)
Business Research Methods (Introduction)Business Research Methods (Introduction)
Business Research Methods (Introduction)
 
Planning and organizing Entrepreneurial Venture
Planning and organizing Entrepreneurial VenturePlanning and organizing Entrepreneurial Venture
Planning and organizing Entrepreneurial Venture
 
Fundamentals of Entrepreneurship
Fundamentals of EntrepreneurshipFundamentals of Entrepreneurship
Fundamentals of Entrepreneurship
 
ICT tools in Education
ICT tools in EducationICT tools in Education
ICT tools in Education
 
Unit v: Cyber Safety Mechanism
Unit v: Cyber Safety MechanismUnit v: Cyber Safety Mechanism
Unit v: Cyber Safety Mechanism
 
UNIT IV:Security Measurement Strategies
UNIT IV:Security Measurement StrategiesUNIT IV:Security Measurement Strategies
UNIT IV:Security Measurement Strategies
 
Unit iii: Common Hacking Techniques
Unit iii: Common Hacking TechniquesUnit iii: Common Hacking Techniques
Unit iii: Common Hacking Techniques
 
Cyber Crime
Cyber CrimeCyber Crime
Cyber Crime
 
Information Technology and Modern Gadgets
Information Technology and Modern GadgetsInformation Technology and Modern Gadgets
Information Technology and Modern Gadgets
 
Unit iv FMIS
Unit iv FMISUnit iv FMIS
Unit iv FMIS
 
Unit iii FMIS
Unit iii FMISUnit iii FMIS
Unit iii FMIS
 
Unit ii FMIS
Unit ii FMISUnit ii FMIS
Unit ii FMIS
 
Unit iv graphics
Unit iv  graphicsUnit iv  graphics
Unit iv graphics
 

Recently uploaded

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Wonjun Hwang
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsPrecisely
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 

Recently uploaded (20)

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Unlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power SystemsUnlocking the Potential of the Cloud for IBM Power Systems
Unlocking the Potential of the Cloud for IBM Power Systems
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 

Unit v: Device Management

  • 1. Unit – V Device Management Device Management: Techniques for Device Management, Dedicated Devices, Shared Devices, Virtual Devices; Input or Output Devices, Storage Devices, Buffering, Secondary-Storage. Structure: Disk Structure, Disk Scheduling, Disk Management, Swap-Space Management, Disk Reliability.
  • 2. Introduction • The operating system has an important role I managing the I/O operations. • Keeping track of the status of all devices, which requires special mechanisms. One commonly used mechanism is to have a database such as a Unit Control Block (UCB) associated with each device. • Deciding on policy to determine who gets a device, for how long, and when. A wide range of techniques is available for implementing theses polices. There are three basic techniques for implementing the policies of device management.
  • 3. Introduction • Allocation:- Physical assigning a device to process. Likewise the corresponding control unit and channel must be assigned. • De allocation policy and techniques. De allocation may be done on either process or a job. On a job level, a device is assigned for as long as the job exists in the system. On a process level, a device may be assigned for as long as the process needs it. • The module that keeps track of the status of device is called the I/O traffic controller.
  • 4. Introduction • I/O devices can be roughly grouped under three categories. • Human readable: Those devices that establish communication between computer and user. For example: Keyboard, mouse, printer etc. • Machine readable: those devices that are suitable for communication with electronic equipment. For example: disk, sensors, controllers etc. • Communication: Those devices that are suitable for communication with remote devices. For example: Modems, routers, switches, etc.
  • 5. The main functions of device management in the operating system • Keep tracks of all devices and the program which is responsible to perform this is called I/O controller. • Monitoring the status of each device such as storage drivers, printers and other peripheral devices. • Enforcing preset policies and taking a decision which process gets the device when and for how long. • Allocates and Deallocates the device in an efficient way. De-allocating them at two levels: at the process level when I/O command has been executed and the device is temporarily released, and at the job level, when the job is finished and the device is permanently released. • Optimizes the performance of individual devices.
  • 6. Techniques for Device Management • Three major techniques are used to managing and allocating devices. • Dedicated Devices • Shared Devices • Virtual devices
  • 7. Dedicated Devices • A dedicated device is allocated to a job for the job’s entire duration. • Unfortunately, dedicated assignment may be inefficient if the job does not fully and continually utilize the device. • The other techniques, shared and virtual, are usually preferred whenever they are applicable. • Devices like printers, tape drivers, plotters etc. demand such allocation scheme since it would be awkward if several users share them at the same point of time. • The disadvantages of such kind of devices is the inefficiency resulting from the allocation of the device to a single user for the entire duration of job execution even though the device is not put to use 100% of the time.
  • 8.
  • 9. Shared Devices • Some devices such as disks, drums, and most other Direct Access Storage Devices (DASD) may be shared concurrently by several processes. • Several processes can read from a single disk at essentially at the same time. • The management of a shared device can become quite complicated, particularly if utmost efficiency is desired. For example, if two processes simultaneously request a read from Disk A, some mechanism must be employed to determine which request should be handled first.
  • 10.
  • 11. Virtual Devices • Some devices that would normally have to be dedicated may be converted into shared devices through techniques such as SPOOLING. (Simultaneous Peripheral Operation Online). • Spooling refers to a process of transferring data by placing it in a temporary working area where another program may access it for processing at a later point in time. • For example, a spooling program can read and copy all card input onto a disk at high speed. Later, when. A process tries to read a card, the spooling program intercepts the request and converts it to read from the Disk. • Since a disk may be easily shared by several users, we have converted a dedicated device, changing one Card reader into many “virtual” card readers. This technique is equally applicable to a large number of Peripheral devices, such as printers and most dedicated slow input/output devices.
  • 12.
  • 13. Ways to access a device in device management in operating system • Polling:- In this, a CPU continuously checks the device status for exchanging data. The plus point is that it is simple and the negative point is – Busy-waiting. Interrupt-driven I/O:- A device controller notifies the corresponding device driver about the availability of the device. The advantages can be a more efficient use of CPU cycles and the drawbacks can be data copying and movements and slow for character devices-one interrupt per keyboard input.
  • 14. Ways to access a device in device management in operating system • Direct memory access (DMA):-To perform data movements additional controller bought into use. The benefit of such a method is that CPU is not involved in copying data but a con is that a process cannot access in-transit data. • Double buffering: This methodology access advice makes use of two buffers so that while one is being used, the other is being filled. Such a way is quite popular in graphics and animation so that the viewer does not see the line-by-line scanning.
  • 15. INPUT OUTPUT DEVICES • An I/O system is required to take an application I/O request and send it to the physical device, then take whatever response comes back from the device and send it to the application. I/O devices can be divided into two categories − • Block devices − A block device is one with which the driver communicates by sending entire blocks of data. For example, Hard disks, USB cameras, Disk-On-Key etc. • Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sounds cards etc
  • 16. Device Controllers • Device drivers are software modules that can be plugged into an OS to handle a particular device. Operating System takes help from device drivers to handle all I/O devices. • The Device Controller works like an interface between a device and a device driver. I/O units (Keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component where electronic component is called the device controller.
  • 18. Synchronous vs asynchronous I/O • Synchronous I/O − In this scheme CPU execution waits while I/O proceeds • Asynchronous I/O − I/O proceeds concurrently with CPU execution
  • 19. Communication to I/O Devices • The CPU must have a way to pass information to and from an I/O device. There are three approaches available to communicate with the CPU and Device. • Special Instruction I/O • Memory-mapped I/O • Direct memory access (DMA)
  • 20. • Special Instruction I/O This uses CPU instructions that are specifically made for controlling I/O devices. These instructions typically allow data to be sent to an I/O device or read from an I/O device. • Memory-mapped I/O When using memory-mapped I/O, the same address space is shared by memory and I/O devices. The device is connected directly to certain main memory locations so that I/O device can transfer block of data to/from memory without going through CPU. Communication to I/O Devices
  • 21. Communication to I/O Devices • While using memory mapped IO, OS allocates buffer in memory and informs I/O device to use that buffer to send data to the CPU. I/O device operates asynchronously with CPU, interrupts CPU when finished. • The advantage to this method is that every instruction which can access memory can be used to manipulate an I/O device. Memory mapped IO is used for most high- speed I/O devices like disks, communication interfaces.
  • 22. Direct Memory Access (DMA) • Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead. • Direct Memory Access (DMA) means CPU grants I/O module authority to read from or write to memory without involvement. DMA module itself controls exchange of data between main memory and the I/O device. CPU is only involved at the beginning and end of the transfer and interrupted only after entire block has been transferred.
  • 24. The operating system uses the DMA hardware as follows Step Description 1 Device driver is instructed to transfer disk data to a buffer address X. 2 Device driver then instruct disk controller to transfer data to buffer. 3 Disk controller starts DMA transfer. 4 Disk controller sends each byte to DMA controller. 5 DMA controller transfers bytes to buffer, increases the memory address, decreases the counter C until C becomes zero. 6 When C becomes zero, DMA interrupts CPU to signal transfer completion.
  • 25. Polling I/O • Polling is the simplest way for an I/O device to communicate with the processor. The process of periodically checking status of the device to see if it is time for the next I/O operation, is called polling. The I/O device simply puts the information in a Status register, and the processor must come and get the information. • Most of the time, devices will not require attention and when one does it will have to wait until it is next interrogated by the polling program. This is an inefficient method and much of the processors time is wasted on unnecessary polls.
  • 26. Interrupts I/O • An alternative scheme for dealing with I/O is the interrupt-driven method. An interrupt is a signal to the microprocessor from a device that requires attention. • A device controller puts an interrupt signal on the bus when it needs CPU’s attention when CPU receives an interrupt, It saves its current state and invokes the appropriate interrupt handler using the interrupt vector (addresses of OS routines to handle various events). When the interrupting device has been dealt with, the CPU continues with its original task as if it had never been interrupted.
  • 27. Secondary Storage • Secondary storage devices are those devices whose memory is non volatile, meaning, the stored data will be intact even if the system is turned off. Here are a few things worth noting about secondary storage. • Secondary storage is also called auxiliary storage. • Secondary storage is less expensive when compared to primary memory like RAMs. • The speed of the secondary storage is also lesser than that of primary storage. • Hence, the data which is less frequently accessed is kept in the secondary storage. • A few examples are magnetic disks, magnetic tapes, removable thumb drives etc.
  • 28. Magnetic Disk Structure • In modern computers, most of the secondary storage is in the form of magnetic disks. Hence, knowing the structure of a magnetic disk is necessary to understand how the data in the disk is accessed by the computer.
  • 29. Magnetic Disk Structure • A magnetic disk contains several platters. Each platter is divided into circular shaped tracks. The length of the tracks near the centre is less than the length of the tracks farther from the centre. Each track is further divided into sectors, as shown in the figure. • Tracks of the same distance from centre form a cylinder. A read-write head is used to read data from a sector of the magnetic disk. • The speed of the disk is measured as two parts: • Transfer rate: This is the rate at which the data moves from disk to the computer. • Random access time: It is the sum of the seek time and rotational latency.
  • 30. Magnetic Disk Structure • Seek time is the time taken by the arm to move to the required track. Rotational latency is the time taken for the required sector to rotate under the read-write head. • Even though the disk is physically arranged as sectors and tracks, the data is logically arranged and addressed as an array of fixed-size blocks. The size of a block can be 512 or 1024 bytes. Each logical block is mapped to a sector on the disk sequentially, so each sector in the disk has a logical address.
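As an illustration of this logical-to-physical mapping, the sketch below converts a logical block address into a (cylinder, head, sector) triple using the classic CHS-style formula; the geometry constants are made up, and real drives remap blocks internally.

```python
# Illustrative mapping of a logical block number to a physical
# (cylinder, head, sector) address, assuming a made-up disk geometry.

SECTORS_PER_TRACK = 63
HEADS_PER_CYLINDER = 16   # one head per platter surface

def lba_to_chs(lba):
    cylinder = lba // (HEADS_PER_CYLINDER * SECTORS_PER_TRACK)
    head = (lba // SECTORS_PER_TRACK) % HEADS_PER_CYLINDER
    sector = (lba % SECTORS_PER_TRACK) + 1   # sectors are conventionally 1-based
    return cylinder, head, sector

print(lba_to_chs(0))      # (0, 0, 1)
print(lba_to_chs(1000))   # (0, 15, 56)
```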
  • 32. Disk Scheduling Algorithms • First Come First Serve • This algorithm serves requests in the order in which they arrive in the queue. Let's take an example where the queue has requests with the following cylinder numbers: • 98, 183, 37, 122, 14, 124, 65, 67 • Assume the head is initially at cylinder 56. The head moves in the order given in the queue, i.e., 56→98→183→...→67.
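The total head movement for this FCFS example can be computed directly, as in the short sketch below (starting head position 56, request queue from the slide).

```python
# Total head movement for the FCFS example above (head starts at cylinder 56).
requests = [98, 183, 37, 122, 14, 124, 65, 67]

def fcfs(start, queue):
    head, total = start, 0
    for cylinder in queue:             # serve requests in arrival order
        total += abs(cylinder - head)
        head = cylinder
    return total

print(fcfs(56, requests))              # 637 cylinders of head movement
```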
  • 34. Shortest Seek Time First (SSTF) • Here the request closest to the current head position is served first. Consider the previous example, where the disk queue looks like • 98, 183, 37, 122, 14, 124, 65, 67 • Assume the head is initially at cylinder 56. The closest cylinder to 56 is 65, the next nearest one is 67, then 37, then 14, and so on.
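A small sketch of SSTF on the same queue: at each step the pending request nearest to the current head position is served next.

```python
# SSTF for the same queue: always pick the request closest to the current head.
def sstf(start, queue):
    pending, head, order, total = list(queue), start, [], 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        total += abs(nearest - head)
        order.append(nearest)
        pending.remove(nearest)
        head = nearest
    return order, total

order, total = sstf(56, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(total)   # 233 cylinders of head movement
```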
  • 36. SCAN algorithm • This algorithm is also called the elevator algorithm because of its behaviour. Here the head first moves in one direction (say backward) and serves all the requests in its path; it then moves in the opposite direction and serves the remaining requests. This behaviour is similar to that of an elevator. Let's take the previous example, • 98, 183, 37, 122, 14, 124, 65, 67 • Assume the head is initially at cylinder 56. The head moves in the backward direction and accesses 37 and 14. Then it reverses and accesses the remaining cylinders as they come in its path.
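A sketch of SCAN for the same queue, assuming the head first sweeps all the way to cylinder 0 and then reverses; the variant that turns around at the last pending request instead of the disk edge is usually called LOOK.

```python
# SCAN (elevator): sweep toward cylinder 0 first, then reverse direction.
def scan_toward_zero(start, queue):
    lower = sorted(c for c in queue if c <= start)   # served while moving toward 0
    upper = sorted(c for c in queue if c > start)    # served on the way back up
    order = list(reversed(lower)) + upper
    total = start + (max(upper) if upper else 0)     # down to cylinder 0, then back up
    return order, total

order, total = scan_toward_zero(56, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)   # [37, 14, 65, 67, 98, 122, 124, 183]
print(total)   # 56 + 183 = 239
```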
  • 38. Disk Management • Low-level formatting, or physical formatting: dividing a disk into sectors that the disk controller can read and write. • To use a disk to hold files, the operating system still needs to record its own data structures on the disk: • Partition the disk into one or more groups of cylinders • Logical formatting, or "making a file system" • The boot block initializes the system • The bootstrap is stored in ROM • Bootstrap loader program • Methods such as sector sparing are used to handle bad blocks
  • 39. Disk Formatting • A new magnetic disk is a blank slate: it is just platters coated with magnetic recording material. Before a disk can store data, it must be divided into sectors that the disk controller can read and write. This process is called low-level formatting (or physical formatting). • Low-level formatting fills the disk with a special data structure for each sector. The data structure for a sector consists of a header, a data area, and a trailer. The header and trailer contain information used by the disk controller, such as a sector number and an error-correcting code (ECC).
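Purely as an illustration of this sector data structure, the sketch below builds a header, a fixed-size data area, and a trailer for one sector; a simple XOR checksum stands in for a real error-correcting code, and all names are hypothetical.

```python
from functools import reduce

SECTOR_DATA_SIZE = 512

def xor_check(data):
    """Toy stand-in for a real ECC: XOR of all data bytes."""
    return reduce(lambda a, b: a ^ b, data, 0)

def format_sector(sector_number, data=b""):
    payload = data.ljust(SECTOR_DATA_SIZE, b"\x00")[:SECTOR_DATA_SIZE]
    header = {"sector_number": sector_number}          # used by the controller
    trailer = {"ecc": xor_check(payload)}               # checked on every read
    return {"header": header, "data": payload, "trailer": trailer}

def verify_sector(sector):
    return xor_check(sector["data"]) == sector["trailer"]["ecc"]

s = format_sector(7, b"hello")
print(verify_sector(s))   # True
```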
  • 40. Disk Formatting • To use a disk to hold files, the operating system still needs to record its own data structures on the disk. It does so in two steps. • The first step is to partition the disk into one or more groups of cylinders. The operating system can treat each partition as though it were a separate disk. • For instance, one partition can hold a copy of the operating system’s executable code, while another holds user files. After partitioning, the second step is logical formatting (or creation of a file system). In this step, the operating system stores the initial file-system data structures onto the disk.
  • 41. Boot Block • When a computer is powered up or rebooted, it needs to have an initial program to run. This initial program is called the bootstrap program. It initializes all aspects of the system (i.e. from CPU registers to device controllers and the contents of main memory) and then starts the operating system. • To do its job, the bootstrap program finds the operating system kernel on disk, loads that kernel into memory, and jumps to an initial address to begin the operating-system execution.
  • 42. Boot Block • For most computers, the bootstrap is stored in read-only memory (ROM). This location is convenient because ROM needs no initialization and is at a fixed location from which the processor can start executing when powered up or reset. And since ROM is read-only, it cannot be infected by a computer virus. The problem is that changing this bootstrap code requires changing the ROM hardware chips. • For this reason, most systems store a tiny bootstrap loader program in the boot ROM, whose only job is to bring in a full bootstrap program from disk. The full bootstrap program can be changed easily: a new version is simply written onto the disk. The full bootstrap program is stored in the boot blocks, at a fixed location in a partition on the disk. A disk that has a boot partition is called a boot disk or system disk.
  • 43. Bad Blocks • Since disks have moving parts and small tolerances, they are prone to failure. Sometimes the failure is complete, and the disk needs to be replaced, and its contents restored from backup media to the new disk. • More frequently, one or more sectors become defective. Most disks even come from the factory with bad blocks. Depending on the disk and controller in use, these blocks are handled in a variety of ways.
  • 44. Bad Blocks • The controller maintains a list of bad blocks on the disk. The list is initialized during the low-level format at the factory and is updated over the life of the disk. The controller can be told to replace each bad sector logically with one of the spare sectors. This scheme is known as sector sparing or forwarding.
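Sector sparing can be pictured as a remap table kept by the controller, as in this sketch with hypothetical sector numbers: accesses to a sector marked bad are forwarded to a reserved spare.

```python
# Sketch of sector sparing (forwarding): the controller keeps a remap table
# from bad sectors to spares and redirects accesses transparently.

bad_block_map = {}                     # logical sector -> spare sector
spare_sectors = [1000, 1001, 1002]     # hypothetical reserved spare sectors

def mark_bad(sector):
    if sector not in bad_block_map:
        bad_block_map[sector] = spare_sectors.pop(0)

def resolve(sector):
    """Return the physical sector actually used for this logical sector."""
    return bad_block_map.get(sector, sector)

mark_bad(87)
print(resolve(87))   # 1000 -- forwarded to a spare
print(resolve(88))   # 88   -- healthy sector, unchanged
```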
  • 45. Swap-Space Management • Swapping is a memory-management technique used in multiprogramming to increase the number of processes sharing the CPU. It is a technique of removing a process from main memory, storing it in secondary memory, and then bringing it back into main memory for continued execution. Moving a process out of main memory to secondary memory is called swap-out, and moving a process from secondary memory back into main memory is called swap-in.
  • 46. Swap-Space Management • Swap space: • The area on the disk where swapped-out processes are stored is called the swap space. • A swap space can reside in one of two places: • A normal file system • A separate disk partition
  • 47. An Example • The traditional UNIX kernel started with an implementation of swapping that copied entire processes between contiguous disk regions and memory. UNIX later evolved to a combination of swapping and paging as paging hardware became available. In Solaris, the designers changed the standard UNIX methods to improve efficiency, and further improvements were made in later versions of Solaris. • Linux is quite similar to Solaris in this respect. In both systems the swap space is used only for anonymous memory, i.e., memory that is not backed by any file. In Linux, one or more swap areas may be established; a swap area may be either a swap file on a regular file system or a dedicated swap partition.
  • 49. Redundant Array of Independent Disks (RAID) • Redundant Array of Independent Disks (RAID) is a set of several physical disk drives that the operating system sees as a single logical unit. It has played a significant role in narrowing the gap between increasingly fast processors and slow disk drives. • The basic principle behind RAID is that several smaller-capacity disk drives perform better than a few large-capacity disk drives: by distributing the data among several smaller disks, the system can access data from them faster, resulting in improved I/O performance and improved data recovery in case of disk failure.
  • 51. Redundant Array of Independent Disks (RAID) • A typical disk array configuration consists of small disk drives connected to a controller, which houses the software that coordinates the transfer of data between the disks in the array and the I/O subsystem. • Note that this whole configuration is viewed as a single large-capacity disk by the OS. • Data is divided into segments called strips, which are distributed across the disks in the array. • A set of consecutive strips across the disks is called a stripe. • The whole process is called striping, as sketched below.
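A minimal sketch of striping: the data is cut into fixed-size strips and dealt round-robin across the disks of the array, so one row of strips across all disks forms a stripe (the disk count and strip size here are made up).

```python
# Sketch of striping: split data into fixed-size strips and distribute them
# round-robin across the disks; one row of strips across all disks is a stripe.

def stripe(data, num_disks, strip_size):
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), strip_size):
        strip = data[i:i + strip_size]
        disks[(i // strip_size) % num_disks].extend(strip)
    return disks

disks = stripe(b"ABCDEFGHIJKL", num_disks=3, strip_size=2)
print([bytes(d) for d in disks])   # [b'ABGH', b'CDIJ', b'EFKL']
```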
  • 52. Redundant Array of Independent Disks (RAID) • The whole RAID system is divided into seven levels, from level 0 to level 6. The levels do not indicate a hierarchy; they indicate different types of configurations and error-correction capabilities. • Level 0: RAID level 0 is the only level that cannot recover from hardware failure, as it provides no error correction or redundancy. • Level 1: RAID level 1 not only uses striping but also uses a mirrored configuration to provide redundancy, i.e., it creates a duplicate set of all the data in a mirrored array of disks, which acts as a backup in case of hardware failure.
  • 53. Redundant Array of Independent Disks (RAID) • Level 2: RAID level 2 uses very small strips (often the size of 1 byte) and a Hamming code to provide redundancy (for error detection, correction, etc.). Hamming code: an algorithm used for error detection and correction when data is being transferred. It adds extra, redundant bits to the data; it can correct single-bit errors and detect double-bit errors. • Level 3: RAID level 3 is a configuration that needs only one disk for redundancy. Only one parity bit is computed for each strip, and it is stored in the designated redundant disk. If a drive malfunctions, the RAID controller considers all the bits coming from that disk to be 0 and notes the location of the malfunctioning disk. So, if the data being read has a parity error, the controller knows that the bit should be 1 and corrects it.
  • 54. Redundant Array of Independent Disks (RAID) • Level 4: RAID level 4 uses the same striping concept used in levels 0 and 1, but also computes a parity strip for each stripe and stores it in the corresponding strip of the parity disk. The advantage of this configuration is that if a disk fails, the data can still be recovered using the parity disk. • Level 5: RAID level 5 is a modification of level 4. In level 4, only one disk is designated for storing parities; level 5 instead distributes the parity strips across the disks in the array.
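The parity used in levels 4 and 5 is typically the bitwise XOR of the data strips in a stripe, which is enough to rebuild any single lost strip. A small sketch, assuming XOR parity:

```python
# Sketch of RAID-4/5 style parity: the parity strip is the XOR of the data
# strips, so any single lost strip can be rebuilt from the survivors.

def xor_strips(strips):
    size = len(strips[0])
    parity = bytearray(size)
    for strip in strips:
        for i in range(size):
            parity[i] ^= strip[i]
    return bytes(parity)

data_strips = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
parity = xor_strips(data_strips)     # stored on the parity disk (level 4)
                                     # or rotated across the disks (level 5)

# Simulate losing the second strip and rebuilding it from parity + survivors:
rebuilt = xor_strips([data_strips[0], data_strips[2], parity])
print(rebuilt == data_strips[1])     # True
```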
  • 55. Redundant Array of Independent Disks (RAID) • Level 6: RAID level 6 provides an extra degree of error detection and correction. It requires two different parity calculations: one is the same as that used in levels 4 and 5, while the other is an independent data-check algorithm. Both parities are stored on separate disks across the array, corresponding to the data strips in the array.