Chapter 7:
I/O Management & Disk
Scheduling
BY: MADHURI VAGHASIA
External devices that engage in I/O with computer systems
can be grouped into three categories:
Human readable
• suitable for communicating with the computer user
• printers, terminals, video display, keyboard, mouse
Machine readable
• suitable for communicating with electronic equipment
• disk drives, USB keys, sensors, controllers
Communication
• suitable for communicating with remote devices
• modems, digital line drivers
How are I/O devices differentiated?
- I/O devices are differentiated based on the following parameters.
◦ Data rates: data transfer rates range over several orders of magnitude (roughly 10^1 to 10^9) depending on the device. The keyboard has the lowest data rate among input devices.
◦ Applications: how the device is used has an influence on the software and policies in the OS and supporting utilities.
◦ Complexity of control: depends on the device. A printer requires simple control compared to a disk.
◦ Unit of transfer: data may be transferred as a stream of bytes or characters, or in larger blocks.
◦ Data representation: different devices use different data encoding schemes.
◦ Error conditions: the nature of errors differs from one device to another.
Three techniques for performing I/O are:
Programmed I/O
◦ the processor issues an I/O command on behalf of a process to an I/O module; that
process then busy waits for the operation to be completed before proceeding
Interrupt-driven I/O
◦ the processor issues an I/O command on behalf of a process
◦ if non-blocking – processor continues to execute instructions from the process that
issued the I/O command
◦ if blocking – the next instruction the processor executes is from the OS, which will
put the current process in a blocked state and schedule another process
Direct Memory Access (DMA)
◦ a DMA module controls the exchange of data between main memory and an I/O
module
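To make the contrast concrete, here is a minimal Python sketch of programmed I/O busy-waiting on a device versus interrupt-driven I/O registering a completion handler. It is purely illustrative; SimulatedDevice, start_read, and the handler names are invented for this toy simulation and do not correspond to any real OS or hardware API.

```python
# Toy simulation contrasting programmed I/O (busy waiting) with
# interrupt-driven I/O. All names here are invented for illustration.
import threading
import time

class SimulatedDevice:
    def __init__(self, latency=0.05):
        self.latency = latency   # how long an "I/O operation" takes
        self.ready = False
        self.data = None

    def start_read(self):
        """Start an I/O operation that completes after `latency` seconds."""
        self.ready = False
        def finish():
            time.sleep(self.latency)
            self.data = b"block-of-data"
            self.ready = True
        threading.Thread(target=finish, daemon=True).start()

def programmed_io(dev):
    """Programmed I/O: the 'processor' busy-waits until the device is ready."""
    dev.start_read()
    while not dev.ready:          # busy wait: no useful work happens here
        pass
    return dev.data

def interrupt_driven_io(dev, on_complete):
    """Interrupt-driven I/O: return immediately; a watcher thread stands in
    for the interrupt and calls the completion handler when the device is done."""
    dev.start_read()
    def wait_and_notify():
        while not dev.ready:
            time.sleep(0.001)
        on_complete(dev.data)     # plays the role of the interrupt handler
    threading.Thread(target=wait_and_notify, daemon=True).start()

dev = SimulatedDevice()
print("programmed I/O returned:", programmed_io(dev))
interrupt_driven_io(dev, lambda d: print("interrupt handler received:", d))
time.sleep(0.1)                   # let the handler run in this toy example
```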
The evolution of the I/O function can be summarized in six steps:
1. The processor directly controls a peripheral device.
2. A controller or I/O module is added; the processor uses programmed I/O without interrupts.
3. Same configuration as step 2, but now interrupts are added.
4. The I/O module is given direct control of memory via DMA.
5. The I/O module is enhanced to become a separate processor; the CPU directs the I/O processor to execute an I/O program in main memory.
6. The I/O module has a local memory of its own. With this architecture, a large set of I/O devices can be controlled with minimal processor involvement.
Device Controllers
◦ Device drivers are software modules that can be plugged into an OS to handle a particular device. The operating system relies on device drivers to handle all I/O devices.
◦ The device controller works as an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component; the electronic component is called the device controller.
◦ There is a device controller and a device driver for each device to communicate with the operating system.
Direct Memory Access (DMA)
◦ Slow devices like keyboards generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts, so a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
◦ Direct memory access needs special hardware called a DMA controller (DMAC), which manages the data transfers and arbitrates access to the system bus.
◦ The controller is programmed with source and destination pointers (where to read/write the data), stored in registers,
◦ and with a counter that tracks the number of bytes transferred.
◦ After completion of the transfer it generates an interrupt.
1. The device driver is instructed to transfer disk data to a buffer at address X.
2. The device driver then instructs the disk controller to transfer the data to the buffer.
3. The disk controller starts the DMA transfer.
4. The disk controller sends each byte to the DMA controller.
5. The DMA controller transfers the bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.
6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
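A minimal Python sketch of the byte-by-byte loop in steps 4-6, assuming a toy in-memory "disk" and buffer. The variable names (address, count_c, buffer_address_x) merely stand in for the DMA controller's address and count registers; nothing here touches real hardware.

```python
# Toy sketch: a DMA "controller" copies bytes from simulated disk data into a
# memory buffer, advancing the address and decrementing the counter C until
# it reaches zero, then "raises an interrupt".
def dma_transfer(disk_data, memory, buffer_address_x):
    address = buffer_address_x          # destination pointer (address register)
    count_c = len(disk_data)            # counter C (count register)
    i = 0
    while count_c > 0:
        memory[address] = disk_data[i]  # transfer one byte to the buffer
        address += 1                    # increase the memory address
        count_c -= 1                    # decrease the counter C
        i += 1
    # C == 0: DMA interrupts the CPU to signal transfer completion
    print("DMA complete: interrupt raised after", len(disk_data), "bytes")

memory = bytearray(64)                  # toy main memory
dma_transfer(b"disk sector", memory, buffer_address_x=16)
print(bytes(memory[16:27]))             # b'disk sector'
```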
Types of DMA Configurations
◦ Single-bus, detached DMA: inefficient, because the same bus is shared by the processor, the DMA module, memory, and the I/O modules.
◦ Single-bus, integrated DMA-I/O: the DMA logic is integrated with one or more I/O modules.
◦ Expandable configuration: the I/O modules are connected to the DMA module through a separate I/O bus.
OS Design Issues
Design objectives
◦ Efficiency: most of the effort goes into improving the efficiency of disk I/O.
◦ Generality
◦ Use a hierarchical, modular design for the I/O function.
◦ Hide most of the details of device I/O in lower-level routines, so that user processes and upper levels of the OS see devices in terms of general functions such as read, write, open, close, lock, unlock.
Logical Structure of the I/O Function
◦ The hierarchical philosophy is that the functions of the OS should be separated according to their complexity, their characteristic time scale, and their level of abstraction.
◦ This leads to a layered approach in which each layer performs a related subset of functions.
◦ Changes in one layer do not require changes in other layers.
◦ I/O follows the same approach.
Logical structures
1. Local peripheral device
2. Communications port
3. File system
Local peripheral device
• Concerned with managing general I/O functions on behalf of user processes.
• User processes deal with the device in terms of a device identifier and commands such as open, close, read, write.
• Operations and data are converted into appropriate sequences of I/O instructions, channel commands, and controller orders.
• Buffering improves utilization.
• Queuing, scheduling, and control of I/O operations.
• Interacts with the I/O module and the hardware.
Communications port
May consist of many layers, e.g., TCP/IP.
File system
• Symbolic file names are converted to identifiers that reference files through file descriptors.
• Files can be added, deleted, and reorganized.
• Deals with the logical structure of files.
• Operations: open, close, read, write.
• Access rights are managed.
• Logical references to files and records must be converted to physical secondary storage addresses.
• Allocation of secondary storage space and main storage buffers.
I/O Buffering
Why buffering is required?
◦ When a user process wants to read blocks of data from a disk, process waits for the transfer
◦ It waits either by
◦ Busy waiting
◦ Process suspension on an interrupt
◦ The problems with this approach
◦ The program waits for slow I/O
◦ The virtual memory locations involved must stay in main memory for the duration of the block transfer
◦ Risk of single-process deadlock
◦ The process is blocked during the transfer and may not be swapped out
These inefficiencies can be reduced if input transfers are made in advance of requests and output transfers are performed some time after the request is made. This technique is known as buffering.
Types of I/O Devices
Block-oriented:
◦ Stores information in blocks that are usually of fixed size; transfers are made one block at a time
◦ Reference to data is made by its block number
◦ e.g., disks and USB keys
Stream-oriented:
◦ Transfers data in and out as a stream of bytes, with no block structure
◦ e.g., terminals, printers, communications ports, mouse
Types of Buffering
1. Single Buffering
2. Double Buffering
3. Circular Buffering
4. No Buffering
Single Buffer (Block-oriented data)
•When a user process issues an I/O request, the OS assigns a buffer in the system portion of
main memory to the operation
Reading ahead:
Input transfers are made to the system buffer. When the transfer is complete, the process
moves the block into user space and immediately requests another block.
When data are being transmitted to a device, they are first copied from the user space into
the system buffer, from which they will ultimately be written.
Performance comparison between single
buffering and no buffering
• Without buffering
– Execution time per block is essentially T + C
T - time required to input one block
C - computation time between input requests
• With buffering
– The time per block is approximately max[C, T] + M
M - time required to move the data from the system buffer to user memory
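As a quick sanity check of the two formulas, here is a tiny Python snippet with made-up values for T, C, and M (illustrative time units, not measurements).

```python
# Quick numeric check of the formulas above, with assumed example values.
T = 100   # time required to input one block
C = 60    # computation time between input requests
M = 5     # time to move a block from the system buffer to user memory

print("no buffering:    ", T + C)          # 160 per block
print("single buffering:", max(C, T) + M)  # 105 per block
```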
Single Buffer (Stream-oriented data)
line-at-a-time fashion:
◦ user input is one line at a time, with a carriage return signalling the
end of a line
◦ output to the terminal is similarly one line at a time (e.g., a line printer)
byte-at-a-time fashion
◦ used on forms-mode terminals when each key stroke is significant
◦ user process follows the producer/consumer model
Double Buffer or buffer swapping
A process now transfers data to (or from) one buffer while the operating system empties (or fills)
the other. This technique is known as double buffering
Block-oriented transfer: the execution time is approximately max[C, T].
Stream-oriented input:
◦ For line-at-a-time I/O, the user process need not be suspended for input or output, unless the process runs ahead of the double buffers.
◦ For byte-at-a-time operation, double buffering offers no particular advantage over a single buffer.
In both cases, the producer/consumer model is followed.
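Below is a minimal Python sketch of the buffer-swapping idea: an "OS" filler thread fills one buffer while the process consumes the other, then the two swap roles. The function and variable names are invented for this toy example; semaphores simply enforce the alternation.

```python
# Toy sketch of double buffering (buffer swapping) using two buffers and
# semaphores; all names are illustrative.
import threading

def double_buffered_read(blocks, consume):
    buffers = [None, None]
    filled = [threading.Semaphore(0), threading.Semaphore(0)]
    emptied = [threading.Semaphore(1), threading.Semaphore(1)]

    def os_filler():
        for i, block in enumerate(blocks):
            idx = i % 2
            emptied[idx].acquire()    # wait until this buffer is free
            buffers[idx] = block      # "I/O transfer" into the system buffer
            filled[idx].release()     # hand it to the process

    threading.Thread(target=os_filler, daemon=True).start()

    for i in range(len(blocks)):
        idx = i % 2
        filled[idx].acquire()         # wait for buffer idx to be filled
        consume(buffers[idx])         # compute while the OS fills the other buffer
        emptied[idx].release()

double_buffered_read([b"block 0", b"block 1", b"block 2"], print)
```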
Circular Buffer
When more than two buffers are used, the collection of buffers is known as a circular
buffer, with each individual buffer being one unit of the circular buffer.
The Utility of Buffering
Buffering is one tool that can increase the efficiency of the operating
system and the performance of individual processes.
Physical disk organization
(Figure: a disk surface with its read-write head, tracks, and sectors.)
Components of a Disk
◦ Platters and spindle: the platters spin (say, at 90 rps (revolutions per second)).
◦ The arm assembly is moved in or out to position a head on a desired track.
◦ The tracks under the heads make up a cylinder (imaginary!).
◦ Only one head reads/writes at any one time.
(Figure labels: disk head, arm movement, arm assembly, tracks, sector.)
◦ Block size is a multiple of sector size (which is fixed).
Disk Device Terminology
Several platters, with information recorded magnetically on both surfaces (usually).
• The actuator moves a head (one per surface, at the end of an arm) over a track (“seek”), selects the surface, waits for the sector to rotate under the head, then reads or writes.
– “Cylinder”: all tracks under the heads.
• Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 bytes).
(Figure labels: platter, outer track, inner track, sector, actuator, head, arm.)
Disk Head, Arm, Actuator
(Figure: actuator, arm, head, platters (12), spindle.)
Disk Device Performance
(Figure: cylinder view of a disk showing platter, arm, actuator, head, sector, inner track, outer track, controller, and spindle.)
Physical disk organization
To read or write, the disk head must be positioned on the desired
track and at the beginning of the desired sector
Disk Performance Parameters:
Seek time is the time it takes to position the head on the desired track
Rotational delay or rotational latency is waiting time for block to rotate
under head
Transfer time is the time for the sector to pass under the head
Physical disk organization
Access time
= seek time + rotational latency + transfer time
Efficiency of a sequence of disk accesses strongly depends on the order of the
requests
Adjacent requests on the same track avoid additional seek and rotational
latency times
Loading a file as a unit is efficient when the file has been stored on consecutive
sectors on the same cylinder of the disk
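A small worked example of the access-time formula, using assumed drive parameters (4 ms average seek, 7200 rpm, 512-byte sectors, 100 MB/s transfer rate). The numbers are illustrative, not those of a specific disk.

```python
# Worked example: access time = seek time + rotational latency + transfer time.
seek_time_ms = 4.0                              # assumed average seek
rpm = 7200
rotational_latency_ms = 0.5 * (60_000 / rpm)    # average = half a revolution
sector_bytes = 512
transfer_rate_bytes_per_s = 100_000_000         # assumed 100 MB/s
transfer_time_ms = sector_bytes / transfer_rate_bytes_per_s * 1000

access_time_ms = seek_time_ms + rotational_latency_ms + transfer_time_ms
print(f"access time = {access_time_ms:.3f} ms")  # about 4 + 4.167 + 0.005 = 9.2 ms
```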
Disk Scheduling
The operating system is responsible for using hardware efficiently — for the disk
drives, this means having a fast access time and disk bandwidth.
Access time has two major components:
◦ Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
◦ Rotational latency is the additional time spent waiting for the disk to rotate the desired sector to the disk head.
Minimize seek time
Seek time ≈ seek distance
Disk bandwidth is the total number of bytes transferred, divided by the total
time between the first request for service and the completion of the last
transfer.
Disk Scheduling (Cont.)
Several algorithms exist to schedule the servicing of disk I/O requests.
We illustrate them with a request queue (0-199).
98, 183, 37, 122, 14, 124, 65, 67
Head pointer 53
FCFS
Illustration shows total head movement of 640 cylinders.
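A short Python check of this figure, replaying the example request queue (head starting at cylinder 53) in strict arrival order.

```python
# FCFS on the example queue: requests serviced strictly in arrival order.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
pos, total = 53, 0

for r in requests:
    total += abs(r - pos)
    pos = r

print("FCFS total head movement:", total, "cylinders")  # 640
```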
SSTF
◦ Selects the request with the minimum seek time from the current head position (shortest seek time first).
◦ SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests.
SSTF (Cont.)
total head movement of 236 cylinders
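The same example computed with a small SSTF sketch that greedily picks the closest pending request.

```python
# SSTF on the example queue: repeatedly service the pending request closest
# to the current head position.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
pos, total = 53, 0
pending = list(requests)

while pending:
    nearest = min(pending, key=lambda r: abs(r - pos))  # minimum seek distance
    total += abs(nearest - pos)
    pos = nearest
    pending.remove(nearest)

print("SSTF total head movement:", total, "cylinders")  # 236
```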
SCAN
◦ The disk arm starts at one end of the disk, and moves toward the other
end, servicing requests until it gets to the other end of the disk, where
the head movement is reversed and servicing continues.
◦ Direction of head movement is towards 0
◦ Sometimes called the elevator algorithm.
SCAN (Cont.)
total head movement of 236 cylinders
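A sketch of SCAN on the same queue, with the head at 53 and initially moving toward cylinder 0, sweeping to the end of the disk before reversing.

```python
# SCAN on the example queue: head at 53, moving toward cylinder 0 first,
# all the way to the end of the disk, then reversing.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

down = sorted((r for r in requests if r <= head), reverse=True)  # serviced going down
up = sorted(r for r in requests if r > head)                     # serviced going up

path = [head] + down + [0] + up   # the arm travels down to cylinder 0, then back up
total = sum(abs(b - a) for a, b in zip(path, path[1:]))

print("SCAN total head movement:", total, "cylinders")  # 236
```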
C-SCAN
◦ Provides a more uniform wait time than SCAN.
◦ The head moves from one end of the disk to the other, servicing requests as it
goes. When it reaches the other end, however, it immediately
returns to the beginning of the disk, without servicing any
requests on the return trip.
◦ Treats the cylinders as a circular list that wraps around from the last cylinder
to the first one.
C-SCAN (Cont.)
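A sketch of C-SCAN on the same queue. Textbooks differ on whether the return sweep counts toward total head movement, so the snippet reports both figures.

```python
# C-SCAN on the example queue: service upward to the end of the disk (199),
# return to cylinder 0 without servicing, then continue upward.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head, max_cyl = 53, 199

up = sorted(r for r in requests if r >= head)        # serviced on the way up
wrapped = sorted(r for r in requests if r < head)    # serviced after the wrap

path = [head] + up + [max_cyl, 0] + wrapped
total = sum(abs(b - a) for a, b in zip(path, path[1:]))

print("C-SCAN movement counting the return sweep:", total)             # 382
print("C-SCAN movement excluding the return sweep:", total - max_cyl)  # 183
```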
LOOK
◦ Like SCAN, but the arm goes only as far as the final request in each direction before reversing, rather than continuing to the end of the disk.
◦ In the illustrated example, the head starts at cylinder 100.
C-LOOK
Version of C-SCAN
◦ Arm only goes as far as the last request in each direction, then reverses
direction immediately, without first going all the way to the end of the disk.
C-LOOK (Cont.)
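A sketch of C-LOOK on the same queue. Here the repositioning jump from the highest request (183) back to the lowest (14) is counted in the total; that convention also varies between texts.

```python
# C-LOOK on the example queue: service upward only as far as the last request
# (183), then jump straight back to the lowest request (14) and continue.
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

up = sorted(r for r in requests if r >= head)        # 65, 67, 98, 122, 124, 183
wrapped = sorted(r for r in requests if r < head)    # 14, 37

path = [head] + up + wrapped
total = sum(abs(b - a) for a, b in zip(path, path[1:]))

print("C-LOOK total head movement:", total, "cylinders")  # 322 including the jump
```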
Selecting a Disk-Scheduling Algorithm
◦ SSTF is common and has a natural appeal
◦ SCAN and C-SCAN perform better for systems that place a heavy
load on the disk.
◦ Performance depends on the number and types of requests.
◦ Requests for disk service can be influenced by the file-allocation
method.
◦ The disk-scheduling algorithm should be written as a separate
module of the operating system, allowing it to be replaced with a
different algorithm if necessary.
◦ Either SSTF or LOOK is a reasonable choice for the default
algorithm.
◦ RAID is a technology that is used to
increase the performance and/or
reliability of data storage. The
abbreviation stands for Redundant
Array of Inexpensive Disks.
◦ A RAID system consists of two or more drives working in parallel. These can be hard disks, but there is a trend to also use the technology for SSDs (solid state drives). There are different RAID levels, each optimized for a specific situation.
RAID design architectures share three characteristics:
1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
RAID level 0 – Striping
◦ In a RAID 0 system data are split up into blocks that get written across all the
drives in the array. By using multiple disks (at least 2) at the same time, this
offers superior I/O performance.
◦ Not a true RAID because it does not include redundancy to improve
performance or provide data protection
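A tiny sketch of the striping idea: with N drives, logical block i lands on drive i mod N at stripe i // N. This is an illustrative mapping only, ignoring stripe depth and real controllers.

```python
# Illustrative RAID 0 block-to-drive mapping; no real I/O is performed.
def raid0_location(logical_block, num_drives):
    return logical_block % num_drives, logical_block // num_drives

for block in range(8):
    drive, stripe = raid0_location(block, num_drives=4)
    print(f"logical block {block} -> drive {drive}, stripe {stripe}")
```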
RAID level 0 – Striping
Advantages
1. RAID 0 offers great performance, in both read and write operations. There is no overhead caused by parity controls.
2. All storage capacity is used; there is no overhead.
3. The technology is easy to implement.
Disadvantages
RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used
for mission-critical systems.
Ideal use
RAID 0 is ideal for non-critical storage of data that have to be read/written at a high speed, such as
on an image retouching or video editing station.
RAID level 1 – Mirroring
◦ Data are stored twice by writing them to both the data drive (or set of data drives)
and a mirror drive (or set of drives). If a drive fails, the controller uses either the data
drive or the mirror drive for data recovery and continues operation. You need at least
2 drives for a RAID 1 array.
RAID level 1 – Mirroring
Advantages
1. RAID 1 offers excellent read speed and a write-speed that is comparable to that of a single drive.
2. In case a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.
Disadvantages
1. The main disadvantage is that the effective storage capacity is only half of the total drive capacity
because all data get written twice.
2. Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed
drive can only be replaced after powering down the computer it is attached to. For servers that
are used simultaneously by many people, this may not be acceptable. Such systems typically use
hardware controllers that do support hot swapping.
Ideal use
RAID-1 is ideal for mission critical storage, for instance for accounting systems. It is also suitable for
small servers in which only two data drives will be used.
RAID level 5
◦ It requires at least 3 drives but can work with up to 16. Data blocks are striped across
the drives and on one drive a parity checksum of all the block data is written. The
parity data are not written to a fixed drive
◦ Using the parity data, the computer can recalculate the data of one of the
other data blocks, should those data no longer be available
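A minimal sketch of the parity idea: the parity block is the XOR of the data blocks in a stripe, so a lost block can be rebuilt by XOR-ing the surviving blocks with the parity. Toy byte strings stand in for disk blocks; no real disks are involved.

```python
# Parity as XOR: any one missing block equals the XOR of the survivors
# and the parity block.
from functools import reduce

def xor_blocks(blocks):
    return reduce(lambda acc, b: bytes(x ^ y for x, y in zip(acc, b)), blocks)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"     # data blocks in one stripe
parity = xor_blocks([d0, d1, d2])          # stored on one of the drives

rebuilt_d1 = xor_blocks([d0, d2, parity])  # drive holding d1 has failed
print(rebuilt_d1 == d1)                    # True: the lost block is recovered
```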
RAID level 5
Advantages:
1. Read data transactions are very fast while write data transactions are somewhat slower (due to
the parity that has to be calculated).
2. If a drive fails, you still have access to all data, even while the failed drive is being replaced and
the storage controller rebuilds the data on the new drive.
Disadvantages
1. Drive failures have an effect on throughput, although this is still acceptable.
2. This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced,
restoring the data (the rebuild time) may take a day or longer, depending on the load on the array
and the speed of the controller. If another disk goes bad during that time, data are lost forever.
Ideal use
RAID 5 is a good all-round system that combines efficient storage with excellent security and decent
performance. It is ideal for file and application servers that have a limited number of data drives.
RAID level 6 – Striping with double parity
◦ RAID 6 is like RAID 5, but the parity data are written to two drives. That means it
requires at least 4 drives and can withstand 2 drives dying simultaneously. The
chances that two drives break down at exactly the same moment are of course very
small
RAID level 6 – Striping with double parity
Advantages
1. Like with RAID 5, read data transactions are very fast.
2. If two drives fail, you still have access to all data, even while the failed drives are being
replaced. So RAID 6 is more secure than RAID 5.
Disadvantages
1. Drive failures have an effect on throughput, although this is still acceptable.
2. This is complex technology. Rebuilding an array in which one drive failed can take a long
time.
RAID 0-6
(Figure: comparison of RAID levels 0 to 6.)
Editor's Notes

  1. external devices that engage in I/O with computer systems can be roughly grouped into three categories: • Human readable: Suitable for communicating with the computer user. Examples include printers and terminals, the latter consisting of video display, keyboard, and perhaps other devices such as a mouse. Machine readable: Suitable for communicating with electronic equipment. Examples are disk drives, USB keys, sensors, controllers, and actuators. • Communication: Suitable for communicating with remote devices. Examples are digital line drivers and modems.
  2. • Programmed I/O : The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy waits for the operation to be completed before proceeding. • Interrupt-driven I/O : The processor issues an I/O command on behalf of a process. There are then two possibilities. If the I/O instruction from the process is nonblocking, then the processor continues to execute instructions from the process that issued the I/O command. If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process. • Direct memory access (DMA) : A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
  3. As computer systems have evolved, there has been a pattern of increasing complexity and sophistication of individual components. Nowhere is this more evident than in the I/O function. The evolutionary steps can be summarized as follows: 1. The processor directly controls a peripheral device. This is seen in simple microprocessor-controlled devices. 2. A controller or I/O module is added. The processor uses programmed I/O without interrupts. With this step, the processor becomes somewhat divorced from the specific details of external device interfaces. 3. The same configuration as step 2 is used, but now interrupts are employed. The processor need not spend time waiting for an I/O operation to be performed, thus increasing efficiency. 4. The I/O module is given direct control of memory via DMA. It can now move a block of data to or from memory without involving the processor, except at the beginning and end of the transfer. 5. The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O. The central processing unit (CPU) directs the I/O processor to execute an I/O program in main memory. The I/O processor fetches and executes these instructions without processor intervention. This allows the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed. 6. The I/O module has a local memory of its own and is, in fact, a computer in its own right. With this architecture, a large set of I/O devices can be controlled, with minimal processor involvement. A common use for such an architecture has been to control communications with interactive terminals. The I/O processor takes care of most of the tasks involved in controlling the terminals. As one proceeds along this evolutionary path, more and more of the I/O function is performed without processor involvement. The central processor is increasingly relieved of I/O-related tasks, improving performance. With the last two steps (5 and 6), a major change occurs with the introduction of the concept of an I/O module capable of executing a program. A note about terminology: For all of the modules described in steps 4 through 6, the term direct memory access is appropriate, because all of these types involve direct control of main memory by the I/O module. Also, the I/O module in step 5 is often referred to as an I/O channel, and that in step 6 as an I/O processor; however, each term is, on occasion, applied to both situations. In the latter part of this section, we will use the term I/O channel to refer to both types of I/O modules.
  4. Direct Memory Access (DMA) means that the CPU grants the I/O module authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is involved only at the beginning and end of the transfer and is interrupted only after the entire block has been transferred.
  5. The DMA unit is capable of mimicking the processor and, indeed, of taking over control of the system bus just like a processor. It needs to do this to transfer data to and from memory over the system bus. The DMA technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending to the DMA module the following information: • Whether a read or write is requested, using the read or write control line between the processor and the DMA module • The address of the I/O device involved, communicated on the data lines • The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register • The number of words to be read or written, again communicated via the data lines and stored in the data count register The processor then continues with other work. It has delegated this I/O operation to the DMA module. The DMA module transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus, the processor is involved only at the beginning and end of the transfer.
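The sketch below gathers those four items into a single request and hands it to a simulated DMA engine; struct dma_request, dma_start(), and the fixed device_buffer are illustrative assumptions, not a real controller interface. The point it illustrates is that the CPU only fills in the descriptor and is notified once, at the end of the whole block.

/* Programming a (simulated) DMA module with the four items listed above. */
#include <stdio.h>
#include <string.h>

struct dma_request {
    int     write;      /* 1 = memory -> device, 0 = device -> memory */
    int     device;     /* address of the I/O device involved          */
    void   *mem_addr;   /* starting location in memory                 */
    size_t  count;      /* number of units (bytes here) to transfer    */
};

static char device_buffer[16] = "HELLO FROM DISK";

/* Simulated DMA engine: moves the whole block without the CPU touching
 * each word, then "raises an interrupt" by printing a message. */
static void dma_start(const struct dma_request *req)
{
    if (!req->write)
        memcpy(req->mem_addr, device_buffer, req->count);
    printf("DMA module: transfer of %zu bytes done, interrupting CPU\n",
           req->count);
}

int main(void)
{
    char buf[16] = { 0 };
    struct dma_request req = { 0, 3, buf, sizeof buf - 1 };  /* a block read */
    dma_start(&req);            /* the CPU would now do other work           */
    printf("memory now holds: \"%s\"\n", buf);
    return 0;
}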
  6. The DMA mechanism can be configured in a variety of ways. Some possibilities are shown in Figure 11.3. In the first example, all modules share the same system bus. The DMA module, acting as a surrogate processor, uses programmed I/O to exchange data between memory and an I/O module through the DMA module. This configuration, while it may be inexpensive, is clearly inefficient: As with processor-controlled programmed I/O, each transfer of a word consumes two bus cycles (transfer request followed by transfer). The number of required bus cycles can be cut substantially by integrating the DMA and I/O functions. As Figure 11.3b indicates, this means that there is a path between the DMA module and one or more I/O modules that does not include the system bus. The DMA logic may actually be a part of an I/O module, or it may be a separate module that controls one or more I/O modules. This concept can be taken one step further by connecting I/O modules to the DMA module using an I/O bus (Figure 11.3c). This reduces the number of I/O interfaces in the DMA module to one and provides for an easily expandable configuration. In all of these cases (Figures 11.3b and 11.3c), the system bus that the DMA module shares with the processor and main memory is used by the DMA module only to exchange data with memory and to exchange control signals with the processor. The exchange of data between the DMA and I/O modules takes place off the system bus.
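As a back-of-the-envelope illustration of that saving: the shared-bus configuration costs two system-bus cycles per word (as stated above), while the integrated configuration is assumed here, for illustration, to touch the system bus once per word because the device-side half of the transfer moves off the system bus.

/* Rough system-bus traffic for one 512-word block in the two setups. */
#include <stdio.h>

int main(void)
{
    int words = 512;                        /* size of one transferred block */
    printf("shared system bus : %d bus cycles\n", 2 * words);
    printf("integrated DMA/IO : %d bus cycles\n", 1 * words);
    return 0;
}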
  8. With the use of multiple disks, there is a wide variety of ways in which the data can be organized and in which redundancy can be added to improve reliability. This could make it difficult to develop database schemes that are usable on a number of platforms and operating systems. Fortunately, industry has agreed on a standardized scheme for multiple-disk database design, known as RAID (redundant array of independent disks). The RAID scheme consists of seven levels, zero through six. These levels do not imply a hierarchical relationship but designate different design architectures that share three common characteristics: 1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive. 2. Data are distributed across the physical drives of an array in a scheme known as striping, described subsequently. 3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
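To make the striping idea concrete, the sketch below maps logical block numbers onto a hypothetical 4-disk array using simple round-robin placement (logical block b goes to disk b mod 4 at stripe b div 4); parity placement, as used by RAID 4/5/6, is left out so the mapping stays obvious.

/* Round-robin (striped) placement of logical blocks across an array. */
#include <stdio.h>

#define NDISKS 4   /* illustrative array size */

int main(void)
{
    for (int block = 0; block < 8; block++)
        printf("logical block %d -> disk %d, stripe %d\n",
               block, block % NDISKS, block / NDISKS);
    return 0;
}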