This document discusses input/output (I/O) management and disk scheduling. It begins by categorizing I/O devices as those for communicating with users, electronic equipment, and remote devices. It then describes how I/O devices differ in data rates, applications, control complexity, data transfer units, data representation, and error handling. The document outlines three I/O techniques - programmed I/O, interrupt-driven I/O, and direct memory access (DMA). It also discusses the evolution of I/O architectures and covers I/O buffering, disk organization, and disk terminology.
2. External devices that engage in I/O with computer systems can be grouped into three categories:
• Human readable: suitable for communicating with the computer user (printers, terminals, video displays, keyboards, mice)
• Machine readable: suitable for communicating with electronic equipment (disk drives, USB keys, sensors, controllers)
• Communication: suitable for communicating with remote devices (modems, digital line drivers)
3. How do I/O devices differ?
I/O devices are differentiated based on the following parameters:
◦ Data rates: data transfer rates range from roughly 10^1 to 10^9 bps across I/O devices. The keyboard has the lowest data rate among input devices.
◦ Applications: how the device is used influences the software and policies in the OS and supporting utilities.
◦ Complexity of control: depends on the device; a printer requires simple control compared to a disk.
◦ Unit of transfer: data may be transferred as a stream of bytes or characters, or in larger blocks.
◦ Data representation: different devices use different data encoding schemes.
◦ Error conditions: the nature of errors differs from one device to another.
4. Three techniques for performing I/O are:
Programmed I/O
◦ the processor issues an I/O command on behalf of a process to an I/O module; that process then busy waits for the operation to be completed before proceeding
Interrupt-driven I/O
◦ the processor issues an I/O command on behalf of a process
◦ if non-blocking – the processor continues to execute instructions from the process that issued the I/O command
◦ if blocking – the next instruction the processor executes is from the OS, which will put the current process in a blocked state and schedule another process
Direct Memory Access (DMA)
◦ a DMA module controls the exchange of data between main memory and an I/O module
5. Evolution of the I/O function:
1. The processor directly controls a peripheral device.
2. A controller or I/O module is added. The processor uses programmed I/O without interrupts.
3. Same configuration as step 2, but now interrupts are added.
4. The I/O module is given direct control of memory via DMA.
5. The I/O module is enhanced to become a separate processor; the CPU directs the I/O processor to execute an I/O program in main memory.
6. The I/O module has a local memory of its own. With this architecture, a large set of I/O devices can be controlled.
6. Device Controllers
◦ Device drivers are software modules that can be plugged into an OS to handle a particular device. The operating system relies on device drivers to handle all I/O devices.
◦ The device controller works as an interface between a device and a device driver. I/O units (keyboard, mouse, printer, etc.) typically consist of a mechanical component and an electronic component; the electronic component is called the device controller.
◦ Each device has a device controller and a device driver to communicate with the operating system.
8. ◦ Slow devices like keyboards generate an interrupt to the main CPU after each byte is transferred. If a fast device such as a disk generated an interrupt for each byte, the operating system would spend most of its time handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this overhead.
◦ Direct memory access needs special hardware called a DMA controller (DMAC) that manages the data transfers and arbitrates access to the system bus.
9. ◦ The controller is programmed with source and destination pointers (where to read/write the data), stored in registers
◦ counters track the number of transferred bytes
◦ after completion of the transfer, it generates an interrupt
10. 1. The device driver is instructed to transfer disk data to a buffer at address X.
2. The device driver then instructs the disk controller to transfer data to the buffer.
3. The disk controller starts the DMA transfer.
4. The disk controller sends each byte to the DMA controller.
5. The DMA controller transfers bytes to the buffer, increasing the memory address and decreasing the counter C until C becomes zero.
6. When C becomes zero, the DMA controller interrupts the CPU to signal transfer completion.
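The six steps above can be sketched as a toy simulation. This is purely illustrative (the function name, the bytearray "memory", and the returned "interrupt" marker are all assumptions, not real driver code), but it shows the address-increment/counter-decrement loop the DMA controller performs:

```python
# Toy sketch of a DMA block transfer: write incoming bytes to a memory
# buffer, increasing the address and decreasing the counter C; when C
# reaches zero, "interrupt" the CPU. Names are illustrative only.

def dma_transfer(disk_bytes, memory, buffer_addr):
    addr = buffer_addr        # DMA address register (points into the buffer)
    c = len(disk_bytes)       # DMA count register C
    for b in disk_bytes:      # disk controller hands each byte to the DMAC
        memory[addr] = b      # DMAC writes the byte into the buffer
        addr += 1             # increase the memory address
        c -= 1                # decrease the counter C
    assert c == 0             # transfer complete
    return "interrupt"        # DMAC interrupts the CPU

memory = bytearray(16)
status = dma_transfer(b"DISK", memory, buffer_addr=4)
# memory[4:8] now holds b"DISK"; status == "interrupt"
```

The point of the sketch is that the CPU appears nowhere inside the loop; it is involved only before the call (programming the registers) and after it (handling the interrupt).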
11. Types of DMA Configurations
◦ Single-bus, detached DMA: inefficient, because the same bus is shared for every transfer
◦ Single-bus, integrated DMA-I/O
13. OS Design Issues
Design objectives:
◦ Efficiency: most of the effort goes into improving the efficiency of disk I/O
◦ Generality:
◦ use a hierarchical, modular structure to design the I/O function
◦ hide most of the details of device I/O in lower-level routines, so that user processes and upper levels of the OS see devices in terms of general functions such as read, write, open, close, lock, unlock
14. Logical Structure of the I/O Function
◦ The hierarchical philosophy is that the functions of the OS should be separated according to their complexity, their characteristic time scale, and their level of abstraction.
◦ This leads to a layered approach in which each layer performs a related subset of functions.
◦ Changes in one layer do not require changes in other layers.
◦ I/O follows the same layered approach.
16. Local peripheral device
• Concerned with managing general I/O functions on behalf of user processes
• User processes deal with the device in terms of a device identifier and commands such as open, close, read, write
• Operations and data are converted into appropriate sequences of I/O instructions, channel commands, and controller orders
• Buffering improves utilization
• Queuing, scheduling, and control of I/O operations
• Interacts with the I/O module and hardware
18. File system
• symbolic file names are converted to identifiers that reference files through file descriptors
• files can be added, deleted, and reorganized
• deals with the logical structure of files
• operations: open, close, read, write
• access rights are managed
• logical references to files and records must be converted to physical secondary storage addresses
• allocation of secondary storage space and main storage buffers
19. I/O Buffering
Why is buffering required?
◦ When a user process wants to read blocks of data from a disk, the process waits for the transfer.
◦ It waits either by
◦ busy waiting, or
◦ process suspension on an interrupt.
◦ The problems with this approach:
◦ the program waits for slow I/O;
◦ the virtual memory locations involved must stay in main memory for the duration of the block transfer;
◦ there is a risk of single-process deadlock;
◦ the process is blocked during the transfer and may not be swapped out.
These inefficiencies can be resolved if input transfers are made in advance of requests and output transfers are performed some time after the request is made. This technique is known as buffering.
20. Types of I/O Devices
Block-oriented:
◦ stores information in blocks that are usually of fixed size, and transfers are made one block at a time
◦ reference to data is made by its block number
◦ e.g., disks and USB keys
Stream-oriented:
◦ transfers data in and out as a stream of bytes, with no block structure
◦ e.g., terminals, printers, communications ports, mouse
21. Types of Buffering
1. Single Buffering
2. Double Buffering
3. Circular Buffering
4. No Buffering
22. Single Buffer (Block-oriented data)
• When a user process issues an I/O request, the OS assigns a buffer in the system portion of main memory to the operation.
Reading ahead:
Input transfers are made to the system buffer. When the transfer is complete, the process moves the block into user space and immediately requests another block.
When data are being transmitted to a device, they are first copied from the user space into the system buffer, from which they will ultimately be written.
23. Performance comparison between single buffering and no buffering
• Without buffering:
– execution time per block is essentially T + C
T – time required to input one block
C – computation time between input requests
• With buffering:
– execution time per block is max[C, T] + M
M – time required to move the data from the system buffer to user memory
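A quick worked example of these two formulas, with illustrative numbers (T, C, and M below are assumptions, not values from the slides):

```python
# Per-block timing: without buffering the process serializes I/O and
# computation (T + C); with a single buffer the input of the next block
# overlaps with computation on the current one (max[C, T] + M).

T = 100   # time to input one block (arbitrary units)
C = 50    # computation time between input requests
M = 5     # time to move a block from the system buffer to user memory

no_buffer     = T + C          # 100 + 50  -> 150
single_buffer = max(C, T) + M  # 100 + 5   -> 105

print(no_buffer, single_buffer)  # 150 105
```

Here buffering wins because M is much smaller than C; if M approached C, the advantage would shrink.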
24. Single Buffer (Stream-oriented data)
Line-at-a-time fashion:
◦ user input is one line at a time, with a carriage return signalling the end of a line
◦ output to the terminal is similarly one line at a time, e.g., a line printer
Byte-at-a-time fashion:
◦ used on forms-mode terminals, where each keystroke is significant
◦ the user process follows the producer/consumer model
25. Double Buffer (buffer swapping)
A process transfers data to (or from) one buffer while the operating system empties (or fills) the other. This technique is known as double buffering.
Block-oriented transfer: the execution time per block is max[C, T].
Stream-oriented input:
◦ line-at-a-time I/O: the user process need not be suspended for input or output, unless the process runs ahead of the double buffers
◦ byte-at-a-time operation: no particular advantage over a single buffer
In both cases, the producer/consumer model is followed.
26. Circular Buffer
When more than two buffers are used, the collection of buffers is known as a circular buffer, with each individual buffer being one unit of the circular buffer.
27. The Utility of Buffering
Buffering is one tool that can increase the efficiency of the operating
system and the performance of individual processes.
29. Components of a Disk
◦ The platters spin on a spindle (say, at 90 rps (revolutions per second)).
◦ The arm assembly is moved in or out to position a head on a desired track.
◦ The tracks under the heads make up a cylinder (imaginary!).
◦ Only one head reads/writes at any one time.
◦ Block size is a multiple of sector size (which is fixed).
(Figure labels: platters, spindle, disk head, arm assembly, tracks, sector.)
30. Disk Device Terminology
• Several platters, with information recorded magnetically on both surfaces (usually).
• The actuator moves the head (at the end of an arm, one per surface) over a track ("seek"), selects a surface, waits for the sector to rotate under the head, then reads or writes.
• "Cylinder": all tracks under the heads.
• Bits are recorded in tracks, which in turn are divided into sectors (e.g., 512 bytes).
(Figure labels: platter, outer track, inner track, sector, actuator, head, arm.)
31. Disk Head, Arm, Actuator
(Figure labels: actuator, arm, head, platters (12), spindle.)
34. Physical disk organization
To read or write, the disk head must be positioned on the desired track and at the beginning of the desired sector.
Disk performance parameters:
◦ Seek time is the time it takes to position the head on the desired track.
◦ Rotational delay (or rotational latency) is the waiting time for the block to rotate under the head.
◦ Transfer time is the time for the sector to pass under the head.
38. Physical disk organization
Access time = seek time + rotational latency + transfer time
The efficiency of a sequence of disk accesses strongly depends on the order of the requests:
◦ adjacent requests on the same track avoid additional seek and rotational latency times;
◦ loading a file as a unit is efficient when the file has been stored on consecutive sectors on the same cylinder of the disk.
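The access-time formula can be made concrete with a small calculation for a hypothetical disk (all parameter values below are assumptions for illustration, not figures from the slides):

```python
# Access time = seek time + rotational latency + transfer time,
# for a made-up 7200 rpm disk with 500 sectors per track.

rpm = 7200
seek_ms = 4.0                                  # assumed average seek time
rotation_ms = 60_000 / rpm                     # one full rotation: ~8.33 ms
rotational_latency_ms = rotation_ms / 2        # on average, half a rotation
sectors_per_track = 500
transfer_ms = rotation_ms / sectors_per_track  # time for one sector to pass

access_ms = seek_ms + rotational_latency_ms + transfer_ms
print(round(access_ms, 2))  # 8.18
```

Note how seek and rotational latency dominate: the transfer of a single sector costs microseconds, which is why request ordering (the next slides) matters so much.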
39. Disk Scheduling
The operating system is responsible for using hardware efficiently; for the disk drives, this means achieving a fast access time and high disk bandwidth.
Access time has two major components:
◦ Seek time is the time for the disk arm to move the heads to the cylinder containing the desired sector.
◦ Rotational latency is the additional time waiting for the disk to rotate the desired sector to the disk head.
We want to minimize seek time; seek time is roughly proportional to seek distance.
Disk bandwidth is the total number of bytes transferred, divided by the total time between the first request for service and the completion of the last transfer.
40. Disk Scheduling (Cont.)
Several algorithms exist to schedule the servicing of disk I/O requests. We illustrate them with a request queue of cylinder numbers (0-199):
98, 183, 37, 122, 14, 124, 65, 67
Head pointer: 53
42. SSTF
◦ Selects the request with the minimum seek time from the current head position (shortest seek time first).
◦ SSTF scheduling is a form of SJF scheduling; it may cause starvation of some requests.
44. SCAN
◦ The disk arm starts at one end of the disk and moves toward the other end, servicing requests until it gets to the other end of the disk, where the head movement is reversed and servicing continues.
◦ In this example, the direction of head movement is towards 0.
◦ Sometimes called the elevator algorithm.
46. C-SCAN
◦ Provides a more uniform wait time than SCAN.
◦ The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end, however, it immediately returns to the beginning of the disk, without servicing any requests on the return trip.
◦ Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
50. C-LOOK
A version of C-SCAN:
◦ the arm only goes as far as the last request in each direction, then reverses direction immediately, without first going all the way to the end of the disk.
52. Selecting a Disk-Scheduling Algorithm
◦ SSTF is common and has a natural appeal.
◦ SCAN and C-SCAN perform better for systems that place a heavy load on the disk.
◦ Performance depends on the number and types of requests.
◦ Requests for disk service can be influenced by the file-allocation method.
◦ The disk-scheduling algorithm should be written as a separate module of the operating system, allowing it to be replaced with a different algorithm if necessary.
◦ Either SSTF or LOOK is a reasonable choice for the default algorithm.
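The algorithms above can be compared directly on the sample queue from slide 40 (cylinders 0-199, queue 98, 183, 37, 122, 14, 124, 65, 67, head at 53). The sketch below implements FCFS, SSTF, and LOOK (LOOK moving toward 0 first, matching the slides' direction) and totals the head movement for each:

```python
# Total head movement (sum of seek distances) for three scheduling
# policies on the slides' sample request queue.

QUEUE, HEAD = [98, 183, 37, 122, 14, 124, 65, 67], 53

def fcfs(queue, head):
    total = 0
    for q in queue:                 # service in arrival order
        total += abs(q - head)
        head = q
    return total

def sstf(queue, head):
    pending, total = list(queue), 0
    while pending:                  # always pick the closest request
        nxt = min(pending, key=lambda q: abs(q - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

def look(queue, head):
    # Move toward 0 first, servicing requests below the head in
    # descending order, then reverse and sweep upward; the arm stops
    # at the last request in each direction (LOOK, not SCAN).
    down = sorted((q for q in queue if q < head), reverse=True)
    up = sorted(q for q in queue if q >= head)
    total, pos = 0, head
    for q in down + up:
        total += abs(q - pos)
        pos = q
    return total

print(fcfs(QUEUE, HEAD), sstf(QUEUE, HEAD), look(QUEUE, HEAD))  # 640 236 208
```

The totals (640 for FCFS, 236 for SSTF, 208 for LOOK) show why FCFS is a poor default and why SSTF or LOOK is the usual recommendation.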
53. ◦ RAID is a technology that is used to increase the performance and/or reliability of data storage. The abbreviation stands for Redundant Array of Inexpensive Disks.
◦ A RAID system consists of two or more drives working in parallel. These disks can be hard disks, but there is a trend to also use the technology for SSDs (solid state drives). There are different RAID levels, each optimized for a specific situation.
The design architectures share three characteristics:
1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
54. RAID level 0 – Striping
◦ In a RAID 0 system, data are split up into blocks that get written across all the drives in the array. By using multiple disks (at least 2) at the same time, this offers superior I/O performance.
◦ RAID 0 is not a true RAID level, because it does not include redundancy to provide data protection.
55. RAID level 0 – Striping
Advantages
1. RAID 0 offers great performance, in both read and write operations. There is no overhead caused by parity controls.
2. All storage capacity is used; there is no overhead.
3. The technology is easy to implement.
Disadvantages
RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0 array are lost. It should not be used for mission-critical systems.
Ideal use
RAID 0 is ideal for non-critical storage of data that have to be read/written at high speed, such as on an image retouching or video editing station.
56. RAID level 1 – Mirroring
◦ Data are stored twice by writing them to both the data drive (or set of data drives) and a mirror drive (or set of drives). If a drive fails, the controller uses either the data drive or the mirror drive for data recovery and continues operation. You need at least 2 drives for a RAID 1 array.
57. RAID level 1 – Mirroring
Advantages
1. RAID 1 offers excellent read speed and a write speed comparable to that of a single drive.
2. In case a drive fails, data do not have to be rebuilt; they just have to be copied to the replacement drive.
Disadvantages
1. The main disadvantage is that the effective storage capacity is only half of the total drive capacity, because all data get written twice.
2. Software RAID 1 solutions do not always allow a hot swap of a failed drive. That means the failed drive can only be replaced after powering down the computer it is attached to. For servers that are used simultaneously by many people, this may not be acceptable. Such systems typically use hardware controllers that do support hot swapping.
Ideal use
RAID 1 is ideal for mission-critical storage, for instance for accounting systems. It is also suitable for small servers in which only two data drives will be used.
58. RAID level 5
◦ RAID 5 requires at least 3 drives but can work with up to 16. Data blocks are striped across the drives, and on one drive a parity checksum of all the block data is written. The parity data are not written to a fixed drive; they are spread across all drives.
◦ Using the parity data, the computer can recalculate the data of one of the other data blocks, should those data no longer be available.
59. RAID level 5
Advantages
1. Read data transactions are very fast, while write data transactions are somewhat slower (due to the parity that has to be calculated).
2. If a drive fails, you still have access to all data, even while the failed drive is being replaced and the storage controller rebuilds the data on the new drive.
Disadvantages
1. Drive failures have an effect on throughput, although this is still acceptable.
2. This is complex technology. If one of the disks in an array using 4TB disks fails and is replaced, restoring the data (the rebuild time) may take a day or longer, depending on the load on the array and the speed of the controller. If another disk goes bad during that time, data are lost forever.
Ideal use
RAID 5 is a good all-round system that combines efficient storage with excellent security and decent performance. It is ideal for file and application servers that have a limited number of data drives.
60. RAID level 6 – Striping with double parity
◦ RAID 6 is like RAID 5, but the parity data are written to two drives. That means it requires at least 4 drives and can withstand 2 drives dying simultaneously. The chances that two drives break down at exactly the same moment are of course very small.
61. RAID level 6 – Striping with double parity
Advantages
1. As with RAID 5, read data transactions are very fast.
2. If two drives fail, you still have access to all data, even while the failed drives are being replaced. So RAID 6 is more secure than RAID 5.
Disadvantages
1. Drive failures have an effect on throughput, although this is still acceptable.
2. This is complex technology. Rebuilding an array in which one drive failed can take a long time.
External devices that engage in I/O with computer systems can be roughly grouped into three categories:
• Human readable: Suitable for communicating with the computer user. Examples include printers and terminals, the latter consisting of video display, keyboard, and perhaps other devices such as a mouse.
• Machine readable: Suitable for communicating with electronic equipment. Examples are disk drives, USB keys, sensors, controllers, and actuators.
• Communication: Suitable for communicating with remote devices. Examples are digital line drivers and modems.
• Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy waits for the operation to be completed before proceeding.
• Interrupt-driven I/O: The processor issues an I/O command on behalf of a process. There are then two possibilities. If the I/O instruction from the process is nonblocking, then the processor continues to execute instructions from the process that issued the I/O command. If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process.
• Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
As computer systems have evolved, there has been a pattern of increasing complexity and sophistication of individual components. Nowhere is this more evident than in the I/O function. The evolutionary steps can be summarized as follows:
1. The processor directly controls a peripheral device. This is seen in simple microprocessor-controlled devices.
2. A controller or I/O module is added. The processor uses programmed I/O without interrupts. With this step, the processor becomes somewhat divorced from the specific details of external device interfaces.
3. The same configuration as step 2 is used, but now interrupts are employed. The processor need not spend time waiting for an I/O operation to be performed, thus increasing efficiency.
4. The I/O module is given direct control of memory via DMA. It can now move a block of data to or from memory without involving the processor, except at the beginning and end of the transfer.
5. The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O. The central processing unit (CPU) directs the I/O processor to execute an I/O program in main memory. The I/O processor fetches and executes these instructions without processor intervention. This allows the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed.
6. The I/O module has a local memory of its own and is, in fact, a computer in its own right. With this architecture, a large set of I/O devices can be controlled, with minimal processor involvement. A common use for such an architecture has been to control communications with interactive terminals. The I/O processor takes care of most of the tasks involved in controlling the terminals.
As one proceeds along this evolutionary path, more and more of the I/O function is performed without processor involvement. The central processor is increasingly relieved of I/O-related tasks, improving performance. With the last two steps (5 and 6), a major change occurs with the introduction of the concept of an I/O module capable of executing a program.
A note about terminology: For all of the modules described in steps 4 through 6, the term direct memory access is appropriate, because all of these types involve direct control of main memory by the I/O module. Also, the I/O module in step 5 is often referred to as an I/O channel , and that in step 6 as an I/O processor ; however, each term is, on occasion, applied to both situations. In the latter part of this section, we will use the term I/O channel to refer to both types of I/O modules.
Direct memory access (DMA) means the CPU grants an I/O module the authority to read from or write to memory without CPU involvement. The DMA module itself controls the exchange of data between main memory and the I/O device. The CPU is involved only at the beginning and end of the transfer, and is interrupted only after the entire block has been transferred.
The DMA unit is capable of mimicking the processor and, indeed, of taking over control of the system bus just like a processor. It needs to do this to transfer data to and from memory over the system bus.
The DMA technique works as follows. When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the following information to the DMA module:
• Whether a read or write is requested, using the read or write control line between the processor and the DMA module
• The address of the I/O device involved, communicated on the data lines
• The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
• The number of words to be read or written, again communicated via the data lines and stored in the data count register
The processor then continues with other work. It has delegated this I/O operation to the DMA module. The DMA module transfers the entire block of data, one word at a time, directly to or from memory, without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor. Thus, the processor is involved only at the beginning and end of the transfer .
The DMA mechanism can be configured in a variety of ways. Some possibilities are shown in Figure. In the first example, all modules share the same system bus. The DMA module, acting as a surrogate processor, uses programmed I/O to exchange data between memory and an I/O module through the DMA module. This configuration, while it may be inexpensive, is clearly inefficient: As with processor-controlled programmed I/O, each transfer of a word consumes two bus cycles (transfer request followed by transfer).
The number of required bus cycles can be cut substantially by integrating the DMA and I/O functions. As Figure 11.3b indicates, this means that there is a path between the DMA module and one or more I/O modules that does not include the system bus. The DMA logic may actually be a part of an I/O module, or it may be a separate module that controls one or more I/O modules. This concept can be taken one step further by connecting I/O modules to the DMA module using an I/O bus ( Figure 11.3c ). This reduces the number of I/O interfaces in the DMA module to one and provides for an easily expandable configuration. In all of these cases ( Figure 11.3b and 11.3c ), the system bus that the DMA module shares with the processor and main memory is used by the DMA module only to exchange data with memory and to exchange control signals with the processor. The exchange of data between the DMA and I/O modules takes place off the system bus.
With the use of multiple disks, there is a wide variety of ways in which the data can be organized and in which redundancy can be added to improve reliability. This could make it difficult to develop database schemes that are usable on a number of platforms and operating systems. Fortunately, industry has agreed on a standardized scheme for multiple-disk database design, known as RAID (redundant array of independent disks). The RAID scheme consists of seven levels, zero through six. These levels do not imply a hierarchical relationship but designate different design architectures that share three common characteristics:
1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping, described subsequently.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.