INTELLIGENT DISK SUBSYSTEMS – 2, I/O TECHNIQUES – 1
Caching: acceleration of hard disk access; intelligent disk subsystems; availability of disk subsystems. The physical I/O path from the CPU to the storage system; SCSI.
I/O TECHNIQUES – 2, NETWORK ATTACHED STORAGE
Fibre Channel protocol stack; Fibre Channel SAN; IP storage. The NAS architecture; the NAS hardware architecture; the NAS software architecture; network connectivity; NAS as a storage system.
FILE SYSTEM AND NAS: Local file systems; network file systems and file servers; shared disk file systems; comparison of Fibre Channel and NAS.
STORAGE VIRTUALIZATION: Definition of storage virtualization; implementation considerations; storage virtualization on block or file level; storage virtualization on various levels of the storage network; symmetric and asymmetric storage virtualization in the network.
INTRODUCTION: Server-centric IT architecture and its limitations; storage-centric IT architecture and its advantages; case study: replacing a server with storage networks; the data storage and data access problem; the battle for size and access.
INTELLIGENT DISK SUBSYSTEMS – 1
Architecture of intelligent disk subsystems; hard disks and internal I/O channels; JBOD; storage virtualization using RAID and different RAID levels.
SAN ARCHITECTURE AND HARDWARE DEVICES: Overview; creating a network for storage; SAN hardware devices; the Fibre Channel switch; host bus adapters; putting the storage in SAN; fabric operation from a hardware perspective.
Redundant Array of Independent Disks (RAID) is a family of techniques that organize multiple disks to provide high performance and/or reliability.
INTELLIGENT DISK SUBSYSTEMS
Caching: Acceleration of Hard Disk Access
In the field of disk subsystems, caches are designed to accelerate write and read
accesses to physical hard disks.
Here we can distinguish between two types of cache:
1. Cache on the hard disk
2. Cache in the RAID controller.
a. Write cache
b. Read cache
Cache on the hard disk
Each individual hard disk comes with a very small cache.
This is necessary because the transfer rate of the I/O channel to the disk controller is
significantly higher than the speed at which the disk controller can write to or read
from the physical hard disk.
If a server or a RAID controller writes a block to a physical hard disk, the disk
controller stores this in its cache. The disk controller can thus write the block to the
physical hard disk in its own time while the I/O channel can be used for data traffic
to the other hard disks.
The hard disk controller transfers the block from its cache to the RAID controller or to the server at the higher data rate of the I/O channel.
Note1: The disk controller is the circuit which enables the CPU to communicate with a hard disk.
Note2: A RAID controller is a hardware device or software program used to manage hard disk drives.
Note3: A RAID controller is often improperly shortened to a disk controller. The two should not be confused as they
provide very different functionality.
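The write-back behaviour of the disk controller's cache described above can be sketched in a few lines. This is a minimal illustrative model, not a real controller API; the class and method names are invented for the sketch.

```python
# Minimal sketch of write-back caching in a hard disk controller.
# The block is acknowledged as soon as it sits in the cache, freeing the
# I/O channel; the platter write happens later, "in its own time".

class DiskController:
    def __init__(self):
        self.cache = {}      # blocks buffered in the controller's small cache
        self.platter = {}    # blocks actually written to the physical disk

    def write_block(self, lba, data):
        """Accept a block at I/O-channel speed and acknowledge immediately."""
        self.cache[lba] = data
        return "ack"         # the I/O channel is now free for other disks

    def flush(self):
        """Write all cached blocks to the platter in the background."""
        self.platter.update(self.cache)
        self.cache.clear()

ctrl = DiskController()
assert ctrl.write_block(7, b"payload") == "ack"   # acknowledged before the platter write
assert 7 not in ctrl.platter                      # data still only in the cache
ctrl.flush()
assert ctrl.platter[7] == b"payload"
```

The same decoupling of acknowledgment from the physical write is what makes the I/O channel available for traffic to the other hard disks in the meantime.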
Write cache in the disk subsystem controller (RAID controller)
Disk subsystem controllers come with their own cache, which in some models is gigabytes in size.
Many applications do not write data at a continuous rate, but in batches. If a server sends
several data blocks to the disk subsystem, the controller initially buffers all blocks into a
write cache with a battery backup and immediately reports back to the server that all data
has been securely written to the drive.
The battery backup is necessary to allow the data in the write cache to survive a power cut.
Read cache in the disk subsystem controller (RAID controller)
To speed up read access by the server, the disk subsystem’s controller must copy the relevant data blocks from the slower physical hard disk to the fast cache before the server requests the data in question.
The problem with this is that it is very difficult for the disk subsystem’s controller to work out in advance what data the server will ask for next.
The controller can only analyse past data access and use this to extrapolate which data blocks the server will access next.
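One common extrapolation is sequential read-ahead: after a miss, the controller also copies the following blocks into the cache on the guess that the server is reading sequentially. The sketch below is illustrative; the function names and the prefetch depth of 4 are assumptions, not a vendor algorithm.

```python
# Sketch of sequential read-ahead in a RAID controller's read cache.
PREFETCH = 4   # how many extra blocks to copy ahead (illustrative value)

def read_block(lba, cache, disk):
    """Return block `lba`; on a miss, also prefetch the next PREFETCH blocks."""
    if lba in cache:
        return cache[lba], "hit"
    # copy the requested block plus the following ones from slow disk to fast cache
    for n in range(lba, lba + 1 + PREFETCH):
        if n in disk:
            cache[n] = disk[n]
    return cache[lba], "miss"

disk = {n: f"block-{n}" for n in range(16)}
cache = {}
assert read_block(0, cache, disk)[1] == "miss"   # first access goes to the disk
assert read_block(1, cache, disk)[1] == "hit"    # already prefetched into the cache
```

A real controller would weigh prefetching against cache pollution for random workloads; the sketch only shows why past access patterns let the controller serve future reads from the cache.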
Intelligent disk subsystems
Intelligent disk subsystems represent the third level of complexity for controllers after JBODs
and RAID arrays.
The controllers of intelligent disk subsystems offer additional functions over and above those offered by RAID.
These functions include instant copies, remote mirroring and LUN masking.
Instant copies
Instant copies can virtually copy data sets of several terabytes within a disk subsystem in a
few seconds.
Virtual copying means that disk subsystems fool the attached servers into believing that they
are capable of copying such large data quantities in such a short space of time.
The actual copying process takes significantly longer. However, the same server, or a second server, can access the virtually copied data after a few seconds.
1. Server 1 works on the original data
2. The original data is virtually copied in a few seconds
3. Then server 2 can work with the data copy, whilst server 1 continues to operate with the original data
Instant copies are used, for example, for the generation of test data, for the backup of data
and for the generation of data copies for data mining.
When copying data using instant copies, attention should be paid to the consistency of the copied data.
The background process for copying all the data of an instant copy requires many hours when very large data volumes are involved, which makes a full copy each time impractical.
One remedy is the incremental instant copy, where the data is copied in its entirety only the first time around. Afterwards the instant copy is repeated – for example, daily – whereby only the changes since the previous instant copy are copied.
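The "virtual copy in seconds" trick is typically realised with copy-on-write: the copy initially shares all blocks with the original, and a block is physically duplicated only when the original is about to overwrite it. The sketch below is an illustrative model of that idea; the class and function names are invented, and real disk subsystems implement this in firmware.

```python
# Copy-on-write sketch of an instant copy.
class Volume:
    def __init__(self, blocks):
        self.blocks = blocks          # lba -> data

def instant_copy(original):
    """Virtual copy: completes immediately by sharing the original's blocks."""
    snap = Volume({})                 # stores only blocks that have diverged
    snap.base = original
    return snap

def snap_read(snap, lba):
    """The copy sees its own preserved block, or falls through to the original."""
    return snap.blocks.get(lba, snap.base.blocks[lba])

def orig_write(original, snap, lba, data):
    """Before overwriting, preserve the old block for the snapshot."""
    if lba not in snap.blocks:
        snap.blocks[lba] = original.blocks[lba]
    original.blocks[lba] = data

vol = Volume({0: "A", 1: "B"})
snap = instant_copy(vol)              # server 2 can work with this copy at once
orig_write(vol, snap, 0, "A2")        # server 1 keeps changing the original
assert vol.blocks[0] == "A2"
assert snap_read(snap, 0) == "A"      # the copy still sees the data as it was
```

This also shows why the incremental variant helps: only blocks actually changed since the last instant copy ever need to be physically copied.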
Remote mirroring
Something as simple as a power failure can prevent access to production data and data
copies for several hours. A fire in the disk subsystem would destroy original data and data
copies.
Remote mirroring offers protection against such disasters.
Modern disk subsystems can now mirror their data, or part of their data, independently to a
second disk subsystem, which is a long way away.
The entire remote mirroring operation is handled by the two participating disk subsystems.
Remote mirroring is invisible to application servers and does not consume their resources.
1. The application server stores its data on a local disk subsystem.
2. The disk subsystem saves the data to several physical drives by means of RAID.
3. The local disk subsystem uses remote mirroring to mirror the data onto a second disk subsystem
located in the backup data centre
4. Users use the application via the LAN.
5. The stand-by server in the backup data centre is used as a test system.
6. If the first disk subsystem fails, the application is started up on the stand-by server using the data of the second disk subsystem.
7. Users use the application via the WAN.
Two types of remote mirroring:
1. Synchronous
2. Asynchronous
Synchronous remote mirroring
In synchronous remote mirroring a disk subsystem does not acknowledge write operations
until it has saved a block itself and received write confirmation from the second disk
subsystem.
Synchronous remote mirroring has the advantage that the copy of the data held by the
second disk subsystem is always up-to-date.
Asynchronous remote mirroring
In asynchronous remote mirroring one disk subsystem acknowledges a write operation as soon as it has
saved the block itself.
In asynchronous remote mirroring there is no guarantee that the data on the second disk
subsystem is up-to-date.
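The difference between the two modes comes down to when the write is acknowledged. A minimal sketch, with dictionaries standing in for the two disk subsystems and all names invented for illustration:

```python
# Sketch contrasting synchronous and asynchronous remote mirroring.
def write_sync(local, remote, lba, data):
    """Acknowledge only after BOTH subsystems have saved the block."""
    local[lba] = data
    remote[lba] = data                 # wait for the remote write confirmation
    return "ack"

def write_async(local, remote_queue, lba, data):
    """Acknowledge as soon as the local subsystem has saved the block."""
    local[lba] = data
    remote_queue.append((lba, data))   # mirrored later; no up-to-date guarantee
    return "ack"

local, remote, queue = {}, {}, []
write_sync(local, remote, 1, "X")
assert remote[1] == "X"                # synchronous: the copy is always current
write_async(local, queue, 2, "Y")
assert 2 not in remote                 # asynchronous: the remote copy may lag
```

The trade-off follows directly: synchronous mirroring adds the round-trip latency to every write, while asynchronous mirroring keeps writes fast but can lose the queued blocks in a disaster.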
LUN masking (LUN: Logical Unit Number)
LUN masking limits the access to the hard disks that the disk subsystem exports to the connected servers over a storage area network (SAN).
A disk subsystem makes the storage capacity of its internal physical hard disks available to
servers by permitting access to individual physical hard disks, or to virtual hard disks created
using RAID, via the connection ports.
Based upon the SCSI protocol, all hard disks – physical and virtual – that are visible outside the disk subsystem are also known as LUNs.
Without LUN masking every server would see all hard disks that the disk subsystem provides. As a result, considerably more hard disks are visible to each server than is necessary.
With LUN masking, each server sees only the hard disks that it actually requires. LUN masking thus acts as a filter between the exported hard disks and the accessing servers.
With LUN masking, each server sees only its own hard disks. A configuration error on server
1 can no longer destroy the data of the two other servers. The data is now protected.
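Conceptually, LUN masking is just an access-control table held in the disk subsystem, consulted before any LUN is presented to a server. A small sketch, with server names, LUN numbers and function names all invented for illustration:

```python
# LUN masking sketch: the subsystem exports all LUNs, but each server
# only "sees" the LUNs its mask permits.
EXPORTED_LUNS = {0, 1, 2, 3, 4, 5}

MASK = {                      # access-control table in the disk subsystem
    "server1": {0, 1},
    "server2": {2, 3},
    "server3": {4, 5},
}

def visible_luns(server):
    """The filter between exported hard disks and the accessing server."""
    return EXPORTED_LUNS & MASK.get(server, set())

assert visible_luns("server1") == {0, 1}   # server1 cannot touch server2's LUNs
assert visible_luns("unknown") == set()    # unconfigured servers see nothing
```

Because the masks are disjoint, a configuration error on server 1 can at worst damage LUNs 0 and 1, never the data belonging to the other two servers.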
Availability of disk subsystems
Today, disk subsystems can be constructed so that they can withstand the failure of any component without data being lost or becoming inaccessible.
The following list describes the individual measures that can be taken to increase the availability of data:
• The data is distributed over several hard disks using RAID processes and supplemented by further
data for error correction. After the failure of a physical hard disk, the data of the defective hard disk
can be reconstructed from the remaining data and the additional data.
• Individual hard disks store the data using the so-called Hamming code. The Hamming code allows
data to be correctly restored even if individual bits are changed on the hard disk.
• Each internal physical hard disk can be connected to the controller via two internal I/O channels. If
one of the two channels fails, the other can still be used.
• The controller in the disk subsystem can be realised by several controller instances. If one of the
controller instances fails, one of the remaining instances takes over the tasks of the defective
instance.
• Server and disk subsystem are connected together via several I/O channels. If one of the channels
fails, the remaining ones can still be used.
• Instant copies can be used to protect against logical errors. For example, it would be possible
to create an instant copy of a database every hour. If a table is ‘accidentally’ deleted, then the
database could revert to the last instant copy in which the database is still complete.
• Remote mirroring protects against physical damage. If, for whatever reason, the original data can
no longer be accessed, operation can continue using the data copy that was generated
using remote mirroring
• LUN masking limits the visibility of virtual hard disks. This prevents data being changed or deleted
unintentionally by other servers.
The Physical I/O path from the CPU to the Storage System
One or more CPUs process data that is stored in the CPU cache or in the RAM. CPU cache
and RAM are very fast; however, their data is lost when the power is switched off.
Therefore, the data is moved from the RAM to storage devices such as disk subsystems and tape libraries via the system bus, host I/O bus and I/O bus.
Although storage devices are slower than CPU cache and RAM, they compensate for this by being cheaper and by their ability to store data even when the power is switched off.
The physical I/O path from the CPU to the storage system consists of system bus, host I/O bus
and I/O bus. More recent technologies such as InfiniBand, Fibre Channel and Internet SCSI (iSCSI)
replace individual buses with a serial network. For historic reasons the corresponding
connections are still called host I/O bus or I/O bus.
SCSI (Small Computer System Interface)
SCSI offers fast transfer speeds of up to 320 megabytes per second.
SCSI defines a parallel bus for the transmission of data with additional lines for the control of communication.
The bus can be realised in the form of printed conductors on the circuit board or as a cable.
A so-called daisy chain can connect up to 16 devices together.
The SCSI protocol defines how the devices communicate with each other via the SCSI bus.
It specifies how the devices reserve the SCSI bus and in which format data is transferred.
A SCSI bus connects one server to several peripheral devices by means of a daisy chain. SCSI defines both the characteristics of the connection cable and also the transmission protocol.
The SCSI protocol introduces SCSI IDs and Logical Unit Numbers (LUNs) for the addressing of devices.
Each device on the SCSI bus must have a unique ID, with the HBA in the server requiring its own ID.
Note: A host bus adapter (HBA) is a circuit board and/or integrated circuit adapter that provides input/output (I/O)
processing and physical connectivity between a server and a storage device
Depending upon the version of the SCSI standard, a maximum of 8 or 16 IDs are permitted per SCSI bus.
SCSI Target IDs with a higher priority win the arbitration of the SCSI bus.
Devices (servers and storage devices) must reserve the SCSI bus (arbitrate) before they may
send data through it. During the arbitration of the bus, the device that has the highest
priority SCSI ID always wins.
In the event that the bus is heavily loaded, this can lead to devices with lower priorities never
being allowed to send data. The SCSI arbitration procedure is therefore ‘unfair’.
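The unfairness of arbitration can be shown with a toy model. Assuming, as on a narrow SCSI bus with IDs 0–7, that priority simply equals the ID (real wide SCSI uses a slightly different priority ordering), the winner of each round is just the highest contending ID:

```python
# SCSI bus arbitration sketch: the highest-priority contending ID always wins.
def arbitrate(contending_ids):
    """Return the ID that wins the bus in this arbitration round."""
    return max(contending_ids)

assert arbitrate({2, 5, 7}) == 7      # ID 7 beats all lower IDs
assert arbitrate({0, 3}) == 3

# Under constant load from ID 7, ID 0 never wins a single round - the
# 'unfair' starvation described above:
wins = [arbitrate({0, 7}) for _ in range(100)]
assert 0 not in wins
```

Since the outcome is deterministic, a low-priority device contending against a busy high-priority device is starved indefinitely rather than eventually getting a turn.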
SCSI and storage networks
SCSI is only suitable for the realisation of storage networks to a limited degree
First, a SCSI daisy chain can only connect a very few devices with each other.
Second, the maximum lengths of SCSI buses greatly limit the construction of storage
networks
I/O TECHNIQUES
Fibre Channel Protocol Stack
Fibre Channel is currently (2009) the technique most frequently used for implementing
storage networks.
Fibre Channel was originally developed as a backbone technology for the connection of LANs
By coincidence, the design goals of Fibre Channel happen to match the requirements of a transmission technology for storage networks, such as:
• Serial transmission for high speed and long distances;
• Low rate of transmission errors;
• Low delay (latency) of the transmitted data;
The Fibre Channel protocol stack is subdivided into five layers
The lower four layers, FC-0 to FC-3 define the fundamental communication techniques, i.e.
the physical levels, the transmission and the addressing. The upper layer, FC-4, defines how
application protocols (upper layer protocols, ULPs) are mapped on the underlying Fibre
Channel network.
Links, ports and topologies
The Fibre Channel standard defines three different topologies:
1. Point-to-point: defines a bi-directional connection between two devices.
2. Arbitrated loop: defines a unidirectional ring in which only two devices can ever exchange data with one another at any one time.
3. Fabric: defines a network in which several devices can exchange data simultaneously at full bandwidth.
Links
The connection between two ports is called a link.
In the point-to-point topology and in the fabric topology the links are always bi-directional.
The links of the arbitrated loop topology are unidirectional.
Ports:
A Fibre Channel (FC) port is a hardware pathway into and out of a node that performs data
communications over an FC link. (An FC link is sometimes called an FC channel.)
B_Port (Bridge Port): B_Ports serve to connect two Fibre Channel switches together via Asynchronous Transfer Mode (ATM).
FC-1: 8b/10b encoding, ordered sets and link control protocol
FC-1 defines how data is encoded before it is transmitted via a Fibre Channel cable
(8b/10b encoding). FC-1 also describes certain transmission words (ordered sets) that are
required for the administration of a Fibre Channel connection (link control protocol).
8b/10b encoding: An encoding procedure that converts an 8-bit data byte sequence into a
10-bit transmission word sequence that is optimised for serial transmission.
Ordered sets: The Ordered Sets are four byte transmission words containing data and
special characters which have a special meaning.
Link Control Protocol: With the aid of ordered sets, FC-1 defines various link level protocols
for the initialisation and administration of a link. Examples of link level protocols are the
initialisation and arbitration of an arbitrated loop.
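Because 8b/10b encoding transmits ten bits for every eight data bits, a quarter of the raw line rate is encoding overhead. A small sketch of the arithmetic, assuming the standard 1.0625 Gbaud line rate of 1 Gbit/s Fibre Channel:

```python
# 8b/10b overhead: 10 transmission bits carry 8 data bits.
def payload_rate(line_rate_baud):
    """Effective data rate of an 8b/10b-encoded serial link."""
    return line_rate_baud * 8 / 10

gbaud = 1.0625e9                   # 1 GFC line rate in baud
data_bits = payload_rate(gbaud)    # usable data rate in bit/s
bytes_per_s = data_bits / 8

assert data_bits == 850e6                    # 850 Mbit/s of real data
assert abs(bytes_per_s - 106.25e6) < 1.0     # roughly the nominal 100 MB/s of 1 GFC
```

The same 8/10 factor applies to the faster Fibre Channel generations that kept 8b/10b encoding; only the baud rate changes.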
FC-2: Data transfer
FC-2 is the most comprehensive layer in the Fibre Channel protocol stack.
It determines how larger data units (for example, a file) are transmitted via the Fibre
Channel network.
It regulates the flow control, which ensures that the transmitter only sends data at a speed that the receiver can process.
FC-3: Common services
The FC-3 level of the FC standard is intended to provide the common services required for advanced
features such as:
Striping -To multiply bandwidth using multiple N_ports in parallel to transmit a single information
unit across multiple links.
Hunt groups - The ability for more than one Port to respond to the same alias address. This improves
efficiency by decreasing the chance of reaching a busy N_Port.
Multicast - Multicast delivers a single transmission to multiple destination ports. This includes
sending to all N_Ports on a Fabric (broadcast) or to only a subset of the N_Ports on a Fabric.
Fibre Channel SAN
This section expands our view of Fibre Channel with the aim of realising storage networks
with Fibre Channel. To this end, we will first consider the three Fibre Channel topologies
point-to-point, fabric and arbitrated loop more closely
The three topologies – point-to-point, fabric and arbitrated loop – are the same as those defined above by the Fibre Channel standard.
IP STORAGE:
IP storage is an approach to building storage networks upon TCP, IP and Ethernet.
In general terms, IP storage means using the Internet Protocol (IP) in a storage area network (SAN), usually over Gigabit Ethernet. It is usually positioned as a substitute for the Fibre Channel framework seen in traditional SAN infrastructure.
It is well known that issues in the FC infrastructure, such as expense, complexity and interoperability, were hindering the implementation of FC SANs. The proponents of IP storage, however, hold the strong view that with the benefits offered by an IP-based storage infrastructure over the Fibre Channel alternative, widespread adoption of SANs can take place.
The advantage of common network hardware and technologies will surely make IP SAN deployment less complicated than FC SAN. As the hardware components are less expensive and the technology is widely known and well used, the interoperability issues and training costs will automatically be lower. Furthermore, the ubiquity of TCP/IP makes it possible to extend or connect SANs worldwide.
Benefits offered by IP storage:
• With the use of IP, SAN connectivity is possible on a universal front.
• When compared to Fibre Channel, IP storage connectivity offers a more interoperable architecture with fewer issues.
• It lessens operational costs, as less expensive hardware components can be used than in FC.
• IP storage can allow storage traffic to be routed over a separate network.
• Use of existing infrastructure will surely reduce deployment costs.
• High performance levels can be achieved if the infrastructure is properly laid out.
• It offers wider connectivity options that are less complex than Fibre Channel.
• Minimum disruption levels are guaranteed in IP storage.
The NAS Architecture
NAS is a specialized computer that provides file access and storage for a client/server network.
Because it is a specialized solution, its major components are proprietary and optimized for one activity: shared file I/O within a network.
NAS is considered a bundled solution consisting of pre-packaged hardware and installed software, regardless of vendor selection.
The bundled solution encompasses a server part: a computer system with RAM, a CPU, buses, network and storage adapters, storage controllers, and disk storage.
The software portion contains the operating system, file system, and network drivers.
The last element is the network segment, which consists of the network interfaces and related connectivity components.
Plug and Play is an important characteristic of the NAS packaged solution.
NAS can, however, be one of the most difficult solutions to administer and manage when the number of boxes grows beyond a reasonable level.
NAS uses existing network resources by attaching to Ethernet-based TCP/IP network
topologies.
Although NAS uses existing network resources, care should be taken to design expansions to networks so that increases in NAS storage traffic do not impact end-user transactional traffic.
NAS uses a Real Time Operating System (RTOS) that is optimized through proprietary enhancements to the kernel. Unfortunately, you can’t get to it, see it, or change it. It’s a closed box.
The NAS Hardware Architecture
Figure 9-1 is a generic layout of NAS hardware components.
As you can see, they are just like any other server system. They have CPUs built on
motherboards that are attached to bus systems. RAM and system cache, meanwhile, are the
same with preconfigured capacities, given the file I/O specialization. What we don’t see in
the underlying system functionality are the levels of storage optimization that have evolved.
NAS internal hardware configurations are optimized for I/O, and in particular file I/O. The results of this optimization turn the entire system into one large I/O manager for processing I/O requests on a network.
The NAS Software Architecture
The operating system for NAS is a UNIX kernel derivative.
Components such as memory management, resource management, and process management are all optimized to service file I/O requests.
Memory management, in particular, is optimized to handle as many file requests as possible.
Performance bottlenecks in file servers
Current NAS servers and NAS gateways, as well as classical file servers, provide their storage capacity
via conventional network file systems such as NFS and CIFS or Internet protocols such as FTP and
HTTP. Although these may be suitable for classical file sharing, such protocols are not powerful
enough for I/O-intensive applications such as databases or video processing. Nowadays, therefore,
I/O-intensive databases draw their storage from disk subsystems rather than file servers.
Let us assume for a moment that a user wishes to read a file on an NFS client, which is stored on a
NAS server with internal SCSI disks. The NAS server’s operating system first of all loads the file into
the main memory from the hard disk via the SCSI bus, the PCI bus and the system bus, only to
forward it from there to the network card via the system bus and the PCI bus. The data is thus
shovelled through the system bus and the PCI bus on the file server twice (Figure). If the load on a
file server is high enough, its buses can thus become a performance bottleneck.
When using classical network file systems the data to be transported is additionally copied from the private storage area of the application into the buffer cache of the kernel.
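The double bus traversal described above puts a hard ceiling on file-serving throughput, which can be expressed as a one-line bound. The 133 MB/s figure below is the classic 32-bit/33 MHz PCI bandwidth, used purely as an illustrative assumption:

```python
# Why the file server's buses bottleneck: every byte served to the network
# crosses the PCI and system buses twice (disk -> RAM, then RAM -> NIC),
# so usable throughput is at most the bus bandwidth divided by the
# number of traversals.
def max_serving_rate(bus_bw_mb_s, traversals=2):
    """Upper bound on file throughput for data crossing the bus `traversals` times."""
    return bus_bw_mb_s / traversals

assert max_serving_rate(133) == 66.5   # half of a 32-bit/33 MHz PCI bus
```

In practice the achievable rate is lower still, since the same buses also carry the extra kernel buffer-cache copy mentioned above plus all other system traffic.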
Network Connectivity
A limiting factor of file I/O processing with NAS is the TCP/IP network environment and the processing of the TCP layers.
NAS as a storage system
Putting all the parts together allows a highly optimized storage device to be placed on the network with a minimum of network disruption, not to mention significantly reduced server OS latency and costs, while configuration and management effort are kept to a minimum. NAS solutions are available for a diversity of configurations and workloads. They range from departmental solutions, where NAS devices can be deployed quickly in departmental environment settings, to mid-range and enterprise-class products that are generally deployed in data center settings.