INTRODUCTION: Server-Centric IT Architecture and its Limitations; Storage-Centric IT Architecture and its Advantages; Case Study: Replacing a Server with Storage Networks; The Data Storage and Data Access Problem; The Battle for Size and Access.
INTELLIGENT DISK SUBSYSTEMS – 1: Architecture of Intelligent Disk Subsystems; Hard Disks and Internal I/O Channels; JBOD; Storage Virtualisation Using RAID and Different RAID Levels.
INTELLIGENT DISK SUBSYSTEMS – 2, I/O TECHNIQUES – 1: Caching: Acceleration of Hard Disk Access; Intelligent Disk Subsystems; Availability of Disk Subsystems; The Physical I/O Path from the CPU to the Storage System; SCSI.
I/O TECHNIQUES – 2, NETWORK ATTACHED STORAGE: Fibre Channel Protocol Stack; Fibre Channel SAN; IP Storage; The NAS Architecture; The NAS Hardware Architecture; The NAS Software Architecture; Network Connectivity; NAS as a Storage System.
FILE SYSTEM AND NAS: Local File Systems; Network File Systems and File Servers; Shared Disk File Systems; Comparison of Fibre Channel and NAS.
STORAGE VIRTUALISATION: Definition of Storage Virtualisation; Implementation Considerations; Storage Virtualisation on Block or File Level; Storage Virtualisation on Various Levels of the Storage Network; Symmetric and Asymmetric Storage Virtualisation in the Network.
SAN ARCHITECTURE AND HARDWARE DEVICES: Overview; Creating a Network for Storage; SAN Hardware Devices; The Fibre Channel Switch; Host Bus Adapters; Putting the Storage in SAN; Fabric Operation from a Hardware Perspective.
Storage Area Networks Unit 1 Notes
Unit – I
INTRODUCTION
Covers: Server-Centric IT Architecture and its Limitations; Storage-Centric IT Architecture and its Advantages; Case Study: Replacing a Server with Storage Networks; The Data Storage and Data Access Problem; The Battle for Size and Access.
Server Centric IT Architecture and its Limitations
In conventional IT architectures, storage devices are normally connected to only a single server. To increase fault tolerance, storage devices are sometimes connected to two servers, with only one server actually able to use the storage device at any one time.

In both cases, the storage device exists only in relation to the server to which it is connected. Other servers cannot directly access the data; they always have to go through the server that is connected to the storage device. This conventional IT architecture is therefore called server-centric IT architecture.
In this approach, servers and storage devices are generally connected together by SCSI cables.
Limitations
In conventional server-centric IT architecture, storage devices exist only in relation to the one or two servers to which they are connected. The failure of both of these computers would make it impossible to access the data.

If a computer requires more storage space than is connected to it, it is no help whatsoever that another computer still has unused storage space attached to it.

Consequently, it is necessary to connect ever more storage devices to a computer. This throws up the problem that each computer can accommodate only a limited number of I/O cards.

Furthermore, the length of SCSI cables is limited to a maximum of 25 m. This means that the storage capacity that can be connected to a computer using conventional technologies is limited. Conventional technologies are therefore no longer sufficient to satisfy the growing demand for storage capacity.
Storage-Centric IT Architecture and its Advantages
In storage networks storage devices exist completely independently of any computer.
Several servers can access the same storage device directly over the storage network
without another server having to be involved.
The idea behind storage networks is that the SCSI cable is replaced by a network that is
installed in addition to the existing LAN and is primarily used for data exchange between
computers and storage devices.
Advantages:
The storage network permits all computers to access the disk subsystem and share it. Free storage capacity can thus be flexibly assigned to the computer that needs it at the time.
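The idea of storage pooling can be sketched in a few lines (the class and method names below are purely illustrative, not from any product): free capacity lives in one shared pool and is handed out to whichever server requests it.

```python
class StoragePool:
    """Toy model of storage pooling: free capacity in a shared
    disk subsystem is assigned to whichever server asks for it."""

    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.assignments = {}  # server name -> list of allocated sizes (GB)

    def assign(self, server, size_gb):
        # Any connected server may claim free capacity; no single
        # server "owns" the subsystem as in server-centric designs.
        if size_gb > self.free_gb:
            raise ValueError("insufficient free capacity in the pool")
        self.free_gb -= size_gb
        self.assignments.setdefault(server, []).append(size_gb)

pool = StoragePool(total_gb=1000)
pool.assign("app-server", 300)
pool.assign("db-server", 500)
print(pool.free_gb)  # capacity still free for any other server
```

Contrast this with the server-centric case, where the 500 GB claimed by one server would be invisible to every other machine even while unused.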
Case study: Replacing a server with Storage Networks
In the following we will illustrate some advantages of storage-centric IT architecture using a case study: in
a production environment an application server is no longer powerful enough. The ageing computer must
be replaced by a higher-performance device. Whereas such a measure can be very complicated in a
conventional, server-centric IT architecture, it can be carried out very elegantly in a storage network.
The Data Storage and Data Access problem
Although we don’t always realize it, accessing information on a daily basis the way we do means
there must be computers out there that store the data we need, making certain it’s available when
we need it, and ensuring the data’s both accurate and up-to-date. Rapid changes within the
computer networking industry have had a dynamic effect on our ability to retrieve information, and
networking innovations have provided powerful tools that allow us to access data on a personal and
global scale.
With so much data to store and with such global access to it, the collision between networking
technology and storage innovations was inevitable. The gridlock of too much data coupled with too
many requests for access has long challenged IT professionals. To storage and networking vendors,
as well as computer researchers, the problem is not new. And as long as personal computing devices
and corporate data centers demand greater storage capacity to offset our increasing appetite for
access, the challenge will be with us.
The Challenge of Designing Applications
Non-Linear Performance in Applications
The major factors influencing non-linear performance are twofold. First is the availability of sufficient online storage capacity for application data, coupled with adequate temporary storage resources, including RAM and cache storage, for processing application transactions. Second is the number of users who will interact with the application and thus access the online storage to retrieve application data and to store new data. Tied to this is the utilisation of the temporary online storage resources (over and above the RAM and system cache required) that the application uses to process the planned number of transactions in a timely manner.
Storing and accessing data starts with the requirements of a business application.

In all fairness to the application designers and product developers, the choice of database is really very limited. Most designs just note the type of database or databases required, be it relational or non-relational. This decision in many cases is made from economic and existing-infrastructure factors. For example, how many times does an application come online using a database purely because that's the existing database of choice for the enterprise? In other cases, applications may be implemented using file systems when they were actually designed to leverage the relational operations of an RDBMS.
The Battle for size and access
The Problem: Size
Wider bandwidth is needed. The connection between the server and storage unit requires a faster data transfer rate. The client/server storage model uses bus technology to connect and a device protocol to communicate, limiting the data transfer to about 10 MB per second (maybe 40 MB per second, tops).

The problem is size. The database and supporting online storage currently installed have exceeded their limitations, resulting in lagging requests for data and subsequently unresponsive applications. You may be able to physically store 500 GB on the storage devices; however, it is unlikely that the single server will provide sufficient connectivity to service application requests for data in a timely fashion, thereby bringing on the non-linear performance window quite rapidly.

Solution: Storage networking enables faster data transfers, as well as the capability for servers to access larger data stores through applications and systems that share storage devices and data.
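To put those transfer rates in perspective, a quick back-of-envelope calculation (using only the figures quoted above) shows how long it would take to move the full 500 GB over such a bus:

```python
# Time to transfer 500 GB at the bus speeds quoted above.
capacity_gb = 500
for rate_mb_s in (10, 40):              # ~10 MB/s typical, ~40 MB/s "tops"
    seconds = capacity_gb * 1024 / rate_mb_s
    print(f"{rate_mb_s} MB/s -> {seconds / 3600:.1f} hours")
# 10 MB/s -> 14.2 hours
# 40 MB/s -> 3.6 hours
```

Even at the optimistic 40 MB/s, a full pass over the data store takes hours, which is why the bus itself, not the disk capacity, becomes the bottleneck.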
The Problem: Access

The problem is access. There are too many users for the supported configuration. The network can neither deliver the user transactions to the server nor respond in a timely manner. Since the server cannot handle the number of transactions submitted, the storage and server components become gridlocked in attempting to satisfy requests for data to be read from or written to storage.

The single distribution strategy needs revisiting. A single distribution strategy can create an information bottleneck at the disembarkation point. We will explore this later in Parts III and IV of this book, where applications of SAN and NAS solutions are discussed. It is important to note, however, that a single distribution strategy is only a logical term for placing user data where it is most effectively accessed. It does not necessarily mean the data is placed in a single physical location.

Solution: With storage networking, user transactions can access data more directly, bypassing the overhead of I/O operations and unnecessary data movement operations to and through the server.
INTELLIGENT DISK SUBSYSTEMS – 1
Covers: Architecture of Intelligent Disk Subsystems; Hard disks and Internal I/O Channels, JBOD,
Storage virtualization using RAID and different RAID levels
Architecture of Intelligent Disk Subsystems
In contrast to a file server, a disk subsystem can be visualised as a hard disk server
Servers are connected to the connection port of the disk subsystem using standard I/O
techniques such as Small Computer System Interface (SCSI), Fibre Channel or Internet SCSI
(iSCSI) and can thus use the storage capacity that the disk subsystem provides
The internal structure of the disk subsystem is completely hidden from the server, which sees
only the hard disks that the disk subsystem provides to the server.
The connection ports are extended to the hard disks of the disk subsystem by means of internal I/O channels (Figure 2.2). In most disk subsystems there is a controller between the connection ports and the hard disks. The controller can significantly increase data availability and data access performance with the aid of a so-called RAID procedure. Furthermore, some controllers realise the copying services instant copy and remote mirroring, as well as further additional services. The controller uses a cache in an attempt to accelerate read and write accesses to the server.
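How a controller cache accelerates access can be sketched with a minimal LRU read cache (an illustrative model only, not any vendor's implementation): blocks read recently are served from fast cache memory instead of the disks.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache, sketching how a disk subsystem
    controller can serve repeated reads without touching the disks."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block number -> data
        self.hits = self.misses = 0

    def read(self, block, backend):
        if block in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block)   # mark most recently used
            return self.blocks[block]
        self.misses += 1
        data = backend(block)                # slow path: read from disk
        self.blocks[block] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data

cache = ReadCache(capacity=2)
disk = lambda b: f"data-{b}"                 # stands in for a disk read
for b in (1, 2, 1, 3, 1):
    cache.read(b, disk)
print(cache.hits, cache.misses)              # repeated reads of block 1 hit
```

Write caching works analogously but must also protect buffered data against power failure, which is why controller caches are typically battery-backed.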
Disk subsystems are available in all sizes. Small disk subsystems have one or two connection ports for servers or storage networks, six to eight hard disks and, depending on the disk capacity, a storage capacity of a few terabytes.

Regardless of storage networks, most disk subsystems have the advantage that free disk space can be flexibly assigned to each server connected to the disk subsystem (storage pooling).

All servers are either directly connected to the disk subsystem or indirectly connected via a storage network.
Hard disks and Internal I/O Channels
I/O channels can be designed with built-in redundancy in order to increase the fault-tolerance of a
disk subsystem. The following cases can be differentiated here:
• Active
In active cabling the individual physical hard disks are only connected via one I/O channel. If this
access path fails, then it is no longer possible to access the data.
• Active/passive
In active/passive cabling the individual hard disks are connected via two I/O channels (Figure 2.5,
right). In normal operation the controller communicates with the hard disks via the first I/O channel
and the second I/O channel is not used. In the event of the failure of the first I/O channel, the disk
subsystem switches from the first to the second I/O channel.
• Active/active (no load sharing)
In this cabling method the controller uses both I/O channels in normal operation. The hard disks are
divided into two groups: in normal operation the first group is addressed via the first I/O channel
and the second via the second I/O channel. If one I/O channel fails, both groups are addressed via
the other I/O channel.
• Active/active (load sharing)
In this approach all hard disks are addressed via both I/O channels in normal operation. The
controller divides the load dynamically between the two I/O channels so that the available hardware
can be optimally utilised. If one I/O channel fails, then the communication goes through the other
channel only.
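The cabling variants above can be sketched in code. The following Python sketch (all class and method names are illustrative, not from any real controller API) simulates active/active (load sharing) cabling: both channels carry requests in normal operation, and all traffic fails over to the surviving channel when one fails.

```python
# Illustrative sketch of active/active (load sharing) I/O channels.
# Names (Controller, pick_channel, ...) are hypothetical, not a real API.

class Controller:
    def __init__(self):
        # Both I/O channels are usable in normal operation.
        self.channels = {"ch1": True, "ch2": True}
        self.requests = {"ch1": 0, "ch2": 0}

    def fail(self, name):
        self.channels[name] = False

    def pick_channel(self):
        # Load sharing: route each request down the least-loaded live channel.
        live = [c for c, up in self.channels.items() if up]
        if not live:
            raise IOError("no I/O channel available")
        best = min(live, key=lambda c: self.requests[c])
        self.requests[best] += 1
        return best

ctl = Controller()
for _ in range(4):
    ctl.pick_channel()               # requests alternate between ch1 and ch2
ctl.fail("ch1")                      # first channel fails ...
assert ctl.pick_channel() == "ch2"   # ... all traffic moves to the second
```

Active/passive cabling would differ only in the selection rule: always use the first live channel instead of sharing the load.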
JBOD: JUST A BUNCH OF DISKS
If we compare disk subsystems with regard to their controllers we can differentiate between three
levels of complexity:
(1) No controller;
(2) RAID controller and
(3) Intelligent controller with additional services such as instant copy and remote mirroring.
If the disk subsystem has no internal controller, it is only an enclosure full of disks (JBOD).
In this instance, the hard disks are permanently fitted into the enclosure and the
connections for I/O channels and power supply are taken outwards at a single point.
A JBOD is therefore simpler to manage than a few loose hard disks.
Typical JBOD disk subsystems have space for 8 or 16 hard disks. A connected server
recognises all of these hard disks as independent disks, so 16 device addresses are
required for a JBOD disk subsystem incorporating 16 hard disks.
STORAGE VIRTUALISATION USING RAID
A disk subsystem with a RAID controller offers greater functional scope than a JBOD disk
subsystem.
RAID was originally called ‘Redundant Array of Inexpensive Disks’. Today RAID stands for
‘Redundant Array of Independent Disks’.
RAID has two main goals: to increase performance by striping and to increase fault-tolerance
by redundancy.
Striping distributes the data over several hard disks and thus distributes the load over more
hardware. Redundancy means that additional information is stored so that the operation of
the application itself can continue in the event of the failure of a hard disk.
A RAID controller can distribute the data that a server writes to the virtual hard disk amongst
the individual physical hard disks in various manners. These different procedures are known
as RAID levels
Different RAID Levels
RAID 0: block-by-block striping
RAID 0 distributes the data that the server writes to the virtual hard disk onto one physical
hard disk after another, block by block (block-by-block striping).
RAID 0 increases the performance of the virtual hard disk, but not its fault-tolerance. If a
physical hard disk is lost, all the data on the virtual hard disk is lost. To be precise, therefore,
the ‘R’ for ‘Redundant’ in RAID is incorrect in the case of RAID 0, with ‘RAID 0’ standing
instead for ‘zero redundancy’.
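The striping arithmetic can be shown in a minimal sketch: assuming fixed-size blocks, the virtual block number alone determines which physical disk holds a block and at which position (the function name is illustrative).

```python
# Illustrative sketch: mapping a virtual block number to (disk, offset)
# under RAID 0 block-by-block striping across n_disks physical disks.

def raid0_map(virtual_block, n_disks):
    disk = virtual_block % n_disks      # blocks go to one disk after another
    offset = virtual_block // n_disks   # position of the block on that disk
    return disk, offset

# With 4 disks, consecutive blocks 0..7 land on disks 0,1,2,3,0,1,2,3:
assert [raid0_map(b, 4)[0] for b in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```

Because consecutive blocks land on different disks, a large sequential access keeps all physical disks busy at once, which is the source of the performance gain.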
RAID 1: block-by-block mirroring
In contrast to RAID 0, in RAID 1 fault-tolerance is of primary importance.
The basic form of RAID 1 brings together two physical hard disks to form a virtual hard disk by
mirroring the data on the two physical hard disks. If the server writes a block to the virtual
hard disk, the RAID controller writes this block to both physical hard disks.
Performance increases are only possible in read operations, since a read request can be
served by either copy. Write performance suffers because the RAID controller has to send
each block to both hard disks.
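A toy model makes the asymmetry visible (the class is hypothetical; the disks are modelled as dictionaries): every write costs two physical writes, while reads can alternate between the two copies.

```python
# Illustrative sketch of RAID 1 mirroring: every block is written to both
# physical disks; reads alternate between the copies (simple load sharing).

class Raid1:
    def __init__(self):
        self.disks = [{}, {}]    # two mirrored physical disks
        self.next_read = 0

    def write(self, block_no, data):
        for disk in self.disks:  # write penalty: two physical writes per block
            disk[block_no] = data

    def read(self, block_no):
        # Serve reads alternately from the two copies.
        disk = self.disks[self.next_read]
        self.next_read = 1 - self.next_read
        return disk[block_no]

r = Raid1()
r.write(0, b"data")
assert r.read(0) == b"data" and r.read(0) == b"data"  # either copy works
```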
RAID 0+1/RAID 10: striping and mirroring combined
The problem with RAID 0 and RAID 1 is that they increase either performance (RAID 0) or
fault-tolerance (RAID 1). It would be nice to have both, and this is where RAID 0+1 and
RAID 10 come into play. RAID 0+1 (mirrored stripes) first stripes the data across several
disks and then mirrors the resulting stripe set; RAID 10 (striped mirrors) first mirrors
pairs of disks and then stripes the data across the mirrored pairs.
RAID 4 and RAID 5: parity instead of mirroring
RAID 10 provides excellent performance at a high level of fault-tolerance. The problem is
that mirroring with RAID 1 means that all data is written to physical hard disks twice;
RAID 10 thus doubles the required storage capacity.
The idea of RAID 4 and RAID 5 is to replace all mirror disks of RAID 10 with a single parity
hard disk.
The RAID controller calculates the parity block PABCD for the blocks A, B, C and D. If one of
the four data disks fails, the RAID controller can reconstruct the data of the defective disk
using the three other data disks and the parity disk.
From a mathematical point of view the parity block is calculated with the aid of the logical
XOR (exclusive OR) operator: PABCD = A XOR B XOR C XOR D.
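The XOR calculation can be demonstrated directly. In this sketch the blocks are four-byte strings, and the same helper (`xor_blocks` is an illustrative name) both computes the parity block and reconstructs a lost data block from the survivors.

```python
# Illustrative sketch: RAID 4/5 parity is the bytewise XOR of the data
# blocks. XOR-ing the surviving blocks with the parity recovers a lost block.
from functools import reduce

def xor_blocks(blocks):
    # Bytewise XOR of equally sized blocks.
    return bytes(reduce(lambda x, y: x ^ y, byte_tuple)
                 for byte_tuple in zip(*blocks))

a, b, c, d = b"AAAA", b"BBBB", b"CCCC", b"DDDD"
p_abcd = xor_blocks([a, b, c, d])        # parity block PABCD

# Disk holding C fails: rebuild it from the other data disks plus parity.
rebuilt_c = xor_blocks([a, b, d, p_abcd])
assert rebuilt_c == c
```

The reconstruction works because XOR is its own inverse: A XOR B XOR D XOR PABCD cancels everything except C.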
The space saving offered by RAID 4 and RAID 5 comes at a price compared to RAID 10.
Changing a data block changes the value of the associated parity block. This means that each
write operation to the virtual hard disk requires
1. The physical writing of the data block;
2. The recalculation of the parity block (which first requires reading the old data block and
the old parity block);
3. The physical writing of the newly calculated parity block.
This extra cost for write operations in RAID 4 and RAID 5 is called the write penalty of RAID 4
or the write penalty of RAID 5.
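A useful consequence of XOR is that the parity can be updated without rereading the whole stripe: the new parity is the old parity XOR the old data XOR the new data. The sketch below (helper name illustrative) shows this read-modify-write on one-byte blocks.

```python
# Illustrative sketch of the RAID 4/5 write penalty: updating one data block
# means reading the old data and old parity, then writing data and new parity.
# new_parity = old_parity XOR old_data XOR new_data

def update_block(old_data, new_data, old_parity):
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

# Stripe of two data blocks plus their parity:
d0, d1 = b"\x01", b"\x02"
parity = bytes(x ^ y for x, y in zip(d0, d1))   # 0x01 ^ 0x02 = 0x03

new_d0 = b"\x07"
parity = update_block(d0, new_d0, parity)       # 0x03 ^ 0x01 ^ 0x07 = 0x05
assert parity == bytes(x ^ y for x, y in zip(new_d0, d1))
```

In total that is two reads and two writes per logical write, regardless of how many disks the stripe spans, which is why the write penalty is a constant factor rather than growing with the array size.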
How RAID 5 overcomes the limitations of RAID 4
In RAID 4 the single dedicated parity disk must be updated on every write operation and
therefore becomes a performance bottleneck. To get around this, RAID 5 distributes the parity
blocks over all hard disks. The figure above illustrates the procedure. As in RAID 4, the RAID
controller writes the parity block PABCD for the blocks A, B, C and D onto the fifth physical
hard disk. Unlike RAID 4, however, in RAID 5 the parity block PEFGH for the next four blocks
E, F, G and H moves to the fourth physical hard disk.
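The rotation can be expressed as a small formula. Assuming the parity starts on the last disk and moves one disk to the left with each stripe, as in the example above, the parity disk for a given stripe number is:

```python
# Illustrative sketch of RAID 5 parity rotation (left-asymmetric layout
# assumed): parity moves one disk to the left with every stripe, so no
# single disk carries all parity writes (the RAID 4 bottleneck).

def parity_disk(stripe_no, n_disks):
    # Stripe 0 keeps parity on the last disk, stripe 1 on the one before it, ...
    return (n_disks - 1 - stripe_no) % n_disks

# With 5 disks: PABCD on disk 4, PEFGH on disk 3, then 2, 1, 0, 4 again.
assert [parity_disk(s, 5) for s in range(6)] == [4, 3, 2, 1, 0, 4]
```

Real controllers use several rotation schemes (left/right, symmetric/asymmetric); this is just one of them, chosen to match the figure.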