Tags: RAID technology, various RAID architectures, RAID 0, RAID 1, RAID 5, types of RAID managers, hardware solutions
RAID/Redundant Array of Independent Disks Technology Overview
An overview of RAID technology
RAID (Redundant Array of Independent Disks) is a technology allowing a
higher level of storage reliability and performance from disk-drive components
via the technique of arranging them into arrays.
A RAID array is a configuration of multiple physical disks set up to use a
RAID architecture such as RAID 0, RAID 1, RAID 5, etc. While the RAID array
distributes data across multiple disks, it is seen as a single disk by the
server operating system.
The various RAID architectures are designed to meet at least one of these
two goals:
o increase data reliability
o increase Input/Output (I/O) performance
A RAID array is composed of two or more physical hard disks combined into a
single logical storage unit. To give a RAID array additional features compared
to JBOD (Just a Bunch of Disks), three main concepts are used:
o Mirroring
o Striping
o Error correction
Mirroring is the writing of identical data to more than one disk. The basic
example of mirroring is a RAID 1 array formed by two disks. Both disks have
the same content at any time. If the first drive fails, read and write operations
can be performed directly on the second disk. Read operations on mirrored arrays
are faster than on a single disk since the system can fetch data from
multiple disks at the same time. However, write operations are slower since
the data must be written to all disks instead of only one. The reconstruction
of a failed mirror array is quite simple: data must be copied from the healthy
disk to the new one. During reconstruction, the read performance boost of
the mirror array is reduced since only the healthy disk is fully usable.
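The mirroring behavior described above can be sketched in a few lines of Python (a toy model for illustration, not real RAID code; the `Mirror` class and its block-to-data dictionaries are inventions of this sketch): every write goes to all members, and a read can be served by any surviving member.

```python
# Toy model of RAID 1 mirroring: writes go to every member disk,
# and a read can be served by any healthy member.

class Mirror:
    def __init__(self, n_disks=2):
        self.disks = [dict() for _ in range(n_disks)]  # block number -> data
        self.failed = set()                            # indices of dead disks

    def write(self, block, data):
        # Write cost scales with the number of copies.
        for disk in self.disks:
            disk[block] = data

    def read(self, block):
        # Any healthy copy will do; a real controller would also
        # balance reads across members for speed.
        for i, disk in enumerate(self.disks):
            if i not in self.failed:
                return disk[block]
        raise IOError("all mirror members have failed")

m = Mirror()
m.write(0, b"hello")
m.failed.add(0)      # simulate losing the first drive
print(m.read(0))     # still readable from the second copy
```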
Striping is the splitting of data across multiple disks. For example, a RAID 0
array formed by two disks stripes data across both disks. Striping does not
provide fault tolerance, only a performance boost. Read and write operations
on a striped array are faster than on a single disk as both operations are
split between the available disks.
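Striping can be sketched the same way (a toy model; the 4-byte chunk size is arbitrary, real arrays use stripe units of e.g. 64KB): consecutive chunks of data are placed round-robin on each disk.

```python
# Toy model of RAID 0 striping: consecutive chunks are written
# round-robin across the member disks.

CHUNK = 4  # bytes per stripe unit (arbitrary for this sketch)

def stripe(data, n_disks):
    disks = [bytearray() for _ in range(n_disks)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for i, chunk in enumerate(chunks):
        disks[i % n_disks] += chunk  # round-robin placement
    return disks

def unstripe(disks, total_len):
    out = bytearray()
    offsets = [0] * len(disks)
    i = 0
    while len(out) < total_len:
        d = i % len(disks)
        out += disks[d][offsets[d]:offsets[d] + CHUNK]
        offsets[d] += CHUNK
        i += 1
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
disks = stripe(data, 2)
# disk 0 holds ABCD+IJKL, disk 1 holds EFGH+MNOP:
# each disk does half the work, but losing either loses the data.
assert unstripe(disks, len(data)) == data
```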
Error correction stores parity data on disk to allow the detection and possibly
the correction of problems. RAID 5 is a good example of the error correction
mechanism. For example, a RAID 5 array composed of three drives stripes data
across the first two disks and stores parity on the third disk to provide fault
tolerance. The error correction mechanism slows down performance,
especially for write operations, since both data and parity information need to
be written instead of data only. Moreover, the reconstruction of a failed array
using parity information incurs severe performance degradation as data needs
to be fetched from all drives in the array to rebuild the information for the
new disk.
The design of any RAID scheme is a compromise between data protection
and performance. Understanding your server's storage requirements is crucial
to selecting the appropriate RAID configuration.
Hardware vs. Software RAID
There are two types of RAID managers:
o hardware
o software
Hardware solutions are specialized hardware components connected to the
server motherboard. Most of the time, these components provide a post-
BIOS configuration interface that can be run before booting your server
operating system. Each configured RAID array presents itself to the
operating system as a single storage drive. The RAID array can be partitioned
into various RAID volumes at the operating system level.
On the other hand, software solutions are implemented at the operating
system level and directly create RAID volumes from entire physical disks or
partitions. Each RAID volume is seen as a standard storage space for the
applications running within the operating system. Both approaches have
advantages and disadvantages compared to each other.
Depending on the manufacturer, a hardware RAID card supporting up to 8
drives usually sells for between $400 and $1,200, while a software RAID
solution is usually included free of charge with the operating system of your
server.
Under Linux, the md RAID subsystem is able to support most RAID
configurations. Under Microsoft Windows, Software RAID is provided through
the use of dynamic disks in the disk management console.
The required processing power for RAID 0, RAID 1 and RAID 10 is relatively
low. Parity-based arrays like RAID 5, RAID 6, RAID 50 and RAID 60 require
more complex data processing during write or integrity check operations.
However, this processing time is minimal on modern CPUs, as the increase in
speed of commodity CPUs has consistently outpaced the increase in hard disk
drive throughput. Thus, the percentage of server CPU time required to
saturate a hard disk RAID array's throughput has been dropping and will
probably continue to do so in the future.
A more serious issue with software RAID arrays is how the operating system
deals with the boot process. Since the RAID information is kept at the
operating system level, booting from a faulty RAID array is problematic. At boot
time, the operating system is not available to coordinate the failover to
another drive if the usual boot drive fails. Such systems may require manual
intervention to make them bootable again after a failure. A hardware RAID
controller is initialized before the boot process starts looking for information
on the disk drives. Therefore, a hardware RAID controller will increase the
robustness of your server compared to software RAID.
A hardware RAID controller may also support hot-swappable hard drives. With
such a feature, hard disks can be changed in a server without having to turn
off the power and open up the server case. Removing a failed hard drive and
replacing it with a new one is a simple task with a hardware RAID controller
supporting hot-swappable disks. Without this feature, the server needs to be
powered off before replacing the failed drive. This will lead to downtime
unless your web solution is properly clustered.
Finally, only hardware RAID controllers can carry a Battery Backup Unit (BBU)
to preserve the cache memory of the controller if the server is shut down
abruptly. Without such a protection, write-back cache should be disabled on
the RAID array to prevent data corruption. Turning off write-back cache
comes with a performance penalty for write operations on the RAID array.
The use of a BBU on your RAID controller is a solution to safely enable write-
back caching and improve write performance.
A RAID array is not a backup solution
Most RAID arrays provide protection in case of a disk failure. While such
protection is important to guard against data loss due to hardware failure, it
does not provide historical data. A RAID array does not let you recover a file
deleted or corrupted by a bug in your application. A backup solution will allow
you to go back in time to recover deleted or corrupted files.
Implementation
Note: images were adapted from those available on Wikipedia.
RAID 0
RAID 0 is a pure implementation of striping. A minimum of two (2) disks is
required for RAID 0. No parity information is stored, so RAID 0, which was not
one of the original RAID levels, provides no data redundancy. RAID 0 is
normally used to increase performance and is useful for setups where
redundancy is irrelevant.
A RAID 0 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
For example, if a 450GB disk is striped together with a 300GB disk, the usable
size of the array will be 2 x min(450GB, 300GB) = 600GB.
For read and write operations dealing with small data blocks, such as
database access, the data will be fetched independently from each disk of the
RAID 0 array. If the data sectors accessed are spread evenly between the two
disks, the apparent seek time of the array will be half that of a single disk.
The transfer speed of the array will be the transfer speeds of all the disks
added together, limited only by the speed of the RAID controller. For read
and write operations dealing with large data blocks, such as copying files or
video playback, the data will most likely be fetched from a single disk,
reducing the performance gain of the RAID 0 array.
RAID 1
RAID 1 is a pure implementation of mirroring. A minimum of two (2) disks is
required for RAID 1. This is useful when read performance or reliability are
more important than data storage capacity. A classic RAID 1 mirrored pair
contains two disks (see diagram), which increases reliability over a single disk.
Since each member contains a complete copy of the data, and can be
addressed independently, ordinary wear-and-tear reliability is raised.
A RAID 1 array can be created with disks of differing sizes, but the total
available storage space in the array is equal to the size of the smallest disk.
For example, if a 450GB disk is mirrored with a 300GB disk, the usable size of
the array will be min(450GB, 300GB) = 300GB.
The read performance of a RAID 1 array can go up roughly as a linear
multiple of the number of copies. That is, a RAID 1 array of two disks can
query two different places at the same time so the read performance should
be two times higher than the performance of a single disk. RAID 1 is a good
starting point for applications such as email and web servers as well as for
any other use requiring above average read I/O performance and hardware
failure protection.
RAID 5
A RAID 5 array uses block-level striping with distributed parity blocks across
all member disks. The disk used for the parity block is staggered from one
stripe to the next, hence the term distributed parity. A minimum of three (3)
disks is required for RAID 5. This RAID configuration is mainly used to
maximize disk space while protecting your data in case of a disk failure.
Given the diagram of the RAID 5 array, where each column is a disk, let us
assume A1 = 00000101 and A2 = 00000011. The parity block Ap is generated
by applying the XOR operator to A1 and A2: Ap = A1 XOR A2 = 00000110
If the first disk fails, A1 will no longer be accessible, but it can be
reconstructed: A1 = A2 XOR Ap = 00000101
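The worked example above can be checked directly in Python (the values come from the text; a real RAID 5 implementation XORs whole blocks, not single bytes):

```python
# XOR parity as used by RAID 5, with the values from the example above.
A1 = 0b00000101
A2 = 0b00000011

Ap = A1 ^ A2                 # parity block written to the third disk
print(format(Ap, "08b"))     # 00000110

# If the first disk fails, A1 is rebuilt from the survivors:
rebuilt = A2 ^ Ap
assert rebuilt == A1         # 00000101
```

The same identity (x XOR x = 0) is why any single missing block can be recovered by XOR-ing all the surviving blocks of the stripe together.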
A RAID 5 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The parity data consumes a complete disk, leaving N-1 disks for usable
storage space in an array composed of N disks. For example, on an array
formed of three 450GB disks and one 300GB disk, the usable size of the array
will be (4-1) x min(450GB, 300GB) = 900GB.
RAID 5 writes are expensive in terms of disk operations and traffic between
the disks and the RAID controller since both data and parity information need
to be written to disk. The parity blocks are not read on data reads, since this
would add unnecessary overhead and would diminish performance. However,
the parity blocks are read when a defective disk sector is present in the
required data blocks. Likewise, should a disk fail in the array, the parity blocks
and the data blocks from the surviving disks are combined mathematically to
reconstruct data from the failed drive in real-time. This situation leads to
severe performance degradation on the array for read and write operations.
RAID 6
RAID 6 extends RAID 5 by adding an additional parity block. Block-level
striping is combined with two parity blocks distributed across all member disks.
A minimum of four (4) disks is required for RAID 6. This RAID configuration is
mainly used to maximize disk space while providing protection against up to
two disk failures.
Both parity blocks Ap and Aq are generated from the data blocks A1, A2 and
A3. Ap is generated by applying the XOR operator to A1, A2 and A3. Aq is
generated using a more complex variant of the Ap formula. If the first disk
fails, A1 will no longer be accessible, but can be reconstructed using A2 and
A3 plus the Ap parity block. If both the first and the second disk fail, A1 and
A2 will no longer be accessible, but can be reconstructed using A3 plus both
Ap and Aq parity blocks. The computation of Aq is CPU intensive, in contrast
to the simplicity of Ap. Thus, a software RAID 6 implementation may have a
significant effect on system performance especially during the reconstruction
of a failed disk.
A RAID 6 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The parity data consumes two complete disks, leaving N-2 disks for usable
storage space in an array composed of N disks. For example, on an array
formed of four 450GB disks and one 300GB disk, the usable size of the array
will be (5-2) x min(450GB, 300GB) = 900GB.
RAID 6 writes are even more expensive than RAID 5 writes in terms of disk
operations and traffic between the disks and the RAID controller since both
data and parity information need to be written to disk. The parity blocks are
not read on data reads, since this would add unnecessary overhead and
would diminish performance. However, the parity blocks are read when a
defective disk sector is present in the required data blocks. Likewise, should a
disk fail in the array, the parity blocks and the data blocks from the surviving
disks are combined mathematically to reconstruct data from the failed drive in
real-time. This situation leads to severe performance degradation on the array
for read and write operations.
RAID 10
RAID 10 is a combination of RAID 1 (mirroring) and RAID 0 (striping) where
N mirrored pairs (2N disks) are striped together. A minimum of four (4) disks
is required for RAID 10. One disk in each RAID 1 mirror can fail without
damaging the data contained in the entire array.
A RAID 10 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The mirroring consumes half of the disk space, leaving N disks for usable
storage space in an array composed of 2N disks. For example, on an array
formed of seven 450GB disks and one 300GB disk, the usable size of the
array will be (7+1)/2 x min(450GB, 300GB) = 1200GB.
RAID 10 provides better performance than all other redundant RAID
levels. It is the preferable RAID level for I/O intensive applications such as
database servers as well as for any other use requiring high disk performance.
RAID 50
RAID 50 is a combination of RAID 5 (striping and error correction)
and RAID 0 (striping) where RAID 5 sub-arrays are striped together.
A minimum of six (6) disks are required for RAID 50. One disk in each RAID 5
sub-array can fail without damaging the data contained in the entire array.
A RAID 50 array can be created with disks of differing sizes, but the
total available storage space in the array is limited by the size of the
smallest disk. The parity data consumes a complete disk in each RAID 5
sub-array, leaving N-2 disks for usable storage space in an array of N disks
split into two sub-arrays. For example, on an array formed of seven 450GB
disks and one 300GB disk, the usable size of the array will be
(8-2) x min(450GB, 300GB) = 1800GB.
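The capacity formulas used in the worked examples of this article can be collected into one small helper (sizes in GB; the function name `usable_gb` is an invention of this sketch, and real arrays lose a little extra space to metadata). The RAID 50 branch assumes two sub-arrays, as in the example above.

```python
def usable_gb(level, sizes):
    """Usable capacity, in GB, of a RAID array built from disks of the
    given sizes, using the smallest-disk rule described in this article."""
    n, s = len(sizes), min(sizes)
    if level == 0:
        return n * s          # pure striping, no redundancy
    if level == 1:
        return s              # mirroring: only one copy is usable
    if level == 5:
        return (n - 1) * s    # one disk's worth of parity
    if level == 6:
        return (n - 2) * s    # two disks' worth of parity
    if level == 10:
        return n // 2 * s     # half the disks hold mirror copies
    if level == 50:
        return (n - 2) * s    # one parity disk per sub-array, two sub-arrays
    raise ValueError("unsupported RAID level")

# The examples from the text:
print(usable_gb(0,  [450, 300]))                 # 600
print(usable_gb(1,  [450, 300]))                 # 300
print(usable_gb(5,  [450, 450, 450, 300]))       # 900
print(usable_gb(6,  [450]*4 + [300]))            # 900
print(usable_gb(10, [450]*7 + [300]))            # 1200
print(usable_gb(50, [450]*7 + [300]))            # 1800
```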
RAID 50 provides better performance than RAID 5 but requires more disks.
The performance gain is particularly observed for write operations. This level
is recommended for applications that require high fault tolerance along with
high capacity.
Hot spare disks
Both hardware and software redundant RAID arrays may support the use of
hot spare disks. Such disks are physically installed in the array and are
inactive until an active disk fails. The RAID controller automatically replaces
the failed drive with the spare and starts the rebuilding process for the
affected array. This reduces the vulnerability window of the array by providing
a healthy disk to the array as soon as a problematic disk is identified.
For example, a RAID 5 array with a single hot spare disk uses the same
number of disks as a RAID 6 array while providing a comparable, though
weaker, level of protection: RAID 6 survives two simultaneous failures, while
the hot spare only protects against a second failure once the rebuild has
completed.
The use of hot spare disks is particularly important for RAID arrays formed by
multiple disks. For example, a RAID 10 array formed of 12 disks will most
likely have a higher disk failure rate than a RAID 10 array of 4 disks. Putting
aside one or two disks as hot spare for your large RAID array will provide
additional protection in case of disk failure.
RAID arrays allow a higher level of reliability and performance for your server
storage. While RAID 1 is a good starting point for applications such as email
and web servers, RAID 10 is recommended for database applications. RAID 5
or RAID 50 can be used for backup appliances where high fault tolerance
along with high capacity are needed.
Info from http://blog.iweb.com/en/2010/05/an-overview-of-raid-
technology/4283.html
More info
o Wikipedia article, RAID
o Art S. Kagel, RAID 5 vs 10 RAID
This article was written by Patrice Guay. It was originally published on his
blog at the address: http://www.patriceguay.com/webhosting/raid and
reprinted with permission. Patrice is a sales engineer at iWeb Technologies.
Cisco firepower ngips series migration options
 
Eol transceiver to replacement model
Eol transceiver to replacement modelEol transceiver to replacement model
Eol transceiver to replacement model
 

Recently uploaded

Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
DianaGray10
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Tobias Schneck
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
Bhaskar Mitra
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Product School
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
Product School
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 

Recently uploaded (20)

Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 

Raid the redundant array of independent disks technology overview

fault tolerance, only a performance boost. Read and write operations on a striped array are faster than on a single disk, since both operations are split between the available disks.
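The block distribution that striping performs can be sketched as a round-robin assignment (a minimal illustration with a hypothetical 4-byte block size; real controllers operate on a configurable stripe size):

```python
def stripe(data: bytes, disks: int, block_size: int = 4) -> list[list[bytes]]:
    """Distribute fixed-size blocks across `disks` in round-robin order,
    the way a RAID 0 layout does at the stripe level."""
    layout = [[] for _ in range(disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        layout[i % disks].append(block)
    return layout

# 16 bytes striped over two disks: consecutive blocks alternate between them.
layout = stripe(b"ABCDEFGHIJKLMNOP", disks=2)
assert layout[0] == [b"ABCD", b"IJKL"]
assert layout[1] == [b"EFGH", b"MNOP"]
```

Because consecutive blocks land on different disks, a large sequential request can be serviced by all disks in parallel, which is where the performance boost comes from.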
Error correction stores parity data on disk to allow the detection, and possibly the correction, of problems. RAID 5 is a good example of the error correction mechanism. For example, a RAID 5 array composed of three drives stripes data on the first two disks and stores parity on the third disk to provide fault tolerance. The error correction mechanism slows down performance, especially for write operations, since both data and parity information need to be written instead of data only. Moreover, the reconstruction of a failed array using parity information incurs severe performance degradation, as data needs to be fetched from all drives in the array to rebuild the information for the new disk.

The design of any RAID scheme is a compromise between data protection and performance. Understanding your server's storage requirements is crucial to selecting the appropriate RAID configuration.

Hardware vs. Software RAID

There are two types of RAID managers:
o hardware
o software

Hardware solutions are specialized hardware components connected to the server motherboard. Most of the time, these components provide a post-BIOS configuration interface that can be run before booting your server operating system. Each configured RAID array presents itself to the operating system as a single storage drive. The RAID array can be partitioned into various RAID volumes at the operating system level. On the other hand, software solutions are implemented at the operating system level and directly create RAID volumes from entire physical disks or partitions. Each RAID volume is seen as a standard storage space by the applications running within the operating system. Both approaches have advantages and disadvantages compared to each other.
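The parity mechanism described above for RAID 5 can be illustrated with the XOR operator (a minimal byte-level sketch; real arrays compute parity over whole blocks):

```python
def parity(*blocks: int) -> int:
    """XOR parity over data blocks, as used by RAID 5."""
    p = 0
    for b in blocks:
        p ^= b
    return p

a1, a2 = 0b00000101, 0b00000011
ap = parity(a1, a2)           # parity stored on the third disk
assert ap == 0b00000110

# If the disk holding a1 fails, a1 is rebuilt from the survivors:
assert parity(a2, ap) == a1
```

XOR is its own inverse, which is why the same operation both generates the parity block and reconstructs a missing data block from the remaining ones.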
Depending on the manufacturer, a hardware RAID card supporting up to 8 drives is usually sold between $400 and $1,200, while a software RAID solution is usually included free of charge with the operating system of your server. Under Linux, the md RAID subsystem is able to support most RAID configurations. Under Microsoft Windows, software RAID is provided through the use of dynamic disks in the disk management console.

The required processing power for RAID 0, RAID 1 and RAID 10 is relatively low. Parity-based arrays like RAID 5, RAID 6, RAID 50 and RAID 60 require more complex data processing during write or integrity check operations. However, this processing time is minimal on modern CPUs, as the increase in speed of commodity CPUs has been consistently greater than the increase in hard disk drive throughput over history. Thus, the percentage of server CPU time required to saturate a hard disk RAID array's throughput has been dropping and will probably continue to do so.

A more serious issue with software RAID arrays is how the operating system deals with the boot process. Since the RAID information is kept at the operating system level, booting from a faulty RAID array is problematic. At boot time, the operating system is not available to coordinate the failover to another drive if the usual boot drive fails. Such systems may require manual intervention to make them bootable again after a failure. A hardware RAID controller is initialized before the boot process starts looking for information on the disk drives. Therefore, a hardware RAID controller will increase the robustness of your server compared to software RAID.

A hardware RAID controller may also support hot-swappable hard drives. With such a feature, hard disks can be changed in a server without having to turn off the power and open up the server case. Removing a failed hard drive and replacing it with a new one is a simple task with a hardware RAID controller supporting hot-swappable disks. Without this feature, the server needs to be powered off before replacing the failed drive. This will lead to downtime unless your web solution is properly clustered.

Finally, only hardware RAID controllers can carry a Battery Backup Unit (BBU) to preserve the cache memory of the controller if the server is shut down abruptly. Without such protection, write-back caching should be disabled on the RAID array to prevent data corruption. Turning off write-back caching comes with a performance penalty for write operations on the RAID array. The use of a BBU on your RAID controller is a solution to safely enable write-back caching and improve write performance.

A RAID array is not a backup solution

Most RAID arrays provide protection in case of a disk failure.
While such protection is important to guard against data loss due to hardware failure, it does not preserve historical data. A RAID array does not allow you to recover a file deleted or corrupted by a bug in your application. A backup solution will allow you to go back in time to recover deleted or corrupted files.

Implementation Note: images were adapted from those available on Wikipedia.

RAID 0

RAID 0 is a pure implementation of striping. A minimum of two (2) disks is required for RAID 0. No parity information is stored for redundancy. It is important to note that RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance and is useful for setups where redundancy is irrelevant.

A RAID 0 array can be created with disks of differing sizes, but the total available storage space in the array is limited by the size of the smallest disk. For example, if a 450GB disk is striped together with a 300GB disk, the usable size of the array will be 2 x min(450GB, 300GB) = 600GB.

For read and write operations dealing with small data blocks, such as database access, the data will be fetched independently from each disk of the RAID 0 array. If the data sectors accessed are spread evenly between the two disks, the apparent seek time of the array will be half that of a single disk. The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller. For read and write operations dealing with large data blocks, such as copying files or video playback, the data will most likely be fetched from a single disk, reducing the performance gain of the RAID 0 array.

RAID 1
RAID 1 is a pure implementation of mirroring. A minimum of two (2) disks is required for RAID 1. This is useful when read performance or reliability is more important than data storage capacity. A classic RAID 1 mirrored pair contains two disks (see diagram), which increases reliability over a single disk. Since each member contains a complete copy of the data and can be addressed independently, ordinary wear-and-tear reliability is raised.

A RAID 1 array can be created with disks of differing sizes, but the total available storage space in the array is equal to the size of the smallest disk. For example, if a 450GB disk is mirrored with a 300GB disk, the usable size of the array will be min(450GB, 300GB) = 300GB.

The read performance of a RAID 1 array can go up roughly as a linear multiple of the number of copies. That is, a RAID 1 array of two disks can query two different places at the same time, so the read performance should be twice that of a single disk.

RAID 1 is a good starting point for applications such as email and web servers, as well as for any other use requiring above-average read I/O performance and hardware failure protection.

RAID 5
A RAID 5 array uses block-level striping with parity blocks distributed across all member disks. The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity. A minimum of three (3) disks is required for RAID 5. This RAID configuration is mainly used to maximize disk space while providing protection for your data in case of a disk failure.

Given the diagram of the RAID 5 array, where each column is a disk, let us assume A1 = 00000101 and A2 = 00000011. The parity block Ap is generated by applying the XOR operator to A1 and A2:

Ap = A1 XOR A2 = 00000110

If the first disk fails, A1 will no longer be accessible, but it can be reconstructed:

A1 = A2 XOR Ap = 00000101

A RAID 5 array can be created with disks of differing sizes, but the total available storage space in the array is limited by the size of the smallest disk. The parity data consumes the equivalent of one complete disk, leaving N-1 disks of usable storage space in an array composed of N disks. For example, on an array formed of three 450GB disks and one 300GB disk, the usable size of the array will be (4-1) x min(450GB, 300GB) = 900GB.

RAID 5 writes are expensive in terms of disk operations and traffic between the disks and the RAID controller, since both data and parity information need to be written to disk. The parity blocks are not read on data reads, since this would add unnecessary overhead and diminish performance. However, the parity blocks are read when a defective disk sector is present in the required data blocks. Likewise, should a disk fail in the array, the parity blocks and the data blocks from the surviving disks are combined mathematically to reconstruct the data from the failed drive in real time. This situation leads to severe performance degradation on the array for read and write operations.

RAID 6

RAID 6 extends RAID 5 by adding an additional parity block. Block-level striping is combined with two parity blocks distributed across all member disks. A minimum of four (4) disks is required for RAID 6. This RAID configuration is mainly used to maximize disk space while providing protection against up to two disk failures.

Both parity blocks Ap and Aq are generated from the data blocks A1, A2 and A3. Ap is generated by applying the XOR operator to A1, A2 and A3. Aq is generated using a more complex variant of the Ap formula. If the first disk fails, A1 will no longer be accessible, but it can be reconstructed using A2 and A3 plus the Ap parity block. If both the first and the second disks fail, A1 and A2 will no longer be accessible, but they can be reconstructed using A3 plus both the Ap and Aq parity blocks. The computation of Aq is CPU intensive, in contrast to the simplicity of Ap. Thus, a software RAID 6 implementation may have a significant effect on system performance, especially during the reconstruction of a failed disk.

A RAID 6 array can be created with disks of differing sizes, but the total available storage space in the array is limited by the size of the smallest disk. The parity data consumes the equivalent of two complete disks, leaving N-2 disks of usable storage space in an array composed of N disks. For example, on an array formed of four 450GB disks and one 300GB disk, the usable size of the array will be (5-2) x min(450GB, 300GB) = 900GB.
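The "more complex variant" used for Aq is typically arithmetic in the Galois field GF(2^8). The sketch below (byte-level, with hypothetical data values) shows how the P and Q parities together allow two lost data blocks to be rebuilt; it is a simplified illustration of the commonly documented approach, not any particular controller's implementation:

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8), reduction polynomial 0x11d."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    # Brute-force inverse; acceptable for a 256-element field demo.
    return next(b for b in range(1, 256) if gf_mul(a, b) == 1)

data = [0x05, 0x03, 0xF2]                  # hypothetical data bytes A1..A3
p = data[0] ^ data[1] ^ data[2]            # Ap: plain XOR parity
q = 0
for i, d in enumerate(data):               # Aq: XOR-sum of g^i * Di, g = 2
    q ^= gf_mul(gf_pow(2, i), d)

# Disks 0 and 1 fail: rebuild both bytes from A3, Ap and Aq.
x, y = 0, 1
p_ = p ^ data[2]                           # = Dx XOR Dy
q_ = q ^ gf_mul(gf_pow(2, 2), data[2])     # = g^x*Dx XOR g^y*Dy
dx = gf_mul(q_ ^ gf_mul(gf_pow(2, y), p_), gf_inv(gf_pow(2, x) ^ gf_pow(2, y)))
dy = p_ ^ dx
assert (dx, dy) == (data[0], data[1])
```

The recovery step solves two equations in two unknowns: subtracting the surviving disks from P and Q isolates the lost bytes, and the field inverse plays the role of division.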
RAID 6 writes are even more expensive than RAID 5 writes in terms of disk operations and traffic between the disks and the RAID controller, since both data and two sets of parity information need to be written to disk. The parity blocks are not read on data reads, since this would add unnecessary overhead and diminish performance. However, the parity blocks are read when a defective disk sector is present in the required data blocks. Likewise, should a disk fail in the array, the parity blocks and the data blocks from the surviving disks are combined mathematically to reconstruct the data from the failed drive in real time. This situation leads to severe performance degradation on the array for read and write operations.

RAID 10

RAID 10 is a combination of RAID 1 (mirroring) and RAID 0 (striping) where mirrored pairs of disks are striped together. A minimum of four (4) disks is required for RAID 10. One disk in each RAID 1 mirror can fail without damaging the data contained in the entire array.

A RAID 10 array can be created with disks of differing sizes, but the total available storage space in the array is limited by the size of the smallest disk. The mirroring consumes half of the disk space, leaving N/2 disks of usable storage space in an array composed of N disks. For example, on an array formed of seven 450GB disks and one 300GB disk, the usable size of the array will be (7+1)/2 x min(450GB, 300GB) = 1200GB.

RAID 10 provides better performance than all other redundant RAID levels. It is the preferred RAID level for I/O-intensive applications such as database servers, as well as for any other use requiring high disk performance.

RAID 50

RAID 50 is a combination of RAID 5 (striping with error correction) and RAID 0 (striping) where RAID 5 sub-arrays are striped together. A minimum of six (6) disks is required for RAID 50. One disk in each RAID 5 sub-array can fail without damaging the data contained in the entire array.

A RAID 50 array can be created with disks of differing sizes, but the total available storage space in the array is limited by the size of the smallest disk. The parity data consumes the equivalent of a complete disk in each RAID 5 sub-array, leaving N-k disks of usable storage space in an array of N disks split into k sub-arrays. For example, on an array formed of seven 450GB disks and one 300GB disk arranged as two sub-arrays, the usable size of the array will be (8-2) x min(450GB, 300GB) = 1800GB.

RAID 50 provides better performance than RAID 5 but requires more disks. The performance gain is particularly noticeable for write operations. This level is recommended for applications that require high fault tolerance along with high capacity.

Hot spare disks

Both hardware and software redundant RAID arrays may support the use of hot spare disks. Such disks are physically installed in the array and remain inactive until an active disk fails. The RAID controller then automatically replaces the failed drive with the spare and starts the rebuilding process for the affected array. This reduces the vulnerability window of the array by providing a healthy disk to the array as soon as a problematic disk is identified. For example, a RAID 5 array with a single hot spare disk uses the same number of disks as a RAID 6 array while providing a similar level of protection.

The use of hot spare disks is particularly important for RAID arrays formed of many disks. For example, a RAID 10 array formed of 12 disks will most likely have a higher disk failure rate than a RAID 10 array of 4 disks. Setting aside one or two disks as hot spares for your large RAID array will provide additional protection in case of disk failure.

RAID arrays allow a higher level of reliability and performance for your server storage. While RAID 1 is a good starting point for applications such as email and web servers, RAID 10 is recommended for database applications. RAID 5 or RAID 50 can be used for backup appliances where high fault tolerance along with high capacity are needed.

Info from http://blog.iweb.com/en/2010/05/an-overview-of-raid-technology/4283.html

More info
o Wikipedia article, RAID
o Art S. Kagel, RAID 5 vs. RAID 10

This article was written by Patrice Guay. It was originally published on his blog at the address http://www.patriceguay.com/webhosting/raid and reprinted with permission. Patrice is a sales engineer at iWeb Technologies.
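As a recap, the per-level capacity rules quoted throughout the article can be collected into one helper (a sketch assuming the conventions used above: every disk counts as the smallest one, and RAID 50 is split into two sub-arrays by default):

```python
def raid_capacity(level: str, disk_sizes_gb: list, subarrays: int = 2) -> int:
    """Usable capacity in GB, matching the min()-based examples in the text."""
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    usable_disks = {
        "RAID 0": n,               # pure striping, no redundancy
        "RAID 1": 1,               # every disk holds a full copy
        "RAID 5": n - 1,           # one disk's worth of parity
        "RAID 6": n - 2,           # two disks' worth of parity
        "RAID 10": n // 2,         # half the disks hold mirror copies
        "RAID 50": n - subarrays,  # one parity disk per RAID 5 sub-array
    }[level]
    return usable_disks * smallest

# The worked examples from the text:
assert raid_capacity("RAID 0", [450, 300]) == 600
assert raid_capacity("RAID 1", [450, 300]) == 300
assert raid_capacity("RAID 5", [450, 450, 450, 300]) == 900
assert raid_capacity("RAID 6", [450, 450, 450, 450, 300]) == 900
assert raid_capacity("RAID 10", [450] * 7 + [300]) == 1200
assert raid_capacity("RAID 50", [450] * 7 + [300]) == 1800
```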