HOW TO USE SOFTWARE RAID IN LINUX
(Dated: January 15, 2012)

Written By: Ahmer Mansoor
Email   : ahmer_mansoor@hotmail.com
Profile : http://www.linkedin.com/in/ahmermansoor
OBJECTIVE:

    The objective of this write-up is to explain how to configure a software RAID 1 array on a
Linux-based OS to provide data redundancy. This tutorial covers the configuration, management,
and recovery options of RAID 1.

CONSIDERATIONS:

    Operating System : CentOS 5.1
    RAID Device Name : /dev/md0
    RAID Level       : 1 (Mirroring)
    RAID Disks       : /dev/sdb (2GB)
                       /dev/sdc (2GB)
                       /dev/sdd (2GB)
                       /dev/sde (2GB)

    Note: Terminal sessions are reproduced verbatim; the commands entered by the user appear
after the shell prompt (e.g. [root@linux1 ~]#).

DEFINITION:

RAID (Redundant Array of Inexpensive Disks) Level 1:

    With RAID 1, data is cloned onto a duplicate disk, which is why this RAID method is
frequently called disk mirroring. When one of the disks in the RAID set fails, the other one
continues to function. When the failed disk is replaced, the data is automatically cloned to the
new disk from the surviving disk. RAID 1 also offers the possibility of using a hot-standby
spare disk that will be activated automatically in the event of a disk failure on any of the
primary RAID devices.

    RAID 1 offers data redundancy, but without the speed advantages of RAID 0. A limitation of
RAID 1 is that the total RAID size in gigabytes is equal to that of the smallest disk in the
RAID set.
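    As a quick illustration of the hot-standby feature just mentioned, mdadm can reserve a spare
at creation time. The command below is only a sketch of the general form (the device names are
placeholders); the step-by-step build in this guide instead adds its spare later, in Step 10.

[root@linux1 ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 \
                 --spare-devices=1 /dev/sdb /dev/sdc /dev/sdd

    With this form, /dev/sdd sits idle until one of the two active mirrors fails, at which point
mdadm begins rebuilding onto it automatically.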
1) CHECK AVAILABLE DISKS

    At the Linux terminal, execute the following command to get a list of the disks connected to
the system.

[root@linux1 ~]# fdisk -l

Disk /dev/sda: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        2610    20860402+  8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

    The above output shows that we have 5 hard disks connected to the system. /dev/sda is in use
by the system, while /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde (2GB each) have not yet been
initialized. We will use them to create our RAID array.

2) INITIALIZE HARD DISKS

    Let's initialize two hard disks, /dev/sdb and /dev/sdc, to be used by our RAID array.

[root@linux1 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-261, default 261):
Using default value 261

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): L

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 f  W95 Extd (LBA)  54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         261     2096451   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@linux1 ~]# partprobe /dev/sdb

    Repeat the same Step 2 to initialize disk /dev/sdc.
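    Instead of repeating the interactive dialogue for each remaining disk, the finished
partition table can also be cloned in one shot. A minimal sketch, assuming the sfdisk utility
from util-linux is available (as it is on a stock CentOS 5 install):

[root@linux1 ~]# sfdisk -d /dev/sdb | sfdisk /dev/sdc

    Here 'sfdisk -d' dumps /dev/sdb's partition table in a re-usable format, and the second
sfdisk writes it to /dev/sdc, reproducing the type 'fd' partition exactly. Run partprobe on the
target disk afterwards, as above.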
3) CREATE A RAID LEVEL 1 ARRAY

    The following command will create a software RAID device /dev/md0 and add /dev/sdb and
/dev/sdc to it. (Note: although we created partitions of type 'fd' in Step 2, this example
builds the array on the whole devices; /dev/sdb1 and /dev/sdc1 could be used instead, as long as
the same choice is made consistently.)

[root@linux1 ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm: array /dev/md0 started.

4) CHECK RAID CONFIGURATION

    To check the RAID configuration, execute the following command.

[root@linux1 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc[1] sdb[0]
      2097088 blocks [2/2] [UU]

unused devices: <none>

5) MAKE RAID CONFIGURATION PERMANENT

    Our RAID configuration is not yet permanent and will be lost when the machine reboots. To
make it permanent, we have to save the array information to a configuration file. A single
command is sufficient to accomplish the task.

[root@linux1 ~]# mdadm --detail --scan > /etc/mdadm.conf
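    For reference, the file this writes contains one ARRAY line per array. On this system it
would look roughly like the line below; the exact fields vary with the mdadm version, and the
UUID (taken here from the --detail output shown later in Step 11) will differ on every machine.

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3c18230e:40c11f0f:fdcec7f4:d575f031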
6) CREATE A FILE SYSTEM ON THE RAID

    To create an 'ext3' file system on the RAID device /dev/md0, use the following command.

[root@linux1 ~]# mke2fs -j /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262144 inodes, 524272 blocks
26213 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

7) MOUNT THE RAID

    Now our RAID is ready to use. Let's create a mount point and mount the RAID.

[root@linux1 ~]# mkdir /u01
[root@linux1 ~]# mount -t ext3 /dev/md0 /u01
[root@linux1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
none on /proc/fs/vmblock/mountPoint type vmblock (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/md0 on /u01 type ext3 (rw)
[root@linux1 Oracle]# df -m
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                         18723      2718     15039  16% /
/dev/sda1                   99        12        83  13% /boot
tmpfs                      252         0       252   0% /dev/shm
.host:/                 127061    101436     25626  80% /mnt/hgfs
/dev/md0                  2016        36      1879   2% /u01

    The last line of the above output shows that the storage capacity of our RAID is 2016 MB,
i.e. the size of the smallest disk in the array.
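    The last two lines of the mke2fs output point at a tuning knob worth knowing. If the
periodic forced checks are not wanted on this volume, they can be disabled; a small sketch,
assuming the stock tune2fs from e2fsprogs:

[root@linux1 ~]# tune2fs -c 0 -i 0 /dev/md0

    Here '-c 0' turns off the mount-count trigger and '-i 0' the time-based trigger, so a full
fsck will only run when requested explicitly.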
8) MOUNT RAID ON SYSTEM STARTUP

    Our RAID is successfully mounted now, but we would have to mount it manually every time the
system reboots. Let's add the following entry to the /etc/fstab file to automatically mount the
RAID on Linux startup.

[root@linux1 ~]# vi /etc/fstab
/dev/VolGroup00/LogVol00 /                       ext3    defaults        1 1
LABEL=/boot              /boot                   ext3    defaults        1 2
tmpfs                    /dev/shm                tmpfs   defaults        0 0
devpts                   /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                    /sys                    sysfs   defaults        0 0
proc                     /proc                   proc    defaults        0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults        0 0
/dev/md0                 /u01                    ext3    defaults        0 0

9) TEST THE NEWLY CONFIGURED RAID

    To test our RAID array, copy a large file to /u01. (I have copied a 626 MB file.)

[root@linux1 Oracle]# cp 10201_database_win32.zip /u01
[root@linux1 Oracle]# cd /u01
[root@linux1 u01]# du -m *
626     10201_database_win32.zip
1       lost+found

    As we already know, in the RAID 1 architecture files are mirrored on all disks. Test it now
by stopping the RAID, mounting the array disks at different mount points, and listing the disk
contents. (Mounting the member disks directly works here because the version 0.90 md superblock
is stored at the end of each device, so the ext3 filesystem is visible from the start of the
disk.)

[root@linux1 u01]# cd /
[root@linux1 /]# umount /u01
[root@linux1 /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@linux1 /]# mkdir d0{1,2}
[root@linux1 /]# mount -t ext3 /dev/sdb /d01
[root@linux1 /]# mount -t ext3 /dev/sdc /d02
[root@linux1 /]# ls /d01
10201_database_win32.zip  lost+found
[root@linux1 /]# ls /d02
10201_database_win32.zip  lost+found

    It is clear from the above test that our RAID is working fine. Now let's start the RAID
again.

[root@linux1 Oracle]# umount /d01
[root@linux1 Oracle]# umount /d02
[root@linux1 Oracle]# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 2 drives.
[root@linux1 Oracle]# mount -t ext3 /dev/md0 /u01

    The 'mdadm --assemble' command will only work if you have saved your RAID configuration to
the /etc/mdadm.conf file (as per Step 5).
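    If /etc/mdadm.conf is missing, the array is not lost: the members can be named explicitly,
since each one carries its own md superblock. A sketch of the equivalent manual assembly for
this array:

[root@linux1 /]# mdadm --assemble /dev/md0 /dev/sdb /dev/sdc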
10) ADD A NEW DISK TO THE RAID ARRAY

    Now let's add one more disk, /dev/sdd, to our existing array. Initialize it according to
Step 2 above, then execute the following command to add it.

[root@linux1 Oracle]# mdadm --manage /dev/md0 --add /dev/sdd
mdadm: added /dev/sdd
[root@linux1 Oracle]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[2](S) sdc[1] sdb[0]
      2097088 blocks [2/2] [UU]

unused devices: <none>

    Although /dev/sdd has been added, it is not yet used by the RAID, because our RAID is
configured to use only two devices and it already has two, i.e. /dev/sdb and /dev/sdc. Therefore
/dev/sdd is added as a SPARE disk (the (S) after sdd[2] in the above output represents this)
that will become active automatically if an active disk fails. (This is the hot-standby feature
of RAID 1 discussed in the definition.)

    We have two ways to make use of /dev/sdd: either we increase the number of raid-devices, or
we replace an existing disk with /dev/sdd. The latter option is discussed in the next section;
for now we are increasing the raid-devices as follows:

[root@linux1 Oracle]# mdadm --grow /dev/md0 --raid-devices=3
[root@linux1 Oracle]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[3] sdc[1] sdb[0]
      2097088 blocks [3/2] [UU_]
      [=======>.............]  recovery = 38.8% (815808/2097088) finish=0.9min
speed=21559K/sec

unused devices: <none>

    Observe the output of the 'cat' command: the RAID is performing some kind of recovery. This
is actually the resynchronization activity that creates an exact mirror on /dev/sdd. It will
take some time, depending on the amount of data at /u01.
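    The resynchronization can be followed live rather than by re-running cat by hand; a small
sketch, assuming the standard watch utility is available:

[root@linux1 Oracle]# watch -n 2 cat /proc/mdstat

    This re-displays /proc/mdstat every 2 seconds until interrupted with Ctrl-C; alternatively,
'mdadm --detail /dev/md0' reports the same rebuild progress in a more verbose form.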
11) REMOVE A DISK FROM THE RAID ARRAY

    Now our RAID has 3 disks and is running at level 1. Let's remove the disk /dev/sdd and
replace it with a new one, /dev/sde. To do so we have to force a device failure.

[root@linux1 Oracle]# mdadm --manage /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
[root@linux1 Oracle]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[3](F) sdc[1] sdb[0]
      2097088 blocks [3/2] [UU_]

unused devices: <none>

    Observe the output of the 'cat' command: the disk /dev/sdd is marked as a faulty spare (the
(F) after sdd[3] in the above output represents this). To remove this disk from the array, use
the following command.

[root@linux1 Oracle]# mdadm --manage /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd
[root@linux1 Oracle]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sun Jan 15 09:33:01 2012
     Raid Level : raid1
     Array Size : 2097088 (2048.28 MiB 2147.42 MB)
    Device Size : 2097088 (2048.28 MiB 2147.42 MB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jan 15 10:49:44 2012
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 3c18230e:40c11f0f:fdcec7f4:d575f031
         Events : 0.12

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       0        0        2      removed

    The last line of the above output shows that RaidDevice 2 has been removed.

    To add the new device /dev/sde, initialize it as in Step 2 and add it as we did in Step 10.
Don't forget to update the /etc/mdadm.conf file (Step 5), or your changes will be lost after a
reboot.
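    Putting those cross-references together, the replacement boils down to the two commands
below. This is only a sketch (output omitted; /dev/sde is assumed to have been partitioned as in
Step 2):

[root@linux1 Oracle]# mdadm --manage /dev/md0 --add /dev/sde
[root@linux1 Oracle]# mdadm --detail --scan > /etc/mdadm.conf

    Because the array is degraded ([UU_]), /dev/sde goes straight into the missing slot and
resynchronization starts immediately, instead of the disk being parked as a spare.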
12) REMOVE THE RAID LEVEL 1 CONFIGURATION

    In the end, I will show you how to remove the RAID configuration from your system. The
steps are shown below; they are the reversal of the configuration we have built so far, and I
don't think they require any further clarification.

[root@linux1 u01]# cd /
[root@linux1 /]# umount /u01
[root@linux1 /]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@linux1 /]# rm -f /etc/mdadm.conf
[root@linux1 /]# rmdir /u01

    Also remove the RAID entry from /etc/fstab.

CONCLUSION:

    In the above write-up I used RAID 1 as the example, because its architecture is relatively
simple to understand and experiment with compared to other levels. I hope that after going
through this write-up you will be able to configure the more complex RAID levels such as 5
and 6.
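    Two closing sketches. First, a RAID 5 creation command differs from Step 3 only in the level
and member count; using the four 2GB disks from this guide, it would look roughly like this:

[root@linux1 ~]# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]

    With four 2GB members, RAID 5 yields roughly 6GB of usable space, since one disk's worth of
capacity goes to parity. Second, a note on teardown: stopping the array and deleting
/etc/mdadm.conf in Step 12 leaves the md superblocks on the member disks, so the kernel or a
rescue environment may still auto-assemble the array later. To scrub them completely (only do
this once the data is no longer needed):

[root@linux1 /]# mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd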