Copyright: ashwinwriter@gmail.com
Dec, 2012
FAQ on SnapDrive for UNIX
Trinity Expert Systems
IT solutions for business excellence
Table of Contents
FAQ
1.1 What is SnapDrive for UNIX?
1.2 What are the different types of storage entities that can be created using the SnapDrive tool?
1.3 Some examples of this hierarchy, in order of increasing complexity
1.4 Both hostvol and diskgroup seem the same from the host's point of view; which should I use?
1.5 What are the available file systems with SnapDrive for UNIX?
1.6 Can I change the physical extent (PE) size with a local LVM created on top of a NetApp LUN?
1.7 Which of the four SnapDrive for UNIX entities allows me to set the PE size on a volume group?
1.8 Can you show me an example of how to change the PE size according to my application requirement?
1.9 How to create the different storage entities supported by SnapDrive for UNIX?
1.10 Creating a "LUN" entity to be managed by the host LVM
1.11 How to resize the storage entity 'LUN' that we created using SnapDrive for UNIX?
1.12 Creating a "hostvol" to be managed by the host LVM
1.13 Creating a "diskgroup" to be managed by the host LVM
1.14 Creating a "filesys" to be managed by the host LVM
1.15 When I decide to disconnect the storage (LUN, filesys, hostvol & diskgroup), what is the recommended method?
1.16 How to disconnect a filesystem, diskgroup/volume group, hostvol, or LUN via the SnapDrive tool?
1.17 Can I not use a LUN clone instead of FlexClone?
1.18 Why FlexClone?
1.19 What are the use cases for a FlexCloned volume?
1.20 What is a FlexClone split?
1.21 How does Data ONTAP process a FlexClone split operation?
1.22 How much capacity is required to perform a FlexClone split operation?
1.23 When do I need to split a FlexCloned volume?
1.24 Is performance impacted when I create a FlexClone volume?
1.25 How does DR work in my HP-UX COINS environment?
1.26 How does FlexClone work for my test environment?
FAQ
1.1 What is SnapDrive for UNIX?
SnapDrive is a SAN storage management utility. It provides a simple interface for provisioning LUNs and mapping LVM objects to them. SnapDrive also provides a command set for connecting snapshotted or cloned LUNs to the same host or to a different host.
1.2 What are the different types of storage entities that can be created using the SnapDrive tool?
SnapDrive for UNIX supports four types of storage entities:
1. filesys
2. LUN
3. hostvol
4. diskgroup
SnapDrive provides easy management of the entire storage hierarchy, from the host-side application-visible file, down through the volume manager, to the filer-side LUNs providing the actual repository.
1.3 Some examples of this hierarchy, in order of increasing complexity:
1. filesys --> NetApp volume --> LUN --> a ready-made filesystem is presented to the host. The volume group (vg) and logical volume (lv) are created automatically as part of this operation.
2. LUN --> NetApp volume --> LUN --> a raw disk is presented to the host. The host can further carve the storage using LVM: pv (physical volume) --> vg (volume group) --> lv (logical volume) --> filesystem.
3. hostvol --> NetApp volume --> LUN --> a logical volume is presented to the host. The logical volume is created along with the volume group; the host need only format the logical volume and mount it.
4. diskgroup --> NetApp volume --> LUN --> a disk group is presented to the host. (Note: disk group and volume group are the same entity.) The host needs to carve the logical volume (LV) out of the volume group (VG) and then lay down and mount the filesystem.
1.4 Both hostvol and diskgroup seem the same from the host's point of view; which should I use?
You are right, both look the same, but there is a differentiating factor in the 'diskgroup' entity creation. When a -hostvol entity is created, the volume group and the logical volume are both created as part of the operation. When you create a -diskgroup entity, only the disk group is created, so you control the size of the logical volume on which you want to create the filesystem.
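The difference also shows up in the equivalent CLI commands printed by the storage wizard later in this FAQ (the hostvol and diskgroup sessions): the -lvol option names both the disk group and the host volume, while the -dg option names only the disk group.

```shell
# hostvol: SnapDrive creates the disk group AND the logical volume (dgr/vgr)
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -lvol dgr/vgr -vmtype LVM

# diskgroup: SnapDrive creates only the disk group (dgcr); you carve the LV yourself
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -dg dgcr -vmtype LVM
```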
1.5 What are the available file systems with SnapDrive for UNIX?
For file system creation, the supported file system types depend on the available LVMs. The desired type can be specified using the -fstype option. For each platform, the following type strings may be specified with -fstype:
- Solaris: VxFS or UFS
- HP-UX: VxFS
- AIX: JFS, JFS2 or VxFS
- Linux: ext3
1.6 Can I change the physical extent (PE) size with a local LVM created on top of a NetApp LUN?
The physical extent size (PE size) of an LVM volume group (VG) is fixed at VG creation. On the Linux command line, the -s option of the vgcreate command explicitly sets the physical extent size on the physical volumes (PVs) of the volume group. The PE size defaults to 4 MB if it is not set explicitly.
Once this value has been set, it is not possible to change the PE size without recreating the volume group, which would involve backing up and restoring the data on any logical volumes.
As far as LVM2 is concerned (LVM version 2.02.06 (2006-05-12), Library version 1.02.07 (2006-05-11), Driver version 4.5.0), there is no LVM command or utility, not even vgmodify on HP-UX, that can resize or change the PE size of an existing VG dynamically or online.
So it is recommended to plan ahead properly before creating an LVM volume group. For example, if the logical volume will store database tables and the database will likely grow to more than 300 GB in the near future, you should not create the volume group with the default PE size of 4 MB.
Note: When physical volumes are used to create a volume group, the disk space is divided into 4 MB extents by default. An extent is the minimum amount by which a logical volume may be increased or decreased in size.
Plan ahead: if you do NOT want the default 4 MB PE size, set the size explicitly when you first create the volume group.
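The extent arithmetic behind this advice can be sketched as follows. This is a minimal illustration; the ~4 MB per-PV metadata overhead is an assumption inferred from the vgdisplay outputs shown later in this FAQ, where a 2 GB LUN yields 511 extents at the default 4 MB PE size and 31 extents at 64 MB.

```python
def vg_extents(pv_size_mb, pe_size_mb, metadata_mb=4):
    # LVM2 reserves a small metadata area at the start of each PV
    # (assumed ~4 MB here); the remainder is split into whole extents.
    return (pv_size_mb - metadata_mb) // pe_size_mb

# A 2 GB LUN with the default 4 MB PE size -> 511 extents (2.00 GB usable)
print(vg_extents(2048, 4))        # 511

# The same LUN with a 64 MB PE size -> 31 extents, i.e. only 1984 MB
# (1.94 GB) usable: the coarser granularity strands the remainder.
print(vg_extents(2048, 64))       # 31
print(vg_extents(2048, 64) * 64)  # 1984
```

These numbers match the 'Total PE 511' and 'Total PE 31' lines in the vgdisplay outputs of the PE-size example in section 1.8.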
1.7 Which of the four SnapDrive for UNIX entities allows me to set the PE size on a volume group?
1. filesys
2. LUN
3. hostvol
4. diskgroup
Of the four options, only the 'LUN' entity allows you to set the PE size. This is because for all the remaining entities the volume group (VG) is created as part of the entity creation, so you cannot change the default PE size of 4 MB.
1.8 Can you show me an example of how to change the PE size according to my application requirement?
Yes. In the following example, we create a 'LUN' entity and then create a volume group with a customized PE size of 64 MB.
1. First, we created an entity called 'LUN' (name: lun_pe) using 'snapdrive storage wizard create'; you can also use the command line to achieve the same.
Configuration Summary:
- Storage System : darfas01
- Volume Name : /vol/centos_iscsi
- LUN Name : lun_pe
- LUN Size : 2048.0 MB
Equivalent CLI command is:
snapdrive storage create -lun darfas01:/vol/centos_iscsi/lun_pe -lunsize 2048.0m
Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/lun_pe to device file mapping => /dev/sdd, /dev/sde
Do you want to create more storage {y, n}[y]?: n
### Run the 'snapdrive storage list -all' command to check the status of the LUN we just created ####
[root@redhat /]# snapdrive storage list -all
WARNING: This operation can take several minutes based on the configuration.
Connected LUNs and devices:
device filename adapter path size proto state clone lun path backing snapshot
---------------- ------- ---- ---- ----- ----- ----- -------- ----------------
/dev/mapper/mpath45 - P 12g iscsi online No darfas01:/vol/centos_iscsi/lun_centos -
/dev/mapper/mpath57 - P 2g iscsi online No darfas01:/vol/centos_iscsi/lun_pe - <<--- Here it is!
#### Next task is to create a physical volume using the 'pvcreate' command as shown below ####
[root@redhat /]# pvcreate /dev/mapper/mpath57
Writing physical volume data to disk "/dev/mpath/mpath57"
Physical volume "/dev/mpath/mpath57" successfully created
[root@redhat /]#
#### Run pvdisplay or pvscan to check the status of the physical volume we just created ###
[root@redhat /]# pvdisplay
"/dev/mpath/mpath57" is a new physical volume of "2.00 GB"
--- NEW Physical volume ---
PV Name /dev/mpath/mpath57
VG Name
PV Size 2.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID TBYrUv-NH5J-79K1-MypM-m0OR-kjOd-YqApZJ
[root@redhat /]# pvscan
PV /dev/mpath/mpath57 lvm2 [2.00 GB]
Total: 1 [2.00 GB] / in use: 0 [0 ] / in no VG: 1 [2.00 GB]
[root@redhat /]#
#### If we create the VG without the '-s' switch, the command creates it with the default PE size of 4 MB, as shown below #####
[root@redhat /]# vgcreate netapp_pe /dev/mapper/mpath57
Volume group "netapp_pe" successfully created
[root@redhat /]# vgdisplay
--- Volume group ---
VG Name netapp_pe
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB <<-------- default PE size
Total PE 511
Alloc PE / Size 0 / 0
Free PE / Size 511 / 2.00 GB
VG UUID EU4Us2-0seB-SK21-vuCv-Ik7j-29N3-LJhL8e
##### To set a PE size of 64 MB for this volume group, we use the '-s' switch ####
First we need to remove this volume group, then recreate it with the '-s' switch and a PE size of 64 MB, as shown below.
[root@redhat /]# vgremove netapp_pe
Volume group "netapp_pe" successfully removed
[root@redhat /]# vgcreate -s 64 netapp_pe /dev/mapper/mpath57
Volume group "netapp_pe" successfully created
[root@redhat /]# vgdisplay
--- Volume group ---
VG Name netapp_pe
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.94 GB
PE Size 64.00 MB <<------- PE size now shows '64M'.
Total PE 31
Alloc PE / Size 0 / 0
Free PE / Size 31 / 1.94 GB
VG UUID GDzmZG-BAVf-gEnn-tzSW-hPLw-NBAh-6DoC7B
[root@redhat /]#
### We successfully created a volume group with a PE size of 64 MB #####
Next, we can create a logical volume, format it, and mount it.
[root@redhat /]# lvcreate -l 100%VG -n netapp_pe_lun netapp_pe
Logical volume "netapp_pe_lun" created
[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/netapp_pe/netapp_pe_lun
VG Name netapp_pe
LV UUID ZhP1Vp-kuHY-4z7x-ZvQS-h2Fc-eeu6-3slvuV
LV Write Access read/write
LV Status available
# open 0
LV Size 1.94 GB
Current LE 31
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
[root@redhat /]# mkfs -t ext3 /dev/netapp_pe/netapp_pe_lun
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
253952 inodes, 507904 blocks
25395 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=520093696
16 block groups
32768 blocks per group, 32768 fragments per group
15872 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@redhat /]#
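As a sanity check of the mke2fs output above (a small sketch, not part of the SnapDrive workflow): the block and inode counts follow directly from the 31 logical extents of 64 MB each; the one-inode-per-8-KB ratio is mke2fs's usual default and is an assumption here.

```python
PE_MB = 64        # PE size chosen with 'vgcreate -s 64'
LE = 31           # logical extents in the LV ('lvcreate -l 100%VG')
FS_BLOCK = 4096   # ext3 block size reported by mke2fs

lv_bytes = LE * PE_MB * 1024 * 1024
print(lv_bytes // FS_BLOCK)  # 507904 blocks, matching the mke2fs output
print(lv_bytes // 8192)      # 253952 inodes (~one inode per 8 KB)
```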
1.9 How to create the different storage entities supported by SnapDrive for UNIX?
1. LUN
2. hostvol
3. diskgroup
4. filesys
This demonstration covers all four entities, in the order listed above.
1.10 Creating a "LUN" entity to be managed by the host LVM.
The process consists of two steps:
1. Create a LUN on the NetApp filer.
2. Use the local volume manager (LVM/Veritas Volume Manager) to manage the LUN.
Step 1:
Create a LUN on NetApp storage:
[root@cloneredhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for
use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN'
below.
* File system: creates one or more LUNs, makes a file system on the LUNs
(either with or without a volume manager) and mounts the file system on
this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume
mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also
called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]:
Getting storage systems configured in the host ...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell
centos_nfs vol_iscsi_win_test cl_test_centos_centos_iscsi_20130207131934
Enter the storage volume name or press <enter> to list them: centos_iscsi
You can provide comma separated multiple entity names e.g: lun1,lun2,
lun3 etc.
Enter the LUN name(s): lun_clred
Checking LUN name(s) availability. Please wait ...
Enter the LUN size for LUN(s) in the below mentioned format. (Default
unit is MB)
<size>k:m:g:t
Where, k: Kilo Bytes m: Mega bytes g: Giga Bytes t: Tera Bytes
Enter the LUN size: 2g
Configuration Summary:
- Storage System : darfas01
- Volume Name : /vol/centos_iscsi
- LUN Name : lun_clred
- LUN Size : 2048.0 MB
Equivalent CLI command is:
snapdrive storage create -lun darfas01:/vol/centos_iscsi/lun_clred -lunsize 2048.0m
Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/lun_clred to device file mapping => /dev/sdb, /dev/sdc
Do you want to create more storage {y, n}[y]?: n
#### As you can see, a LUN of size 2 GB has been created on the NetApp storage side. To verify that it is available to the host, run 'multipath -l' ####
[root@cloneredhat /]# multipath -l
mpath60 (360a9800042574b67575d426149715054) dm-0 NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 1:0:0:0 sdc 8:32 [active][undef]
### As you can see, the multipath device mapper has created the mpath60 virtual device, which can now be managed by LVM or another local host utility #######
Step 2:
Managing the LUN (virtual device) with LVM.
Checking the LVM version on the local Red Hat server:
[root@cloneredhat /]# lvm version
LVM version: 2.02.88(2)-RHEL5 (2012-01-20)
Library version: 1.02.67-RHEL5 (2011-10-14)
Driver version: 4.11.6
[root@cloneredhat /]# pvcreate /dev/mapper/mpath60
Writing physical volume data to disk "/dev/mpath/mpath60"
Physical volume "/dev/mpath/mpath60" successfully created
[root@cloneredhat /]# pvdisplay
"/dev/mpath/mpath60" is a new physical volume of "2.00 GB"
--- NEW Physical volume ---
PV Name /dev/mpath/mpath60
VG Name
PV Size 2.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID ejdmna-wcDc-KE58-0Jyr-vn1U-iMXd-J3OM7e
### As you can see, I have created a physical volume using the virtual device 'mpath60' (use pvdisplay to see the details); now I can create a volume group and logical volume on top of it #####
Next step: create the volume group using 'vgcreate'.
[root@cloneredhat /]# vgcreate netapp_vol /dev/mapper/mpath60
Volume group "netapp_vol" successfully created
## As you can see, I have created the volume group 'netapp_vol' ####
[root@cloneredhat /]# vgdisplay
--- Volume group ---
VG Name netapp_vol
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB
Total PE 511
Alloc PE / Size 0 / 0
Free PE / Size 511 / 2.00 GB
VG UUID ta2wLO-H23G-EImK-I03z-Vaxs-cMVy-krUd1n
Next step: Create Logical volume using 'lvcreate'.
[root@cloneredhat /]# lvcreate -l 100%VG -n netapp_lun netapp_vol
Logical volume "netapp_lun" created
[root@cloneredhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/netapp_vol/netapp_lun
VG Name netapp_vol
LV UUID 00sYi6-I69e-JBM7-7Yme-Ga0M-WfD1-D5QjCl
LV Write Access read/write
LV Status available
# open 0
LV Size 2.00 GB
Current LE 511
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
Next step: format the logical volume with your preferred file system.
[root@cloneredhat /]# mkfs -t ext3 /dev/netapp_vol/netapp_lun
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
261632 inodes, 523264 blocks
26163 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@cloneredhat /]#
Finally:
Create a mount point and mount the logical volume.
[root@cloneredhat /]# mount -t ext3 /dev/netapp_vol/netapp_lun /mnt/mnt_netapp/
[root@cloneredhat mnt_netapp]# ll
total 16
drwx------ 2 root root 16384 Feb 7 13:50 lost+found
1.11 How to resize the storage entity 'LUN' that we created using SnapDrive for UNIX?
To extend the size of the filesystem, resize the underlying storage first.
The storage resize operation can only increase the size of storage; you cannot use it to decrease the size of an entity. All LUNs must reside in the same storage system volume.
The resize operation is not supported directly on logical host volumes, or on file systems that reside on logical host volumes or on LUNs. In those cases, you must use the LVM commands to resize the storage.
Note: You cannot resize a LUN itself; you must use the -addlun option to add a new LUN.
In the following example, we will resize (that is, increase) the storage. In an earlier exercise we created a raw LUN directly on the NetApp storage and then managed it with LVM. Hence, in order to extend the storage we need to add a new LUN and then add it to the volume group (netapp_vol) that we created previously.
Four-step process:
1. Add a new LUN
snapdrive storage create -lun darfas01:/vol/centos_iscsi/addlun -lunsize 2048.0m
[root@cloneredhat /]# multipath -l
mpath61 (360a9800042574b67575d426149715056) dm-2 NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
_ 2:0:0:1 sdd 8:48 [active][undef]
_ 1:0:0:1 sde 8:64 [active][undef]
2. Extend the volume group:
[root@cloneredhat /]# vgextend netapp_vol /dev/mpath/mpath61
No physical volume label read from /dev/mpath/mpath61
Writing physical volume data to disk "/dev/mpath/mpath61"
Physical volume "/dev/mpath/mpath61" successfully created
Volume group "netapp_vol" successfully extended
[root@cloneredhat /]# pvscan
PV /dev/mpath/mpath60 VG netapp_vol lvm2 [2.00 GB / 0 free]
PV /dev/mpath/mpath61 VG netapp_vol lvm2 [2.00 GB / 2.00 GB free]
Total: 2 [3.99 GB] / in use: 2 [3.99 GB] / in no VG: 0 [0 ]
### We can now see that the volume group has been extended #####
3. Extend the logical volume:
[root@cloneredhat /]# lvextend -L+1.9g /dev/netapp_vol/netapp_lun
Rounding up size to full physical extent 1.90 GB
Extending logical volume netapp_lun to 3.90 GB
Logical volume netapp_lun successfully resized
[root@cloneredhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/netapp_vol/netapp_lun
VG Name netapp_vol
LV UUID 00sYi6-I69e-JBM7-7Yme-Ga0M-WfD1-D5QjCl
LV Write Access read/write
LV Status available
# open 1
LV Size 3.90 GB
Current LE 998
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
4. Resize the filesystem. You can resize an ext2/3/4 filesystem online, while the partition is mounted.
[root@cloneredhat /]# resize2fs /dev/netapp_vol/netapp_lun
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/netapp_vol/netapp_lun is mounted on /mnt/mnt_netapp; on-line resizing required
Performing an on-line resize of /dev/netapp_vol/netapp_lun to 1021952 (4k) blocks.
The filesystem on /dev/netapp_vol/netapp_lun is now 1021952 blocks long.
[root@cloneredhat /]# df
/dev/mapper/netapp_vol-netapp_lun 4022080 36896 3781024 1% /mnt/mnt_netapp
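The rounding in the lvextend and resize2fs output above can be reproduced with a little arithmetic (a sketch; the 4 MB PE size is the one shown by vgdisplay for netapp_vol):

```python
import math

PE_MB = 4
old_le = 511             # 'Current LE' before the extend (2.00 GB LV)
grow_mb = 1.9 * 1024     # 'lvextend -L+1.9g'

# lvextend rounds the request UP to a whole number of physical extents
new_le = old_le + math.ceil(grow_mb / PE_MB)
print(new_le)                           # 998  ('Current LE 998')
print(round(new_le * PE_MB / 1024, 2))  # 3.9  (reported as 3.90 GB)

# resize2fs grows ext3 to fill the LV: size in 4 KiB blocks
print(new_le * PE_MB * 1024 // 4)       # 1021952 blocks
```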
1.12 Creating a "hostvol" to be managed by the host LVM.
[root@redhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for
use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN'
below.
* File system: creates one or more LUNs, makes a file system on the LUNs
(either with or without a volume manager) and mounts the file system on
this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume
mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also
called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: hostvol
Fetching storage resources. Please wait...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name:
darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell
centos_nfs cl_test_centos_centos_iscsi_20130207131934 vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of Volume Manager(s):
LVM
Taking 'LVM' as volume manager.
Enter the host volume name in dg/hostvol format: dgr/vgr
Do you want to specify name(s) for the LUN(s) that will be created as a
part of this wizard: {y, n}[n]?:
Enter the Disk group size in the below mentioned format. (Default unit
is MB)
<size>k:m:g:t
Where, k: Kilo Bytes m: Mega bytes g: Giga Bytes t: Tera Bytes
The format is not case sensitive. This is the size of each disk group
that is being created.
Enter the disk group size: 2g
Configuration Summary:
- Storage System : darfas01
- Volume Name : /vol/centos_iscsi
- Disk Group Name : dgr
- Disk Group size : 2048.0 MB
- Host Volume Name : vgr
- Volume Manager : LVM
Equivalent CLI command is:
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -lvol dgr/vgr -vmtype LVM
Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/dgr_SdLun to device file mapping => /dev/sde, /dev/sdf
Disk group dgr created
Host volume vgr created
Do you want to create more storage {y, n}[y]?: n
[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgr/vgr
VG Name dgr
LV UUID y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
LV Write Access read/write
LV Status available
# open 0
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
[root@redhat /]# mkfs -t ext3 /dev/dgr/vgr
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262752 inodes, 525312 blocks
26265 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@redhat /]# mount /dev/dgr/vgr /mnt/share/
[root@redhat /]# cd /mnt/share/
[root@redhat share]# ll
total 16
drwx------ 2 root root 16384 Feb 11 00:10 lost+found
Resizing the diskgroup/volume group (dgr/vgr):
[root@redhat /]# snapdrive storage resize -vg dgr -growto 4g -addlun
discovering filer LUNs in disk group dgr...done
LUN darfas01:/vol/centos_iscsi/dgr-1_SdLun ... created
mapping new lun(s) ... done
discovering new lun(s) ... done.
initializing LUN(s) and adding to disk group dgr...done
Disk group dgr has been resized
Desired resize of host volumes or file systems
contained in disk group must be done manually
### Volume group 'dgr' has been resized to 4 GB #####
[root@redhat /]# vgdisplay
--- Volume group ---
VG Name dgr
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1024
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 511 / 2.00 GB
VG UUID BAUa7p-rTjM-VcZa-pEqk-H7OB-hbBO-7TAK8F
[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgr/vgr
VG Name dgr
LV UUID y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
#### Now, we need to extend the logical volume from 2 GB to 4 GB ######
Steps:
[root@redhat /]# lvextend -L+1.9g /dev/dgr/vgr
Rounding up size to full physical extent 1.90 GB
Extending logical volume vgr to 3.91 GB
Logical volume vgr successfully resized
[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgr/vgr
VG Name dgr
LV UUID y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
LV Write Access read/write
LV Status available
# open 1
LV Size 3.91 GB
Current LE 1000
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
##### Finally, extend the filesystem to the extended logical volume size #####
[root@redhat /]# resize2fs /dev/dgr/vgr
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/dgr/vgr is mounted on /mnt/share; on-line resizing required
Performing an on-line resize of /dev/dgr/vgr to 1024000 (4k) blocks.
The filesystem on /dev/dgr/vgr is now 1024000 blocks long.
[root@redhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 4103648 12949540 25% /
/dev/sda1 295561 21875 258426 8% /boot
tmpfs 647380 0 647380 0% /dev/shm
/dev/mapper/mpath45 12385456 306932 11449380 3% /mnt/netapp
/dev/sdd1 3908720 3527888 380832 91% /media/disk
/dev/mapper/dgr-vgr 4033856 69728 3765704 2% /mnt/share
File system successfully resized online
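The extent accounting in this section can be reconstructed the same way (a sketch based on the vgdisplay and lvdisplay outputs above; the per-PV extent counts are read from those outputs):

```python
import math

PE_MB = 4
pv1 = 513   # extents on the original dgr_SdLun PV
pv2 = 511   # extents on the LUN added by 'snapdrive storage resize ... -addlun'
print(pv1 + pv2)              # 1024 ('Total PE 1024' after the resize)

# snapdrive only grows the disk group; the LV and filesystem follow manually
new_le = pv1 + math.ceil(1.9 * 1024 / PE_MB)   # 'lvextend -L+1.9g'
print(new_le)                 # 1000 ('Current LE 1000')
print(new_le * PE_MB * 256)   # 1024000 ext3 blocks of 4 KiB (resize2fs)
```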
1.13 Creating a "diskgroup" to be managed by the host LVM.
[root@cloneredhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for
use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN'
below.
* File system: creates one or more LUNs, makes a file system on the LUNs
(either with or without a volume manager) and mounts the file system on
this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume
mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also
called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: diskgroup
Fetching storage resources. Please wait...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell
centos_nfs cl_test_centos_centos_iscsi_20130207131934 vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of Volume Manager(s):
LVM
Taking 'LVM' as volume manager.
Enter the disk group name: dgcr
Do you want to specify name(s) for the LUN(s) that will be created as a
part of this wizard: {y, n}[n]?:
Enter the Disk group size in the below mentioned format. (Default unit
is MB)
<size>k:m:g:t
Where, k: Kilo Bytes m: Mega bytes g: Giga Bytes t: Tera Bytes
The format is not case sensitive. This is the size of each disk group
that is being created.
Enter the disk group size: 2g
Configuration Summary:
Storage System : darfas01
Volume Name : /vol/centos_iscsi
Disk Group Name : dgcr
Disk Group size : 2048.0 MB
Volume Manager : LVM
Equivalent CLI command is:
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -dg dgcr -vmtype LVM
Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/dgcr_SdLun to device file mapping => /dev/sdg, /dev/sdh
Disk group dgcr created
Do you want to create more storage {y, n}[y]?: n
[root@cloneredhat /]#
[root@cloneredhat /]# pvdisplay
--- Physical volume ---
PV Name /dev/mpath/mpath62
VG Name dgcr
PV Size 2.01 GB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 513
Free PE 513
Allocated PE 0
PV UUID MrLKSn-fu69-2RDR-jWjL-Kb6b-w8Zh-dTVp7f
[root@cloneredhat /]# vgdisplay
--- Volume group ---
VG Name dgcr
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB
Total PE 513
Alloc PE / Size 0 / 0
Free PE / Size 513 / 2.00 GB
VG UUID sxL3Z2-fEYh-JTOB-WWXR-P1Kl-5bTs-I2pp6Q
##### The vgdisplay command shows the volume group we just created #######
Now we need to create the logical volume.
[root@cloneredhat /]# lvcreate -l 100%VG -n lun_dgcr dgcr
Logical volume "lun_dgcr" created
[root@cloneredhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgcr/lun_dgcr
VG Name dgcr
LV UUID FX1mr1-xqWj-sTtF-d1Rp-tl9w-GeVI-8iayFm
LV Write Access read/write
LV Status available
# open 0
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
##### Next task is to format the logical volume and mount it ######
[root@cloneredhat /]# mkfs -t ext3 /dev/dgcr/lun_dgcr
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262752 inodes, 525312 blocks
26265 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@cloneredhat /]#
[root@cloneredhat mnt]# mkdir dgcr
[root@cloneredhat mnt]# cd /
[root@cloneredhat /]# mount /dev/dgcr/lun_dgcr /mnt/dgcr/
[root@cloneredhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 3539680 13513508 21% /
/dev/sda1 295561 27552 252749 10% /boot
tmpfs 657540 0 657540 0% /dev/shm
/dev/sdf1 3908720 3527888 380832 91% /media/disk
/dev/mapper/dgcr-lun_dgcr
2068220 68704 1894456 4% /mnt/dgcr
[root@cloneredhat /]#
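To cross-check the stack we just built, snapdrive's own view can be compared against the host's. A quick verification sketch using the names from this exercise (switches as in SnapDrive for UNIX's storage show command; confirm against your SDU version):

```shell
# Cross-check the storage built on the 'dgcr' disk group from both sides.
snapdrive storage show -dg dgcr     # LUN-to-diskgroup mapping as snapdrive sees it
snapdrive storage show -all         # every LUN, dg, hostvol and fs known on this host
lvs dgcr                            # host-side view: logical volumes in the dgcr VG
df -h /mnt/dgcr                     # confirm the ext3 filesystem is mounted
```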
1.14 Creating a "filesys" to be managed by HOST LVM.
[root@redhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for
use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN'
below.
* File system: creates one or more LUNs, makes a file system on the LUNs
(either with or without a volume manager) and mounts the file system on
this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume
mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also
called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: filesys
Fetching storage resources. Please wait...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell
centos_nfs cl_test_centos_centos_iscsi_20130207131934 vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of file system(s):
EXT3
Taking 'EXT3' as file system.
Do you want to create a file system with lvm {y, n}[y]?:
Enter the mount path: /mnt/test
Following are the available types of Volume Manager(s):
LVM
Taking 'LVM' as volume manager.
Do you want to specify name(s) for the LUN(s) that will be created as
a part of this wizard: {y, n}[n]?:
Enter the host volume name in dg/hostvol format or press <enter> to use
default name:
Enter the disk group name or press <enter> to use default name:
Enter the Disk group size in the below mentioned format. (Default unit
is MB)
<size>k:m:g:t
Where, k: Kilo Bytes m: Mega bytes g: Giga Bytes t: Tera Bytes
The format is not case sensitive. This is the size of each disk group
that is being created.
Enter the disk group size: 2g
Configuration Summary:
Storage System : darfas01
Volume Name : /vol/centos_iscsi
Disk Group size : 2048.0 MB
Volume Manager : LVM
File System Name : /mnt/test
File System Type : EXT3
Equivalent CLI command is:
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -vmtype LVM -fs /mnt/test
-fstype EXT3
Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/test_SdLun to device file mapping => /dev/sdd, /dev/sde
Disk group test_SdDg created
Host volume test_SdHv created
File system /mnt/test created
Do you want to create more storage {y, n}[y]?: n
[root@redhat /]#
### Now, if we run 'vgdisplay' and 'lvdisplay' we can see the respective volume group (test_SdDg) and
logical volume (test_SdHv) ####
[root@redhat /]# vgdisplay
--- Volume group ---
VG Name test_SdDg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB
Total PE 513
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 0 / 0
VG UUID 9DajT6-sRoo-7djE-DUW2-4oCv-3ODD-nh10bS
[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/test_SdDg/test_SdHv
VG Name test_SdDg
LV UUID mBElhZ-ktnz-hWxS-cs1Q-CvQC-Q1KO-35Ds8u
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
#########
Similarly, as we have done in the previous exercises, we will resize the 'filesys' entity this time, using the
same 'snapdrive storage resize -vg' command.
[root@redhat /]# snapdrive storage resize -vg test_SdDg -growto 4g -addlun
discovering filer LUNs in disk group test_SdDg...done
LUN darfas01:/vol/centos_iscsi/test-1_SdLun ... created
mapping new lun(s) ... done
discovering new lun(s) ... done.
initializing LUN(s) and adding to disk group test_SdDg...done
Disk group test_SdDg has been resized
Desired resize of host volumes or file systems
contained in disk group must be done manually
[root@redhat /]#
### Now, if we run 'vgdisplay' we can see that an extra 2 GB of storage space has been added to this volume
group #####
[root@redhat /]# vgdisplay
--- Volume group ---
VG Name test_SdDg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.00 GB <<--------- + 2g
PE Size 4.00 MB
Total PE 1024
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 511 / 2.00 GB
VG UUID 9DajT6-sRoo-7djE-DUW2-4oCv-3ODD-nh10bS
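As the resize output above notes, growing the host volume and file system into the newly added space is a manual step. A minimal sketch for this environment (names taken from the exercise; on older RHEL kernels an online ext3 grow may require 'ext2online' instead of 'resize2fs'):

```shell
# Grow the logical volume into all free extents of the resized disk group,
# then grow the ext3 filesystem to match the new LV size.
lvextend -l +100%FREE /dev/test_SdDg/test_SdHv
resize2fs /dev/test_SdDg/test_SdHv
df -h /mnt/test    # should now report roughly 4 GB
```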
##### In order to remove/disconnect this filesys entity, we can use 'snapdrive storage disconnect' with the -fs
switch to remove all the layers in order, as shown in the example below ####
[root@redhat /]# snapdrive storage disconnect -fs /mnt/test/
disconnect file system /mnt/test/
- fs /mnt/test ... disconnected
- hostvol test_SdDg/test_SdHv ... disconnected
- dg test_SdDg ... disconnected
- LUN darfas01:/vol/centos_iscsi/test_SdLun ... disconnected
- LUN darfas01:/vol/centos_iscsi/test-1_SdLun ... disconnected
0001-669 Warning:
Please save information provided by this command.
You will need it to re-connect disconnected filespecs.
[root@redhat /]#
1.15 When I decide to disconnect the storage (LUN, filesys, hostvol & diskgroup), what is
the ideal recommended method?
Disconnecting storage is simple, but you must always remove the storage layers from the top down. This is
important; otherwise you might end up with defunct or orphaned logical volumes or a corrupted file
system.
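For reference, the manual top-down order that snapdrive automates looks roughly like this (a sketch using the names from the earlier dgcr exercise; a single 'snapdrive storage disconnect' does all of it and also unmaps the LUN):

```shell
# Manual teardown, top layer first -- the order matters:
umount /mnt/dgcr                 # 1) file system
lvremove /dev/dgcr/lun_dgcr      # 2) logical volume
vgremove dgcr                    # 3) volume group
pvremove /dev/mpath/mpath62      # 4) LVM label on the LUN's device
# ...after which the LUN itself can be unmapped on the storage system.
```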
1.16 How to disconnect a filesystem, diskgroup/volume group, hostvol, or LUN via the snapdrive
tool?
'snapdrive storage disconnect' can be used to disconnect the 'storage' created using the snapdrive tool.
The following exercises present various examples of how to perform the disconnect.
### We created this volume group using the 'host LVM utility' in one of our exercises, and later we resized this
vg by adding an additional lun to it ###### Now, we will disconnect the luns associated with this vg using
one simple command; we need not use 'host lvm' commands such as 'vgremove' and 'pvremove', as this is
taken care of by the single snapdrive storage disconnect command ################
[root@cloneredhat /]# snapdrive storage disconnect -dg netapp_vol    ### the '-vg' and '-dg' switches are interchangeable
disconnect disk group netapp_vol
- dg netapp_vol ... disconnected
- LUN darfas01:/vol/centos_iscsi/lun_clred ... disconnected
- LUN darfas01:/vol/centos_iscsi/addlun ... disconnected
0001-669 Warning:
Please save information provided by this command.
You will need it to re-connect disconnected filespecs.
[root@cloneredhat /]#
How to disconnect a hostvol (volume group/logical volume) already mounted with a filesystem
## we created this hostvol and later mounted it on this mount point /mnt/dgcr ###
Storage System : darfas01
Volume Name : /vol/centos_iscsi
Disk Group Name : dgr
Disk Group size : 2048.0 MB
Host Volume Name : vgr
Volume Manager : LVM
#### If the file system is mounted, then in order to disconnect the storage we have to disconnect all the
layers: first the file system, followed by the logical volume, the volume group, and finally the LUN. This is
made easy with a single snapdrive command; if we use the '-fs' switch, all the layers are disconnected
systematically, as shown below ####
[root@cloneredhat /]# snapdrive storage disconnect -fs /mnt/dgcr/
disconnect file system /mnt/dgcr/
- fs /mnt/dgcr ... disconnected #### (1) filesystem disconnected
- hostvol dgcr/lun_dgcr ... disconnected #### (2) logical volume disconnected
- dg dgcr ... disconnected #### (3) volume group disconnected
- LUN darfas01:/vol/centos_iscsi/dgcr_SdLun ... disconnected #### (4) LUN on NetApp Filer
disconnected
0001-669 Warning:
Please save information provided by this command.
You will need it to re-connect disconnected filespecs.
[root@cloneredhat /]#
In another example, we will first unmount the filesystem (in other words, remove the top layer) and then
use the '-hostvol' switch to remove the remaining layers (logical volume and volume group).
[root@redhat /]# umount /mnt/share/
[root@redhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 4110368 12942820 25% /
/dev/sda1 295561 21875 258426 8% /boot
tmpfs 647380 0 647380 0% /dev/shm
/dev/mapper/mpath45 12385456 306932 11449380 3% /mnt/netapp
[root@redhat /]# snapdrive storage disconnect -hostvol dgr/vgr
disconnect host volume dgr/vgr
- hostvol dgr/vgr ... disconnected
- dg dgr ... disconnected
- LUN darfas01:/vol/centos_iscsi/dgr_SdLun ... disconnected
- LUN darfas01:/vol/centos_iscsi/dgr-1_SdLun ... disconnected
0001-669 Warning:
Please save information provided by this command.
You will need it to re-connect disconnected filespecs.
NOTE: Whichever method you used to create host-side storage via the snapdrive tool, if the storage is
mounted it is best to use the '-fs' switch, as it removes all the layers in order, as explained in one of the
examples above.
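The warnings above tell us to save the disconnect output because it is needed to re-connect the filespecs later. A sketch of the reverse operation using 'snapdrive storage connect' with the names from the filesys exercise; the exact switches depend on how the storage was originally built, so verify against your SDU version's syntax:

```shell
# Re-connect the previously disconnected filesys entity, supplying the
# filespec details (fs mount point, hostvol, LUN paths) saved earlier.
snapdrive storage connect -fs /mnt/test \
    -hostvol test_SdDg/test_SdHv \
    -lun darfas01:/vol/centos_iscsi/test_SdLun \
         darfas01:/vol/centos_iscsi/test-1_SdLun
```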
1.17 Can I not use LUN clone instead of flexclone?
LUN clones, though they share blocks, incur an extra level of indirection on reads (assuming they have
not been split). This is not the case with LUNs on FlexClone volumes, where there is no additional
redirection on reads. Therefore FlexClone is the preferred method.
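On a 7-Mode system, cloning a whole volume (and with it any LUNs inside) is a single command. The volume and snapshot names below are illustrative:

```shell
# Create a FlexClone of 'vol_prod' backed by an existing snapshot.
# The clone shares all blocks with the parent until they diverge.
vol clone create vol_prod_clone -s none -b vol_prod nightly.0
# LUNs inside vol_prod_clone can then be mapped to a host as usual.
```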
1.18 Why FlexClone?
1. Test & Dev environments
2. Mirror for data protection
3. DR site enabled for Read/Write
4. System upgrade/Deployment test
5. Data Mining
6. Data warehouse
More...
FlexClone provisioning:
 Instantly provisions cloned datastores or RDMs
 Supports SAN, iSCSI, and NAS
 Clones are immediately available and require zero additional storage
 Pointer-driven, block-level replicas
1.19 What are the Use Cases for flexcloned volume?
Practical applications of FlexClone technology are:
 FlexClone technology enables multiple, instant data set clones with no storage overhead.
 It provides dramatic improvements for application test and development environments and is
tightly integrated with the file system technology and a microkernel design in a way that renders
competitive methods archaic.
 FlexClone volumes are ideal for managing production data sets.
 They allow effortless error containment for bug fixing and development.
 They simplify platform upgrades for ERP and CRM applications.
 Instant FlexClone volumes provide data for multiple simulations against large data sets for ECAD,
MCAD, and Seismic applications, all without unnecessary duplication or waste of physical space.
The ability to split FlexClone volumes from their parent lets administrators easily create new permanent,
independent volumes for forking project data.
1.20 What is a FlexClone split?
A FlexClone split is the act of splitting a FlexClone volume from its parent volume. The split results in a
full copy of all the shared data from the parent volume, and removes any relationship or dependency
between the two volumes. After the split is complete, the FlexClone volume is no longer a FlexClone
volume but a regular volume instead. It is not possible to choose a destination aggregate for a FlexClone
split; it will always be the same aggregate as the parent volume.
1.21 How does Data ONTAP process a FlexClone split operation?
Data ONTAP uses a background scanner to copy the shared data from the parent volume to the FlexClone
volume. The scanner has one active message at any time that is processing only one inode, so the split
tends to be faster on a volume with fewer inodes. Also, any data written, overwritten, or deleted on the
FlexClone volume will not be shared with the parent volume and thus does not need to be copied. During
the split operation, both the parent and FlexClone volumes are online and the operation is non-disruptive
to client access.
1.22 How much capacity is required to perform a FlexClone split operation?
Immediately after the creation of a FlexClone volume, all data is shared between it and the reference
snapshot of the parent volume, and splitting the FlexClone volume from the parent volume would require
a storage capacity equal to the used capacity of the parent active filesystem at the time of the snapshot.
As the FlexClone volume and the parent diverge due to writes, overwrites, and deletions, the amount of
shared data decreases. Data ONTAP includes a command that estimates the amount of storage capacity
required to split a FlexClone volume from its parent.
For Data ONTAP in 7-Mode, use the vol clone split estimate command. The following is a sample usage
and output of this command.
7-mode> vol clone split estimate quotas_c
An estimated 10gb available storage is required in the aggregate to split
clone volume 'quotas_c' from its parent.
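Once enough free space is confirmed, the split itself is started and monitored with the companion commands (7-Mode syntax, same example volume):

```shell
# Kick off the background split, then poll its progress. The volume stays
# online and writable while the scanner copies the shared blocks.
vol clone split start quotas_c
vol clone split status quotas_c
# 'vol clone split stop quotas_c' would abort an in-progress split.
```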
1.23 When do I need to split flexcloned volume?
FlexClone volumes can be used indefinitely, but there are a number of good reasons for a storage
administrator to split off a FlexClone volume to create a fully independent FlexVol volume:
 To replace the current parent FlexVol volume with the modified FlexClone volume.
 To free the blocks pinned down by the FlexClone volume's base Snapshot copy.
 To have Data ONTAP enforce space reservations for the volume, for more predictable
administration.
1.24 Is performance impacted when I create a flexclone volume?
Performance is not an issue. Since the clone volume uses the same aggregate as the parent, they both
get to use the exact same disks. Both take advantage of WAFL and NVRAM for fast writes and since
changes can be written to anywhere on disk, it doesn’t matter if it is the clone or independent metadata
that gets updated.
1.25 How does DR work in my HP-UX COINS environment?
If you encounter database corruption in the production volume, simply mount the FlexClone volume
and use it to serve the production clients.
1.26 How does FlexClone work for my test environment?
With FlexClone you benefit from less risk, less stress, and higher service levels: try out changes on clone
volumes and upgrade within tight maintenance windows by simply swapping tested FlexClone volumes
for the originals.
 

Recently uploaded (20)

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?How to Remove Document Management Hurdles with X-Docs?
How to Remove Document Management Hurdles with X-Docs?
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 

Faq on SnapDrive for UNIX NetApp

Copyright: ashwinwriter@gmail.com, Dec 2012

FAQ on SnapDrive for UNIX

Trinity Expert Systems
IT solutions for business excellence
Table of Contents

FAQ 3
1.1 What is SnapDrive for UNIX? 3
1.2 What are the different types of storage entities that can be created using the SnapDrive tool? 3
1.3 Some examples of this hierarchy, in order of increasing complexity 3
1.4 Both hostvol and diskgroup seem the same from the host's point of view; which should I use? 4
1.5 What are the available file systems with SnapDrive for UNIX? 4
1.6 Can I change the physical extent (PE) size with a local LVM created on top of a NetApp LUN? 4
1.7 Which of the four SnapDrive for UNIX entities allows me to set the PE size on a volume group? 5
1.8 Can you show me an example of how to change the PE size according to my application requirement? 5
1.9 How to create the different storage entities supported by SnapDrive for UNIX 10
1.10 Creating a "LUN" entity to be managed by the host LVM 10
1.11 How to resize the storage entity 'LUN' that we created using SnapDrive for UNIX 15
1.12 Creating a "hostvol" to be managed by the host LVM 17
1.13 Creating a "diskgroup" to be managed by the host LVM 23
1.14 Creating a "filesys" to be managed by the host LVM 28
1.15 When I decide to disconnect the storage (LUN, filesys, hostvol & diskgroup), what is the ideal recommended method? 33
1.16 How to disconnect a filesystem, diskgroup/volume group, hostvol or LUN via the SnapDrive tool 33
1.17 Can I not use a LUN clone instead of FlexClone? 36
1.18 Why FlexClone? 36
1.19 What are the use cases for a FlexCloned volume? 36
1.20 What is a FlexClone split? 37
1.21 How does Data ONTAP process a FlexClone split operation? 37
1.22 How much capacity is required to perform a FlexClone split operation? 37
1.23 When do I need to split a FlexCloned volume? 37
1.24 Is performance impacted when I create a FlexClone volume? 38
1.25 How does DR work in my HP-UX COINS environment? 38
1.26 How does FlexClone work for my test environment? 38
FAQ

1.1 What is SnapDrive for UNIX?

SnapDrive is a SAN storage management utility. It provides a simple interface for provisioning LUNs and mapping LVM objects to them. SnapDrive also provides a command set to connect a Snapshot copy or cloned LUN to the same host or to a different host.

1.2 What are the different types of storage entities that can be created using the SnapDrive tool?

SnapDrive for UNIX allows four types of entities:

1. filesys
2. LUN
3. hostvol
4. diskgroup

SnapDrive provides easy management of the entire storage hierarchy, from the host-side application-visible file, down through the volume manager, to the filer-side LUNs providing the actual repository.

1.3 Some examples of this hierarchy, in order of increasing complexity, are:

1. filesys --> [NetApp volume --> LUN --> ready-made filesystem is presented to the host]
   [The vg (volume group) and lv (logical volume) are created automatically as part of this operation.]
2. LUN --> [NetApp volume --> LUN --> raw disk presented to the host]
   [The host can further carve the storage using LVM: pv (physical volume) --> vg (volume group) --> lv (logical volume) --> filesystem.]
3. hostvol --> [NetApp volume --> LUN --> logical volume is presented to the host]
   [A logical volume is created along with the volume group; the host need only format the logical volume and mount it.]
4. diskgroup --> [NetApp volume --> LUN --> disk group is presented to the host]
   [Note: disk group and volume group are the same entity. The host needs to carve the logical volume (LV) out of the volume group (VG) and then lay down and mount the filesystem.]
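The division of labour described above can be summarized in a small sketch (Python is used here purely as illustration; the entity names come from this FAQ, and the step lists paraphrase the bracketed notes):

```python
# Remaining host-side steps after SnapDrive creates each entity type,
# derived from the hierarchy examples in section 1.3.
REMAINING_HOST_STEPS = {
    "filesys":   [],                                  # ready to use: vg, lv and fs already exist
    "hostvol":   ["mkfs", "mount"],                   # lv exists; format and mount it
    "diskgroup": ["lvcreate", "mkfs", "mount"],       # vg exists; carve lv, format, mount
    "LUN":       ["pvcreate", "vgcreate", "lvcreate", "mkfs", "mount"],  # raw disk
}

def remaining_host_steps(entity):
    """Return the host-side tasks still needed for a SnapDrive-created entity."""
    return REMAINING_HOST_STEPS[entity]

print(remaining_host_steps("diskgroup"))  # ['lvcreate', 'mkfs', 'mount']
```

As the sketch shows, the four entities differ only in how far down the stack SnapDrive goes before handing over to the host.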
1.4 Both hostvol and diskgroup seem the same from the host's point of view; which should I use?

You are right, both look the same, but there is a differentiating factor with the 'diskgroup' entity. When a -hostvol entity is created, the volume group and the logical volume are both created as part of the operation. When you create a -diskgroup entity, by contrast, you remain in control of the size of the logical volume on which you want to create the filesystem.

1.5 What are the available file systems with SnapDrive for UNIX?

For file system creation, the supported file system types depend on the available LVMs. The desired type can be specified using the -fstype option. For each platform, the following type strings may be specified to -fstype:

  Solaris: VxFS or UFS
  HP-UX:   VxFS
  AIX:     JFS, JFS2 or VxFS
  Linux:   ext3

1.6 Can I change the physical extent (PE) size with a local LVM created on top of a NetApp LUN?

The physical extent size (PE size) of an LVM volume group (VG) is fixed upon creation of the VG. On the Linux command line, the -s option of the vgcreate command explicitly sets the physical extent size on the physical volumes (PVs) of the volume group. The PE size defaults to 4 MB if it is not set explicitly. However, once this value has been set, it is not possible to change the PE size without recreating the volume group, which would involve backing up and restoring the data on any logical volumes.

As far as LVM2 is concerned (LVM version 2.02.06 (2006-05-12), library version 1.02.07 (2006-05-11), driver version 4.5.0), there are no LVM commands or utilities (not even vgmodify on HP-UX) that can resize or change the PE size of an existing VG dynamically or online. It is therefore recommended to plan ahead properly before creating an LVM volume group. For example, if a logical volume will store database tables and the database is likely to grow beyond 300 GB in the near future, you should not create the volume group with the default PE size of 4 MB.

Note: When physical volumes are used to create a volume group, the disk space is divided into 4 MB extents by default. The extent is the minimum amount by which a logical volume may be increased or decreased in size.
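As a rough sanity check of the extent counts that appear later in this FAQ, the usable extent count of a VG can be estimated by dividing the LUN size by the PE size, after subtracting LVM2's metadata overhead. The 1 MB overhead used below is an assumption (LVM2's default data-alignment offset); the exact figure depends on metadata size and alignment settings:

```python
def usable_extents(lun_mb, pe_mb, metadata_mb=1):
    """Estimate the LVM2 extent count: usable space divided into whole extents.

    metadata_mb=1 is an assumption (LVM2's default 1 MB data offset);
    the exact overhead depends on metadata size and alignment settings.
    """
    return (lun_mb - metadata_mb) // pe_mb

# Matches the vgdisplay outputs shown in section 1.8 for a 2 GB LUN:
print(usable_extents(2048, 4))   # 4 MB PE  -> 511 extents (2.00 GB VG)
print(usable_extents(2048, 64))  # 64 MB PE -> 31 extents (1.94 GB VG)
```

This also explains why the VG size shrinks slightly with a larger PE size: space that does not fill a whole extent is unusable.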
Plan ahead: if you do NOT want the default 4 MB PE size, you can explicitly set the size when you first create the volume group.

1.7 Which of the four SnapDrive for UNIX entities allows me to set the PE size on the volume group?

1. filesys
2. LUN
3. hostvol
4. diskgroup

Of the four options, only the 'LUN' entity allows you to set the PE size. This is because for all the remaining entities the volume group (VG) is created as part of the entity creation, so you cannot change the default PE size of 4 MB.

1.8 Can you show me an example of how to change the PE size according to my application requirement?

Yes. In the following example, we will create a 'LUN' entity and then create a volume group with a customized PE size of 64 MB.

1. First, we create an entity called 'LUN' (name: lun_pe) using the SnapDrive storage wizard; you may also use the command line to achieve the same.

Configuration Summary:
  Storage System : darfas01
  Volume Name    : /vol/centos_iscsi
  LUN Name       : lun_pe
  LUN Size       : 2048.0 MB

Equivalent CLI command is:
  snapdrive storage create -lun darfas01:/vol/centos_iscsi/lun_pe -lunsize 2048.0m

Do you want to create storage based on this configuration {y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/lun_pe to device file mapping => /dev/sdd, /dev/sde
Do you want to create more storage {y, n}[y]?: n

### Run 'snapdrive storage list -all' to check the status of the LUN we just created ###

[root@redhat /]# snapdrive storage list -all
WARNING: This operation can take several minutes based on the configuration.
Connected LUNs and devices:
 device filename      adapter path size proto state  clone lun path                              backing snapshot
 ----------------     ------- ---- ---- ----- -----  ----- --------                              ----------------
 /dev/mapper/mpath45  -       P    12g  iscsi online No    darfas01:/vol/centos_iscsi/lun_centos -
 /dev/mapper/mpath57  -       P    2g   iscsi online No    darfas01:/vol/centos_iscsi/lun_pe     -   <<--- Here it is!

### Next, create the physical volume using 'pvcreate' as shown below ###

[root@redhat /]# pvcreate /dev/mapper/mpath57
  Writing physical volume data to disk "/dev/mpath/mpath57"
  Physical volume "/dev/mpath/mpath57" successfully created
[root@redhat /]#

### Run pvdisplay or pvscan to check the status of the physical volume we just created ###

[root@redhat /]# pvdisplay
  "/dev/mpath/mpath57" is a new physical volume of "2.00 GB"
  --- NEW Physical volume ---
  PV Name               /dev/mpath/mpath57
  VG Name
  PV Size               2.00 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               TBYrUv-NH5J-79K1-MypM-m0OR-kjOd-YqApZJ

[root@redhat /]# pvscan
  PV /dev/mpath/mpath57    lvm2 [2.00 GB]
  Total: 1 [2.00 GB] / in use: 0 [0   ] / in no VG: 1 [2.00 GB]
[root@redhat /]#

### Finally, we create a VG. Use the '-s' switch to set the required PE size; otherwise this command creates the VG with the default PE size of 4 MB, as shown below ###

[root@redhat /]# vgcreate netapp_pe /dev/mapper/mpath57
  Volume group "netapp_pe" successfully created
[root@redhat /]# vgdisplay
  --- Volume group ---
  VG Name               netapp_pe
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 GB
  PE Size               4.00 MB    <<-------- default PE size
  Total PE              511
  Alloc PE / Size       0 / 0
  Free PE / Size        511 / 2.00 GB
  VG UUID               EU4Us2-0seB-SK21-vuCv-Ik7j-29N3-LJhL8e

### To set a PE size of 64 MB for this volume group, we use the '-s' switch ###

First we remove this volume group, then recreate it with '-s 64' as shown below.

[root@redhat /]# vgremove netapp_pe
  Volume group "netapp_pe" successfully removed
[root@redhat /]# vgcreate -s 64 netapp_pe /dev/mapper/mpath57
  Volume group "netapp_pe" successfully created
[root@redhat /]# vgdisplay
  --- Volume group ---
  VG Name               netapp_pe
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.94 GB
  PE Size               64.00 MB   <<------- PE size now shows '64M'.
  Total PE              31
  Alloc PE / Size       0 / 0
  Free PE / Size        31 / 1.94 GB
  VG UUID               GDzmZG-BAVf-gEnn-tzSW-hPLw-NBAh-6DoC7B
[root@redhat /]#

### We successfully created a volume group with a PE size of 64 MB ###

Further, we can create a logical volume, format it and mount it.

[root@redhat /]# lvcreate -l 100%VG -n netapp_pe_lun netapp_pe
  Logical volume "netapp_pe_lun" created
[root@redhat /]# lvdisplay
  --- Logical volume ---
  LV Name               /dev/netapp_pe/netapp_pe_lun
  VG Name               netapp_pe
  LV UUID               ZhP1Vp-kuHY-4z7x-ZvQS-h2Fc-eeu6-3slvuV
  LV Write Access       read/write
  LV Status             available
  # open                0
  LV Size               1.94 GB
  Current LE            31
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

[root@redhat /]# mkfs -t ext3 /dev/netapp_pe/netapp_pe_lun
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
253952 inodes, 507904 blocks
25395 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=520093696
16 block groups
32768 blocks per group, 32768 fragments per group
15872 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@redhat /]#
1.9 How to create the different storage entities supported by SnapDrive for UNIX?

1. LUN
2. hostvol
3. diskgroup
4. filesys

This demonstration covers all four entities, in order.

1.10 Creating a "LUN" entity to be managed by the host LVM.

The process consists of two steps:

1. Create a LUN on the NetApp filer.
2. Use the local volume manager (LVM/Veritas Volume Manager) to manage the LUN.

Step 1: Create a LUN on NetApp storage.

[root@cloneredhat /]# snapdrive storage wizard create

This wizard helps you create a NetApp LUN and make it available for use on this host in one of these ways:

* LUN: creates one or more LUNs and maps them to this host. Enter 'LUN' below.
* File system: creates one or more LUNs, makes a file system on the LUNs (either with or without a volume manager) and mounts the file system on this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.

What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]:
Getting storage systems configured in the host ...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS
vol_show
centos_iscsi
vol_dell
centos_nfs
vol_iscsi_win_test
cl_test_centos_centos_iscsi_20130207131934
Enter the storage volume name or press <enter> to list them: centos_iscsi
You can provide comma separated multiple entity names e.g: lun1,lun2,lun3 etc.
Enter the LUN name(s): lun_clred
Checking LUN name(s) availability. Please wait ...
Enter the LUN size for LUN(s) in the below mentioned format. (Default unit is MB)
<size>k:m:g:t
Where,
k: Kilo Bytes
m: Mega Bytes
g: Giga Bytes
t: Tera Bytes
Enter the LUN size: 2g

Configuration Summary:
  Storage System : darfas01
  Volume Name    : /vol/centos_iscsi
  LUN Name       : lun_clred
  LUN Size       : 2048.0 MB

Equivalent CLI command is:
  snapdrive storage create -lun darfas01:/vol/centos_iscsi/lun_clred -lunsize 2048.0m

Do you want to create storage based on this configuration {y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/lun_clred to device file mapping => /dev/sdb, /dev/sdc
Do you want to create more storage {y, n}[y]?: n

### As you can see, a LUN of size 2 GB has been created on the NetApp storage side. To ensure it is available to the host, query the multipath daemon ###

[root@cloneredhat /]# multipath -l
mpath60 (360a9800042574b67575d426149715054) dm-0 NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
 _ 1:0:0:0 sdc 8:32 [active][undef]

### As you can see, the multipath device mapper has created the virtual device mpath60, which can now be managed by LVM or another local host utility ###

Step 2: Managing the LUN (virtual device) with LVM.

Checking the LVM version on my local Red Hat server:

[root@cloneredhat /]# lvm version
  LVM version:     2.02.88(2)-RHEL5 (2012-01-20)
  Library version: 1.02.67-RHEL5 (2011-10-14)
  Driver version:  4.11.6

[root@cloneredhat /]# pvcreate /dev/mapper/mpath60
  Writing physical volume data to disk "/dev/mpath/mpath60"
  Physical volume "/dev/mpath/mpath60" successfully created
[root@cloneredhat /]# pvdisplay
  "/dev/mpath/mpath60" is a new physical volume of "2.00 GB"
  --- NEW Physical volume ---
  PV Name               /dev/mpath/mpath60
  VG Name
  PV Size               2.00 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               ejdmna-wcDc-KE58-0Jyr-vn1U-iMXd-J3OM7e

### As you can see, I have created a physical volume using the virtual device 'mpath60' (use pvdisplay to see the details); now I can create a volume group and a logical volume on top of it ###

Next step: create the volume group using 'vgcreate'.

[root@cloneredhat /]# vgcreate netapp_vol /dev/mapper/mpath60
  Volume group "netapp_vol" successfully created

### As you can see, I have created the volume group 'netapp_vol' ###

[root@cloneredhat /]# vgdisplay
  --- Volume group ---
  VG Name               netapp_vol
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               2.00 GB
  PE Size               4.00 MB
  Total PE              511
  Alloc PE / Size       0 / 0
  Free PE / Size        511 / 2.00 GB
  VG UUID               ta2wLO-H23G-EImK-I03z-Vaxs-cMVy-krUd1n

Next step: create the logical volume using 'lvcreate'.

[root@cloneredhat /]# lvcreate -l 100%VG -n netapp_lun netapp_vol
  Logical volume "netapp_lun" created
[root@cloneredhat /]# lvdisplay
  --- Logical volume ---
  LV Name               /dev/netapp_vol/netapp_lun
  VG Name               netapp_vol
  LV UUID               00sYi6-I69e-JBM7-7Yme-Ga0M-WfD1-D5QjCl
  LV Write Access       read/write
  LV Status             available
  # open                0
  LV Size               2.00 GB
  Current LE            511
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1

Next step: format the logical volume with your preferred file system.

[root@cloneredhat /]# mkfs -t ext3 /dev/netapp_vol/netapp_lun
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
261632 inodes, 523264 blocks
26163 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@cloneredhat /]#

Finally: create a mount point and mount the logical volume.

[root@cloneredhat /]# mount -t ext3 /dev/netapp_vol/netapp_lun /mnt/mnt_netapp/
[root@cloneredhat mnt_netapp]# ll
total 16
drwx------ 2 root root 16384 Feb  7 13:50 lost+found

1.11 How to resize the storage entity 'LUN' that we created using SnapDrive for UNIX?

To extend the size of the filesystem, i.e. to resize the storage:

The storage resize operation can only increase the size of storage; you cannot use it to decrease the size of an entity. All LUNs must reside in the same storage system volume. The resize operation is not supported directly on logical host volumes, or on file systems that reside on logical host volumes or on LUNs. In those cases, you must use the LVM commands to resize the storage.

Note: You cannot resize a LUN; you must use the -addlun option to add a new LUN.

In the following example, we will resize the storage (in other words, increase it). In an earlier exercise we created a raw LUN directly on the NetApp storage and then managed it with LVM. Hence, in order to extend the storage we need to add a new LUN and then add it to the volume group (netapp_vol) that we created previously.

1. Add a new LUN:

snapdrive storage create -lun darfas01:/vol/centos_iscsi/addlun -lunsize 2048.0m

[root@cloneredhat /]# multipath -l
mpath61 (360a9800042574b67575d426149715056) dm-2 NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
_ round-robin 0 [prio=0][active]
 _ 2:0:0:1 sdd 8:48 [active][undef]
 _ 1:0:0:1 sde 8:64 [active][undef]

2. Extend the volume group:

[root@cloneredhat /]# vgextend netapp_vol /dev/mpath/mpath61
  No physical volume label read from /dev/mpath/mpath61
  Writing physical volume data to disk "/dev/mpath/mpath61"
  Physical volume "/dev/mpath/mpath61" successfully created
  Volume group "netapp_vol" successfully extended
[root@cloneredhat /]# pvscan
  PV /dev/mpath/mpath60   VG netapp_vol   lvm2 [2.00 GB / 0    free]
  PV /dev/mpath/mpath61   VG netapp_vol   lvm2 [2.00 GB / 2.00 GB free]
  Total: 2 [3.99 GB] / in use: 2 [3.99 GB] / in no VG: 0 [0   ]

### We can now see that the volume group has been extended ###

3. Extend the logical volume:

[root@cloneredhat /]# lvextend -L+1.9g /dev/netapp_vol/netapp_lun
  Rounding up size to full physical extent 1.90 GB
  Extending logical volume netapp_lun to 3.90 GB
  Logical volume netapp_lun successfully resized
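The "Rounding up size to full physical extent" message can be sanity-checked with a little arithmetic (illustrative Python; the 4 MB PE size and the extent counts come from the transcripts in this document):

```python
import math

def extents_after_extend(current_le, grow_gb, pe_mb=4):
    """lvextend rounds the requested growth up to a whole number of extents."""
    grow_mb = grow_gb * 1024                  # 1.9 GB -> 1945.6 MB
    return current_le + math.ceil(grow_mb / pe_mb)

# netapp_lun had 511 extents; 'lvextend -L+1.9g' rounds 1945.6 MB up to
# 487 extents, giving 998 extents = 3992 MB, shown by lvdisplay as 3.90 GB.
print(extents_after_extend(511, 1.9))  # -> 998
```

The same arithmetic explains the hostvol example later in the document, where a 513-extent LV grows to 1000 extents (3.91 GB) after the same `lvextend -L+1.9g`.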
[root@cloneredhat /]# lvdisplay
  --- Logical volume ---
  LV Name               /dev/netapp_vol/netapp_lun
  VG Name               netapp_vol
  LV UUID               00sYi6-I69e-JBM7-7Yme-Ga0M-WfD1-D5QjCl
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               3.90 GB
  Current LE            998
  Segments              2
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:1

4. Resize the filesystem. An ext3/ext4 filesystem can be resized online, while it is mounted.

[root@cloneredhat /]# resize2fs /dev/netapp_vol/netapp_lun
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/netapp_vol/netapp_lun is mounted on /mnt/mnt_netapp; on-line resizing required
Performing an on-line resize of /dev/netapp_vol/netapp_lun to 1021952 (4k) blocks.
The filesystem on /dev/netapp_vol/netapp_lun is now 1021952 blocks long.

[root@cloneredhat /]# df
/dev/mapper/netapp_vol-netapp_lun   4022080   36896   3781024   1% /mnt/mnt_netapp

1.12 Creating a "hostvol" to be managed by the host LVM.

[root@redhat /]# snapdrive storage wizard create

This wizard helps you create a NetApp LUN and make it available for use on this host in one of these ways:

* LUN: creates one or more LUNs and maps them to this host. Enter 'LUN' below.
* File system: creates one or more LUNs, makes a file system on the LUNs (either with or without a volume manager) and mounts the file system on
this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.

What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: hostvol
Fetching storage resources. Please wait...
Following are the available storage systems:
darfas01
You can select the storage system name from the list above or enter a new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS
vol_show
centos_iscsi
vol_dell
centos_nfs
cl_test_centos_centos_iscsi_20130207131934
vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of Volume Manager(s):
LVM
Taking 'LVM' as volume manager.
Enter the host volume name in dg/hostvol format: dgr/vgr
Do you want to specify name(s) for the LUN(s) that will be created as a part of this wizard {y, n}[n]?:
Enter the disk group size in the below mentioned format. (Default unit is MB)
<size>k:m:g:t
Where,
k: Kilo Bytes
m: Mega Bytes
g: Giga Bytes
t: Tera Bytes
The format is not case sensitive. This is the size of each disk group that is being created.
Enter the disk group size: 2g

Configuration Summary:
  Storage System   : darfas01
  Volume Name      : /vol/centos_iscsi
  Disk Group Name  : dgr
  Disk Group Size  : 2048.0 MB
  Host Volume Name : vgr
  Volume Manager   : LVM

Equivalent CLI command is:
  snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -lvol dgr/vgr -vmtype LVM

Do you want to create storage based on this configuration {y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/dgr_SdLun to device file mapping => /dev/sde, /dev/sdf
Disk group dgr created
Host volume vgr created
Do you want to create more storage {y, n}[y]?: n

[root@redhat /]# lvdisplay
  --- Logical volume ---
  LV Name               /dev/dgr/vgr
  VG Name               dgr
  LV UUID               y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
  LV Write Access       read/write
  LV Status             available
  # open                0
  LV Size               2.00 GB
  Current LE            513
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:2

[root@redhat /]# mkfs -t ext3 /dev/dgr/vgr
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262752 inodes, 525312 blocks
26265 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@redhat /]# mount /dev/dgr/vgr /mnt/share/
[root@redhat /]# cd /mnt/share/
[root@redhat share]# ll
total 16
drwx------ 2 root root 16384 Feb 11 00:10 lost+found
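The mkfs output can be cross-checked against lvdisplay: the ext3 block count times the block size should equal the logical volume size. A quick check (illustrative Python; all numbers are taken from the transcripts above):

```python
# Cross-check mkfs against lvdisplay for /dev/dgr/vgr.
PE_MB = 4           # PE size from vgdisplay
current_le = 513    # "Current LE 513" from lvdisplay
fs_blocks = 525312  # "525312 blocks" from the mkfs output
block_size = 4096   # "Block size=4096" from the mkfs output

lv_bytes = current_le * PE_MB * 1024 * 1024
fs_bytes = fs_blocks * block_size
print(lv_bytes == fs_bytes)  # -> True: the filesystem exactly fills the LV
```

This kind of check is a handy way to confirm that a newly created filesystem really spans the whole logical volume.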
Re-sizing the diskgroup/volume group (dgr/vgr):

[root@redhat /]# snapdrive storage resize -vg dgr -growto 4g -addlun
discovering filer LUNs in disk group dgr...done
LUN darfas01:/vol/centos_iscsi/dgr-1_SdLun ... created
mapping new lun(s) ... done
discovering new lun(s) ... done.
initializing LUN(s) and adding to disk group dgr...done
Disk group dgr has been resized
Desired resize of host volumes or file systems contained in disk group must be done manually

### volume group 'dgr' has been resized to 4g #####

[root@redhat /]# vgdisplay
--- Volume group ---
VG Name dgr
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.00 GB
PE Size 4.00 MB
Total PE 1024
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 511 / 2.00 GB
VG UUID BAUa7p-rTjM-VcZa-pEqk-H7OB-hbBO-7TAK8F

[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgr/vgr
VG Name dgr
LV UUID y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

#### Now, we need to extend the logical volume from 2g to 4g ######
Steps:
[root@redhat /]# lvextend -L+1.9g /dev/dgr/vgr
Rounding up size to full physical extent 1.90 GB
Extending logical volume vgr to 3.91 GB
Logical volume vgr successfully resized

[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgr/vgr
VG Name dgr
LV UUID y4gWRE-K3HM-n4NI-vmTV-y0st-z0S1-wBCF47
LV Write Access read/write
LV Status available
# open 1
LV Size 3.91 GB
Current LE 1000
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

##### Finally, extending the filesystem to the extended logical volume size #####
[root@redhat /]# resize2fs /dev/dgr/vgr
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/dgr/vgr is mounted on /mnt/share; on-line resizing required
Performing an on-line resize of /dev/dgr/vgr to 1024000 (4k) blocks.
The filesystem on /dev/dgr/vgr is now 1024000 blocks long.

[root@redhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 4103648 12949540 25% /
/dev/sda1 295561 21875 258426 8% /boot
tmpfs 647380 0 647380 0% /dev/shm
/dev/mapper/mpath45 12385456 306932 11449380 3% /mnt/netapp
/dev/sdd1 3908720 3527888 380832 91% /media/disk
/dev/mapper/dgr-vgr 4033856 69728 3765704 2% /mnt/share

File system successfully resized online.

1.13 Creating a "diskgroup" to be managed by HOST LVM.

[root@cloneredhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN'
below.
* File system: creates one or more LUNs, makes a file system on the LUNs (either with or without a volume manager) and mounts the file system on this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: diskgroup
Fetching storage resources. Please wait...
Following are the available storage systems: darfas01
You can select the storage system name from the list above or enter a new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell centos_nfs cl_test_centos_centos_iscsi_20130207131934 vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of Volume Manager(s): LVM
Taking 'LVM' as volume manager.
Enter the disk group name: dgcr
Do you want to specify name(s) for the LUN(s) that will be created as a part of this wizard: {y, n}[n]?:
Enter the Disk group size in the below mentioned format. (Default unit
is MB) <size>k:m:g:t
Where,
k: Kilo Bytes
m: Mega Bytes
g: Giga Bytes
t: Tera Bytes
The format is not case sensitive.
This is the size of each disk group that is being created.
Enter the disk group size: 2g

Configuration Summary:
Storage System : darfas01
Volume Name : /vol/centos_iscsi
Disk Group Name : dgcr
Disk Group size : 2048.0 MB
Volume Manager : LVM

Equivalent CLI command is:
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -dg dgcr -vmtype LVM

Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/dgcr_SdLun to device file mapping => /dev/sdg, /dev/sdh
Disk group dgcr created
Do you want to create more storage {y, n}[y]?: n

[root@cloneredhat /]# pvdisplay
--- Physical volume ---
PV Name /dev/mpath/mpath62
VG Name dgcr
PV Size 2.01 GB / not usable 4.00 MB
Allocatable yes
PE Size (KByte) 4096
Total PE 513
Free PE 513
Allocated PE 0
PV UUID MrLKSn-fu69-2RDR-jWjL-Kb6b-w8Zh-dTVp7f
[root@cloneredhat /]# vgdisplay
--- Volume group ---
VG Name dgcr
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB
Total PE 513
Alloc PE / Size 0 / 0
Free PE / Size 513 / 2.00 GB
VG UUID sxL3Z2-fEYh-JTOB-WWXR-P1Kl-5bTs-I2pp6Q

##### vgdisplay command shows the volume group we just created #######

Now, we need to create the logical volume:
[root@cloneredhat /]# lvcreate -l 100%VG -n lun_dgcr dgcr
Logical volume "lun_dgcr" created

[root@cloneredhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/dgcr/lun_dgcr
VG Name dgcr
LV UUID FX1mr1-xqWj-sTtF-d1Rp-tl9w-GeVI-8iayFm
LV Write Access read/write
LV Status available
# open 0
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

##### Next task is to format the logical volume and mount it ######
[root@cloneredhat /]# mkfs -t ext3 /dev/dgcr/lun_dgcr
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
262752 inodes, 525312 blocks
26265 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=541065216
17 block groups
32768 blocks per group, 32768 fragments per group
15456 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

[root@cloneredhat mnt]# mkdir dgcr
[root@cloneredhat mnt]# cd /
[root@cloneredhat /]# mount /dev/dgcr/lun_dgcr /mnt/dgcr/
[root@cloneredhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 3539680 13513508 21% /
/dev/sda1 295561 27552 252749 10% /boot
tmpfs 657540 0 657540 0% /dev/shm
/dev/sdf1 3908720 3527888 380832 91% /media/disk
/dev/mapper/dgcr-lun_dgcr 2068220 68704 1894456 4% /mnt/dgcr
[root@cloneredhat /]#

1.14 Creating a "filesys" to be managed by HOST LVM.

[root@redhat /]# snapdrive storage wizard create
This wizard helps you create a NetApp LUN and make it available for use on this host in one of these ways:
* LUN: creates one or more LUNs and maps it to this host. Enter 'LUN' below.
* File system: creates one or more LUNs, makes a file system on the LUNs (either with or without a volume manager) and mounts the file system on this host. Enter 'filesys', 'fs' or 'filesystem' below.
* Host volume: creates one or more LUNs and creates a new logical volume mapped by an LVM. Enter 'hostvol', 'hostvolume', 'hvol' or 'lvol' below.
* Disk group: creates one or more LUNs used as a pool of disks (also called a 'volume group'). Enter 'diskgroup', 'dg' or 'vg' below.
What kind of storage do you want to create {LUN, diskgroup, hostvol, filesys, ?}?[LUN]: filesys
Fetching storage resources. Please wait...
Following are the available storage systems: darfas01
You can select the storage system name from the list above or enter a
new storage system name to configure it.
Enter the storage system name: darfas01
Enter the storage volume name or press <enter> to list them:
Following is a list of volumes in the storage system:
vol_QT_CIFS vol_show centos_iscsi vol_dell centos_nfs cl_test_centos_centos_iscsi_20130207131934 vol_iscsi_win_test
Enter the storage volume name or press <enter> to list them: centos_iscsi
Following are the available types of file system(s): EXT3
Taking 'EXT3' as file system.
Do you want to create a file system with lvm {y, n}[y]?:
Enter the mount path: /mnt/test
Following are the available types of Volume Manager(s): LVM
Taking 'LVM' as volume manager.
Do you want to specify name(s) for the LUN(s) that will be created as a part of this wizard: {y, n}[n]?:
Enter the host volume name in dg/hostvol format or press <enter> to use default name:
Enter the disk group name or press <enter> to use default name:
Enter the Disk group size in the below mentioned format. (Default unit is MB) <size>k:m:g:t
Where,
k: Kilo Bytes
m: Mega Bytes
g: Giga Bytes
t: Tera Bytes
The format is not case sensitive.
This is the size of each disk group
that is being created.
Enter the disk group size: 2g

Configuration Summary:
- Storage System : darfas01
- Volume Name : /vol/centos_iscsi
- Disk Group size : 2048.0 MB
- Volume Manager : LVM
- File System Name : /mnt/test
- File System Type : EXT3

Equivalent CLI command is:
snapdrive storage create -filervol darfas01:/vol/centos_iscsi -dgsize 2048.0m -vmtype LVM -fs /mnt/test -fstype EXT3

Do you want to create storage based on this configuration{y, n}[y]?:
Creating storage with the provided configuration. Please wait...
LUN darfas01:/vol/centos_iscsi/test_SdLun to device file mapping => /dev/sdd, /dev/sde
Disk group test_SdDg created
Host volume test_SdHv created
File system /mnt/test created
Do you want to create more storage {y, n}[y]?: n
[root@redhat /]#

### Now, if we run 'vgdisplay' and 'lvdisplay' we can see the respective volume group (test_SdDg) and logical volume (test_SdHv) ####

[root@redhat /]# vgdisplay
--- Volume group ---
VG Name test_SdDg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 2.00 GB
PE Size 4.00 MB
Total PE 513
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 0 / 0
VG UUID 9DajT6-sRoo-7djE-DUW2-4oCv-3ODD-nh10bS

[root@redhat /]# lvdisplay
--- Logical volume ---
LV Name /dev/test_SdDg/test_SdHv
VG Name test_SdDg
LV UUID mBElhZ-ktnz-hWxS-cs1Q-CvQC-Q1KO-35Ds8u
LV Write Access read/write
LV Status available
# open 1
LV Size 2.00 GB
Current LE 513
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
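The figure to watch in vgdisplay before any lvextend is "Free PE / Size": free extents times the PE size is the room left to grow into. A small hedged helper (the `vg_free_mb` function name is ours; the awk field positions match the vgdisplay output shown above):

```shell
# Hypothetical helper: derive a VG's free space in MB from vgdisplay
# output (Free PE count x PE size in MB). Fed here with the lines
# vgdisplay printed above; on a live host, pipe `vgdisplay <vg>` in.
vg_free_mb() {
  awk '/PE Size/ { pe = $3 }
       /Free PE/ { free = $5 }
       END       { print free * pe }'
}

vg_free_mb <<'EOF'
PE Size 4.00 MB
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 0 / 0
EOF
```

For the fully-allocated test_SdDg above this prints 0, which is why the disk group must be grown with `snapdrive storage resize -addlun` before the host volume can be extended.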
######### Similarly, as in the previous exercises, we will resize the 'filesys' entity, this time using the same 'snapdrive storage resize -vg' command. #########

[root@redhat /]# snapdrive storage resize -vg test_SdDg -growto 4g -addlun
discovering filer LUNs in disk group test_SdDg...done
LUN darfas01:/vol/centos_iscsi/test-1_SdLun ... created
mapping new lun(s) ... done
discovering new lun(s) ... done.
initializing LUN(s) and adding to disk group test_SdDg...done
Disk group test_SdDg has been resized
Desired resize of host volumes or file systems contained in disk group must be done manually
[root@redhat /]#

### Now, if we run 'vgdisplay' we can see that an extra 2g of storage space has been added to this volume group #####

[root@redhat /]# vgdisplay
--- Volume group ---
VG Name test_SdDg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 4.00 GB <<--------- + 2g
PE Size 4.00 MB
Total PE 1024
Alloc PE / Size 513 / 2.00 GB
Free PE / Size 511 / 2.00 GB
VG UUID 9DajT6-sRoo-7djE-DUW2-4oCv-3ODD-nh10bS

##### In order to remove/disconnect this filesys entity, we can use 'snapdrive storage disconnect' with the -fs switch to remove all the layers in order, as shown in the example below ####

[root@redhat /]# snapdrive storage disconnect -fs /mnt/test/
disconnect file system /mnt/test/
- fs /mnt/test ... disconnected
- hostvol test_SdDg/test_SdHv ... disconnected
- dg test_SdDg ... disconnected
- LUN darfas01:/vol/centos_iscsi/test_SdLun ... disconnected
- LUN darfas01:/vol/centos_iscsi/test-1_SdLun ... disconnected
0001-669 Warning: Please save information provided by this command. You will need it to re-connect disconnected filespecs.
[root@redhat /]#

1.15 When I decide to disconnect the storage (LUN, filesys, hostvol & diskgroup), what is the ideal recommended method?

Disconnecting storage is simple, but you must always remove the storage layers from the top down. This is important; otherwise it might leave a defunct or orphaned logical volume, or a corrupted file system.

1.16 How to disconnect filesystem, diskgroup/volume group, hostvol and LUN via the snapdrive tool?

'snapdrive storage disconnect' can be used to disconnect the storage created using the snapdrive tool.
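The three entry points into 'snapdrive storage disconnect' can be sketched as a stubbed dry run. Here `echo` stands in for the real snapdrive binary so the sequence runs anywhere; on a real host, drop the stub. The names are the ones used in the examples in this FAQ:

```shell
# Stubbed dry-run of the disconnect entry points; echo stands in for
# the real snapdrive binary. Each switch starts the top-down teardown
# at a different layer.
snapdrive() { echo "snapdrive $*"; }

snapdrive storage disconnect -fs /mnt/dgcr      # mounted filesystem: fs -> hostvol -> dg -> LUN
snapdrive storage disconnect -hostvol dgr/vgr   # unmounted host volume: hostvol -> dg -> LUN
snapdrive storage disconnect -dg netapp_vol     # bare disk group: dg -> LUN
```

Whichever switch you start with, SnapDrive removes that layer and everything beneath it, which is what makes the top-down rule easy to follow in practice.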
In the following exercise, various examples show how to perform the disconnect.

### We created this volume group using the host LVM utility in one of our exercises, and later we resized the vg by adding an additional LUN to it. ######
### Now, we will disconnect the LUNs associated with this vg using one simple command. We need not use host LVM commands such as 'vgremove' and 'pvremove'; this is taken care of by the single snapdrive storage disconnect command (the '-vg' and '-dg' switches are equivalent). ################

[root@cloneredhat /]# snapdrive storage disconnect -dg netapp_vol
disconnect disk group netapp_vol
- dg netapp_vol ... disconnected
- LUN darfas01:/vol/centos_iscsi/lun_clred ... disconnected
- LUN darfas01:/vol/centos_iscsi/addlun ... disconnected
0001-669 Warning: Please save information provided by this command. You will need it to re-connect disconnected filespecs.
[root@cloneredhat /]#

How to disconnect a hostvol (volume group/logical volume) already mounted with a filesystem?

## We created this disk group earlier and mounted its logical volume on the mount point /mnt/dgcr ###
- Storage System : darfas01
- Volume Name : /vol/centos_iscsi
- Disk Group Name : dgcr
- Disk Group size : 2048.0 MB
- Logical Volume : lun_dgcr (created with lvcreate)
- Volume Manager : LVM

#### If the file system is mounted, then in order to disconnect the storage we have to disconnect all the layers: first the file system, followed by the logical volume and the volume group, and finally the LUN. This is made easy with a single snapdrive command; if we use the '-fs' switch, all layers are disconnected systematically as shown below. ####

[root@cloneredhat /]# snapdrive storage disconnect -fs /mnt/dgcr/
disconnect file system /mnt/dgcr/
- fs /mnt/dgcr ... disconnected                                #### (1) filesystem disconnected
- hostvol dgcr/lun_dgcr ... disconnected                       #### (2) logical volume disconnected
- dg dgcr ... disconnected                                     #### (3) volume group disconnected
- LUN darfas01:/vol/centos_iscsi/dgcr_SdLun ... disconnected   #### (4) LUN on NetApp filer disconnected
0001-669 Warning: Please save information provided by this command. You will need it to re-connect disconnected filespecs.
[root@cloneredhat /]#

In another example, we will first unmount the filesystem (in other words, remove the top layer) and then use the '-hostvol' switch to remove the remaining layers (volume group & logical volume):

[root@redhat /]# umount /mnt/share/
[root@redhat /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 17981340 4110368 12942820 25% /
/dev/sda1 295561 21875 258426 8% /boot
tmpfs 647380 0 647380 0% /dev/shm
/dev/mapper/mpath45 12385456 306932 11449380 3% /mnt/netapp
[root@redhat /]# snapdrive storage disconnect -hostvol dgr/vgr
disconnect host volume dgr/vgr
- hostvol dgr/vgr ... disconnected
- dg dgr ... disconnected
- LUN darfas01:/vol/centos_iscsi/dgr_SdLun ... disconnected
- LUN darfas01:/vol/centos_iscsi/dgr-1_SdLun ... disconnected
0001-669 Warning: Please save information provided by this command. You will need it to re-connect disconnected filespecs.
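The choice between the two examples above (start at '-fs' if mounted, at '-hostvol' if already unmounted) can be wrapped in a small script. This is a hypothetical helper of ours, not a SnapDrive feature: `disconnect_storage` checks /proc/mounts for the path, and `snapdrive` is stubbed with echo for a dry run:

```shell
# Hypothetical wrapper for the rule above: if the path is still mounted,
# disconnect with -fs (SnapDrive removes every layer itself); if it was
# already umounted, start at the host volume with -hostvol.
# snapdrive is stubbed with echo here; remove the stub on a real host.
snapdrive() { echo "snapdrive $*"; }

disconnect_storage() {
  path=$1; hostvol=$2
  # Mount points appear as the second field of /proc/mounts, space-delimited.
  if grep -qs " $path " /proc/mounts; then
    snapdrive storage disconnect -fs "$path"
  else
    snapdrive storage disconnect -hostvol "$hostvol"
  fi
}

disconnect_storage /mnt/share dgr/vgr
```

With /mnt/share already unmounted, as in the example above, this takes the '-hostvol' branch.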
NOTE: Whichever method you used to create host-side storage via the snapdrive tool, if the storage is mounted it is best to use the '-fs' switch, as it removes all the layers in order, as explained in the examples above.

1.17 Can I not use a LUN clone instead of FlexClone?

LUN clones, though they share blocks, have an extra level of indirection on reads (assuming they have not been split). This is not the case with LUNs on FlexClone volumes: there is no additional redirection on reads. Therefore FlexClone is the preferred method.

1.18 Why FlexClone?

1. Test & dev environments
2. Mirror for data protection
3. DR site enabled for read/write
4. System upgrade/deployment tests
5. Data mining
6. Data warehousing
More...

FlexClone provisions:
- Instantly provision cloned datastores or RDMs
- Supports SAN, iSCSI & NAS
- Clones are immediately available; clones require zero additional storage
- Pointer-driven, block-level replicas

1.19 What are the use cases for a FlexCloned volume?

Practical applications of FlexClone technology are:
- FlexClone technology enables multiple, instant data set clones with no storage overhead.
- It provides dramatic improvements for application test and development environments and is tightly integrated with the file system technology and a microkernel design in a way that renders competitive methods archaic.
- FlexClone volumes are ideal for managing production data sets.
- They allow effortless error containment for bug fixing and development.
- They simplify platform upgrades for ERP and CRM applications.
- Instant FlexClone volumes provide data for multiple simulations against large data sets for ECAD, MCAD, and seismic applications, all without unnecessary duplication or waste of physical space.
The ability to split FlexClone volumes from their parent lets administrators easily create new permanent, independent volumes for forking project data.

1.20 What is a FlexClone split?

A FlexClone split is the act of splitting a FlexClone volume from its parent volume. The split results in a full copy of all the shared data from the parent volume, and removes any relationship or dependency between the two volumes. After the split is complete, the FlexClone volume is no longer a FlexClone volume but a regular volume instead. It is not possible to choose a destination aggregate for a FlexClone split; it will always be the same aggregate as the parent volume.

1.21 How does Data ONTAP process a FlexClone split operation?

Data ONTAP uses a background scanner to copy the shared data from the parent volume to the FlexClone volume. The scanner has one active message at any time that is processing only one inode, so the split tends to be faster on a volume with fewer inodes. Also, any data written, overwritten, or deleted on the FlexClone volume will not be shared with the parent volume and thus does not need to be copied. During the split operation, both the parent and FlexClone volumes are online and the operation is non-disruptive to client access.

1.22 How much capacity is required to perform a FlexClone split operation?

Immediately after the creation of a FlexClone volume, all data is shared between it and the reference snapshot of the parent volume, and splitting the FlexClone volume from the parent volume would require a storage capacity equal to the used capacity of the parent active filesystem at the time of the snapshot. As the FlexClone volume and the parent diverge due to writes, overwrites, and deletions, the amount of shared data decreases. Data ONTAP includes a command that estimates the amount of storage capacity required to split a FlexClone volume from its parent. For Data ONTAP in 7-Mode, use the vol clone split estimate command.
The following is a sample usage and output of this command:

7-mode> vol clone split estimate quotas_c
An estimated 10gb available storage is required in the aggregate to split clone volume 'quotas_c' from its parent.

1.23 When do I need to split a FlexCloned volume?

FlexClone volumes can be used indefinitely, but there are a number of good reasons for a storage administrator to split off a FlexClone volume to create a fully independent FlexVol volume.
- They may wish to replace the current parent FlexVol volume with the modified FlexClone volume.
- They need to free the blocks pinned down by the FlexClone volume's base Snapshot copy.
- They wish to have Data ONTAP enforce space reservations for the volume for more predictable administration.

1.24 Is performance impacted when I create a FlexClone volume?

Performance is not an issue. Since the clone volume uses the same aggregate as the parent, they both get to use the exact same disks. Both take advantage of WAFL and NVRAM for fast writes, and since changes can be written anywhere on disk, it doesn't matter if it is the clone or independent metadata that gets updated.

1.25 How does DR work in my HP-UX COINS environment?

If you encounter a database corruption in the production volume, simply mount the FlexCloned volume and use it to serve the production clients.

1.26 How does FlexClone work for my test environment?

With FlexClone you benefit from less risk, less stress, and higher service levels: try out changes on clone volumes, and upgrade under tight maintenance windows by simply swapping tested FlexClone volumes for the originals.