vSphere 4.0 Storage: Features and Enhancements Nathan Small Staff Engineer Rev E Last updated 3rd August 2009 VMware Inc
Introduction This presentation is a technical overview and deep dive of some of the features and enhancements to the storage stack and related storage components of vSphere 4.0
New Acronyms in vSphere 4 MPX = VMware Generic Multipath Device (No Unique Identifier) NAA = Network Addressing Authority PSA = Pluggable Storage Architecture MPP = Multipathing Plugin NMP = Native Multipathing SATP = Storage Array Type Plugin PSP = Path Selection Policy
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Naming Convention Change in vSphere 4 Although the vmhbaN:C:T:L:P naming convention is visible, it is now known as the run-time name and is no longer guaranteed to be persistent through reboots. ESX 4 now uses the unique LUN identifiers, typically the NAA (Network Addressing Authority) ID. This is true for the CLI as well as the GUI and is also the naming convention used during the install. The IQN (iSCSI Qualified Name) is still used for iSCSI targets. The WWN (World Wide Name) is still used for Fibre Channel targets. For those devices which do not have a unique ID, you will observe an MPX reference (which stands for VMware Multipath X device).
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Pluggable Storage Architecture PSA, the Pluggable Storage Architecture, is a collection of VMkernel APIs that allow third-party hardware vendors to insert code directly into the ESX storage I/O path. This allows 3rd party software developers to design their own load balancing techniques and failover mechanisms for particular storage array types. This also means that 3rd party vendors can now add support for new arrays into ESX without having to provide internal information or intellectual property about the array to VMware. VMware, by default, provides a generic Multipathing Plugin (MPP) called NMP (Native Multipathing Plugin). PSA coordinates the operation of the NMP and any additional 3rd party MPPs.
PSA Tasks Loads  and  unloads  multipathing plugins (MPPs). Handles physical path  discovery  and  removal  (via scanning). Routes I/O requests  for a specific logical device to an appropriate  MPP . Handles  I/O queuing  to the physical storage HBAs & to the logical devices. Implements  logical device bandwidth sharing  between Virtual Machines. Provides logical device and physical path  I/O statistics .
MPP Tasks The PSA discovers available storage paths and, based on a set of predefined rules, determines which MPP should be given ownership of the path. The MPP then associates a set of physical paths with a specific logical device. The specific details of handling path failover for a given storage array are delegated to a sub-plugin called a Storage Array Type Plugin (SATP). An SATP is associated with paths. The specific details for determining which physical path is used to issue an I/O request (load balancing) to a storage device are handled by a sub-plugin called the Path Selection Plugin (PSP). A PSP is associated with logical devices.
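The claim rules that drive this ownership decision can be examined from the CLI. As an illustrative sketch (the exact rule set will vary by host), the following command lists the rules by which paths are assigned to the NMP, the MASK_PATH plug-in or a third-party MPP:
# esxcli corestorage claimrule list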
NMP Specific Tasks Manage  physical path claiming  and  unclaiming . Register  and  de-register   logical devices. Associate  physical paths  with  logical devices . Process I/O requests to logical devices: Select an optimal physical path  for the request (load balance) Perform actions necessary to  handle failures  and  request retries . Support  management tasks  such as  abort  or  reset  of logical devices.
Storage Array Type Plugin - SATP A Storage Array Type Plugin (SATP) handles path failover operations. VMware provides a default SATP for each supported array as well as a generic SATP (an active/active version and an active/passive version) for non-specified storage arrays. If you want to take advantage of certain storage-specific characteristics of your array, you can install a 3rd party SATP provided by the vendor of the storage array, or by a software company specializing in optimizing the use of your storage array. Each SATP implements the support for a specific type of storage array, e.g. VMW_SATP_SVC for IBM SVC.
SATP (ctd) The primary functions of an SATP are: Implements the switching of physical paths to the array when a path has failed. Determines when a hardware component of a physical path has failed. Monitors the hardware state of the physical paths to the storage array. There are many storage array type plug-ins. To see the complete list, you can use the following commands: # esxcli nmp satp list # esxcli nmp satp listrules # esxcli nmp satp listrules -s <specific SATP>
Path Selection Plugin (PSP) If you want to take advantage of more complex I/O load balancing algorithms, you could install a 3rd party Path Selection Plugin (PSP). A PSP handles load balancing operations and is responsible for choosing a physical path to issue an I/O request to a logical device. VMware provides three PSPs: Fixed, MRU and Round Robin. # esxcli nmp psp list Name  Description VMW_PSP_MRU  Most Recently Used Path Selection VMW_PSP_RR  Round Robin Path Selection VMW_PSP_FIXED  Fixed Path Selection
NMP Supported PSPs Most Recently Used (MRU)  — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. Fixed  — Uses the designated  preferred path , if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available. Round Robin (RR)  – Uses an automatic path selection rotating through all available paths and enabling load balancing across the paths.
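If you wish to change the PSP for a particular device from the CLI, a command along the following lines can be used (the NAA ID shown is a placeholder for one of your own devices):
# esxcli nmp device setpolicy -d naa.600601601d311f001ee294d9e7e2dd11 --psp VMW_PSP_RR
# esxcli nmp device list -d naa.600601601d311f001ee294d9e7e2dd11
The second command simply confirms the Path Selection Policy now in effect for that device.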
Enabling Additional Logging on vSphere 4.0 For additional SCSI Log Messages, set: Scsi.LogCmdErrors = "1" Scsi.LogMPCmdErrors = "1" At GA, the default setting for Scsi.LogMPCmdErrors is "1". These can be found in the Advanced Settings.
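These options can also be set from the Service Console with esxcfg-advcfg; as a sketch, assuming the advanced option paths mirror the names above:
# esxcfg-advcfg -s 1 /Scsi/LogCmdErrors
# esxcfg-advcfg -s 1 /Scsi/LogMPCmdErrors
# esxcfg-advcfg -g /Scsi/LogMPCmdErrors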
Viewing Plugin Information The following command lists all multipathing modules loaded on the system. At a minimum, this command returns the default VMware Native Multipath (NMP) plugin & the MASK_PATH plugin. Third-party MPPs will also be listed if installed: # esxcfg-mpath -G MASK_PATH NMP For ESXi, the following VI CLI 4.0 command can be used: # vicfg-mpath -G --server <IP> --username <X> --password <Y> MASK_PATH NMP LUN path masking is done via the MASK_PATH Plug-in.
Viewing Device Information The command esxcli nmp device list lists all devices managed by the NMP plug-in and the configuration of that device, e.g.:
# esxcli nmp device list
naa.600601601d311f001ee294d9e7e2dd11
    Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
    Storage Array Type: VMW_SATP_CX
    Storage Array Type Device Config: {navireg ipfilter}
    Path Selection Policy: VMW_PSP_MRU
    Path Selection Policy Device Config: Current Path=vmhba33:C0:T0:L1
    Working Paths: vmhba33:C0:T0:L1
mpx.vmhba1:C0:T0:L0
    Device Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
    Storage Array Type: VMW_SATP_LOCAL
    Storage Array Type Device Config:
    Path Selection Policy: VMW_PSP_FIXED
    Path Selection Policy Device Config: {preferred=vmhba1:C0:T0:L0;current=vmhba1:C0:T0:L0}
    Working Paths: vmhba1:C0:T0:L0
Notes: Specific configuration for EMC Clariion & Invista products. mpx is used as an identifier for devices that do not have their own unique IDs. NAA is the Network Addressing Authority (NAA) identifier, guaranteed to be unique.
Viewing Device Information (ctd) Get current path information for a specified storage device managed by the NMP. # esxcli nmp device list -d naa.600601604320170080d407794f10dd11 naa.600601604320170080d407794f10dd11 Device Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11) Storage Array Type: VMW_SATP_CX Storage Array Type Device Config: {navireg ipfilter} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba2:C0:T0:L0 Working Paths:  vmhba2:C0:T0:L0
Viewing Device Information (ctd) Lists all paths available for a specified storage device on ESX: #  esxcfg-mpath -b -d naa.600601601d311f001ee294d9e7e2dd11 naa.600601601d311f001ee294d9e7e2dd11 : DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) vmhba33:C0:T0:L1 LUN:1   state:active  iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b  Target:  IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0  Alias= Session=00023d000001 PortalTag=1 vmhba33:C0:T1:L1  LUN:1  state:standby  iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b  Target:  IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0  Alias= Session=00023d000001 PortalTag=2 ESXi has an equivalent  vicfg-mpath  command.
Viewing Device Information (ctd) # esxcfg-mpath -l -d naa.6006016043201700d67a179ab32fdc11 iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2-naa.600601601d311f001ee294d9e7e2dd11 Runtime Name: vmhba33:C0:T1:L1 Device: naa.600601601d311f001ee294d9e7e2dd11 Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) Adapter: vmhba33 Channel: 0 Target: 1 LUN: 1 Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2 Plugin: NMP State: standby Transport: iscsi Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2 iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1-naa.600601601d311f001ee294d9e7e2dd11 Runtime Name: vmhba33:C0:T0:L1 Device: naa.600601601d311f001ee294d9e7e2dd11 Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) Adapter: vmhba33 Channel: 0 Target: 0 LUN: 1 Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1 Plugin: NMP State: active Transport: iscsi Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1 Storage array (target) iSCSI Qualified Names (IQNs)
Third-Party Multipathing Plug-ins (MPPs) You can install the third-party multipathing plug-ins (MPPs) when you need to change specific load balancing and failover characteristics of ESX/ESXi. The third-party MPPs replace the behaviour of the NMP and entirely take control over the path failover and the load balancing operations for certain specified storage devices.
Third-Party SATP & PSP Third-party SATP Generally developed by third-party hardware manufacturers who have ‘expert’ knowledge of the behaviour of their storage devices. Accommodates specific characteristics of storage arrays and facilitates support for new arrays. Third-party PSP Generally developed by third-party software companies. More complex I/O load balancing algorithms. NMP coordination Third-party SATPs and PSPs   are coordinated by the NMP, and can be simultaneously used with the VMware SATPs and PSPs.
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
iSCSI Enhancements ESX 4 includes an updated iSCSI stack which offers improvements to both software iSCSI (initiator that runs at the ESX layer) and hardware iSCSI (a hardware-optimized iSCSI HBA).  For both software and hardware iSCSI,  functionality  (e.g. CHAP support, digest acceleration, etc.) and  performance  are improved.  Software iSCSI can now be configured to use host based  multipathing  if you have more than one physical network adapter. In the new ESX 4.0 Software iSCSI stack, there is no longer any requirement to have a  Service Console connection  to communicate to an iSCSI target.
Software iSCSI Enhancements iSCSI Advanced Settings In particular, data integrity checks in the form of  digests . CHAP Parameters Settings A user will be able to specify CHAP parameters as per-target CHAP and mutual per-target CHAP.  Inheritance model of parameters. A global set of configuration parameters can be set on the initiator and propagated down to all targets. Per target/discovery level configuration.  Configuration settings can now be set on a per target basis which means that a customer can uniquely configure parameters for each array discovered by the initiator.
Software iSCSI Multipathing – Port Binding You can now create a port binding between a physical NIC and an iSCSI VMkernel port in ESX 4.0. Using the "port binding" feature, users can map multiple iSCSI VMkernel ports to different physical NICs. This will enable the software iSCSI initiator to use multiple physical NICs for I/O transfer. Connecting the software iSCSI initiator to the VMkernel ports can only be done from the CLI using the esxcli swiscsi commands. Host based multipathing can then manage the paths to the LUN. In addition, the Round Robin path policy can be configured to simultaneously use more than one physical NIC for the iSCSI traffic.
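As a sketch of the CLI steps involved (vmk1, vmk2 and vmhba33 are example names for the iSCSI VMkernel ports and the software iSCSI adapter on a given host):
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33
The list command confirms which VMkernel NICs are currently bound to the software iSCSI initiator.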
Hardware iSCSI Limitations Mutual CHAP is disabled. Discovery is supported by IP address only (storage array name discovery not supported). Running with the Hardware and Software iSCSI initiator enabled on the same host at the same time is not supported.
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
GUI Changes - Display Device Info Note that there are no further references to vmhbaC:T:L. Unique device identifiers such as the  NAA id  are now used.
GUI Changes - Display HBA Configuration Info Again, notice the use of NAA ids rather than vmhbaC:T:L.
GUI Changes - Display Path Info Note the reference to the PSP & SATP. Note the (I/O) status designating the active path.
GUI Changes - Data Center Rescan
Degraded Status If fewer than 2 HBAs or 2 targets are detected in the paths of the datastore, the datastore multipathing status is marked as "Partial/No Redundancy" in the Storage Views.
Storage Administration VI4 also provides new monitoring, reporting and alarm features for storage management. This gives a vSphere administrator the ability to: Manage access/permissions of datastores/folders Have visibility of a Virtual Machine's connectivity to the storage infrastructure Account for disk space utilization Provide notification in case of specific usage conditions
Datastore Monitoring & Alarms vSphere introduces new datastore and VM-specific alarms/alerts on storage events: New datastore alarms: Datastore disk usage % Datastore disk over-allocation % Datastore connection state to all hosts New VM alarms: VM total size on disk (GB) VM snapshot size (GB) Customers can now track snapshot usage
New Storage Alarms New datastore-specific alarms New VM-specific alarms This alarm allows the tracking of thin provisioned disks. This alarm will trigger if a datastore becomes unavailable to the host. This alarm will trigger if a snapshot delta file becomes too large.
vSphere 4 Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Traditional Snapshot Detection When an ESX 3.x server finds a VMFS-3 LUN, it compares the  SCSI_DiskID information  returned from the storage array with the  SCSI_DiskID information  stored in the LVM Header.  If the two IDs don’t match, then by default, the VMFS-3 volume will not be mounted and thus be inaccessible.  A VMFS volume on ESX 3.x could be detected as a snapshot for a number of reasons: LUN ID changed SCSI version supported by array changed (firmware upgrade) Identifier type changed – Unit Serial Number vs NAA ID
New Snapshot Detection Mechanism When trying to determine if a device is a snapshot in ESX 4.0, the ESX uses a globally unique identifier to identify each LUN, typically the NAA (Network Addressing Authority) ID. NAA IDs are unique and are persistent across reboots.  There are many different globally unique identifiers (EUI, SNS, T10, etc). If the LUN does not support any of these globally unique identifiers, ESX will fall back to the serial number + LUN ID used in ESX 3.0.
SCSI_DiskId Structure The internal VMkernel structure SCSI_DiskId is populated with information about a LUN. This is stored in the metadata header of a VMFS volume. If the LUN does have a globally unique (NAA) ID, the field SCSI_DiskId.data.uid in the SCSI_DiskId structure will hold it. If the NAA ID in the SCSI_DiskId.data.uid stored in the metadata does not match the NAA ID returned by the LUN, the ESX knows the LUN is a snapshot. For older arrays that do not support NAA IDs, the earlier algorithm is used where we compare other fields in the SCSI_DiskId structure to detect whether a LUN is a snapshot or not.
Snapshot Log Messages
8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T1:L2'
8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T0:L2'
8:00:45:51.977 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T0:L2 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35
8:00:45:51.978 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T1:L2 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35
8:00:45:52.002 cpu2:81258)LVM: 7125: Device naa.600601601d311f00fca3ea501beddd11:1 detected to be a snapshot:
8:00:45:52.002 cpu2:81258)LVM: 7132: queried disk ID: <type 2, len 22, lun 2, devType 0, scsi 0, h(id) 3817547080305476947>
8:00:45:52.002 cpu2:81258)LVM: 7139: on-disk disk ID: <type 2, len 22, lun 1, devType 0, scsi 0, h(id) 6335084141271340065>
8:00:45:52.006 cpu2:81258)ScsiDevice: 1756: Successfully registered device "naa.600601601d311f00fca3ea501beddd11" from plugin "
Resignature & Force-Mount We have a new naming convention in ESX 4. "Resignature" is equivalent to EnableResignature = 1 in ESX 3.x. "Force-Mount" is equivalent to DisallowSnapshotLUN = 0 in ESX 3.x. The advanced configuration options EnableResignature and DisallowSnapshotLUN have been replaced in ESX 4 with a new CLI utility called esxcfg-volume (vicfg-volume for ESXi). Historically, EnableResignature and DisallowSnapshotLUN were applied server-wide and affected all volumes on an ESX host. The new Resignature and Force-mount are volume specific so offer much greater granularity in the handling of snapshots.
Persistent Or Non-Persistent Mounts If you use the  GUI  to force-mount a VMFS volume, it will make it a persistent mount which will remain in place through reboots of the ESX host. VC will not allow this volume to be resignatured. If you use the  CLI  to force-mount a VMFS volume, you can choose whether it persists or not through reboots. Through the  GUI , the  Add Storage Wizard  now displays the VMFS label. Therefore if a device is  not  mounted, but it has a label associated with it, you can make the assumption that it is a snapshot, or to use ESX 4 terminology, a  Volume Copy .
Mounting A Snapshot The original volume is still presented to the ESX host. Snapshot – notice that the volume label is the same as the original volume.
Snapshot Mount Options Keep Existing Signature – this is a force-mount operation: similar to disabling DisallowSnapshotLUN in ESX 3.x. The new datastore has the original UUID saved in the file system header. If the original volume is already online, this option will not succeed and will print a 'Cannot change the host configuration' message when resolving the VMFS volumes. Assign a new Signature – this is a resignature operation: similar to enabling EnableResignature in ESX 3.x. The new datastore has a new UUID saved in the file system header. Format the disk – destroys the data on the disk and creates a new VMFS volume on it.
New CLI Command: esxcfg-volume There is a new CLI command in ESX 4 for resignaturing VMFS snapshots. Note the difference between '-m' and '-M':
# esxcfg-volume
esxcfg-volume <options>
   -l|--list                                List all volumes which have been detected as snapshots/replicas.
   -m|--mount <VMFS UUID|label>             Mount a snapshot/replica volume, if its original copy is not online.
   -u|--umount <VMFS UUID|label>            Umount a snapshot/replica volume.
   -r|--resignature <VMFS UUID|label>       Resignature a snapshot/replica volume.
   -M|--persistent-mount <VMFS UUID|label>  Mount a snapshot/replica volume persistently, if its original copy is not online.
   -h|--help                                Show this message.
esxcfg-volume (ctd) The difference between a mount and a persistent mount is that the persistent mounts will be maintained through reboots. ESX manages this by adding entries for force mounts into /etc/vmware/esx.conf. A typical set of entries for a force mount looks like:
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/forceMount = "true"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/readOnly = "false"
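To check which volumes are currently force-mounted on a host, a quick sketch is to search esx.conf for these entries:
# grep forceMountedLvm /etc/vmware/esx.conf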
Mount With the Original Volume Still Online /var/log #  esxcfg-volume -l VMFS3 UUID/label: 496f202f-3ff43d2e-7efe-001f29595f9d/Shared_VMFS_For_FT_VMs Can mount:  No  (the original volume is still online) Can resignature:  Yes Extent name: naa.600601601d311f00fca3ea501beddd11:1  range: 0 - 20223 (MB) /var/log #  esxcfg-volume -m 496f202f-3ff43d2e-7efe-001f29595f9d Mounting volume 496f202f-3ff43d2e-7efe-001f29595f9d Error: Unable to mount this VMFS3 volume due to the original volume is still online
esxcfg-volume (ctd) In this next example, a clone LUN of a VMFS LUN is presented back to the same ESX server. Therefore we cannot use either the mount or the persistent mount options, since the original LUN is already presented to the host; we will have to resignature: #  esxcfg-volume -l VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol Can mount:  No Can resignature:  Yes Extent name: naa.6006016043201700f30570ed09f6da11:1  range: 0 - 15103 (MB)
esxcfg-volume (ctd) #  esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1 Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1 #  vdf Filesystem  1K-blocks  Used Available Use% Mounted on /dev/sdg2  5044188  1595804  3192148  34% / /dev/sdd1  248895  50780  185265  22% /boot . . /vmfs/volumes/48d247dd-7971f45b-5ee4-0019993032e1 15466496  5183488  10283008  33% /vmfs/volumes/cormac_grow_vol /vmfs/volumes/48d39951-19a5b934-67c3-0019993032e1 15466496  5183488  10283008  33% /vmfs/volumes/snap-397419fa-cormac_grow_vol Warning  – there is no  vdf  command in ESXi. However the  df  command reports on VMFS filesystems in ESXi.
Ineffective Workarounds Under ESX 4 Some of our customers have been running with DisallowSnapshotLUN set to '0' as a workaround solution due to their choice to use inconsistent LUN presentation numbers across their hosts since the early ESX 3.0.x days. Other customers have been running with this setting after enabling the SPC-2 and SC-3 bits on the FA ports for their EMC DMX/Symmetrix array and found that their LUNs were then seen as snapshots. This was because the LUN IDs for their LUNs changed, since we reference the LUN by a unique ID (in page 0x83) rather than the serial number for the LUN or the NAA (if running ESX 3.5) once these director bits are enabled. This behavior was also observed in other arrays when upgrading firmware/OS. As there is no longer a global DisallowSnapshotLUN setting available in ESX 4, if your environment is running entirely on "snapshots" then you can use the following script to streamline the force-mount operation on each ESX host: for i in `/usr/sbin/esxcfg-volume -l | grep VMFS3 | awk '{print $3}'`; do /usr/sbin/esxcfg-volume -M $i; done
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Storage VMotion Enhancements In VI4 The following enhancements have been made to the VI4 version of Storage VMotion: GUI Support. Leverages new features of VI4, including fast suspend/resume and Change Block Tracking. Supports moving VMDKs from  Thick  to  Thin  formats & vice versa Ability to migrate RDMs to VMDKs. Ability to migrate RDMs to RDMs. Support for FC, iSCSI & NAS. Storage VMotion no longer requires 2 x memory. No requirement to create a VMotion interface for Storage VMotion. Ability to move an individual disk without moving the VM’s home.
New Features Fast Suspend/Resume of VMs This provides the ability to quickly transition from the source VM to the destination VM reliably and with a fast switching time. This is only necessary when migrating the .vmx file. Changed Block Tracking Very much like how we handle memory with standard VMotion, in that a bitmap of changed disk blocks is used rather than a bitmap of changed memory pages. This means Storage VMotion no longer needs to snapshot the original VM and commit it to the destination VM, so the Storage VMotion operation performs much faster. Multiple iterations of the disk copy go through, but each time the number of changed disk blocks reduces, until eventually all disk blocks have been copied and we have a complete copy of the disk at the destination.
Storage VMotion – GUI Support Storage VMotion is still supported via the  VI CLI 4.0  as well as the  API , so customers wishing to use this method can continue to do so. The  Change both host and datastore  option is only available to powered off VMs. For a non-passthru RDM, you can select  to convert it to either Thin Provisioned or Thick when converting it to a VMDK, or you can leave it as a non-passthru RDM.
Storage VMotion – CLI (ctd) #  svmotion --interactive Entering interactive mode.  All other options and environment variables will be ignored. Enter the VirtualCenter service url you wish to connect to (e.g. https://myvc.mycorp.com/sdk, or just myvc.mycorp.com):  VC-Linked-Mode.vi40.vmware.com Enter your username:  Administrator Enter your password:  ******** Attempting to connect to https://VC-Linked-Mode.vi40.vmware.com/sdk. Connected to server. Enter the name of the datacenter:  Embedded-ESX40 Enter the datastore path of the virtual machine (e.g. [datastore1] myvm/myvm.vmx):  [CLAR_L52] W2K3SP2/W2K3SP2.vmx Enter the name of the destination datastore:  CLAR_L53 You can also move disks independently of the virtual machine.  If you want the disks to stay with the virtual machine, then skip this step.. Would you like to individually place the disks (yes/no)?  no Performing Storage VMotion. 0% |----------------------------------------------------------------------------------------------------| 100% ##########
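For scripting, svmotion can also be run non-interactively. A sketch using the same names as the interactive session above (substitute your own vCenter server, credentials, datacenter, VM path and destination datastore):
# svmotion --url=https://VC-Linked-Mode.vi40.vmware.com/sdk --username=Administrator --password=mypassword --datacenter=Embedded-ESX40 --vm='[CLAR_L52] W2K3SP2/W2K3SP2.vmx:CLAR_L53'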
Limitations The migration of Virtual Machines which have snapshots will not be supported at GA.  Currently the plan is to have this in a future release The migration of Virtual Machines to a different host and a different datastore simultaneously is not yet supported. No firm date for support of this feature yet.
Storage VMotion Timeouts There are also a number of tunable timeout values: Downtime timeout Failure: Source detected that destination failed to resume. Update fsr.maxSwitchoverSeconds (default 100 seconds) in the VM's .vmx file. May be observed on Virtual Machines that have lots of virtual disks. Data timeout Failure: Timed out waiting for migration data. Update migration.dataTimeout (default 60 seconds) in the VM's .vmx file. May be observed when migrating from NFS to NFS on slow networks.
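As a sketch, the corresponding entries in the Virtual Machine's .vmx file would look like this (the values shown are examples only, not recommendations):
fsr.maxSwitchoverSeconds = "150"
migration.dataTimeout = "120"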
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Supported Disk Growth/Shrink Operations VI4 introduces the following growth/shrink operations:
Grow VMFS volumes: yes
Grow RDM volumes: yes
Grow *.vmdk: yes
Shrink VMFS volumes: no
Shrink RDM volumes: yes
Shrink *.vmdk: no
Volume Grow & Hot VMDK Extend Volume Grow   VI4 allows  dynamic expansion  of a volume partition by adding capacity to a VMFS without disrupting running Virtual Machines.  Once the LUN backing the datastore has been grown (typically through an array management utility), Volume Grow expands the VMFS partition on the expanded LUN.  Historically, the only way to grow a VMFS volume was to use the  extent-based approach.  Volume Grow   offers a different method of capacity growth .  The newly available space appears as a larger VMFS volume along with an associated grow event in vCenter.  Hot VMDK Extend   Hot extend is supported for VMFS flat virtual disks in persistent mode and without any Virtual Machine snapshots.  Used in conjunction with the new Volume Grow capability, the user has maximum flexibility in managing growing capacity in VI4.
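A hot extend can also be performed from the CLI with vmkfstools. A sketch, assuming a flat VMDK in persistent mode with no snapshots (the path and new size are placeholders):
# vmkfstools -X 25G /vmfs/volumes/datastore1/myvm/myvm.vmdk
The guest OS must then grow its own filesystem to make use of the added space.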
Comparison: Volume Grow & Add Extent
Must power-off VMs: Volume Grow - No; Add Extent - No
Can be done on newly-provisioned LUN: Volume Grow - No; Add Extent - Yes
Can be done on existing array-expanded LUN: Volume Grow - Yes; Add Extent - Yes (but not allowed through GUI)
Limits: Volume Grow - An extent can be grown any number of times, up to 2TB; Add Extent - A datastore can have up to 32 extents, each up to 2TB
Results in creation of new partition: Volume Grow - No; Add Extent - Yes
VM availability impact: Volume Grow - None, if datastore has only one extent; Add Extent - Introduces dependency on first extent
Volume Grow GUI Enhancements Here I am choosing the same device on which the VMFS is installed – there is currently 4GB free. This option expands the VMFS using free space on the current device. Notice that the current extent capacity is 1GB.
VMFS Grow - Expansion Options: Dynamic LUN Expansion (LUN provisioned at array), VMFS Volume Grow (VMFS volume/datastore provisioned for ESX), VMDK Hot Extend (virtual disk provisioned for VM)
Hot VMDK Extend
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
ESX 4.0 CLI There have been a number of new storage commands introduced with ESX 4.0 as well as enhancements to the more traditional commands. Here is a list of commands that have changed or have been added: esxcli esxcfg-mpath / vicfg-mpath esxcfg-volume / vicfg-volume esxcfg-scsidevs / vicfg-scsidevs esxcfg-rescan / vicfg-rescan esxcfg-module / vicfg-module vmkfstools This slide deck will only cover a few of the above-mentioned commands.
New/Updated CLI Commands(1): esxcfg-scsidevs The esxcfg-vmhbadevs command has been replaced by the esxcfg-scsidevs command. To display the old VMware Legacy identifiers (vml), use: # esxcfg-scsidevs -u To display Service Console devices: # esxcfg-scsidevs -c To display all logical devices on this host: # esxcfg-scsidevs -l To show the relationship between COS native devices (/dev/sd) and vmhba devices: # esxcfg-scsidevs -m The VI CLI 4.0 has an equivalent vicfg-scsidevs for ESXi.
esxcfg-scsidevs (ctd) Sample output of  esxcfg-scsidevs –l : naa.600601604320170080d407794f10dd11 Device Type: Direct-Access Size: 8192 MB Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11) Plugin: NMP Console Device: /dev/sdb Devfs Path: /vmfs/devices/disks/naa.600601604320170080d407794f10dd11 Vendor: DGC  Model: RAID 5  Revis: 0224 SCSI Level: 4  Is Pseudo: false Status: on Is RDM Capable: true  Is Removable: false Is Local: false Other Names: vml.0200000000600601604320170080d407794f10dd11524149442035 Note that this is one of the few CLI commands which will report the LUN size
New/Updated CLI Commands(2): esxcfg-rescan You now have the ability to rescan based on whether devices were added or removed. You can also rescan the current paths and not try to discover new ones. # esxcfg-rescan -h esxcfg-rescan <options> [adapter] -a|--add  Scan for only newly added devices. -d|--delete  Scan for only deleted devices. -u|--update  Scan existing paths only and update their state. -h|--help  Display this message. The VI CLI 4.0 has an equivalent vicfg-rescan command for ESXi.
New/Updated CLI Commands(3): vmkfstools The vmkfstools command exists in the Service Console and VI CLI 4.0. Grow a VMFS: vmkfstools -G Inflate a VMDK from thin to thick: vmkfstools -j Import a thick VMDK to thin: vmkfstools -i <src> -d thin Import a thin VMDK to thick: vmkfstools -i <src thin disk> -d zeroedthick
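As a concrete sketch of the thin/thick conversions (the datastore and VM paths are placeholders):
# vmkfstools -i /vmfs/volumes/datastore1/srcvm/srcvm.vmdk -d thin /vmfs/volumes/datastore1/dstvm/dstvm.vmdk
# vmkfstools -j /vmfs/volumes/datastore1/dstvm/dstvm.vmdk
The first command imports a copy of the source disk in thin format; the second inflates a thin disk back to thick.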
vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
Other Storage Features/Enhancements Storage General The number of LUNs that can be presented to the ESX 4.0 server is still  256 . VMFS The maximum extent volume size in VI4 is still  2TB . Maximum number of extents is still 32, so maximum volume size is still  64TB . We are still using VMFS3, not VMFS 4 (although the version has increased to 3.33). iSCSI Enhancements 10 GbE iSCSI Initiator  – iSCSI over a 10GbE interface is supported. First introduced in ESX/ESXi 3.5 u2 & extended back to ESX/ESXi 3.5 u1.
Other Storage Features/Enhancements (ctd) NFS Enhancements IPv6  support (experimental)  Support for up to  64 NFS volumes  (the old limit was 32) 10 GbE NFS Support  – NFS over a 10GbE interface is supported.  First introduced in ESX/ESXi 3.5 u2 FC Enhancements Support for  8Gb  Fibre Channel First introduced in ESX/ESXi 3.5 u2 Support for  FC over Ethernet  (FCoE)
Other Storage Features/Enhancements (ctd) Paravirtualized SCSI Driver (Guest OS) Eliminates the need to trap privileged instructions  as it uses  hypercalls  to request that the underlying hypervisor execute those privileged instructions . Handling unexpected or unallowable conditions via trapping can be time-consuming and can impact performance.
Other Storage Features/Enhancements  (ctd) ESX 3.x boot time LUN selection – which sd device represents an iSCSI disk and which represents an FC disk?
Other Storage Features/Enhancements  (ctd) ESX 4.0 boot time LUN selection. Hopefully this will address incorrect LUN selections during install/upgrade.
Questions? Nathan Small Staff Engineer [email_address]

More Related Content

What's hot

Les 19 space_db
Les 19 space_dbLes 19 space_db
Les 19 space_db
Femi Adeyemi
 
Les 12 fl_db
Les 12 fl_dbLes 12 fl_db
Les 12 fl_db
Femi Adeyemi
 
Les 00 intro
Les 00 introLes 00 intro
Les 00 intro
Femi Adeyemi
 
Les 16 resource
Les 16 resourceLes 16 resource
Les 16 resource
Femi Adeyemi
 
Les 02 config
Les 02 configLes 02 config
Les 02 config
Femi Adeyemi
 
Les 09 diag
Les 09 diagLes 09 diag
Les 09 diag
Femi Adeyemi
 
ARM AAE - System Issues
ARM AAE - System IssuesARM AAE - System Issues
ARM AAE - System Issues
Anh Dung NGUYEN
 
Les 20 dup_db
Les 20 dup_dbLes 20 dup_db
Les 20 dup_db
Femi Adeyemi
 
Xpp c user_rec
Xpp c user_recXpp c user_rec
Xpp c user_rec
Femi Adeyemi
 
AAME ARM Techcon2013 004v02 Debug and Optimization
AAME ARM Techcon2013 004v02 Debug and OptimizationAAME ARM Techcon2013 004v02 Debug and Optimization
AAME ARM Techcon2013 004v02 Debug and Optimization
Anh Dung NGUYEN
 
Server control utility reference
Server control utility referenceServer control utility reference
Server control utility reference
Femi Adeyemi
 
ARM AAE - Developing Code for ARM
ARM AAE - Developing Code for ARMARM AAE - Developing Code for ARM
ARM AAE - Developing Code for ARM
Anh Dung NGUYEN
 
Les 05 create_bu
Les 05 create_buLes 05 create_bu
Les 05 create_bu
Femi Adeyemi
 
Les 15 perf_sql
Les 15 perf_sqlLes 15 perf_sql
Les 15 perf_sql
Femi Adeyemi
 
Les 18 space
Les 18 spaceLes 18 space
Les 18 space
Femi Adeyemi
 
Linux on ARM 64-bit Architecture
Linux on ARM 64-bit ArchitectureLinux on ARM 64-bit Architecture
Linux on ARM 64-bit Architecture
Ryo Jin
 
Ch7 v70 scl_en
Ch7 v70 scl_enCh7 v70 scl_en
Ch7 v70 scl_en
confidencial
 
Ch7 v70 scl_en
Ch7 v70 scl_enCh7 v70 scl_en
Ch7 v70 scl_en
confidencial
 
ARM AAE - Memory Systems
ARM AAE - Memory SystemsARM AAE - Memory Systems
ARM AAE - Memory Systems
Anh Dung NGUYEN
 
AAME ARM Techcon2013 005v02 System Startup
AAME ARM Techcon2013 005v02 System StartupAAME ARM Techcon2013 005v02 System Startup
AAME ARM Techcon2013 005v02 System Startup
Anh Dung NGUYEN
 

What's hot (20)

Les 19 space_db
Les 19 space_dbLes 19 space_db
Les 19 space_db
 
Les 12 fl_db
Les 12 fl_dbLes 12 fl_db
Les 12 fl_db
 
Les 00 intro
Les 00 introLes 00 intro
Les 00 intro
 
Les 16 resource
Les 16 resourceLes 16 resource
Les 16 resource
 
Les 02 config
Les 02 configLes 02 config
Les 02 config
 
Les 09 diag
Les 09 diagLes 09 diag
Les 09 diag
 
ARM AAE - System Issues
ARM AAE - System IssuesARM AAE - System Issues
ARM AAE - System Issues
 
Les 20 dup_db
Les 20 dup_dbLes 20 dup_db
Les 20 dup_db
 
Xpp c user_rec
Xpp c user_recXpp c user_rec
Xpp c user_rec
 
AAME ARM Techcon2013 004v02 Debug and Optimization
AAME ARM Techcon2013 004v02 Debug and OptimizationAAME ARM Techcon2013 004v02 Debug and Optimization
AAME ARM Techcon2013 004v02 Debug and Optimization
 
Server control utility reference
Server control utility referenceServer control utility reference
Server control utility reference
 
ARM AAE - Developing Code for ARM
ARM AAE - Developing Code for ARMARM AAE - Developing Code for ARM
ARM AAE - Developing Code for ARM
 
Les 05 create_bu
Les 05 create_buLes 05 create_bu
Les 05 create_bu
 
Les 15 perf_sql
Les 15 perf_sqlLes 15 perf_sql
Les 15 perf_sql
 
Les 18 space
Les 18 spaceLes 18 space
Les 18 space
 
Linux on ARM 64-bit Architecture
Linux on ARM 64-bit ArchitectureLinux on ARM 64-bit Architecture
Linux on ARM 64-bit Architecture
 
Ch7 v70 scl_en
Ch7 v70 scl_enCh7 v70 scl_en
Ch7 v70 scl_en
 
Ch7 v70 scl_en
Ch7 v70 scl_enCh7 v70 scl_en
Ch7 v70 scl_en
 
ARM AAE - Memory Systems
ARM AAE - Memory SystemsARM AAE - Memory Systems
ARM AAE - Memory Systems
 
AAME ARM Techcon2013 005v02 System Startup
AAME ARM Techcon2013 005v02 System StartupAAME ARM Techcon2013 005v02 System Startup
AAME ARM Techcon2013 005v02 System Startup
 

Similar to Vmug V Sphere Storage (Rev E)

Vcap dca section 1
Vcap dca section 1Vcap dca section 1
Vcap dca section 1
ProfessionalVMware
 
2011 q1-indy-vmug
2011 q1-indy-vmug2011 q1-indy-vmug
2011 q1-indy-vmug
Adam Eckerle
 
Next-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and StorageNext-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and Storage
Scott Lowe
 
3487570
34875703487570
Advanced Root Cause Analysis
Advanced Root Cause AnalysisAdvanced Root Cause Analysis
Advanced Root Cause Analysis
Eric Sloof
 
Analisis_avanzado_vmware
Analisis_avanzado_vmwareAnalisis_avanzado_vmware
Analisis_avanzado_vmware
virtualizacionTV
 
Open Hardware and Future Computing
Open Hardware and Future ComputingOpen Hardware and Future Computing
Open Hardware and Future Computing
Ganesan Narayanasamy
 
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
Amazon Web Services
 
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
IJMERJOURNAL
 
Jvm Performance Tunning
Jvm Performance TunningJvm Performance Tunning
Jvm Performance Tunning
guest1f2740
 
Jvm Performance Tunning
Jvm Performance TunningJvm Performance Tunning
Jvm Performance Tunning
Terry Cho
 
Practical SPARQL Benchmarking Revisited
Practical SPARQL Benchmarking RevisitedPractical SPARQL Benchmarking Revisited
Practical SPARQL Benchmarking Revisited
Rob Vesse
 
Solaris multipathing
Solaris multipathingSolaris multipathing
Solaris multipathing
Bui Van Cuong
 
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
abdenour boussioud
 
AIX Advanced Administration Knowledge Share
AIX Advanced Administration Knowledge ShareAIX Advanced Administration Knowledge Share
AIX Advanced Administration Knowledge Share
.Gastón. .Bx.
 
LPAR2RRD on CZ/SK common 2014
LPAR2RRD on CZ/SK common 2014LPAR2RRD on CZ/SK common 2014
LPAR2RRD on CZ/SK common 2014
Pavel Hampl
 
LPAR2RRD for IBM Power Systems edition - white paper
LPAR2RRD for IBM Power Systems edition - white paperLPAR2RRD for IBM Power Systems edition - white paper
LPAR2RRD for IBM Power Systems edition - white paper
Pavel Hampl
 
Squash Those IoT Security Bugs with a Hardened System Profile
Squash Those IoT Security Bugs with a Hardened System ProfileSquash Those IoT Security Bugs with a Hardened System Profile
Squash Those IoT Security Bugs with a Hardened System Profile
Steve Arnold
 
Lect01 flow
Lect01 flowLect01 flow
Lect01 flow
prabhu_vlsi
 
Intel® RDT Hands-on Lab
Intel® RDT Hands-on LabIntel® RDT Hands-on Lab
Intel® RDT Hands-on Lab
Michelle Holley
 

Similar to Vmug V Sphere Storage (Rev E) (20)

Vcap dca section 1
Vcap dca section 1Vcap dca section 1
Vcap dca section 1
 
2011 q1-indy-vmug
2011 q1-indy-vmug2011 q1-indy-vmug
2011 q1-indy-vmug
 
Next-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and StorageNext-Generation Best Practices for VMware and Storage
Next-Generation Best Practices for VMware and Storage
 
3487570
34875703487570
3487570
 
Advanced Root Cause Analysis
Advanced Root Cause AnalysisAdvanced Root Cause Analysis
Advanced Root Cause Analysis
 
Analisis_avanzado_vmware
Analisis_avanzado_vmwareAnalisis_avanzado_vmware
Analisis_avanzado_vmware
 
Open Hardware and Future Computing
Open Hardware and Future ComputingOpen Hardware and Future Computing
Open Hardware and Future Computing
 
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
Deep Dive on Amazon EC2 Instances & Performance Optimization Best Practices (...
 
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
Complex Test Pattern Generation for high speed fault diagnosis in FPGA based ...
 
Jvm Performance Tunning
Jvm Performance TunningJvm Performance Tunning
Jvm Performance Tunning
 
Jvm Performance Tunning
Jvm Performance TunningJvm Performance Tunning
Jvm Performance Tunning
 
Practical SPARQL Benchmarking Revisited
Practical SPARQL Benchmarking RevisitedPractical SPARQL Benchmarking Revisited
Practical SPARQL Benchmarking Revisited
 
Solaris multipathing
Solaris multipathingSolaris multipathing
Solaris multipathing
 
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
Best practices for catalyst 4500 4000, 5500-5000, and 6500-6000 series switch...
 
AIX Advanced Administration Knowledge Share
AIX Advanced Administration Knowledge ShareAIX Advanced Administration Knowledge Share
AIX Advanced Administration Knowledge Share
 
LPAR2RRD on CZ/SK common 2014
LPAR2RRD on CZ/SK common 2014LPAR2RRD on CZ/SK common 2014
LPAR2RRD on CZ/SK common 2014
 
LPAR2RRD for IBM Power Systems edition - white paper
LPAR2RRD for IBM Power Systems edition - white paperLPAR2RRD for IBM Power Systems edition - white paper
LPAR2RRD for IBM Power Systems edition - white paper
 
Squash Those IoT Security Bugs with a Hardened System Profile
Squash Those IoT Security Bugs with a Hardened System ProfileSquash Those IoT Security Bugs with a Hardened System Profile
Squash Those IoT Security Bugs with a Hardened System Profile
 
Lect01 flow
Lect01 flowLect01 flow
Lect01 flow
 
Intel® RDT Hands-on Lab
Intel® RDT Hands-on LabIntel® RDT Hands-on Lab
Intel® RDT Hands-on Lab
 

Recently uploaded

Three New Criminal Laws in India 1 July 2024
Three New Criminal Laws in India 1 July 2024Three New Criminal Laws in India 1 July 2024
Three New Criminal Laws in India 1 July 2024
aakash malhotra
 
Figma AI Design Generator_ In-Depth Review.pdf
Figma AI Design Generator_ In-Depth Review.pdfFigma AI Design Generator_ In-Depth Review.pdf
Figma AI Design Generator_ In-Depth Review.pdf
Management Institute of Skills Development
 
How to Build a Profitable IoT Product.pptx
How to Build a Profitable IoT Product.pptxHow to Build a Profitable IoT Product.pptx
How to Build a Profitable IoT Product.pptx
Adam Dunkels
 
The importance of Quality Assurance for ICT Standardization
The importance of Quality Assurance for ICT StandardizationThe importance of Quality Assurance for ICT Standardization
The importance of Quality Assurance for ICT Standardization
Axel Rennoch
 
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
sunilverma7884
 
Feature sql server terbaru performance.pptx
Feature sql server terbaru performance.pptxFeature sql server terbaru performance.pptx
Feature sql server terbaru performance.pptx
ssuser1915fe1
 
Salesforce AI & Einstein Copilot Workshop
Salesforce AI & Einstein Copilot WorkshopSalesforce AI & Einstein Copilot Workshop
Salesforce AI & Einstein Copilot Workshop
CEPTES Software Inc
 
EuroPython 2024 - Streamlining Testing in a Large Python Codebase
EuroPython 2024 - Streamlining Testing in a Large Python CodebaseEuroPython 2024 - Streamlining Testing in a Large Python Codebase
EuroPython 2024 - Streamlining Testing in a Large Python Codebase
Jimmy Lai
 
IPLOOK Remote-Sensing Satellite Solution
IPLOOK Remote-Sensing Satellite SolutionIPLOOK Remote-Sensing Satellite Solution
IPLOOK Remote-Sensing Satellite Solution
IPLOOK Networks
 
Uncharted Together- Navigating AI's New Frontiers in Libraries
Uncharted Together- Navigating AI's New Frontiers in LibrariesUncharted Together- Navigating AI's New Frontiers in Libraries
Uncharted Together- Navigating AI's New Frontiers in Libraries
Brian Pichman
 
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyyActive Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
RaminGhanbari2
 
WhatsApp Spy Online Trackers and Monitoring Apps
WhatsApp Spy Online Trackers and Monitoring AppsWhatsApp Spy Online Trackers and Monitoring Apps
WhatsApp Spy Online Trackers and Monitoring Apps
HackersList
 
Evolution of iPaaS - simplify IT workloads to provide a unified view of data...
Evolution of iPaaS - simplify IT workloads to provide a unified view of  data...Evolution of iPaaS - simplify IT workloads to provide a unified view of  data...
Evolution of iPaaS - simplify IT workloads to provide a unified view of data...
Torry Harris
 
How Social Media Hackers Help You to See Your Wife's Message.pdf
How Social Media Hackers Help You to See Your Wife's Message.pdfHow Social Media Hackers Help You to See Your Wife's Message.pdf
How Social Media Hackers Help You to See Your Wife's Message.pdf
HackersList
 
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-InTrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
TrustArc
 
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSECHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
kumarjarun2010
 
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and OllamaTirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
Zilliz
 
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptxDublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
Kunal Gupta
 
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
maigasapphire
 
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
shanihomely
 

Recently uploaded (20)

Three New Criminal Laws in India 1 July 2024
Three New Criminal Laws in India 1 July 2024Three New Criminal Laws in India 1 July 2024
Three New Criminal Laws in India 1 July 2024
 
Figma AI Design Generator_ In-Depth Review.pdf
Figma AI Design Generator_ In-Depth Review.pdfFigma AI Design Generator_ In-Depth Review.pdf
Figma AI Design Generator_ In-Depth Review.pdf
 
How to Build a Profitable IoT Product.pptx
How to Build a Profitable IoT Product.pptxHow to Build a Profitable IoT Product.pptx
How to Build a Profitable IoT Product.pptx
 
The importance of Quality Assurance for ICT Standardization
The importance of Quality Assurance for ICT StandardizationThe importance of Quality Assurance for ICT Standardization
The importance of Quality Assurance for ICT Standardization
 
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
Girls call Kolkata 👀 XXXXXXXXXXX 👀 Rs.9.5 K Cash Payment With Room Delivery
 
Feature sql server terbaru performance.pptx
Feature sql server terbaru performance.pptxFeature sql server terbaru performance.pptx
Feature sql server terbaru performance.pptx
 
Salesforce AI & Einstein Copilot Workshop
Salesforce AI & Einstein Copilot WorkshopSalesforce AI & Einstein Copilot Workshop
Salesforce AI & Einstein Copilot Workshop
 
EuroPython 2024 - Streamlining Testing in a Large Python Codebase
EuroPython 2024 - Streamlining Testing in a Large Python CodebaseEuroPython 2024 - Streamlining Testing in a Large Python Codebase
EuroPython 2024 - Streamlining Testing in a Large Python Codebase
 
IPLOOK Remote-Sensing Satellite Solution
IPLOOK Remote-Sensing Satellite SolutionIPLOOK Remote-Sensing Satellite Solution
IPLOOK Remote-Sensing Satellite Solution
 
Uncharted Together- Navigating AI's New Frontiers in Libraries
Uncharted Together- Navigating AI's New Frontiers in LibrariesUncharted Together- Navigating AI's New Frontiers in Libraries
Uncharted Together- Navigating AI's New Frontiers in Libraries
 
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyyActive Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
Active Inference is a veryyyyyyyyyyyyyyyyyyyyyyyy
 
WhatsApp Spy Online Trackers and Monitoring Apps
WhatsApp Spy Online Trackers and Monitoring AppsWhatsApp Spy Online Trackers and Monitoring Apps
WhatsApp Spy Online Trackers and Monitoring Apps
 
Evolution of iPaaS - simplify IT workloads to provide a unified view of data...
Evolution of iPaaS - simplify IT workloads to provide a unified view of  data...Evolution of iPaaS - simplify IT workloads to provide a unified view of  data...
Evolution of iPaaS - simplify IT workloads to provide a unified view of data...
 
How Social Media Hackers Help You to See Your Wife's Message.pdf
How Social Media Hackers Help You to See Your Wife's Message.pdfHow Social Media Hackers Help You to See Your Wife's Message.pdf
How Social Media Hackers Help You to See Your Wife's Message.pdf
 
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-InTrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
TrustArc Webinar - 2024 Data Privacy Trends: A Mid-Year Check-In
 
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSECHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
CHAPTER-8 COMPONENTS OF COMPUTER SYSTEM CLASS 9 CBSE
 
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and OllamaTirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
Tirana Tech Meetup - Agentic RAG with Milvus, Llama3 and Ollama
 
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptxDublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
Dublin_mulesoft_meetup_Mulesoft_Salesforce_Integration (1).pptx
 
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
Girls Call Churchgate 9910780858 Provide Best And Top Girl Service And No1 in...
 
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
Premium Girls Call Mumbai 9920725232 Unlimited Short Providing Girls Service ...
 

  • 10. NMP Specific Tasks Manage physical path claiming and unclaiming . Register and de-register logical devices. Associate physical paths with logical devices . Process I/O requests to logical devices: Select an optimal physical path for the request (load balance) Perform actions necessary to handle failures and request retries . Support management tasks such as abort or reset of logical devices.
  • 11. Storage Array Type Plugin - SATP A Storage Array Type Plugin (SATP) handles path failover operations. VMware provides a default SATP for each supported array as well as a generic SATP (an active/active version and an active/passive version) for non-specified storage arrays. If you want to take advantage of certain storage-specific characteristics of your array, you can install a 3rd party SATP provided by the vendor of the storage array, or by a software company specializing in optimizing the use of your storage array. Each SATP implements the support for a specific type of storage array, e.g. VMW_SATP_SVC for IBM SVC.
  • 12. SATP (ctd) The primary functions of an SATP are: Implements the switching of physical paths to the array when a path has failed. Determines when a hardware component of a physical path has failed. Monitors the hardware state of the physical paths to the storage array. There are many storage array type plug-ins. To see the complete list, you can use the following commands: # esxcli nmp satp list # esxcli nmp satp listrules # esxcli nmp satp listrules -s <specific SATP>
  • 13. Path Selection Plugin (PSP) If you want to take advantage of more complex I/O load balancing algorithms, you could install a 3rd party Path Selection Plugin (PSP). A PSP handles load balancing operations and is responsible for choosing a physical path to issue an I/O request to a logical device. VMware provides three PSPs: Fixed, MRU, and Round Robin. # esxcli nmp psp list Name Description VMW_PSP_MRU Most Recently Used Path Selection VMW_PSP_RR Round Robin Path Selection VMW_PSP_FIXED Fixed Path Selection
  • 14. NMP Supported PSPs Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available. Fixed — Uses the designated preferred path , if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available. Round Robin (RR) – Uses an automatic path selection rotating through all available paths and enabling load balancing across the paths.
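A hedged example of changing the managing PSP for a single device from the CLI (the NAA ID shown is reused from elsewhere in this deck purely as a placeholder; substitute your own device ID). This switches the device to Round Robin and then verifies the change:
# esxcli nmp device setpolicy --device naa.600601601d311f001ee294d9e7e2dd11 --psp VMW_PSP_RR
# esxcli nmp device list -d naa.600601601d311f001ee294d9e7e2dd11
The setting is per-device and, per the notes later in this deck, persists across reboots, so it only needs to be applied once per device.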
  • 15. Enabling Additional Logging on vSphere 4.0 For additional SCSI log messages, set: Scsi.LogCmdErrors = "1" Scsi.LogMPCmdErrors = "1" At GA, the default setting for Scsi.LogMPCmdErrors is "1". These can be found in the Advanced Settings.
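If you prefer the Service Console to the GUI, the same advanced options can be set and read back with esxcfg-advcfg (a minimal sketch; the option paths are assumed to mirror the Advanced Settings names shown above):
# esxcfg-advcfg -s 1 /Scsi/LogCmdErrors
# esxcfg-advcfg -s 1 /Scsi/LogMPCmdErrors
# esxcfg-advcfg -g /Scsi/LogMPCmdErrors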
  • 16. Viewing Plugin Information The following command lists all multipathing modules loaded on the system. At a minimum, this command returns the default VMware Native Multipath (NMP) plugin & the MASK_PATH plugin. Third-party MPPs will also be listed if installed: # esxcfg-mpath -G MASK_PATH NMP For ESXi, the following VI CLI 4.0 command can be used: # vicfg-mpath -G --server <IP> --username <X> --password <Y> MASK_PATH NMP LUN path masking is done via the MASK_PATH Plug-in.
  • 17. Viewing Device Information The command esxcli nmp device list lists all devices managed by the NMP plug-in and the configuration of that device, e.g.: # esxcli nmp device list naa.600601601d311f001ee294d9e7e2dd11 Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) Storage Array Type: VMW_SATP_CX Storage Array Type Device Config: {navireg ipfilter} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba33:C0:T0:L1 Working Paths: vmhba33:C0:T0:L1 mpx.vmhba1:C0:T0:L0 Device Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba1:C0:T0:L0;current=vmhba1:C0:T0:L0} Working Paths: vmhba1:C0:T0:L0 Specific configuration for EMC Clariion & Invista products mpx is used as an identifier for devices that do not have their own unique ids NAA is the Network Addressing Authority (NAA) identifier guaranteed to be unique
  • 18. Viewing Device Information (ctd) Get current path information for a specified storage device managed by the NMP. # esxcli nmp device list -d naa.600601604320170080d407794f10dd11 naa.600601604320170080d407794f10dd11 Device Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11) Storage Array Type: VMW_SATP_CX Storage Array Type Device Config: {navireg ipfilter} Path Selection Policy: VMW_PSP_MRU Path Selection Policy Device Config: Current Path=vmhba2:C0:T0:L0 Working Paths: vmhba2:C0:T0:L0
  • 19. Viewing Device Information (ctd) Lists all paths available for a specified storage device on ESX: # esxcfg-mpath -b -d naa.600601601d311f001ee294d9e7e2dd11 naa.600601601d311f001ee294d9e7e2dd11 : DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) vmhba33:C0:T0:L1 LUN:1 state:active iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1 vmhba33:C0:T1:L1 LUN:1 state:standby iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2 ESXi has an equivalent vicfg-mpath command.
  • 20. Viewing Device Information (ctd) # esxcfg-mpath -l -d naa.6006016043201700d67a179ab32fdc11 iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2-naa.600601601d311f001ee294d9e7e2dd11 Runtime Name: vmhba33:C0:T1:L1 Device: naa.600601601d311f001ee294d9e7e2dd11 Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) Adapter: vmhba33 Channel: 0 Target: 1 LUN: 1 Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2 Plugin: NMP State: standby Transport: iscsi Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2 iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1-naa.600601601d311f001ee294d9e7e2dd11 Runtime Name: vmhba33:C0:T0:L1 Device: naa.600601601d311f001ee294d9e7e2dd11 Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11) Adapter: vmhba33 Channel: 0 Target: 0 LUN: 1 Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1 Plugin: NMP State: active Transport: iscsi Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1 Storage array (target) iSCSI Qualified Names (IQNs)
  • 21. Third-Party Multipathing Plug-ins (MPPs) You can install the third-party multipathing plug-ins (MPPs) when you need to change specific load balancing and failover characteristics of ESX/ESXi. The third-party MPPs replace the behaviour of the NMP and entirely take control over the path failover and the load balancing operations for certain specified storage devices.
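Which plugin (the NMP, MASK_PATH, or a third-party MPP) claims which paths is governed by claim rules. As a hedged illustration, the claim rules currently in force can be listed from the CLI with:
# esxcli corestorage claimrule list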
  • 22. Third-Party SATP & PSP Third-party SATP Generally developed by third-party hardware manufacturers who have ‘expert’ knowledge of the behaviour of their storage devices. Accommodates specific characteristics of storage arrays and facilitates support for new arrays. Third-party PSP Generally developed by third-party software companies. More complex I/O load balancing algorithms. NMP coordination Third-party SATPs and PSPs are coordinated by the NMP, and can be simultaneously used with the VMware SATPs and PSPs.
  • 23. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 24. iSCSI Enhancements ESX 4 includes an updated iSCSI stack which offers improvements to both software iSCSI (initiator that runs at the ESX layer) and hardware iSCSI (a hardware-optimized iSCSI HBA). For both software and hardware iSCSI, functionality (e.g. CHAP support, digest acceleration, etc.) and performance are improved. Software iSCSI can now be configured to use host based multipathing if you have more than one physical network adapter. In the new ESX 4.0 Software iSCSI stack, there is no longer any requirement to have a Service Console connection to communicate to an iSCSI target.
  • 25. Software iSCSI Enhancements iSCSI Advanced Settings In particular, data integrity checks in the form of digests . CHAP Parameters Settings A user will be able to specify CHAP parameters as per-target CHAP and mutual per-target CHAP. Inheritance model of parameters. A global set of configuration parameters can be set on the initiator and propagated down to all targets. Per target/discovery level configuration. Configuration settings can now be set on a per target basis which means that a customer can uniquely configure parameters for each array discovered by the initiator.
  • 26. Software iSCSI Multipathing – Port Binding You can now create a port binding between a physical NIC and an iSCSI VMkernel port in ESX 4.0. Using the "port binding" feature, users can map multiple iSCSI VMkernel ports to different physical NICs. This enables the software iSCSI initiator to use multiple physical NICs for I/O transfer. Connecting the software iSCSI initiator to the VMkernel ports can only be done from the CLI using the esxcli swiscsi commands. Host based multipathing can then manage the paths to the LUN. In addition, the Round Robin path policy can be configured to simultaneously use more than one physical NIC for the iSCSI traffic.
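A minimal sketch of the esxcli swiscsi binding step, assuming two VMkernel ports (vmk1 and vmk2, each tied to its own uplink) have already been created and that vmhba33 is the software iSCSI adapter, as elsewhere in this deck:
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33
After a rescan, each bound VMkernel port should show up as an additional path to the iSCSI LUNs.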
  • 27. Hardware iSCSI Limitations Mutual CHAP is disabled. Discovery is supported by IP address only (storage array name discovery not supported). Running with the Hardware and Software iSCSI initiator enabled on the same host at the same time is not supported.
  • 28. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 29. GUI Changes - Display Device Info Note that there are no further references to vmhbaC:T:L. Unique device identifiers such as the NAA id are now used.
  • 30. GUI Changes - Display HBA Configuration Info Again, notice the use of NAA ids rather than vmhbaC:T:L.
  • 31. GUI Changes - Display Path Info Note the reference to the PSP & SATP Note the (I/O) status designating the active path
  • 32. GUI Changes - Data Center Rescan
  • 33. Degraded Status If we detect fewer than 2 HBAs or 2 targets in the paths of the datastore, we mark the datastore multipathing status as "Partial/No Redundancy" in the Storage Views.
  • 34. Storage Administration VI4 also provides new monitoring, reporting and alarm features for storage management. This now gives an administrator of a vSphere environment the ability to: Manage access/permissions of datastores/folders Have visibility of a Virtual Machine’s connectivity to the storage infrastructure Account for disk space utilization Provide notification in case of specific usage conditions
  • 35. Datastore Monitoring & Alarms vSphere introduces new datastore and VM-specific alarms/alerts on storage events: New datastore alarms: Datastore disk usage % Datastore disk overallocation % Datastore connection state to all hosts New VM alarms: VM Total size on disk (GB) VM Snapshot size (GB) Customers can now track snapshot usage
  • 36. New Storage Alarms New datastore-specific alarms New VM-specific alarms These alarms allow the tracking of Thin Provisioned disks This alarm will trigger if a datastore becomes unavailable to the host This alarm will trigger if a snapshot delta file becomes too large
  • 37. vSphere 4 Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 38. Traditional Snapshot Detection When an ESX 3.x server finds a VMFS-3 LUN, it compares the SCSI_DiskID information returned from the storage array with the SCSI_DiskID information stored in the LVM Header. If the two IDs don’t match, then by default, the VMFS-3 volume will not be mounted and thus be inaccessible. A VMFS volume on ESX 3.x could be detected as a snapshot for a number of reasons: LUN ID changed SCSI version supported by array changed (firmware upgrade) Identifier type changed – Unit Serial Number vs NAA ID
  • 39. New Snapshot Detection Mechanism When trying to determine if a device is a snapshot in ESX 4.0, the ESX uses a globally unique identifier to identify each LUN, typically the NAA (Network Addressing Authority) ID. NAA IDs are unique and are persistent across reboots. There are many different globally unique identifiers (EUI, SNS, T10, etc). If the LUN does not support any of these globally unique identifiers, ESX will fall back to the serial number + LUN ID used in ESX 3.0.
  • 40. SCSI_DiskId Structure The internal VMkernel structure SCSI_DiskId is populated with information about a LUN. This is stored in the metadata header of a VMFS volume. If the LUN does have a globally unique (NAA) ID, the field SCSI_DiskId.data.uid in the SCSI_DiskId structure will hold it. If the NAA ID in the SCSI_DiskId.data.uid stored in the metadata does not match the NAA ID returned by the LUN, the ESX knows the LUN is a snapshot. For older arrays that do not support NAA IDs, the earlier algorithm is used where we compare other fields in the SCSI_DISKID structure to detect whether a LUN is a snapshot or not.
  • 41. Snapshot Log Messages 8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T1:L2' 8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T0:L2' 8:00:45:51.977 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T0:L2 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35 8:00:45:51.978 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T1:L2 0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35 8:00:45:52.002 cpu2:81258)LVM: 7125: Device naa.600601601d311f00fca3ea501beddd11:1 detected to be a snapshot: 8:00:45:52.002 cpu2:81258)LVM: 7132: queried disk ID: <type 2, len 22, lun 2, devType 0, scsi 0, h(id) 3817547080305476947> 8:00:45:52.002 cpu2:81258)LVM: 7139: on-disk disk ID: <type 2, len 22, lun 1, devType 0, scsi 0, h(id) 6335084141271340065> 8:00:45:52.006 cpu2:81258)ScsiDevice: 1756: Successfully registered device "naa.600601601d311f00fca3ea501beddd11" from plugin "
  • 42. Resignature & Force-Mount We have a new naming convention in ESX 4. “ Resignature ” is equivalent to EnableResignature = 1 in ESX 3.x. “ Force-Mount ” is equivalent to DisallowSnapshotLUN = 0 in ESX 3.x. The advanced configuration options EnableResignature and DisallowSnapshotLUN have been replaced in ESX 4 with a new CLI utility called esxcfg-volume ( vicfg-volume for ESXi) . Historically, the EnableResignature and DisallowSnapshotLUN were applied server wide and applied to all volumes on an ESX. The new Resignature and Force-mount are volume specific so offer much greater granularity in the handling of snapshots.
  • 43. Persistent Or Non-Persistent Mounts If you use the GUI to force-mount a VMFS volume, it will make it a persistent mount which will remain in place through reboots of the ESX host. VC will not allow this volume to be resignatured. If you use the CLI to force-mount a VMFS volume, you can choose whether it persists or not through reboots. Through the GUI , the Add Storage Wizard now displays the VMFS label. Therefore if a device is not mounted, but it has a label associated with it, you can make the assumption that it is a snapshot, or to use ESX 4 terminology, a Volume Copy .
  • 44. Mounting A Snapshot The original volume is still presented to the ESX. Snapshot – notice that the volume label is the same as the original volume.
  • 45. Snapshot Mount Options Keep Existing Signature – this is a force-mount operation: similar to disabling DisallowSnapshots in ESX 3.x. New datastore has original UUID saved in the file system header. If the original volume is already online, this option will not succeed and will print a ‘Cannot change the host configuration’ message when resolving the VMFS volumes. Assign a new Signature – this is a resignature operation: similar to enabling EnableResignature in ESX 3.x. New datastore has a new UUID saved in the file system header. Format the disk – destroys the data on the disk and creates a new VMFS volume on it.
  • 46. New CLI Command: esxcfg-volume There is a new CLI command in ESX 4 for resignaturing VMFS snapshots. Note the difference between ‘-m’ and ‘-M’: # esxcfg-volume esxcfg-volume <options> -l|--list List all volumes which have been detected as snapshots/replicas. -m|--mount <VMFS UUID|label> Mount a snapshot/replica volume, if its original copy is not online. -u|--umount <VMFS UUID|label> Umount a snapshot/replica volume. -r|--resignature <VMFS UUID|label> Resignature a snapshot/replica volume. -M|--persistent-mount <VMFS UUID|label> Mount a snapshot/replica volume persistently, if its original copy is not online. -h|--help Show this message.
  • 47. esxcfg-volume (ctd) The difference between a mount and a persistent mount is that the persistent mounts will be maintained through reboots. ESX manages this by adding entries for force mounts into the /etc/vmware/esx.conf. A typical set of entries for a force mount look like: /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/forceMount = "true" /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1" /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/readOnly = "false"
  • 48. Mount With the Original Volume Still Online /var/log # esxcfg-volume -l VMFS3 UUID/label: 496f202f-3ff43d2e-7efe-001f29595f9d/Shared_VMFS_For_FT_VMs Can mount: No (the original volume is still online) Can resignature: Yes Extent name: naa.600601601d311f00fca3ea501beddd11:1 range: 0 - 20223 (MB) /var/log # esxcfg-volume -m 496f202f-3ff43d2e-7efe-001f29595f9d Mounting volume 496f202f-3ff43d2e-7efe-001f29595f9d Error: Unable to mount this VMFS3 volume due to the original volume is still online
  • 49. esxcfg-volume (ctd) In this next example, a clone LUN of a VMFS LUN is presented back to the same ESX server. Therefore we cannot use either the mount or the persistent mount options since the original LUN is already presented to the host so we will have to resignature: # esxcfg-volume -l VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol Can mount: No Can resignature: Yes Extent name: naa.6006016043201700f30570ed09f6da11:1 range: 0 - 15103 (MB)
  • 50. esxcfg-volume (ctd) # esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1 Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1 # vdf Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdg2 5044188 1595804 3192148 34% / /dev/sdd1 248895 50780 185265 22% /boot . . /vmfs/volumes/48d247dd-7971f45b-5ee4-0019993032e1 15466496 5183488 10283008 33% /vmfs/volumes/cormac_grow_vol /vmfs/volumes/48d39951-19a5b934-67c3-0019993032e1 15466496 5183488 10283008 33% /vmfs/volumes/snap-397419fa-cormac_grow_vol Warning – there is no vdf command in ESXi. However the df command reports on VMFS filesystems in ESXi.
  • 51. Ineffective Workarounds Under ESX 4 Some of our customers have been running with DisallowSnapshotLUN set to ‘0’ as a workaround solution due to their choice to use inconsistent LUN presentation numbers across their hosts since the early ESX 3.0.x days. Other customers have been running with this setting after enabling the SPC-2 and SC-3 bits on their FA ports for their EMC DMX/Symmetrix array and found that their LUNs were now seen as snapshots. This was due to the LUN IDs for their LUNs changing since we reference the LUN by a unique ID (in page 0x83) rather than the serial number for the LUN or the NAA (if running ESX 3.5) once these director bits are enabled. This behavior was also observed in other arrays when upgrading firmware/OS. As there is no longer a global DisallowSnapshotLUN setting available in ESX 4, if your environment is running entirely on “snapshots” then you can use the following script to streamline the force-mount operation on each ESX host: for i in `/usr/sbin/esxcfg-volume -l|grep VMFS3|awk '{print $3}'`;do /usr/sbin/esxcfg-volume -M $i;done
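The same loop, reflowed for readability (a sketch only; it persistently force-mounts every volume that esxcfg-volume reports, so review the -l output before running it):
for vol in `/usr/sbin/esxcfg-volume -l | grep VMFS3 | awk '{print $3}'`
do
    # persistently force-mount each detected snapshot/replica volume
    /usr/sbin/esxcfg-volume -M $vol
done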
  • 52. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 53. Storage VMotion Enhancements In VI4 The following enhancements have been made to the VI4 version of Storage VMotion: GUI Support. Leverages new features of VI4, including fast suspend/resume and Change Block Tracking. Supports moving VMDKs from Thick to Thin formats & vice versa Ability to migrate RDMs to VMDKs. Ability to migrate RDMs to RDMs. Support for FC, iSCSI & NAS. Storage VMotion no longer requires 2 x memory. No requirement to create a VMotion interface for Storage VMotion. Ability to move an individual disk without moving the VM’s home.
  • 54. New Features Fast Suspend/Resume of VMs This provides the ability to quickly and reliably transition from the source VM to the destination VM with a fast switching time. This is only necessary when migrating the .vmx file. Changed Block Tracking Very much like how we handle memory with standard VMotion, in that a bitmap of changed disk blocks is used rather than a bitmap of changed memory pages. This means Storage VMotion no longer needs to snapshot the original VM and commit it to the destination VM, so the Storage VMotion operation performs much faster. Multiple iterations of the disk copy go through, but each time the number of changed disk blocks reduces, until eventually all disk blocks have been copied and we have a complete copy of the disk at the destination.
  • 55. Storage VMotion – GUI Support Storage VMotion is still supported via the VI CLI 4.0 as well as the API , so customers wishing to use this method can continue to do so. The Change both host and datastore option is only available to powered off VMs. For a non-passthru RDM, you can select to convert it to either Thin Provisioned or Thick when converting it to a VMDK, or you can leave it as a non-passthru RDM.
  • 56. Storage VMotion – CLI (ctd) # svmotion --interactive Entering interactive mode. All other options and environment variables will be ignored. Enter the VirtualCenter service url you wish to connect to (e.g. https://myvc.mycorp.com/sdk, or just myvc.mycorp.com): VC-Linked-Mode.vi40.vmware.com Enter your username: Administrator Enter your password: ******** Attempting to connect to https://VC-Linked-Mode.vi40.vmware.com/sdk. Connected to server. Enter the name of the datacenter: Embedded-ESX40 Enter the datastore path of the virtual machine (e.g. [datastore1] myvm/myvm.vmx): [CLAR_L52] W2K3SP2/W2K3SP2.vmx Enter the name of the destination datastore: CLAR_L53 You can also move disks independently of the virtual machine. If you want the disks to stay with the virtual machine, then skip this step.. Would you like to individually place the disks (yes/no)? no Performing Storage VMotion. 0% |----------------------------------------------------------------------------------------------------| 100% ##########
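For reference, a hedged non-interactive sketch of the same migration (the exact option syntax should be verified against the VI CLI 4.0 documentation; connection options such as --password follow the standard VI CLI conventions):
# svmotion --url=https://VC-Linked-Mode.vi40.vmware.com/sdk --username=Administrator --datacenter=Embedded-ESX40 --vm='[CLAR_L52] W2K3SP2/W2K3SP2.vmx:CLAR_L53'
An optional --disks argument lets individual virtual disks be placed on a different datastore from the VM's home.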
  • 57. Limitations The migration of Virtual Machines which have snapshots will not be supported at GA. Currently the plan is to have this in a future release The migration of Virtual Machines to a different host and a different datastore simultaneously is not yet supported. No firm date for support of this feature yet.
  • 58. Storage VMotion Timeouts There are also a number of tunable timeout values: Downtime timeout Failure: Source detected that destination failed to resume. Update fsr.maxSwitchoverSeconds (default 100 seconds) in the VM’s .vmx file. May be observed on Virtual Machines that have lots of virtual disks. Data timeout Failure: Timed out waiting for migration data. Update migration.dataTimeout (default 60 seconds ) in the VM’s .vmx file. May be observed when migrating from NFS to NFS on slow networks.
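As a sketch, the corresponding .vmx entries look like the following (the parameter names come from the slide above; the values are purely illustrative, not recommendations):
fsr.maxSwitchoverSeconds = "150"
migration.dataTimeout = "120"
Changes to the .vmx are typically picked up the next time the VM's configuration is reloaded.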
  • 59. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 60. Supported Disk Growth/Shrink Operations VI4 introduces the following growth/shrink operations: Grow VMFS volumes: yes Grow RDM volumes: yes Grow *.vmdk: yes Shrink VMFS volumes: no Shrink RDM volumes: yes Shrink *.vmdk: no
  • 61. Volume Grow & Hot VMDK Extend Volume Grow VI4 allows dynamic expansion of a volume partition by adding capacity to a VMFS without disrupting running Virtual Machines. Once the LUN backing the datastore has been grown (typically through an array management utility), Volume Grow expands the VMFS partition on the expanded LUN. Historically, the only way to grow a VMFS volume was to use the extent-based approach. Volume Grow offers a different method of capacity growth . The newly available space appears as a larger VMFS volume along with an associated grow event in vCenter. Hot VMDK Extend Hot extend is supported for VMFS flat virtual disks in persistent mode and without any Virtual Machine snapshots. Used in conjunction with the new Volume Grow capability, the user has maximum flexibility in managing growing capacity in VI4.
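A hedged sketch of hot-extending a VMDK from the CLI with vmkfstools (the datastore path and VM name are placeholders; -X takes the new total size, not the increment):
# vmkfstools -X 20G /vmfs/volumes/cormac_grow_vol/myvm/myvm.vmdk
For a running VM, hot extend is normally driven through the VM's Edit Settings dialog rather than the CLI.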
  • 62. Comparison: Volume Grow & Add Extent
       Must power off VMs - Volume Grow: No | Add Extent: No
       Can be done on newly-provisioned LUN - Volume Grow: No | Add Extent: Yes
       Can be done on existing array-expanded LUN - Volume Grow: Yes | Add Extent: Yes (but not allowed through GUI)
       Limits - Volume Grow: an extent can be grown any number of times, up to 2TB | Add Extent: a datastore can have up to 32 extents, each up to 2TB
       Results in creation of new partition - Volume Grow: No | Add Extent: Yes
       VM availability impact - Volume Grow: none, if datastore has only one extent | Add Extent: introduces dependency on first extent
  • 63. Volume Grow GUI Enhancements Here I am choosing the same device on which the VMFS is installed – there is currently 4GB free. This option selects to expand the VMFS using free space on the current device. Notice that the current extent capacity is 1GB.
  • 64. VMFS Grow - Expansion Options (diagram): Dynamic LUN Expansion applies to the LUN provisioned at the array, VMFS Volume Grow applies to the VMFS volume/datastore provisioned for ESX, and VMDK Hot Extend applies to the virtual disk provisioned for the VM.
  • 66. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 67. ESX 4.0 CLI There have been a number of new storage commands introduced with ESX 4.0 as well as enhancements to the more traditional commands. Here is a list of commands that have changed or have been added: esxcli esxcfg-mpath / vicfg-mpath esxcfg-volume / vicfg-volume esxcfg-scsidevs / vicfg-scsidevs esxcfg-rescan / vicfg-rescan esxcfg-module / vicfg-module vmkfstools This slide deck will only cover a few of the above-mentioned commands.
  • 68. New/Updated CLI Commands (1): esxcfg-scsidevs The esxcfg-vmhbadevs command has been replaced by the esxcfg-scsidevs command. To display the old VMware Legacy identifiers (vml), use: # esxcfg-scsidevs -u To display Service Console devices: # esxcfg-scsidevs -c To display all logical devices on this host: # esxcfg-scsidevs -l To show the relationship between COS native devices (/dev/sd) and vmhba devices: # esxcfg-scsidevs -m The VI CLI 4.0 has an equivalent vicfg-scsidevs for ESXi.
  • 69. esxcfg-scsidevs (ctd) Sample output of esxcfg-scsidevs –l : naa.600601604320170080d407794f10dd11 Device Type: Direct-Access Size: 8192 MB Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11) Plugin: NMP Console Device: /dev/sdb Devfs Path: /vmfs/devices/disks/naa.600601604320170080d407794f10dd11 Vendor: DGC Model: RAID 5 Revis: 0224 SCSI Level: 4 Is Pseudo: false Status: on Is RDM Capable: true Is Removable: false Is Local: false Other Names: vml.0200000000600601604320170080d407794f10dd11524149442035 Note that this is one of the few CLI commands which will report the LUN size
  • 70. New/Updated CLI Commands (2): esxcfg-rescan You now have the ability to rescan based on whether devices were added or devices were removed. You can also rescan the current paths and not try to discover new ones. # esxcfg-rescan -h esxcfg-rescan <options> [adapter] -a|--add Scan for only newly added devices. -d|--delete Scan for only deleted devices. -u|--update Scan existing paths only and update their state. -h|--help Display this message. The VI CLI 4.0 has an equivalent vicfg-rescan command for ESXi.
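For example (using an adapter name that appears earlier in this deck), to scan a single HBA only for newly added devices, or to refresh the state of its existing paths without discovering new ones:
# esxcfg-rescan -a vmhba2
# esxcfg-rescan -u vmhba2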
  • 71. New/Updated CLI Commands (3): vmkfstools The vmkfstools command exists in the Service Console and the VI CLI 4.0. Grow a VMFS: vmkfstools -G Inflate a VMDK from thin to thick: vmkfstools -j Import a thick VMDK to thin: vmkfstools -i <src> -d thin Import a thin VMDK to thick: vmkfstools -i <src thin disk> -d zeroedthick
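A few hedged, fully spelled-out examples (the disk file names are placeholders; the -G invocation passes the same partition as both source and destination, as noted in the editor's notes later in this deck):
Grow a VMFS extent after the backing LUN has been expanded: # vmkfstools -G /vmfs/devices/disks/naa.600601604320170080d407794f10dd11:1 /vmfs/devices/disks/naa.600601604320170080d407794f10dd11:1
Import a thick VMDK to thin: # vmkfstools -i /vmfs/volumes/cormac_grow_vol/src/src.vmdk -d thin /vmfs/volumes/cormac_grow_vol/dst/dst-thin.vmdk
Inflate a thin VMDK to thick: # vmkfstools -j /vmfs/volumes/cormac_grow_vol/dst/dst-thin.vmdk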
  • 72. vSphere Storage Section 1 - Naming Convention Change Section 2 - Pluggable Storage Architecture Section 3 - iSCSI Enhancements Section 4 - Storage Administration (VC) Section 5 - Snapshot Volumes & Resignaturing Section 6 - Storage VMotion Section 7 - Volume Grow / Hot VMDK Extend Section 8 - Storage CLI Enhancements Section 9 - Other Storage Features/Enhancements
  • 73. Other Storage Features/Enhancements Storage General The number of LUNs that can be presented to the ESX 4.0 server is still 256 . VMFS The maximum extent volume size in VI4 is still 2TB . Maximum number of extents is still 32, so maximum volume size is still 64TB . We are still using VMFS3, not VMFS 4 (although the version has increased to 3.33). iSCSI Enhancements 10 GbE iSCSI Initiator – iSCSI over a 10GbE interface is supported. First introduced in ESX/ESXi 3.5 u2 & extended back to ESX/ESXi 3.5 u1.
  • 74. Other Storage Features/Enhancements (ctd) NFS Enhancements IPv6 support (experimental) Support for up to 64 NFS volumes (the old limit was 32) 10 GbE NFS Support – NFS over a 10GbE interface is supported. First introduced in ESX/ESXi 3.5 u2 FC Enhancements Support for 8Gb Fibre Channel First introduced in ESX/ESXi 3.5 u2 Support for FC over Ethernet (FCoE)
  • 75. Other Storage Features/Enhancements (ctd) Paravirtualized SCSI Driver (Guest OS) Eliminates the need to trap privileged instructions as it uses hypercalls to request that the underlying hypervisor execute those privileged instructions . Handling unexpected or unallowable conditions via trapping can be time-consuming and can impact performance.
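A hedged sketch of how a paravirtualized SCSI controller appears in a VM's .vmx file once selected (the controller number is illustrative; this is normally configured through the VM settings dialog rather than edited by hand):
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"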
  • 76. Other Storage Features/Enhancements (ctd) ESX 3.x boot time LUN selection – which sd device represents an iSCSI disk and which represents an FC disk?
  • 77. Other Storage Features/Enhancements (ctd) ESX 4.0 boot time LUN selection. Hopefully this will address incorrect LUN selections during install/upgrade.
  • 78. Questions? Nathan Small Staff Engineer [email_address]

Editor's Notes

  1. New storage acronyms for vSphere 4.0
  2. The C in vmhbaN:C:T:L:P is for channel. We typically only see channel 0 in FC & iSCSI. Other identifiers might include a T10 name, as observed with the OpenFiler appliance. We’re moving away from the Controller:Target:LUN:Partition naming and using the NAA ID more. MPX Multipath X device = unknown (cdroms, some local storage types too)
  3. The SDK for third parties to write their own code for the PSA is called the VMKDH. This eliminates the recertification issue with the storage vendors. EMC will be first for GA (Powerpath). Remember that the PSA and the Multi Pathing Plugin (MPP) are just replacing the tasks that were carried out by the SCSI Mid Layer in previous ESX versions.
  4. The multipath plug-ins are VMkernel modules, e.g. NMP. Advanced functions like the quantumSched are now a part of the PSA, such as how much time VMs can run on the run-queue. Note that this is still SCSI mid layer stuff, except we now split the operations between the PSA & the MPP.
  5. Multi Path Plugin (MPP) Point 2: PSA discovers IBM on 2 paths and EMC on 4 paths and determines which MPP to give the paths to. It collapses the 4 paths for one array and notes it as one array.
  6. NMP Native Multipathing Plugin
  7. 3rd point: At this point in time, there are no known partners working on SATPs.
  8. The esxcli command also appears in the RCLI for VI4.
  9. The new Pluggable Storage Architecture (PSA) also allows for third party plug-ins to take control of the entire path failover and load balancing operations, replacing VMware’s NMP. Also in ESX 4, there are two ways to set a PSP for a device: 1. By setting the default PSP for a given SATP in a claim rule. 2. By setting the managing PSP for a given device, like so: # esxcli nmp device setpolicy --device naa.50060160ffe0000150060160ffe00001 --psp VMW_PSP_RR There is currently no way to set PSPs by vendor/model strings. As for config options, there is currently no way to provide PSP-wide config options in a claimrule the way there is for a SATP. However, the per-path and per-device configuration you set using esxcli, such as: # esxcli nmp psp setconfig --device naa.50060160ffe0000150060160ffe00001 --config iops=213 is persistent across reboots, so you only need to set it up once per device. Actually, once all your devices are claimed, you can just make a shell script to set the per-device configuration for all your devices.
  10. Round Robin is the only Path Selection Policy which load balances. It has been around in experimental form since ESX 3.0 but is finally supported in ESX 4.0. It only uses active paths so is of most use on Active/Active arrays. However, in Active/Passive arrays, it will load balance between ports on the same Storage Processor.
  11. "mpx" is a VMware-specific namespace. It is not an acronym - it could roughly stand for "Multi Pathing x". The mpx namespace is used when no other valid namespace, such as NAA, can be obtained from the LUN. It is not globally unique and is not persistent across reboots. Typically only local devices will not have NAA, IQN, etc. namespaces and so have names starting with "mpx.". The Storage Array Type Device Config: {navireg ipfilter} config settings are specific to the SATP_CX, SATP_INV, and SATP_ALUA_CX. The accepted values are: navireg_on, navireg_off, ipfilter_on, ipfilter_off. navireg_on starts automatic registration of the device with Navisphere; navireg_off stops the automatic registration of the device; ipfilter_on stops the sending of the host name for Navisphere registration, used if the host is known as "localhost"; ipfilter_off enables the sending of the host name during Navisphere registration. # esxcli nmp satp getconfig -d naa.60060160432017003461b060f9f6da11 {navireg ipfilter}
  12. The output shown on the following slides is collected in the vm-support dumps
  13. The vicfg-mpath command also takes --username & --password as arguments.
  14. For FC arrays, this will show World Wide Names (WWNs)
  15. To my knowledge, there is only one MPP under development at present (Sep 2008), and that is EMC Powerpath version 5.4.
  16. No one is currently developing SATPs or PSPs but this is the framework for new arrays so they don’t have to be qualified by VMware. Third party PSPs and SATPs are to be treated like third party software and can be unloaded in order to troubleshoot, and a reboot is required after unloading the module.
  17. Port binding is a mechanism of identifying certain VMkernel ports for use by the iSCSI storage stack . Port binding is necessary to enable storage multipathing policies, such as the VMware round-robin load-balancing, or MRU, or fixed-path, to apply to iSCSI NIC ports and paths. Port binding does not work in combination with IPV6. When users configure port binding they expect to see additional paths for each bound VMkernel NIC. However, when they configure the array under a IPV6 global scope address the additional paths will not be established. Users only see paths established on the IPV6 routable VMkernel NIC. For instance if users have two target portals and two VMkernel NICs, they see four paths when using IPv4. They see only two paths when using IPv6. Because there are no paths for failover, path policy setup does not make sense. Workaround: Use IPV4 and port binding, or configure the storage array and the ESX/ESXi host with LOCAL SCOPE IPv6 addresses on the same subnet (switch segment). You cannot currently use global scope IPv6 with port binding.
  18. Taken from the SAN Compatibility Guide
  19. Animation: Click 1 – Highlight the Storage Link in Hardware Configuration Click 2 – Highlight the datastores available on this host Click 3 – Highlight the details for a particular datastore Click 4 – NAA id banner launches - NAA – Network Address Authority
  20. Animation: Click 1 – Highlight the Storage Adapter Link in Hardware Configuration Click 2 – Highlight one particular adapter available on this host, in this case vmhba33. Note the NMP on the bottom right
  21. Animation: Click 1: Select a datastore on your ESX, click on the Properties link Click 2: This launches the VMFS properties window Click 3: Highlight the ‘Manage Paths’ button which, when clicked, launches the Manage Path window Click 4: The Manage Path window opens which displays the PSP, SATP and other pathing info used by these paths and LUN.
  22. Animation for Datacenter wide rescan: Click 1: Select your DataCenter, right click and select Rescan for Datastores Click 2: Acknowledge that the rescan may take a long time Click 3: Scan for either New Storage devices, New VMFS volumes or both Click 4: Notice the Rescan tasks for each ESX appearing in the Recent Tasks window in vCenter
  23. Identifiers that are persistent and globally unique: t10, eui, naa, rtp1, tpg, lug, md5, sns. This started in 3.5 where we started using the NAA id to identify each LUN.
  24. This output was achieved by creating a real snapshot of an EMC Clariion LUN which has a VMFS volume and presented it back to the same ESX. I tried changing the original LUN’s ID but the ESX handled this without a problem since we reference the LUN by the NAA.
  25. We have got rid of DisallowSnapshotLUN but we retain the EnableResignature purely for supporting SRM. If you do use the GUI, you need to be aware that if you use the CLI utility, or you use a VI Client to directly attach to a host, these do not get reflected automatically in VC. This causes an issue with customers whose environment is running with DisallowSnapshotLUN = 0. There is a solution for this covered in the coming slides.
  26. The equivalent RCLI for ESXi is vicfg-volume
  27. Note that after you have unmounted the persistently mounted LUN, these entries stay in the esx.conf except that the forceMount option is set to false. /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/forceMount = "false" /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1" /fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]/forceMountedLvm/readOnly = "false" Readonly is for NFS volumes, but it will not be displayed in the UI.
  28. It will let you know if it can be resignatured, and if it resides on or is visible to the same ESX host. It will not allow you to mount the snapshot if you try.
  29. Attempting to mount a snapshot volume on the same ESX as the original results in the following message in /var/log/vmkernel: Oct  1 18:11:00 cs-tse-f116 vmkernel: 4:05:33:23.794 cpu5:4111)LVM: MountSnapshotVolume:9536: Volume 48d247da-b18fd17c-1da1-0019993032e1 is already mounted This appears if attempting the operation via the GUI or CLI. The GUI doesn’t show any error or event if this operation is attempted – one simply sees the volume not becoming available. From the CLI, the following is observed: root@cs-tse-f116 ~]# esxcfg-volume -l VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol Can mount: Yes Can resignature: Yes Extent name: naa.6006016043201700f30570ed09f6da11:1     range: 0 - 15103 (MB) [root@cs-tse-f116 ~]# esxcfg-volume -m 48d247dd-7971f45b-5ee4-0019993032e1 Mounting volume 48d247dd-7971f45b-5ee4-0019993032e1 Error: SysinfoException: Node (5015) ; Status(bad0007)= Bad parameter; Message= Module: lvmdriver Instance(0): Input(3) 48d247da-b18fd17c-1da1-0019993032e1 rw naa.6006016043201700f30570ed09f6da11:1 [root@cs-tse-f116 ~]# esxcfg-volume -M 48d247dd-7971f45b-5ee4-0019993032e1 Persistently mounting volume 48d247dd-7971f45b-5ee4-0019993032e1 Error: SysinfoException: Node (5015) ; Status(bad0007)= Bad parameter; Message= Module: lvmdriver Instance(0): Input(3) 48d247da-b18fd17c-1da1-0019993032e1 rw naa.6006016043201700f30570ed09f6da11:1 [root@cs-tse-f116 ~]# esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1 Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1
  30. I am working with our engineering team on a permanent fix
  31. The first point regarding GUI support is important since previously, SVMotion operations could only be initiated via the Remote CLI in ESX 3.5. The final point, the ability to move an individual disk without moving the VM’s home, was not available in previous versions of SVMotion in ESX 3.5. SVMotion of a powered-on VM with an RDM preserves the RDM. Cold migration of a powered-off VM with an RDM still converts the RDM to a flat VMDK file.
  32. This works with non-pass thru RDMs only. Cold migration of an RDM when the VM is powered off converts it to a VMDK. If the VM is powered on, it preserves the RDM. You can also use the GUI to do Storage VMotions on older ESX 3.x hosts, but this continues to use the old snapshot mechanism.
  33. There is still no plan to include an svmotion executable on the Service Console of the ESX.
  34. Previously with RDMs, we had to remove the mapping file and remake it – now this is updated ‘on-the-fly’
  35. VM availability impact refers to what happens to availability if we use the feature. With Volume Grow, availability is no different. With extents, there is a dependency on the first extent, so that if the first extent goes, they all go, i.e. we lose access to the whole volume.
  36. This option first became available in ESX/ESXi 3.5 U2. There is currently no shrinking capability for a VMDK.
  37. There should be esxcli and esxcfg-volumes RCLI commands in RC & GA...
  38. esxcfg-mpath used to show the size of the LUN in earlier ESX versions. It does not in ESX 4.0. This is why the output above is useful if you wish to identify a LUN based on size.
  39. vmkfstools –G takes the same partition as both the source and destination – i.e. you are growing this partition with the same partition.
  40. There is no VMFS upgrade to worry about. Older versions will stay as they are. Newly formatted volumes from ESX4.x boxes will get the new version. Newly formatted volumes from ESX3.x.x boxes will get the old version. All VMFS-3.xx versions are supported by all ESX3.x.x hosts. The 2 enhancements that lead us from 3.31 to 3.33 are both applicable to the yet unreleased VMFS-4, so nothing to report here. The following is true though (as has always been, since we throw in optimizations with every ESX release): ESX4.x.x boxes will work more efficiently on any given VMFS-3.xx volume than ESX3.x.x boxes working on the same volume.
  41. The new in-guest virtualization-optimized SCSI driver has been developed to combat performance competition with hyper-V. Paravirtual SCSI adapters are supported on the following guest operating systems: Windows 2008 Windows 2003 Red Hat Linux (RHEL) 5 The following features are not supported with Paravirtual SCSI adapters: Boot disks Record/Replay Fault Tolerance MSCS Clustering