How to extend ESXi VMFS datastore on NetApp iSCSI storage

How to Grow a VMFS Datastore connected to a NetApp Storage Array via Software iSCSI

A simple two-step process:
1. FILER SIDE: Increase the volume and LUN size using NetApp OnCommand System Manager
2. HOST SIDE: Increase the VMFS datastore size using the VMware vSphere Client

Lab setup: ESXi 5.5 environment with a NetApp Data ONTAP simulator 8.2.1 connected via iSCSI

July, 2014
Some information to start with

Types of Physical Storage
The ESXi storage management process starts with storage space that your storage administrator preallocates on different storage systems. ESXi supports the following types of storage:
 Local Storage -> Stores virtual machine files on internal or directly connected external storage disks.
 Networked Storage -> Stores virtual machine files on external storage disks or arrays attached to your host through a direct connection or through a high-speed network.

Networked Storage [FC/iSCSI: block-protocol based]
Networked storage consists of external storage systems that your ESXi host uses to store virtual machine files remotely. Typically, the host accesses these systems over a high-speed storage network. Networked storage devices are shared: datastores on networked storage devices can be accessed by multiple hosts concurrently. ESXi supports the following networked storage technologies:
 Fibre Channel (FC)
 Internet SCSI (iSCSI)
Note: Accessing the same storage through different transport protocols, such as iSCSI and Fibre Channel, at the same time is not supported.

Fibre Channel (FC)
Stores virtual machine files remotely on an FC storage area network (SAN). An FC SAN is a specialized high-speed network that connects your hosts to high-performance storage devices. The network uses the Fibre Channel protocol to transport SCSI traffic from virtual machines to the FC SAN devices. To connect to the FC SAN, your host must be equipped with Fibre Channel host bus adapters (HBAs). Unless you use Fibre Channel direct-connect storage, you need Fibre Channel switches to route storage traffic. If your host contains FCoE (Fibre Channel over Ethernet) adapters, you can connect to your shared Fibre Channel devices by using an Ethernet network.
Following figure depicts virtual machines using Fibre Channel storage.

Internet SCSI (iSCSI)
Stores virtual machine files on remote iSCSI storage devices. iSCSI packages SCSI storage traffic into the TCP/IP protocol so that it can travel through standard TCP/IP networks instead of a specialized FC network. With an iSCSI connection, your host serves as the initiator that communicates with a target located in a remote iSCSI storage system. ESXi offers the following types of iSCSI connections:
 Hardware iSCSI: Your host connects to storage through a third-party adapter capable of offloading the iSCSI and network processing. Hardware adapters can be dependent or independent.
 Software iSCSI: Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity.
Note: You must configure iSCSI initiators for the host to access and display iSCSI storage devices.
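For hosts managed from the command line, the software iSCSI setup described above can also be sketched with esxcli (ESXi 5.x). The adapter name and the filer's portal address below are placeholder values for illustration, not part of the original lab:

```shell
# Enable the software iSCSI initiator in the VMkernel
esxcli iscsi software set --enabled=true

# Verify that the software initiator is now enabled
esxcli iscsi software get

# Point dynamic (SendTargets) discovery at the NetApp filer's iSCSI portal
# (vmhba33 and 192.168.100.50:3260 are example values -- substitute your own)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.50:3260
```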
Following figure depicts virtual machines using different types of iSCSI initiators:
 In the left example, the host uses a hardware iSCSI adapter to connect to the iSCSI storage system.
 In the right example, the host uses a software iSCSI adapter and an Ethernet NIC to connect to the iSCSI storage.

Networked Storage [NAS/NFS: file-based protocol]
ESXi can access a designated NFS volume located on a NAS server, mount the volume, and use it for its storage needs. You can use NFS volumes to store and boot virtual machines in the same way that you use VMFS datastores. ESXi supports the following shared storage capabilities on NFS volumes:
 vMotion
 VMware DRS and VMware HA
 ISO images, which are presented as CD-ROMs to virtual machines
 Virtual machine snapshots
When you work with NFS storage, the following considerations apply:
 The maximum size of an NFS datastore depends on the support that the NFS server provides; ESXi does not impose any limits on NFS datastore size.
 If you use non-ASCII characters to name datastores and virtual machines, make sure that the underlying NFS server offers internationalization support. If the server does not support international characters, use only ASCII characters; otherwise, unpredictable failures might occur.
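As a point of comparison with the block-protocol setup, mounting an NFS export as a datastore is a one-liner from the ESXi shell. The server address, export path, and datastore name below are illustrative placeholders only:

```shell
# Mount an NFS export as a datastore (ESXi 5.x); all names and addresses are examples
esxcli storage nfs add --host=192.168.100.60 --share=/vol/nfs_vol1 --volume-name=nfs_ds1

# List NFS mounts to confirm the datastore is available
esxcli storage nfs list
```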
How Virtual Machines Access Storage
When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to the storage device. ESXi supports:
 Fibre Channel (FC)
 Internet SCSI (iSCSI)
 Fibre Channel over Ethernet (FCoE)
 NFS
Regardless of the type of storage device your host uses, the virtual disk always appears to the virtual machine as a mounted SCSI device. The virtual disk hides the physical storage layer from the virtual machine's operating system. This allows you to run operating systems that are not certified for specific storage equipment, such as SAN, inside the virtual machine. The following graphic depicts five virtual machines using different types of storage to illustrate the differences between each type.
The following table compares ESXi support across the different storage types.

NetApp Storage Systems iSCSI recommendations:
1. If any of your iSCSI initiators are part of an initiator group (igroup), disable ALUA on the NetApp filer.
2. Set the LUN type and initiator group type as follows:
 LUN type: VMware (if the VMware type is not available, use Linux)
 Initiator group type: VMware (if the VMware type is not available, use Linux)
Courtesy: server-501-storage-guide.pdf
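The LUN-type and igroup-type recommendations above can also be applied from the filer console. A minimal sketch in Data ONTAP 7-Mode syntax (clustered ONTAP uses `lun igroup create` / `lun create` with named parameters instead); the igroup name, volume path, size, and initiator IQN are all placeholders:

```shell
# Create an iSCSI igroup with ostype "vmware" and add the host's IQN (example IQN)
igroup create -i -t vmware esx_hosts iqn.1998-01.com.vmware:esx01-12ab34cd

# Create a LUN with ostype "vmware" inside an existing volume (example path and size)
lun create -s 50g -t vmware /vol/vmfs_vol/vmfs_lun1

# Map the LUN to the igroup so the ESXi host can see it
lun map /vol/vmfs_vol/vmfs_lun1 esx_hosts
```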
Steps to grow a VMFS datastore on the NetApp array:
1. FILER SIDE: Increase the volume and LUN size using NetApp OnCommand System Manager
2. HOST SIDE: Increase the VMFS datastore size using the VMware vSphere Client

FILER SIDE PROCESS:
Please note: volume and LUN resizing is a 100% non-disruptive, quick, and safe process on a NetApp filer.
1. VOLUME: After making sure that the aggregate has enough free space to accommodate the volume resize, open NetApp OnCommand System Manager, select the volume, and click 'Resize'.
2. This starts the self-explanatory "Volume Resize Wizard". Enter the new size; you will be asked whether you want to keep the snapshots or not. In most situations snapshot clean-up is done automatically according to the retention schedule, so just click 'Next'.
3. Review the changes and click 'Next' to resize the volume, then click 'Finish' to close the wizard.
4. LUN: Now it's time to extend the LUN inside the volume we just resized. Select the LUN and click 'Edit', as shown in the figure below.
My thoughts: It would have been more helpful had NetApp provided a 'Resize' button for LUNs as well, instead of 'Edit'; for some reason it just doesn't sound right. To resize the LUN, just click 'Edit'.
5. Enter the new size and click 'Save and Close'; as simple as that.
Note: When you click 'Save and Close', you are presented with a message box indicating that the storage-side work is done and that the expansion should now be performed on the host side, as seen in the figure below.
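The same filer-side grow can be driven from the ONTAP console instead of System Manager. A sketch in 7-Mode syntax (clustered ONTAP uses `volume size` and `lun resize` with a `-vserver` parameter); the aggregate, volume, LUN path, and sizes are placeholders:

```shell
# Check that the aggregate has enough free space for the volume grow
df -A aggr1

# Grow the volume by 20 GB (relative sizing with "+")
vol size vmfs_vol +20g

# Then grow the LUN to its new absolute size
lun resize /vol/vmfs_vol/vmfs_lun1 60g
```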
HOST SIDE PROCESS:
1. Log in to the vCenter Server or ESXi host, select the ESXi host, open the 'Configuration' tab, click 'Storage Adapters', and click 'Refresh'.
Note: You can also click 'Rescan', but a refresh should do the job; in my case it worked.
2. Now go to 'Storage' under the 'Configuration' tab, select the datastore you want to resize, and click 'Properties'.
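The refresh/rescan in step 1 has a command-line equivalent on the ESXi host; the adapter name below is an example value:

```shell
# Rescan all storage adapters so the host picks up the LUN's new size
esxcli storage core adapter rescan --all

# Or rescan only the software iSCSI adapter (vmhba33 is a placeholder)
esxcli storage core adapter rescan --adapter=vmhba33
```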
3. Once the Properties window opens, click 'Increase' to resize the VMFS volume, as shown in the figure below. Clicking 'Increase' pops up a wizard; select the device on which the datastore resides and click 'Next'. [You may have to click 'Refresh' a few times before the resized device shows up.]
Note: In my lab setup this was fairly simple because I had only one LUN carved out, but in a real production environment there could be many devices listed, so make sure you select the correct device. If you accidentally select a different device, you will add an extent to the existing VMFS volume using another LUN, which is probably not what you want to achieve.
For example: if you want to extend the VMFS datastore residing on the same device (LUN), select the LUN you resized on the storage array and go ahead with the rest of the steps. Basically, we are extending an extent on the same device instead of adding a NEW EXTENT (another LUN). If instead you want to add a NEW EXTENT (another device/LUN), select that other device and click 'Next', as shown in the figure below.
In the example below, I could also choose to add another 4 GB LUN as an extent to the existing 56 GB VMFS volume.
4. In the next screen, carefully review the changes and click 'Next'.
5. In the next screen, choose how much of the newly added space will be used and click 'Next'.
6. Finally, review all the changes and click 'Finish'.
If all goes well, the datastore (VMFS volume) is grown as expected, and you should then be able to either create more VMs on it or move VMs to this datastore.

July, 2014
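Once the wizard finishes, the new capacity can be confirmed from the ESXi shell:

```shell
# List VMFS (and other) filesystems with their size and free space
esxcli storage filesystem list

# Datastores are also visible as mounts under /vmfs/volumes
df -h
```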