VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm

Transcript
  • 1. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm
  • 2. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm
    Introduction to the VMAX 40K Hands on Labs
    1 Introduction 5
    1.1 Exercise 1: Introduction to Unisphere for VMAX
    2 Introduction to Unisphere for VMAX 7
    2.1 Exercise 2: Configuring Federated Tiered Storage (FTS)
    3 How to Configure Federated Tiered Storage (FTS) 24
    3.1 Exercise 3: Using VP Snap
    4 Using VP Snap 42
    4.1 Exercise 4: Configuring Virtual Provisioning and FAST VP
    5 How to Configure Virtual Provisioning & FAST VP 55
    5.1 Exercise 5: Using Tier Advisor to Size an Array for FAST VP
    6 Using Tier Advisor 76
    6.1 Exercise 6: Using the symvm Command to Provision Gatekeepers to a Virtual Machine
    7 Using symvm to Provision Gatekeepers to a VM 95
    7.1 Exercise 7: Using Dynamic Cache Partitioning's Analysis Mode
    8 Using Dynamic Cache Partitioning's Analysis Mode 99
    8.1
  • 3. Conclusion
    9 Conclusion 111
  • 4. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 4 Introduction to the VMAX 40K Hands on Labs
  • 5. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 5 Introduction Welcome to the VMAX 40K Hands-on lab experience. This series of exercises introduces the latest features of Enginuity 5876 using a virtual environment. This environment gives each lab user exclusive use of a VMAX 40K, permitting a range of operations without interfering with other labs. Virtual Enginuity has limitations, seen most obviously in the small device sizes; however, all lab exercises have been tested and verified. It is still possible that operations outside the exercise boundaries will cause errors. If this occurs, simply raise a hand to request assistance so the environment can be reset and the exercise restarted. The following exercises are available for you to run through. Each exercise can be executed independently; however, if you execute them in order, you can reuse components configured in one exercise in later exercises.
    1. Introduction to Unisphere for VMAX
    2. Configuring Federated Tiered Storage (FTS)
    3. Using VP Snap
    4. Configuring Virtual Provisioning and FAST VP
    5. Using Tier Advisor to Size an Array for FAST VP
    6. Using the symvm Command to Provision Gatekeepers to a Virtual Machine
    7. Using Dynamic Cache Partitioning's Analysis Mode
  • 6. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 6 Exercise 1: Introduction to Unisphere for VMAX
  • 7. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 7 Introduction to Unisphere for VMAX Unisphere for VMAX is a storage management product that provides a common look and feel using big button technology for simplicity and ease of use. In this lab, using Unisphere v1.0.0.5, you will examine all the main sections of the tool, taking a tour of the new functionality for managing a VMAX system, its users, devices, hosts and other features. Launch Unisphere for VMAX Double click the Unisphere for VMAX icon. Login to Unisphere for VMAX Enter smc for the User and Password, then click the Login button.
  • 8. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 8 The Main Screen Make yourself familiar with Unisphere's Home Page:
    1. Unisphere Controls
    2. Toolbar - provides links to different sections in Unisphere (Note: until an array is selected, the options available on the toolbar are limited.)
    3. Navigation Path - allows quick navigation between sections in Unisphere
    4. The Dashboard - displays the objects or sections available to the user to manage
    5. Common Tasks - provides access to Wizards used to perform common administrative tasks
    The Main Screen (Continued) Make yourself familiar with the bottom half of Unisphere's Home Page:
    1. Capacity Indicator - available for every array managed by the Unisphere instance
  • 9. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 9
    2. Capacity Indicator Legend
    3. Alert Indicator - indicates the number of alerts Unisphere has received for all of the managed arrays
    4. User/Role Indicator - indicates the user logged into Unisphere and their assigned role
    5. Last Updated Indicator - displays the date and time Unisphere last refreshed its information
    6. Administration Link - clicking the Administration link will allow users to manage alerts, preferences, user roles and authentication options
    7. Common Tasks - click the " >" to expand or collapse the Common Tasks window
    Explore the Unisphere Controls Hold your cursor over each of the Symmetrix Controls to view the assigned tooltip. Click the Gear icon to display user Preferences. Enable Remote Connection Optimization Click the Optimize for Remote Connection check box and the OK button to minimize Unisphere's graphic animations. Unisphere Administration Click the Administration link located in the lower right-hand corner of the screen to display the Unisphere administrative options.
  • 10. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 10 Alert Settings Click Alert Settings to display the various options related to alerts on the Dashboard. Notifications Click Notifications to display the options for notifying users about various alerts. Email Configuration Use the scroll bar to navigate down to the Email Configuration section.
  • 11. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 11 Return to Administration After reviewing all of the options available for sending Alert notifications, click the Administration link in the navigation path. Explore the remaining Administration options available:
    1. Authentication
    2. Preferences
    3. Users and Roles
    4. Link and Launch
    Return Home Click Home on the toolbar to return to Unisphere's home page. The Home Page The home page displays all of the arrays managed by the Unisphere instance and includes the following basic information about them:
    1. The Symmetrix identification number - 000194900001
    2. The indication this array is locally attached to the host running Unisphere - Local
    3. The array's model number - VMAX40K
  • 12. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 12 4. The Enginuity level running on the array - 5876.80.55 5. The number and severity of the Alerts 6. The physical capacity indicator Set Unisphere's Context Click the VMAX40K's graphic to set Unisphere's context. Setting Unisphere's context has the following effects: 1. Indicates to Unisphere which array the user wishes to manage 2. Updates the main toolbar with the options available for the user to manage Examine the Options on the Toolbar Hover your cursor over the System, Storage, Hosts, and Data Protection sections on the Toolbar. Notice that links to unique sub-sections (1) and Common Tasks (2) are displayed for the user to select from for each section.
  • 13. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 13 Navigate to the System Dashboard Hover your cursor over the System section on the Toolbar and then click Dashboard to navigate to the System Dashboard. Examine the System Dashboard Examine the System Dashboard. Note that information about the array is divided into four sections:
  • 14. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 14
    1. A system overview section, which includes the array's serial number, model number, the Enginuity level and an indication if the array is locally attached to the host running Unisphere.
    2. A hardware section, which includes links to drill down on the various director types in the array.
    3. A capacity section, which summarizes the physical and virtual capacity in the array.
    4. An Alert section, which summarizes all of the alerts for the array.
    Navigate to the Front End Directors Sub-Section Click the Front End Directors link in the Hardware section to view the array's Front End Directors. Examine the Front End Directors Examine the Front End Directors table. Note that there are two GigE type directors - SE-7E and SE-8E - configured in the array.
  • 15. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 15 Navigate to the Thin Pools Sub-Section Hover your cursor over the Storage section on the Toolbar and select the Thin Pools link. Examine the Thin Pools Table Examine the entries in the Thin Pools table. Note the fields for the EXTERNAL_TP pool are all N/A. This indicates the pool is empty - that is, it contains no data devices. Data devices provide storage to thin pools. Data devices are supported on all drive technologies (Technology), including:
    • EFD - Enterprise Flash Drives
    • FC - Fibre Channel
    • SAS - Serial Attached SCSI
    • SATA - Serial ATA
    When created, data devices are assigned a RAID protection type (Configuration). On VMAX systems the RAID protection types supported include:
    • Raid 1, which is denoted as 2-way-mir in Unisphere.
    • Raid5 3+1
  • 16. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 16
    • Raid5 7+1
    • Raid6 6+2
    • Raid6 14+2
    Beginning with Enginuity 5876, thin pools are supported for both Open Systems - Fixed Block Architecture (FBA) - and Mainframe - Count Key Data (CKD) - emulation types. Thin pools can only contain data devices created on the same disk technology, with the same RAID protection and emulation type. Note the EXTERNAL_TP pool will be used later for the Federated Tiered Storage (FTS) lab. The VPsnap pool will be used for the VP Snap lab. Review the Details of the VPsnap Pool Double click the icon next to the VPsnap pool to view its details. Examine the Properties of the VPsnap Pool Use the scroll bar to examine the properties of the VPsnap thin pool. Notice the value of the fields below:
  • 17. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 17
    • Enabled Capacity (GB) - the aggregate capacity of all the Data Volumes in the pool.
    • %Subscription - a measure of oversubscription. In this case the pool is oversubscribed nearly 6 to 1.
    Related Objects There are two types of objects related to thin pools. The first is the Data Volumes, also referred to as DATA devices or TDATs, which were described in a previous step. The second is the Bound Volumes, also referred to as Thin Devices or TDEVs. A thin device is a cache-only device that is assigned a size when created. However, until it is bound to a thin pool it consumes zero space on disk. Once bound to a pool, a thin device consumes its first increment of 768KB of real storage from one of the data devices in the pool. When provisioned to a host, a thin device reports its full size to the host, but only consumes storage in 768KB increments as the host writes to new areas of the device. Click the Bound Volumes link to review the details about the thin devices bound to the VPsnap pool. Bound Volumes Examine the devices in the Bound Volumes table. Notice that each device's Capacity (GB) is 2, just like the pool's Total Capacity (GB).
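The subscription figure can be sanity-checked with simple arithmetic. The shell sketch below assumes the capacities reported for this lab's VPsnap pool: bound 2 GB thin devices against a 2 GB pool, with the device count of six inferred from the 6:1 ratio (it is not stated directly in the lab):

```shell
# Subscription % = (total bound thin-device capacity / pool capacity) * 100.
# Assumed figures from the lab's VPsnap pool: six 2 GB TDEVs, 2 GB pool.
tdev_count=6
tdev_gb=2
pool_gb=2
subscription_pct=$(( tdev_count * tdev_gb * 100 / pool_gb ))
echo "${subscription_pct}%"   # prints 600%, i.e. oversubscribed 6 to 1
```

The same formula explains the per-device Total Subscription % of 100 seen later: one 2 GB TDEV against the 2 GB pool gives 100%.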
  • 18. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 18 Scroll to the right until the Total Subscription % field is visible and note that the reported value for each device is 100. The Total Subscription % represents what percentage of the pool's capacity would be consumed if the thin device were fully allocated. In this case each device has the potential to use all of the pool's capacity, so the pool is said to be oversubscribed - 6 to 1, or 600%. Navigate to the Hosts Section Click the Hosts button on the toolbar. Examine the Hosts Sub-Sections The Hosts section contains the five sub-sections below. Click each one in turn and examine the information they contain.
    Initiators - the table shows the initiators (host HBAs) that are zoned to the array and which array port(s) they're zoned to
    Initiator Groups - groups of initiators associated with a single host
    Host Cache Adapters - lists the VFCache cards that are connected to the array
    Masking Views - the glue that ties Initiators, Port Groups and Storage Groups together to map and mask devices to a host
    Port Groups - groups of array ports
  • 19. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 19 Note, Storage Groups are in the Storage section. Navigate to the Data Protection Section Click Data Protection on the toolbar. Examine the Data Protection Sub-sections The Data Protection section contains the five sub-sections below. Click each one in turn and examine the information they contain.
    Local Replication - allows the user to monitor and manage local TimeFinder replication sessions
    Migration - allows the user to monitor and manage migration sessions (symmigrate)
    RecoverPoint - allows the user to monitor and manage RecoverPoint replication sessions
    Device Groups - containers for managing the devices used in the replication sessions above
    Replication Groups & Pools - allows the user to monitor and manage TimeFinder Snap pools and SRDF groups
  • 20. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 20 Open the Host's Services Minimize the Firefox browser window and click the Services icon on the desktop. Start the storstpd Daemon Scroll down until the EMC storstpd daemon is visible, then select it. Right click the selection and click Start to start the service. The storstpd daemon collects performance data from the local arrays and is required to view Performance data in Unisphere for VMAX. Navigate to the Performance Settings Sub-section Expand the Firefox browser window to continue. Hover over Performance on the toolbar and click Settings on the drop-down menu to navigate to
  • 21. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 21 the Settings sub-section. Open Systems Registration Click System Registrations. Select the Array Double click on the Symmetrix ID ending in 0001 to view the detailed System Registration page. Register the Array Click the Real Time and Diagnostic check boxes to begin collecting both Real Time and Diagnostic data for the array. Click the Apply button to save the changes.
  • 22. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 22 Verify the Registration Click the browser's Back button to go back to the System Registrations section. Note the green circles indicate the Symmetrix ID ending in 0001 is registered to collect both Real Time and Diagnostic data. Conclusion This brief introduction to Unisphere for VMAX has armed you with the knowledge to:
    • Navigate around inside the tool using the toolbar and the navigation path
    • Identify the different sections inside the tool and the information they contain
    • Register an array so that Unisphere can collect performance data
    In the remaining exercises in the VMAX 40K Hands-on lab experience, you will have opportunities to configure Federated Tiered Storage, Virtual Provisioning and FAST VP.
    1. Introduction to Unisphere for VMAX
    2. Configuring Federated Tiered Storage (FTS)
    3. Using VP Snap
    4. Configuring Virtual Provisioning and FAST VP
    5. Using Tier Advisor to Size an Array for FAST VP
    6. Using the symvm Command to Provision Gatekeepers to a Virtual Machine
    7. Using Dynamic Cache Partitioning's Analysis Mode
  • 23. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 23 Exercise 2: Configuring Federated Tiered Storage (FTS)
  • 24. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 24 How to Configure Federated Tiered Storage (FTS) What is FTS? Federated Tiered Storage (FTS) allows LUNs that exist on external arrays to be used to provide physical storage for Symmetrix VMAX. The external LUNs can be used as raw storage space for the creation of Symmetrix devices in the same way internal Symmetrix physical drives are used. These devices are referred to as eDisks. Data on the external LUNs can also be preserved and accessed through Symmetrix devices. This allows the use of Symmetrix Enginuity functionality such as local replication, remote replication, storage tiering, data management, and data migration with data that resides on external arrays. New Enginuity components required by FTS FTS is implemented entirely in Enginuity and does not require any additional Symmetrix hardware. Connectivity with an external array will be established through the same fibre optic SLICs currently used for configuring FAs and RFs. Instead of running FA or RF emulation, however, the processors will run a new type of emulation. DX directors A new emulation, referred to as DX (for DA external), has been developed that adapts the traditional DA emulation model to act on external logical units as though they were physical drives. The fact that a DX is using external LUNs instead of a DA using internal LUNs is transparent to other director emulations and to the Enginuity infrastructure in general. With respect to most non-drive-specific Enginuity functions, a DX behaves the same as a DA. eDisks An eDisk is a logical representation of an external LUN when it is added into the VMAX configuration. The terms “eDisk” and “external spindle” both refer to this external LUN once it has been placed in an external disk group and a virtual RAID group. External disk group External disk groups are virtual disk groups that are created by the user to contain eDisks.
Exclusive disk group numbers for external disk groups start at 512. External spindles and internal physical spindles cannot be mixed in a disk group. Virtual RAID group An unprotected, virtual RAID group is created for each eDisk added to the system. The RAID group is virtual because eDisks are not protected locally by the VMAX; they rely on the protection provided by the external array. Virtualizing The process of adding an eDisk to a Symmetrix array is called virtualizing the eDisk. Virtualizing has two modes of operation: • External Provisioning - Allows the user to access LUNs existing on external storage as raw
  • 25. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 25 capacity for new Symmetrix devices. These devices are called externally provisioned devices. • Encapsulation - Allows the user to preserve existing data on external LUNs and access it through Symmetrix volumes. These devices are called encapsulated devices. External provisioning When an eDisk is virtualized for external provisioning, Enginuity creates an external spindle and adds it to the specified external disk group. External disk groups are separate from disk groups containing internal physicals and start at disk group number 512. Because RAID protection is provided by the external array, eDisks are added to unprotected virtual RAID groups. Symmetrix devices can then be created from the external disk group to present to users. Encapsulation Encapsulation has two modes of operation: • Encapsulation for disk group provisioning (DP encapsulation) - The external spindle is created and added to the specified external disk group and unprotected RAID group. Symmetrix devices are also created at the same time, allowing access to preserved data. Otherwise the Symm devices will be treated as any other VMAX volumes. • Encapsulation for virtual provisioning (VP encapsulation) - Just as with DP encapsulation, the external spindle is created and added to the specified external disk group and to an unprotected RAID group. Data devices (TDATs) are then created and added to a specified thin pool. Fully non-persistently allocated thin devices (TDEVs) are also created and bound to the pool. Extents are allocated to the external LUN through the TDAT. Configuring FTS Virtual Provisioning Encapsulation In this lab, an external LUN will be encapsulated and configured for Virtual Provisioning. When the device is encapsulated and a thin pool is specified, a data device (TDAT) will be created, added to the pool, and enabled. 
A fully allocated thin device (TDEV) will be bound to the pool. The thin device can then be presented to the host, allowing it to access the preserved data on the external array through the VMAX. Because of the 1:1 relationship required between the data device and the thin device for VP Encapsulation, there are some differences between VP in an encapsulated environment and VP using non-encapsulated external or internal data devices. Pools with VP Encapsulated devices will have their thin and data devices 100% allocated (as well as the pool itself). Operations like add device, unbind, balance, and reclaim are not applicable to encapsulated thin pools and devices. Navigate to the Storage Section Click the Storage icon on the Unisphere toolbar.
  • 26. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 26 Navigate to the Disk Group Section Scroll down until the Disk Groups big button is available, then click on it to display the Disk Groups table. Examine the External Disk Group Locate disk group # 512 and examine its attributes. All disk groups numbered 512 and above contain external disks only. External disks and internal disks cannot be mixed in any disk group. [OPTIONAL] List the Disk Groups using SYMCLI To list the Disk Groups in SYMCLI, open a command prompt and execute the following command: symdisk -sid 01 list -dskgrp_summary
  • 27. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 27 Locate disk group # 512 in the output and examine its attributes. Return to the Storage Section Click the Storage link in the navigation path to return to the Storage Section. Navigate to the External Storage Section Scroll down until the External Storage big button is available, then click on it to display the External Storage tables. View the Control Ports Click the triangle next to the folder in the Control Ports table to display the DX ports. Use the scroll bar to view all of the details. DX directors are configured in dual initiator (DI) pairs like traditional DAs. They are fully redundant like DAs and a failing director will fail over when necessary to the other fully functioning director in the DI pair. DI pairs will always be configured on the same engine with the same processor number. For
  • 28. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 28 example, in a 2 engine VMAX, 7G and 8G would be a valid pair, as would 9H and 10H. Both ports on a processor will be automatically configured as DX ports when the emulation is loaded, and both must be cabled and part of the FTS configuration. EMC requires a minimum of 4 paths to external devices, meaning that at least 4 ports belonging to a single DX dual initiator pair must be configured. Notes: DX directors must be configured by EMC. Once the DX emulation has been loaded on the processors, FTS is completely user-configurable. If converting FAs to DXs, any previously assigned devices must be unmapped and unmasked, and the FA ports must be removed from any port groups. View the External Ports Click the triangle next to the folder in the External Ports table to display the external ports. Use the scroll bar to view all of the details. [OPTIONAL] View the Control Ports & External Ports in SYMCLI If you wish to view the control ports and external ports in SYMCLI, open a command prompt and execute the following command:
  • 29. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 29 symsan -sid 01 list -sanports -dx all -p all Note the control ports are listed in the first column and the external port WWNs are listed in the last column of the output. View the External LUNs Click the triangle next to the folder in the External LUNs table to display the external LUNs. Use the scroll bars to view all of the details. [OPTIONAL] View the External LUNs in SYMCLI To view the external LUNs visible to the first control port 7F:0, execute the following command: symsan -sid 01 list -dir 7F -p 0 -sanluns -wwn 50000972C0000558 Note, the WWN 50000972C0000558 at the end of the command line is the WWN of the remote port connected to the control port.
  • 30. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 30 Import External LUNs Select the first External LUN and click the Virtualize button to start the process of adding the external LUN as an eDisk. Complete the Virtualize External LUNs Dialog Complete the Virtualize External LUNs Dialog as shown above. When encapsulating a device for Virtual Provisioning, a thin pool name must be specified. If an appropriate thin pool does not already exist, one must be created. Adding the eDisk, choosing to encapsulate the data, and choosing a thin pool will cause the eDisk to be created along with the TDAT (DATA device), which will be added to the pool, and the corresponding TDEV (thin device), which will be bound to the pool.
  • 31. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 31 Confirm Virtualizing External LUNs Read the Confirm Virtualizing External LUNs pop-up window. Note that with this option, the data on the external LUN will be preserved. Click the OK button to continue. Acknowledge the Virtualization Task has Been Added to the Job List Click the Close button to acknowledge the Virtualize External LUNs task has been added to the Job list. Navigate to the Jobs List Click the Job List link, which is located below the Common Tasks. Run the Job Select the Virtualize External LUN job and click the Run button. Click OK to continue.
  • 32. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 32 Acknowledge the Confirmation Pop-up Click the OK button to acknowledge the Confirmation pop-up window. Monitor the Task Monitor the task in the Job List until its status changes from RUNNING to SUCCEEDED. Note, this step may take a few minutes to complete. If you wish, you may switch to your command prompt window and execute the [OPTIONAL] SYMCLI command in the next step while you wait. However, before proceeding with the remainder of the Unisphere portion of the exercise, ensure the task completes.
  • 33. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 33 [OPTIONAL] Virtualize an External LUN using SYMCLI To prepare to virtualize one of the remaining external LUNs, execute the following commands in the Command Prompt: 1. Change to the FTS directory: cd c:\labs\FTS 2. Display the contents of the file fts_add_edisks.txt: type fts_add_edisks.txt Note the following about the file's contents:
    • The WWNs after wwn= match the WWNs in the LUN WWN column from the previous SYMCLI command
    • The disk_group=512 option designates that the disk will be imported into the same disk group, number 512
    • The encapsulate_data=no option tells the system that this external LUN will be treated as a
  • 34. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 34 raw disk and the data it contains will not be preserved. 3. While holding down the mouse's left button, select the last line from the file. Click the mouse's right button to copy the line to the clipboard. 4. Type the command below, clicking the mouse's right button where indicated by <right click> to paste the contents of the clipboard: symconfigure -sid 01 -cmd "<right click>" -noprompt commit Note, the -noprompt option in the command above may be abbreviated -nop, just as commit may be abbreviated com. Navigate to the Thin Pools Section Hover the cursor over the Storage option on the toolbar, then click Thin Pools. Examine the EXTERNAL_TP Examine the information in the Thin Pools table for the EXTERNAL_TP thin pool. Note the pool's Configuration is Unprotected. With FTS, the VMAX relies on the external array to provide the RAID protection.
  • 35. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 35 Display the EXTERNAL_TP Pool's Details Double click the small icon next to EXTERNAL_TP in the Name column to display the pool's detailed configuration. Examine the Pool's Properties Use the scroll bar to examine the EXTERNAL_TP pool's properties. Note the following properties are unique to an external disk group:
    • Raid Protection = Unprotected
    • Technology = N/A
    Examine the Pool's Related Objects In the Related Objects section, notice that one DATA Volume (TDAT) and one Bound Volume (TDEV) exist in the pool. The TDAT and TDEV have a 1:1 relationship, so as soon as the TDEV is provisioned to a host, all of the data that exists on the external LUN will be visible to the host.
  • 36. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 36 Navigate to the DATA Volumes (TDATs) Subsection Click the DATA Volumes link to display the volumes in the pool. Examine the Pool's DATA Volumes (TDATs) Examine the properties of the DATA Volume (TDAT) 0070 that was added during the virtualization step. Note that its configuration is Unprotected, because with FTS, the external array provides an eDisk with its RAID protection. [OPTIONAL] Double click on the Data Volume 0070 to see all of its details. Return to the EXTERNAL_TP Thin Pool's Subsection Click the EXTERNAL_TP link in the navigation path to return to the pool's subsection. Navigate to the Bound Volumes (TDEVs) Subsection Click the Bound Volumes (TDEVs) link to display the volumes in the pool.
  • 37. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 37 Examine the Pool's Bound Volumes (TDEVs) Use the scroll bar to examine the properties of Bound Volume (TDEV) 0071, which was automatically created and bound to the pool during the virtualization step. As mentioned above, as soon as the volume is provisioned to a host, all of the data that exists on the external LUN will be visible to the host. Note that the volume is 100% allocated from the EXTERNAL_TP pool. [OPTIONAL] Double click on the Bound Volume 0071 to see all of its details. [OPTIONAL] Display the eDisks Added to External Disk Group 512 Using SYMCLI Execute the command below to list the eDisks added to external disk group 512 during the virtualization step: symdisk -sid 01 list -disk_group 512 -spindle Note, if you encapsulated one spindle in Unisphere and the second with SYMCLI, two spindles will be displayed.
  • 38. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 38 [OPTIONAL] Show Spindle 1E00's Details Using SYMCLI Execute the command below to display spindle 1E00's details. Use the scroll bar to display all of the fields. symdisk -sid 01 show -spid 1E00 Note, the spindle's location is External. [OPTIONAL] Display the Hypers on Spindle 1E00 Scroll to the bottom of the command's output to display the Hypers on Spindle 1E00. Note that one hyper with Device id 0434 exists on the spindle. Also note that the device's Type is Ext-Data, which means it's a Data Device (TDAT) that exists on an external LUN.
  • 39. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 39 [OPTIONAL] Show the Data Device (TDAT) 0070's Details Execute the command below to display device 0070's details. Use the scroll bar to display all of the fields. symdev -sid 01 show 0070 Note that device 0070 is an Encapsulated Device that is part of the thin pool EXTERNAL_TP. Scroll down and review the Device External Identity section, which is one of the few areas of the output specific to FTS.
  • 40. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 40 Show the Thin Device (TDEV) 0071's Details Execute the command below to display device 0071's details. Use the scroll bar to display all of the fields. symdev -sid 01 show 0071 Scroll through the output and note the following about device 0071: • It is an Encapsulated Device that is bound to the thin pool EXTERNAL_TP, which is an indication the device is a thin device (TDEV) • Like all thin devices, it does not have a RAID group and is locally unprotected Conclusion In this lesson you've had the opportunity to configure FTS using both Unisphere and SYMCLI. While using Unisphere you encapsulated the data on the external LUN and added it to a thin pool, which you will have the opportunity to use in the FAST VP lesson. While using SYMCLI you added the external disk to the disk group without preserving the data.
  • 41. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 41 Exercise 3: Using VP Snap
  • 42. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 42 Using VP Snap VP Snap leverages TimeFinder technology to create space-efficient snaps for thin devices by allowing multiple sessions to share allocations within a thin pool. In this lab, users will use Solutions Enabler v7.4 to create two VP Snap sessions using one source device. A virtual provisioning thin pool will be monitored throughout this exercise. After noting the space consumed by these sessions, you will create two regular TimeFinder/Clone -copy sessions and compare the space-saving efficiencies that VP Snap offers. Note that this lab uses the Symmetrix Command Line Interface (SYMCLI) exclusively. Open a Command Prompt Click the Command Prompt icon located on the Desktop. Navigate to the VPsnap Lab Directory Execute the command below to change to the VPsnap directory: cd c:\labs\vpsnap Execute the command below to list the files that you'll use during the exercise: dir
  • 43. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 43 Display the Details for the Thin Pool VPsnap Execute the command below to display the detailed configuration of the VPsnap pool. symcfg -sid 01 show -pool VPsnap -thin -detail Scroll to the top of the command's output and review the two sections outlined above: 1. Note that there are 16 FBA data devices that reside on Fibre Channel drives (FC) in the pool. These devices are protected with RAID 1 (2-Way-Mir). 2. The second section displays how the storage on the data devices has been allocated. We will be referring to this section throughout the lab to demonstrate the space-savings efficiency of the VP Snap feature. Examine the Thin Devices Bound to the VPsnap Pool Examine the section at the bottom of the output labeled Pool Bound Thin Devices and note the following: 1. Thin devices 004A - 004F are bound to the VPsnap pool 2. The devices are all 32,775 tracks in size 3. Device 004A has 1044 tracks allocated and 1010 tracks written
  • 44. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 44 4. A total of 1104 tracks have been allocated from the pool, and 1010 tracks - from device 004A - have been written As hosts write to new areas of a thin device, 12 tracks - referred to as an extent - are allocated in a round-robin fashion from the enabled data devices in the pool. Enginuity maintains a flag referred to as NWBH - Never Written By Host - indicating whether a host has ever written data to the Logical Block Address (LBA) range of each track. This mechanism allows the VMAX family to take shortcuts in certain situations. In the example above, device 004A has 1044 tracks, or 87 extents, allocated from the VPsnap pool. Create a VP Snap Session Execute the command below to display the contents of the file session1.txt: type session1.txt Note, the first device on the line (04A) is the source device and the second device (04B) is the target device. Execute the command below to create a VP Snap session using the devices in the session1.txt file: symclone -sid 01 create -f session1.txt -vse -noprompt Execute the command below to activate the VP Snap session: symclone -sid 01 activate -f session1.txt -noprompt Note, the -noprompt option used in the commands above may be abbreviated -nop.
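The round-robin extent allocation described above can be sketched in a few lines of Python. This is a conceptual illustration only, not Enginuity's implementation; the pool size and data-device names below are hypothetical (the lab pool actually has 16 data devices).

```python
# Conceptual sketch of Virtual Provisioning extent allocation -- NOT
# Enginuity's implementation. One extent = 12 tracks = 768 KB.
TRACKS_PER_EXTENT = 12

def allocate_extents(tracks_written, data_devices):
    """Allocate one extent at a time, round-robin across the pool's
    data devices. Returns {data device name: extents allocated}."""
    extents_needed = -(-tracks_written // TRACKS_PER_EXTENT)  # ceiling division
    allocation = {dev: 0 for dev in data_devices}
    for i in range(extents_needed):
        allocation[data_devices[i % len(data_devices)]] += 1
    return allocation

# Hypothetical pool of 4 data devices.
pool = ["TDAT_0", "TDAT_1", "TDAT_2", "TDAT_3"]
alloc = allocate_extents(1010, pool)  # 1010 tracks written, as for device 004A
print(alloc)
print("tracks allocated:", sum(alloc.values()) * TRACKS_PER_EXTENT)
```

Because allocation always happens in whole extents, a device's allocated tracks can exceed its written tracks, which is why 004A shows 1044 tracks allocated but only 1010 written.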
  • 45. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 45 Query the VP Snap Session Execute the command below to list all of the VP Snap sessions: symclone list -vse Note that the Virtual Space Efficient (VSE) setting is active for the source and target pair 004A to 004B and that 1,035 tracks of device 004A are protected. Note, you may have to execute the command a number of times to allow the activation to complete. Re-examine the Allocated and Shared Tracks Execute the command below to display the detailed configuration of the VPsnap pool. symcfg -sid 01 show -pool VPsnap -thin -detail
  • 46. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 46 Scroll up to display more details about the pool. Near the top of the output, note the values for the fields below: 1. # of Allocated Tracks in Pool 2. # of Shared Tracks in the Pool Note, the values haven't changed. The # of Shared Tracks will not increase until: 1. A second VP Snap session is activated using the same source device, but a different target device 2. Existing allocated tracks on the source device are changed When these conditions are met, the target devices in any active VP Snap sessions begin sharing the original tracks. The source device allocates new tracks from the free space in the pool to store the updated data. Create a Second VP Snap Session Execute the command below to display the contents of the file session2.txt: type session2.txt Note, the first device on the line (04A) is the source device and the second device (04C) is the target device. Execute the command below to create a VP Snap session using the devices in the session2.txt file: symclone -sid 01 create -f session2.txt -vse -noprompt
  • 47. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 47 Execute the command below to activate the VP Snap session: symclone -sid 01 activate -f session2.txt -noprompt Note, the -noprompt option used in the commands above may be abbreviated -nop. Query the VP Snap Sessions Execute the command below to list all of the VP Snap sessions: symclone list -vse Notice that both sessions contain the same number of protected tracks. Write Data to the Source Device 004A Execute the command below to list the devices presented to the host. syminq -winvol
  • 48. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 48 Notice that PHYSICALDRIVE1 is the source device 004A and is mounted as drive E:. Execute the command below to copy the VPsnaplab-test.log file to the E: drive. copy VPsnaplab-test.log E: Re-examine the Allocated and Shared Tracks Execute the command below to display the detailed configuration of the VPsnap pool. symcfg -sid 01 show -pool VPsnap -thin -detail Scroll up to display more details about the pool. Near the top of the output, note the values for the fields below: 1. # of Allocated Tracks in Pool 2. # of Shared Tracks in the Pool Note, the values have changed since the command was run previously. In the final step, we will display the thin pool details again and compare the allocated and shared track counts produced by the regular Clone -copy sessions against these values from the VP Snap sessions.
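The behavior just observed, where overwriting source tracks leaves the originals shared by the snap targets while the source allocates fresh tracks, can be modeled with a toy counter. This is an invented simplification for illustration, not how Enginuity tracks allocations internally.

```python
# Toy model of VP Snap track accounting -- invented for illustration.
class VPSnapPool:
    def __init__(self, source_tracks):
        self.allocated = source_tracks  # "# of Allocated Tracks in Pool"
        self.shared = 0                 # "# of Shared Tracks in the Pool"

    def overwrite_source_tracks(self, n, active_sessions):
        """Rewrite n already-allocated source tracks."""
        if active_sessions >= 1:
            # The original tracks are preserved for the snap targets:
            # one shared copy, however many sessions point at it.
            self.shared += n
            # The source gets fresh tracks from the pool's free space.
            self.allocated += n
        # With no active sessions the write happens in place: no change.

pool = VPSnapPool(source_tracks=1044)         # device 004A's initial allocation
pool.overwrite_source_tracks(100, active_sessions=2)
print(pool.allocated, pool.shared)
```

A full -copy clone, by contrast, copies every protected track to every target, so two clones of the same source roughly triple the allocation with nothing shared.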
  • 49. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 49 Examine the Enabled Data Devices Scroll to the Enabled Data Devices section of the output. Notice that a number of data devices contain shared tracks, as indicated by the S in the FLGS column. Shared tracks are defined as tracks that have pointers to them from multiple thin devices sharing the pool. Terminate the VP Snap Sessions In order to recognize the space savings that the VP Snap feature offers, we must compare the # of Allocated Tracks from the previous step to the # of Allocated Tracks after terminating the VP Snap sessions and creating regular Clone -copy sessions using the same device pairs.
  • 50. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 50 Execute the command below to terminate the VP Snap sessions: symclone -sid 01 terminate -f session1.txt -noprompt symclone -sid 01 terminate -f session2.txt -noprompt Note, the -noprompt option used in the commands above may be abbreviated -nop. Execute the command below to verify there are no VP Snap sessions remaining: symclone list -vse Create and Activate Two Clone -copy Sessions Execute the command below to create a Clone -copy session using the same device pairs as the first VP Snap session: symclone -sid 01 create -f session1.txt -copy -noprompt Execute the command below to activate the Clone -copy session.
  • 51. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 51 symclone -sid 01 activate -f session1.txt -noprompt Execute the command below to create a Clone -copy session using the same device pairs as the second VP Snap session: symclone -sid 01 create -f session2.txt -copy -noprompt Execute the command below to activate the Clone -copy session. symclone -sid 01 activate -f session2.txt -noprompt Note, the -noprompt option used in the commands above may be abbreviated -nop. List the Clone Sessions Execute the command below to list all of the clone sessions. symclone list Notice that the VSE setting is not active for these sessions. A differential clone session, which is indicated by the X in the D column, means only the tracks that have changed after activation need to be recopied when the clone session is recreated. Note the number of protected tracks will dwindle to zero as they're copied to the clone targets 004B and 004C in the background. In other words, no tracks are shared with this method.
  • 52. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 52 Repeat the command above until the number of protected tracks reaches zero and the Status column at the far right of the output changes to Copied. Note, in this virtualized environment it may take 1-2 minutes for the copies to complete. Re-examine the Allocated and Shared Tracks Execute the command below to display the detailed configuration of the VPsnap pool. symcfg -sid 01 show -pool VPsnap -thin -detail Scroll up to display more details about the pool. Near the top of the output, note the values for the fields below: 1. # of Allocated Tracks in Pool 2. # of Shared Tracks in the Pool Conclusion In conclusion, the new TimeFinder VP Snap feature allows multiple sessions against the same source device to share allocations within the thin pool. This can represent a significant reduction in allocated storage when compared to traditional clone sessions. In this simple example, VP Snap allocated about 62% fewer tracks than the traditional clone method when creating two local copies of a source volume. Allocated tracks with VP Snap: 1212 Allocated tracks with traditional clones: 3168
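The savings figure in the conclusion is straightforward to recompute from the two pool allocation counts:

```python
# Recompute the space savings reported in the conclusion.
vp_snap_tracks = 1212   # pool allocation with two VP Snap sessions
clone_tracks = 3168     # pool allocation with two full Clone -copy sessions

savings = 1 - vp_snap_tracks / clone_tracks
print(f"{savings:.1%} fewer tracks allocated with VP Snap")
```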
  • 54. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 54 Exercise 4: Configuring Virtual Provisioning and FAST VP
  • 55. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 55 How to Configure Virtual Provisioning & FAST VP What is Virtual Provisioning? Virtual Provisioning is EMC's implementation of thin provisioning on the Symmetrix platform. Virtual Provisioning Components Virtual Pool A virtual pool - also referred to as a thin pool - is a shared, physical storage resource comprised of data devices of a single RAID protection and technology type. Data Device (TDAT) A data device - also referred to as a TDAT - is like a standard device in that it is created with a RAID protection type. However, it is considered a private device, because it cannot be directly mapped to a host. Instead, data devices are added to a thin pool to provide shared physical storage. Thin Device (TDEV) A thin device - also referred to as a TDEV - is a cache-only device that is created with a specific size. When created, the device consumes no physical storage on disk. Once bound to a thin pool, extents are allocated from the data devices in the pool as the host writes to new areas of the device. Extent An extent is the Virtual Provisioning unit of storage allocation, which is 768KB in size. Extents are allocated in a round-robin fashion across all of the enabled data devices in a pool. Binding Binding is an action performed on a thin device. When a thin device is bound to a pool, a single extent is allocated from a data device in the pool. As a host writes to new areas of the thin device, additional extents are allocated from the bound pool, up to the configured size of the thin device. What is FAST VP? FAST VP automates the identification of active or inactive application data for the purposes of reallocating that data across different performance/capacity tiers within an array.
FAST VP proactively monitors workloads at both the LUN level and sub-LUN level in order to identify busy data that would benefit from being moved to higher-performing drives, without existing performance being affected. This promotion activity is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, via virtual pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed non-disruptively, without affecting business continuity and data availability.
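As a rough mental model of the policy-bounded movement described above, the sketch below places the busiest extents on the fastest tiers without exceeding each tier's limit. The scoring and placement logic here are invented for illustration; FAST VP's actual decision engine is far more sophisticated, and the tier names, sizes, and extent granularity below are hypothetical.

```python
# Toy illustration of policy-bounded tier placement -- NOT FAST VP's
# real algorithm; the greedy scoring below is invented.
def place_extents(extents, tier_limits_gb, extent_gb):
    """Greedily place the busiest extents on the fastest tiers without
    exceeding each tier's upper usage limit.

    extents: list of (extent_id, io_rate)
    tier_limits_gb: dict ordered fastest tier first -> capacity limit (GB)
    """
    placement = {}
    remaining = dict(tier_limits_gb)
    for eid, _ in sorted(extents, key=lambda e: e[1], reverse=True):
        for tier in tier_limits_gb:          # try the fastest tier first
            if remaining[tier] >= extent_gb:
                remaining[tier] -= extent_gb
                placement[eid] = tier
                break
    return placement

# Hypothetical extents and tiers (real FAST VP extents are far smaller).
extents = [("e1", 900), ("e2", 50), ("e3", 700), ("e4", 5)]
placement = place_extents(extents, {"EFD": 2, "FC": 2, "SATA": 10}, extent_gb=1)
print(placement)
```

With EFD capped at two extents, only the two busiest extents land on flash; everything else cascades down, which is the essence of tiering under a policy's upper usage limits.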
  • 56. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 56 FAST VP Components: VP Tier A VP Tier contains between one and four thin storage pools - each thin pool must contain data devices of the same RAID protection type, and be configured on the same drive technology. FAST Policy A FAST Policy groups between one and three VP Tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the percentage of the configured, logical capacity of the associated storage group that can reside on each tier. Storage Group A storage group is a logical grouping of Symmetrix devices that are to be managed together. Association Storage groups are associated with a FAST Policy, thereby defining the VP Tiers that data in the storage group can be allocated on. Exercise Overview In this exercise you will perform the steps required to place data under FAST VP control, including the following items: 1. Create Thin Pools 2. Create VP Tiers 3. Create FAST Policies 4. Associate a FAST Policy with a Storage Group 5. Modify a FAST Policy 6. Examine the FAST Compliance Report Note: 1. This lab is being run in a Virtual Appliance running Enginuity 5876. 2. This Virtual Appliance has a fraction of the resources that the smallest Symmetrix array offers. 3. Using 137 Cyl for the Volume Capacity will ensure that existing Data Devices (TDATs) are added to the pool, which will reduce the time it takes for you to complete the lab and will ensure the VM remains stable throughout the exercise. 4. The Virtual Appliance only supports RAID 1 devices - denoted as 2-Way-Mir in Unisphere - so all of the TDATs used in the lab will have RAID 1 (2-Way-Mir) protection. Navigate to the Storage Page Click the Storage icon on the Unisphere toolbar to switch to the Storage page.
  • 57. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 57 Navigate to the Thin Pools Page A thin pool is a shared, physical storage resource of a single RAID protection and drive technology used for the purposes of Virtual Provisioning. Made up of multiple data devices (TDATs), each pool provides on-demand storage for host-addressable thin devices (TDEVs). On the Storage page, click the Thin Pools big button. Create the EFD_R1 Thin Pool Click the Create button in the bottom left hand corner of the Thin Pools page. Complete the Create Thin Pool Dialog To create a thin pool, it is necessary to specify a pool name, the desired drive technology, the desired RAID protection, and emulation. The number of data devices, and their capacity, also needs to be specified.
  • 58. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 58 Complete the Create Thin Pool dialog as shown above. Be sure to change the Volume Capacity unit to Cyl before entering 137 for the actual capacity. Note, the protection type 2-Way-Mir is the same as RAID 1 protection, which is abbreviated as R1 in the pool and tier names throughout this lab. Execute the Create Thin Pool Command Verify that the 16 Existing Volumes will be added to the new thin pool. Select Run Now from the drop down to execute the create thin pool command. Acknowledge the Thin Pool was Created Click the Close button to confirm the thin pool was created.
  • 59. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 59 Create the FC_R1 Thin Pool Click the Create button in the bottom left hand corner of the Thin Pools page. Complete the Create Thin Pool dialog as shown above. Be sure to change the Volume Capacity unit to Cyl before entering 137 for the actual capacity. Verify that the 16 Existing Volumes will be added to the new thin pool. Select Run Now from the drop down to execute the create thin pool command. Click the Close button to confirm the thin pool was created.
  • 60. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 60 Create the SATA_R1 Thin Pool Click the Create button in the bottom left hand corner of the Thin Pools page. Complete the Create Thin Pool dialog as shown above. Be sure to change the Volume Capacity unit to Cyl before entering 137 for the actual capacity. Verify that the 16 Existing Volumes will be added to the new thin pool. Select Run Now from the drop down to execute the create thin pool command. Click the Close button to confirm the thin pool was created. Examine the new Thin Pools Verify that the three new thin pools are listed in the table. Double click them to view their details. Use the back button to return to the Thin Pools page after reviewing each pool's details.
  • 61. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 61 Return to the Storage Page Click the Storage link in the navigation path to return to the Storage page. Navigate to the Tiers Page A FAST VP tier defines a set of resources of the same drive technology type combined with a given RAID protection type, and the same emulation. FAST VP tiers can contain between one and four thin pools. Each pool must contain data devices of the same RAID protection type, and be configured on the same drive technology. Click the Tiers big button to navigate to the Tiers page. Create the EFD_R1 Tier Click the Create button located at the bottom left of the screen.
  • 62. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 62 Complete the Create Tier Dialog Complete the Create Tier dialog as indicated above. Confirm the Tier was Created Click the Close button on the Create Tier information pop up window to continue.
  • 63. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 63 Create the FC_R1 Tier Click the Create button located at the bottom left of the screen. Complete the Create Tier dialog as shown above and then click the OK button. Click the Close button on the Create Tier information pop up window to continue.
  • 64. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 64 Create the SATA_R1 Tier Click the Create button located at the bottom left of the screen. Complete the Create Tier dialog as shown above and then click the OK button. Click the Close button on the Create Tier information pop up window to continue.
  • 65. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 65 [INFO ONLY] Create the EXTERNAL Tier If you completed the Federated Tiered Storage (FTS) lesson, the dialog above shows how a tier can be built from a thin pool containing external LUNs. However, if you try to execute the command above it will fail. The failure is due to the version of Enginuity our virtual Symmetrix is running, which does not support this feature. The GA version of 5876 does support this feature. At GA, tiers built from external storage are always considered the lowest tier in a policy. Click the Cancel button to continue.
  • 66. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 66 Edit the FC_R1 Tier Select the FC_R1 tier from the Tiers table and click the Edit button. Add a Second Pool to the FC_R1 Tier Click the Check Box next to the VPsnap pool. Notice the increase in Free space available in the VP Tier FC RAID-1. Click the OK button to finish editing the tier. Click the Close button to continue.
  • 67. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 67 Note, when arrays running with the FAST VP feature are expanded with new and larger drive sizes, EMC recommends building a new pool with the data devices created on the larger drives, and then adding the pool to an existing tier if the goal is to expand the capacity available to that tier. Return to the Storage Page Click the Storage link in the navigation path to return to the Storage page. Navigate to the FAST Page Click the FAST big button to navigate to the FAST page. Navigate to the FAST Policies Page A FAST policy groups between one and three tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the maximum amount of capacity from each storage group associated with the policy that can reside on that particular tier. Click the Manage Policies link to navigate to the FAST Policies page. Create the PLATINUM Policy Click the Create button to continue.
  • 68. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 68 Complete the Create FAST Policy Dialog Complete the Create FAST Policy dialog as shown above. Click the OK button to continue. Confirm the Policy was Created Click the Close button to dismiss the Create FAST Policy information pop-up window.
  • 69. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 69 Create the GOLD Policy Click the Create button to begin. Complete the Create FAST Policy dialog as shown above. Click the OK button to continue. Click the Close button to dismiss the Create FAST Policy information pop-up window. Select the GOLD Policy Select the GOLD policy in the FAST Policies table. Associate a Storage Group Associating a storage group with a policy allows data within the storage group to reside on up to three tiers. A storage group is considered to be compliant with the FAST policy it is associated with when all data in the storage group is allocated within the bounds of the upper usage limits
  • 70. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 70 for each tier contained within the policy. Click the Associate Storage Groups button. Complete the Associate Storage Group Dialog Select the ESX_SG storage group. Click the Show Advanced tab to display the Enable FAST VP RDF Coordination option. Click the OK button to continue. Click the Close button on the Add Storage Group(s) pop-up window to continue.
  • 71. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 71 Modify the PLATINUM Policy Modifying a FAST VP policy is considered a dynamic change. Changes take effect immediately after the change has completed. Select the PLATINUM policy in the FAST Policies table. Right click the policy and then select the View Details... option from the right-click menu. Modify the % of the EFD_R1 Tier Increase the % for the EFD_R1 tier to 20. Click the Apply button to confirm the change. Click the Close button on the Edit FAST Policy pop-up window to continue. Navigate to the Storage Groups Sub-section Hover your mouse over the Storage section on the Toolbar. Click the Storage Groups option from the drop down menu.
  • 72. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 72 Associate a FAST Policy with the Storage Group WIN_SG Select the WIN_SG storage group from the Storage Groups table. Click the Associate to FAST button at the bottom of the page to continue. Select the GOLD Policy Select the GOLD policy on the Associate to FAST Policy dialog and click the OK button to continue. Examine the Details of the Storage Group WIN_SG Select the WIN_SG storage group from the Storage Groups table. Click the View Details button at the bottom of the page to continue.
  • 73. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 73 Examine the Properties of the Storage Group WIN_SG Examine the Properties of the storage group WIN_SG. Note that the Total Capacity (GB) is 2.02. Examine the Compliance Report for the Storage Group WIN_SG Examine the FAST Compliance Report for the Storage Group WIN_SG. Note the following information in the FAST Compliance Report: Max SG Demand (%) - the maximum percentage of the storage group's capacity allowed in each tier, based on the policy associated with the storage group. Limit (GB) - the capacity in GB allowed in each tier, based on the policy associated with the storage group. FAST SG Used (GB) - the amount of capacity currently allocated in each tier. Growth (GB) - (not shown) the amount of capacity the storage group can still consume in each tier based on the policy. Note, a negative number would indicate the storage group is out of compliance. Conclusion In this exercise you've configured virtual provisioning pools, FAST VP Tiers and FAST VP Policies. You've also associated storage groups with the FAST VP policies and examined the compliance report.
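The compliance report's Limit and Growth columns follow from simple arithmetic on the figures above, as this sketch shows. Only WIN_SG's 2.02 GB capacity comes from the lab; the policy percentages and per-tier usage below are hypothetical.

```python
# Recompute Limit and Growth from the compliance report's definitions.
def compliance_report(sg_capacity_gb, policy, used_gb):
    """policy: {tier: Max SG Demand %}; used_gb: {tier: FAST SG Used (GB)}.
    Returns {tier: (Limit GB, Growth GB)}; a negative Growth means the
    storage group is out of compliance on that tier."""
    report = {}
    for tier, pct in policy.items():
        limit = sg_capacity_gb * pct / 100.0        # Limit = capacity * demand%
        report[tier] = (round(limit, 2), round(limit - used_gb.get(tier, 0.0), 2))
    return report

# Hypothetical policy percentages and usage for a 2.02 GB storage group.
policy = {"EFD_R1": 20, "FC_R1": 100, "SATA_R1": 100}
used = {"FC_R1": 2.02}                              # all data currently on FC
report = compliance_report(2.02, policy, used)
print(report)
```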
  • 75. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 75 Exercise 5: Using Tier Advisor to Size an Array for FAST VP
  • 76. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 76 Using Tier Advisor Lab Overview In this lab you will learn how to analyze the current workload and configuration of a storage array and create a tiered storage configuration. At the end of this lab you will understand: • The input parameters needed for modeling tiered configurations • How to interpret the workload characteristics that affect the configuration plan • Workload skew • How to identify the utilization level of the disk resources in the current configuration • How to create a tiered storage configuration that can improve performance, reduce acquisition costs, and reduce power consumption and footprint • How to validate whether the configuration you create can sustain the desired performance as the workload grows Tier Advisor Tier Advisor is a modeling tool that estimates the performance and cost of mixing different types of disk drive technology within EMC storage arrays. Tier Advisor is not a requirement for FAST, but it is a useful tool when planning for a Fully Automated Storage Tiering (FAST) implementation. FAST optimizes the use of different disk types, or storage tiers, in a Symmetrix® array by placing the right data in the right tier at the right time. Tier Advisor helps you model an optimal storage array disk configuration by enabling interactive experimentation with different storage tiers and storage policies until you achieve your desired cost and performance preferences. In this process you verify that the disk technology chosen in each tier has the capabilities needed to accommodate the different workloads. Tier Advisor helps you define the number of disk drives to use for each disk drive technology when configuring a tiered storage solution. Launch Tier Advisor Double click the Tier Advisor icon located on the Desktop.
  • 77. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 77 Close the Workload: Data Source and Target Group Selection Window Double click the Tier Advisor icon in the upper left corner of the window. Open a New Session On the Tier Advisor menu, click File -> Open Session to open a new Tier Advisor session. Select the Sample File Follow the path below to the Full Array sample - Moderate load file. Computer -> Local Disk (C:) -> labs -> TA Select the file and click the Open button to continue. Display the Workload Click the Workload button to display the Workload window.
  • 78. HOL07-VMAX Enginuity 5876: FTS, VP Snap, FAST VP, Unisphere for VMAX, DCP, Tier Advisor and symvm - 78 Examine the Workload's Characteristics Examine the characteristics of the workload. Shown above: System: Symmetrix, Unified (VNX, CLARiiON), or Custom Identifier: Serial number Devices: Number of logical devices - only the meta heads when meta volumes are used Cap (TB): Logical/usable capacity Total IO/Sec: Number of front-end I/Os Hits (%): Cache hit percentage BE* IO/sec: I/Os being directed to the Disk Adapters and disk drives. This field does not account for the RAID protection; RAID protection is accounted for when the storage tiers are defined. Not shown above: BE* Writes %: The percentage of write requests directed to the back end of the system BE* IO Size (kB): Back-end IO size in KB There are two more columns not shown in the example above. Both columns are measures of workload skew. Workload skew in this context refers to the distribution of I/Os over a defined capacity. The concept can be associated with the 80/20 rule. Standard: Referred to as standard skew, this column describes the observed distribution of the load in the devices Est. Virtual: Referred to as virtual skew, this column is an estimate of how the workload from Extent Group Sets will be distributed over the available storage capacity. The identification of the workload skew is a fundamental piece of the analysis done by Tier Advisor. Skew influences the FAST policy definitions. The I/O workload displayed in the workload line represents the average + 1 standard deviation of the time intervals selected on the bar chart below.
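Two of the calculations described above, the workload line's mean plus one standard deviation and the skew of IO over capacity, can be sketched as follows. The sample numbers are made up, and the model (population standard deviation, IO assumed uniform within a device) is an assumption for illustration, not Tier Advisor's exact method.

```python
import statistics

# 1) The workload line: mean + 1 standard deviation of the selected
#    hourly samples (population std dev assumed here).
def workload_stat(samples):
    return statistics.mean(samples) + statistics.pstdev(samples)

# 2) Skew: the fraction of total IO served by the busiest fraction of
#    capacity (the "80/20" idea). Devices are (capacity_gb, io_per_sec).
def skew(devices, capacity_fraction=0.2):
    devices = sorted(devices, key=lambda d: d[1] / d[0], reverse=True)
    cap_budget = capacity_fraction * sum(cap for cap, _ in devices)
    total_io = sum(io for _, io in devices)
    served = 0.0
    for cap, io in devices:
        take = min(cap, cap_budget)
        served += io * take / cap      # assume IO uniform within a device
        cap_budget -= take
        if cap_budget <= 0:
            break
    return served / total_io

be_ios = [12000, 14500, 21824, 13000, 15000]     # hypothetical hourly BE IO/sec
print(round(workload_stat(be_ios)))

devs = [(100, 900), (100, 50), (100, 30), (100, 20)]  # hypothetical devices
print(skew(devs))  # share of IO landing on the busiest 20% of capacity
```

In the sample above, 20% of the capacity carries 72% of the IO, the kind of concentration that makes a small, fast top tier worthwhile.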
Examine the Time Intervals

Examine the time intervals available in the sample file. Note that data is available for a 24 hour period beginning on Thursday 10/7/10 at 2AM.

Narrow the Analysis Period

All available time intervals are selected by default when calculating the workload, but it is possible to select any combination of time intervals for analysis using the mouse and the CTRL key to click on the hour bars. Click the 12PM interval as indicated above.

Re-examine the Workload's Characteristics

Note that the number of Back End I/Os per sec changed from 15,222 to 21,824. Also note that the Standard skew - not pictured above - changed from 79.5 to 86.1. This indicates that during the 12:00PM interval the workload running in this system is concentrated on a smaller number of devices than in other time intervals.

Reset the Interval Selection

To reset the interval, click the box indicated above.

Edit the Baseline

Click the Edit Baseline button.

Examine the Disks in the Baseline Editor

One of the goals of performance modeling is to create a disk drive mix that satisfies your cost and performance requirements. To facilitate this process, the cost and performance of the proposed configuration is compared against an existing configuration or against other configuration options.
If the workload is currently running on an EMC array, Tier Advisor automatically loads the physical characteristics of the existing system and sets this system as a hardware baseline. These hardware characteristics will be used when comparing cost and power consumption of the different configuration options. Observe in the chart above that the source storage array uses 3 different disk types:

256 x FC 15K 300 GB
240 x FC 15K 146 GB
88 x FC 10K 300 GB

584 disks in total.

Examine the Devices in the Baseline Editor

The bottom section of the Baseline Editor displays information about the devices allocated on each of the selected disk types. In this example we can observe that there are 1,244 devices allocated on the 300GB 15K drives. This group of 1,244 devices represents 65TB of usable capacity. RAID 5 7+1 protection is used, and the devices are responsible for 11,534 Back End I/Os.

Close the Baseline Editor

To continue, click the Done button on the Baseline Editor.
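The usable-versus-raw capacity arithmetic that underlies the relative capacity comparisons later in this exercise can be sketched directly. The overhead ratios below follow from the RAID schemes named in this lab; they are illustrative bookkeeping, not a Tier Advisor API.

```python
# Sketch of raw-vs-usable capacity for the RAID schemes used in this lab
# (assumed textbook overheads; illustrative only).

RAID_OVERHEAD = {
    "RAID1":      2 / 1,     # mirrored: 2 raw GB per usable GB
    "RAID5_3+1":  4 / 3,     # 3 data + 1 parity
    "RAID5_7+1":  8 / 7,     # 7 data + 1 parity
    "RAID6_14+2": 16 / 14,   # 14 data + 2 parity
}

def raw_tb(usable_tb, protection):
    """Raw capacity required to deliver `usable_tb` under a protection scheme."""
    return usable_tb * RAID_OVERHEAD[protection]

# e.g. the 65 TB usable RAID 5 7+1 device group from the Baseline Editor
print(round(raw_tb(65, "RAID5_7+1"), 1))   # -> 74.3 TB raw
```

This is why, later in the exercise, moving capacity from RAID 5 7+1 into RAID 1 inflates the "Rel Cap" (raw capacity) figure even when usable capacity is unchanged.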
Close the Workload Window

To continue, click the Done button on the Workload Window.

Examine the Hardware Configuration Section

Examine the Hardware Configuration section, which allows you to compare Target Configurations against the Baseline Configuration. Recall that the Baseline Configuration is based on the hardware components included in the sample workload.

Section 1 allows relative comparisons based on the following metrics:
• Rel Cost - The cost of this configuration in comparison to the baseline
• Rel RT - Relative Service Time / Response Time
• Rel Pwr - Power consumption as a function of the number of disks
• Rel Cap - Relative raw capacity based on RAID protection

Section 2 displays the number of physical disks in the target configuration.

Examine the Policies Section

Note that the default values indicate the following:
• 3% of the total capacity is being allocated in the FLASH Tier
• 27% is being allocated in the 15K 300 Mir Tier
• 70% is being allocated in the SATA 2T 14R6 Tier

Examine the Bar Chart

The bar chart illustrates utilization of the Tier capabilities. The x and y-axis display information from the selected Tier. The x-axis represents the % of the Tier's storage capacity being used. The y-axis represents the % of I/Os based on the maximum acceptable number of I/Os per sec that can be executed in the tier. To select a tier, hover the mouse over the colored bars. In the example above, the 15K 300 Mir tier (the green bar) is selected. Observe the following:
• 99.4% of the storage capacity is being used
• 39.1% of the maximum acceptable I/Os per sec is being used
• There are 218 FC 15K 300GB disks in the tier

Change the Relative Data Display

Change the Relative Data display from Numeric Ratio to Percent Change.
Change the Capacity Total Display

Change the Capacity Total display from Physical Used to Logical Allocation.

Change the Tier Capture Display

Change the Tier Capture display from Capacity to Skew.

Observe the Effect

Observe the effect of the changes in the display options. Click each option and observe the following:
Workload:
• Relative Service Time is displayed as -44% instead of 0.56
• The capacity being displayed is the logical or usable capacity of 120.37 TB

Policies:
• The policy section now displays the % of the I/Os expected to be captured in each Tier
• The FLASH Tier is expected to execute 43.15% of the I/Os in the system
• The FC Tier is expected to execute 55.8% of the I/Os
• The SATA Tier will execute 0.96% of the I/Os

Collect Info About the Flash Tier

Hover your mouse over the Red bar in the Disk Utilization graph and observe the following:
• There are (24) 200GB EFD Disks in the Tier
• The I/O utilization is 22.6% (y-axis)
• The storage capacity is 86% used (x-axis)

Adjust the Workload Distribution to Optimize Cost and Performance

In the Policies section, click the Flash 7R5 box and adjust the tier's percentage from 3% down to 2.3%.
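The Percent Change display shown above is just the Numeric Ratio re-expressed relative to the baseline: a ratio of 0.56 becomes (0.56 - 1) x 100 = -44%. A one-line sketch of the conversion (assumed from the numbers in this step, not a documented Tier Advisor formula):

```python
# Numeric Ratio -> Percent Change conversion, as implied by the values
# in this step (0.56 displayed as -44%). Illustrative only.

def ratio_to_percent_change(ratio):
    return (ratio - 1.0) * 100.0

print(round(ratio_to_percent_change(0.56)))   # -> -44
```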
Observe the Changes in the EFD Tier

Observe the changes to the Disk Utilization section after reducing the percent of EFD capacity from 3% to 2.3%:
• The number of disks reduced from (24) to (16)
• I/O Utilization in the FLASH Tier increased from 22.6% to 29.5%
• Space Utilization increased from 85.8% to 98.9%

Observe the Changes to the Proposed Configuration

Observe the changes in the relative cost of the proposed solution after reducing the percent of EFD capacity from 3% to 2.3%:
• The relative cost changed from -1% to -13%

Collect Info About the SATA Tier

Hover your mouse over the Blue bar in the Disk Utilization graph and observe the following:
• There are (64) 2TB SATA Disks
• The I/O utilization is 15.5% (y-axis)
• The storage capacity is 76% used (x-axis)

Observe Characteristics of the SATA Tier

Observe the following about the SATA Tier:
1. The SATA Tier represents 70% of the system capacity
2. The overall raw capacity of the proposed configuration is 42% more than the baseline

Recall from the previous step, when we looked at the SATA bar in the disk utilization chart, that only 76% of the capacity in this tier is utilized. This means the system has more capacity than required, and the tiers can be further optimized.

Adjust the SATA Tier

In the Policies section, click the SATA 2T 14R6 box and adjust the tier's percentage from 71% down to 70%.
Observe the Changes in the SATA Tier After Adjusting the Capacity

Observe the changes to the Disk Utilization section after reducing the percent of SATA capacity from 71% to 70%:
• The number of disks reduced from (64) to (48)
• I/O Utilization in the SATA Tier increased from 15.5% to 16.5%
• Space Utilization increased from 76% to 99.9%

Note the RAID protection selected for this Tier is RAID 6 14+2, which is why in this example the RAID groups are incremented or decremented in groups of 16 disks.

Observe the Changes to the Proposed Configuration

Observe the following changes to the Proposed Configuration after reducing the percent of SATA capacity from 71% to 70%:
1. The cost of the proposed configuration changed from 13% to 17% less expensive than the baseline
2. The response time is now 36% lower instead of 52%
3. The power consumption is 53% lower
4. The increase in raw capacity is now 21% instead of 42%
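The 16-disk step size noted above falls directly out of the RAID group geometry: a RAID 6 14+2 group consumes 16 physical disks, so the tier's disk count is always a whole number of groups. A small sketch of that rounding (illustrative arithmetic, not a Tier Advisor function):

```python
import math

# Why the SATA tier grows/shrinks in steps of 16 disks: RAID 6 14+2
# groups are indivisible, so disk counts round up to whole groups.

def disks_needed(usable_tb, disk_tb, data_disks=14, parity_disks=2):
    group_size = data_disks + parity_disks        # 16 disks per 14+2 group
    usable_per_group = data_disks * disk_tb       # 28 TB usable per group of 2 TB disks
    groups = math.ceil(usable_tb / usable_per_group)
    return groups * group_size

print(disks_needed(84, 2))   # 3 groups -> 48 disks
print(disks_needed(85, 2))   # 1 TB over the 3-group boundary -> 64 disks
```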
* This increase in raw capacity is primarily a function of switching from a configuration that was 100% RAID 5 7+1 to a configuration that has 30% of the total capacity in RAID 1.
* The use of RAID 1 in the middle Tier may increase the 'raw' capacity in some configurations, but for the majority of workloads it is an option that can provide a better cost and performance ratio than using RAID 5 in this Tier.

5. The baseline configuration has 584 disks and the recommended configuration has 290, which explains the reduced power consumption in the Rel Pwr column (3)

Examine the FC Tier

Hover your mouse over the Green bar in the Disk Utilization graph and observe the following:
• There are (226) 300GB 15K FC Disks in the Tier
• The I/O utilization is 42.0% (y-axis)
• The storage capacity is 99.4% used (x-axis)

Examine the Target Configuration's IO Profile

Examine Target Config 1's IO profile and note that the Back End Write activity in this system is 23.4%. This value is significantly below the typical 40-50% write activity observed in enterprise systems. When the back end write activity is near 50%, the middle tier rarely benefits from RAID 5
protection. RAID 1 is a better cost and performance option. Because our sample workload has a back end write activity significantly below the observed average, we should evaluate the effect of using RAID 5 in the middle tier.

Open the Policies

Click the Policies button to open the Storage Policies worksheet.

Change the FC Tier's RAID Protection Type

On the Storage Policies worksheet, complete the following actions to change the FC Tier's RAID protection from RAID1 to RAID5 3+1:
1. Click the Tiers tab
2. Select RAID 5 3+1 from the Protection drop down box
3. Change the tier's name to 15K300 3R5 to reflect the new RAID protection type
4. Click the Close button
Observe the Changes to the FC Tier

Hover your mouse over the Green bar in the Disk Utilization graph and observe the following:
• There are (152) 300GB 15K FC Disks in the Tier, down from (226) disks
• The I/O utilization is 65.7%, up from 42.0% (y-axis), which violates the I/O Utilization Policy
• The storage capacity is 98.6% used, down from 99.4% (x-axis)

Observe the Changes to the Proposed Configuration

Observe the following changes to the Proposed Configuration after changing the FC Tier from RAID1 to RAID5 3+1:
1. The relative cost changed from -17% to -31%, which means it's 31% less expensive than the baseline configuration
2. The relative response time is now -12%, which is lower than the baseline, but not as good as the option with RAID1
Examine the Policy Relative Response Time Graph

Hover your cursor over the small gray home plate icon on the Policy Relative Response Time graph. As the vertical bar is dragged to the right or left, the response time information in the policy section is updated. This chart can be useful to project at what I/O rate the response time curves will cross.

Optimize the FC Tier

To optimize the FC Tier, increase the percentage of the FC Tier from 28% to 38%. Click the 15K300 3R5 box and change the percentage from 28% to 38%.
Observe the Results of Increasing the Percentage of the FC Tier

Observe the following changes to the Proposed Configuration after increasing the FC Tier percentage from 28% to 38%:
1. The relative cost changed from -31% to -21%, which means it's still 21% less expensive than the baseline configuration
2. The relative response time is now -27%, which is also lower than the baseline
3. The I/O utilization in the FC Tier decreased from 65.7% to 51.5%

In this example the RAID 1 configuration provides a better response time improvement for a slightly higher cost. In other cases, when the percentage of Back End Writes is high, we observe that the RAID 1 option for the middle tier results in lower cost and better performance than RAID 5.

Conclusion

In this exercise we explored the basic functions in Tier Advisor for planning a tiered configuration. There are more functions to be explored; please ask your EMC representative for a demonstration. Thank you for participating in this introductory lab.
Exercise 6: Using the symvm Command to Provision Gatekeepers to a Virtual Machine
Using symvm to Provision Gatekeepers to a VM

Solutions Enabler 7.4 includes a new command - symvm - for provisioning storage to Virtual Machines (VMs) running on Microsoft Hyper-V and VMware ESX hypervisors. In this lab you will provision gatekeeper devices (GKs) to the VM Win2K8, which is running on an ESXi 5.0 server that is itself running within a VM.

Open a Command Prompt

Click the Command Prompt icon located on the Desktop.

List the Authorized ESX Servers

Execute the command below to list the ESXi servers that have been registered on the SYMCLI host:

symcfg list authorization -vmware

Note the IP address in the Hostname column. This address corresponds to the ESXi 5 server's IP address.

List the VMs on the ESXi Server

Execute the command below to list the VMs running on the ESXi server:

symvm -server 192.168.110.51 list -vm all
We will provision GKs to the Win2K8 VM in an upcoming step.

List the GKs Provisioned to the ESXi Server

Execute the command below to list the GKs provisioned to the ESXi server:

symvm -server 192.168.110.51 list -sid 01 -gk

Note the full Array ID; it will be required to map the GKs in the next step. Scroll to the right to find the Dev column, which contains the Symmetrix device IDs associated with the GK devices.

Map the GKs to the VM

Execute the command below to map the GKs 0056 - 005B to the VM Win2K8:

symvm -server 192.168.110.51 map -VM Win2K8 -array_id 000194900001 -range 56:5b

Note that all devices mapped to a VM with the symvm command are provisioned as Raw Device Mapping (RDM) devices on the ESXi server.

List the GKs Mapped to the VM

Execute the command below to list the GKs mapped to the VM Win2K8:

symvm -server 192.168.110.51 list -sid 01 -gk -mapped -vm Win2K8
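The -range 56:5b argument is an inclusive hex range that expands to the six device IDs 0056 through 005B. A small sketch of that expansion (the helper name is illustrative; it is not part of Solutions Enabler):

```python
# Sketch: expanding a Symmetrix device range like "56:5b" into the
# individual hex device IDs that symvm maps. Illustrative helper only.

def expand_range(dev_range, width=4):
    start, end = dev_range.split(":")
    return [f"{d:0{width}X}" for d in range(int(start, 16), int(end, 16) + 1)]

print(expand_range("56:5b"))
# -> ['0056', '0057', '0058', '0059', '005A', '005B']
```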
Scroll to the right to view the Dev ID column.

Unmap GKs from the VM

Execute the command below to unmap the GKs 005A - 005B from the VM Win2K8:

symvm -server 192.168.110.51 unmap -VM Win2K8 -array_id 000194900001 -range 5A:5B

Conclusion

This simple exercise demonstrates how the new symvm command can be used to manage GK devices in a virtualized environment. It should be noted that the symvm command works equally well with regular volumes; however, it is limited to presenting devices as Raw Device Mappings (RDMs), which are not usually recommended for general purpose storage.
Exercise 7: Using Dynamic Cache Partitioning's Analysis Mode
Using Dynamic Cache Partitioning's Analysis Mode

Storage resource optimization based on workload equality is the default behavior for Enginuity, catering to homogeneous application environments. However, as the trend of consolidating dissimilar workloads on the same storage array continues, flexibility for differential treatment between workloads becomes desirable. Important cache resources must be subject to isolation and prioritization mechanisms. The ability to set allocation preferences for cache facilitates many storage management objectives.

The following exercise demonstrates how to determine the current division of cache resources. From this starting point, prudent management of cache via Dynamic Cache Partitioning is possible. The methodology uses Dynamic Cache Partitioning Analyze mode and entails no overhead on the array. This technique can be pursued in a production environment with no performance impact considerations.

In this exercise scenario many workloads have been under review and subsequently assigned into one of three service classes: high priority, default priority, and low priority. New Cache Partitions are to be created and the appropriate workloads assigned into the partitions based on the service classes. Decisions on which workloads reside in each service class are not considered in this exercise and are assumed to be complete. Analysis is performed on cache to allow informed creation of Cache Partitions and subsequent management of the service classes via their Partitions.

The steps in this exercise are:
1. Create two cache partitions and modify the DEFAULT_PARTITION ready for Analyze mode:
   - Distribute target allocations equally across all partitions so the targets sum to 100%
   - Set all partitions' min allocation to 0%
   - Set all partitions' max allocation to 100%
   - Set all partitions' donation age to 0 seconds
   - Assign workloads into the Partitions
2. Set Dynamic Cache Partitioning Analyze mode
3. Examine graphs produced by Unisphere for VMAX for each cache partition and note the following metrics:
   - maximum/minimum and target cache utilization
   - Cache Age GT 10 (10 min moving average of Fall Through Time)
   - Read Hit %
4. Using the simple chart, assess the cache/disk affinity of the three partitioned service classes
5. Implement partitions to match current cache utilization of the service classes
6. Assert workload prioritization to achieve these results: ServiceClass1 = High Priority
Open a Command Prompt

Click the Command Prompt icon located on the Desktop.

Create Two Cache Partitions

Execute the commands below to create two new Cache Partitions ready to participate in Analyze mode. The Partitions will be named ServiceClass1 and ServiceClass2.

symqos -sid 01 -cp -name ServiceClass1 create -target 33 -min 0 -max 100 -time 0 -wp 80
symqos -sid 01 -cp -name ServiceClass2 create -target 33 -min 0 -max 100 -time 0 -wp 80

Modify the Default Cache Partition

Execute the command below to modify the default Cache Partition for participation in Analyze mode:

symqos -sid 01 -cp -name DEFAULT_PARTITION modify -min 0 -max 100 -time 0

Assign Workloads into the Partitions

Assigning devices into Partitions is most easily achieved using device groups containing the various workloads. Device groups are likely to already exist, making placement in a Partition simple. For this exercise a single device for each Partition is used to symbolize the workload assignment process. All devices (workloads) not assigned into a partition remain in the default partition.

Execute the commands below to assign devices (symbolizing service class workloads) into their Partitions:

symqos -sid 01 -cp -name ServiceClass1 add dev 048
symqos -sid 01 -cp -name ServiceClass2 add dev 049

Enable Dynamic Cache Partitioning Analyze Mode

Analyze mode tracks cache slot usage by "proposed" Partition, revealing how workloads in each "proposed" partition are actually using cache. Understanding how cache is used without Partitions enabled allows informed decisions on how to implement and manage Partitioning. Execute the command below to enable Cache Partitioning Analyze mode:

symqos -cp -sid 01 analyze

Review Dynamic Cache Partition Settings

symqos -cp -sid 01 list -settings
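The target-allocation bookkeeping used throughout this exercise (and stated explicitly in a later step) is that the DEFAULT_PARTITION's target is whatever remains after the named partitions are defined. A minimal sketch of that rule, illustrative only and not part of the symqos CLI:

```python
# Sketch: DEFAULT_PARTITION's target = 100% minus the named partitions'
# targets, per the behavior described in this exercise. Illustrative only.

def default_target(named_targets):
    remainder = 100 - sum(named_targets.values())
    if remainder < 0:
        raise ValueError("named partition targets exceed 100%")
    return remainder

# The Analyze-mode setup above: two partitions created at -target 33 each
print(default_target({"ServiceClass1": 33, "ServiceClass2": 33}))   # -> 34
```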
Capturing Cache Metrics

Unisphere for VMAX provides graphical representations of many cache metrics. The screen shot above shows the Cache Partitioning choices available under the Performance section of Unisphere for VMAX. Subsequent graphs used for this exercise are produced using Unisphere for VMAX.

Once the workloads identified for Partitioning are in Analyze mode, let the analysis run for a period to allow data collection (a normal production day, for example). The data being collected shows cache usage for the service classes (multiple workloads) within the proposed Partitions.
Examine the Sample Graph - Maximum/Minimum and Target Cache Utilization

Examine the above graphs for each partition produced by Unisphere for VMAX. These represent actual cache usage during the sample time.

Examine the Sample Graph - Cache Age GT 10

The next graphs are 10 minute moving averages of Fall Through Time for the service classes in each Partition. Fall Through Time is a measure of data residency in cache which can be used to indicate cache utilization efficiency.

Examine the Sample Graph - Read Hit %

The % of read hits and the Fall Through Time will be used together to draw inferences about the affinity of each service class for cache or disk resources: is a workload "cache friendly", or does it use the disk "back end" resource?
Assess the Cache/Disk Affinity

Using the simple chart, assess the cache/disk affinity of the three Partitioned service classes. Consider this to be an Open Systems environment.

ServiceClass1 is (cache friendly, neutral, disk intensive)
ServiceClass2 is (cache friendly, neutral, disk intensive)
DEFAULT_PARTITION is (cache friendly, neutral, disk intensive)

Important considerations: Giving more cache to a cache friendly workload improves that workload, but has a secondary effect of requiring less disk access, thereby improving other disk intensive workloads. The reverse is also true: giving additional cache to a disk intensive workload forces a cache friendly workload to more disk access and ultimately decreases the disk intensive workload's performance. Workloads far apart on the chart can both benefit when appropriate cache management is used. Cache friendly workloads close together on the chart compete for the same resource, and cache management chooses one over the other (prioritization).

Modify the Cache Partitions to Match the Current Utilization

Dynamic Cache Partitioning parameters are set for maximum flexibility while in Analyze mode. Modifying the parameters in preparation for use requires moving to Disabled mode. Set Dynamic Cache Partitioning to Disabled mode:
symqos -cp -sid 01 disable

Execute the commands below to modify the cache partitions to match the current cache utilization as recorded in your notes. Initially implementing Cache Partitioning to match current cache usage allows the feature to be introduced in a neutral manner. This can be a very important consideration for sensitive production workloads.

symqos -sid 01 -cp -name ServiceClass1 modify -target 20 -min 10 -max 40 -time 300 -wp 80
symqos -sid 01 -cp -name ServiceClass2 modify -target 30 -min 10 -max 50 -time 300 -wp 80
symqos -cp -name DEFAULT_PARTITION -sid 01 modify -time 300 -wp 80

The DEFAULT_PARTITION target value will be the cache remaining after the other partitions are modified. Set the Donation age to 300 seconds. Donations are discussed in the next step.

Flexing Partitions with Donation Age

Donation age is the user-defined measure of cache use efficiency controlling the movement of slots to other partitions. A cache slot is eligible for donation if the time since it was last accessed is greater than the donation age. A donation age of 0 results in unimpeded slot movement between partitions, and a donation age of 600 is expected to result in no slot movement between partitions. Partitions can donate slots away until they reach their specified minimum % value and receive slots up to their specified maximum % value. If no donations are taking place, partitions will be at their target % values. The Donation parameter can vary between partitions, which offers a mechanism for prioritization.
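The donation-age rule just described can be sketched as a small predicate: a slot is a donation candidate once its idle time exceeds the donation age, and only while the donor partition remains above its minimum size. This is an illustrative model of the stated behavior, not Enginuity's actual slot-management code.

```python
# Sketch of the donation-age eligibility rule described above.
# Illustrative model only; not Enginuity's implementation.

def can_donate(slot_idle_secs, donation_age_secs, partition_pct, min_pct):
    past_donation_age = slot_idle_secs > donation_age_secs
    above_minimum = partition_pct > min_pct     # cannot shrink below -min
    return past_donation_age and above_minimum

print(can_donate(90, 70, 30, 10))   # idle longer than donation age -> True
print(can_donate(50, 70, 30, 10))   # slot still "fresh" in cache -> False
print(can_donate(90, 70, 10, 10))   # partition already at its minimum -> False
```

This is why a donation age of 0 gives unimpeded slot movement (every slot is past the threshold) while 600 seconds effectively freezes the partitions.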
Assert Workload Prioritization

Once Partitioning is in place, prudent management actions can be considered based on knowledge of the "cache friendly" or "disk intensive" affinity of each service class. As per business directives, it is desirable to increase the performance of the priority workload, ServiceClass1.

Considerations:
Q. What is the closest workload on the Cache/Disk Affinity chart? What resource is the subject of competition?
A. The default workload is closest on the Affinity chart. It is competing for disk resources. To attain better performance for ServiceClass1, the objective is to guarantee more access to disk resources.

STEP 1: Partition Targets

One possible Dynamic Cache Partitioning change to achieve the objective is to give 5% more cache to the Default Partition, reducing inefficient cache usage in ServiceClass1 and allowing the Default Partition to use the cache. More cache for the Default Partition reduces its disk access and its disk resource competition with ServiceClass1.

symqos -sid 01 -cp -name ServiceClass1 modify -target 15

The Default Partition will gain the additional 5% target allocation, since the default is what remains after the other partitions are defined. Observe the resulting partition parameters:

symqos -sid 01 -cp list -settings
STEP 2: Partition Max and Min

Although ServiceClass1 is disk intensive, it does display a cache use spike, so we won't reduce the value for the maximum partition size. When required, this priority partition can increase cache use to the maximum value already established in the sample period. Instead, cache resource competition between the default partition and ServiceClass2 will be influenced to provide "opportunity" for the default partition to take a little priority over ServiceClass2 if they are contending for the cache resource. Improving the default partition over ServiceClass2 will ultimately improve ServiceClass1. Reduce the maximum partition size of ServiceClass2 by 5%:

symqos -sid 01 -cp -name ServiceClass2 modify -max 45
symqos -sid 01 -cp list -settings
STEP 3: Donations

Finally, the Donation time can be influenced to create an environment with a bias toward achieving our goals. The goal is to improve the disk intensive ServiceClass1 by giving its competitor workload more cache and thereby reducing disk contention. The competitor workload (the default workload) will be given a higher priority over ServiceClass2 as defined by cache use efficiency, i.e. the donation time.

The average Fall Through Time acts as a guide for Donation time. Consider: if a cache slot is falling out of the Least Recently Used (LRU) ring in 60 seconds, then the time since last access is 60 seconds. This one data point can guide Donation time considerations.

ServiceClass2 experiences a minimum average Fall Through Time of 80 seconds. To positively bias donation of cache slots to other partitions, the donation time can be set slightly below this number. With a Donation time of 70 seconds, the least efficiently used slots in ServiceClass2 will be available if other partitions demand the resource.

The DEFAULT_PARTITION experiences large variations in Fall Through Time, but our desire is to influence this partition towards keeping more cache slots. Set the Donation time for this partition to 90 seconds.

ServiceClass1's Fall Through Time indicates it generally doesn't use cache very efficiently, but there is one spike that we will want to accommodate. Set the Donation time to 40 seconds.

symqos -sid 01 -cp -name ServiceClass1 modify -time 40
symqos -sid 01 -cp -name ServiceClass2 modify -time 70
symqos -sid 01 -cp -name DEFAULT_PARTITION modify -time 90
symqos -sid 01 -cp list -settings

Partition sizes flex around the Target values. Slot donation occurs according to the cache use
efficiency measure represented by Donation time. The settings above are intended to provide a bias that supports easier disk access for ServiceClass1 by making the default partition (its disk resource competitor) the recipient of cache slot donations.

Enable Cache Partitioning and Observe the Result

Small changes were made to the Partition parameters in an effort to prioritize one ServiceClass over the others. Dynamic Cache Partitioning can now be enabled and the result reviewed:

symqos -sid 01 -cp enable

The workloads are run with the Partition parameters controlling cache usage. The Unisphere graphs shown above are re-examined to verify the success (or failure) of the intended result. With this feedback, more small alterations can be made, or the environment can be left until the workloads change enough over time to warrant a repeat of the management cycle.

Conclusion

The exercise above introduces concepts of Dynamic Cache Partitioning including:
• Analyzing the current environment to understand neutral partition implementation
• Understanding workload cache or disk affinity
• Prioritizing between workloads competing for the same resource
• Manipulating partition parameters

Although it is valuable to understand the material presented in this exercise, EMC provides a modeling tool that can greatly assist with Cache Partitioning considerations. This tool is available in consultation with an EMC SPEED representative.
Conclusion
Conclusion

Thank you for taking the time to complete the VMAX 40K Hands-on lab session. We hope these lessons spark an interest in learning more about the new features in Enginuity 5876 as well as Unisphere for VMAX.