Control Center Kss
  • 1. EMC ControlCenter permits administration and management of the entire infrastructure, from the host/server OS to the SAN and to storage systems from EMC, Hitachi, HP, IBM, etc. Through wizards, ControlCenter lets administrators perform automated storage provisioning. TimeFinder makes copies of volumes inside the same Symmetrix system (local copy); SRDF replicates sites between Symmetrix systems (remote copy).

Symmetrix architecture (the slide showed a diagram of a 950-class system with front-end directors, cache, back-end A & B director cards, and disk loops; the diagram residue is omitted here). Each hyper has a number, or symvol. Symvols are consecutive, so it is always better to allocate contiguous hypers. Every time a host writes, the write goes to cache and is then destaged to disk. The "rule of 17" comes from the older DMX or 8000 series: director configuration always starts from the highest number to the lowest, pairing 17 with 0, 16 with 1, and so on.

Hyper types: STD and BCV. STD is the normal volume and can be under RAID 1, 5, 6, unprotected, etc. BCV is used by TimeFinder, normally unprotected, but it can also have RAID 5, 6, 1, etc. Cloning can be done from STD to BCV, or from STD to STD volumes. VDEV: virtual devices used with TimeFinder Snap. VCMDB: only one VCMDB exists per Symmetrix system; it holds all the masking configuration of the Symmetrix (host HBA WWNs, switch ports, Symmetrix ports, etc.). It can be backed up from the ControlCenter host (right-click, Backup VCMDB). META: meta devices are simply several logical devices presented to a host as one larger device. Within the Console they appear in most views as several devices, though the partnered members are easy to identify. Meta devices can be concatenated (data addressed linearly) or striped (data addresses shuffled among the members). R1 & R2: SRDF volumes; R1 is the source and R2 the target device.
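The concatenated-versus-striped distinction for meta devices can be sketched in a few lines of Python (a hypothetical illustration, not EMC code; member and stripe sizes are invented):

```python
# Hypothetical sketch of meta-device addressing (not EMC code).
# A meta device presents N member hypers as one large device.
# Concatenated: logical blocks fill member 0, then member 1, ...
# Striped: logical blocks rotate across members in fixed-size stripes.

def concat_member(block, member_blocks):
    """Member index holding `block` in a concatenated meta."""
    return block // member_blocks

def striped_member(block, stripe_blocks, members):
    """Member index holding `block` in a striped meta."""
    return (block // stripe_blocks) % members

# With 4 members of 1000 blocks each and a 10-block stripe,
# logical block 35 lands on different members depending on the layout:
print(concat_member(35, member_blocks=1000))            # member 0
print(striped_member(35, stripe_blocks=10, members=4))  # member 3
```

This is why striped metas spread sequential I/O across members while concatenated metas do not.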
Gatekeeper (GK): a small device used to send commands to the Symmetrix system. These volumes must be presented to any host on which ECC / Solutions Enabler will be installed. The recommendation is to
  • 2. have at least 4 GK volumes allocated to the host via Fibre Channel. If no GK is available, another hyper can be allocated to the host. A gatekeeper normally has 6 cylinders (2.88 MB). CLARiiON architecture: the storage processors in a CLARiiON have separate caches for reads and writes, but the caches are not mirrored as in Symmetrix. (The slide showed a diagram of two WS-X9530 supervisor modules with management and console ports; the diagram residue is omitted here.) To allocate disks from CLARiiON arrays, we must create RAID Groups; when creating a RAID Group, we must choose the type of protection desired, which also depends on how many disks were selected for the group. * In CLARiiON, instead of TimeFinder we have SnapView, and instead of SRDF, MirrorView. The SAN, or Storage Area Network, is a network of storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. Compared to managing hundreds of servers, each with its own disks, SANs improve system administration. By treating all the company's storage as a single resource, disk maintenance and routine backups are easier to schedule and control. In some SANs, the disks themselves can copy data to other disks for backup without any processing overhead at the host computers. StorageScope: an independent Oracle database for reports. A process gathers information from the ECC database into the StorageScope DB (Extract, Transform and Load - ETL).
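The "6 cylinders (2.88)" gatekeeper size can be checked with a quick calculation, assuming the classic Symmetrix geometry of 15 tracks per cylinder and 32 KB per track (an assumption; the source only gives the figures "6 cylinders" and "2.88"):

```python
# Back-of-envelope check of the gatekeeper size quoted above.
# Assumed classic Symmetrix geometry (not stated in the source):
#   1 cylinder = 15 tracks, 1 track = 64 blocks of 512 bytes = 32 KB.
tracks_per_cyl = 15
kb_per_track = 64 * 512 // 1024   # 32 KB
cylinders = 6                     # typical gatekeeper size

size_kb = cylinders * tracks_per_cyl * kb_per_track
print(size_kb, "KB =", size_kb / 1000, "MB")  # 2880 KB = 2.88 MB
```

Under that geometry the arithmetic reproduces the 2.88 MB figure exactly, which suggests the unit the slide omitted.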
  • 3. ControlCenter is made up of three layers. They can be thought of as data visualization, data storage, and data collection, but are more formally called the User Interface Tier, Infrastructure Tier, and Agents Tier. The infrastructure tier is responsible for data storage and is made up of three separate processes (Server, Repository, and Store). The console tier handles data presentation; the main presentation tool is the ControlCenter Java Console, but other presentation tools are used for deeper analysis. The agent tier is responsible for data gathering: agents of different types monitor and manage objects such as arrays, switches, and applications. This will be covered in more detail in the next lecture. The Store is the entity that writes all the agents' information to the Repository. The Java Console permits complete management, but the Web Console is only for managing alerts. The ECC Server is the host from which all commands are run and saved into the Repository. Data collection policies keep the Repository continuously updated. The Master Agents are mandatory on the servers you want to administer from ControlCenter; they manage the other installed agents (they permit updates, patching, and install/uninstall). The host (or OS) agent collects the host's health and alerts and presents them to ControlCenter; it passes information to ECC and StorageScope. On ESX servers, you must install the Master and VMware agents in order to view the LUNs mapped to virtual hosts. * For every device you want to administer from ControlCenter, you will need to install a specific set of agents (Master + device-specific agents) to be able to control/administer it (valid for switches, other storage arrays, etc.). All the information you work with in ControlCenter is taken from the Repository, not from the end device. Every change made to the configuration here is later applied to the end device (host, switch, storage, etc.).
** Authentication is done at the OS level; an “eccadmin” account must be created for ControlCenter.
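The tier description above can be sketched as a toy model (all class and field names are hypothetical, not ControlCenter internals): agents feed the Store, the Store writes to the Repository, and the Console reads only from the Repository, never from the end device.

```python
# Toy model of the ControlCenter tiers described above (all names hypothetical).
class Repository:
    def __init__(self):
        self.objects = {}          # managed-object state keyed by name

class Store:
    """Infrastructure-tier process that writes agent data to the Repository."""
    def __init__(self, repo):
        self.repo = repo
    def ingest(self, object_name, data):
        self.repo.objects[object_name] = data

class Console:
    """UI tier: reads from the Repository, never from the end device."""
    def __init__(self, repo):
        self.repo = repo
    def view(self, name):
        return self.repo.objects.get(name)

repo = Repository()
store = Store(repo)
store.ingest("symm-0001", {"type": "array", "devices": 2048})  # agent -> Store
print(Console(repo).view("symm-0001")["devices"])              # 2048
```

The point of the sketch is the data flow: the Console's view is only as fresh as what the Store last wrote, which is why the collection policies in the next sections matter.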
  • 4. A key management server can be utilized with ControlCenter for authentication using certificates. If HA is desired, install ECC on a Microsoft Cluster, or use SRDF/CE for MSCS to provide redundancy. * Agents that have the same prerequisites can be installed together on the same server. Communication between the Symmetrix and ECC is over FC, but for the rest of the devices it is over IP.
  • 5. Agent Discovery. Automatic Discovery: many agents discover data objects automatically. – Host Agents. – Storage Agent for Symmetrix. – Symmetrix SDM Agent. Assisted Discovery: these agents must discover their objects by administrator action. – Common Mapping Agent. – Database Agent for Oracle. – Fibre Channel Connectivity Agent. – Storage Agents for CLARiiON, Centera, Invista, NAS, SMI, HP StorageWorks, HDS, and ESS. – VMware Agent. Use the Discover menu to perform Assisted Discovery; use Discover > Review Progress to see the results of the discovery process. For ECC 6.1 and SAN switches/fabrics with firmware 6.1 or above, the proxy must be configured to allow communication with the rest of the SAN switches. Unidentified Ports in ControlCenter: ControlCenter matches WWNs to HBAs, and HBAs to switch ports, automatically when: - the FCC Agent discovers switches, which report all connected WWNs; - Host Agents discover HBA WWNs. This will not happen correctly when: - the host does not have a Host Agent; - a WWN from an unsupported HBA, tape library, or other object is discovered. The result is Unidentified Ports in the Tree Panel, which make it difficult to allocate storage to a host by WWN and to report on storage utilization by WWN. Migration Manager Overview: you can use the Host Migration Manager to manage unidentified ports in a large SAN environment. It allows the bulk creation of host objects and the association of WWNs to hosts. If you have a list of hosts and WWNs, you can create an input file to specify their relationships, and then use the Migration Manager to import all of them into ControlCenter. This tool does not provide the same level of detail as manually adding a host, but it may be an easier option if you have a large number of hosts to add. * The SAN Manager license is required to use the Host Migration Manager.
  • 6. To use the Host Migration Manager, prepare a file in the format shown here. A “name” and “id” row is needed for each HBA entry. The world wide name of the HBA appears at the left of the line, before the first dot, and the related host name appears at the right, after the equals sign. The file must be named as shown here, but the utility will allow you to retrieve the file from any folder on the Console host. Once the file is created, launch the Host Migration Manager from the ECC Administration task menu. It will prompt for the location of the file and show a preview of the associations it will create; use the preview to locate syntax errors. You can find the hosts that it creates in the Hosts tree. Drill down to find the HBAs you associated with the hosts. Host objects created this way have fewer details than normal hosts, of course, and they are marked with a diamond icon to show that they are not being actively discovered by an agent. Menu bars, panels, and icons are used to manipulate the ControlCenter Console display. At the top of the window is the menu bar that most window applications feature; many of the commands and views can be accessed from one of the menus available here. Below the menu bar is the task bar. Clicking on any of the five tasks (Storage Management, Monitoring, Performance Mgt, ECC Administration, Data Protection) alters the Console display to present features tailored to that task: the menu selections on the menu bar change, and quick-access icons appear on the tool bar to present features for that task. Pull-down menus associated with each task are used to change Target Panel views. Alerts information is located at the far right of the task bar; the number and severity of the current alerts are displayed here. Clicking on the All Alerts button launches an Alerts view to display all current alerts. The tool bar presents buttons for the six common views: Alerts, At A Glance, Properties, Topology, Relationship, and Performance.
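The WWN-to-host file layout described above can be illustrated with a small parser (hypothetical: the exact file layout and naming rules live in the ECC documentation; here lines are assumed to look like `<wwn>.name=<host>`):

```python
# Hypothetical parser for a Host Migration Manager-style input file.
# Assumed line shape (the real format is in the ECC docs, not shown here):
#   <HBA-WWN>.name=<hostname>
def parse_migration_file(text):
    """Return {hostname: [wwn, ...]} from 'wwn.name=host' lines."""
    hosts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line or "." not in line:
            continue                      # skip blanks and malformed rows
        left, host = line.split("=", 1)
        wwn = left.split(".", 1)[0]       # WWN sits before the first dot
        hosts.setdefault(host.strip(), []).append(wwn.strip())
    return hosts

sample = """\
10000000c9345f01.name=hostA
10000000c9345f02.name=hostA
10000000c9abcd03.name=hostB
"""
print(parse_migration_file(sample))
# {'hostA': ['10000000c9345f01', '10000000c9345f02'], 'hostB': ['10000000c9abcd03']}
```

Grouping by host like this mirrors what the import does: bulk-creating host objects and attaching their WWNs in one pass.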
Clicking on one of these buttons changes the Target Panel view. Icons to the left of the view buttons can be used for printing, print preview, exporting the Target Panel data to a file, and launching ControlCenter help. Quick access icons to the right of the view buttons are used for common tasks. At the bottom of the display are two text areas that display hints to guide a user through an operation (“Right click for popup menu”) and status information about ControlCenter objects. Also located on the bottom of the display are icons that launch the At A Glance View, the Consoles (At A Glance) View, and the Agents view. Holding the mouse over each of these icons provides a quick summary about the number of users logged in, number and severity of alerts, and number of active agents.
  • 7. The most common ways to manage the display of objects from the Tree Panel in Target Panel views are: - Drag the object from the Tree Panel to the desired view in the Target Panel. Large numbers of objects can be added by dragging the folder that contains them. If an object cannot be displayed in the view, an explanation will appear in the Hint Area. - Check the box to the left of the object or folder in the Tree Panel. Unchecking the box selectively removes objects from the view. - Right-click the object and use the Add to View menu option. Use the sub-menu to select one of the current views to add the object to. Other options on this menu, such as Properties and Alerts, also add information about the object to an existing or new Target Panel view. - The eraser icon on a view's title bar can be used to remove all of the objects from that view. Multiple Tree and Target Panels can be created using the horizontal and vertical split-pane buttons on the upper right of each panel. The delete-pane button ("X") next to them can be used to delete an unwanted panel. At least one Target and one Tree Panel must remain on the Console display; the last one of each cannot be deleted.
  • 8. You can create groups of managed objects to simplify your ControlCenter monitoring and management tasks. You can easily add the objects in a group to one of the views by just selecting the group in the panel. This makes it easy to limit your views to a single department or line of business. These groups also appear in the StorageScope and Performance Manager tools, where you can use them to create storage allocation reports or performance graphs related to the objects in the group. When a view is filtered, the filter icon at the top will be blue; if you are not seeing the objects you expect, make sure the view is not being filtered.
  • 9. A Properties view shows tabular information about objects. Different values are displayed for different types of objects. The example above displays information regarding the entire Symmetrix, its front-end directors, and its back-end directors. By selecting individual arrays, you can find such things as the Symmetrix serial number, model number, configured capacity, unconfigured capacity, the number of devices that are standard or BCV, and much more. By selecting front-end Fibre ports you can see whether the ports are online or offline, how many ports are managed by the processor, and how many devices are mapped to the director. The Disk Director properties are similar to the front end, with the addition of not only how many physical disks are mapped, but also how many hyper volumes reside on those disks. The Last Discovered and Last Modified columns are the last columns in the Properties view for many objects. The Last Modified column updates whenever information in the row changes; for example, the Last Modified time will change if the physical capacity of the array changes or the alert severity changes. The Last Discovered column shows the time the object was discovered by an agent within ControlCenter. There is a specific data collection policy for each object type (i.e., hosts, switches, and arrays) that runs and updates the Last Discovered column. For example, the Configuration policy schedule causes the Storage Agent for Symmetrix to read the configuration of the array at regular intervals. Since the policies discover whole objects, the Last Discovered time is only available for top-level objects like arrays, hosts, and switches; Symmetrix Fibre Channel Directors and back-end directors do not have a Last Discovered column, for instance. In short, by using the Properties view, you can gather basic information about any object within a Symmetrix, all the way from a single device to the whole array.
  • 10. Sub-objects are organized into different folders under the main object in the Tree Panel. Symmetrix devices are divided into Mapped, Unmapped, and System folders. Within each folder are subfolders that you can organize by type, name, or size. Open a folder to show the devices. On the left we can see the Disk Directors (Disk-Fibre), or back end, and the SRDF reserved ports with their corresponding devices. On the right you can see the device types, such as NP = unprotected, R5 = RAID 5, M1 & M2. There is also another way to recognize their type by icon: with shadow = BCV; pink = RAID 5; nothing = unprotected.
  • 11. Another very good view for examining device characteristics is the TimeFinder view. This view, opened by selecting TimeFinder under the Data Protection task drop-down list, displays selected devices based on their TimeFinder relationships to other devices. It can be used for research while architecting a TimeFinder solution, as well as for executing and monitoring relationships in a production environment. TimeFinder architecting - part of architecting TimeFinder solutions is identifying devices that are available to be used. By selecting the BCV container of a Symmetrix and adding it to the TimeFinder view window, you can very quickly identify the available devices by sorting the BCV column and looking for devices that are not paired with a standard device. You can easily populate device groups by dragging devices from the TimeFinder view into groups in the tree panel. You can then confirm changes once the solution has been implemented by examining device group membership and standard-BCV partnering in the view window. Execution and monitoring - whether you use the Console or scripts to execute TimeFinder tasks, the TimeFinder view can be an invaluable tool when managing these operations. From an execution standpoint, you can quickly select, sort, and identify the devices you would like to work with, perform pre-operation checks such as invalid track counts and pair states, and then execute commands, all from within the view window. From a monitoring perspective, you can keep track of operation progress because the view is updated in real time: you can monitor changes in state, track-table merging processes, MB-out-of-sync numbers, and even estimated time to completion. There is a similar SRDF view for remote replication monitoring. (The arrows in the slide point to physical disks containing hypers.)
  • 12. The Visual Storage view is accessed from the Storage Allocation task pull-down. The Visual Storage view shows the logical and physical configuration of Symmetrix, CLARiiON, HDS, and HP StorageWorks storage arrays. - The top panel of the view shows the logical arrangement of host-addressable devices. Each host port is displayed with each of the devices mapped to the port, along with their LUNs (logical unit numbers) and Symmetrix device numbers. In the illustration above, note that the #1 port of each Symmetrix Fibre Adapter has the same devices mapped to it--two access paths to the same devices might be used in a multi-pathed or clustered environment. - The bottom panel of the view shows the physical arrangement of the devices. Each disk port is displayed with the physical disks mapped to it; each disk displays the logical devices stored on it. In the illustration, several Symmetrix Disk Adapters with two ports (C and D) are visible. Only one physical disk is visible on each port without scrolling down, and the disks list many devices. - The very bottom of the view contains a mouse-over information box; moving the mouse over objects in the view changes this display. In this example, the administrator has placed a Symmetrix into the Visual Storage view and then selected a host. Selecting a host highlights all of the devices that host can see in the view panel. This allows an administrator to very quickly identify device locations and visually identify resources that might be overly taxed. An administrator can highlight devices in the view panel by selecting hosts, databases, filesystems, device groups, and many others. The Performance View within the ControlCenter Console can present real-time performance statistics as well as 24 hours of historical information (extended to 7 days with the Performance Manager license key).
  • 13. Here you can see the hypers on the disks and, for the marked hypers, on which disks they reside. 500601 = CLARiiON; 500604 = Symmetrix (EMC WWN prefixes). The Topology View displays the physical layout of the environment in a pictorial rendering of the SAN. You can build this view by selecting objects in the tree panel with the Topology view open and active. Hosts, connectivity devices, storage containers, adapters, ports, links, fabrics, user-defined objects, user-defined groups, zone members, and zone set members can be displayed in the map. With the Topology view open and in focus, the tree-selected objects, the objects to which they are connected, and the connectivity relationships among them are displayed in the map.
  • 14. The Path Details view shows which paths exist (the mapping, or I/O path) between a host device and a storage logical volume across a storage area network. The information in Path Details correlates data received from host, SAN, and array agents. You can use this view to examine the host-device-to-logical-volume mapping and resolve unmapped device paths. Only ports and fabrics that have viable connectivity appear in the Path Details view. The Path Details view has three panes. The top pane is a selection area used to filter the objects displayed: click on one (and only one) object from each of the drill-down boxes that appear in the selection area and then hit Show Devices to display the corresponding devices. The Relationship view is one of the most powerful views available in ControlCenter. This view is a visual display of the relationship between host storage structures (databases, file systems, volume groups) and their logical and physical locations in the storage arrays. It can be used for storage allocation planning by helping identify the current storage layout, for performance analysis by identifying devices to graph, for business continuance activities by helping identify STD or R1 devices, and for a myriad of other administrative tasks. A Relationship option appears on the context-sensitive menu when a user right-clicks on an object (objects that do not provide Relationship view information do not have this option). Under the Relationship option are a number of choices which parallel the information on the full Relationship view; choosing one of the sub-choices brings up a properties-like dialog that lists the related information. Storage Device Masking functions as a component of the EMC ControlCenter storage management suite of tools and controls the masking policies of hosts and host ports to volumes in the SAN.
It operates in Fibre Channel switched fabric or hub environments and has compatibility with a broad array of hardware and software platforms using Fibre Channel host bus adapters. The Masking view can be used to identify those devices that have been masked to a host. This view can be launched from within the Storage Allocation list in the Task Bar. Select one or more hosts in the Tree Panel to display in the view and then make these selections as numbered in the selection pane:
  • 15. 1. Storage type: choose the type of array storage to display. Currently only Symmetrix and StorageWorks arrays are supported in this view; storage from other array types is masked and monitored from other views. 2. Storage array: choose the array. Since you dragged one or more hosts to populate this view, only the arrays that have storage masked to those hosts will be available. 3. Storage port: choose the port. Again, only ports supporting devices masked to the hosts you dragged will be available here. 4. Device Filter Options: choose filters to limit the devices displayed. You can show devices for which the host has or does not have access rights, devices masked to your selected hosts or to all hosts on this port, or devices that are or are not reserved. 5. Click Show Devices to display the selection in the display pane at the bottom. The devices displayed are the ones masked to your host or hosts that meet the options you chose. The device icons show the type of device and whether it is single-pathed (line to the left below the device) or multi-pathed (lines to the left and right below the device); the background color shows the access status. Data is updated in real time and color-coded according to performance alert severity. The metrics can be displayed in table or chart form. It is important to note that this view is not intended to be the primary tool for performance troubleshooting; the Performance Manager application is a far more powerful tool for detailed performance analysis and trending.
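The device-filter options in step 4 can be sketched as a simple filter (hypothetical record layout; not the actual ControlCenter data model):

```python
# Hypothetical sketch of the masking-view device filters described above.
# Each device record is a plain dict; field names are illustrative only.
devices = [
    {"dev": "0001", "masked_to": {"hostA"}, "reserved": False},
    {"dev": "0002", "masked_to": {"hostB"}, "reserved": True},
    {"dev": "0003", "masked_to": {"hostA", "hostB"}, "reserved": False},
]

def filter_devices(devices, host, only_reserved=None):
    """Devices masked to `host`, optionally filtered by reserved state."""
    out = [d for d in devices if host in d["masked_to"]]
    if only_reserved is not None:
        out = [d for d in out if d["reserved"] == only_reserved]
    return [d["dev"] for d in out]

print(filter_devices(devices, "hostA"))                      # ['0001', '0003']
print(filter_devices(devices, "hostB", only_reserved=True))  # ['0002']
```

Each filter option in the view narrows the displayed set in exactly this cumulative way: first by host masking, then by the reserved/access criteria.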
  • 16. Host Support for ControlCenter The following hosts are supported by ControlCenter * Dedicated Host agents – Microsoft Windows – Hewlett-Packard HP-UX – IBM AIX – IBM MVS – Linux – Sun Solaris * Proxy management via Common Mapping Agent (CMA) – Compaq Tru64, OpenVMS – Fujitsu-Siemens BS2000 – IBM OS/400 – Windows, Solaris, AIX, Linux, and HP-UX host monitoring capability. * Hosts supported through assisted discovery – VMware ESX Servers For specific OS levels see the ControlCenter Support Matrix. A few other hosts can be managed by proxy using the Common Mapping Agent (CMA). The Common Mapping Agent always resides on a different host and manages by proxy over the network. The Common Mapping Agent can also manage several types of dedicated agent hosts by proxy. This is sometimes useful because it allows management without installing agent software on the hosts. However, only limited functionality is available and the Solutions Enabler component Symapi Server must be running on the target host. Server Virtualization Support * VMware ESX Servers can be discovered and monitored using ControlCenter. * VMware agent supported on physical Windows platforms. * ControlCenter views show ESX and guest OS information. – Properties – Relationship – Topology – Path Details – Free Space – Alerts – Agents * Storage is provisioned to ESX HBAs then to VM guests through the VMware ESX server user interface. Server virtualization is becoming widely adopted by businesses to cut costs on physical server spending without compromising resources and performance. Many virtual machines can be monitored from one simple user interface and resources can be proportioned out on a per OS basis. As stated previously, ControlCenter provides server virtualization support for VMware ESX Servers. They can be discovered and monitored using the ControlCenter console interface. The VMware agent is supported only on Windows physical platforms. 
Once installed and running, the VMware agent provides users and administrators with a number of views showing information about ESX Servers and their guest VMs.
  • 17. Properties – Hosts. In the example above, by selecting whole hosts you can see inventory-type information such as host name, operating system, OS levels and version numbers, number of CPUs, amount of installed RAM, and even the configured time zone. Selecting devices displays device naming, array device numbers, size, and utilization information (and more). You can even select individual hardware components such as HBAs to determine driver and firmware information. Much can be found using this view as a research aid. Properties – Oracle Database. Database and Backup Support for ControlCenter. The following databases are supported by ControlCenter: * Dedicated database agent – Oracle. * Proxy management via Common Mapping Agent (CMA) – SQL Server – Sybase – Informix – DB2. * Dedicated backup agent – EMC EDM – IBM Tivoli – Legato Networker – Veritas NetBackup. For specific database versions see the ControlCenter Support Matrix. Reporting of total database capacity is supported for all databases supported by the Common Mapping Agent (CMA), with these caveats: * Used capacity for Oracle databases is tablespace used capacity; for Sybase databases, used capacity is used database capacity. * For Sybase databases, the Common Mapping Agent collects total and used data and database capacity. For all other supported database types, the Common Mapping Agent collects total data and database capacity and assumes that used capacity is the same as the total.
  • 18. Print, Preview, Export and Launch
  • 19. (no notes for this slide)
  • 20. Here we can see the relationship between hosts and storage devices, with all the media in between them. Also, in the second image, under Visual Storage, we can see that selecting a Disk Director, or a hyper on it, shows in the lower portion of the image the physical disks on which it resides. At A Glance view (showing all).
  • 21. Web Console – Login. Web Console Views: the Web Console has several views that contain the same information as their Java Console counterparts, though it is displayed in a different way. The application removes the normal browser menu bar and management buttons, so you have to use the tool's own links for actions like print and save. The Print button displays either the data from the tree or the main panel in a browser page without the menus, allowing you to print the data easily. The Export button saves the data from the tree or main panel to a file in HTML, JPG, or CSV format. Properties View – Symmetrix Host Directors: same as the Java Console. Properties View – SAN ZoneSet: same as the Java Console. Performance View – Symmetrix: same as the Java Console. Relationship View – Host: same as the Java Console. Alerts View: the only view in which data can be modified, e.g., assigning new alerts or assigning alerts to users. Command History View – Symmetrix: system audit trail.
  • 22. The ControlCenter Security Model. EMC ControlCenter's security model is very simple and much like many others in the industry. User accounts are created within the ControlCenter Console. Users are then placed into groups for easier administration, monitoring, and reporting. Authorization rules are then created which define a set of objects and the rights that the rule grants on those objects. Lastly, groups are associated with the rules, effectively granting users within that group the permissions defined by the authorization rule. ControlCenter Users: * ControlCenter user account logins are validated using the data center's underlying security model. * The eccadmin user must be a valid local or LDAP user on the ControlCenter Server host. All other users can be associated with one of three types of host user accounts: – a local Windows user account created on the ControlCenter Server; – a Windows domain user account; – a user account in an LDAP directory. * ControlCenter login uses Secure Sockets Layer (SSL) to encrypt login and password information between the ControlCenter Console and Server. When installing ControlCenter, you need to specify which method of user validation is to be used: LDAP or Windows domain authentication. This can be changed later, but you cannot use both types at the same time. All user and password information is encrypted using SSL between Console and Server hosts to protect that information. ControlCenter Authorization Rules: * Rules grant permissions to a single user or to groups of users. * Permissions determine what actions a user or group may perform on a given object or user-defined group. * The "Any User Rule" is applied to newly created ControlCenter users and allows monitoring-only permissions. * To make any management changes, the user must be added to a group or an authorization rule must be assigned to the individual.
* Only one Rule can be applied to a ControlCenter User or User Group. * Users can be members of multiple Groups.
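The user → group → rule → permission chain, including the monitoring-only "Any User Rule", can be sketched as a toy authorization check (all names are hypothetical, not ControlCenter internals):

```python
# Toy model of the ControlCenter authorization chain described above.
# Users belong to groups; one rule per group grants actions on object types.
user_groups = {"alice": ["storage-admins"], "bob": []}
group_rule  = {"storage-admins": {"Symmetrix": {"Provision", "Monitor"}}}
ANY_USER_RULE = {"Symmetrix": {"Monitor"}}   # monitoring-only default

def allowed(user, obj_type, action):
    """True if any of the user's rules grants `action` on `obj_type`."""
    rules = [group_rule[g] for g in user_groups.get(user, []) if g in group_rule]
    rules.append(ANY_USER_RULE)              # applied to every user
    return any(action in r.get(obj_type, set()) for r in rules)

print(allowed("alice", "Symmetrix", "Provision"))  # True: via storage-admins
print(allowed("bob", "Symmetrix", "Provision"))    # False: monitoring only
print(allowed("bob", "Symmetrix", "Monitor"))      # True: Any User Rule
```

Note how the "only one rule per user or group" constraint shows up: each group maps to exactly one rule, and a user's effective permissions are the union over their group memberships plus the default rule.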
  • 23. ControlCenter User Groups. “symcli itself does not enforce ControlCenter permissions, so there is nothing to prevent an unauthorized user from running symcli commands. One option would be to use the Symmetrix ACL flag to provide a group with SRDF commands, for example.” Security Considerations: * Users: – Controlling the administrators that configure the storage environment is a critical part of automated networked storage security. Controlling administrators' actions requires enforcing general security rules: . Each administrator should have an individual account; there should be no shared accounts. . Strong password policies for administrators should be enforced; passwords should be complex and regularly changed. . Administrators should only be authorized to perform the management actions required for their job. . Administrator actions should be audited. * Groups/Rules: – Groups should be designed to reflect a particular job description or task. – Rules should be written as restrictively as possible, to reflect the access rights required by the groups that will be associated with them (e.g., Payroll Backup Group - TimeFinder). To create a new authorization rule, right-click on Authorization Rules and choose New from the menu. Type a name for the rule at the top. If you are associating this rule with a group, it will make your administration easier if you give the rule a similar name. Then choose the user or group to associate the rule with in the top part of the dialog. Choose the actions or privileges to associate with this rule in the bottom of the dialog. Start by choosing to organize the actions by Groups/Instances or Types. Choosing Types (illustrated here) lets you choose actions that will apply across all objects of that type; choosing an action related to Symmetrix arrays will give the users power over all arrays, for instance. Choose an object type first, and then one or more of the available actions. You can repeat this process to add any number of actions to your rule.
  • 24. Add New User Add User to User Group New Authorization Rule By Type
  • 25. New Authorization Rule By Group/Instances * Groups can exist only in ControlCenter; there is no need for groups to exist in Local or LDAP Authentication. Data Collection Policies (DCP) * Formal set of statements used to manage the data collected by ControlCenter agents. * Policies specify the data to collect and the frequency of collection. * ControlCenter agents have predefined collection policies and collection policy templates. – Policy Definitions – Policy Templates * Managing Data Collection Policies: – Edit or Copy existing Data Collection Policies. – Create new Data Collection Policies from the template. – Delete Data Collection Policies. – View Data Collection Policies that apply to various managed objects. – Stagger start times to help distribute work load. By default, many generic policies are configured, but they tend to be broad in scope so as to gather as much data as possible and populate the Repository, allowing ControlCenter to be quickly incorporated into business processes. It may be necessary to create new policies in order to prioritize your discoveries. For example, say you have ten Windows 2000 servers. Four of them are mission-critical database servers that you want monitored every six hours, and six of them are corporate file servers that you only need updated once a day. Because the default policy includes all Windows 2000 servers, it is necessary to edit the default policy as well as add a second policy in order to cover the two separate business needs. Refer to the ControlCenter Performance and Scalability Guidelines document (available on Powerlink) for details of the recommended number of managed objects to be managed by a single Data Collection Policy definition. Data Collection Policies can be managed from within the Administration > Data Collection Policies folder. There are two subfolders that are used depending on whether you are creating new policies from scratch or managing existing policies.
  • 26. DCP Policy Definitions/Policy Templates Managing the Data Collection Policies consists of: * Assigning Data Collection Policies — Each agent is assigned a set of pre-defined policies and a set of policy templates. You can define new data collection policies from a pre-defined policy or from a policy template. * Editing Data Collection Policies — You can edit all settings for an existing data collection policy; however, you can only edit the schedule and properties defined by the data collection policy templates. * Copying Data Collection Policies — Use the copy policy function when you want to have more than one data collection policy with similar settings. * Deleting Data Collection Policies — You can only delete policies in the Policy Definitions branch of the Administration tree. Data collection policy templates cannot be deleted. * Viewing Data Collection Policies — You can create a tabular view of specific data collection policies and template settings. Editing an Existing Data Collection Policy: * Right-click to: – delete – edit – add/remove object – copy – disable/enable
  • 27. Editing an Existing Data Collection Policy Schedules You can use schedules to specify the times ControlCenter should evaluate alerts and run data collection policies to collect statistics. The properties appear on two tabs: Properties and Alerts/Policies (this tab appears only if the schedule is used by alerts or data collection policies). Alert or data collection policy schedules define when ControlCenter should evaluate alerts and collect statistics through a data collection policy. In a schedule, you can define the interval at which an event occurs (every 10 seconds, minutes, hours, and so on), the days of the week, and the days of the year. ControlCenter provides several pre-defined schedules, and you can define additional ones. Users are no longer able to delete or edit existing default/pre-defined schedules. Instead, users and administrators must right-click the schedule and select Copy As. The copy of the default schedule must then be given a new name, and when the editing is complete, the new name shows up in the tree panel on the left.
  • 28. Schedules – Continued Creating a New Policy from a Template Creating a New Policy from a Template - (Continued)
  • 29. ControlCenter makes it very easy to monitor configured policies through the use of the Policies View window. If you drag an object to this view, it will show all of the policies applied to it. You can determine the frequency with which they discover information, the last time the policy executed, and the host that the managed agent is running on, among other things. To populate the Policies View, do the following: * Click on the ECC Administration Taskbar and open the drop-down list. * Select the Policies View (the active view window is now a Policies View). * Next, select the object whose policies you want to look at from the tree panel and add it to the View window. Beyond monitoring your policies, the Policies View window can also be used for numerous management tasks such as enabling/disabling, deleting, and editing policies. The next time that the agent is scheduled to poll the source for data is displayed in the Next Collection column. By selecting refresh (shown here), the view is updated with the time and date of the next collection. It is important to understand that though Policy Definitions are configured and maintained in the Repository, a copy of the policy is pushed out to the Agent itself, and it is executed independently at the host level. Policies View * The “WLA” policies set is for performance gathering. Data Collection Policy Considerations When adding new hosts: * Create Discovery DCPs for each one-hour time slot within the window. * Distribute hosts evenly among the one-hour time slots. * Mix large, medium, and small hosts (as defined by the number of Host Devices) in each time slot. Schedule daily collections during an off-peak window whenever possible; especially host “Discovery.” Monitoring ControlCenter The Command History view shows all of the actions that users have taken through the ControlCenter Console.
You will see administrative tasks such as adding a new user, adding a user to the Administrators group, or editing data collection policies. You will see object management tasks also, such as editing a zone set, creating new array devices, or discovering a new database.
  • 30. Command History View The display shows the name of the action, the status, the object it was executed on, the user who executed it, and the time. If all of your users are logging in as different users—not using the common eccadmin account—this will accurately show you every change that they make through ControlCenter. Command History Data Retention Log Collection Wizard * The Log Collection Wizard collects log data about managed objects in the ControlCenter environment. * Two versions of the software allow for seamless collaboration between customers and support.
  • 31. The Log Collection Wizard (LCW) is a graphical user interface that collects log data about managed objects in the ControlCenter environment. It is installed automatically on the ControlCenter Server host. You can launch it using the desktop icon. The version installed with your ControlCenter environment is a full-featured version that interacts with your software. EMC customer service representatives have an internal version of the software that can be run independently of any ControlCenter software. The internal version can be used to demo the product to customers or to create instruction files that customers can load into their Log Collection Wizards. Log Collection Wizard Communication * Communicates with Master Agents in the environment to collect log files. * Automatically installed with the ControlCenter Server. Log Collection Wizard User Interface * Users create new or use existing instruction XML files to filter log data based on log type, file name, or host. * Send zipped log file collections to the EMC FTP Server. * Attach important files to the zipped log file collection results for additional troubleshooting information. The Log Collection Wizard communicates with Master Agents in the environment to collect log files. The Wizard is automatically installed on the host with the ControlCenter Server during the initial ControlCenter implementation. The user operating the Wizard makes selections based on the types of log data to be collected; the commands are sent to the Master Agents on the hosts that contain the managed object agent, the logs are collected, and then zipped into a log file archive on the ControlCenter Server host.
  • 32. Symmetrix Configuration Overview * Symmetrix configuration is a component of the ControlCenter Symmetrix Manager * More than one tool can configure a Symmetrix: – ControlCenter Console – Solutions Enabler, or SYMCLI – Symmetrix Management Console * Prerequisites: – ControlCenter . EMC ControlCenter Symmetrix Agent . Solutions Enabler installed on the agent host . Symmetrix Manager License – Solutions Enabler or Symmetrix Management Console . Configuration capability has been available since Solutions Enabler 4.1 . Configuration Manager License . Symmetrix Management Console License if using the GUI The configuration changes allowed in the ControlCenter Console are listed above. Each of these configuration changes is considered a Change Class. We will look at each of these change classes in more detail during the course of this lesson. * Logical Device – Create and Delete Symmetrix Devices. * Meta Device Configuration – Create/Dissolve Symmetrix Meta Volumes. * Device Mapping (SDR or Symmetrix Device Reallocation) – Map Symmetrix Devices to Front End Ports. * Device Type Definition – Convert device types: Standard, Business Continuance Volumes (BCVs), or Dynamic Reallocation Volumes (DRVs). * Device Attribute Definition – Give Symmetrix devices the Dynamic RDF or Double Checksum attribute. * Device Protection Definition – Add a mirror to an unprotected device or drop a mirror from a 2-way mirrored device. * SRDF Device Definition – Create static SRDF Device pair definitions from existing Symmetrix devices. * Port Flag Settings – Modify SCSI or Fibre Channel front-end director flags. * Symmetrix Attributes – Change global Symmetrix attributes such as the maximum number of hypers per disk, the RAID type to enable, SRDF settings, and others. * Save Pool – Create and Populate Save Pools. * SRDF/A Attributes – Change the SRDF/A checkpoint frequency, cache size, and other factors. The Command Line interface also allows the following configuration changes.
* Enable/Disable Dynamic RDF – If enabled, the Dynamic RDF attribute can be set on non-RDF devices. * Enable/Disable FBA multi access cache – Must be enabled to create Celerra FBA devices. * Restrict access to the VCMDB device – If enabled, you deny database access to all hosts except those whose HBAs have been masked to the VCMDB device. Device masking could then be performed only by those select hosts. * Change device emulation – Changes are allowed between FBA emulation types only. * Reserve physical disks as dynamic spares – Disks with no hypers must be available. A dynamic spare is invoked against a failed disk. Symmetrix Configuration Process * Configuration change requests are sent from the ControlCenter Console to the primary Storage Agent for Symmetrix. * The Symmetrix Agent sends the change requests to the Symmetrix via the Solutions Enabler API over the SCSI/FC interface.
  • 33. * The steps in a configuration change session are as follows: – Submit – Prepare – Validate – Commit – Database Refresh * Configuration change sessions cannot be aborted via the ControlCenter Console. Accessing Symmetrix Configuration Options Symmetrix Configuration is part of the Storage Allocation task set in ControlCenter. Symmetrix configuration options can be accessed in one of three ways: * Select Storage Allocation from the task bar. Highlight the Symmetrix you intend to reconfigure. From the menu bar choose Configure > Symmetrix. * Select Storage Allocation from the task bar. Highlight the Symmetrix you intend to reconfigure. Shortcut icons are available for SDR and Meta Device Configuration. * Right-click the Symmetrix you intend to reconfigure. From the menu choose Configure.
  • 34. Symmetrix Management Console Context Launch Some ControlCenter Configuration Commands handled by SMC Tip: You can group tasks of the same kind to save time; even simple changes such as creating meta devices can take several minutes when there are many of them. Remember that the Symmetrix is locked while the changes run. Symmetrix Configuration Considerations: * Configuration changes should be performed by advanced users. * Planning is key. – Determine requirements. – Understand the proposed reconfiguration prior to change. – Ensure that critical data is safely preserved. * If possible, stop I/O activity on all Symmetrix devices to be altered prior to commit. * Determine if the configuration change requires devices to be unmapped. * Ensure SCSI timeouts are set according to the Host Connectivity Guide. Solutions Enabler has a command that verifies that a configuration change can be performed on the Symmetrix unit: * symconfigure verify -sid # > The command verifies that all requirements for the host and Symmetrix are correct. Such a verification cannot be performed from the ControlCenter Console. Configuration Log and Lock SYMAPI Log: A record of all SYMAPI calls (issued via SYMCLI or via ControlCenter) is kept in the SYMAPI log files. The SYMAPI log file (symapi-yyyymmdd.log) is typically found in the /var/symapi/log directory in a UNIX environment, or under C:\Program Files\EMC\SYMAPI\log in a Windows environment. As indicated earlier, configuration changes initiated via the ControlCenter Console are directed to the Primary Symmetrix Agent, which in turn initiates SYMAPI (Solutions Enabler) calls. The configuration-related SYMAPI calls are recorded in the SYMAPI log file on the host where the Primary Symmetrix Agent is running. The information in the SYMAPI log files is useful during troubleshooting. The primary Symmetrix Agent for a given array is easily determined via the tabular Agents view.
Configuration Lock: The Symmetrix configuration lock is acquired and held for the duration of a Configuration Session in order to prevent simultaneous configuration changes. Symmetrix lock number 15 is the Configuration Lock. A configuration change cannot be initiated if the lock is unavailable. Failure to acquire the lock results in a popup error message in the ControlCenter Console and is also recorded in the SYMAPI log file. Using SYMCLI, one can release the Configuration Lock via symcfg -lockn 15 release [-force]. This should be done with extreme caution. Please call EMC Support for help in such a situation.
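A minimal dry-run sketch of the lock check-and-release sequence described above (the SID is hypothetical, and RUN=echo prints the commands rather than executing them):

```shell
#!/bin/sh
# Dry-run sketch: inspect and, with extreme caution, release Symmetrix
# external lock 15 (the configuration lock). The SID is hypothetical;
# set RUN= (empty) to issue the real commands.
RUN=echo
SID=000190101234

# Show which external locks are currently held on the array.
$RUN symcfg -sid $SID list -lockn all

# Release lock 15 only after confirming that no configuration session or
# Optimizer swap is genuinely in progress (EMC recommends calling support).
$RUN symcfg -sid $SID -lockn 15 release -force
```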
  • 35. Symmetrix Optimizer uses the same configuration change mechanism to perform swaps of hyper volumes. Symmetrix Optimizer must acquire the Configuration Lock as well when it is performing a swap operation. If a swap operation is in progress, a configuration change cannot be initiated, and vice versa. While planning a configuration change, ensure that there are no conflicts with Symmetrix Optimizer. Symmetrix Optimizer can be disabled if necessary. Identifying the Primary Agent When changing the Symmetrix configuration there is no versioning, so the only way to recover a previous configuration would be by hand. Logical Device Configuration * Select these parameters – Number of devices – Device emulation – FBA only (open systems) – Device configuration (protection) – SAVE Device? * Considerations – Free (unconfigured) space must be available on physical disks with less than the maximum allowed number of hyper volumes – A valid SSID (sub-system identifier) must be assigned to the new devices if the Symmetrix serves both open systems and mainframe – Devices can be destroyed via ControlCenter only for DMX Symmetrix (5670+) Device Configuration: To create new devices, launch the Logical Device Configuration dialog from the Configure menu. ControlCenter tries to acquire the lock on the Symmetrix. Once the lock is acquired, a warning message is displayed. In order to successfully create new devices, all the devices on the Symmetrix (excluding Virtual Devices) must be in a Ready state. If a device is not in a Ready state, chances are that there are some problems with the Symmetrix, and thus a Configuration Change will not be allowed. Click OK to continue with the Logical Device Configuration process. The Logical Device Configuration input screen is displayed.
  • 36. Viewing Unconfigured Space Device Configuration The Logical Device Configuration input window allows you to build a list of devices that you would like to configure. Choose the number of volumes to create. ControlCenter only allows you to create FBA devices. You can specify the size in MB or cylinders. The drop-down list shows you the device sizes that already exist on the Symmetrix. The recommendation is to choose the size from the drop-down list, but you can enter a different size. Choose the Configuration (Protection Type) from the drop-down list. If the devices being configured are to be used as SAVE devices, select ‘Yes’ in the SAVE device Type option. If the Symmetrix model doesn’t support SAVE devices, then the SAVE device type option is not shown on the dialog. Click Add to create an entry in the Requested Configuration table. You can change the parameters and click Add again to build a list of different device types. Click Execute to submit the configuration change.
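The same device-creation request can be sketched as a Solutions Enabler symconfigure session. The SID, device count, size (in cylinders), and protection below are hypothetical, and RUN=echo keeps this a dry run:

```shell
#!/bin/sh
# Sketch of the SYMCLI counterpart to the Logical Device Configuration
# dialog. All values are hypothetical; RUN=echo prints commands instead
# of executing them.
RUN=echo
SID=000190101234

# symconfigure changes are described in a command file.
cat > create_devs.txt <<'EOF'
create dev count=4, size=4602, emulation=FBA, config=2-Way-Mir;
EOF

# A change session walks through preview/prepare/commit; the configuration
# lock is held while the commit runs.
$RUN symconfigure -sid $SID -file create_devs.txt preview
$RUN symconfigure -sid $SID -file create_devs.txt prepare
$RUN symconfigure -sid $SID -file create_devs.txt commit
```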
  • 37. Device Configuration Input → Device Configuration Result → New devices will be in the Unmapped Devices folder The Configuration session initiated via the Console goes through a number of steps described earlier in the lesson. * Submit * Prepare * Validate * Commit After the Commit is finished, the Configuration Lock on the Symmetrix is released, and then ControlCenter initiates a Database refresh to update the ControlCenter Repository with the most up-to-date information about the Symmetrix. Meta Device Configuration. ControlCenter allows the following: * Concatenated Meta Volumes: – Create or Dissolve – Add members – Remove members (removal starts from the tail member) * Striped Meta Volumes: – Create or Dissolve – Add member: * Must have an identical Meta-BCV available on the Symmetrix to successfully add a member to a striped meta while preserving data. * Only supported on certain microcode levels – consult EMC in advance. * EMC recommends adding all members in the same session rather than adding more members later. – Removing members is not allowed * Stripe width: – EMC recommends using a two (2) cylinder (960 KB) stripe width. – In a DMX Symmetrix, the stripe width is preset at two (2) cylinders (960 KB).
  • 38. You can remove members of a concatenated meta [dissolve] (always the last member), but you must defragment it first to be sure that no data resides on the member being removed. When dealing with metas, the meta ID is always the ID of the meta head. Meta Device Configuration – Considerations * All member devices must have the same type of: – Protection – Emulation (FBA only) – Attribute (BCV or Standard) * Devices must be unmapped before they can be formed as members of a meta. * Changes to the attribute of a meta are done by changing the attribute of the meta head. * Only the meta head is mapped to a front-end port. Meta Performance Considerations * Capacity: – Largest capacity supported without RPQ – 1.1 TB. – Largest capacity possible – 16 TB. * Number of Members: – Largest number possible – 255. – Largest number tested by the Performance Group – 48. – EMC generally recommends creating smaller meta volumes rather than very large meta volumes. * Meta volumes with four, eight, and sixteen members are preferred. * Choice of members: – Member count should be an even divisor or multiple of the Disk Director count. – Spread members evenly across DA ports and processors. – Avoid members on the same physical disk. – RAID-S/Parity RAID – choose members from different RAID groups.
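The difference between concatenated (linear) and striped (shuffled) addressing can be illustrated with a little arithmetic. This sketch assumes a hypothetical four-member striped meta with the preset DMX stripe width of 960 KB, and computes which member serves a given logical offset:

```shell
#!/bin/sh
# Illustration of striped-meta addressing: with a 960 KB stripe width,
# consecutive stripes rotate round-robin across the members. The member
# count and offsets are hypothetical sample values.
STRIPE_KB=960
MEMBERS=4

member_for_offset() {
    # Which stripe the offset falls in, then round-robin across members.
    offset_kb=$1
    stripe_no=$(( offset_kb / STRIPE_KB ))
    echo $(( stripe_no % MEMBERS ))
}

member_for_offset 0       # first stripe  -> member 0
member_for_offset 960     # second stripe -> member 1
member_for_offset 3840    # fifth stripe wraps back -> member 0
```

In a concatenated meta, by contrast, the same offsets would all fall on member 0 until its entire capacity was exhausted.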
  • 39. Display Disk Location of Unmapped Devices Before creating a Meta device, it is a good idea to look at the back-end locations of the devices that you intend to use as meta members. Ensure that the devices do not share the same physical disks and that they are spread as evenly as possible across Disk Directors and ports. It is especially important to make sure the devices do not share the same disk if you are creating a striped Meta, since you would lose the effectiveness of striping the data across multiple physical drives. The Visual Storage view of the unmapped devices (change the Target Panel to Visual Storage via the Storage Allocation pull-down) shows you the back-end locations as shown in the slide. Creating Meta Volumes Device Mapping (Symmetrix Device Reallocation) * ControlCenter allows the following: – Mapping and unmapping of open systems devices to Fibre Channel (FA) or SCSI (SA) ports only. – Move/Copy devices between front-end director ports. – Modify/Specify SCSI Target ID/LUN assignments. * Considerations: – Unprotected standard devices cannot be mapped.
* Unprotected BCVs can be mapped. * Unprotected gatekeeper devices (smaller than 20 cylinders) can be mapped. – Determine the front-end director port to which the host is attached. * Devices should be mapped to more than one port in multipath and clustered environments. – Ensure that the selected Target ID and LUN are appropriate for the host. – Reconfigure the host to enable it to recognize the new device. Remember that after adding devices to a port or changing their LUN numbers, you may also need to execute some OS-specific commands to get the host to recognize the new devices. Device Mapping via the Command Line interface provides these additional features as well: * Open Systems or Mainframe (FBA and CKD) * Specify Virtual bus (vbus) address if volume set addressing is used in HP-UX * Specify CKD device number – OS/390 host * Update VCMDB with the WWN of the HBA to allow access to the device being mapped. Determine Array Port Map Device
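As a rough command-line counterpart to the mapping operations above, the statements below follow the general shape of the symconfigure map syntax; the device, director, target, and LUN values are hypothetical, the exact syntax should be checked against your Solutions Enabler documentation, and RUN=echo keeps this a dry run:

```shell
#!/bin/sh
# Hedged sketch: map a device to a front-end FA port from the command
# line. All values are hypothetical; RUN=echo prints the command instead
# of executing it.
RUN=echo
SID=000190101234

cat > map_dev.txt <<'EOF'
map dev 0100 to dir FA-3A:0 target=0, lun=005;
EOF

$RUN symconfigure -sid $SID -file map_dev.txt commit
```

For redundant paths, the same device would be mapped again with a second map statement naming a different director port, mirroring the Console's copy feature described below.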
  • 41. Copy Device to Another Port You frequently want to map the same device to more than one array port to create redundant paths. Use the copy feature of the SDR dialog for this purpose. Locate the device under the Host Directors part of the tree on the left panel—remember, it is already mapped to at least one port. Then click the additional port you would like to copy the device to in the right panel and click Copy. The same device can be copied to any number of ports by repeating this procedure. Change Device Address You should always check the device address, or LUN number, before committing your changes. Many hosts have restrictions regarding these numbers. Gaps in the numbering are frequently disallowed. You can find the automatically assigned address with the device itself in the right panel under the Host Directors part of the tree. Just click the number to change it. Execute SDR: Once you have made the changes in the dialog, click Continue to review. You can make several changes to the mapping configuration and then commit them in one event. If everything looks good, click Execute to begin the configuration change. Most changes do take some time, as the popup alert shown here suggests. Newer arrays with faster processors naturally take less time to execute changes. The progress of the change is displayed in the lower part of the window. Once the change is complete, you can use the Properties view to examine the characteristics of the device. The detailed view shows all of the ports that the device has been mapped to.
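The no-gaps rule mentioned above is easy to check mechanically before committing. This small sketch (with hypothetical sample data, assuming the LUN list is sorted and should start at 0) flags the first gap in a proposed LUN list:

```shell
#!/bin/sh
# Sanity check for a proposed set of LUN addresses on one port: many
# hosts disallow gaps in LUN numbering. The LUN list is hypothetical
# sample data; LUNs are assumed sorted and expected to start at 0.
luns="0 1 2 4"   # LUN 3 is missing

check_gaps() {
    expected=0
    for lun in $1; do
        if [ "$lun" -ne "$expected" ]; then
            echo "gap before LUN $lun"
            return 1
        fi
        expected=$(( expected + 1 ))
    done
    echo "no gaps"
    return 0
}

check_gaps "0 1 2 3"      # prints "no gaps"
check_gaps "$luns" || true   # prints "gap before LUN 4"
```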
  • 42. Execute SDR Reconfigure Hosts After SDR Operation – Solaris 2.6: disks; devlinks; devalias – Solaris 2.8: devfsadm – Solaris 2.9: /usr/sbin/update_drv * HP Hosts: Execute the following commands: ioscan -fnC disk ; insf -e * IBM AIX Hosts: Execute the following command: cfgmgr -v * Windows hosts: Add/Remove Hardware. Hosts have to be reconfigured to recognize the new devices that are available for access. Remember to perform LUN masking in a Fibre Channel switched environment. The commands to reconfigure hosts are Operating System specific. For Solaris 2.8 and higher, the devfsadm command can be used as well. In a Solaris environment, the sd.conf file should be appropriately configured as well. A disk label might also have to be applied with the format command. The update_drv command (available in Solaris 9 and higher) performs a dynamic reload operation on any loaded driver (such as the sd driver), forcing it to reread its configuration file, an operation that would have required a reboot in previous versions. It is very useful in a production environment where the host needs more disks presented to it, but rebooting is not an option. In Topology view, you can see the hosts and their connections to Storage Systems. Device Type and Device Attribute Definition * Device type definition: – Allows you to convert between Standard, BCV, and DRV device definitions * Device attribute definition: – Allows you to give a device the following attributes:
  • 43. * Double Checksum * Dynamic SRDF (R1 or R2 or Both) * Ineligible devices will be filtered out by the Console – Mapped devices – System devices, Save devices, RAID-S, Parity RAID, SRDF, TDEV, VDEV, COVD – BCV or STD devices in a synchronized state, Meta members. The Command Line interface allows the setting of the following additional attributes as well: * WORM (Write Once Read Many) * VCMDB * SCSI3 Persistent Reserve (for SUN Cluster 3.0 environments) Device Type Definition To change the device type definition, go to the configuration menu by any of the methods discussed earlier and choose Device Type Definition. After the configuration lock is acquired, the Device Type Definition dialog shown above will appear. Just click on the devices you want to change and click the BCV, DRV, or STD buttons. Changing the device type definition does not alter the protection of the device. Clicking the Execute button starts the configuration session. Note: To convert a device to a DRV, it must be configured as Unprotected. A BCV cannot be converted to a DRV directly; it must be converted to an STD first. Device Attribute Definition To change the device attribute definition, use the configuration menu to launch the Device Attribute Definition dialog shown above. To add or remove an attribute, click in the cells under the attribute column. Light blue colored cells indicate pending changes. Dynamic R1 and R2 can both be assigned to the same device if desired. If a device is capable of both Dynamic R1 and R2, it can be either the source or target of remote synchronization. It can also participate in an SRDF swap operation, or become a Cascaded SRDF R21 device. * The Double Checksum option is only available for use with Oracle databases.
  • 44. Device Protection Definition The Device Protection Definition dialog allows you to remove a mirror from a mirrored device, or add a mirror to an unprotected device. No other protection types can be manipulated using this dialog. When turning a mirrored device into an unprotected device, one of the mirror hypers is split off as a new device. It has a new device number and appears in the Unmapped devices folder of the array. The original device type changes from 2-Way Mirror to Unprotected. The original device retains its data, but the new device does not. When turning an unprotected device into a mirrored device, a new hyper is created and added as a mirror. Enough unconfigured space must exist on the array to mirror the device. You do not get to choose the disk to use for this; the Enginuity code determines the best location for the hyper. You cannot simply join two unprotected devices into a mirrored pair. The new mirror is synchronized with the original device, preserving its data. As with the other configuration commands, ineligible devices are filtered from the dialog to prevent you from accidentally selecting the wrong thing. Device Protection Definition Dialog The example at the bottom of this illustration shows a BCV that has been unprotected. Each mirror becomes an independent device, with a new device number being generated for the additional mirror. Bring up the Device Protection Definition dialog to change the protection of a device. The ControlCenter Console only shows devices on which the Device Protection changes can be made. It filters out all RAID-5, RAID-6, and Mapped 2-Way Mirrored devices.
  • 45. The pending changes are shown in blue italics. Click Execute to commit the configuration change. SRDF Device Definition The SRDF Device Definition configuration adds the R1 attribute to a local device and the R2 attribute to a remote device, making them a linked SRDF pair. This option creates static SRDF Device pairs only. As we have seen, the Dynamic SRDF attribute is enabled in the Device Attribute Definition dialog, not the SRDF Device Definition dialog. Only Mirrored, RAID-5, RAID-6, or Unprotected devices can be made into static SRDF pairs. A matched pair of eligible devices must exist on both of the arrays. Two devices match if they have the same Meta configuration (if Metas), size, emulation (FBA or CKD), and protection. As always, ineligible devices are filtered from the display. Additionally, the array must have SRDF directors, be linked to the remote array, and SRDF RA groups must already have been created (by command line). A configuration lock has to be acquired for both arrays. You cannot delete or edit a static SRDF device pair relationship using ControlCenter.
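For contrast with the static pairs above, dynamic SRDF pairs are created from the command line with symrdf createpair. The sketch below is a dry run (RUN=echo) with a hypothetical SID, RA group, and device pairing:

```shell
#!/bin/sh
# Dry-run sketch of the dynamic-SRDF counterpart: symrdf createpair.
# (ControlCenter's SRDF Device Definition creates static pairs only.)
# SID, RA group, and device pairing are hypothetical.
RUN=echo
SID=000190101234
RDFG=1

# Pair file: local device then remote device, one pair per line.
cat > rdf_pairs.txt <<'EOF'
0100 0200
EOF

# Make the local devices R1 and invalidate the remote R2 side before
# the initial synchronization.
$RUN symrdf -sid $SID -rdfg $RDFG -file rdf_pairs.txt createpair -type R1 -invalidate R2
```

Unlike static pairs, dynamic pairs created this way can later be removed with deletepair, which is why dynamic SRDF is usually preferred where the Enginuity level supports it.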
  • 46. Use any of the methods described previously to launch the SRDF Device Definition dialog. The Select Symmetrix screen pops up first. Choose the Local and Remote Symmetrix from the drop-down lists. Only those arrays in your environment that are physically connected by SRDF links will appear here. Choose the RA Group Number from the list (you cannot create a new RA Group here). Choose which SRDF type (R1 or R2) the local device will be. The Configuration Lock is acquired, and the warning message shown in Step 2 on the slide pops up. From the Select Local R1 Device column, select the device that becomes the R1. Before you pick a device from the Select Local column, the Select Remote R2 Device column will be empty. Once you select a device from the Select Local column, eligible devices are displayed in the Select Remote R2 Device column. Pick a device from the Select Remote R2 Device column and click Add to add this pairing into the Selected RDF pairs table. When an SRDF Device Pair is created, the previously separate devices are synchronized with the same set of data. You have the choice of invalidating (losing) the data on either the local or the remote device. The device that is being invalidated must be unmapped or in a Write Disabled or Not Ready state. Port Flag Settings The Port Flag Settings dialog is used to change the communications protocol settings on Symmetrix array ports. * Change settings of SCSI or Fibre Channel Host Director ports * Considerations: – Settings may have to be changed when adding hosts to existing switched configurations or when preparing an unused port for host connectivity. – EMC recommends that you temporarily suspend I/O activity to the affected ports when setting front-end port attributes. – Incorrectly changing the port flags can render your Symmetrix storage system inaccessible. Be certain of the results of any change before resetting any of these flags.
Port Flags Settings – Host Policy * Select the host from the Host Policy list. * Select the host director port to which the host must be added. * Click Add to add it to the Selected list. * Click Next. Each port flag is a bit that can be changed to improve communication with particular client platforms, such as HP-UX, Windows Cluster (MSCS), Solaris with Veritas Volume Manager, and certain other operating systems and applications. Another option is to use the “heterogeneous” bit setting. When you choose Port Flags Settings from the configuration menu, you see a Default Settings dialog like the one above. You can use it to choose the standard settings for certain operating system configurations. Just choose the policy and the port and click Add. If none of these settings suits your needs, click Next without making any changes here.
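The idea of a per-OS policy of port flag "bits" can be modeled as a named set of flags applied to a port. The policy and flag names below are examples of the kind of settings involved, not an authoritative list of Symmetrix port flags:

```python
# Illustrative model of per-OS port flag policies (the "bits" described
# above). Policy and flag names here are examples, not a product list.
DEFAULT_POLICIES = {
    "HP-UX": {"Volume_Set_Addressing"},
    "Windows Cluster (MSCS)": {"Avoid_Reset_Broadcast"},
    "Solaris + Veritas VM": {"SCSI_3"},
}

def apply_policy(port_flags, policy_name):
    """Return the port's flag set with the policy's bits switched on."""
    return set(port_flags) | DEFAULT_POLICIES.get(policy_name, set())

flags = apply_policy(set(), "HP-UX")
print(sorted(flags))  # ['Volume_Set_Addressing']
```

An unknown policy name leaves the port's flags unchanged, which corresponds to clicking Next without selecting a policy.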
  • 47. Port Flags Settings – Review/Manual Edit The second part of the Port Flags Settings dialog gives you a chance to make detailed edits to the flag settings. Every flag appropriate for the type of port can be edited here by clicking in the box. A bullet indicates the flag is set. An empty box means it is not set. Settings that cannot be edited are in gray. The two tabs select the Fibre or SCSI flags for each port. Delete Logical Devices Use the Delete Logical Device configuration menu to delete one or more devices. This is one of the few configuration menu dialogs that does not allow you to choose the devices within the dialog. All of the devices selected on the console when the menu was launched will be deleted! If you right-clicked on a single device to launch the dialog, only that device is deleted. If you selected multiple devices and right-clicked, all of them are deleted. If you selected a Symmetrix and right-clicked, all eligible devices on the array are deleted! Be especially careful with this command.
  • 48. Set Symmetrix Attributes Note: Different options are available with different versions of Enginuity. Symmetrix Virtual Provisioning * Increased Speed and Ease of Provisioning * Improved Capacity Utilization * Improved Performance Symmetrix Virtual Provisioning allows administrators to allocate storage devices to hosts quickly and easily. Virtual—or “thin”—devices are allocated from a common storage pool, making it easy to provision for tiered service levels. Because the devices only consume storage when written to, they greatly improve capacity utilization. You can initialize an application with a large amount of virtual storage, and add more physical storage to the pool as the application grows. Virtual provisioning can also improve the performance of applications, since the data is automatically striped across all of the hardware in the pool. Virtual provisioning was introduced in Enginuity 5773. Only DMX-4 or later arrays support virtual provisioning. Virtual Provisioning Architecture * Virtual (“thin”) Device: – Must be bound to a Thin Pool. – Presented to host with a fixed capacity. – Initially, no disk storage allocated. – Writes to virtual device stored in Thin Pool * Data (“thin”) Pool: – Collection of regular (non-virtual) devices. – Virtual device writes striped across data Devices. * Data Devices: – Protection: RAID-1, RAID-5, RAID-6. – All Data Devices in a pool must have the same protection.
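A minimal sketch of the allocate-on-write behavior described above: a thin device presents a fixed capacity to the host but consumes pool storage only when a track is first written. This is an illustrative model, not EMC code:

```python
# Sketch (not EMC code) of allocate-on-write thin provisioning: a thin
# device takes tracks from the shared pool only on the first write.
class ThinPool:
    def __init__(self, total_tracks):
        self.total_tracks = total_tracks
        self.used_tracks = 0

    def allocate(self):
        if self.used_tracks >= self.total_tracks:
            raise IOError("pool exhausted: write fails back to the host")
        self.used_tracks += 1

class ThinDevice:
    def __init__(self, reported_tracks, pool):
        self.reported_tracks = reported_tracks  # fixed capacity seen by host
        self.pool = pool
        self.allocated = {}  # track number -> data

    def write(self, track, data):
        if track not in self.allocated:      # first write: take a pool track
            self.pool.allocate()
        self.allocated[track] = data         # rewrite reuses the same track

    def read(self, track):
        return self.allocated.get(track, b"\x00")  # unwritten reads as zeros

pool = ThinPool(total_tracks=2)
tdev = ThinDevice(reported_tracks=100, pool=pool)  # over-subscribed: 100 > 2
tdev.write(0, b"a"); tdev.write(0, b"b")  # rewrite does not allocate again
print(pool.used_tracks)  # 1
```

Note the over-subscription in the example: the device reports far more capacity than the pool holds, which works only as long as the host does not write everywhere.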
  • 49. Virtual Provisioning Storage Allocation Terminology Managing the capacity utilization of thin devices and thin pools is an important task in ControlCenter. If a pool becomes completely utilized, the thin devices bound to it will not be able to allocate new storage tracks. When this happens, writes that require new storage tracks will return a write failure to the host. Other I/O operations will still succeed, however. The free or available space in the pool represents all of the tracks that have not been allocated to the thin devices. If your pool is over-subscribed, you will have to watch this measure carefully, since you have promised more storage to the thin devices than the pool can provide. Virtual Provisioning Devices Support: Thin devices and data devices have a maximum size of 64 gigabytes—the same limit any devices in a Symmetrix with this level of Enginuity have. Of course, EMC recommends a smaller, more flexible device size. Thin devices can be mapped and masked, but data devices are never mapped or masked to a host. Thin devices are not protected—they depend on the protection of the devices in the data pool, which can be RAID-1, RAID-5, or RAID-6. Thin devices can be replicated to other thin or virtual devices, but never to a fully-provisioned device. You can use TimeFinder/Snap to replicate a thin device to a Snap virtual device. You can make a thin Clone of a thin device. Or you can use SRDF for remote replication if both the R1 and R2 devices are thin. If a large thin device is needed, you can create a meta device of thin devices. Regular devices and thin devices cannot be combined in the same meta.
  • 50. Thin Pool Generic Alert: A new alert has been added to the Storage Agent for Symmetrix to monitor the used capacity of thin pools. It is measuring the used capacity against the total capacity of the pool. The alert will trigger at any of the thresholds shown here, and display a message showing the exact utilization. Since the Solutions Enabler processes monitoring the arrays detect this event, the alert should arrive in the Console within minutes of a change in the pool’s utilization. The alert is enabled by default, and monitors all thin pools on the array. Virtual Provisioning Properties Views Thin pools are displayed as a data device in a blue rectangle. Thin Pool properties show the total capacity, used capacity, and free capacity of the pool. The subscribed capacity is labeled “Total Capacity Allocated” in this view. Virtual Provisioning Free Space View The storage summary graph now includes measures showing the Thin Pool Available, Thin Pool Used, and pool Over Subscription. The sum of all of the components included in this bar graph except for the Over Subscription represents the configured storage of the array.
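The capacity arithmetic behind these views (free space, utilization, over-subscription) reduces to a few lines. The alert threshold values below are placeholders, since the slide shows the actual thresholds only in its graphic:

```python
# Sketch of the pool-capacity math described above. Threshold values are
# illustrative placeholders, not the product's actual alert defaults.
THRESHOLDS = [65, 80, 95]  # percent used (hypothetical values)

def pool_stats(total_gb, used_gb, subscribed_gb):
    free_gb = total_gb - used_gb                       # unallocated tracks
    utilization_pct = 100.0 * used_gb / total_gb
    over_subscription_gb = max(0, subscribed_gb - total_gb)
    crossed = [t for t in THRESHOLDS if utilization_pct >= t]
    return free_gb, utilization_pct, over_subscription_gb, crossed

free, pct, over, crossed = pool_stats(total_gb=1000, used_gb=820,
                                      subscribed_gb=1500)
print(free, pct, over, crossed)  # 180 82.0 500 [65, 80]
```

An over-subscribed pool (here 500 GB promised beyond the pool's capacity) is exactly the case where the free-space measure must be watched carefully.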
  • 51. Thin Provisioning Implementation Steps 1. Create Data Devices (SMC) 2. Create Thin Pool, populate with Data Devices, enable Data Devices (SMC) 3. Create Thin Device, bind to Thin Pool (SMC) 4. Map, Mask Thin Device as normal (ControlCenter or SMC) Create Data Devices To create data devices, right-click on the array and choose Device Configuration > Create Device (SMC) from the menu. This will launch the Symmetrix Management Console dialog for creating devices. Choose the Data Device tab for thin pool devices. The Save Device tab is for TimeFinder/Snap or SRDF Delta Set Extension devices. Commit Configuration Changes Device creation tasks require a configuration change. When you confirm a Symmetrix Management Console operation that requires a configuration change, the task will just be added to the My Active Tasks tab of the Config Session view. After adding one or more tasks, you will need to switch to the Config Session view and commit your tasks. The Symmetrix Management Console does not always switch to this view automatically, so you should get in the habit of checking here for any uncommitted changes.
  • 52. Create Thin Pool To create a thin pool, right-click on the array and choose the Device Pool Management > New Thin Pool (SMC) menu. This menu will launch the Symmetrix Management Console dialog for creating thin pools. The Save options in this menu are for TimeFinder/Snap or SRDF Delta Set Extension pool management, so avoid them if you are looking for virtual provisioning tasks. Creating a thin pool does not require a lengthy configuration change procedure. Clicking OK will commit the change, and post a success or failure notification. You will not have to turn to the Config Session view after leaving this dialog. Enable Data Devices After creating the thin pool, you might like to view it before leaving Symmetrix Management Console. Locate the pool under the Pools folder under the array in the left panel of the Console. Click on it to show the general properties, thin devices, and data devices. You can click on an individual data or thin device to view its properties in the bottom of the dialog. If you did not choose to automatically enable the data devices in the Create Device dialog or in the Create Pool dialog, you will have to enable them manually. Click the Data Devices tab of the pool properties display and select all of the devices to be enabled. Then right-click and choose the Device Pool Management > Enable Device menu. Create Thin Device, Bind to Thin Pool
  • 53. To create thin devices, right-click on the array and choose Device Configuration > Create Device (SMC) from the menu. This will launch the Symmetrix Management Console dialog for creating devices. Choose the Thin Device tab for thin pool devices. Creating a thin device is a configuration change, just like any device creation. Remember to use the Config Session dialog of Symmetrix Management Console to commit your changes. Verify Thin Device in SMC Save Pools * Use Logical Device Configuration dialog to create new Save devices. * Use Save Pool dialog to create new Save Pools. * Assign Snap Save Pools to Snap sessions with Create Snap Session dialog. * Assign DSE Save Pools to RA Groups with Set SRDF/A Configuration dialog. Save Pools are used to record data for two Symmetrix business continuity operations. Data is stored for TimeFinder/Snap devices in a Save Pool when a track of data differs from the source that the Snap is replicating. Many Snap devices can use the same pool to help share the storage resources. SRDF/A can be configured to cache writes on the local array’s Save Pool disks. This feature is known as Delta Set Extension and it helps recover from temporary bursts of writes or link failures. Delta Set Extension was made available with Enginuity 5772. Each of these technologies has its own type of Save Pool. Active SNAP sessions can cause many writes to the associated Save Pool. You might want to create separate Save Pools to prioritize the performance characteristics. You might assign many low priority SNAP sessions to the default pool, but assign only a few high priority sessions to a different pool. The devices in the high priority pool will have less contention for disk resources because there are fewer sessions competing for them. The down side of partitioning the Save devices into different pools is that a single session can not borrow from another pool if the present pool runs out of space. 
You might end up wasting disk space if you have more than one Save Pool.
  • 54. Creating and Editing Save Pools. Deleting Save Pools and Devices. SRDF/A Attributes
  • 55. SRDF/A transmits data in time-consistent sets called “cycles.” The data on the remote site is consistent as of the last completed cycle. The Set SRDF/A Attributes dialog allows you to set the minimum cycle time from 5 to 59 seconds. Under normal circumstances, the remote site should be updated consistently at time periods equal to the Cycle Time. However, if too many writes arrive at the local array to be transmitted to the remote in one cycle, the cycle time temporarily elongates to accommodate the load. Cycle writes are kept in cache. If cache fills, SRDF/A sessions begin to drop or terminate. ** Use the SRDF/A Group Priority to prioritize the RA groups. Groups with higher priorities drop first when cache fills. Associating an SRDF/A Delta Set Extension Pool enables this feature for the RA Group. When a large number of writes fills cache, additional writes are written to the pool. By writing to the pool the session does not need to be dropped, but performance will be reduced as the writes now have to be retrieved from disk. If the Transmit Idle feature is enabled for the RA Group, the session will not drop when all the links between the sites fail. Instead, the local array just elongates the cycle time as it does when an overload of writes arrives. If a Delta Set Extension pool has been assigned, the writes are stored in the pool. Otherwise they fill cache. Either way, when the fixed amount of space is filled, the sessions drop. Delta Set Extension pools and the Transmit Idle feature are available on arrays having Enginuity 5772 or higher. Releasing Reservations Since reserved devices are omitted from ControlCenter configuration dialogs, there is no direct way to override a reservation. However, you can use the Console to release device reservations, and then return to the configuration dialog to modify the device.
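The cycle-time behavior described above (a 5-59 second minimum that elongates under write load) can be modeled roughly as follows. This is a simplification for illustration, not EMC's actual algorithm:

```python
# Simplified model (not EMC's algorithm) of SRDF/A cycle-time
# elongation: the cycle stretches past the configured minimum when more
# write data arrives per cycle than the links can send in that time.
def set_min_cycle(seconds):
    """The Set SRDF/A Attributes dialog accepts 5 to 59 seconds."""
    if not 5 <= seconds <= 59:
        raise ValueError("minimum cycle time must be 5-59 seconds")
    return seconds

def effective_cycle_time(min_cycle_s, cycle_write_mb, link_mb_per_s):
    transmit_time = cycle_write_mb / link_mb_per_s  # time to ship the cycle
    return max(min_cycle_s, transmit_time)          # elongate if overloaded

normal = effective_cycle_time(set_min_cycle(30), 1500, link_mb_per_s=100)
overload = effective_cycle_time(set_min_cycle(30), 6000, link_mb_per_s=100)
print(normal, overload)  # 30 60.0
```

Transmit Idle is the limiting case of the same behavior: with zero link throughput the cycle elongates indefinitely while writes accumulate in cache or in the Delta Set Extension pool.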
  • 56. CLARiiON Configuration via ControlCenter * Configuration changes supported via ControlCenter Console: – Create, Defrag and Delete RAID Groups – Bind and Un-Bind LUNs – Create, Expand, Destroy and Modify MetaLUNs – Create SnapView snapshots – LUN Masking * Create/Delete Storage Groups. * Add/Remove LUNs to/from Storage Groups. * Attach hosts to a Storage Group. – Edit Storage Processor network settings. Accessing CLARiiON Configuration Options New RAID Group Bus, Enclosure and Disk number are part of the disk name.
  • 57. New RAID Group Properties Defragment RAID Group Fragmentation in a RAID Group occurs as you unbind and rebind LUNs on a RAID Group, creating gaps. Fragmentation causes performance issues by spreading contiguous data to multiple regions of disk. Defragment a RAID Group to compress these gaps and provide more contiguous free space across the disks. De-fragmenting a RAID Group does not affect the arrangement of data within the LUNs. It is not equivalent to de-fragmenting a filesystem mounted on a CLARiiON LUN. If no LUNs have been deleted, there is no need to defragment. Delete RAID Group Before you can delete a RAID Group, you must unbind all the LUNs and Private LUNs (part of a MetaLUN) on it, or you get an error as shown in this example. Just right-click on the RAID Group, and choose Delete RAID Group from the Configure menu.
  • 58. Bind LUNs A LUN is a host-addressable storage unit created from a RAID Group. A RAID Group can have many LUNs, but they all share the same protection type. You set the protection type with the first LUN you bind. All additional LUNs must be of the same protection. This is equivalent to creating a hyper (meta) in Symmetrix arrays. Expand MetaLUN Expand MetaLUN Properties: a Properties view of some regular LUNs and a MetaLUN in the Console. The MetaLUN head along with its associated Components and Private LUNs (LUNs that are part of a MetaLUN) are shown. Note that the Size column shows the total meta capacity and the capacity of each private LUN, but the Actual User Capacity column has N/A for the private LUN size. The Actual User Capacity only shows a value for the MetaLUN head since it is the user-accessible device. You can select the size you want to present to the OS.
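The overview's distinction between concatenated metas (data addressed linearly) and striped metas (data rotated across members) can be sketched as a block-mapping function. This is illustrative layout math, not the array's internal algorithm:

```python
# Sketch of meta-device addressing: concatenated metas fill members in
# order; striped metas rotate stripe units across all members.
def meta_locate(block, member_blocks, members, striped, stripe_blocks=1):
    """Map a logical block of the meta to (member index, block on member)."""
    if not striped:  # concatenated: fill member 0 fully, then member 1, ...
        return block // member_blocks, block % member_blocks
    stripe = block // stripe_blocks       # which stripe unit overall
    offset = block % stripe_blocks        # position inside the stripe unit
    member = stripe % members             # rotate across members
    row = stripe // members               # completed rotations so far
    return member, row * stripe_blocks + offset

# Concatenated: block 250 of a 4 x 100-block meta lands on member 2.
print(meta_locate(250, member_blocks=100, members=4, striped=False))   # (2, 50)
# Striped (10-block units): block 25 lands on member 2, early on disk.
print(meta_locate(25, 100, 4, striped=True, stripe_blocks=10))         # (2, 5)
```

The striped mapping is why a striped meta spreads sequential I/O across all members, while a concatenated meta keeps early addresses on one member.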
  • 59. Properties of LUNs: Modify and Destroy MetaLUN You can modify some of the parameters of a MetaLUN by using the Modify MetaLUN option of the Configure menu. It is not possible to change the Element Size Multiplier or Alignment Offset when modifying a MetaLUN. You can also delete the MetaLUN by using the Destroy MetaLUN option. When deleting a MetaLUN, all the members are unbound (deleted from the RAID Group) and the data will be lost. SP Network Settings and Managing SnapShots The final CLARiiON configuration task that can be completed using ControlCenter is managing SnapView snapshots. By right-clicking on the array in the Tree Panel and selecting Storage Agent > CLARiiON > Explore, one can perform a variety of tasks for SnapView Snapshots.
  • 60. These tasks include: •Editing Snapshot Cache Properties. •Exploring Snapshot Sessions. •Exploring LUNs that a Snap session can be executed against. •Exploring a Snapshot. •Creating a New Snapshot session. These are the possible Snapshot management tasks available in ControlCenter. Tasks such as activating and terminating a Snapshot session cannot be done using ControlCenter. SAN Management Overview * SAN Management tasks are performed via SAN Manager – Licensed ControlCenter application with comprehensive SAN Management capabilities – Use the ControlCenter Console to: *Discover and display an end-to-end topology view of the SAN *Monitor the health and performance of the SAN *Manage zoning operations of the switched fabrics in the SAN *Perform storage device masking operations on storage arrays in the SAN – Please refer to the EMC ControlCenter Support Matrix for a comprehensive list of all the switches and storage arrays supported by SAN Manager. SAN Manager Deployment – Example. Switched Fabric Definitions.
  • 61. A single switch, or several physically connected switches, form a Fabric. The switches in a single fabric can route data between any of the connected ports, no matter what switch they are on. A Zone can be created to isolate traffic between a Host Bus Adapter (HBA) and an Array Port. The switches in the Fabric enforce isolation so that traffic from either end point can go only to the other endpoint and not to the other switch ports. When an HBA initiates a connection, the only entity to reply is the zoned array port, which identifies the devices mapped to it. A Zone may not extend beyond the Fabric. A set of Zones within a Fabric is referred to as a Zone Set. In ControlCenter, a single Zone Set defines all of the Zones in a Fabric—there is only one Zone Set active at any time. SAN Manager Zoning Operations * Create/Manage Zones and Zone Sets * Import Zone Set: – Active and Inactive Zone Sets can be imported from the switch * Activate Zone Set: – A planned Zone Set is pushed to the switch and activated * Reactivate Zoning: – Reactivate the active Zone Set from ControlCenter onto the fabric * Considerations: – As a best practice, import the Active Zone Set before modification – SAN Manager retains a copy of the last active Zone Set for fail-back. Connectivity Area of the Tree Panel The Active Zone Set can be imported manually from the fabric, or you can use the Fabric Validation Data Collection Policy to import it automatically. The Fabric Validation policy will periodically compare the active fabric zone set. If a difference is found between it and the Active Zone Set in ControlCenter, the policy can automatically re-import it, or it can send an alert to notify you that a difference was found. The Planned Zone Sets folder is used for editing zone sets. You can create new zone sets from scratch here, or make a copy of an existing zone set and make a few minor edits to it. When it is ready, you can make it active in the fabric. 
The Planned Zones folder is the place to store zones you might want to reuse in several zone sets. Create them once there, and then use them whenever needed. The Switches folder shows the physical switches in the fabric, and the Unzoned Ports folder holds the physical ports that have been discovered but not yet zoned. It is a best practice to work on a copy of the active zone set: modify the copy and then activate it, rather than modifying the original, which is not permitted.
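The one-active-zone-set rule, and the retained Last Active copy for fail-back, can be modeled in a few lines. This is a toy model, not SAN Manager's API:

```python
# Toy model of the zoning rules above: a fabric has exactly one active
# zone set, and activating a planned set demotes the previous one to
# "Last Active" so it can be re-activated for fail-back.
class Fabric:
    def __init__(self):
        self.active = None        # name of the currently active zone set
        self.planned = {}         # planned zone sets: name -> zone names
        self.last_active = None   # retained for fail-back

    def activate(self, name):
        if name not in self.planned:
            raise KeyError(name)
        if self.active is not None:
            self.last_active = self.active  # keep the previous set
        self.active = name

fab = Fabric()
fab.planned["zs_v1"] = {"zoneA"}
fab.planned["zs_v2"] = {"zoneA", "zoneB"}
fab.activate("zs_v1")
fab.activate("zs_v2")
print(fab.active, fab.last_active)  # zs_v2 zs_v1
```

Undoing a bad activation is then just `fab.activate("zs_v1")` again, which is exactly the Last Active fail-back path described above.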
  • 62. Import Active Zone Set Task Lists and Tasks Create New Zoning Policy
  • 63. Enable Default Zoning Creating a New Zone Creating a New Zone Set
  • 64. To create a new zone set from scratch, drill down into a Fabric, right-click on the Planned Zones Folder and choose New > Zone Set. In the new dialog, type in a new Zone Set name, then pick the zones of interest from the Available Zones column. Click Add to add these zones to the Zones in Zone Set column. Any zone in any Zone Set in the fabric or in the Planned Zones folder can be selected. You can choose to activate the Zone Set immediately (use the check box). If the Activate Zone Set immediately box is unchecked, the new Planned Zone Set appears in the Planned Zone Sets folder. If the box is checked you are prompted to setup a task to activate this Zone Set. Right-clicking on an existing Zone Set allows you to edit the Zone Set to change the zones which are members of the Zone Set. You can also right-click on an existing zone set and choose Create Zone Set As to create a copy. The copy is made in the Planned Zone Sets folder and has exactly the same zones as the original. You then edit the planned zone set to make changes. The Zoning Policy dialog box is used to create and modify zoning policies. Zoning policies are used to set a standard for how zones are named and configured. Users most often use Zoning policies to automatically name their zones in a way that indicates what the end points are. When creating a new zone the user can select the appropriate Zoning Policy or use the default Zoning Policy for that fabric. Zoning policies specify the following characteristics of new zones: * Zoning Type— Whether the new zone will use switch port zoning (Port zoning) or end port zoning (WWN zoning). You cannot create a zone with mixed zoning when a zoning policy is applied. * Maximum # of Host Ports—The maximum number of host ports (ports connected to host devices) that can be included in the new zone (end port zoning only). 
* Maximum # of Storage Ports—The maximum number of storage ports (ports connected to storage devices) that can be included in the new zone (end port zoning only). * Zone Name Format—An expression that determines the name of the new zone. To set the default zoning policy for a fabric, right-click on the fabric and use the Zoning > Set Default Zoning Policy menu. ** When creating new zones, keep these zoning policies in mind, as they are a big help; the naming convention is normally Host+HostPort+Storage+StoragePort (%H_%Hp_%A_%Ap) Adding a Zone to a Planned Zone Set
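The zone name format amounts to token substitution. A sketch, assuming %H, %Hp, %A, and %Ap follow the Host/HostPort/Storage/StoragePort convention quoted above (longer tokens must be replaced before their shorter prefixes):

```python
# Sketch of zone-name expansion for the policy format mentioned above.
# Token meanings follow the slide's gloss: %H host, %Hp host port,
# %A array (storage), %Ap array port.
def expand_zone_name(fmt, host, host_port, array, array_port):
    # Substitute %Hp/%Ap before %H/%A so the prefixes don't clobber them.
    for token, value in (("%Hp", host_port), ("%Ap", array_port),
                         ("%H", host), ("%A", array)):
        fmt = fmt.replace(token, value)
    return fmt

name = expand_zone_name("%H_%Hp_%A_%Ap", "web01", "hba0", "symm123", "fa4a")
print(name)  # web01_hba0_symm123_fa4a
```

A name built this way lets an administrator read the zone's two endpoints directly from the zone list, which is the point of using a zoning policy.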
  • 65. Activate Zone Set, Compare to Active Zone Set After creating a Planned Zone Set with the desired members, the next step is to activate the Zone Set to apply these changes to the switched fabric. Right-click on the planned zone set and choose Zoning > Activate Zone Set. The Zone Set is moved to the Active Zone Set folder and marked as Active. A copy of the zone set remains in the Planned Zone Set folder and is marked as Copy of Active. The Zone Set that was active before is moved to the Planned Zone Set folder and marked as Last Active. If you need to undo these changes, you can activate this Last Active set again. As part of the activate process, you can compare the proposed Zone Set with the Active Zone Set. Use the Show drop-down to show Changes Only, Zones Added Only, Zones Removed Only, Zones Modified Only or All Zones. A green plus indicates an added zone, an amber minus indicates a removed zone, and the blue inequality icon represents a modified zone. Click Continue to continue with the Activate Zone Set process. Either of these methods prompts the user with the choice to Execute or Cancel. Click Execute to activate the Zone Set. This forces the user to create a task in a task list. The progress of the task can be followed in the target panel. Compare Active Zone Set to Active Fabric This safety feature is useful in environments where some administrators might be using other tools to change the fabric. Let’s use a scenario to illustrate the issue.
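The added/removed/modified classification in the compare display reduces to set differences over zone membership. A sketch:

```python
# Sketch of the Compare-to-Active classification: zones present only in
# the planned set are "added", only in the active set "removed", and
# zones in both with different members are "modified".
def diff_zone_sets(planned, active):
    """Both arguments map zone name -> set of member ports/WWNs."""
    added = sorted(set(planned) - set(active))
    removed = sorted(set(active) - set(planned))
    modified = sorted(z for z in set(planned) & set(active)
                      if planned[z] != active[z])
    return added, removed, modified

active = {"z1": {"a", "b"}, "z2": {"c", "d"}}
planned = {"z1": {"a", "b"}, "z2": {"c", "e"}, "z3": {"f", "g"}}
print(diff_zone_sets(planned, active))  # (['z3'], [], ['z2'])
```

These three categories correspond to the green plus, amber minus, and blue inequality icons in the dialog.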
  • 66. Reactivate Zone Set Use this feature when you know that the Active Zone Set in ControlCenter is the correct one for your environment, but you suspect that someone has made an improper change to the fabric using another tool. It is especially useful if your Fabric Validation Data Collection Policy is set to Compare but do not import. In this setting, the policy sends alerts when a difference is found in the fabric, but does not import it. After being alerted to the issue, you can carefully check the situation and manually reactivate the zone set if necessary. Cisco VSAN Support * ControlCenter has monitoring and management of Cisco Virtual SANs – Creating, editing, renaming, or deleting a VSAN. – Move members between VSANs. – Distributing and committing a planned VSAN. – Zone editing within a VSAN. – Suspending a distributed and committed VSAN. VSAN 1 is the default VSAN and VSAN 4094 is reserved for ports that have been isolated when their VSAN was deleted. These VSANs always exist and cannot be deleted. Only VSANs 2 through 4093 can be manipulated. Up to 256 VSANs can be configured in a switch. At install, all ports belong to VSAN 1. Ports can be added or removed to other VSANs non-disruptively. In a multi-vendor VSAN, using interoperability mode 0 or 1, it is assumed that the master switch is a Cisco MDS, and configured as the principal switch. Domain IDs must be set to static and persistent FCIDs must be set to dynamic. Fabrics containing Cisco switches have a different folder layout in the Console Tree Panel. At the top level is a Switches folder to identify the physical hardware, and a VSANs folder that contains all of the active VSANs. You cannot edit the VSANs or their membership in this folder—it
  • 67. represents the active state in the fabric. Use the Planned VSANs folder to create and edit copies of the active VSANs, and then make them active. VSAN Display in the Console Within a VSAN folder, you see familiar Zone Set and Zone folders that you can use to view the active state of the fabric and make changes. Fabric management within a VSAN is exactly the same as without VSANs. Creating a VSAN Adding Ports to a New VSAN.
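The VSAN numbering rules quoted above (VSAN 1 default, 4094 reserved for isolated ports, only 2-4093 configurable, at most 256 per switch) can be captured in a small validation function:

```python
# Sketch of the Cisco VSAN numbering rules described above.
DEFAULT_VSAN, ISOLATED_VSAN = 1, 4094
MAX_VSANS_PER_SWITCH = 256

def can_create_vsan(vsan_id, existing_ids):
    """Return True if vsan_id may be created on a switch with existing_ids."""
    if vsan_id in (DEFAULT_VSAN, ISOLATED_VSAN):
        return False               # these always exist and cannot be managed
    if not 2 <= vsan_id <= 4093:   # only this range is user-configurable
        return False
    return len(existing_ids) < MAX_VSANS_PER_SWITCH

print(can_create_vsan(10, {1}))    # True
print(can_create_vsan(4094, {1}))  # False
```

The same rules explain why a deleted VSAN's ports end up in 4094: they must belong to some VSAN, and 4094 is the reserved holding area.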
  • 68. CISCO VSAN - Distributing and committing * To distribute and commit a planned VSAN to a fabric: – Right-click a planned VSAN and select VSAN, Distribute and Commit. – Click Execute. Remember, this only commits the VSAN object and port membership information. To make zoning changes, drill down into the VSAN folder and make the changes using the Zone Set folders. SAN Manager – Storage Device Masking * Device masking can be performed on: – Symmetrix: * VCMDB (Volume Configuration Management Database) Management * Search and Replace Masking (HBA change) * SID (Source ID) Lockdown – CLARiiON: * Manage Storage Groups – HP StorageWorks HSG arrays – HP XP arrays Unlike an environment in which the host is directly connected to the storage and sees only the volumes to which it is directly connected, the SAN environment introduces new challenges. Multiple hosts can be connected to the same storage array port, providing one host the ability to see, use, and potentially corrupt other hosts' storage areas. Storage Device Masking (SDM) addresses this problem by using a unique masking policy engine that resides on the storage itself, and management software that resides on the client hosts. Storage device masking allows you to specify the storage array devices that a specific host and host port can access. The SAN Manager implementation of SDM enables you to perform device masking for four types of storage arrays in the SAN: * Symmetrix * CLARiiON * HP StorageWorks * HP XP CLARiiON masking is similar, but is handled through the creation of Storage Groups. Masking View: This view is applicable to Symmetrix and HP StorageWorks arrays. Use it to view or modify storage device masking configurations. A user can right-click on any device from this view and modify the masking configuration of that device.
  • 69. Masking View Symmetrix Device Masking To modify masking configurations for a host attached to a Symmetrix array, right-click the Host and choose Masking > Modify Masking Configurations from the menu, or highlight the host and choose Masking > Symmetrix > Modify Masking Configurations from the Storage Allocation menu. The Modify Masking Configuration window will open. To change the masking configuration, check the box to the left of the device name and choose the Grant or Remove button. The change you requested appears in blue. ControlCenter features include the ability to mask the same logical device to multiple domains or groups using the same storage port in HDS/HP XP third-party arrays. In other words, you can make the same device available to multiple hosts that are zoned to the same port. This capability is available in the native array management tools and now ControlCenter supports it as well. Multiple logical devices can be masked to multiple domains or groups at a time. Multiple devices can also be unmasked with just one operation. The ControlCenter Storage Agent for HDS is required to perform device masking on these arrays.
  • 70. HDS/HP XP Device Masking Review Changes, Enable Dynamic LUN Addressing After making your masking changes and clicking the Continue button, the Modify Masking Configuration dialog summarizes your changes. You can use the buttons on the right to edit or delete any of the masking changes you made in the previous page of the dialog. With Enginuity 5772, you can enable Dynamic LUN Addressing using the check box at the top of the dialog. Without this feature, the host addresses the device using the LUN assigned when the device was mapped to the array port. This turns out to be somewhat inflexible if more than one host is zoned to this port. You may have difficulty assigning additional storage to the port in a way that maintains a consecutive list of LUN addresses on each host. With Dynamic LUN Addressing enabled, you can override the LUN address chosen during mapping. This allows you to select LUN addresses that are appropriate for each host, even when several hosts are zoned to the same array port. You can enable the feature independently for each HBA and array port pair. To set the LUN address, click the Set Dynamic LUN Address button at the top of the dialog. The dialog that appears shows the System LUN addresses—set during masking—and the Host LUN addresses. The Host LUN addresses are consecutively ordered by default, but you can click on values to manually change them. Once the device is masked however, you cannot change the LUN address without unmasking the device and starting over.
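The distinction between system LUN addresses (fixed at mapping time) and per-host LUN addresses can be illustrated with a small renumbering function. This is illustrative only, not the product's address-assignment logic:

```python
# Sketch of the idea behind Dynamic LUN Addressing: the system LUN is
# fixed when the device is mapped, but each host can be given its own
# consecutive host LUN numbers for the devices masked to it.
def assign_host_luns(system_luns):
    """Map each system LUN to a consecutive host LUN (0, 1, 2, ...)."""
    return {sys_lun: host_lun
            for host_lun, sys_lun in enumerate(sorted(system_luns))}

# A host masked to three scattered system LUNs still sees 0, 1, 2.
print(assign_host_luns([0x1A, 0x03, 0x2C]))  # {3: 0, 26: 1, 44: 2}
```

Two hosts zoned to the same array port can each run this renumbering independently, which is what makes it possible to keep a consecutive LUN list on every host.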
  • 71. Symmetrix VCMDB Management * Make Active: – Activates the configuration for a selected storage array * Backup: – Backs up the current configuration to a file you select * Restore: – Restores the VCM database from a file you select * Initialize: – Initializes the VCM database and saves the current configuration to a file you select. Symmetrix device masking is performed by making changes to the Volume Configuration Management Database (VCMDB) on the array. The VCMDB is a file residing on a Symmetrix logical device that is used to store access configuration data used for masking logical devices from hosts. The VCM database exists on a special system resource logical device, referred to as the VCMDB device. Information stored in the VCM database includes host and storage World Wide Names, SID Lock and Volume Visibility settings, and native logical device data, such as the front-end directors and the director ports to which they are mapped. The VCMDB must be activated to make changes to the masking configuration take effect. From the ControlCenter Console, you can backup, restore and initialize the VCMDB as well. The VCMDB must be initialized before use, usually when the Symmetrix is first configured. ControlCenter forces you to backup the VCMDB if you try to initialize it. Backups of the VCMDB are stored on the ControlCenter Server host. Symmetrix VCMDB Commands To make a backup of the VCMDB, right-click on the Symmetrix array and choose Masking > VCMDB Management > Backup or highlight the Symmetrix and choose Masking > Symmetrix > VCMDB Management > Backup from the Storage Allocation menu. To restore the VCMDB, use the Console to locate the backup in the VCMDB Backups folder under the Symmetrix. Right-click, and choose Restore from the menu. The VCMDB Backup will appear in the VCMDB Backups folder on the ControlCenter Server. You can find it under a folder identified by the Symmetrix ID: <CC INSTALL ROOT>\ecc_inf\data\<ECC Server Hostname>\data\backup\<Symmetrix ID>
  • 72. Symmetrix Replace Masking Replacing masking transfers access configurations from one host port or unidentified port to another. This is useful when replacing an existing host port with a new one. Swapping access is performed on one host port pair at a time, and in one Symmetrix storage array at a time. The host port receiving new masking configurations cannot have existing masking configurations in the VCM database of the Symmetrix array in which access is being swapped. Right-click on an HBA or a Host and choose Masking > Replace Masking from the menu, or highlight an HBA or a Host and choose Masking > Symmetrix > Replace Masking from the Storage Allocation menus. In the Replace Masking dialog, choose the existing HBA from the panel on the left, and the storage array and the new HBA from the panels on the right. Click Add Action to make this edit. Make some more edits if needed, and then Continue to approve and execute the edits. You cannot manually enter a WWN in this dialog; ControlCenter must have discovered the HBA before you try to replace it. This works well if you install the new HBA in the host and then let the Host Agent discover it before launching this dialog. The VCMDB can be activated as part of the Replace Masking operation, but the new HBA must be zoned separately via SAN Manager. Editing CLARiiON Storage Groups On a CLARiiON array, AccessLogix must be enabled, and then device masking is done as follows: 1. Create a Storage Group 2. Add LUNs into the Storage Group 3. Connect a host to the Storage Group (ControlCenter terminology: Bind/Unbind hosts) Highlight one or more LUNs, right-click, and choose Add to Storage Group from the menu to launch the Storage Group Configuration Wizard.
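The three CLARiiON masking steps can be modeled as a toy object, including the rule that a host must be registered with the array before the bind succeeds. This is illustrative, not Navisphere or ControlCenter code:

```python
# Toy model of the three masking steps listed above: create a Storage
# Group, add LUNs to it, then connect (bind) a registered host.
class ClariionArray:
    def __init__(self):
        self.storage_groups = {}   # name -> {"luns": set, "hosts": set}
        self.registered_hosts = set()

    def create_storage_group(self, name):
        self.storage_groups[name] = {"luns": set(), "hosts": set()}

    def add_luns(self, group, luns):
        self.storage_groups[group]["luns"].update(luns)

    def connect_host(self, group, host):
        # The bind succeeds only if the host is registered with the array.
        if host not in self.registered_hosts:
            raise ValueError(f"host {host} is not registered with the array")
        self.storage_groups[group]["hosts"].add(host)

arr = ClariionArray()
arr.registered_hosts.add("WIN2")          # e.g. via the Navisphere Host agent
arr.create_storage_group("WIN2-CX200")    # group name reused from the slides
arr.add_luns("WIN2-CX200", {5, 6})
arr.connect_host("WIN2-CX200", "WIN2")
```

The registration check is why EMC recommends running the Navisphere Host agent on every server attached to the array: an unregistered host simply cannot be bound.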
  • 73. Storage Group Configuration Wizard The next step in the Storage Group Configuration Wizard is to add LUNs to the storage group. The LUNs you clicked on are already added, but you might want to choose a few more from the panel on the left. Once the LUNs are added, you can Bind/Unbind hosts (Connect/Disconnect a Host from a Storage Group in Navisphere terminology). Note that the Bind process will succeed only if the host is properly registered with the storage array. EMC recommends that any server attached to a CLARiiON array run the Navisphere Host Agent and a ControlCenter Host Agent. The last step is a review. Click Execute or Execute Later, and set up the task and task list. Once the task executes successfully, the masking process is complete. Properties of LUNs and Storage Groups In this example, once the MetaLUN has been added to the Storage Group, the Properties view in the console shows that the MetaLUN head belongs to the CLARiiON storage group WIN2-CX200. The second Properties view shows the LUNs in Component 0 of the MetaLUN; the Storage Group field is N/A because only the MetaLUN head is presented to the Storage Group.
  • 74. CLARiiON – Remove LUNs from Storage Group Business Continuance – Symmetrix * Symmetrix Manager license provides Business Continuance operations – Provides GUI support for: * TimeFinder/Mirror * TimeFinder/Clone * TimeFinder/Snap operations * SRDF/S and SRDF/A – Use to: * Monitor SRDF, SRDF/A and TimeFinder activities * Manage device groups * Manually execute local and remote SRDF, SRDF/A and TimeFinder commands for ad-hoc replication requirements * Manipulate Quality of Service (QoS) settings for devices * Create/Delete Dynamic SRDF device pairs – Storage Agent for Symmetrix must be deployed. TimeFinder/Mirror Operations [Slide diagram: host I/O to a 10GB STD mirrored to a 10GB BCV, showing 1) Establish, 2) Split, 3) Restore] * Identify appropriate Standard and BCV volumes * Create Device Group: – Add Standard devices – Associate BCVs * Perform TimeFinder/Mirror operations: – Establish – Split – Restore * Monitor status in TimeFinder View. TimeFinder/Mirror operations in ControlCenter start with identifying standard and Business Continuance Volumes (BCV) that can be used to replicate data. You add the devices to a device group, and then perform synchronization operations on the devices. Finally, you can view the status of the synchronization in a TimeFinder view. TimeFinder/Mirror options can be found under the Data Protection task. The menu can be accessed in several ways: right-click the array and choose Data Protection > TimeFinder, or select the array, switch to the Data Protection task, and use the TimeFinder menu. You can also use the quick access icons on the Data Protection task.
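The Establish/Split/Restore cycle above is a small state machine. The sketch below is a hypothetical illustration of the legal transitions as they appear in the TimeFinder view; the state names follow the slides, but the transition table is not a Solutions Enabler API.

```python
# Toy model of the TimeFinder/Mirror pair lifecycle (illustrative only).
TRANSITIONS = {
    ("NeverEstablished", "establish"): "SyncInProg",
    ("SyncInProg",       "sync_done"): "Synchronized",
    ("Synchronized",     "split"):     "Split",        # point-in-time copy on BCV
    ("Split",            "establish"): "SyncInProg",   # incremental re-establish
    ("Split",            "restore"):   "RestInProg",   # copy BCV data back to STD
    ("RestInProg",       "sync_done"): "Synchronized",
}

def step(state, op):
    """Apply one TimeFinder operation; raise if it is not legal in this state."""
    try:
        return TRANSITIONS[(state, op)]
    except KeyError:
        raise ValueError(f"{op!r} not allowed while pair is {state}")

state = "NeverEstablished"
for op in ("establish", "sync_done", "split", "restore", "sync_done"):
    state = step(state, op)
print(state)  # -> Synchronized
```

Note how a Split is only legal from the Synchronized state, matching the course's warning that splitting earlier requires the Force option.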
  • 75. Accessing TimeFinder/Mirror Options Identify Relevant Volumes We need a similarly configured BCV on the array to copy this filesystem. You can drag all the host devices to the Properties view and examine the ones that are marked as BCV. To be eligible to replicate the filesystem, the BCV must be the same size and the same emulation (CKD, FBA), and if the standard device is a meta, the BCV must also be a meta constructed the same way. So you should search for a BCV with these characteristics. A matching BCV is illustrated on this page. For safety, we might also drag the BCV to a TimeFinder view to verify that it does not already have a relationship with another standard. If it does, the data on the BCV might be an important copy that someone is relying on. To identify the standard and BCV devices you have chosen, you should put them in a device group. You can easily execute commands against all of the devices in the group, and it will help identify them as In Use to other users. Device groups can also be managed by Solutions Enabler and the Symmetrix Management Console.
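The matching rules above (same size, same emulation, same meta construction, not already paired) amount to a simple predicate. The sketch below is illustrative; the field names are assumptions standing in for the attributes you would read off the Properties view.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Device:
    # Hypothetical fields; real values come from the ControlCenter Properties view.
    symdev: str
    size_mb: int
    emulation: str                      # "FBA" or "CKD"
    is_bcv: bool
    meta: Optional[Tuple[str, int]]     # (config, member_count) or None
    paired_with: Optional[str] = None   # existing TimeFinder partner, if any

def eligible_bcv(std: Device, cand: Device) -> bool:
    """Apply the course's matching rules: same size, same emulation, and if the
    standard is a meta, the BCV must be a meta built the same way."""
    return (cand.is_bcv
            and cand.size_mb == std.size_mb
            and cand.emulation == std.emulation
            and cand.meta == std.meta
            and cand.paired_with is None)   # safety check from the slide

std = Device("010", 8632, "FBA", False, ("striped", 4))
bcv = Device("091", 8632, "FBA", True,  ("striped", 4))
print(eligible_bcv(std, bcv))  # -> True
```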
  • 76. To create a new group, right-click on the host, an array, or the device and choose Data Protection > Device Groups > Create. Choose the Symmetrix from the drop-down list – this should be selected automatically because the wizard was launched from a right-click context. Choose the type of device that the non-BCVs of the group will be. You can choose Regular (this example), or one of the SRDF types (R1, R2, or R21). Device Group Wizard In the second page of the dialog, choose the host name where the group will be created. Device groups in ControlCenter can only be created on a host that has a Storage Agent for Symmetrix or Solutions Enabler installed and can access the array. When you use this dialog to create a device group, it interfaces with the Solutions Enabler software to create the group on the host. Then choose an existing or new group to edit, and click the Edit Members button to select the standard devices (only) to add to the group. You can only add devices that match the group type you selected when you started. If you right-clicked on one or more devices to launch the dialog, those will already be selected. Device Group Wizard – Associate BCVs You can search the host devices for the device you want to use for the mirror/clone operations. Select two devices of the same size; the source must be a STD volume, and the target can be a STD or BCV.
  • 77. The different types of BCVs that appear in this dialog are actually BCVs in different locations. You can also choose other devices in this part of the dialog: * BCV: This refers to local BCVs, i.e., BCVs in the same Symmetrix. You can choose a number of sets of local BCVs to create several copies of the standard data. * BRBCV: If the BCVs you chose in the BCV section are also R1 devices, then you can use this section to associate BCVs from the remote array to your group. The remote BCVs will replicate the R2 volume on the remote array. This feature lets you manipulate copies of the data on both the local and remote array, all from the same device group. * RBCV: This option is available only if the standards added to the group are R1 or R2 devices. The BCV sets you are choosing here are actually on the remote array, and will replicate the remote SRDF device. * VDEV: Virtual devices used with TimeFinder/Snap. We will cover this later in the lesson. * Gatekeepers: For performance reasons, you may occasionally want to associate a gatekeeper device with a group. Commands issued to the group will use that gatekeeper device, leaving any other gatekeepers on the host available to process other commands. TimeFinder/Mirror Establish Dragging the BCV devices from the group to a TimeFinder view shows their status. Since the device (number 091 in this example) has never been synchronized with the standard, there is no information in the STD column. A few other devices have been added to the view as examples; you can see that they have been synchronized at some time in the past. An Establish operation synchronizes the data on the standard devices to the BCV devices. A simple way to establish pairs is to right-click on the device group and choose Data Protection > TimeFinder > Establish from the menu. Select the items you want to establish in the dialog (if you had clicked on multiple BCVs, you could now choose a subset of them), and set the options. 
Host considerations prior to an Establish operation: * Establish is a non-disruptive operation to the Standard device. I/O to Standard devices can proceed during establish. Applications need not be quiesced during the establish operation. * The Establish operation will set a Not Ready status on the BCV device. Hence all I/O to the BCV device must be stopped before the Establish operation is performed. A Full Establish is required the first time a BCV and a Standard device are established. Make sure the incremental box is unchecked. You can use the Incremental option on subsequent operations. With this option, only the changed data is copied. The Optimize option chooses the pairing of standards and BCVs according to optimum data flow within the Symmetrix. The chief concern is to ensure that a standard and its paired BCV are on separate Disk Directors. Exact Pairing pairs each standard with a BCV based on their label number in the device group: DEV001 with BCV001, for example.
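The full versus incremental distinction can be pictured with changed-track bookkeeping: a full establish copies every track, while an incremental establish copies only tracks marked invalid since the last synchronization. A minimal sketch, treating "tracks" as list slots:

```python
def establish(std, bcv, invalid, full=False):
    """Copy tracks from std to bcv. `invalid` is the set of track numbers
    changed since the last sync; a full establish ignores it and copies all."""
    to_copy = list(range(len(std))) if full else sorted(invalid)
    for t in to_copy:
        bcv[t] = std[t]
    invalid.clear()
    return len(to_copy)          # tracks actually transferred

std = list("ABCDEFGH")           # 8 "tracks" of data
bcv = [None] * 8
copied = establish(std, bcv, invalid=set(), full=True)
print(copied)                    # 8: the first establish must be full

std[2] = "X"                     # a host write after a split marks track 2 invalid
copied = establish(std, bcv, invalid={2})
print(copied, bcv == std)        # 1 True: incremental copies only the changed track
```

This is why incremental re-establishes complete so much faster than the initial full synchronization: only the delta crosses the back end.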
  • 78. TimeFinder View You can see that the devices are in a Sync in Progress state and that there are BCV Invalid tracks owed from the STD device to the BCV. The Time Left column estimates the time until complete synchronization based on the time taken so far. When the synchronization is complete, the status will read Synchronized, and the invalid tracks will be zero. The device group name is also shown, along with information about Mirror Quality of Service settings, etc. TimeFinder/Mirror Split A Split operation suspends the mirroring between a standard and BCV, creating a point-in-time copy at the moment of the split. Right-click on the BCV or device group and choose Data Protection > TimeFinder > Split from the menu. Select the objects you would like to split, and set the options. There are several things to consider if you want an uncorrupted point-in-time data copy. Data writes are typically buffered in the host's memory at the application layer and the file system layer. Applications will return a write complete before data is actually written to disk. Host memory buffers must be flushed to disk prior to splitting a BCV, to ensure that the data on the BCV is identical to that on the Standard at the time the split command was issued. Without a flush, buffered data is not available on the Standard and hence will not be available on the BCV. Ideally, you should stop the applications and offline the devices before performing the Split. Until the state has changed to Synchronized, it is not possible to Split the standard and BCV unless the Force option is used, in which case the validity of the data on the BCV is uncertain.
  • 79. TimeFinder/Mirror Restore A Restore operation synchronizes the BCV data to the standard, and leaves the devices synchronized. There are several things to consider before performing a restore operation. Restore is a recovery operation: data on the BCV will overwrite the data on the standard device. All I/O to the standard device should be stopped and the device must be taken offline prior to a restore operation. Although it holds the source data, the BCV is still the device that goes to a Not Ready status to its host, so all I/O to the BCV devices must be stopped and the devices must be offline before issuing the restore command. Operations on the standard volumes can resume as soon as the restore operation is initiated, while the synchronization of the standards from the BCV is still in progress. Any read of standard tracks that have not yet been copied is serviced from the good mirror, the BCV. The Restore operation can be incremental or full; incremental restore is the default. TimeFinder/Snap Operations * Identify volumes of interest. * Identify appropriate VDEVs. * Create Device Group: – Add Source devices – Associate VDEVs * Perform TimeFinder/Snap operations: – Create – Activate – Restore – Terminate
  • 80. Using TimeFinder/Snap to perform virtual copying operations provides a space-saving method of creating instant, point-in-time copies of logical volumes. Snapping to a virtual device (VDEV) creates the appearance of copying volumes by copying only the original data from changed tracks, plus the pointers to that data. The TimeFinder/Snap operation uses two types of devices: VDEV and SAVE. A VDEV device contains pointers to the changed data, while a SAVE device holds the actual data that has been changed. Snapping to virtual devices uses a copy-on-first-write technique as a way to conserve disk space when making copies. Only writes to tracks on the source device or target virtual device cause any incremental storage to be consumed. The space savings using virtual devices can be significant when you consider that most applications change only a small percentage of data on a volume. However, you can expect performance degradation that varies according to application characteristics and I/O profile. TimeFinder/Snap operations allow you to copy data from a single source device to as many as fifteen target devices. The target of a copy operation is a Symmetrix virtual device, and the copy operation (also referred to as a virtual snap) performs a copy of those tracks identified by track pointers on a virtual device. Copying occurs only when there are writes to the source or target devices. The snap pair state remains CopyOnWrite until you terminate the copy session or all tracks have been written to. Accessing TimeFinder/Snap Options: TimeFinder/Snap falls under the Data Protection task. TimeFinder/Snap options can be accessed by right-clicking on the array and choosing Data Protection > Snap from the menu. Or you can select the array, switch to the Data Protection task and use the Snap menu. You can use the Snap view under the Data Protection task to monitor your TimeFinder/Snap synchronization. 
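The copy-on-first-write mechanism can be sketched in a few lines: the VDEV starts out as pure pointers to the source, and only the first write to a source track pushes the original data into the SAVE pool. This is a toy model under those assumptions, not the Enginuity implementation:

```python
class SnapSession:
    """Toy copy-on-first-write model: the VDEV holds pointers, the SAVE area
    holds preserved originals. Creation/activation fixes the point-in-time image."""
    def __init__(self, source):
        self.source = source
        self.save = {}            # track -> original data, copied on first write

    def write_source(self, track, data):
        if track not in self.save:                  # first write to this track?
            self.save[track] = self.source[track]   # preserve the original in SAVE
        self.source[track] = data                   # then let the write proceed

    def read_vdev(self, track):
        # VDEV pointer: the SAVE copy if the track changed, else the live source.
        return self.save.get(track, self.source[track])

src = {0: "a", 1: "b", 2: "c"}
snap = SnapSession(src)           # point-in-time image as of this moment
snap.write_source(1, "B")
snap.write_source(1, "BB")        # second write: no extra SAVE space consumed
print(snap.read_vdev(1), src[1])  # -> b BB
print(len(snap.save))             # -> 1: only one changed track uses SAVE space
```

The last line is the space-saving argument from the slide: SAVE consumption grows with the number of *changed* tracks, not with the size of the volume.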
Identify Relevant Volumes Let us illustrate TimeFinder/Snap with another example. We want to make a point-in-time copy of the filesystem /istdata on the host DMX800IBM1. We use the Relationship view to determine that the filesystem resides on devices 026 and 027, and a Properties view to show the device properties.
  • 81. Dragging the host's two virtual devices to the Properties view shows their properties also. Fortunately, they are the same size as the devices we need to copy. Even though it does not physically store the data, a virtual device must be configured as the same size as the device you want to replicate. The next step is to create a Device Group, add the source devices, and associate the virtual devices. You have already seen the dialogs for these actions, so this course will not show them again. While ControlCenter will allow you to perform TimeFinder/Snap operations on devices individually, EMC recommends that you create Device Groups and then perform TimeFinder/Snap actions on the Device Groups. TimeFinder/Snap Create Drag the new group containing the source and virtual devices into a Snap view. You can see from the Snap view that there is no Snap session for the devices in our group (035, 036), because the Pair State column is NotCreated. Even though the devices are in a device group, the Symmetrix does not recognize a pairing relationship between the virtual devices and the sources until a Snap session is created. A session for a Snap pair originates when you issue the Create command and ends when you issue the Terminate command. A device cannot participate in two sessions simultaneously unless it is a source device that has multiple target devices. To pair an existing target device with a different source device, you need to first terminate the target's original session. To ensure that you do not copy over previously copied data, each snap operation results in putting a hold on the target device. You can see from the dialog that you can choose a different Snap save pool for every device pair. The Create procedure is a non-disruptive operation to the source device. I/O to source devices can proceed during the Create operation. 
Before the Create can be performed, the VDEV needs to be in its normal status of Not Ready, therefore all I/O to the VDEV needs to be stopped, and all previous Snap sessions to the VDEV need to be terminated before the Create operation is performed. The Create operation marks the VDEV as Held, preventing it from being used in another Snap session. The VDEV remains in the Not Ready state after the session is created.
  • 82. TimeFinder/Snap Activate Now that the Snap session has been created, the Pair State column of the Snap view is Created. The virtual devices are in the Held state, and cannot be used in a different Snap session until this one is terminated. The Held state is indicated in the Parameters column of the device Properties view. To fix the point-in-time copy, use the Activate command. Once activated, any write to the source device causes the original data to be copied to the Save device, preserving the disk image as it was at the point of activation. Use the Activate option from the Snap menu to activate the session. The VDEV devices go to the Ready state after the Activate command. The host can now read and write the virtual device. TimeFinder/Snap Restore With the Snap session activated, the Pair State changes to CopyOnWrite. The Source Protected Tracks column measures the number of Symmetrix tracks on the source that have not yet changed. As post-activate writes are made to the source, this number will drop. The Status of the virtual devices shown in the Properties view changes from Not Ready to Ready to indicate that the VDEVs can now be accessed by the host.
  • 83. You can use the Restore command to restore data from a virtual device to various locations: * An incremental restore to the original source device * An incremental restore to a BCV that is split from the original source device but which still has an incremental relationship with that source * A full restore to any unrelated standard or BCV device The Restore operation can be incremental or full; incremental restore is the default. In this example, we highlight the device group and from the Menu Bar choose Snap > Restore, then confirm that we want to perform a restore. After a TimeFinder/Snap Restore Because restoring back to a device replaces what is on that device with the data from the VDEV, there is always the risk of losing data on the target device. To address this issue, the Snap Restore command creates two sessions. The two sessions have states of Copy-on-Write (Activated Snap session) and Restored (Snap Restored session). TimeFinder/Snap Terminate Terminating a copy session removes any hold on the target device and deletes pair information about the terminated pair from the Symmetrix unit. The VDEV will return to the Not Ready state and be inaccessible to the host.
  • 84. When you select a device pair session from the Console to Terminate, the two sessions, Activated TimeFinder Clone and Restored, will appear in the Terminate dialog box. The restored session must be terminated before the original session. TimeFinder/Clone Operations [Slide diagram: host I/O to a 10GB STD cloned to a 10GB STD/BCV used for backup, showing 1) Create, 2) Activate, 3) Terminate] * Identify source volumes of interest * Identify appropriate target volumes * Perform TimeFinder/Clone operations: – Create – Activate – Terminate. The TimeFinder/Clone feature allows you to make copies of data simultaneously on multiple target devices from a single source device. The data is available to a target's host once the session is activated. You can copy data from a single source device to as many as sixteen target devices. Any device can be either the source or the target in a Clone synchronization. Source and target must be the same size, and must be constructed the same way if they are Meta devices. Unlike a BCV mirror copy, which must be completely synchronized with its source and then split before its data can be accessed, Clone activation makes the data on the Clone immediately accessible to its host, even while copying continues in the background. The following TimeFinder/Clone commands are available in ControlCenter: * Create Clone Copy – Create the relationship between the source and target devices. * Activate Clone Copy – Activate the copy operations. * Terminate Clone Copy – End the relationship between the source and target devices. Accessing TimeFinder/Clone Options TimeFinder/Clone also falls under the Data Protection task. The options can be accessed by right-clicking on the array and choosing Data Protection > Clone from the menu. You can also select the array, choose the Data Protection task, and use the Clone menu. In this example, we would like to use TimeFinder/Clone to copy the data on two BCV devices to a pair of R1 devices. 
All four of the devices are shown in the Properties view to verify that they are the same size, and that the destination devices are not in the Held state. Before a Clone session can be executed, the source and target must first be identified and paired. The Create command is used for this purpose. Launch the Create dialog from the Clone menu. Select a source device from the Select Source Device panel and a corresponding device from the Select Target Device panel. Note that after you select a source device, all corresponding target devices of the same size and emulation type are displayed in the other panel. Click Add to add your pair to the Pairs panel. When you finish adding pairs, click Execute. You can choose among three copy modes when you create a Clone session. In No Copy mode, no additional copying is done. Data is copied to the Clone on a first-access basis only: the first time data is read from the Clone, or the first time data is written to the Clone or source, the relevant
  • 85. track is copied to the Clone. In Pre-Copy mode, all tracks of data are copied from source to Clone while the pair is in the Created state. After activation, the additional copying stops, leaving only the copy-on-first-access mechanism to synchronize the devices. In Copy mode, all tracks are copied from source to Clone after activation; the copy-on-first-access process is also used for data that has not been copied yet. Once a Clone copy session has been created, the target devices are in a Held state and no data is transferred until you activate the session. TimeFinder/Clone Create TimeFinder/Clone Activate With the Clone session created, the destination devices are marked as Held in the Parameters column of the Properties view. The destination devices are set to Not Ready. Dragging the devices to a Clone view (use the Data Protection task to get a Clone view) shows the paired relationship. The Pair State column of the Clone view will be Created at this point. No data will be transferred until you activate the session. The Activate command is used for this purpose. To activate a Clone copy session, launch the Activate dialog from the Clone menu. A list of unactivated Clone sessions is displayed. Check the sessions that you want to activate, and click Execute to activate them.
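The copy-on-first-access behavior that distinguishes the three modes can be sketched as read indirection: an untouched target track is served from, and physically copied from, the frozen source image the first time it is accessed. A toy model of No Copy mode after activation (illustrative assumptions, not array firmware):

```python
class CloneSession:
    """Toy model of copy-on-first-access after Activate (No Copy mode):
    target tracks are indirect pointers to the source until first touched."""
    def __init__(self, source):
        self.source = dict(source)   # frozen point-in-time image at activation
        self.target = {}             # tracks physically copied so far

    def read_target(self, track):
        if track not in self.target:              # first access copies the track
            self.target[track] = self.source[track]
        return self.target[track]

    def percent_copied(self):
        return 100 * len(self.target) // len(self.source)

clone = CloneSession({0: "a", 1: "b", 2: "c", 3: "d"})
clone.read_target(0)
clone.read_target(2)
print(clone.percent_copied())   # -> 50: only the accessed tracks were copied
```

Copy mode would simply run a background loop touching every track after activation, and Pre-Copy mode would run it while the pair is still in the Created state; the access path stays the same.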
  • 86. If the No Copy option is selected when the pair is created, then activating them should change the state to Copy on Access, indicating that data is copied from the source to the clone on any first write or read of the destination, or on any first write to the source. If Copy had been chosen, the pair state reflects a status of CopyInProg while the contents are copied from the Clone Source to Target. Once all the data is copied the state changes to Copied. TimeFinder/Clone Terminate You can see that the Pair State has changed to CopyOnAccess, indicating that the pairs are active. The destination devices are Ready (read/write) again, so the point-in-time copy can be accessed. As accesses force data to be copied from source to target, the Source Protected Tracks values will decrease, and the % Copied value will increase. To ensure that you do not copy over previously copied data, each Clone operation results in putting a hold on the target device. Terminating a copy session removes any hold on the target device and deletes pair information about the terminated pair from the Symmetrix. The target host can no longer reference data on the source device through indirection (indirect target tracks that provide visibility to source tracks). When terminating a pair relationship, no conflicts exist if the pair is in the Created state or the Copied state. However, stopping a copy session for a pair whose state is CopyOnAccess or CopyInProg may end the session prematurely if an application has not finished accessing data, or if writes to the source device are ongoing. Terminating a pair while CopyInProg requires the Symforce flag. To terminate a Clone copy session, launch the Terminate dialog from the Clone menu, check the boxes of the pairs you want to terminate, and click the Execute button. After TimeFinder/Clone Terminate
  • 87. SRDF Operations * Identify SRDF volumes of interest * Create Device Group * Perform SRDF operations: – Failover – Fail Back – Split – Establish – Restore – Mode Change – Suspend Link – Resume Link – Update Source – Swap – Create Dynamic Pairs – Delete Dynamic Pairs – Advanced. SRDF creates and maintains a mirror image of one or more logical volumes on a remote Symmetrix array. Before you can use SRDF, the local and remote Symmetrix arrays must each be set up with at least two Remote Link Directors (RLD) through which the two arrays are linked. The Symmetrix array being mirrored is designated as the source (R1); the Symmetrix array maintaining the remote mirror is designated as the target (R2). Data is transferred across the SRDF link from the source to the target array. By maintaining real-time copies of data in different physical locations, SRDF enables you to perform the following operations with minimal impact on normal business processing: * Disaster recovery * Recovery from planned outage * Remote backup * Data center migration Accessing SRDF Options SRDF commands fall under the Data Protection task. SRDF options can be accessed by right-clicking the array and choosing Data Protection > SRDF from the menu. You can also select the array and the Data Protection task and use the SRDF menu. Some of the quick access icons in the Data Protection task initiate SRDF dialogs.
  • 88. You can monitor SRDF activity using an SRDF view, located in the Data Protection pull-down. This view displays: * Local and remote device number, Symmetrix ID, and device group name * Read and write state of the devices * Pair State: Synchronized, Suspended, Split, Failed Over, etc. * SRDF Mode: Synchronous, Semi-Synchronous, Adaptive Copy, etc. * Domino: On or Off * Invalid track information * Quality of Service settings If a Symmetrix is dragged into the view, the status of every SRDF volume in the array is shown. In this example, the SRDF view shows the same device group used in the previous example. The screen is split three ways to show all the columns of information. SRDF View Identify Relevant Volumes Let us illustrate basic SRDF tasks with an example. The Relationship view shows us that our filesystem /istdata has been created on SRDF R1 volumes. When the SRDF pairs are synchronized, writes to the filesystem are transferred to the paired devices on another Symmetrix. While ControlCenter allows you to perform SRDF operations on devices individually,
  • 89. EMC recommends that you create device groups and then perform SRDF actions on the device groups. You can see from the tree panel that a device group has already been created for these devices. An SRDF Failover is invoked in the event of a disaster, or in order to perform maintenance on the production site. A Failover leads to a Write Disabled state on the source side. If possible, ensure that a clean, consistent, coherent point-in-time copy, one that can be used with minimal or no recovery, is available on the target side. Ideally, you would stop all applications, unmount filesystems, and disable volume groups to be sure the source devices are not being written to. Failover can be invoked via ControlCenter from either the source or target side. In this example, the Failover is invoked from a host attached to the source-side Symmetrix. The device group resides on the source-side host. Assuming that the host considerations have been taken care of, Failover is invoked by right-clicking on the device group and choosing Data Protection > SRDF > Failover from the menu. The Failover operation makes the R1 devices Write Disabled and the R2 devices Ready. The Pair State goes to Failed Over. SRDF Failover SRDF Fail Back
  • 90. An SRDF Fail Back is invoked in order to move operations back to the primary site after a disaster, or after maintenance has been performed on the source Symmetrix. The Fail Back process compares the track tables for each affected volume in each Symmetrix and proceeds to update the source volumes with the changed data from the target Symmetrix. After Fail Back is invoked, the source volumes are returned to a Ready state and the target volumes to a Write Disabled state. Synchronization starts in the background. Production work can be resumed on the source Symmetrix immediately; any updated tracks on the target volumes that have not yet been transferred back to the source volumes are read back across the SRDF link if they are required by the resumed production applications. A performance overhead is incurred by this 'read across the link' operation. Until the synchronization has completed, the target Symmetrix cannot provide disaster recovery protection. Make sure that applications are properly quiesced and volume groups deactivated before you Fail Back. Assuming the host considerations have been taken care of, we will perform the Fail Back. In this example, the Fail Back is performed from the source side by right-clicking on the device group and choosing Data Protection > SRDF > Fail Back. The R1 devices become Ready and the R2 devices become Write Disabled. SRDF Split, Establish/Restore SRDF allows concurrent operations on the source and target devices. The SRDF Split operation suspends the SRDF link and makes both the source and target devices Ready. After a split, you can either perform an Establish to preserve source-side changes or a Restore to preserve target-side changes. Both of these operations put the pair back into regular Synchronized mode (after synchronization), with the source Ready and the target Write Disabled. SRDF Mode Control
  • 91. Right-clicking on a group, device, or array and choosing Data Protection > SRDF > Mode Control from the menu brings up the Mode Control dialog. Use this panel to change the way the devices are synchronized. Synchronous and Asynchronous are the main modes; the devices must be in one of these modes. Synchronous mode forces every write to the R1 to be sent to the R2 and acknowledged before a successful write is acknowledged to the host. Synchronous mode is the safest, because a write is only committed when it has been copied to two arrays. However, it is the slowest of all the modes, because each write is held up while the remote copy is made. Domino Effect is an option with Synchronous mode. With Domino Effect enabled, a link failure or any other condition that prevents writes from being copied to the R2 side forces the R1 device to Not Ready. This prevents the host applications from writing any data that cannot be copied to both the local and remote array. Adaptive Copy is more of a bulk-load copy scheme: writes are queued up on the R1 side and written to the R2 side whenever link availability allows. Adaptive Copy Disk mode queues the writes on the physical disks, and marks them to be copied to the R2 side. When link availability allows, the entire disk track is read back into cache and copied. Adaptive Copy Write Pending mode queues the writes in cache as write pendings. When link availability allows, the smaller, individual writes are copied. This mode can require significant amounts of cache. The Adaptive Copy Skew is a per-device threshold that limits the queue size: when the number of pending writes for a device hits the skew, the device reverts to Synchronous mode, slowing the writes down. Create Dynamic SRDF Device Pairs The Dynamic SRDF attribute can be selected when a device is first configured, or afterwards using the Device Attribute Definition configuration dialog. Dynamic SRDF devices can be paired with similar devices on a remote array for replication. 
Unlike static SRDF definitions, the pairing relationship can be undone and changed without a configuration change. Dynamic SRDF must be enabled on the Symmetrix when the array is first configured, or it can be enabled using Solutions Enabler.
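Returning to the mode discussion above, the Adaptive Copy skew can be pictured as a simple threshold on the pending-write queue: writes are acknowledged immediately and queued until the queue reaches the skew value, at which point the device falls back to synchronous behavior. This is a deliberately simplified sketch (real skew is measured in tracks, and the fallback persists until the queue drains):

```python
def srdf_write(pending, skew, track):
    """Queue one write in Adaptive Copy mode; return the mode actually used.
    When the pending count reaches the skew threshold, the device reverts to
    synchronous behavior, draining the queue before acknowledging the write."""
    if len(pending) >= skew:
        pending.clear()          # synchronous: everything is copied before the ack
        return "synchronous"
    pending.add(track)           # adaptive copy: ack now, copy when the link allows
    return "adaptive_copy"

pending = set()
modes = [srdf_write(pending, skew=3, track=t) for t in range(4)]
print(modes[-1])   # -> synchronous: the fourth write hit the skew threshold
```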
  • 92. In the example, two devices on the local array have been chosen to be paired with two similarly-numbered devices on the remote array. The Properties view Configuration column shows that they are currently not SRDF devices, but the Dynamic RDF Capability column shows that they can become either R1s or R2s. To configure them for SRDF, launch the Create Dynamic Pair dialog by right-clicking on the Symmetrix and choosing Data Protection > SRDF > Create Dynamic Pair from the menu. Then choose the source and target array from the pull-down lists, the RA group number, and the type of the local device. Click OK to assign individual pairs. Create Dynamic SRDF Device Pairs In the next page of the Create Dynamic Pair dialog, choose pairs of devices to become SRDF-capable. Choose a device in the local column and a list of eligible devices will appear in the remote column. Eligible devices are the same size as the local device, have the same configuration if a Meta device, are marked as Dynamic SRDF capable, and are not already members of an SRDF pair. Click the remote device and click Add to define the pair. Repeat for any other pairs you would like to create. Then choose the initial synchronization mode to place the devices in: Synchronous, Adaptive Copy, or Asynchronous. Also choose which device (R1 or R2) to invalidate; that device's data is overwritten during synchronization. After the changes have been made, the Properties view shows the Status of the R2s as Write Disabled, the Configuration of all devices as an SRDF type, and the Parameters column shows Dynamic SRDF.
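The dialog sequence above corresponds roughly to a single SYMCLI `symrdf createpair` invocation. This is a hedged sketch: the Symmetrix ID, RA group number, device numbers, and pair-file path are all hypothetical, and the `run` wrapper echoes the command rather than executing it.

```shell
#!/bin/sh
# Hedged sketch: dynamic pair creation with SYMCLI instead of the Create
# Dynamic Pair dialog. SID, RA group and device numbers are hypothetical.
run() { echo "+ $*"; }

# One "local remote" symdev pair per line, mirroring the dialog's pairing step.
cat > /tmp/devpairs.txt <<'EOF'
0022 0022
0023 0023
EOF

# -type R1: local devices become R1s; -invalidate R2: remote data is
# overwritten during the initial synchronization, as chosen in the dialog.
run symrdf createpair -sid 000190100123 -rdfg 4 \
    -file /tmp/devpairs.txt -type R1 -invalidate R2 -establish
```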
  • 93. SRDF/Asynchronous SRDF/Asynchronous is a disaster recovery solution that provides good application response time while consistently replicating data. Database disaster recovery depends on consistent data: data that has been written to tablespaces and logs in order. Synchronous mode guarantees consistent replication, but cannot be used over long distances due to the performance penalty. The adaptive copy modes can be used over long distances, but writes might be transmitted in any order. SRDF/Asynchronous marks incoming writes as part of a “delta set”: for a fixed period of time, all writes received go into the current delta set, and writes received after the time expires go into the next delta set. In this way, the writes in a particular set are known to be consistent as of the time the set expired, even though the Symmetrix does not log the exact time each write is received. In other words, the delta set that expired at 16:35:00 contains a group of writes; we cannot tell what order they arrived in, but we know they represent the exact state of changes to the database as of 16:35:00. While set N is being built, the previous set (N-1) is being transferred across the SRDF links. At the receiving site, the incoming writes are also marked as part of set N-1. The writes may be transferred in any order that is efficient for the RDF directors, so if a disaster stops set N-1 from completing, it must be discarded, since the writes within the set are not ordered. When set N-1 finishes transferring (the fixed cycle time expires), it is marked as a complete set (N-2) and its tracks are marked for destage. If a disaster occurs, set N is completely lost and set N-1 must be discarded because it is incomplete; however, set N-2 represents a consistent write set as of the time it expired, which was at most two cycles in the past. 
Using the default cycle time of 30 seconds, this means that the R2 devices will have a consistent, recoverable database that is at most one minute out of date with respect to the source database.
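The worst-case exposure follows directly from the cycle mechanics above: the building set N (up to one cycle of writes) is lost and the incomplete set N-1 (another cycle) is discarded, giving at most two cycle times of data loss. A trivial sketch of that arithmetic:

```shell
#!/bin/sh
# Worst-case SRDF/A exposure = 2 x cycle time: the building set (N) is
# lost and the incomplete transferring set (N-1) is discarded.
worst_case_rpo() {
    cycle_seconds=$1
    echo $((2 * cycle_seconds))
}

worst_case_rpo 30   # default 30 s cycle: 60 s (one minute), as stated above
```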
  • 94. SRDF/A information in SRDF view SRDF/A pairs are managed much like other SRDF pairs. You can issue fail over, fail back, split, establish, and restore commands using the dialogs we have already seen. This example shows the status of some SRDF/A devices in the SRDF View. You can see several indicators that these are SRDF/A pairs. The Columns relevant to SRDF/A are: * Pair State: Consistent or Inconsistent * Asynch. SRDF: Should be Yes for SRDF/A devices * Consistency Protection: Enabled or not * R2 Consistent: Yes or No * Uncommitted tracks * R1/R2 Time Log * Mode: Asynchronous indicates SRDF/A mode. SRDF/A Disable
  • 95. SRDF/A Disable or Enable actions must be performed against all volumes in the assigned SRDF/A RA Group. This can be done by: * Creating a Device Group containing all of the SRDF/A devices and then performing the SRDF/A Enable or Disable action. * Performing the SRDF/A Enable or Disable action against the SRDF/A RA Group. * Selecting all the SRDF/A volumes and then performing the Disable or Enable SRDF/A action. After performing an SRDF/A Disable action, the SRDF view shows that Consistency Protection is Disabled for the SRDF/A devices. The Pair State and R2 Consistent fields will also change as updates are performed against the R1 devices. Disabling SRDF/A reverts the volumes back to their normal primary mode of operation (Synchronous or Adaptive Copy). Cascaded SRDF As you can see in this example, you can cascade SRDF devices to maintain two replicated copies in addition to the original. This is a new feature; previously it was accomplished by using TimeFinder between two SRDF sessions. Enginuity 5773 introduced Cascaded SRDF synchronization. A Cascaded SRDF configuration consists of a Primary Site (Site A) replicating to a Secondary Site (Site B), which then replicates the same data to a Tertiary Site (Site C). A single device at the Secondary Site has both R1 and R2 features so it can continuously pass data to the Tertiary Site. The core benefit of a “Cascaded” configuration is its inherent capability to continue replicating from the Secondary Site to the Tertiary Site in the event that the Primary Site goes down. This enables a faster recovery at the Tertiary Site. Cascaded SRDF uses the dual-role R21 device at the Secondary Site, which can help reduce the number of devices required for a 3-site “No Data Loss” extended distance replication solution. Synchronous mode is not supported on both legs of a Cascaded SRDF configuration; Asynchronous or Adaptive Copy mode must be used for at least one of the legs. 
This table shows all of the supported Cascaded SRDF configurations. Creating Cascaded SRDF Relationships • Create one SRDF relationship and a normal device • Add SRDF Mirror to middle device to make it an R21
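The two creation steps above can be sketched with SYMCLI as two `createpair` operations, one per hop; pairing the Site B device a second time, in a second RA group, is what gives it both R1 and R2 roles. Site IDs, RA group numbers, and pair-file paths below are hypothetical, the exact flags should be checked against your Solutions Enabler release, and the `run` wrapper only echoes the commands.

```shell
#!/bin/sh
# Hedged sketch of creating a cascaded A -> B -> C relationship.
# All SIDs, RA groups and file names are hypothetical; run() only echoes.
run() { echo "+ $*"; }

# Hop 1: Site A -> Site B (this leg can be synchronous).
run symrdf createpair -sid SITE_A -rdfg 4 -file /tmp/hop1_pairs.txt \
    -type R1 -invalidate R2 -establish

# Hop 2: Site B -> Site C. Pairing the Site B device again, in a second
# RA group, makes it an R21. Both legs cannot be synchronous, so this
# leg is placed in SRDF/A mode.
run symrdf createpair -sid SITE_B -rdfg 6 -file /tmp/hop2_pairs.txt \
    -type R1 -invalidate R2 -establish
run symrdf -sid SITE_B -rdfg 6 -f /tmp/hop2_pairs.txt set mode async
```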
  • 96. Although an R21 device is treated as a distinct device type, there is no command to create one. Instead, you add an SRDF mirror to an existing SRDF device to create a cascaded relationship; this makes the middle device an R21. Start with an SRDF pair and a regular device. Each device should be on one of the three Symmetrix arrays involved in the relationship. Then use the Add SRDF Mirror command to add the regular device as a mirror to the middle SRDF device. If the middle device is already an R2, you will add the regular device as an R2 mirror. If the middle device is an R1, you will add the regular device as an R1 mirror. Once the configuration is complete, the middle device will appear as an R21. In either configuration, two of the devices dynamically change their SRDF type: the regular device becomes an SRDF device, and the middle device has an additional SRDF relationship added. For this reason, the devices used to create Cascaded SRDF relationships must have the Dynamic SRDF R1 or R2 feature enabled. Configuring Cascaded SRDF Relationships In this example, we will configure a Cascaded SRDF relationship using the devices shown here. In this test environment, only two arrays are being used. Device 33 will become the R21 device. In a realistic environment, you would follow these same steps with devices that are on three different arrays. Notice that all of the devices are Dynamic SRDF R1 and R2 capable. This is necessary for the R21 device, since it cannot be configured as a static R21 device. The R1 and R2 devices only need to be Dynamic R1 or Dynamic R2 capable.
  • 97. We will start by creating the SRDF relationship between device 22 and device 33. This can be done using the Create Dynamic Pair feature. The first step of the dialog prompts the user for the source and target arrays, and the RA group to use. The user will create the R1 device on the local array, and the R2 device on the remote array. Configuring Cascaded SRDF Relationships The next step of the dialog is to choose the R1 and R2 devices from the arrays, and the synchronization mode. Finally, the user will choose one of the two devices to invalidate, or overwrite during synchronization. The resulting SRDF View shows that the R1 and R2 devices are now paired in Synchronous mode. Creating R21 Devices The next step is to add an SRDF mirror to one of the devices. In this case, the user has chosen to add device 33 as a mirror to device 44. After selecting device 44 and choosing the Add RDF Mirror (SMC) option from the SRDF Configuration menu, the user is presented with a ControlCenter dialog displaying all of the Symmetrix Management Console hosts that can perform the task on the array. The Storage Agents for Symmetrix have created this list by querying the Solutions Enabler processes that support both ControlCenter and Symmetrix Management Console. The user will choose a host from the list, and if necessary, change the port used for secure web communications. Creating R21 Devices The next dialog is a Symmetrix Management Console dialog. Using this dialog, the user will select the RDF group and device to be the mirror of device 44.
  • 98. Device 44 is already selected as the target Device Range in the dialog. The user has chosen RDF Group six. Choosing a mirror type of R2 means that device 44 will be the R2 device and the remote device will be the R1. The user used the dialog to select device 33 as the remote device. The RDF mode for this pair has been set to Asynchronous. Creating R21 Devices Returning to ControlCenter, you can view the result of the Symmetrix Management Console configuration. Device 44 has become an R2, and device 33 has become an R21. TimeFinder/SRDF Quality of Service Internal TimeFinder or SRDF synchronization can have an impact on overall Symmetrix performance. Although priority is given to any immediately needed host I/O request, a large business continuity synchronization will still cause a lot of traffic on the back end. This additional traffic, even if low priority, consumes resources needed by host applications. The TimeFinder/SRDF Quality of Service utility helps reduce performance problems caused by business continuity synchronization. It does this by inserting delays between each scheduled track copy for TimeFinder or SRDF. The delay causes the business continuity task to take much longer, but by slowing down the rate of additional traffic, resources are freed up. There are two configurable delays in Quality of Service: TimeFinder and SRDF. For a non-BCV, the TimeFinder delay controls how much time is inserted between copying tracks from the source to the target. Only the source volume's setting is considered: for example, the setting for the Standard is considered for an Establish, and the setting for the BCV is considered for a Restore. For a BCV, the TimeFinder delay controls how much time is inserted between copying tracks from the BCV to its mirror after a split (if the BCV is mirrored).
  • 99. The SRDF delay controls how much time is inserted between copying tracks using SRDF. Again, the source volume’s settings are the ones observed. Non-SRDF devices cannot have their SRDF setting changed. Host Management * Capability licensed by Automated Resource Manager * Management tasks on remote hosts can be performed via ControlCenter – Explore Windows and UNIX hosts – Extend File Systems – Mount/Unmount File Systems – List all Files and Directories – Backup a UNIX File System using tar – Execute shell commands on UNIX hosts – Perform storage related commands – Explore and manage users, groups, services, registries, processes, and other non-storage entities – …. and many more tasks. Execute Command – UNIX Alerts - Overview * Why Alert? - Data availability – Monitor and report on events that could lead to application outages – Every ControlCenter agent can monitor a number of metrics * 34 agents and 700+ alerts * Alert categories – Health * Examples - Database instance up/down, Symmetrix service processor down, Connectivity device port status – Capacity * Examples - File System Space, File/Directory Size Change – Performance * Examples – Symmetrix Total Hit %, Host CPU Usage * Alert Matrix available on EMC Powerlink website
  • 100. Define an Alert Define Alert Instance The Alert Life Cycle refers to the sequence of typical events in an alert’s life. It starts when an alert is defined with the configured parameters shown in this illustration. We will discuss these parameters in detail later, but a quick glance shows that the alert can be configured to detect when an event happens or a critical measure meets some threshold. The condition of an event alert is either “True” or “False,” while the condition of a threshold alert is a numeric value that is compared against a measure. In either case, you can configure up to five severity levels. You can also choose which Console Views display the alert information. Before and After settings can be specified to handle short spike alerts. For most alerts, you can specify a schedule to control how frequently the objects are monitored. Some alerts are marked as “Agent Controlled.” The schedule is controlled by the agent in that case. An optional management policy can be used to route the alert to certain Console users—the default is for all Console users to see the alert. A management policy can also deliver an alert by email or SNMP. An optional autofix can be added to provide some kind of resolution. An autofix runs on the agent host when the alert first triggers. The Source and Apply To tabs of the alert configuration are used together to specify what objects the alert definition monitors. Alert Triggers Once an alert is defined, the agent will monitor the metric based on the schedule. When the event has occurred or the measure is found to be within one of the thresholds, the alert triggers or becomes active. Several things can happen when an alert first triggers. The Alerts View or At A Glance View might be updated to display the new alert, if those options were selected in the alert definition. A management policy might restrict what Console users see this information. 
In the Alerts View, the “Created” column will be set to the date and time the alert triggered. If a management policy that specifies email or SNMP delivery is specified, the email or SNMP trap will be sent. If one or more autofix scripts are specified, they will run on the agent host that detected the alert. If more than one agent can monitor the same object, the one currently designated as the “Primary” executes the autofix. Load balancing and failover might make any of the agents that can monitor the object a primary agent. During the time the alert is active, it can be actively managed by ControlCenter users. A user can acknowledge the alert, assign ownership, attach a note, or perform other tasks to help document the resolution process.
  • 101. Alert Updates At the next scheduled monitoring time, the agent checks the condition again. If the measure is still at one of the thresholds and it has changed value, the alert will be updated. Several things might happen during an update. The record in the Alerts View or At A Glance View will be updated with new information—a new record is not created during an update. When the measured value changes, the “Last Modified” column in the Alerts View is updated with the current date and time. You can use the “Last Modified” column to quickly tell how recently something has changed. But be warned: the “Last Modified” time will also change when any property of the active alert changes, including when a user performs a management action. A management policy will execute again during an update. If a Console user is specified by the policy, this simply means that the Alerts and At A Glance Views will be updated with any changed information. If email or SNMP is specified, a new message or trap will be sent that carries the new measure value and severity information. This helps keep users notified about the status of the measured value. An autofix is never run during an update. It is only run when the alert is first triggered. Remember, an update happens when the measure changes. If the measure of a threshold alert has not changed, the events on this page will not happen. Since “True/False” alerts do not have measured values, an update never happens for such alerts. The Alert Life Cycle of a True/False alert includes Trigger and Clear, but not Update. Alert Cleared An alert might self-clear during a monitoring period if the condition is no longer met. This typically happens with threshold alerts when the measure is no longer within any of the thresholds, though not all threshold alerts self- clear. An alert can be manually cleared by a user also. Several things happen when an alert is cleared. The alert is removed from the Alerts View and the At A Glance View. 
The alert is logged in the Alert History View. The complete history of the alert is available in this view, including the first trigger time, times of severity changes, and time when cleared. Any notes applied during management are also viewable. Alerts are retained in the Alert History View for a time period specified in the Alert Data Retention policy. By default, this policy retains the history information indefinitely. Once the alert has been cleared it is eligible to Trigger again, starting the alert lifecycle over. Threshold alerts typically self-clear when the value is no longer within a threshold, but True/False alerts do not typically self-clear. They remain active even when the event is no longer detected. Only by manually clearing these alerts will they become inactive.
  • 102. Configuring Alerts * Alert Definitions – Defined alerts – State: enabled/disabled * Alert Templates – List of all possible metrics for which an alert can be defined * Autofixes – Scripts that run automatically when an alert triggers – Requires an Automated Resource Manager or StorageScope license * Management Policies – Notify CC User – Send e-mail – Notify SNMP * Alert Data Retention Alert Definition * List of active alerts currently defined * Enabled alert – Red “Bell” * Disabled alert – Gray * Many alerts for most agents are pre-defined and enabled when the agent is installed. * Refer to the EMC ControlCenter Alerts Matrix for a breakdown of all alerts. The Usage view of any defined alert shows whether a particular alert is enabled in the Is Alert Enabled? column. The Usage view is one of the choices under the ECC Administration pull-down. A red bell also indicates an enabled alert. The Properties view of a defined alert shows whether a Management Policy is associated with an alert definition, when it was modified, and who modified it. Both of these views can be exported into a CSV file which can then be imported into a spreadsheet like Microsoft Excel. Exporting these two views prior to making changes is extremely useful for recording the state of the alert definitions. To modify an existing alert definition, right-click on the alert and select Edit Alert. The Alert editor interface pops up. The interface has five tabs. * The Properties tab gives you information about the alert. The information on the properties page cannot be edited. * Alert Type: The four types are Count, Interval, Rate and State. * Alert Category: Alerts can be Health, Capacity or Performance related, from object types of Host, Storage, Connectivity, etc. * Last Modified: When this alert definition was last modified. * Who Modified: The ControlCenter Console user who last modified the alert definition. * Description: Gives you a brief description of the alert. 
The online help will provide more information on the alert.
  • 103. Alert Definitions – Usage and Properties Views Each alert definition has a unique descriptor. The unique descriptor is assigned automatically by ControlCenter; it starts at 00 for the first definition of a specific alert and is incremented for new definitions of the same alert. The Director Status Alert is an example of a State Alert. Editing an Alert Definition Source and Conditions
  • 104. Source Tab: The information to be specified in the Source tab depends on the specific alert. For the Director Status Alert shown in this example, you can establish which objects should be monitored. The recommendation is to leave the Source tab at the default value of the asterisk wildcard. The Apply To tab (discussed in the next slide) allows a more granular filtering of the specific objects that one needs to monitor. Conditions Tab: There are five levels of severity that can be configured for each alert definition: (1) Fatal, (2) Critical, (3) Warning, (4) Minor, (5) Informational. Thresholds can be set up for each of the severity levels. For each severity level, the notifications can be sent to the Alerts View and the At A Glance View or, alternately, to the At A Glance View only. If both the Alerts view and At A Glance view boxes are unchecked, the alert does not trigger for that severity level. The Before and After values are used to control alert spikes: situations in which a resource temporarily exceeds a trigger value and then quickly falls within acceptable ranges again, or a resource briefly goes offline and then comes back online. Use these fields to prevent being inundated with false alerts. Before specifies how many consecutive times the alert conditions must be met before ControlCenter sends a notification or alert. After an alert or notification has triggered, ControlCenter uses the value in After to determine how many consecutive times the conditions must be false before ControlCenter removes the alert or notification. The alert schedule determines how often ControlCenter evaluates the alert definition. Actions and Apply To Actions Tab: The Actions Tab is used to specify the alert schedule and optionally to assign a Management Policy and an Autofix. We will discuss Management Policies and Autofixes shortly. Some alerts do not allow the specification of a schedule; for those, the schedule is controlled by the agent. 
The schedule is chosen via the drop-down list, which shows all the pre-defined schedules. Schedule definitions can be edited and new schedules can be created if necessary. The schedule determines how often an agent monitors a specific metric for threshold violations.
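The Before counter behaves like a simple consecutive-violation debounce. The following is purely an illustration of that rule (not ControlCenter code), assuming each poll has been reduced to 1 (condition met) or 0 (condition not met):

```shell
#!/bin/sh
# Illustration only (not ControlCenter code): the "Before" setting
# triggers an alert only after N consecutive polls meet the condition,
# filtering out short spikes.
should_trigger() {
    before=$1; shift          # required consecutive violations
    count=0
    for sample in "$@"; do    # 1 = condition met this poll, 0 = not met
        if [ "$sample" -eq 1 ]; then
            count=$((count + 1))
            [ "$count" -ge "$before" ] && { echo yes; return 0; }
        else
            count=0           # a clean poll resets the streak
        fi
    done
    echo no
}

should_trigger 3 1 1 0 1 1 1   # a spike of 2 is ignored; the later streak of 3 triggers
should_trigger 3 1 1 0 1 1 0   # never 3 in a row, so no trigger
```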
  • 105. Apply To Tab: This tab is used to specify the objects, such as hosts, storage arrays, or network components, to which the alert definition applies. You can either select individual objects or all valid objects monitored by the agent by checking the Apply this alert to all applicable box. If the Apply this alert to all applicable box is checked, then any new objects discovered in ControlCenter inherit the alert definition automatically. Finally, before saving the new definition (OK button) to enable the alert, make sure that the Alert Definition Enabled? box is checked. Creating Alert Definition from Template Alert Definitions are created from alert templates. Right-click and select the Alert Template > New option to create a new Alert Definition. The new alert definition interface window will pop up. This interface is almost identical to the edit alert interface; the only difference is that the Unique Descriptor is not shown. The unique descriptor will be automatically generated by ControlCenter and appended to the name of the new alert definition when it is saved. This example shows a Capacity type alert template from the Database Agent for Oracle using the Free % of Tablespace Alert Definition Template. The Actions Tab: As previously shown, the Actions Tab is used to specify the alert schedule and optionally to assign a Management Policy and an Autofix. We will discuss Management Policies and Autofixes shortly. Some alerts do not allow the specification of a schedule; for those, the schedule is controlled by the agent. The schedule is chosen via the drop-down list, which shows all the pre-defined schedules. Schedule definitions can be edited and new schedules can be created if necessary. The schedule determines how often an agent monitors a specific metric for threshold violations.
  • 106. Apply To Tab: This tab is used to specify the objects, such as hosts, database instances, storage arrays, or network components, to which the alert definition applies. You can either select individual objects or all valid objects monitored by the agent by checking the Apply this alert to all applicable box. If the Apply this alert to all applicable box is checked, then any new objects discovered in ControlCenter inherit the alert definition automatically. Management Policies: * What should ControlCenter do when an alert triggers? * Possible actions: – Display alert in the Console of a specific ControlCenter user – Send an e-mail message – Notify a SNMP management framework – Set up loops – Set up wait intervals * Create new or edit existing management policies * Same policy can apply to multiple alerts Management Policies control the way alert notification is handled. If no Management Policy is assigned to an alert, notification goes to all Console users via the Alerts View or At A Glance View. Management Policies allow alert notifications to be sent to specified Console users, to a Framework application via SNMP, or to an email address. You can also set up looping features to repeat the notifications a fixed number of times, and insert timed wait steps between the notifications. To create a new Management Policy, right-click the Management Policies folder under Administration > Alert Management and select New.
  • 107. Creating/Editing a Management Policy To edit the policy, drag action icons from the left of the dialog to the main display. If you are specifying a Console User or Email recipient, remember that you can only specify one name in an icon box. You can use “*” to mean all Console users, but otherwise avoid wildcards. For multiple recipients, drag multiple boxes. The SNMP icon does not have any configurable parameters; you can edit the SNMP receiver parameters in the Integration Gateway Agent’s configuration file. A management policy may be used by more than one alert. If you change a management policy, make sure your changes are appropriate for all the alerts to which the management policy is attached. ControlCenter executes the management policy steps in the sequence they appear in this dialog box, from top to bottom. ControlCenter executes the management policy steps when the alert first triggers, and again whenever the value of a threshold alert changes. The use of the loop and wait functionality is not recommended as an escalation policy: once begun, the management policy executes all of the steps whether the alert is cleared or not. Keep the management policy simple; specify Console users, e-mail recipients and SNMP receivers only. Assigning Management Policy to Alerts Management policies can be assigned either by right-clicking the alert folder in the tree panel or in the Actions tab of the alert definition. The Management Policy is displayed in the Properties view for the alert.
  • 108. Management Policy – E-mail & SNMP Notification Autofixes * Types of autofixes – System defined – User defined * User defined autofix – Create the autofix definition within the Console – Create the autofix script on the host * Attach the autofix to the alert through the Edit Alert dialog box * Autofixes always run on the agent that detects the alert An autofix is a script or other program that the agent runs when the alert first triggers. A few System autofixes exist for Windows hosts. Two of them manipulate the Windows Event Log and are tied to the Host Agent for Windows Event Log alerts. The other System autofix restarts a Windows Service, and is related to the Service Failure alert. Any other autofixes have to be defined by the user. Do this by specifying the name of the executable and any parameters to pass, and attaching it to the alerts. You must also install the executable on the agent host. The autofix will be executed by the agent that detects the alert condition. If multiple agents monitor the same object, any one of the agents could be the primary at the time the event occurs; remember to install the executable on all of them. An autofix can be created by right-clicking the Autofixes folder and selecting New. Parameters can be passed from ControlCenter to the autofix scripts. The parameters are: * &METRIC is the alert name. * &LEVEL is the severity level of the alert, in string format: Fatal, Critical, Warning, Minor, and Information. * &KEY is the alert key (the managed object that the alert is acting on). On UNIX and Windows, if the alert has more than one key, append a number to &KEY for each key you want to pass, for example: &KEY1, &KEY2, and so on. * &VALUE is the value at which the alert triggered.
  • 109. Creating a User Defined Autofix Attaching an Autofix to an alert UNIX Autofix Example: * /usr/ecc/autofixes/vgextendfs &METRIC &LEVEL &KEY &VALUE
#!/usr/bin/ksh
TIME=`date`
print "$TIME" >> /tmp/vgautofix.out
print "Metric: $1 Filesystem Free Size: $4 %" >> /tmp/vgautofix.out
if [ $2 = 'FATAL' ]
then
  print "Severity: $2 Filesystem $3 has been increased by 4 MB" >> /tmp/vgautofix.out
else
  print "Severity: $2 Filesystem $3 has not been increased" >> /tmp/vgautofix.out
fi
* Output from vgautofix.out
Thu Jan 17 14:13:23 EST 2006
Metric: MAR.FileSystem.Space.PctFreeSpace Filesystem Free Size: 10.032552 %
Severity: MINOR Filesystem /istdata has not been increased
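For experimentation outside ControlCenter, here is a self-contained portable-sh variant of the autofix above. The argument order ($1 through $4) mirrors the &METRIC &LEVEL &KEY &VALUE substitutions; the log path and the “4 MB” message are carried over from the example and are otherwise arbitrary.

```shell
#!/bin/sh
# Portable-sh variant of the vgextendfs autofix sketch above. Arguments
# arrive in the order ControlCenter passes them: metric, severity,
# key (file system), value. The log path is arbitrary for this demo.
LOG=${AUTOFIX_LOG:-/tmp/vgautofix.out}

vgautofix() {
    metric=$1 level=$2 fs=$3 value=$4
    date >> "$LOG"
    echo "Metric: $metric Filesystem Free Size: $value %" >> "$LOG"
    if [ "$level" = "FATAL" ]; then
        # A real autofix would extend the file system here (e.g. via LVM).
        echo "Severity: $level Filesystem $fs has been increased by 4 MB" >> "$LOG"
    else
        echo "Severity: $level Filesystem $fs has not been increased" >> "$LOG"
    fi
}

# Sample invocation matching the output shown above.
vgautofix MAR.FileSystem.Space.PctFreeSpace MINOR /istdata 10.03
```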
  • 110. Setting Thresholds You can also create alert definitions using the Edit Thresholds dialog box. The Edit Thresholds dialog box allows you to create or modify multiple alert definitions at one time. In addition, the dialog box shows the category for each metric. The category determines in which At A Glance view chart a notification will display (for example, Storage System Performance or Host Capacity). To access the Edit Thresholds dialog box, right-click any object and select Alert Thresholds, Edit Thresholds. Using the edit threshold window will create a new alert in the tree panel for each unique alert level specified. In this example, the Total Hit% Metric for Symmetrix ‘156 is being set. The Critical level has been changed to 20. Clicking OK results in the creation of a new alert definition in the Alert Definition folder for the Storage Agent for Symmetrix. The resulting definition is shown in the next slide. The result of setting thresholds is the creation/alteration of the alert definition. In the previous slide, the Total Hit% threshold for Symmetrix ‘156 was changed. This slide shows the Alert Definition which was created as a result of setting the threshold.
  • 111. Monitoring and Handling Alerts * Alerts View – Chart View – Acknowledge – Clearing – Assign * Alerts History View – Notes * At A Glance View – Filtering Active Alerts View This shows the Alerts View. From this view you can quickly see the cause of the alert, which object is affected, and what the level of the alert is. For alerts, this is the main functional view and most of the management functions can be started from this view. Alerts can be displayed in the Alerts View in a number of different ways, a few are listed below: * Clicking the All Alerts button in the top right hand corner. The All Alerts button populates the Alerts View with all the active alerts. * Switch the target panel to Alert View by clicking the Alerts button in the taskbar. Then drag selected objects into the target panel from the tree panel. Dragging selected objects into the target panel displays alerts that relate to those specific objects. This is one way to filter the number of alerts shown in the alerts view. * Right click any object that shows a status icon and choose Alerts. This populates the target panel with the active alerts that are specific to that object. Description of the Columns: * Alert State: Yellow Bell – New alert (Text will be in Bold font). Gray Bell – Acknowledged or Assigned Alert (Text is in normal font). Note: A paper clip icon is shown here if a note is associated with this alert. * Autofix State: If an autofix is associated with the alert then this column will indicate the status of the autofix − Running – Wrench/Spanner icon − Success – Wrench/Spanner with green check − Failure – Wrench/Spanner with red X * Severity: Ranges from 1 to 5. 1 = Fatal, 2= Critical, 3 = Warning, 4 = Minor, 5 = Information * Object Name: Host, storage array, network component (such as a switch), or other managed object for which the alert triggered. * Message: A description of the condition that caused the alert. Look here for information about the specific resources affected.
  • 112. Active Alerts View Active Alerts View Options Acknowledging an Alert
  • 113. Assigning An Alert Adding Notes to Alerts Searching Alert Notes
  • 114. Clearing an Alert Alert History View Alert History of a Particular Alert Instance
  • 115. Setting Alert Data Retention Disabling an Alert Definition Alert Chart View
  • 116. Filtering Active Alerts At A Glance View At A Glance View – Health Drill Down
  • 117. At A Glance View – Capacity Drill Down
    Alerts Discussion
    * There are over 700 alerts
      – Too few alerts and problems will not be found
      – Too many alerts and they will be ignored
    * We need to figure out which alerts are best for each situation
      – There are no formulaic “best practices”
      – The choice depends on the environment, including other monitoring products, which agents are installed, and the organizational structure of the group using ControlCenter
    * ControlCenter alerts are broadly divided into 3 areas
      – Health
      – Performance
      – Capacity

    Health Alerts
    * Array
      – Hardware
      – Software
    * SAN (Connectivity Objects)
      – Device/Port Status
      – Zoning/Fabric
    * Host
      – Process/Service failure
      – PowerPath
    * Applications
      – Database Instance up/down
      – Backup application errors
    * ControlCenter Server alerts

    Capacity Alerts
    * Array
      – Symmetrix virtual provisioning thin pool alert
      – CLARiiON Raid Group
    * Host
      – File system
      – Volume Group/Disk Group
      – Swap Space
      – Quota
    * Applications
      – Database tablespace
      – Backup data size change

    Which Alerts Should You Enable?
    * Finding the most useful alerts requires understanding of
      – What problems have led to outages
      – How issues will appear from different perspectives
      – What alerts are available
  • 118. * Which alerts provide the most useful information?
      – Are all of them needed?
      – Are some redundant in this environment?
    * Are other tools being used for alerting?
      – Operating Systems
      – Applications
      – Frameworks

    Framework Integration Architecture
    ControlCenter Framework Integration is SNMP based; ControlCenter is capable of forwarding alerts as SNMP version 1 traps to any SNMP trap recipient. The standard SNMP traps defined within the FibreAlliance MIB are used. The ControlCenter Integration Gateway agent is the interface between ControlCenter and the external framework application. It sends selected alerts as traps and also responds to “get” requests for information. It can also be configured to write events to the Windows Application Event log; this feature is used for Windows integration. Framework tools can be passive or active. Passive frameworks listen for traps from ControlCenter and post them to an events display. Active frameworks typically listen for and post traps as well, but they also query ControlCenter to show the status of every object. Active tools show the entire environment, while passive tools show only fault events. We discuss some of these topics in more detail in the next few slides.
    Supported Framework Applications
    The Integration Gateway Agent can send SNMP v1 traps to any receiver software, supported or not. BMC Patrol and Microsoft Operations Manager or System Center Operations Manager are the only framework applications currently supported. These applications have been tested with ControlCenter, and you can download integration packages that seamlessly adapt these products to display the ControlCenter alerts. There are other applications for which EMC supplies integration packages but which are not supported. These files are provided without support to help guide administrators when integrating ControlCenter into their application. The packages should work well for the versions specified in the release notes. 
If you have a different version or an unusual configuration, you may have to start with the package files and then do some manual editing of the configuration. Any other software not shown here should still be able to receive traps from the Integration Gateway Agent if it conforms to SNMP version 1. You will have to configure the appearance of such software manually, since no integration packages exist.
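At the wire level, receiving an SNMP v1 trap is just receiving a UDP datagram on port 162. A minimal Python sketch of a raw receiver, useful only for confirming that traps from the Integration Gateway actually arrive; decoding the BER-encoded trap PDU is out of scope here:

```python
import socket

def open_trap_socket(port=162, host="0.0.0.0"):
    """Bind a UDP socket on the SNMP trap port (162 by default).

    Binding port 162 normally requires elevated privileges; pass
    port=0 to let the OS choose a free port for testing.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def receive_trap(sock, bufsize=4096):
    """Block until one datagram arrives; return (sender, raw_bytes).

    A real receiver would decode the BER-encoded SNMPv1 trap PDU;
    this sketch only captures the raw bytes to confirm delivery.
    """
    data, sender = sock.recvfrom(bufsize)
    return sender, data
```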
  • 119. BMC Patrol is a supported framework application that can be modified by an integration package. Patrol is an active framework tool. This means it can receive traps from the Integration Gateway and post them in an event window, and also query the Integration Gateway to retrieve the status of all of the objects managed by ControlCenter. This gives Patrol the ability to display a dashboard of your entire ControlCenter environment—even the objects that currently have no faults. The Integration Package for Patrol installs and configures the ecc3pi knowledge module. Patrol uses this tool to interpret traps and retrieve information from the Integration Gateway. This saves the Patrol administrator from having to investigate what information is returned by ControlCenter and then format it appropriately. The main map of BMC Patrol shows the ControlCenter objects organized into groups. You will find default groups like “Storage,” “Connectivity,” and “Hosts.” Other groups match the user-defined
  • 120. ControlCenter groups. Drill down into one of these groups to show a status dashboard for all of the objects. Colored icons show the current alert status of each object. These colors correspond directly to the alert status in ControlCenter. Patrol also shows the individual traps received from the Gateway Agent in the Event Manager dialog. You can see the time, object name, and alert message for every trap sent. Microsoft Operations Manager Microsoft Operations Manager is another supported framework application that can be modified by an integration package. Microsoft Operations Manager does not use SNMP to receive information; it monitors the Windows Application Event Log. The Integration Gateway can be configured to write alerts to the log with an identifying tag. Operations Manager searches for events with the matching tag and posts them in its interface. A similar architecture is used by Microsoft Systems Center Operations Manager. You will have to manually convert the AKM rules file supplied in the integration packages to the new XML format used in Systems Center Operations Manager, but otherwise the installation is the same. These tools are passive-only: they simply display the alerts sent from the Integration Gateway Agent. They never poll the agent to find the status of the other objects. Install Integration Gateway Agent The Integration Gateway Agent is pushed to any host having a Master Agent, just like any other ControlCenter agent. A prompt asks for the network name of the framework application. Traps will be sent to this address. If the framework application uses a distributed configuration, make
  • 121. sure the address you enter here is of the trap collector, not the central application server. Ask the framework application administrator for the correct address to send traps to. You will notice that there is no way to choose the port or community string. At installation, the agent is configured to respond to “get” requests using port 1273 and community string “public,” and to send traps using the default port of 162 and community string “public.” To change these defaults, you can edit the configuration file after installation. Greater customization of the parameters of the Integration Gateway can be done by editing the CNG.ini file (CSG.ini if the Gateway is on a Solaris host). Entries in this file specify the port number and community used by the framework software when querying the Gateway. The default port value for this software is 1273, not the traditional SNMP default of 161. This allows the Gateway to be installed on the same host as the framework application without a port conflict. Make sure you use the correct port number when attempting to read information from the Gateway. You can configure the Gateway to write alerts to the Windows Application Event log by setting the “NT_EventLog_Key” entry to a value of “EMC_Alarms”. This is a required step for integration with Microsoft framework applications.
  • 122. The recipient address designated when the agent was installed is recorded in the “trap_client_registration” setting. The entire setting shows the address of the recipient, the port (default 162), severity (default 10), and state (set to ACTIVE to enable sending of traps). If you need to change the recipient or port, simply edit those parameters in the file. To add additional receivers, just duplicate the line and edit the address. The severity setting can be used to filter the forwarding of traps by severity of the ControlCenter alert. Remember to use ControlCenter to restart the Integration Gateway after making changes to the initialization file. Troubleshooting: If your framework software does not show traps or managed objects as expected, there are several troubleshooting steps you can take. First, make sure the expected alerts appear in the ControlCenter Console. If the alert management policy includes at least one Console user, the event should be visible there. If it is not, then of course it will not be forwarded to the framework application. You might check the Integration Gateway Agent log files listed above for any software errors. Network errors and issues that prevent proper startup will be captured there. Some of the integration packages include a program named “ecc3pi_test.exe” which will simulate several of the Integration Gateway traps. These traps are sent from the location of the Integration Package install (where the program is), not necessarily from the Integration Gateway agent. So it is a good test of the framework’s ability to receive a trap, but may not completely test the network between the Integration Gateway and the framework. When an active framework receives the simulated Cold Start trap, it will try to discover the ControlCenter managed objects at that address. If the Integration Gateway is not present on that host, the application will discover an “empty” ControlCenter environment. 
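A hypothetical CNG.ini fragment illustrating the settings described above. The addresses are placeholders, and the exact key syntax and ordering of the trap_client_registration fields may differ by ControlCenter version; only the key names mentioned in the text are taken from the source:

```ini
; Hypothetical CNG.ini fragment -- syntax is an assumption, keys from the text.

; Tag used when writing alerts to the Windows Application Event log
; (required for integration with Microsoft framework applications)
NT_EventLog_Key = EMC_Alarms

; recipient address, trap port (default 162), severity filter (default 10),
; and state (ACTIVE enables sending of traps)
trap_client_registration = 10.0.0.25, 162, 10, ACTIVE
; add a second receiver by duplicating the line and editing the address
trap_client_registration = 10.0.0.26, 162, 10, ACTIVE
```

Remember to use ControlCenter to restart the Integration Gateway after editing the file.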
This might have to be deleted from the application view. You might edit the configuration file to cause the Integration Gateway to record the events in the Windows Event Log as an additional troubleshooting step. By reading the Event Log, you can see what alerts the agent processed. The Integration Gateway also records all of the events that it attempts to forward in a file named “events.xml.” Any event listed there should have been forwarded to the framework receiver—if it did not arrive, check the address and port settings, the network connectivity, or the framework software itself. The gateway INI file contains a parameter that controls the maximum size of this file. When the maximum size is reached, older entries are deleted.
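The size-capped rotation described for events.xml (older entries deleted once the maximum size is reached) can be sketched as follows. The real file format and the size-based trigger are version specific, so the `event` element name and count-based trimming here are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

def trim_events(xml_text: str, keep: int) -> str:
    """Keep only the newest `keep` <event> entries (element name assumed).

    Illustrates the rotation behavior described for events.xml; the real
    gateway trims by file size, which this sketch approximates by count.
    """
    root = ET.fromstring(xml_text)
    events = root.findall("event")
    # Entries are assumed to appear oldest-first; drop the excess from the front.
    for old in (events[:-keep] if keep else events):
        root.remove(old)
    return ET.tostring(root, encoding="unicode")
```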
  • 123. Integration Gateway Event Log: events.xml StorageScope Features • Reporting built around storage management use cases • On-demand views show storage pain points • Scheduled reports delivered by File, Email, or Print • Different levels of customization • IT object (host, switch, storage) and file level data collections combined in one database This is a more detailed look at the StorageScope architecture and its relationship to the other ControlCenter components. StorageScope is considered to be part of the Infrastructure tier, although it is shown on a separate host in this example. The StorageScope Repository is populated with data from the ControlCenter Repository by the Extraction, Translation, and Load (ETL) process. Configuration and storage information about all of the hosts, applications, storage, and switches in the environment that are monitored by the
  • 124. agents is passed from one database to another. Certain data is time stamped to enable long term trend reports. By using this already-collected data, StorageScope avoids placing additional processing demands on the agent hosts. The ETL process typically runs once a day. The FLR Archiver performs the ETL process and also handles data requests from the user. It uses the ControlCenter Web Server to publish the user interface as HTTP pages. If the FLR Archiver is installed on a separate host from the ControlCenter Server (which also uses the Web Server for HTTP publication), a separate Web Server instance is installed. The only data that is not collected from the ControlCenter Repository is file level data. File level data is collected by the Host Agents and sent directly to the FLR Archiver. The file level data is stored as temporary files until the next ETL process. After ETL is run, the file level data is loaded into the StorageScope Repository and viewable in the views and reports. The StorageScope Repository must be an Oracle 10g Release 2 database, just like the main Repository. No other databases are supported with this release. It can be installed from the ControlCenter install CDs, or the customer can purchase and install their own Oracle instance. Installing Oracle from the ControlCenter CDs does not require any additional license or cost, but you will not have complete administrative control over the database. But even when using your own database to gain administrative control, be wary of performance issues. StorageScope is a demanding data warehouse-type application that will have significant performance effects on any other application using the same Oracle server. In most small environments, the StorageScope Repository can be installed on the Infrastructure host. The Repository and the StorageScope Repository instances share the same Oracle server in this case. 
In larger environments where many thousands of managed objects are cataloged, the performance demands require the StorageScope Repository to be on its own dedicated server. The EMC ControlCenter Performance and Scalability Guidelines have complete details on the StorageScope requirements. The FLR Archiver is automatically installed on the same host as the StorageScope Repository. If you purchase and install your own Repository, EMC implementation engineers will initialize a new instance and install the FLR Archiver. Two licenses are required for full functionality of StorageScope. Most of the data collection and processing is available with the standard StorageScope license, but not the file level reporting capabilities. Those become active with the StorageScope File Level Reporting license, which can only be entered after the StorageScope license. The StorageScope File Level Reporting license allows two data collection policies to be enabled: The File Level Collection policy records file-by-file details of standard filesystems mounted on hosts. Any Host Agent can enable a File Level Collection policy to collect data about its file systems or mounted shares. The File Level Collection for UNC Connection policy records file-by-file details of UNC shares
  • 125. accessed across a network. This policy only applies to the Host Agent for Windows. The File Level Collection Data Collection Policy provides file scanning similar to the old StorageScope FLR and VisualSRM products. There are several scanning depth options available in the Source tab when creating the policy. The Scope of Data Collection changes how much detail is recorded in the Repository. An All Files and Folders scan loads everything, going through all the files and folders on a per file system basis. Alternatively, you could opt for only a summary file scan. A Folders Only scan examines the folder contents to record the number of files, their storage utilization, type, owner, and other details. But it does not store the information for every file; it only summarizes the data by folder. You can also choose “Exceptional Files and Folders,” such as the top N largest or the top N oldest. You can enter the N value in the next line of the dialog.
  • 126. The demanding nature of the file level collections is made clear in the Performance and Scalability Guidelines. It shows the circumstances under which the StorageScope components can be installed with the other Infrastructure components and when they should be on a dedicated server. In most cases, environments classified as small and medium by the Performance and Scalability Guidelines can have the StorageScope database and server installed on the infrastructure. Large environments must use a dedicated server for StorageScope. When using a dedicated StorageScope server, the maximum number of medium filesystems that can be added to File Level Data Collection Policies is 2,000. These scalability guidelines assume that the highest level of collection detail is reserved for the critical filesystems. Seventy-five percent of the maximum allowed filesystems should be monitored using a Folders Only scope setting with no summary file type collection. Twenty percent can have the Folders Only scope with summary type collection, and only five percent can have All Files and Folders and file type summary. When StorageScope is installed on the Infrastructure, the total number of filesystems allowed may be less than 2,000. Since the exact number depends on the power of the server and the size of the filesystems, make sure you ask the EMC team that installed ControlCenter what your limits are. In practice, you should turn on the “Folders Only” collections for all of your filesystems for some period to help identify the critical filesystems. Once you have studied the environment for a while, you can enable the higher details just on those filesystems that are in danger of running out of space. The file level data collection policies can optionally collect information on file types. To determine the file type, the agent compares the filename extension to a list of user-defined types. 
You can manage the types in the Edit File Type Definitions option of the Reports menu in the ControlCenter Console. When adding extensions, separate them by commas and do not use a period—just enter the text that appears after the last period in the file name. You can use a star to mean any characters. There is only one type definition list for all users, so consult your associates before changing an existing type. Since the type cataloging is done during a file level data collection policy execution, any changes to the type list will show up in the views, queries, and reports after the next policy execution and ETL process.
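The extension-to-type matching described above can be sketched in Python. The type names and extension patterns below are made-up examples, and `fnmatch` is used as a stand-in for ControlCenter's star-wildcard matching:

```python
from fnmatch import fnmatch

# Hypothetical type definitions: type name -> comma-separated extensions,
# entered without a leading period; "*" matches any characters.
TYPE_DEFINITIONS = {
    "Media": "mp3, avi, mo*",
    "Office": "doc, xls, ppt",
}

def classify(filename: str) -> str:
    """Return the file type based on the text after the last period."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    for type_name, patterns in TYPE_DEFINITIONS.items():
        for pattern in patterns.split(","):
            if fnmatch(ext, pattern.strip().lower()):
                return type_name
    return "Other"
```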
  • 127. The several reporting methods in StorageScope can be arranged from Novice to Expert in terms of difficulty and level of effort. Users can rely on one method to quickly spot trouble, then use a more detailed method to investigate the issue further. The Dashboard and SRM Views pages of the StorageScope interface provide quick, high-level information about your objects, their utilization, and storage trends. Reports show more detail about the same things, and can show file level data as well. Built-in queries are similar to reports, but tend toward more tabular data. Custom queries are fully configurable database queries. You can filter data from any of the object tables, and sort and summarize the output. The wizard interface gives you much of the power of a SQL query without having to actually know SQL. Use custom reports to query the StorageScope Repository and format the output to meet your specific needs. You need to use the Business Objects Crystal Reports editor to create custom reports (not supplied with ControlCenter). Once you have a useful report, you can add it to the StorageScope report list for automatic scheduling. Good knowledge of Crystal Reports is required for this option, and you may occasionally need some SQL programming as well.
  • 128. For the most complete reporting available, you can query the StorageScope Repository with any SQL-based reporting tool. You have the complete details of every table of data this way, but of course you have to write the reporting solution from scratch. You can go to the Snapshots page under the Utilities menu to view the built-in Snapshots or add a custom Snapshot. Any one of the 16 built-in Snapshots can be copied, modified, and then saved as custom snapshots, or you can create your own from scratch. The original built-in Snapshots cannot be modified or deleted. The Snapshot Settings dialog pictured at left prompts for a Type (Table or Chart), organization Category, and a Title. The data source of the Snapshot is the SQL query, which you will have to enter by hand. If you are not familiar with SQL coding or the StorageScope Repository tables, you might just make a slight change to the code of a copied Snapshot. Or you might capture the code from a Query you developed. If you are more familiar with SQL coding, you can use the EMC ControlCenter StorageScope Reference Guide to look up the names of the tables and columns when writing your own query. The SRM Views pages provide high-level summary information about all of the objects in the StorageScope Repository. They show basic configuration summaries, high-level storage allocation, “Top 10” storage issues, and trend graphs. Use these pages as the first step in your investigation to spot problem areas.
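To illustrate the kind of filter/sort/limit query a custom query or Snapshot performs, here is a sketch against an in-memory SQLite database. The table and column names are illustrative stand-ins, not the real StorageScope schema; consult the StorageScope Reference Guide for actual table and column names:

```python
import sqlite3

def top_filesystems(rows, limit=2):
    """Return the `limit` most-utilized filesystems from tabular data.

    Uses SQLite as a stand-in for the StorageScope Repository; the
    `filesystems` table and its columns are hypothetical examples.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE filesystems (host TEXT, name TEXT, used_pct REAL)")
    con.executemany("INSERT INTO filesystems VALUES (?, ?, ?)", rows)
    query = """
        SELECT host, name, used_pct
        FROM filesystems
        ORDER BY used_pct DESC
        LIMIT ?
    """
    result = con.execute(query, (limit,)).fetchall()
    con.close()
    return result
```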
  • 129. The Array views contain information about the properties, capacity, and usage of your storage arrays. The views also identify storage device capacity that can be reclaimed and the hosts to which array storage is accessible. The Thin Pools View provides an enterprise summary about the number and usage of thin pools in the environment. This view gives information very similar to the Arrays View regarding total, used, and subscribed capacity for individual thin pools and the arrays on which they reside. Trending for these metrics is also displayed in the Enterprise Summary View. The Host views contain information about the storage capacity (internal and external) accessible to your hosts, the database applications residing on those hosts, and host file-level details. The Connectivity views provide information about the properties and port usage of the Fibre Channel switches in your environment. The NAS views provide information about your NAS file servers, data movers and NAS file systems. Except for the trend graphs, all the data in the SRM Views is from the most recent Extraction, Translation, and Load process. Unlike most of the rest of the reporting features of StorageScope, the SRM Views are not customizable in any way.
  • 130. Other SRM Views
    * Hosts
      – 10 Hosts with Most Accessible Storage
      – 10 Hosts Using Most Accessible Storage
    * Switches
      – 10 Switches with Most Used Ports
      – 10 Switches with Most Free Ports
      – Switches with No Free Ports in 6 Months
    * NAS File Servers
      – 10 File Servers with Most Used File Systems
      – 10 File Servers with Least Used File Systems
    * NAS File Systems
      – 10 Most Used File Systems
      – 10 Least Used File Systems
      – File Systems Full in 6 Months
    * Thin Pools
      – 10 Most Utilized and Subscribed Pools
      – 10 Least Utilized and Subscribed Pools
    * Hosts Applications
      – 10 Most Used Databases
      – 10 Least Used Databases
      – Databases Full in 6 Months
    * Hosts File Level Storage
      – File Systems
        • 10 Most|Least Used
        • FS Full in 6 Months
        • 10 Most Dormant|Aged|Media|Temp Cap
      – Folders/Directories
        • 10 Most Dormant|Aged|Media|Temp Cap
      – Files
        • 10 Hosts Most Dormant|Aged|Media|Temp Cap
  • 131. ControlCenter Real-time Performance Monitoring
    * Statistics can be displayed for:
      – Symmetrix: Overall performance, Host Directors, Host Director Ports, Disk Directors, Devices, Disks
      – Switches: Switch ports
    * Performance statistics in approximately real time
    * Historical chart of data can be kept for up to seven days
    * Measures that exceed performance alert thresholds color-code the display

    The performance data is updated every few minutes according to the schedule in a data collection policy. This approximately real-time view is good for analyzing the current status of the components. As traffic rates change, you will quickly see the effects and might be able to pinpoint the source.
  • 132. Console Performance Monitoring Console Performance Data Collection Policies Overall Symmetrix Performance – Table
  • 133. Tabular performance numbers are displayed in either real-time or exponential format. Real-time values are simply the last number read from the agent. Display of real-time values is the most accurate view of the object’s performance, but the values may tend to change frequently. Exponential values are averaged with the previous few values displayed, with an exponentially decreasing weight on numbers further in the past. The exponential display shows the overall trend of the measure but not the actual value read from the agent. Exponential values tend to be more stable. Overall Symmetrix Performance – Chart Setting/Editing Thresholds You can view the performance of the Host Directors and their ports by selecting the Host Directors folder and all the sub folders. The user has chosen to display measures across the whole array, and also measures across sub-objects. The Host Directors folder has been displayed, showing the sum of the measures across all the host directors. Some individual host directors are being displayed (named “Front End Directors” in the display) to show the breakdown across these objects. Finally, some host director ports are being displayed to show the I/O on the ports.
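The exponential display described above is, in effect, an exponentially weighted moving average. A minimal sketch; the smoothing weight is chosen arbitrarily, since the actual factor ControlCenter uses is not stated here:

```python
def exponential_display(values, weight=0.5):
    """Exponentially weighted smoothing of a series of readings.

    Each displayed number blends the newest reading with the previous
    smoothed value, so older readings carry exponentially decreasing
    weight. `weight` is the fraction given to the newest reading
    (an assumption, not ControlCenter's documented factor).
    """
    smoothed = []
    current = None
    for v in values:
        current = v if current is None else weight * v + (1 - weight) * current
        smoothed.append(current)
    return smoothed
```

Note how a spike in the raw values moves the smoothed series only partway, which is why the exponential display looks more stable than the real-time display.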
  • 134. Symmetrix Front-end Director/Port Statistics You might combine a view like this with a Relationship view. Use the Relationship view to show which ports and directors your host is connected to, and then drag the objects into a Performance view to show the workload. Symmetrix Back-end Director/Disk Statistics Switch Port Statistics – Table
  • 135. Switch Port Statistics – Chart Historical Chart ControlCenter Performance Manager * Performance analysis tool useful for troubleshooting, tuning, and trend analysis of the enterprise storage environment * Data can be collected for Symmetrix, CLARiiON, HDS, NAS arrays, Fibre Channel Switches, Oracle Databases, and Hosts * Much more detailed measures than real-time Console performance monitoring—most complete Symmetrix performance analysis available * Archive viewing is strictly historical—all charts are static displays of past performance events * Two ways to view performance archives: – Performance Manager tool – Detailed performance archive viewing tool * Contains many default charts showing most commonly used performance measures * Custom charts can be created to focus on a particular application or group of devices – Automated reports – Selected Performance Manager reports in HTML format
  • 136. Archive Types Created by the WLA Daily Policy * Interval – Raw WLA Daily data; number of data points per hour determined by the WLA Daily Policy * Daily – One data point per hour (24 hours per archive) averaged from Interval data * Weekly – One data point per hour (24 hours per archive) averaged from one week of Daily data * Monthly – One data point per hour (24 hours per archive) averaged from one month of Weekly data A sample Workload Analyzer Daily collection policy is displayed above. As with most data collection policies, the Actions and Apply To tabs can be used to specify the frequency of the collection and the managed objects it pertains to. An administrator has the flexibility to set different collection schedules for different managed objects of the same type.
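The roll-up from Interval to Daily archives described above (one data point per hour, averaged from the raw interval data) can be sketched as:

```python
from statistics import mean

def roll_up(interval_samples, per_hour):
    """Average raw interval samples into one data point per hour.

    `interval_samples` is a flat list of readings in time order;
    `per_hour` is the number of interval points collected each hour
    (determined by the WLA Daily policy). The same averaging applied
    to Daily data over a week or month yields the Weekly and Monthly
    archives.
    """
    return [
        mean(interval_samples[i:i + per_hour])
        for i in range(0, len(interval_samples), per_hour)
    ]
```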
  • 137. Data Collection Policy – Daily Collections Getting Revolving Collections An Analyst data collection policy is used to schedule a future Analyst collection. Most of the settings are very similar to the other policies, but the method used to set the schedule is not immediately obvious. None of the timing features at the bottom of the Actions tab are directly editable, making it appear that only the frequency can be set here.
  • 138. Performance Manager * Displays archives in graphical format * Many useful graphs included as default views * Custom graphs can be created by combining any measures * Contains interface for customizing automated HTML reports * Can be launched directly from the Console if both are installed on the same host Performance Manager Communication * Communicates directly with ControlCenter Repository to determine location of WLA Archiver Agents * Retrieves list of archives directly from Archiver * Loads chosen archive to the local system for viewing (temporary copy only) * Archive can be saved locally if needed
  • 139. Performance Manager – Accessing Data Views Tab Creating New Data Views
  • 140. Graph Data View Metrics Tab Links View
  • 141. Configuration Tab Groups in Performance Manager Saving Data and Graphs
  • 142. Automated HTML Reports * Created and stored automatically by the WLA Archiver Agent * Published by the EMC ControlCenter Performance Manager Reports service on the WLA Archiver Agent host * Viewed with a normal web browser – Good for quick looks at common performance features – Publishes performance information to a wide audience without requiring widespread Performance Manager installs * Configurable – Add, remove graphs – Set thresholds to color reports according to severity of performance issue – Thresholds can also be used to trigger the creation of a report Automation Jobs Automation Reports Login and Selection Page
  • 143. Automation Jobs – Reports Storage Allocation Steps Storage provisioning can be a complex process. There are many variables that come into play, and many steps that must be taken to successfully incorporate new storage into the management umbrella of an operating system and make it usable by a database or application. Before we start to analyze a planning task, we need to understand everything that is involved with allocating storage. The above image breaks the provisioning steps down into their basic parts. Below we discuss what is happening at each phase: Array Management * Configure new volumes – This stage can be quite complex and includes such steps as carving off logical volumes from physical disks and combining multiple logical volumes into striped or concatenated meta volumes. The complexity does not lie in the steps, but in the thought process behind them: for example, choosing the device type and emulation, and calculating the performance impact on cache, director, and device resources. * Assign volumes to ports – The main considerations here include ensuring that the devices are assigned to a sufficient number of ports to support performance requirements, confirming that the port flags are properly set according to the host that will use the devices, and ensuring that you do not exceed any LUN limits supported by the Director.
  • 144. Storage Array Management * Allocate volumes to Host – If Symmetrix Device Masking is being used, you will need to configure the VCM database to allow the Host HBA to have access to the mapped devices. * SAN Zoning – You must configure the SAN zoning to make sure that the Host HBAs have access to the array director ports where the devices are mapped. This step could include editing or creating new zones, updating the zone set, and activating the zone set to apply those changes. Host Storage Management * Volume Management – This stage includes such tasks as editing configuration files, rescanning buses for the device, incorporating the new device into PowerPath control, and editing volume groups. * File System Management – This step involves enabling existing filesystems to recognize the new storage that is now available. * Database/Application Management – The last step would be making the database or application capable of using the new filesystem space. The most common example here would be extending database tablespaces. Adding New Storage to a Server * You are tasked with adding storage for a new application to a server in your environment. This implementation has the following requirements – This new application requires at least 3GB of storage – Storage must be striped across at least 8 array devices in order to support the expected performance demands – Storage must also be RAID-1 protected – The storage must be accessible via at least two distinct data paths with full redundancy * As a storage administrator, what information must you gather during the planning phase of this rollout? * What planning must be done prior to the actual implementation? Implementing Local Replica Solution * As part of a new backup plan, you have been asked to implement a local replication solution for an existing application server. A new server has been purchased in order to act as a centralized backup host for several applications. 
* As a storage administrator, what information must you gather during the planning phase of this implementation?
* What planning must be done prior to the actual implementation?

Manual Storage Allocation

Storage provisioning can be a complex process. There are many variables that come into play and many steps that must be taken to successfully incorporate new storage into the management umbrella of an operating system and make it usable by a database or application. The above image breaks the provisioning steps down into their basic parts.

AutoPathing and Storage Provisioning Services
  • 145. As we’ve seen, the provisioning of storage can be a constant challenge for administrators in a storage area network (SAN) environment. Allocating storage, whether for a new server or for an existing server, is often complex, time-consuming, and error-prone. Expert knowledge of multi-vendor storage arrays, SANs, and host system administration is required. Often this expertise is lacking, since skilled and trained personnel are in short supply. Even when skilled staff is available, a straightforward request for additional storage can take days to fulfill, since storage, SAN, and host-specific expertise often crosses different organizational boundaries within a data center. ARM’s Storage Provisioning Services (SPS) provides a complete end-to-end solution by automating allocation actions from array to SAN and host management with a few mouse clicks. It automates the search for unallocated disks and the selection of the appropriate available capacity based upon predefined policies. It then performs the complex tasks associated with storage provisioning in a heterogeneous SAN, including LUN masking, zoning, multipathing, and extension of file systems or volume groups. As a result, SPS enables users to provide “just-in-time” storage provisioning to meet their Service Level Agreements. Not only does SPS expedite the entire storage provisioning process, it also reduces errors, as staff with less experience can allocate storage based on business policies and best practices. SAN Manager provides automation to perform zoning and LUN masking in one step using the AutoPathing wizard.
Storage Provisioning Services Objective
* Challenges:
– Respond quickly to end-user requirements for additional storage
– Reduce administrative intervention for storage-based alerts
– Remove storage quickly
* Automated Resource Manager
– Quick and easy policy-based storage allocation and deallocation
– Automation based on your business rules and policies
* SAN Manager
– Quick and easy zoning and LUN masking using the automation wizard
* Meet time-to-provision, availability, and performance goals.

SPS allows a storage administrator to quickly meet the challenges of supporting a highly available storage environment. Using ControlCenter Automated Resource Manager, administrators can now quickly respond to end-user storage requirements by using easy-to-manage, policy-based allocation tasks.
  • 146. It also allows you to reduce overall administrative time and costs by automating the most common management processes through the use of pools, policies, and wizards, all based on your business rules, policies, and processes.

SAN Manager AutoPath
* Automates the allocation of devices to hosts
– Creates required zones
– Creates the required number of paths requested
– Activates the updated zone set
– Performs LUN masking
– Symmetrix – devices must be mapped to front-end ports
– CLARiiON – devices can be selected from the Mapped or Unmapped folders
– AutoPath can be used to allocate devices which have already been allocated to other hosts
* Requires a SAN Manager license

Starting the AutoPath Wizard

Launching the AutoPath wizard is straightforward. Simply select the mapped devices that you would like the host to have access to and the HBA that you would like to access those devices across. Next, right-click on one of the objects and from the dropdown list select Allocation > AutoPath. At this point you need to complete the wizard process presented on the following pages.

Note: When zoning Symmetrix devices, you must select devices that are currently mapped to front-end ports, because AutoPath will not perform SDR changes. CLARiiON devices, however, can be selected from the Mapped or Unmapped folders. AutoPath requires a default zoning policy to be associated with each SAN fabric that will be affected by the AutoPath actions. A warning, as shown in step 3, will be displayed if a default zoning policy does not exist for the affected fabrics.
  • 147. AutoPath Wizard

The Review Allocation Task screen gives you your last chance to look over the changes before executing them. Using the Specifications tab you can review the objects that are to be affected; here you confirm the devices and the HBAs involved. The Path Details tab allows you to do one last review of the paths between those objects that are to be defined. Once completed, you can choose to execute now or save these changes in a task list to be executed later.

SPS Overview
* Enable “just in time” storage provisioning
* Allocate storage based on business policies
* Capture storage provisioning best practices to enable less experienced personnel to allocate storage
* Array support – CLARiiON, Symmetrix, HP StorageWorks (HSG80)
* Host support (reconfiguration) – Windows, Solaris, HP-UX, IBM AIX, and VMware ESX Server 3.0.1
  • 148. SPS Steps - Preparation
1. Determine a strategy for pool design
2. Determine a strategy for allocation policy definitions
3. Storage pools must be created and populated
4. Allocation policies must be defined
5. Users must be created and permissions assigned

When planning and preparing for an SPS rollout, you should follow these steps.

Determine a strategy for pool design – This step is discussed in detail in lesson 4; the basic concept is that you must decide how you want to split up your storage devices so that SPS tasks can pull storage and allocate it based on business processes.

Determine a strategy for allocation policy definitions – As with the pool strategy, you need to decide not only how your storage allocation policies will be created, but how they will be used for different provisioning tasks. Often these decisions are a direct result of the pool design. For example, if the pools are designed by department, then the finance hosts will use allocation policies designed to pull devices from a pool reserved for finance. If the pools are designed based on protection, then hosts that need mirrored protection would use allocation policies pulling from pools made up of mirrored devices.

Storage pools must be created and populated – Here, the administrator must go through the manual task of building and populating storage pools based on the design strategy defined and adopted in step 1.

Allocation policies must be defined – Again, the administrator must go through the manual task of building the policies that will be used when creating storage allocation tasks.

Users must be created and permissions assigned – The last step is the assignment of permissions to those users who will have the right to create allocation tasks. Though this step may seem simple and perhaps obvious at first glance, this is where SPS can truly change how businesses perform storage allocation.
Most application or database administrators currently request additional storage via e-mail, phone calls, or face-to-face discussions. If a storage administrator decides to grant the storage request, they must then provision that storage from scratch. With SPS you can give DBAs and application administrators the right to create SPS tasks that specify not only how much storage they want, but which devices and which paths they will use. You can then have the storage administrator approve, deny, or edit those tasks.

SPS Steps - Execution
1. Launch the SPS Allocation Wizard
a. Select how much storage
b. Which allocation policy
c. Which host
2. Create tasks and task lists
3. Execute task lists

When it is time to actually allocate storage to a host or filesystem, the process is broken down as follows:

Launch the SPS wizard – Launching the wizard is as simple as right-clicking on a host and drilling down to Storage Allocation. Once the wizard has launched, you will need to provide it with information regarding how much storage you would like, which allocation policy you would like to use, and which hosts to include (this could be multiple hosts, and might include backup hosts if BCVs are to be mapped as well).
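The create-then-execute flow above can be sketched as a small batching model: tasks are saved into a named task list and run only when the list is executed, for example during an off-peak window. This is purely illustrative; the classes and names are hypothetical, not the ControlCenter API:

```python
class Task:
    """A saved provisioning request with a deferred action (illustrative)."""
    def __init__(self, name, action):
        self.name, self.action = name, action

class TaskList:
    """Batches tasks for deferred, grouped execution."""
    def __init__(self, name):
        self.name, self.tasks, self.executed = name, [], []

    def add(self, task):
        self.tasks.append(task)          # saved now, run later

    def execute(self):
        # All tasks in the list run together, e.g. in a maintenance window.
        for t in self.tasks:
            t.action()
            self.executed.append(t.name)
        self.tasks.clear()

log = []
tl = TaskList("wf_readytoexecute")
# Change-control request IDs in the task names make requests easy to trace:
tl.add(Task("CR-1042 payroll +100GB", lambda: log.append("allocate 100GB")))
tl.add(Task("CR-1043 finance +20GB", lambda: log.append("allocate 20GB")))
tl.execute()
print(tl.executed)
```

The point of the model is that saving a task and executing it are separate steps with separate permissions, which is what lets junior staff prepare work for senior staff to approve and run.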
  • 149. Create tasks and add to task lists – Once you have specified what you want allocated, you finish by saving this request as a task and placing it into a task list. Task lists allow you to create multiple changes ahead of time and then execute them later during predetermined off-peak hours or scheduled downtime.

Execute task lists – Once you are ready to commit the changes, you simply execute the task list, and all the tasks within it will be run.

Storage Pools

Storage pools are groups of storage devices in which SPS will search for available storage. The first step to using SPS is for the administrator to categorize storage devices and create storage pools for each category. Storage pools can be based on device ownership, device performance, device type, devices assigned to specific geographic locations or data centers, or any other logical grouping that a customer would need. Once the pools are created, the administrator populates them with the appropriate devices or other nested pools. Nested pools can be used to arrange the pools into any format that makes sense in the user environment. For example, a user may choose a format such as Data Center/Departments/Applications or Location/Application. The contents of a pool must be storage, but can span arrays and array types. For example, an administrator can create a storage pool for a given business unit, such as Development. This particular business unit might only use HP StorageWorks storage. After storage pools are created, the administrator can create allocation policies that use these storage pools.

Storage Pools - Rules
• A device can belong to only one pool at a time.
• A pool can contain either devices or other storage pools.
• A storage device can be moved between pools.
• A storage device remains in a pool, even after allocation.
• A pool can contain devices across multiple arrays and array types.
• A pool can contain specific devices or entire arrays.
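The nesting and the one-pool-per-device rule above can be sketched with a tiny data model. The class, device IDs, and pool names are all hypothetical, chosen only to illustrate the rules, not to mirror the ControlCenter repository:

```python
class StoragePool:
    """Nested storage pools; a device may belong to only one pool at a time."""
    _owner = {}  # device -> owning pool, enforces the one-pool rule

    def __init__(self, name):
        self.name, self.devices, self.children = name, [], []

    def add_pool(self, pool):
        self.children.append(pool)   # pools can nest, e.g. Location/Application
        return pool

    def add_device(self, dev):
        if dev in StoragePool._owner:
            raise ValueError(f"{dev} already belongs to pool "
                             f"{StoragePool._owner[dev].name}")
        StoragePool._owner[dev] = self
        self.devices.append(dev)

    def all_devices(self):
        """Search this pool and every nested pool, the way SPS searches."""
        out = list(self.devices)
        for child in self.children:
            out.extend(child.all_devices())
        return out

dc = StoragePool("DataCenter")
fin = dc.add_pool(StoragePool("Finance"))
fin.add_device("Symm-000190:0A1")   # devices may span arrays and array types
dc.add_device("CX-123:LUN7")
print(dc.all_devices())             # ['CX-123:LUN7', 'Symm-000190:0A1']
```

Attempting to add `Symm-000190:0A1` to a second pool raises an error, which is the behavior the first rule above requires.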
To create a storage pool, use the following steps:
1. Open the Storage Administration folder.
2. Right-click Storage Pools and select New > Storage Pool.
3. Rename the new storage pool folder to a descriptive pool name.
4. Add devices to the new pool by dragging and dropping the devices from the Console to the pool.
  • 150. Storage Pools – Creation

Allocation Policies

SPS simplifies the provisioning process by enabling customers to provision storage based on business rules. These rules are specified within ControlCenter allocation policies. Allocation policies contain all of the general criteria for storage allocation requests. For example, an allocation policy could be created for an OLTP application that requires premium-level storage. This might mean that the application is only allocated RAID 1 storage with four paths between the host and array, with remote replication support. Another allocation policy could be created for telemarketing file servers that require lower-level storage. This might mean that these file servers would only be allocated RAID 5 storage with two paths between the host and array, with no replication support. Regardless of how you choose to create your policies, they all must be configured with the following information:
• Replica class
• Storage pool
• Type of storage
• RAID level
• Number of paths

Allocation Policies – Creation
  • 151. Allocation Policies – Policy Name and Options
• Storage Pool — Select the group of storage devices within which SPS will search for devices for each of the replicas.
• Storage Type — Select the type of storage to be selected from the specified storage pool.
• RAID Level — Select the array-based protection level for the LUN to be chosen. Options, such as RAID 1 and RAID 5, vary depending on the storage type selected.
• Number of Paths — Select the number of paths to be created between the server and storage. For Symmetrix and CLARiiON systems, you can specify up to 32 paths. On HP StorageWorks arrays, specify one path, since multipathing is not supported in this release.
• Mapped Devices Only (Symmetrix only) — Selected when SPS should only search for devices that are already mapped to front-end array ports. This feature only applies to the Symmetrix system.
• Zoned Storage Only — Selected when SPS should only search for devices on ports that are already zoned. This constraint is used to force new devices to come from arrays that are already in use, or arrays that are prepared for use by pre-zoning them to the host.
• Disable Host Actions — Selected when SPS should not run host commands. This can be used when Storage Administration and Host Administration are separate, and the host functions should not be done automatically as part of the automation process.
• Create New Storage Group (CLARiiON only) — Selected when SPS should create a new storage group for CLARiiON device masking if a group does not currently exist. If this box is not selected, the device will only be allocated if the target host is currently a member of an existing storage group.
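Conceptually, an allocation policy is a set of filters applied to the devices in the selected storage pool. A minimal sketch, assuming hypothetical device records and field names (not the actual ControlCenter schema), looks like this:

```python
# Hypothetical device records from a storage pool; the fields are illustrative.
devices = [
    {"id": "0A1", "raid": "RAID1", "type": "STD", "mapped": True,  "gb": 8.4},
    {"id": "0A2", "raid": "RAID5", "type": "STD", "mapped": True,  "gb": 8.4},
    {"id": "0A3", "raid": "RAID1", "type": "STD", "mapped": False, "gb": 17.0},
]

def match(devices, policy):
    """Return the pool devices satisfying an allocation policy's criteria."""
    return [d for d in devices
            if d["raid"] == policy["raid"]
            and d["type"] == policy["type"]
            # 'Mapped Devices Only' constrains the search to mapped devices:
            and (not policy.get("mapped_only") or d["mapped"])]

# A premium OLTP-style policy: RAID 1 STDs, mapped to front-end ports only.
oltp_policy = {"raid": "RAID1", "type": "STD", "mapped_only": True}
print([d["id"] for d in match(devices, oltp_policy)])   # ['0A1']
```

Relaxing `mapped_only` widens the candidate set to include `0A3`, which mirrors how loosening a policy option changes what SPS can select.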
  • 152.
• Add Storage to Host(s) — The host(s) selected in the navigation tree when the wizard was launched. Additional hosts can be added via drag and drop from the Console. If multiple hosts are in this field, the same storage will be allocated to each of the hosts.
• Storage Policy — Select the allocation policy that applies to the specified host(s) and/or the application to run on the host, based on business requirements. The allocation policy editor/creator can also be launched from this dialog box by clicking Edit or New. The default allocation policy is shown.
• Amount to Allocate — Indicate the amount of storage needed. Available options are at most “x” GB, at least “y” GB, and a range.
• Requested # of Devices: Range — Indicate a range for the number of devices that SPS should return for the allocation request. You can use these parameters to specify a single large volume (by specifying a range of 1 to 1) or a greater number of smaller volumes (by changing the “from” value to the number of volumes desired). Multiple devices might be selected if you plan to place the devices in a host logical volume group, for example.
SPS searches for storage devices and paths that will satisfy the selected allocation policy and additional criteria. SPS displays the results of the query in the Details of Proposed Path window.
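The interaction between "Amount to Allocate" and the requested device-count range can be sketched as a simple selection routine. The greedy logic here is an illustrative stand-in for whatever SPS actually does internally:

```python
def pick_devices(candidates, at_least_gb, dev_range):
    """Greedily pick devices totalling >= at_least_gb within a device-count
    range (lo, hi). Returns None if the request cannot be satisfied."""
    lo, hi = dev_range
    chosen, total = [], 0.0
    for gb in sorted(candidates, reverse=True):   # try largest devices first
        if len(chosen) == hi:
            break
        chosen.append(gb)
        total += gb
        if total >= at_least_gb and len(chosen) >= lo:
            return chosen
    return None

# The earlier scenario: at least 3 GB striped across at least 8 devices.
print(pick_devices([0.5] * 12, 3.0, (8, 12)))   # eight 0.5 GB devices
# A range of (1, 1) forces a single large volume instead:
print(pick_devices([17.0, 8.4], 10.0, (1, 1)))  # [17.0]
```

Returning `None` corresponds to the wizard's "no suitable devices found" dialog, where you would step back and relax the size or count constraints.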
  • 153. If the policy being used for this allocation task specifies a local or remote replica, the dialog on the left will appear. Here you can choose which host the BCV or R2 will be mapped to for recovery or backup purposes. It is important to note, however, that this step is not mandatory. In the left window, select the local (or remote) replica. In the middle window, select the host that you would like the replica to be mapped to. Once you have made your selection, click the Add button and confirm that the primary device host and replica device host are correct before clicking the Next button to move on. After selecting replica hosts, click Next. The wizard will then begin searching for available devices and paths. If no suitable devices are found in the storage pool specified in the selected allocation policy, the dialog on the right will appear. This dialog is useful in finding out why no suitable devices were found. In this example, we asked for at least 0.5 GB, and not enough BCV devices were available to provide 0.5 GB; the log, however, shows that one BCV of 449 MB was available. So we could step the SPS wizard back and ask for “At Most” 0.5 GB instead.

Device Group Selection

If replicas were assigned using this task, you have the ability here to specify which device group you would like to make these devices a member of. If you are performing this provisioning task for the purpose of increasing the storage for an existing application, simply add them to the device group that the application’s current devices are members of. If, however, you are allocating devices for a new application or filesystem, you might need to create a new device group prior to running the task wizard. The reason for this is that on this screen you cannot create new device groups, only assign devices to existing ones.
  • 154. Review Allocation Task

Execute Task Later
• Allows tasks to be batched
• Multiple requests in a task list get executed together
• Separate permissions to save and execute

With SPS, provisioning requests are processed by creating storage allocation tasks. When you create a task, you save it as part of a task list. You can group several tasks together in a task list. The advantage to this is seen during execution of the tasks. SPS groups similar actions within the tasks; for example, all Symmetrix disk reallocation (SDR) actions (mapping volumes to Symmetrix front-end ports) will be grouped together and executed at the same time for tasks within the same task list. In addition, junior staff members can generate tasks to be reviewed and approved/executed by more senior staff. Naming of the task lists can be used effectively to organize work:
– wf_create: create all the tasks under this task list
– wf_review: move tasks to this task list once the change control board approves them
– wf_readytoexecute: move tasks to this task list when they are ready to execute
– wf_execute: move tasks to this task list once they are executed
Put the change control request ID in the task name; it will be much easier to tie the requests to ControlCenter tasks.

Reserved Volumes

Once provisioning tasks are saved, the volumes and their LUN IDs are reserved. This prevents the same volumes from being selected by another SPS request. The volumes are unreserved after the task is executed. Volumes are also unreserved if a task is deleted. To review reserved volumes, select the storage pools and select Show Reserved Volumes in the right-click menu. There are multiple ways to configure storage, including SPS and low-level commands within ControlCenter, array-specific tools, and SYMCLI. SPS is unaware of these changes until a task list is executed. Prior to execution, a process is run to verify that reserved devices have not been allocated by other tools.
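The reserve-on-save, release-on-execute-or-delete behavior described above amounts to a simple reservation registry. A minimal sketch, with hypothetical names (this is not how ControlCenter stores reservations):

```python
class ReservationRegistry:
    """Reserve volumes when a task is saved; release on execute or delete."""
    def __init__(self):
        self.reserved = {}   # volume -> owning task name

    def reserve(self, task, volumes):
        conflicts = [v for v in volumes if v in self.reserved]
        if conflicts:
            # Another pending task already holds these volumes.
            raise ValueError(f"already reserved: {conflicts}")
        for v in volumes:
            self.reserved[v] = task

    def release(self, task):
        # Task executed or deleted: its volumes become available again.
        self.reserved = {v: t for v, t in self.reserved.items() if t != task}

reg = ReservationRegistry()
reg.reserve("payroll_project", ["0A1", "0A2"])
try:
    reg.reserve("other_task", ["0A2"])   # rejected while the task is pending
except ValueError as e:
    print(e)
reg.release("payroll_project")
print(reg.reserved)   # {}
```

The verification step mentioned above (rebuilding a task list before execution) exists precisely because tools outside this registry, such as SYMCLI, can allocate a reserved device behind its back.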
To determine if there is a problem with some reserved volumes before your normal maintenance window, follow these steps:
1. Right-click the target task list.
2. Select Rebuild Task List from the menu.
3. The task list is rebuilt automatically.
If there are problems, you will have time to edit the appropriate provisioning task in order to select a different storage volume.
  • 155. Execute Task List

To execute a task list, simply right-click on the task list within the tree panel and select Execute Task List. After confirming the command, you can view the list of steps and the status of each task within the list as it executes. If an error occurs during execution of a task list, you can be notified by a ControlCenter alert or by viewing the properties of the task list. If an existing Symmetrix configuration operation or Optimizer process currently holds a lock on the Symmetrix, the task list will fail. To check for locks on a Symmetrix, use the following SYMCLI command:
symcfg -sid nn -lock -lockn ALL list
You can then address the error (normally by waiting for the other task to complete) and execute the task list again by right-clicking on the task list and selecting Re-Try Tasklist.

Deallocation policies define the actions the Storage Provisioning Services wizard should perform when deallocating storage. Storage administrators can create different policies for storage with different needs, or to control deallocation tasks performed by junior administrators. Use the Storage Provisioning Services wizard to deallocate storage from: hosts; host devices; host adapters; host ports; unidentified ports; storage ports; storage devices.
  • 156. Deallocate Storage

Extend a File System: In the situation where a host volume group has additional free space, ARM can automatically extend the file system using the storage that is already allocated to the host volume group. Alternatively, this may also be done manually from the ControlCenter Console. In the situation where a host volume group has no additional space, SPS can increase the size of the existing volume group by selecting additional LUNs and extending the file system. Please check the Support Matrix for the supported OS platforms, filesystem types, and array devices on which SPS will extend file systems. SPS determines the attributes of the storage currently being used and, in the background, will propose a policy to be applied to the new storage for the file system extension.

Other SPS functions include:
* Extend Volume Group
* Extend Logical Volume
  • 157. Storage Provisioning Services CLI
* Allows the administrator to integrate the Storage Provisioning Services functions with third-party software such as a workflow system.
* Installed from the ControlCenter CDs
* SPS CLI examples:
– SPS allocate -host seawin-01 -allocpolicy finance -tasklist wf_create -task payroll_project -atleast 100
– SPS extendfs -host losbe089 -fs /dev -atleast 10
– SPS extendvg -host losbe089 -vg vg00 -atleast 20
– SPS extendlv -host losbe089 -vg vg00 -lv lv00 -atleast 200
* Additional SPS CLI commands are available to:
– Manage tasks and task lists
– Research hosts, filesystems, volume groups, and logical volumes
* Refer to the Storage Provisioning Services Command Line Interface Reference Guide for more details.
  • 158. Storage Pool Design is Critical
* There are many possible options
* No one option is necessarily better than another – it all depends on the customer environment and their needs
* We will examine the pros and cons of just 5 options to illustrate the point:
– Line-of-Business-based Storage Pools
– Geographically-based Storage Pools
– SAN-Fabric-based Storage Pools
– Array-based Storage Pools
– Service-Level-based Storage Pools

The next several slides attempt to show why the design of Storage Pools is critical for the successful implementation of the Storage Provisioning Services capability of EMC ControlCenter. First, there are many options from which to choose. Here we list only 5 possible ways to think about organizing and setting up Storage Pools. There are many others, and they could also be used in any number of combinations if so desired. So the first reason to focus on Storage Pool design is the large number of alternatives you can choose from. Second, each device can be placed in one, and only one, Storage Pool. The granularity of the Storage Pools that are chosen is going to impact the ease or difficulty with which this can be accomplished. If you have a Storage Pool that is to include every device in a specific storage array, it is easy to drag and drop the array itself into the pool, and the setup is done. However, if the Storage Pools are intended to contain only some of the devices from only some of the storage arrays in your customer environment, each device or device range will have to be separately dragged and dropped into the chosen Storage Pool – clearly more effort, and more error-prone too. Third, changing from one Storage Pool design to another should be avoided. Moving the devices or arrays to a different pool organization is likely to be time-consuming and also error-prone.
It is much better to have thought about the ‘right’ or most appropriate Storage Pool strategy before the implementation, so as to minimize the chances of this happening. Changing the Storage Pool design will also necessitate changing the Allocation Policies, adding even more administrative workload that could have been avoided by good planning.
  • 159. The designs presented here are not the only possible Storage Pool design philosophies. There are many others, of which we have chosen just these 5 to share and discuss with you. Consider them as indicating some of the reasons that every Customer should spend time up-front identifying relevant alternative Storage Pool designs and then determining their pros and cons, in order to make the most informed and long-lasting set of decisions about the Storage Pool design that will be most appropriate to deploy. This first design is based on setting up Storage Pools which are aligned with specific Lines of Business (LoB). Four fictitious examples are given for illustration. The intention of this design is to place into each Storage Pool only those storage array devices which a specific LoB is expected to need to be provisioned over the next planning period. For example, the Sales Storage Pool will contain the specific storage array devices the Sales LoB will need to use once its growth demands more than the existing allocation of storage devices. This includes storage array devices of the right quantities and with the correct characteristics (e.g. STDs, BCVs and/or R1s for a Symmetrix array) to service the future provisioning needs of the Sales LoB. Clearly this approach to Storage Pool design is most likely to appeal, at least initially, to a Customer that is focused on the different LoBs that it has to service. The suggestion in the title of this slide is that this scenario might be most appealing or relevant for environments that have a single storage array or are restricted to a single geographic location – but in no way should that be considered a hard and fast rule (it is more complex than that, and the title is only trying to suggest an additional flavor, concerned with the simplicity of the storage environment, that we wish you to consider along with the pros and cons of this design).
Before you turn to the next page in the notes, please try to think of reasons why this design approach might be good and reasons why it might be bad, depending upon individual Customer circumstances, of course. When you have done that, move on and review the sample set of pros and cons we provide you with on the next slide.
  • 160. This next design is based on setting up Storage Pools which are aligned to specific storage arrays. Several examples are given for illustration. The intention of this design is to have just one array per Storage Pool and to place all storage devices from an array in its designated Storage Pool. The diagram above depicts several array sub-pools within an array-type pool, within a location Storage Pool. The Storage Pools would contain all storage array devices, with any technical characteristics (e.g. STDs, BCVs and/or R1s for a Symmetrix array), that reside in the designated array. If provisioning requires storage devices from a specific array, it is easy to identify the correct Storage Pools and the relevant storage management policies that would have to be set up to support this. Clearly this approach to Storage Pool design is most likely to appeal, at least initially, to a Customer that is focused on managing the arrays themselves. For this design, you have to decide in advance from which storage array you wish to provision the storage capacity. The suggestion in the title of this slide is that this scenario might be most appealing or relevant for environments that have to satisfy multiple service levels across LoBs, locations and/or SANs –
  • 161. but in no way should that be considered a hard and fast rule (it is more complex than that, and the title is only trying to suggest an additional flavor, concerned with the simplicity of the storage environment, that we wish you to consider along with the pros and cons of this design). Before you turn to the next page in the notes, please try to think of reasons why this design approach might be good and reasons why it might be bad, depending upon individual Customer circumstances, of course. When you have done that, move on and review the sample set of pros and cons we provide you with on the next slide. This last design is an outgrowth of the previous array-based Storage Pool design. It takes the definition of a Storage Pool to a lower level, i.e. based on the different service level characteristics that can be provided from the different storage arrays and storage array types. Six fictitious examples are given to illustrate lower-level sub-pools within an array Storage Pool, but they are likely only to represent a subset of any real Customer environment that used this type of Storage Pool design.
  • 162. The intention of this design is to place into each Storage Pool only those storage array devices that have the same stated technical characteristics, e.g. for a Symmetrix: mirrored, high-performance STD devices, or unprotected high-performance BCVs, or striped meta STDs, or striped BCV metas, or unprotected R1s. This means only storage devices with the appropriate technical characteristics have to be placed in the appropriate service-level-based Storage Pool. Clearly this approach to Storage Pool design is most likely to appeal, at least initially, to a Customer that wants explicitly to provision storage devices with known and different service level characteristics (performance, availability, security, etc.). Such Customers are likely already to have a strong awareness of, and focus on, the different service levels required for the data their LoB applications need to access, and to be oriented to service management to meet agreed business needs for LoB application service levels. The suggestion in the title of this slide is that this scenario might be most appealing or relevant for environments that have a focus on provisioning storage that has different service levels, whether from one location or one array or not – but in no way should that be considered a hard and fast rule (it is more complex than that, and the title is only trying to suggest an additional flavor, concerned with the simplicity of the storage environment, that we wish you to consider along with the pros and cons of this design). Before you turn to the next page in the notes, please try to think of reasons why this design approach might be good and reasons why it might be bad, depending upon individual Customer circumstances, of course. When you have done that, move on and review the sample set of pros and cons we provide you with on the next slide.
Symmetrix Management Console (SMC)
* Independent, lightweight, web-based application
– Simple and easy-to-use browser interface
– Hosted on a small Windows/Linux server
– Enables remote access and management from nearly any client
* Enables access, configuration, and basic operation of Symmetrix arrays
  • 163. – Supports all configuration capabilities of Solutions Enabler/CLI
* Supports multiple generations of Symmetrix
– Enginuity version 5x68 and newer
* Provides day-one support of new Symmetrix features when released
* Adding full-feature ControlCenter does not require management data to be migrated.

SMC Functionality
* Access Management
– Manage users, permissions/roles
– Symmetrix Access Controls
* Configuration Management
– Create devices, map and mask devices, create device groups, set Symmetrix attributes
* Replication Management
– TF/Clone, TF/Mirror, TF/Snap, SRDF/S, SRDF/A, SRDF/DM, Open Replicator, Optimizer
* Alerts and Monitoring
– Monitor device status, device attributes, operations status
– Monitor array alerts
  • 164. SMC Local User Accounts
* The installation dialog includes the specification of the SMC Super User account
– Default user "smc" with password "smc"
* The password must be changed to avoid unauthorized access!
* It is recommended to create other Administrators in place of the Super User
– A different Super User name can be specified in the installation dialog
* The specified user must have a valid login on the SMC Server host
* Authentication is done by the SMC Server host OS
* Menu: Administration > Local User Accounts
– List existing usernames in a table
– Add new user information (name and password)
– Edit user information (enter a new password)
– Delete selected user account(s)
  • 165. Permissions and Roles * Menu: Administration > Permissions – List Symmetrix ID, Username, and Role entries – Add new permission entry – Edit permission entry (change role) – Delete permission entry * Five Roles – Administrator, StorageAdmin and SecurityAdmin combined – StorageAdmin, full Symmetrix control – Monitor, view Symmetrix only, no manipulation of array – SecurityAdmin, add users and set permissions, no Symmetrix access – None, no access other than login (the default) * Super User – Created during installation – Only user with permission to set Symmetrix Preferences and LDAP-SSL – smc/smc username/password is a potential security hole if unchanged
  • 166. Let’s explore the creation of FBA Meta Devices. Right-click on a Symmetrix and choose FBA Meta Device Configuration, then choose Form Meta. In the example shown, the reservation filter is used to show only those devices reserved by the current SMC user. Four devices are listed in the unmapped devices list; Add All moves them to the Meta Member column. The meta head can then be specified. As with all configuration tasks, click on the Add to Config Session List button. The actual commit of this action is done from the Config Session view. When creating a meta, you can optionally use the “Auto Select” feature, which lets you specify only the number of metas, the number of meta members per meta, and the meta heads. The Symmetrix microcode automatically chooses the meta members from the available pool of unmapped devices.
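The same meta formation can be done from the Solutions Enabler CLI with symconfigure. A minimal sketch follows; the Symmetrix ID and device numbers are illustrative only and must correspond to unmapped devices on your array. These commands require a live Symmetrix, so treat this as a template rather than something to run as-is:

```shell
# meta.cmd - form a striped meta: 0A0 becomes the meta head,
# 0A1-0A3 become the meta members (all device numbers illustrative)
cat > meta.cmd <<'EOF'
form meta from dev 0A0, config=striped;
add dev 0A1:0A3 to meta 0A0;
EOF

symconfigure -sid 1098 -f meta.cmd preview   # validate the session first
symconfigure -sid 1098 -f meta.cmd commit    # then commit the change
```

As in the SMC Config Session view, running preview before commit lets the array validate the request without applying it.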
  • 167. Device Groups * SRDF and TimeFinder operations in SMC require Device Groups * All device groups in the default symapi_db.bin or GNS (if active) are available in SMC * Device Groups can be created, deleted and renamed, and devices can be added and removed as needed * SRDF and TimeFinder operations are invoked by selecting a device group and using the Replication option * Device Masking operations can also be invoked by selecting a device group * Devices can belong to multiple Device Groups The device types supported in SMC are shown on the slide. The STD devices can be Regular (non-SRDF devices), SRDF R1s, or SRDF R2s. Devices can be added as Clone Target devices on the local or the remote array, and remote Virtual devices can be added. The device group dialog displays only valid devices when choosing a device type. For instance, when adding remote Clone Target devices, only eligible devices configured on the remote array display in the dialog.
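The same device-group maintenance can be done with SYMCLI. A sketch, assuming a group named DemoDG already exists; the group name and device numbers are illustrative, and the commands require a live Symmetrix:

```shell
symdg list                        # show all device groups in symapi_db.bin
symdg show DemoDG                 # list members, device types and pair states
symld -g DemoDG add dev 010       # add a STD device to the group
symld -g DemoDG remove DEV001     # remove a member by its logical name
symdg rename DemoDG ProdDG        # rename the group
symdg delete ProdDG -force        # delete the group (devices are unaffected)
```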
  • 168. Device Types in Device Groups
  • 169. SMC Device Group creation is a multi-step process, similar to creating SYMCLI device groups using symdg, symld, and symbcv. The wizard is launched by right-clicking the Device Group folder and choosing Device Group Management > Create Device Group. In this example, we create a Device Group of type RDF1 and associate local VDEVs, local BCVs and remote clone targets. The first step is to choose a name for the Device Group (DemoDG in this example), then the Device Group type (RDF1 here, because we want to add SRDF R1 STD devices). Click on STD in the Device Type column and add the required R1 devices from the Available box to the Group Members box. The bottom of the wizard gives a summary of the selections that have been made. In the next slide, the local BCVs, VDEVs and the remote clone target selections are made.
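The CLI equivalent of this wizard uses symdg, symld and symbcv, as noted above. A hedged sketch; all device numbers and the SID are illustrative, and the exact flag for remote clone targets can vary with the Solutions Enabler release:

```shell
symdg create DemoDG -type RDF1              # group for SRDF R1 STD devices
symld -g DemoDG -sid 1098 add dev 010       # add an R1 STD device
symld -g DemoDG add dev 030 -vdev           # associate a local VDEV
symbcv -g DemoDG associate dev 020          # associate a local BCV
symld -g DemoDG add dev 040 -rdf -tgt       # remote clone target (flag may vary by SE version)
```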
  • 170. SRDF mode operations are shown in this slide. Right-click an RDF device group and choose Replication > SRDF Settings. The dialog box shows the current mode and pair states. The mode can be changed using the Set Mode pull-down; the various options are shown on the slide. In this example the mode is being changed from Synchronous to Asynchronous. SRDF control operations are performed by right-clicking the RDF device group and choosing Replication > SRDF Control. A failover operation is shown; the same dialog box is used for the other actions, such as Failback, Split, Establish, etc. The complete list of actions is displayed on the slide. The SRDF Control window has 2 pages: on page 1 the action (failover, for example) and the device pairs are chosen; then choose the options that relate to the specific action and execute it via the Finish button.
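The equivalent symrdf commands for the mode change and failover described above are sketched here. The group name DemoDG is the example from the previous slide; these commands require an SRDF-configured array:

```shell
symrdf -g DemoDG set mode async     # change pairs from synchronous to asynchronous
symrdf -g DemoDG query              # verify the mode and pair states
symrdf -g DemoDG failover           # make the R2 side read/write at the remote site
symrdf -g DemoDG failback           # return production operation to the R1 side
```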
  • 172. TimeFinder/Clone operations are performed by right-clicking a device group and choosing Replication > TimeFinder/Clone. If clone pairs have not yet been created, the dialog box does not show any device pairings. The “Set exact pairs” option is not available for Clone operations; device pairs have to be defined with the Edit Pairs button and then specified in the Add/Remove Clone Pairs dialog. Once the device pairs are defined, the rest of the operations are similar to TimeFinder/Snap.
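From the CLI, the same pairing and activation can be done with symclone. A sketch; the logical device names DEV001 and TGT001 are illustrative members of the example group DemoDG, and the commands require a live array:

```shell
symclone -g DemoDG create DEV001 sym ld TGT001 -copy   # define the pair and start a full copy
symclone -g DemoDG query                               # check the copy progress
symclone -g DemoDG activate DEV001 sym ld TGT001       # make the clone point-in-time usable
```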
  • 173. Click on the “Find” tab and, in the search box, type the string (a LUN in this case) you are looking for.
  • 174. Once the Hyper is found, we can view its contents. In this particular case, we are looking at a Meta with 3 Hypers: one is the Meta head, and the other two are meta members. Note that the Meta head is marked with (M) followed by the number of meta members in it (e.g., M2 for a meta with 2 hypers). Also keep in mind that Hypers normally come in two standardized sizes, 16.96 and 56.9966 Gigabytes, varying by Symmetrix system. On the right-hand side, we can see the Hyper properties. Listed is the Meta head, ID 087, 16.96 GB, allocated, within its total Meta device size of 50.87 GB (3 Hypers in total). The example is then repeated with more Hypers: here we can see Meta head 5F4 with its 7 devices attached, making it an (M)7 of 135.64 GB total.
  • 175. To see the difference, we have listed a Hyper here. You can see that Hyper B4A has a size of 5.62 GB and, together with the other Hypers, totals 11.24 GB.
  • 176. Note that LUNs are not the common standard for Symmetrix, as they do not use the disk as completely as Hypers do. Hypers are carved along the cylinders of the disks, using even the last cylinder, which avoids leaving MBs unused as is usual with LUNs. That is why sizes such as 16.96 and 59.9966 GB look so intricate in this case. Each Symmetrix system will have its own Hyper sizes, depending on disk sizes and other factors. You may want to keep a spreadsheet of the free/unmapped Hypers, filtered by size, device, etc. Documentation is always a must, as it will guide us in future configurations as well as in system recovery. Select Hyper “729”, right-click, go to Configure and then SDR Device Mapping. SDR stands for “Symmetrix Device Reallocation”. If you need to add more than one disk, select the disks with the “Ctrl” key and follow the same steps. In this case, ports 8C:1 and 9C:0 are the last octets of the BACKUPSRV HBA ports.
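The same SDR mapping can be scripted with symconfigure. A sketch only; the SID and LUN address are illustrative, the device and ports are the ones from this example, and the exact map syntax can vary with the Enginuity/Solutions Enabler version:

```shell
# Map hyper 729 to FA ports 8C:1 and 9C:0 (LUN address is illustrative)
cat > mapdev.cmd <<'EOF'
map dev 729 to dir 08C:1, lun=0C8;
map dev 729 to dir 09C:0, lun=0C8;
EOF

symconfigure -sid 1098 -f mapdev.cmd preview   # validate the mapping first
symconfigure -sid 1098 -f mapdev.cmd commit    # then apply it
```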
  • 177. Select the device on the left and the HBA ports on the right; hold Ctrl to select more than one. Alert messages will be displayed for each of the HBAs to which we are adding the device (hyper). To check the availability of addresses on these ports, run: symcfg list -sid 1098 -fa 08c -p 1 -addresses -available symcfg list -sid 1098 -fa 09c -p 0 -addresses -available Here is a preview of how the devices (Hypers) will be presented (allocated) to the selected storage HBA port. You can change the address presented to the host for easier administration (some systems do not handle hexadecimal values well). Select the addresses marked with * from the list gathered earlier from the OS. The system will then ask you to confirm before executing.
  • 178. To make the storage visible to the host, you must complete the Hyper masking steps. To do so, go to the “Hosts” section in the left panel, right-click on the desired host (the host to which the storage will be allocated) and select Masking > Modify Masking Configurations. This takes you to the following screen. This can also be done from the CLI, with the “symmaskdb” command. In this screen we provide all the data needed to complete the masking. • First select the ports (HBA ports) of the server to which the devices will be attached. • Then select both HBAs’ WWPNs in the upper-left box, choose the storage array, and select the type of device as desired. Finally, select the storage port to assign them to. • In the bottom list, check the devices you will be adding, in pairs (as they are seen once per HBA port). • This is the complete list of tasks. Once we have this, click on “Grant” to execute the tasks. At the end, note the differences between the marked devices: this tells us that the disks were previously assigned to one or more other hosts, and they may or may not still be allocated. It is very important to validate that you are not assigning a device a second time to the same or another host.
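Masking can likewise be scripted with symmask/symmaskdb, which the notes mention. A hedged sketch; the WWNs shown are placeholders for the real HBA WWPNs, and the device, director and port values are the ones used in this example:

```shell
# Allow the host HBA (placeholder WWN) to see device 10E8 on FA 8C port 1
symmask -sid 1098 -wwn 10000000c9aabbcc -dir 8c -p 1 add devs 10E8
# Repeat for the second path on FA 9C port 0 (second HBA, placeholder WWN)
symmask -sid 1098 -wwn 10000000c9aabbdd -dir 9c -p 0 add devs 10E8
# Refresh the VCMDB so the FAs pick up the new masking records
symmask -sid 1098 refresh
```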
  • 179. The thin line represents one previous assignment; the thick line represents more than one previous assignment. The disks you will normally be assigning/allocating should not have this “underscore” mark in the devices list. This list is an example, and you can also see that there are some devices with permissions already granted to a host. Lastly, verify in the detail view that all the operations have completed as desired. Finally, verify on the host with ioscan -fnC disk and insf -C disk that the Hyper has been correctly assigned and is visible to the Operating System. If this does not work, run ioscan as is, followed by insf -C. For basic troubleshooting, use: symmaskdb -sid 1098 list assignment -dev 10E8 symdev -sid 1098 show 10E8 | more These show how the device is mapped to the host and where a communication loss could be occurring.
  • 180. Thanks for reading. ~ THE END ~