EMC Proven Professional Knowledge Sharing 2009 Book of Abstracts



Knowledge Sharing 2009 Book of Abstracts
An EMC Proven Professional Publication
Knowledge Sharing Winners, 2008 Awards: (left to right) Diedrich Ehlerding, Lalit Mohan, Brian Russell, and Paul Brant with Alok Shrivastava, Senior Director, EMC Education Services; Frank Hauck, Executive Vice President, Global Marketing and Customer Quality; and Tom Clancy, Vice President, EMC Education Services.

Thank You!

For the third consecutive year, we are pleased to congratulate our EMC® Proven™ Professional Knowledge Sharing authors. This year's Book of Abstracts demonstrates how the Knowledge Sharing program has grown into a powerful forum for sharing ideas among information storage professionals. In 2009, Knowledge Sharing articles were downloaded more than 104,000 times, underscoring the power of the knowledge sharing concept. You may view the 2009 Knowledge Sharing articles, released monthly, at http://education.emc.com/knowledgesharing.

Our Knowledge Sharing authors also play a leading role in our new EMC Proven Professional community. It's a great place to collaborate with other Proven Professionals, ask questions about the program, or share your experiences. Visit the community at http://education.emc.com/provencommunity.

The EMC Proven Professional program had another great year—we recently awarded our 39,000th certification. We also recently announced publication of "Information Storage and Management," the first technology book from EMC. It will be a valuable addition to any IT professional's reference library.

Our continuing success is built on the foundation of committed professionals who participate, contribute, and share. We thank each of you who participated in the 2009 Knowledge Sharing competition.

Tom Clancy, Vice President, EMC Education Services
Alok Shrivastava, Senior Director, EMC Education Services
Table of Contents

First-Place Knowledge Sharing Article
    Is Cloud Computing the Game Changer Your Company Needs in These Tough Times?
    Bruce Yellin, EMC Corporation

Second-Place Knowledge Sharing Article
    Customized Tool for Automated Storage Provisioning
    Ken Guest and Sejal Joshi, A Large Telecommunications Company

Third-Place Knowledge Sharing Article
    Reclaiming SAN Storage—The Good, the Bad, and the Ugly
    Brian Dehn, EMC Corporation

Best of Content Management
    Architecting an Enterprise-Wide Document Management Platform
    Jacob Willig, Documentum Consultant

Best of Tiered Storage
    Real-Life Application of Disaster Recovery
    Faisal Choudry, Magirus UK Ltd.

Best of Backup and Recovery
    Best Practices for Implementing and Administering EMC NetWorker
    Anuj Sharma, Ace Data Devices

Backup and Recovery
    A Load-Balancing Algorithm for Deploying Backup Media Servers
    Krasimir Miloshev, EMC Corporation

    Backing Up Applications with NetWorker Modules
    Aaron Kleinsmith, EMC Corporation

    DLm40xx Implementation and Upgrade Guide
    Mike Smialek, EMC Corporation

    Implementing Deduplicated Oracle Backups with NetWorker Module for Oracle
    Chris Mavromatis, EMC Corporation

    NDMP Localization/Internationalization Support for NetWorker
    Jyothi Deranna, EMC Corporation

    Using Disks in Backup Environments and Virtual Tape Library (VTL) Implementation Models
    Emin Calikli, Gantek Technologies

Business Process
    Enterprise Standards and Automation for Storage Integration and Installation at Microsoft
    Aaron Baldie, EMC Corporation

    Significant Savings are Within Your Reach When You Understand the True Cost of Storage
    Bruce Yellin, EMC Corporation

    The Efficient, Green Data Center
    Raza Syed, EMC Corporation

Connectivity
    Best Practices for Deploying Celerra NAS
    Ron Nicholl, A Large IT Division

    CLARiiON and FCiP: A Practical Intercontinental DR and HA Solution
    Jaison K. Jose, EMC Corporation

    Oracle Performance Hit | a SAN Analysis
    Kofi Ampofo Boadi, JM Family, Inc.

    Preventative Monitoring in the NAS Environment
    Robert Wittig, EDS, an HP Company

    Create a Comparative Analysis of an Oracle Database Using Storage Architectures NAS and SAN
    Sergio Hirata, Columbia Storage
    Volnys Borges Bernal, Universidade de São Paulo/LSITec

Content Management
    Custom Documentum Application Code Review
    Christopher Harper, EMC Corporation

Tiered Storage
    Business Continuity Planning for Any Organization
    Smartha Guha Thakurta, EMC Corporation

    Data Migration Strategy (EMC SRDF via "Swing Frame")
    Sejal Joshi and Ken Guest, A Large Telecommunications Company

    Data Storage Performance—Equating Supply and Demand
    Lalit Mohan, EMC Corporation

    Integrating Linux and Linux-based Storage Management Software with RAID System-based Replication
    Diedrich Ehlerding, Fujitsu Technology Solutions

    Simplifying/Demystifying EMC TimeFinder Integration with Oracle Flashback
    Robert Mosco Jr., EMC Corporation

    Service-Oriented Architecture (SOA) and Enterprise Architecture (EA)
    Charanya Hariharan and Dr. Brian Cameron, Pennsylvania State University

    SRDF/Star Software Uses and Best Practices
    Bill Chilton, EMC Corporation

    Using EMC ControlCenter File Level Reporting for CIFS Shares
    Chad DeMatteis, EMC Corporation
    Michael Horvath, Fifth Third Bancorp

Virtualization
    Cloud Computing Services—A New Approach to Naming Conventions
    Laurence A. Huetteman, Technology Business Consultant

    Leveraging Cloud Computing for Optimized Storage Management
    Mohammed Hashim and Rejaneesh Sasidharan, Wipro Technologies
Is Cloud Computing the Game Changer Your Company Needs in These Tough Times?
Bruce Yellin, EMC Corporation

There have been countless articles written about cloud computing; a Google search will return millions of hits. Your colleagues discuss it at lunch and vendors bring it up in their presentations. From the enterprise perspective, it could be the perfect storm of value propositions—cloud computing drives the cost of IT down, is available on demand, and is scalable. Cloud computing could even become a new competitive edge for your company. Maturing at a rapid pace, it will be the next chapter in data processing. But is it ready for your company? Can cloud computing live up to its hype?

I call cloud computing a "game changer" because it is a service, a platform, and an operating environment poised to transform the status-quo computing model. Advocates argue it will revolutionize how information is delivered. It is difficult to find a leading technology company that isn't already delivering some type of cloud product or discussing its plans for the cloud. Many of us already rely on Yahoo or Google for personal e-mail, upload photos to Flickr for others to browse, and use Facebook to stay in touch. But when you use these titles at work, are you met with open arms, or does company policy frown upon their use?

Cloud computing allows the user to access an application without having to own it, install it on a computer, or maintain it. The cloud offers the freedom to access an application from anywhere a browser can run—desktop, laptop, intelligent phone, etc. It can increase a company's processing and storage capacity, and provide services without taking up data center space. On a personal level, some cloud computing services are free. For the enterprise, pricing is subscription-based or pay-as-you-go, allowing a company to enhance existing services or provide new IT functionality without a major investment.

This article examines what cloud computing is, total cost of ownership, privacy/security impacts, universal access concerns, and hybrid strategies.

"The emergence of cloud computing and its impact in these tough economic times are the crux of the Knowledge Sharing charter. As a former computer science teacher, expanding my awareness of new topics and guiding others is my mission—that is how new concepts are shaped."
Bruce Yellin, EMC Corporation
Customized Tool for Automated Storage Provisioning
Ken Guest, A Large Telecommunications Company
Sejal Joshi, A Large Telecommunications Company

Homegrown tools for storage provisioning automation can improve efficiency and implement a common standard in a multi-vendor storage environment. This enables teams to work more efficiently and eliminates the need for expensive SRM tools that may not meet all business requirements. In the current economic environment, this is even more important as groups are being downsized and budgets cut while storage capacity and fiber switch ports increase at breakneck speed.

In 2009, we provisioned 6.0 PB of enterprise-class SAN-based storage, compared to the 4.3 PB provisioned in 2007, an increase of roughly 40 percent. Our current environment is growing at a rate of ~100 TB per week. SRM tools do not scale to meet our requirements and service level agreements (SLAs) at this growth rate. Using our custom implementation, we can provision ~50 TB of storage across multiple frames in less than one hour and reduce overall re-work and human error. This includes design validation, LUN creation/masking, and zoning.

This article discusses how to use vendor-provided CLI software to create and implement customized provisioning tools. SRM-based provisioning tools are great for organizations with small storage footprints. However, they do not scale in larger and more diverse, enterprise-class, multi-data-center environments due to implementation costs and increased total cost of ownership (TCO). Instead, we can implement a customized provisioning solution with a small data center footprint and very little additional cost to the company.

This article discusses how to implement end-to-end storage provisioning automation using multi-vendor storage/switch platforms. The process includes a centralized ticketing system to track each storage request throughout its lifecycle, which enables the overall storage automation process. The workflow tool tracks approvals and design information, feeding the automation scripts and giving the system administrator the information needed to build the file systems based on the design.

Because automation is centralized, we can guarantee standards across multiple data centers and environments and easily create standard reports.
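As a quick sanity check of the capacity figures quoted above (my own arithmetic, not the article's), 6.0 PB is roughly 140 percent of the 4.3 PB provisioned in 2007, i.e., an increase of about 40 percent, and ~100 TB per week compounds to more than 5 PB of new capacity per year:

```python
# Sanity-check the growth figures quoted in the abstract
# (6.0 PB provisioned in 2009 vs. 4.3 PB in 2007; ~100 TB/week ongoing growth).
# Variable names are illustrative, not from the article.
pb_2007 = 4.3
pb_2009 = 6.0

ratio = pb_2009 / pb_2007            # ~1.40: 2009 is ~140% of the 2007 figure
increase_pct = (ratio - 1) * 100     # ~40% actual increase

weekly_growth_tb = 100
annual_growth_pb = weekly_growth_tb * 52 / 1000   # ~5.2 PB of new storage per year
```

At ~5.2 PB of new capacity per year, the weekly growth alone approaches the environment's entire 2007 provisioning total, which is why tools that scale matter here.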
Reclaiming SAN Storage—The Good, the Bad, and the Ugly
Brian Dehn, EMC Corporation

"My bonus is based on saving $5M worth of storage." "We are cutting the storage budget so you need to reuse capacity." "We cannot purchase additional storage until we increase utilization."

Reclaiming storage capacity for reuse reduces IT capital expenditures, increases storage utilization, contributes to "green computing" initiatives, and can address each of the issues above. Proper planning and execution of a storage reclamation effort is key to avoiding problems and realizing maximum benefits.

IT professionals, especially storage administrators, usually know how much storage is allocated and available for allocation. "Orphaned" storage, or capacity that appears to be used but is not, is more difficult to find. We must understand storage configuration states and the storage configuration hierarchy to find this treasure trove of reclaimable storage. Most of the layers in the storage configuration hierarchy include potentially reclaimable capacity. The level of effort required to reclaim that capacity, however, may not be worth the return on investment, depending on where it exists. Identifying candidates for reclamation is even more challenging.

This article provides a best-practices "blueprint" for a successful reclamation strategy. The following topics are discussed:

• Understanding the storage configuration hierarchy
• Identifying capacity for potential reclamation
• Determining whether the benefit is worth the cost
• Using storage resource management tools to facilitate the effort

This article will help you achieve storage reclamation objectives (the good), while reducing time and cost (the bad), and avoiding significant problems (the ugly).

"I wrote this article because I realized that no material existed that would provide storage users with a best-practices methodology for reclaiming storage."

"The best thing about being a Knowledge Sharing author is the amount of knowledge I gained personally as I was researching and writing the article. I think the person who gained the most from writing this article was me."
Brian Dehn, EMC Corporation
Architecting an Enterprise-Wide Document Management Platform
Jacob Willig, Documentum Consultant

This article presents best practices discovered while implementing EMC Documentum® within large organizations using a central-platform approach. Documentum is typically implemented as a dedicated host for document management and storage. The implementation is tailored to a specific set of functional requirements and does not necessarily consider future expansion into different functional areas across the organization. If the need arises, a new point solution is often implemented; it, too, is tailored to the new and existing requirements.

This approach fulfills the short-term business need for functionality, but it becomes increasingly expensive to maintain and operate the growing number of point solutions. This minimizes the effectiveness of a content management solution as the overhead costs recur over time.

Large organizations that adopt an enterprise view of content management, and implement document management as a strategic platform, are able to host document flows uniformly. The support team can add a new document flow easily and efficiently because everything in the platform is set up generically. This approach, though, incurs additional startup costs to enable future, controlled growth.

This article discusses several of the challenges I faced while implementing document management in a platform architecture, and explains both the challenges and the solutions. Topics include information management, the object model, security, workflows, and functional, application, and technical support.

"I keep learning as this is very much unexplored territory. These are exciting times. I never had an article posted in the past, so the prospect of this article being posted makes me feel very proud. It would be great if many more people would become aware of the necessity of implementing document management in a platform-like architecture."
Jacob Willig, Documentum Consultant
Real-Life Application of Disaster Recovery
Faisal Choudry, Magirus UK Ltd.

EMC business continuity (BC) and disaster recovery (DR) offer a myriad of options to protect your data. Organizations sometimes implement them without a thorough analysis of how and when they would use the technologies to recover data if needed. Even if they are familiar with the technology, procedures can be so arduous that the organization has no chance of meeting its committed recovery point objective (RPO) and recovery time objective (RTO). "State-of-the-art" recovery technologies cannot help if no one knows how to use them.

Organizations need procedures that document how to implement their recovery technologies. This is even more important if an external organization did the implementation. What questions should you consider?

• When the implementation ends, can the customer grasp the complexities of the new technologies and use them if needed?
• Can we make these complex solutions easier, especially when organizations don't have the "luxury" of appointing a full-time disaster recovery team?

SMEs and technology professionals face these issues when proposing BC and DR solutions. This article uses a case scenario to examine these questions in relation to a recent real-life solution, one that I proposed and implemented. The solution included multiple sites, using EMC CLARiiON® systems (CX3-10 systems) at each site, EMC MirrorView™/Asynchronous, and EMC SnapView™. ESX™ Servers are the attached hosts, so we implemented Site Recovery Manager (SRM) using the recently released SRA adapter for MirrorView/A.

The project was successful, but raised interesting afterthoughts, especially regarding end-users' perceptions and technology expectations. In conclusion, this article offers advice on how to address these issues.
Best Practices for Implementing and Administering EMC NetWorker
Anuj Sharma, Ace Data Devices

EMC NetWorker® is the fastest-performing backup application in the market. Integration with replication and snapshot technologies helps you meet the most aggressive RTO and RPO requirements and transform backup to disk or backup to tape into an off-host, off-hours process. It supports a broad set of operating systems, databases, applications, and topologies.

EMC NetWorker's compatibility with various operating systems, applications, and databases makes it successful in today's competitive industry. However, it must be implemented properly to get the most out of this product. There are some practices to keep in mind to make backups and recovery more effective and beneficial for the organization. This article covers the practices I followed when implementing and administering EMC NetWorker. They include:

• Implementing NetWorker on various operating systems
• Making it foolproof in case of a NetWorker server disaster
• Implementing NetWorker with bidirectional as well as unidirectional hardware firewalls, including various scenarios (i.e., when some of the clients are in a DMZ)
• Working with the NetWorker ports
• Implementing NetWorker in a cluster
• Integrating e-mail alerts with NetWorker
• Implementing persistent binding through EMC NetWorker
• Integrating EMC Avamar® for deduplication
• Probe-based backups
A Load-Balancing Algorithm for Deploying Backup Media Servers
Krasimir Miloshev, EMC Corporation

Finding the best distribution of backup clients over designated media servers can be considered part of the general load-balancing problem. Our goal is to distribute the backup clients' data in the best possible way among newly designated media servers responsible for the backup read/write operations. This article suggests a load-balancing approach based on one criterion: the amount of data that must be backed up on each client.

It begins by introducing the basic components of backup infrastructures: the master server, clients, media servers, and storage backup devices. It presents two approaches for deploying media servers and provides a structure for decision making. Next, we investigate a load-balancing schema with only 10 backup clients and two media servers. We calculate capacity and walk through a load-distribution algorithm. Finally, the article aligns the basic load-balancing algorithm to a program implementation. In two easy steps, you'll be able to implement the algorithm using C or Korn shell.

We can reduce the backup window by balancing the backup load among all the backup media servers. Even when data size is the only criterion we use, we can expect visible performance improvement and shorter backup windows.
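The article implements its algorithm in C or Korn shell; as an illustration only, here is a minimal Python sketch of one plausible greedy scheme for the single criterion the abstract names (data size per client): sort clients largest-first and always assign the next client to the least-loaded media server. The function name and sample sizes are hypothetical, not taken from the article.

```python
def balance_backup_load(client_sizes, num_servers):
    """Greedy load balancing by backup data size (largest client first).

    client_sizes: dict mapping client name -> amount of data to back up
                  (any consistent unit, e.g. GB).
    num_servers:  number of designated media servers.
    Returns (assignments, loads): server index -> list of clients,
    and the total load placed on each server.
    """
    assignments = {s: [] for s in range(num_servers)}
    loads = [0] * num_servers
    # Place big clients first; each goes to the currently least-loaded server.
    for client, size in sorted(client_sizes.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))
        assignments[target].append(client)
        loads[target] += size
    return assignments, loads
```

Because the most loaded server determines when the last backup finishes, evening out the per-server totals this way is exactly what shrinks the overall backup window.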
Backing Up Applications with NetWorker Modules
Aaron Kleinsmith, EMC Corporation

This article describes how to configure EMC NetWorker® to back up a database application. It covers NetWorker's technical setup for protecting popular databases for which NetWorker modules have been released. Backup administrators who configure and monitor online database backups through NetWorker will benefit most from this article.

The article discusses traditional database backup methodologies such as online (hot) backups, offline (cold) backups, and transaction log backups. It describes the backup procedure to capture a consistent copy of an online database and transaction logs from the source disks using the application server resources that manage the primary copy of the database and data. It provides enough technical information to help a storage administrator who manages NetWorker verify and monitor a properly set-up client.

This article discusses:

• An overview and explanation of NetWorker modules
• General concepts and planning applicable to all NetWorker modules
• NetWorker configuration settings applicable to all modules, and specific settings and considerations for each NetWorker module within the NetWorker software
DLm40xx Implementation and Upgrade Guide
Mike Smialek, EMC Corporation

Implementation of a data library for mainframe (DLm40xx series) requires coordination among multiple system and people resources. These include the mainframe tape library system, z/OS operating system software, NAS configuration, and configuration of the DLm ACP and VTE components. EMC Celerra® Replicator V2.0 is also required if replicating data to another DLm. This Implementation and Upgrade Guide walks through the implementation process to define the information necessary to configure each component.

This guide is designed for solution architects, implementation specialists, maintenance and support services personnel, and storage administrators who want to understand how to implement or upgrade a DLm system. It presents the Linux commands and scripts necessary to allow a mainframe-centric person to do a DLm configuration.

The DLm Implementation and Upgrade Guide addresses several key areas:

• DLm mainframe checklist to gather current customer tape information
• Input form to provide required IP addresses and phone numbers
• PC software and hardware needed to configure DLm
• DLm40xx hardware installation requirements
• z/OS updates to tape catalog, HCD, SMS/ACS, MTL, OAM, and Esoterics
• NAS configuration
• Running DLm Linux scripts to define Tapelibs and NFS mount points
• Running SCRIPT80 to change permissions
• Installing the DLm Healthcheck script and mainframe reporting
• DLm z/OS utilities
• ESCON or FICON CHPID updates
• Test and acceptance procedures
• IP replication configuration
• Troubleshooting

"My idea for this article came from participating on project implementations. Besides the opportunity to have the article published, writing it forces me to organize many implementation notes and experiences into an orderly layout. Having others benefit from my experience helps IT services organizations deliver projects efficiently and provides customer TCE."
Mike Smialek, EMC Corporation
Implementing Deduplicated Oracle Backups with NetWorker Module for Oracle
Chris Mavromatis, EMC Corporation

More and more customers are now evaluating the cost benefits of data deduplication. This is partly due to the explosive growth of software deduplication technology (such as EMC Avamar®). EMC NetWorker® Module for Oracle is a mature product, with thousands of customers, that offers a strong backup solution for an Oracle database. The first quarter of 2009 will be the first time that NetWorker Module for Oracle has deduplication integration with Avamar. This feature empowers users to conduct Oracle deduplication backups and restores via the integrated use of a deduplication storage node (Avamar server). A deduplication backup can be a manual backup initiated within RMAN or a scheduled backup via the NetWorker Management Console scheduler framework.

There are a number of differences and additional configuration items that we must consider when deploying NetWorker Module for Oracle 5.0 to perform deduplication backups. This article outlines installation, configuration, pitfalls, and best practices, and provides answers to questions that can help customers better embrace this paradigm shift in backup solutions. It is not intended to be a step-by-step guide, nor does it replace the Installation Guide or Release Notes. It assumes a working knowledge of NetWorker, Avamar, Oracle, and NetWorker Module for Oracle. Sales, systems engineers, support personnel, and customers will benefit from learning how to deploy this solution.
NDMP Localization/Internationalization Support for NetWorker
Jyothi Deranna, EMC Corporation

Computer internationalization and localization are important because of the numerous differences that exist among countries, regions, and cultures with respect to language (not only distinct languages but also dialects and other variations within a single language), as well as weights and measures, currency, date and time formats, and more.

The EMC NetWorker® NDMP client-connection feature provides NAS with fast, flexible backup and restore of mission-critical data residing on filers. The Network Data Management Protocol (NDMP) is a TCP/IP-based protocol that specifies how network components communicate with one another to move data across the network for backup and recovery. NetWorker with NDMP client connections provides backup and recovery support for more than eight NAS hardware providers. With NetWorker 7.4, the NDMP client is responsible for backing up and recovering non-ASCII data from the NAS filers.

EMC NetWorker 7.4 is a full-fledged internationalization release; the product is I18N and L10N compatible. Many changes were made in the NDMP client-connection feature to handle non-ASCII characters. NAS vendors each have their own mechanisms to store non-ASCII characters on their specific filers. NDMP is a single interface that helps NetWorker understand how each vendor handles non-ASCII data. Using NDMP, NetWorker is able to back up filers' data and store it without data corruption.

This article explains how NetWorker backs up and recovers non-ASCII data residing on NAS filers using the NDMP client-connection feature. It describes how to configure NAS filers for non-ASCII data backup/recovery and explains the changes made when configuring a NAS filer as an NDMP client. It offers guidelines for configuring filers from different vendors, along with best practices and troubleshooting tips to avoid data corruption and increase performance.
Using Disks in Backup Environments and Virtual Tape Library (VTL) Implementation Models
Emin Calikli, Gantek Technologies

Protecting data is vital for availability and business continuity. There are many data protection solutions in the IT market, and all of them must address recovery time objectives (RTO) and recovery point objectives (RPO). Recovery time represents the time it takes to restore the data; recovery point represents the "data currency" at the backup state. These two concepts are not independent and must be integrated based on company requirements.

Many organizations are experiencing shrinking backup windows and increasing data loads. During a restore from tape, slower recovery operations result in longer production system downtime. Companies are seeking faster backup and recovery solutions.

Disks can be a viable solution for meeting fast data recovery requirements. The service level agreement (SLA) depends on the application and the customer, and it is often difficult to meet all SLAs with a single data protection solution. Backup vendors are trying to tailor their applications to use disks efficiently; this approach requires intelligent development processes.

A Virtual Tape Library (VTL) can help us reduce or eliminate the following problems:

• Physical tape damage
• Cost of tape drives
• Highly utilized tape drive resources (VTL staging or post-processing)
• Backup windows
• Security problems (backup encryption)

We also have to know:

• VTL implementation methods
• Differences in disk usage between OLTP applications and backups
• Impacts of block size on throughput and bandwidth

"I used current, real-world data protection requirements and personal experiences to write my article. Valuable data size is increasing day to day, and protection becomes more important and critical."
Emin Calikli, Gantek Technologies
18. Enterprise Standards and Automation for Storage Integration and Installation at Microsoft
Aaron Baldie, EMC Corporation

Microsoft's most challenging problem is how to keep tens of thousands of servers up to date with drivers and firmware in an environment that is spread across many global data centers. More and more, these data centers are subject to budget and staff cuts. Personnel are becoming less specialized, and in some cases have no technical training at all other than the ability to power cycle equipment.

The EMC account team has worked directly with Microsoft's IT staff to overcome these challenges and provide standards for drivers, firmware, and complete automation packages that integrate with existing processes to allow for storage connectivity across the enterprise. This allows Microsoft IT staff to rapidly deploy EMC storage to any server globally and manage this storage with a limited, centralized IT staff.

To accomplish this, the EMC Support Matrix is refreshed on a six-month cycle to provide a baseline of supported drivers, firmware, and applications, and their compatibility with the OS and hardware. A matrix is published to lock in the revisions so a deployment kit can be created once this standard is established. All drivers are then downloaded to a central location and rolled into an automation package that can be integrated into Microsoft's own deployment process. Testing is performed for multiple scenarios across all currently supported OS versions, for both upgrades from the existing standards and new deployments. Any issues found during this test process are triaged with Microsoft and fixed before the latest versions are released to gold. The local account team accomplishes all of these tasks and owns the project from start to finish.

EMC and Microsoft's strong alignment and rapid deployment of EMC hardware achieve one of the industry's largest storage-to-administrator ratios: about 1.5 PB per head count across 150 EMC CLARiiON® systems and 10 EMC Symmetrix® DMX™ systems.
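The locked-revision baseline described above can be sketched as a simple compliance check: compare each server's installed component versions against the published matrix. The component names and version strings below are invented for illustration, not real Support Matrix entries.

```python
# Baseline locked in for this six-month deployment cycle (hypothetical revisions).
BASELINE = {"hba_driver": "9.1.2", "hba_firmware": "4.01a", "powerpath": "5.3"}

def compliance_report(server_inventory, baseline):
    """Return components whose installed version differs from the baseline,
    mapped to (installed, expected) pairs."""
    return {component: (installed, baseline[component])
            for component, installed in server_inventory.items()
            if component in baseline and installed != baseline[component]}

# One server's scanned inventory: the HBA firmware has drifted from the standard.
server = {"hba_driver": "9.1.2", "hba_firmware": "3.92", "powerpath": "5.3"}
drift = compliance_report(server, BASELINE)
```

Run across thousands of servers, a report like this is what lets a small centralized staff see exactly which machines need the automation package re-applied.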
19. Significant Savings are Within Your Reach when You Understand the True Cost of Storage
Bruce Yellin, EMC Corporation

Do you find yourself struggling with your company's insatiable craving for more storage? Will any of your storage suppliers' claims of "faster, cheaper, and better" really save your company money? How about the stark reality that your dwindling IT budget is causing you sleepless nights? Has the time come to expand the outdated, state-of-the-art storage infrastructure you leased just three years ago?

You may be asked to trim capital and operating storage expenses by hundreds of thousands of dollars, while simultaneously introducing innovation to your organization. This seemingly contradictory set of storage demands also impacts your internal service level agreements. Where do you begin? Which concepts will deliver significant short- and long-term savings?

As a veteran of the storage industry, I have heard questions such as "How much does a gigabyte cost?" or "I don't have a lot of money to spend" countless times. Neither is the right place to start when trying to determine the actual cost of storage, nor how to make your storage budget go farther.

I have also witnessed an explosion of data growth; some pundits claim the rate is as high as 60 percent per year. Whatever the rate, we will have to store more data tomorrow than yesterday. In addition, corporate policies and regulations require us to save that data for longer periods of time. That means more storage, which translates into more floor space, more power, more staff, and more complexity.

This article explores the challenges facing the IT storage manager and offers insight into navigating a course of action to provide budget relief, while offering better services to internal and external customers. It explores issues such as frame expansion and future-proof architectures, environmental impact, virtualization, deduplication, risk avoidance, stretching the lifespan of existing equipment, cost-effective education, required negotiation and financial skills, cutting fat from a budget, and much more.

“Knowledge sharing is the basis by which we all share our collective expertise to benefit others in IT. With cost often being part of the storage equation, communicating my technical training in terms of dollars and cents turned out to be both fun and educational.”
Bruce Yellin, EMC Corporation
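Why is "How much does a gigabyte cost?" the wrong starting point? Because raw capacity is not usable capacity, and purchase price is not total cost. A back-of-the-envelope sketch (every figure below is hypothetical) that folds RAID overhead, utilization, and annual operating expenses into a cost per usable gigabyte:

```python
def usable_gb(raw_gb, raid_overhead_fraction, utilization_fraction):
    # Capacity left after RAID protection, scaled by how much is actually used.
    return raw_gb * (1 - raid_overhead_fraction) * utilization_fraction

def cost_per_usable_gb(purchase_cost, annual_opex, years, raw_gb,
                       raid_overhead_fraction=0.25, utilization_fraction=0.6):
    # Total cost of ownership over the lease term, divided by the capacity
    # that actually holds data.
    total_cost = purchase_cost + annual_opex * years
    return total_cost / usable_gb(raw_gb, raid_overhead_fraction, utilization_fraction)

# 100 TB raw at a $200k purchase price plus $30k/year for power, floor space,
# and staff over a 3-year lease: the "true" cost per usable GB.
cost = cost_per_usable_gb(200_000, 30_000, 3, 100_000)
```

With these assumptions only 45 TB of the 100 TB raw ever holds data, so the per-gigabyte figure is several times the naive purchase-price-over-raw-capacity number; that gap is where most of the savings discussed in the article live.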
20. The Efficient, Green Data Center
Raza Syed, EMC Corporation

This article will help you build an efficient, green data center that will yield financial and environmental benefits. Power and cooling optimization and virtualization are two key strategies to help you reap these benefits. We discuss power and cooling optimization, as well as virtualization and other related data center technologies that are required to develop and operate an efficient, green data center.

This discussion occurs in the context of a virtualization-leveraged data center. Virtualization has been positioned as the core enabler for driving broader efficiencies. It includes server and storage procurement and utilization, information protection (backup and recovery), business continuity and disaster recovery (local and remote replication), infrastructure management, and infrastructure consolidation and automation.

This article provides you with a breadth of knowledge about the major data center functions that have a direct or indirect impact on operational and environmental efficiency. Our discussion is not limited to technology, but includes other relevant areas of a data center that are either impacted by or have an impact on the technology infrastructure and IT operations. It identifies major areas of consideration as well as step-by-step guidance about how to implement power and cooling, virtualization, and other related technologies. Designs and architectural drawings for optimization are included.

This article consists of three sections:
• Introduction focuses on the financial and environmental impacts of inefficient data centers and the case for building efficient, green data centers.
• Considerations for Building Efficient, Green Data Centers focuses on high-level data center operating environments, IT processes, and technology considerations for data center efficiency.
• Implementing an Efficient, Green Data Center focuses on implementation processes, steps, and technologies, and describes designs and architectural drawings in detail.
21. Best Practices for Deploying Celerra NAS
Ron Nicholl, A Large IT Division

Deploying EMC Celerra® NAS involves many pieces of the IT infrastructure, ranging from backend storage to network topology and beyond. Choosing a solid design can make the difference between mediocre performance and exceeding your customers' expectations. NAS solutions are quickly becoming a viable alternative to mitigate the cost of a SAN-based storage solution. A Celerra NAS solution offers much of the same functionality traditionally seen on the storage array over IP, including replication, checkpoints, mirroring, and more. Applying best practices to your design will improve performance and reliability.

This article includes:
1. How to lay out the backend storage devices
2. How to implement a network configuration that allows for greater flexibility and reliability
3. Planning Microsoft Windows domain interaction
4. Best practices for backing up the Celerra environment
5. Monitoring the performance of the Celerra solution

There are many components to a successful NAS design. The network topology can be leveraged to provide a greater scope of service; the CIFS and NFS clients can be configured for greater performance and reliability. Implementing best-practices standards can reduce the customers' cost of ownership and improve reliability. This article provides a quick reference to configuring Celerra.

“The process of creating my article helped cement and expand on my knowledge.”
Ron Nicholl
22. CLARiiON and FCiP: A Practical Intercontinental DR and HA Solution
Jaison K. Jose, EMC Corporation

EMC CLARiiON® offers possibilities that meet almost all the needs of today's high-demand business. We can easily achieve complex industry requirements when we join this small magic box to other technology. I would like to share an intercontinental disaster recovery (DR) solution that was achieved with the help of several products, including EMC CLARiiON, MirrorView™/AS, SnapView® Clones, SnapView SnapShots, FCiP, and VSAN.

We had to find a solution to implement a primary site in Europe and a disaster recovery site in the U.S. for a customer-facing application of EMC, so it was very important to have a robust solution with a proper DR plan. FCiP was our first choice to manage data movement flawlessly over the Atlantic; we could use Internet connectivity and VPN to create a tunnel between the two sites. We segregated the data replication SAN by separating the ports to a special VSAN extending to the DR site using FCiP.

CLARiiON was the obvious choice for this midrange application. The amount of data was huge, but it was not very dynamic. Data availability was the primary concern. How would we connect two sites separated by thousands of miles with a CLARiiON? Our best answer was MirrorView/AS, since the data is sent through the FCiP tunnel to the DR CLARiiON.

Then, we needed a backup solution. This environment was hosted in a third-party data hosting facility, and a tape-based or external backup option often costs more money. SnapView Clones were our answer. A gold copy of the production and DR LUNs was set up to protect against data corruption or data loss caused by human error.

MirrorView/AS, SnapView SnapShots, SnapView Clones, FCiP, and VSAN: all of these products' features are utilized in this unique disaster recovery solution. This is not just a concept; it has been implemented and is working in a production environment. I am happy to share it with you.
23. Oracle Performance Hit | A SAN Analysis
Kofi Ampofo Boadi, JM Family, Inc.

Performance problems can be avoided or minimized if we design the right disk layout. RAID type definitions for specific components are essential to Oracle's performance. Not keeping distinct components on the same spindles is equally crucial. This article uses a real-life case study to explain Oracle's components and illustrate the effects of a poorly designed SAN on Oracle's performance. A hit! Please be aware that the order of the analysis is irrelevant; it is the content that matters.

Applications' performance relies heavily on the SAN. Performance issues can be centric to the host, the connectivity device, the storage array, or a combination. This article elaborates on the effect that the SAN can have on Oracle's performance, with an emphasis on the EMC CLARiiON® storage array. I will touch briefly on the Symmetrix, since most of the concepts and analysis in the article apply to the EMC Symmetrix® as well.

The items below will be addressed in detail via a case study:
1. Define and analyze sequential and random writes and how they impact Oracle's performance design.
2. Define the components of Oracle architecture and their importance.
3. What are Redo Groups and why are they important to Oracle's performance?
4. Detail each component's behavior on the SAN and which disk layout best fits for optimal performance.
5. CLARiiON has limited performance-tuning ability and a non-scalable cache; which architectural designs do you need to avoid?
6. How host-side analysis and hit can contribute to the performance of Oracle.
7. Switch-level specifications and alerts that can significantly contribute to the performance of applications.
8. The case study! This details most of the issues administrators run into and suggests the best resolutions.

Performance analysis can be approached in several ways. The key is to use the appropriate performance tools to understand what you are analyzing. Please note that different approaches may lead to the same result.
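The sequential-versus-random write discussion in item 1 ultimately comes down to the RAID write penalty: each host write costs several back-end disk operations depending on the RAID level. A minimal sketch using the standard write-penalty factors (disk counts and per-disk IOPS below are hypothetical):

```python
# Back-end disk operations consumed per host write, by RAID level.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def host_iops_capacity(disks, iops_per_disk, read_fraction, raid_level):
    """Estimate the host-visible IOPS a spindle group can sustain."""
    backend = disks * iops_per_disk          # aggregate back-end IOPS
    penalty = WRITE_PENALTY[raid_level]
    # Each host read costs 1 back-end I/O; each host write costs `penalty`.
    return backend / (read_fraction + (1 - read_fraction) * penalty)

# Redo logs are essentially 100% writes: the same 8 x 180-IOPS spindles
# deliver twice the host write IOPS on RAID 10 as on RAID 5.
redo_on_raid5 = host_iops_capacity(8, 180, read_fraction=0.0, raid_level="raid5")
redo_on_raid10 = host_iops_capacity(8, 180, read_fraction=0.0, raid_level="raid10")
```

This is why write-heavy Oracle components such as redo logs are usually the first candidates for RAID 10, while read-dominated data files tolerate RAID 5 far better.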
24. Preventative Monitoring in the NAS Environment
Robert Wittig, EDS, an HP Company

The EMC Celerra® actively monitors the environment for warnings and failures. As the NAS environment expands, it becomes increasingly important to assess the current health of each frame and verify that the current configuration fully utilizes redundant Celerra capabilities. Verification, testing, and preventative monitoring are important aspects of maintaining the Celerra's reliability and availability.

Once configured, we must test and monitor the Celerra environment to verify that it will properly handle any faults and continue to provide the services for which it was designed. Testing should not stop once the system is in production; it should continue at regular intervals to ensure continuous functioning if a failure occurs, and to notify the appropriate support personnel in the event of a failure. Finally, we should make non-intrusive checks at regular intervals to verify that regular support activities have not adversely impacted any part of the environment.

This article identifies methods and preventative measures to identify configuration issues, verify redundant hardware, and ensure that configured notifications function properly. The objective is to provide the Celerra storage administrator with a set of actions to check the status of a running environment, verify redundant operations, validate the configuration, and confirm that failure notifications are functioning properly.

We will examine four parts of the Celerra environment:
• Provisioned storage
• Redundant data mover configuration
• Warning and failure notifications
• Celerra Connect Home functionality

Finally, this article suggests methods that can be applied to gather these checks into a single automatic process. This process can be regularly executed to provide evidence of the validated configuration and identify potential problems before they impact the availability of the services or of the entire Celerra.
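The "single automatic process" suggested above can be sketched as a small harness that runs each check and collects failures for notification. The check functions here are placeholders standing in for real Celerra inspections; their names and results are hypothetical, not Celerra commands.

```python
def check_provisioned_storage():
    # Placeholder: would inspect file system and storage pool utilization.
    return (True, "all file systems under 90% full")

def check_data_mover_failover():
    # Placeholder: would verify the standby data mover is configured and ready.
    return (True, "standby data mover ready")

def check_connect_home():
    # Placeholder: would confirm the Connect Home notification path responds.
    return (False, "Connect Home test transfer failed")

CHECKS = {
    "provisioned storage": check_provisioned_storage,
    "data mover failover": check_data_mover_failover,
    "connect home": check_connect_home,
}

def run_health_checks(checks):
    """Run every check; return (all_passed, list of failure messages)."""
    failures = [f"{name}: {detail}"
                for name, check in checks.items()
                for ok, detail in [check()] if not ok]
    return (not failures, failures)

healthy, problems = run_health_checks(CHECKS)
```

Scheduled from cron, a harness like this both produces the "evidence of the validated configuration" the article calls for (the passing results) and surfaces problems before they affect availability (the failure list feeds the notification step).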
25. Create a Comparative Analysis of an Oracle Database Using Storage Architectures NAS and SAN
Sergio Hirata, Columbia Storage
Volnys Borges Bernal, Universidade de São Paulo/LSITec

Today's storage architecture market is divided among three large groups: direct-attached storage (DAS), network-attached storage (NAS), and storage area network (SAN). The storage system, among other factors, affects any application's performance. The application's overall performance is also affected by the storage network technology, the data storage communication protocol, and the storage network components. Performance is measured using response time.

Storage managers have difficulty aligning the application's needs to the appropriate storage architecture. Many factors are involved in this decision, including the compatibility between the host bus adapter, switches, and storage systems, and the latency, cost, and management tasks. Also, storage managers must consider the desired availability level for the application and achieve the service level agreements (SLAs) negotiated with different departments.

Application simulators are an alternative for choosing the best data storage technology or architecture. The present work uses a simulator of an order entry application to generate the I/O operations at an Oracle database installed on SAN Fibre Channel, SAN iSCSI, and NAS with NFS architectures. It is expected that the results will indicate whether an Oracle database needs a Fibre Channel infrastructure or an iSCSI pipe has enough throughput to support it.

This article is a guide to choosing the most appropriate storage architecture for an Oracle application.

“I'm finishing my Computer Science Master's degree and I saw in Knowledge Sharing an opportunity to publish my work. I used to read a lot of works comparing Fibre Channel, iSCSI, and NAS using file system benchmark tools, but a real application has different behavior than a file system benchmark tool. My idea is to open a discussion about a price/performance relation for the DBMS-based application in some customer scenarios.”
Sergio Hirata, Columbia Storage
26. Custom Documentum Application Code Review
Christopher Harper, EMC Corporation

An EMC Documentum® consultant typically is the last point of contact when things have gone wrong. "Firefighting" is the term given to this type of corrective work. These assignments are caused by solutions developed by someone with limited knowledge of our systems. This lack of knowledge causes issues both in the design of the application and in the way it is implemented. This daunting task typically presents itself as heaps of documentation, one or more DFC/WDK projects containing source code, and a time-boxed schedule that prevents a full review of documentation and code.

How should we approach reviewing code written by a third party who doesn't necessarily conform to the standards we are accustomed to? This article provides basic principles on what to look for and explains some of the common ways that our systems are misused, leading to poor performance. I provide the technical rationale for each instance where we discuss why or why not to use a particular approach. We also discuss corrective measures for each encountered problem. I present the technical solutions for all of the cases we discuss. The cases are real and have been encountered "in the wild."
27. Business Continuity Planning for Any Organization
Smartha Guha Thakurta, EMC Corporation

This article introduces the methodologies to develop a company's information survival strategy. The goal is to analyze the organization's critical information assets, perform a risk-mitigation analysis, and carry out data recovery planning encompassing change management. The overall objective of a business continuity plan is that "in this demanding market, a proactive approach aimed at assuring continuity of business processes and applications amid major and minor disruptions is absolutely essential."

The broad scope of the article includes:
• An introduction to business continuity planning
• Business continuity planning objectives
• Defining disaster and its types from different points of view
• Global best practices
• Benchmark case study of implementing BCP for an organization
• Methodology, barriers, and challenges
• Change management and emergency decision making
• Recommendations and conclusions

After reading this article, you will understand:
• The importance of business continuity planning
• The benefits and cost savings to stakeholders
• The roadmap/project plan developed for the organization
• Business process mapping and re-engineering for continuity of operations

Methodology and plan of work:
• Experiences from professional life
• Benchmarking with industry best practices
• Research data from global experts
• Reviews with the mentor on a regular basis
• Findings and knowledge gathering from the field and the organization
28. Data Migration Strategy (SRDF via "Swing Frame")
Sejal Joshi, A Large Telecommunications Company
Ken Guest, A Large Telecommunications Company

Increasing data center power and cooling requirements impact IT infrastructures' scalability. Storage consolidation provides relief for power and cooling and also reduces total cost of ownership (TCO). Simplifying storage infrastructures and ease of management are the two reasons that businesses use storage consolidation. Businesses have had to scale their storage infrastructures to accommodate capacity, performance, and high-availability requirements due to massive data growth.

This article provides guidelines for using storage-based replication (EMC SRDF®) to migrate and consolidate data from multiple storage arrays to just a few. It explains how to migrate data using SRDF from 5670 code to 5772 code via a swing frame (an EMC Symmetrix® DMX™-2 running 5671 code). This migration also involved multiple Oracle databases, so maintaining data consistency was critical. We were able to accomplish this task in a very short amount of time.

There are advantages and disadvantages to using storage-based or host-based migration methods. The article discusses these and provides guidelines for migrating data from DMX-2 and DMX-3 arrays to DMX-4 using SRDF.
29. Data Storage Performance—Equating Supply and Demand
Lalit Mohan, EMC Corporation

Individual components, including storage, contribute to the cumulative outcome of performance. When the storage processing duration is proportionately long, "demand" is the workload generated by host computer systems, and "supply" is the processing service provided by the data storage system. The "quality of performance" experienced by the business relies on how well supply meets demand. To select and design data storage components, we must match projected demand with the capability to supply. The resulting solution would operate at an optimum level, where demand equals supply.

In this article, we apply the demand-supply analogy to build a universal framework using data storage domain performance characteristics as proxies representing demand and supply. This will be done in light of several popular information technology infrastructures, for example, messaging, enterprise resource planning, and relational database applications in an open systems environment, and mainframe host applications in a proprietary environment.

Among the topics discussed are:
• Relevant terms and definitions
• Characteristics of "demand" placed on data storage components
• "Supply" capability of the data storage component
• Combining demand and supply into the working framework
• Options for improving performance capability
• Case scenarios to illustrate key points
• Recommendations in conclusion
• Assumptions, impact, and remedy
• Limitations and improvements

This article helps you better plan and design optimum data storage infrastructures. It helps you support centralization of business information assets into efficient shared service centers, a necessity in the current financial climate. This aggregation may enhance the value of information to management, improving return on investment.

“I decided on a topic based on the direct financial and operational benefit to customers and to IT services providers. I wanted to find a way to engage customers more closely in the current climate of tight IT budgets by trying to get more return on investment.”

“I am delighted to see my article published! It's akin to a mother looking at her newborn!”
Lalit Mohan, EMC Corporation
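The demand-equals-supply framing can be made concrete with the simplest of queuing models: as demand (arriving I/O) approaches supply (service capacity), response time grows non-linearly rather than gracefully. A sketch using the M/M/1 approximation R = S / (1 - U); this is illustrative only, as the article's framework is broader than any one queuing formula.

```python
def utilization(arrival_iops, service_iops):
    # Demand expressed as a fraction of supply.
    return arrival_iops / service_iops

def response_time_ms(service_time_ms, arrival_iops, service_iops):
    """M/M/1 approximation: R = S / (1 - U). Valid only while demand < supply."""
    u = utilization(arrival_iops, service_iops)
    if u >= 1.0:
        raise ValueError("demand exceeds supply; queue grows without bound")
    return service_time_ms / (1.0 - u)

# A device with 5 ms service time responds in 10 ms at 50% utilization,
# but takes roughly 50 ms at 90% utilization.
r_half = response_time_ms(5.0, 500, 1000)
r_ninety = response_time_ms(5.0, 900, 1000)
```

The non-linearity is the practical point: a component sized so that demand merely equals supply on average will spend much of its time in the steep part of this curve, which is why sizing leaves headroom below the demand-equals-supply point.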
30. Integrating Linux and Linux-based Storage Management Software with RAID System-Based Replication
Diedrich Ehlerding, Fujitsu Technology Solutions

All major database and ERP software vendors release their products on Linux. As with other operating systems, we must replicate using RAID array functionality to meet the demands for short backup windows, fast restore processes, and fast system copy processes. The legacy device names that are traditionally used in Linux without any storage management software are inappropriate for enterprise-class configurations. The main problem is that these name spaces are not persistent across server reboots; they cannot guarantee that the system will find its data at the same device node it saw before the reboot.

This article discusses various name spaces within Linux: legacy names, device mapper names, I/O multipathing software names, volume management layers, and file system layers. All these layers create their own name spaces. With RAID system-based replication, we must take care to have the proper name for the replica; in a shared storage configuration, we have to provide an identical name on all cluster nodes. The article reviews naming schemes with respect to persistence and replication issues.

The article discusses software layers:
• Linux native layers (legacy sd names, device mapper names)
• Multipathing drivers: EMC PowerPath® names and Linux native multipath names
• lvm2 as an example of a volume manager
• File system issues (labelled file systems, file system UUIDs)

And replication or shared storage usage scenarios for:
• Local cluster
• Stretched cluster/disaster recovery configurations
• Off-host backup
• System copy

“I wrote this article because I got involved in a project that needed solutions. I wanted to contribute some kind of an article which will hopefully be useful for the EMC Proven™ Professional community and anyone else who might read it.”
Diedrich Ehlerding, Fujitsu Technology Solutions
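The persistence problem described above, legacy /dev/sdX names shuffling across reboots, is typically solved by keying on an identifier that travels with the LUN itself, such as its WWID. A small sketch of the idea (the device nodes, WWIDs, and role names below are all hypothetical): record roles against WWIDs once, then resolve whatever sdX names appear after boot back to those roles.

```python
# Stable configuration, written once: which WWID plays which role.
ROLE_BY_WWID = {
    "360060160a0b1c2d3e4f5000000000001": "oracle-data",
    "360060160a0b1c2d3e4f5000000000002": "oracle-data-replica",
}

def resolve_roles(scanned_devices, role_by_wwid):
    """Map post-boot device nodes to roles via WWID, ignoring legacy names."""
    return {role_by_wwid[wwid]: node
            for node, wwid in scanned_devices.items()
            if wwid in role_by_wwid}

# After a reboot the kernel may hand out different sdX letters...
before = {"/dev/sdb": "360060160a0b1c2d3e4f5000000000001",
          "/dev/sdc": "360060160a0b1c2d3e4f5000000000002"}
after = {"/dev/sdc": "360060160a0b1c2d3e4f5000000000001",
         "/dev/sdd": "360060160a0b1c2d3e4f5000000000002"}

# ...but keying on the WWID finds the same data either way.
roles_before = resolve_roles(before, ROLE_BY_WWID)
roles_after = resolve_roles(after, ROLE_BY_WWID)
```

The layers the article covers (udev-style persistent links, device mapper, multipathing, LVM, file system labels and UUIDs) are, in effect, different places to apply this same identifier-based mapping; replication adds the wrinkle that a replica has a different WWID but must still be found under a predictable name.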
31. Simplifying/Demystifying EMC's TimeFinder Integration with Oracle Flashback
Robert Mosco Jr., EMC Corporation

Integrating the EMC TimeFinder® business continuity application with Oracle can be a somewhat tricky endeavor. This article describes Oracle's Flashback and EMC TimeFinder technologies and how the two applications can help end users recover data.

Oracle Flashback and EMC TimeFinder are two separate technology applications. Understanding them, and then describing their integration for the purpose of recovering data, is the main goal of this article. I use diagrams and commands to show how both technologies can recover data. Once you have an understanding of each technology, the article progresses to an integration phase showing how the two applications can be used to develop a "repair" and/or a "recovery" plan. Discussions include point-in-time recovery, Flashback rewind, and recovery time objectives.

The article presents the following:
• Enabling and disabling Oracle Flashback
• Setting up a TimeFinder Oracle business continuity (BC) environment
• Dropping and/or deleting data (to simulate a data corruption situation)
• Recovering or flashing back to a previous point in time
• Developing a recovery or a get-well plan

All of these topics include diagrams and commands with simple explanations to help you understand the power of both technologies.
32. Service-Oriented Architecture (SOA) and Enterprise Architecture (EA)
Charanya Hariharan, Pennsylvania State University
Dr. Brian Cameron, Pennsylvania State University

Most companies are re-evaluating the way they purchase, deploy, manage, and use business applications due to challenging market conditions, competitive pressures, and new technologies. Software buyers want applications that leverage existing investments; customers demand solutions that provide quantifiable performance improvement. In response, companies must evolve into agile enterprises that can rapidly change direction. Yet their structures, processes, and systems are often inflexible, rendering them incapable of rapid change. Adding hardware, software, packages, staff, or outsourcing are not solutions. This is not a computer problem; it is a business problem.

To address this growing gap between IT and business, companies are adopting an end-to-end enterprise architecture approach to re-align IT development with business objectives. EA is a framework that covers all the dimensions of IT architecture for the enterprise; SOA provides an architectural strategy that uses the concept of "services" as the underlying business-IT alignment entity.

These forces drive the IT industry to deliver breakthrough technologies, many at the foundation layer. SOAs, specifically, are at the cusp of change. This article focuses on the relationship between EA and SOA and the resulting impact on business.

The key research questions in this research are:
• Are there any business impacts to marrying EA and SOA?
• How do organizations fit SOA with EA?
• Is it better to adopt either SOA or EA, and not both?
33. SRDF/Star Software Uses and Best Practices
Bill Chilton, EMC Corporation

Disaster recovery is becoming more critical as new laws are being passed to protect data, and legislation mandates extended data-retention policies. Many companies are building redundant data centers to avoid potential losses. The largest financial institutions are building three data centers: two located in close proximity, with the third in a different region of the country or in a different country altogether. These companies are managing three data centers and keeping all the information consistent by deploying EMC SRDF®/Star software.

SRDF/Star is data-replication software that uses synchronous and asynchronous data transfer to maintain consistency across multiple sites. The intent is to provide redundancy so that if one of the data centers experiences a disaster, the other sites can continue to replicate data and take over processing immediately.

Star is exceptional as a disaster recovery software program, but what else can you realize with this software, and what are the best ways to deploy it? The documentation on SRDF/Star explains how the software works and how to install it, but does not discuss best practices. This article seeks to bridge the gap between installation and deployment by reviewing:
• Building a test Star
• Load balancing applications across data centers
• Eliminating downtime while working on servers
• Migrating data while staying consistent
• Switching between concurrent and cascading, and back again
• Best practices and troubleshooting hints
34. Using EMC ControlCenter File Level Reporting for CIFS Shares
Chad DeMatteis, EMC Corporation
Michael Horvath, Fifth Third Bancorp

It can be a daunting task to gather file and folder statistics and properties from a NAS CIFS share with deep directory trees. It is especially difficult when you're using Microsoft Windows native tools that may have poor enumeration performance. These activities consume a great deal of time in an environment with tens of thousands of folders and millions of files.

This article discusses how the EMC ControlCenter® Network File System Assisted Discovery feature, introduced in v6.0, can be used to provide EMC Celerra® CIFS administrators with file- and folder-level reporting, covering UNC FLR configuration considerations and lessons learned during an EMC ControlCenter deployment. It provides practical examples of how you can use CIFS FLR reports to quickly determine file age and type distribution, top storage users, and utilization trending. These reports provide administrators with information to maximize storage utilization and address CIFS storage consumption issues before they impact end users.

The article addresses the following topics:
• Considerations and lessons learned during assisted discovery of network file systems in EMC ControlCenter, such as domain ID and host agent selection
• Steps to configure and schedule data collection policies for network file systems, taking into account collection criteria and CIFS folder and file counts
• Performance considerations for CIFS data collection, providing examples of performance statistics from the Celerra and the host agent server during CIFS scans
• Examples of how EMC StorageScope™ file-level CIFS share reports can be used to show aged and dormant CIFS files for archiving, file type distribution for reclamation, and top CIFS storage users
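The age- and type-distribution reports described above boil down to simple aggregations over per-file records. A sketch of the underlying arithmetic (the sample records and share paths are hypothetical; in practice the data comes from the StorageScope reports, not from walking the share):

```python
from collections import defaultdict

def file_type_distribution(files):
    """Sum bytes per file extension from (path, age_days, size_bytes) records."""
    totals = defaultdict(int)
    for path, _age, size in files:
        ext = path.rsplit(".", 1)[-1].lower() if "." in path else "(none)"
        totals[ext] += size
    return dict(totals)

def dormant_files(files, older_than_days=365):
    """Candidates for archiving: files untouched longer than the threshold."""
    return [path for path, age, _size in files if age > older_than_days]

sample = [
    ("\\\\nas01\\share\\q3\\report.pdf", 30, 2_000_000),
    ("\\\\nas01\\share\\old\\budget.xls", 900, 500_000),
    ("\\\\nas01\\share\\media\\demo.avi", 400, 700_000_000),
]

by_type = file_type_distribution(sample)
to_archive = dormant_files(sample)
```

Even on toy data the pattern is visible: one media file dominates the byte totals, and the dormant list immediately yields archiving candidates, which is exactly the reclamation story the FLR reports tell at scale.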
35. Cloud Computing Services—A New Approach to Naming Conventions
Laurence A. Huetteman, Technology Business Consultant

It has become increasingly difficult to have meaningful discussions about cloud computing without a common language. You will see every inconsistency in naming conventions for cloud technology when searching the topic on the Web; this inconsistency is also present during consulting or business conversations. Wikipedia defines cloud computing by using a six-layer stack of components with terms like client, application, services, platform, etc., with the word "cloud" preceding each component. Others refer to everything as a service, while still others try to map SOA, utility computing, and grid to cloud computing services. Even EMC's suite of cloud computing offerings, while technically impressive, seems to be a loosely coupled group of point solutions with no real structure or clear naming conventions (Hulk/Maui evolved into Atmos™, Mozy™, Pi, etc.) that are referred to generically as cloud computing. Vendors, partners, and customers spend valuable cycles deciding which model to follow in their discussions, or worse, inventing their own.

My article simplifies this process and applies logic to inconsistent naming conventions by proposing a new naming convention for cloud computing service offerings. It is easily understood and applied across the broad spectrum of services. It is intuitive, so it can be easily adopted, and it is based on a logical model. If successful, this model will be flexible enough to accommodate the anticipated growth of this evolving field of technology.

One consistent theme in all cloud computing discussions is the concept of tiering or layers of services. The thought here is to use an existing scientific classification and to map the layers to current and potentially future cloud computing components or services. One logical choice is to leverage the familiar and widely accepted term, cloud, but map the model to the common atmospheric cloud terminology. This article presents such a model to help you effectively engage with other IT professionals.
36. Leveraging Cloud Computing for Optimized Storage Management
Mohammed Hashim, Wipro Technologies
Rejaneesh Sasidharan, Wipro Technologies

Cloud computing refers to spreading IT computing resources across Internet cloud boundaries and offering selective access through consolidated service providers located at strategically placed data centers. Generally, users pay for computing capacity on demand and are not concerned with the underlying technologies or challenges used to achieve the increased and diverse storage scalability, server capacity, and other resource capacity and extensibility.

This article focuses on cloud computing, cloud models, storage, solutions, and comparing the different setups. It also describes features of storage optimization, security, leveraging the current IT infrastructure, and the advantages and disadvantages of the model.

The article presents the following:
1. Overview of SOA, SaaS, distributed, grid, and cloud computing
2. Cloud architecture and applying cloud computing to storage
3. Cloud models and outlining the cloud storage solution
4. Managing solutions over storage infrastructure with optimal performance
5. Security in the clouds and comparing cloud-based services
6. Advantages and risks of cloud computing
7. Potential future of the cloud

Many would embrace the ability to immediately increase capacity or add capabilities without investing in new infrastructure, training new personnel, or licensing new software. This article is helpful for any engineer who is involved with storage design and management.

“EMC's Knowledge Sharing program is becoming a new torch bearer to spread technical awareness among global professionals. This is an ideal platform where technologists from diverse backgrounds contribute tremendously toward widening their technical spheres. Besides, this initiative exposes the various technical/non-technical aspects of newer technologies and product advancements in a concise and lucid manner.”
Mohammed Hashim, Wipro Technologies

“The knowledge that we acquire today has a value exactly balanced to our talent to deal with it. Tomorrow, when we know more, we recall that part of knowledge and use it better; the EMC Knowledge Sharing program is a magnificent invention which has given me an abundance of global technical astuteness.”
Rejaneesh Sasidharan, Wipro Technologies
37. Archiving Cries for a Holistic Architecture
Paul Kingston, Solutions Architect, EMC Corporation, United States