The Value of a Smarter Data Centre



70% of today's IT budget is spent simply managing and maintaining IT infrastructure, leaving only 30% to fund the strategic initiatives that fuel innovation. IT managers need to address today's operational challenges and find ways to leverage IT infrastructure to transform spending, putting more dollars to work on solving new problems.



WHITE PAPER

The Value of Smarter Datacenter Services

Sponsored by: IBM
Michelle Bailey, Rob Brothers, Katherine Broderick
May 2011

IDC Global Headquarters: 5 Speen Street, Framingham, MA 01701 USA
P.508.872.8200  F.508.935.4015

The next five years in IT will likely be some of the most exciting and demanding for datacenter managers and the office of the CIO. In this post-recession period, organizations will be setting in place strategies that expand their core business while mining for new market opportunities. For many, this business transformation will include new product development, mergers and acquisitions, geographic expansion, cross-selling opportunities, and partnerships. Technology will be a critical enabler for these new initiatives, and a diverse and efficient datacenter strategy will be essential. While the emphasis during the economic downturn has been on consolidation and cost reduction, as IT organizations look to the future, success will be built on streamlining processes, reducing complexity, and improving time to market. IT organizations will be responsible for real IT transformation in the coming years, not simply the cost reduction of the past. They will have to strike a balance between integrating new applications and multiple infrastructure delivery models and continuing to support their already substantial IT portfolio. The future datacenter will be a highly automated set of standardized infrastructure where applications and data are deployed and provisioned on systems and in sites based on workload demand. New cloud-based technologies and methodologies will expand the options for IT organizations to source hosting or outsourcing providers for software, platforms, infrastructure, and datacenters at varying price points and locations. The backdrop to all of these choices is the physical backbone of the datacenter, otherwise known as facilities.
Power and cooling will need to be flexible enough to keep up with automated, virtualized, dynamic IT while respecting capacity limitations, efficiency goals, and budgets. With these new demands being placed on IT and facilities, it is not surprising that in a recent IDC survey of over 250 IT managers, more than one in five reported that their IT staff is not skilled enough to implement a private cloud. In another IDC survey of over 400 IT decision makers, lack of in-house IT expertise was listed as a top challenge to virtualization by over 22% of respondents. These two data points indicate that for many IT organizations, the journey toward adding incremental value to the business will require external help. In addition, the large number of forthcoming sourcing options and technology decisions will challenge IT organizations to maintain control without inhibiting innovation. As time to market becomes a differentiator in the economic recovery, speed of deployment must be balanced against security, availability, and service levels across the IT organization.
Achieving this equilibrium will require IT organizations to plan carefully, conduct ongoing monitoring and measurement, and draw on extensive experience.

IN THIS WHITE PAPER

This paper provides an overview of how IT organizations can optimize across the entire life cycle of their datacenters and build a strong foundation for future IT operations. Opportunities for optimization remain strong, including increasing efficiency on both the IT side and the facilities side, improving user support and protection, and increasing the flexibility of the datacenter to be more responsive to the business. With so many opportunities for CIOs and senior IT decision makers to focus on enabling business improvements, this paper also includes suggestions on where to start while keeping in mind a long-term vision.

SITUATION OVERVIEW

Today, the most significant challenge for IT organizations is meeting the needs of the business with limited resources. As business goals change from cost cutting to innovation and growth, IT organizations will have to rethink their datacenter strategy. Many companies have already extracted significant cost reduction through extensive consolidation, virtualization, and standardization programs, and in doing so have built credibility with the business. In addition, this improved architecture has laid the foundation for the next phase of the datacenter, which will place automation and new delivery models at the heart of supporting business change without trading off significant cost increases or lower service levels.

These new delivery models speed time to market by decreasing deployment and procurement time while increasing availability. To achieve these results and maintain service levels, IT needs to increase predictability: it needs to know the amount of resources available (both on-premise and off-premise) along with resource utilization for the past, present, and future.
To obtain this predictability, many leading IT departments are using sensors, software, and hardware to gather information about their datacenter design and ongoing operations. This information alone, unfortunately, is not enough to ensure predictable ebbs and flows in a datacenter. The next step is to perform analytics on this large set of disparate data and optimize the datacenter for efficiency, on both the IT side and the facilities side.

For many CIOs and IT organizations, a lack of insight into this powerful information is a constraint they are not even aware of. Further, in IDC's experience, where this information is available, many are unsure of how to leverage the data or are concerned with the risk of change, and so no decision is made. Inaction is often the by-product of a fear that if one change is made, it will cause waves, sometimes resulting in downtime, across the datacenter floor. This combination of not knowing where to start and the fear of making waves causes many IT organizations to overlook the many opportunities present in today's datacenters for increased efficiency, improved reliability, and increased flexibility.

#228261 ©2011 IDC
Opportunities for the IT Organization

Today's datacenters look very little like their predecessors, and this is attributable to how much CIOs and IT organizations have worked to attain efficient, dependable, agile datacenters for the business. The evolution from monolithic warehouses for IT to modular designs optimized for IT has been a long road worth taking. Datacenters of the future will evolve even further to address efficiency, on both the IT side and the facilities side, simplifying management while maintaining uptime and speeding time to market. To make smart, effective change in the datacenter, CIOs need to keep multiple goals in mind simultaneously. Because the datacenter today is so interconnected, they need to balance the goals outlined in the following sections while moving forward.

IT Infrastructure Resource Efficiencies

Many datacenter managers have already made lengthy inroads to increase IT resource utilization. IT managers have virtualized, increased the number of logical servers per system administrator, and deployed tools in datacenters to manage the environment more effectively. According to IDC's recent virtualization survey, in 2011 one in five physical servers shipped will be virtualized. From the workload view, that equates to 65% of all workloads running on a virtualized physical host. The difference between the physical view and the workload view is due to the increased density possible with virtualization: in 2011, IT managers will deploy over seven virtual machines (VMs) per host.

This new, highly virtualized world requires changes not only to the servers but also to the storage, networking, process, and people sides of datacenter operations. As shown in Figure 1, virtualization has driven an increased need for management consolidation and cost control. Virtualization has decreased server spending and decreased the cost of power and cooling systems, but it has greatly increased management costs.
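The physical and workload views cited above can be reconciled with simple arithmetic. A quick check (the 20% virtualized share and seven VMs per host come from the survey figures above; the assumption of one workload per non-virtualized server is mine):

```python
# Reconcile the "one in five servers virtualized" physical view with
# the "~65% of workloads virtualized" workload view via VM density.
virtualized_share = 0.20       # fraction of physical servers that are virtualized
vms_per_host = 7               # average VMs per virtualized host
workloads_per_bare_server = 1  # assumption: one workload per non-virtualized box

virtual_workloads = virtualized_share * vms_per_host                      # 1.4
physical_workloads = (1 - virtualized_share) * workloads_per_bare_server  # 0.8
share = virtual_workloads / (virtual_workloads + physical_workloads)
print(f"{share:.0%} of workloads run on a virtualized host")  # ~64%
```

At roughly 64%, the density figure lines up with IDC's 65% workload estimate.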
This is largely because today's systems administrators handle virtual machines in much the same way they handle physical machines. The explosion of virtual machines, depicted in Figure 1 by the red line, exposes the virtualization management gap that has emerged in many of today's datacenters. On the storage side, the explosion in virtualized computing requires smart, agile, highly utilized storage resources. Networking needs to be able to see into servers, not only at the physical level but also at the virtual level; this is essential for deploying policies for VMs over the network.

Power and cooling, historically a stagnant, inflexible part of the datacenter, needs to stretch and become malleable to keep up with today's virtual resources. As the number of VMs increases and decreases, and as machines move physically throughout the datacenter, it is essential for power and cooling to change airflows, power draws, and temperatures accordingly. The stagnancy on the facilities side and the increasing complexity on the IT side make the results of a recent IDC survey unsurprising. As mentioned earlier, in a survey of over 400 IT decision makers, lack of in-house IT expertise was listed as a top challenge to virtualization by over 22% of respondents. In other words, about one of every five IT departments lacks internal expertise in virtualization. The need for external help with such a crucial technology for the future of the datacenter is real. External datacenter service providers can help IT departments make the most of their resources in terms of process improvement, better management through tools and software, more accurate analytics, and an objective point of view.

FIGURE 1: New Economic Model for the Datacenter. Worldwide spending on servers, power and cooling, and management/administration ($B), shown against the physical and logical server installed base (M), 1996 to 2013. The widening spread between the logical and physical installed base illustrates the virtualization management gap. Source: IDC, 2011.

To address these growing concerns around management, power, cooling, and IT infrastructure in the datacenter, many datacenter managers are charting a course toward cloud computing. This long-term process will not be completed summarily but will happen through a series of stages. Figure 2 depicts these stages and where most datacenters and IT organizations fit today along their journey toward the private cloud. The stages are:

Pilot. Fifteen percent of datacenter managers are in this stage of testing virtualization. Less than 10% of their servers are virtualized, and they are not yet familiar with the virtualization management gap problem depicted in Figure 1.

Consolidation. The majority of IT organizations are in the consolidation phase. These IT organizations now have experience with virtualization and are seeing savings in terms of physical server cost, power, cooling, and space. Although their production environment runs on virtual IT assets, only ad hoc policies are in place for management. These organizations are starting to see increased virtual machine deployments and increased management costs.
Assured computing. One in four CIOs is in the assured computing stage. The problem of management and visibility has been recognized and is starting to be addressed. IT processes and policies are partially integrated and standardized, and VMs are becoming more mobile and reliable. Production-level, mission-critical workloads are being run in this virtual environment.

Private cloud. The virtualization management gap has been addressed in this stage. Processes, policies, and automation tools are in place to make administering a virtual server less cumbersome than managing a physical one. Only 5% of CIOs are in this position, but many are headed in this direction.

FIGURE 2: Virtualization Maturity (Source: IDC, 2011)

Phase:                  Pilot | Consolidation | Assured Computing | Private Cloud
Staff Skills:           Little or no expertise | Hands-on expertise; some formal training | Formal training; certification required | Certification desirable
Technology & Tools:     Simple static partitions | Simple mobility: manual and off-hours; matched application pairs | Portable applications: automated failover | Policy-based automation; service management; CMDB implemented; life-cycle mgmt; self-service delivery
Financial Impact:       No substantial financial impact | Measurable hard savings: consolidation, power/real estate | Cost-justified savings: business continuity | TCO recognized or chargeback models established; variable costs
IT Process & Policies:  Skunk works | Ad hoc | Partially integrated, partially standardized | Fully integrated, fully standardized
Line of Business:       Hidden | Revealed | Transparent | Engaged in governance process
Application Usage:      Test/development | Production: noncritical | Production: business critical | Production: service profiles & catalogs
% of Customers:         15% | 55% | 25% | 5%
Average VM Density:     4 | 6 | 10 | 35
Experience:             9–12 months | 9 months–2 years | 1.5–3 years | 3–5 years
% Virtualized Servers:  <10% | 25% | 50% | 80%

The beginning stages of this maturity curve, the pilot stage and the consolidation stage, present hard cost savings in terms of physical IT infrastructure, power, and cooling. In the later stages, savings are presented in the form of total cost of ownership (TCO), as the savings are largely in soft costs such as management and downtime. Moving along this curve requires IT directors to focus not just on the singular goal of increasing IT utilization but also on balancing reliability and flexibility.
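As a rough consistency check, the per-stage VM densities and customer shares in Figure 2 imply an installed-base-wide average density close to the seven-plus VMs per host cited earlier. A minimal sketch (weights and densities are taken from Figure 2; treating customer share as a proxy for host share is my simplifying assumption):

```python
# Weighted average VM density across the four maturity stages in Figure 2.
# Each tuple is (stage, share of customers, average VMs per virtualized host).
stages = [
    ("pilot", 0.15, 4),
    ("consolidation", 0.55, 6),
    ("assured computing", 0.25, 10),
    ("private cloud", 0.05, 35),
]

average_density = sum(share * density for _, share, density in stages)
print(f"weighted average VM density: {average_density:.2f}")  # 8.15
```

The result, about eight VMs per host, is in the same range as the "over seven VMs per host" figure quoted earlier in the paper.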
As stated earlier, a recent IDC survey found that of 250 IT manager respondents, more than one in five believe their IT staff is not skilled enough to implement a private cloud. It is clear that additional, external help will be needed to move along this virtualization maturity curve. External datacenter service providers can help IT with an objective viewpoint, advanced analytics (to separate real issues from perceived ones), and years of experience in multiple, diverse datacenter environments.

Improving Storage Efficiencies

IT organizations are being pulled in multiple directions simultaneously: pressure for increased efficiency, flexibility, and availability through evolved deployment models on one side, and the opposing force of growing IT complexity and shrinking budgets in already extremely lean organizational structures on the other. Nowhere is this juxtaposition clearer than in the world of storage. IDC expects storage capacity in enterprises to soar through 2014 (see Figure 3). This growth is occurring not only in the more familiar structured data but also in unstructured data. Businesses are already growing reliant on mining and analyzing this structured and unstructured data for improved intelligence, competitiveness, and financial results. It is difficult to imagine how IT will keep this growing, complex data swamp available, let alone gain value from it.

FIGURE 3: Worldwide Enterprise Storage System Capacity Shipped (PB), 2008–2014. Source: IDC, 2011.

According to IDC's latest enterprise storage forecast (Worldwide Enterprise Storage Systems 2010–2014 Forecast Update: December 2010, IDC #226223, December 2010), the quantity of petabytes shipped is expected to increase at a compound annual growth rate (CAGR) of 50% over the next four years (refer back to Figure 3 for additional detail). The ongoing management of enterprise storage systems will always play a critical role in any IT environment. With businesses moving from standalone systems to virtualization, and from local applications to cloud, the task of maintaining these devices internally is becoming increasingly complex for IT staff. It is IDC's opinion that, because of the complexity and proprietary nature of storage subsystems, utilizing experts who work with these systems on a regular basis and who bring industry best practices for deploying and supporting these arrays is the best way to get the most value, performance, and reliability from these IT assets.

Datacenter Flexibility to Adapt to Changes in Demand

Datacenters of the future will be built with modularity in mind. These buildings will really be big computers that adapt and change their operations to respond to the needs of the business. This flexibility is achieved through predictable, repeatable designs that can be easily monitored and measured during operations.

To achieve these modular, amendable designs, IT organizations will construct greenfield (new implementation) and retrofit datacenters. Both greenfield and retrofit datacenters will achieve more efficient equipment, better standardization, more evolved processes, longer life cycles, and better overall TCO than those of the past.

To attack these issues and make the most of the IT organization's investment, CIOs need to consider their organization's innate abilities and possibly get external help. It is sometimes difficult to see the forest for the trees and really identify the sources of problems. This is an essential stage for choosing the priorities for new or retrofitted datacenters. With IT budgets not getting any larger, it is important for datacenter design teams to identify what is really important to future designs and operations. These priorities need to be identified, and their value needs to be quantified in terms of downtime, dollars, and people.
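Returning briefly to the storage forecast above: a 50% CAGR compounds quickly, and a one-line calculation shows why the capacity curve in Figure 3 is so steep (the rate and four-year horizon come from the IDC forecast cited earlier; everything else is arithmetic):

```python
# Compound annual growth: petabytes shipped grow at a 50% CAGR for four years.
cagr = 0.50
years = 4

multiplier = (1 + cagr) ** years  # 1.5^4
print(f"four-year growth multiplier: {multiplier}x")  # 5.0625x
```

In other words, annual petabytes shipped would roughly quintuple over the forecast window, which is what turns storage management into the conundrum the paper describes.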
This quantification increases the likelihood of these priorities surviving strict budgets. These strategic design choices are difficult, but they can really set up the IT organization for success down the road.

In addition to achieving flexibility in the on-premise datacenter, IT organizations are increasingly looking to off-premise solutions for flexibility and for the freedom to focus on critical workloads internally. Public cloud software-as-a-service (SaaS) solutions require IT organizations to prioritize what should be moved to the cloud and what should remain on-premise. With platform-as-a-service (PaaS) solutions, new frameworks for application development need to be worked out so that applications are easily portable to the cloud and back. In the case of infrastructure-as-a-service (IaaS) solutions, capacity planning needs to extend beyond the four walls of the onsite datacenter and out to the cloud.

The options for increasing savings, flexibility, and reliability abound. In fact, many CIOs find themselves with so many options that it's difficult to know where to begin. In many cases, the help of an outside partner is necessary to determine how to go about making a change and how to make sure the solution is effective. The use of datacenter services can be a wise choice for many datacenter managers at this fork in the road. This presents yet another choice: who to ask for help. There are quite a few factors to consider when evaluating who should be IT's partner in the datacenter.
FACTORS TO CONSIDER WHEN EVALUATING DATACENTER SERVICES

As enterprise IT departments struggle with the challenges of maximizing the performance of their IT landscape, many need help from external datacenter service providers to facilitate that process across the IT ecosystem. These services help datacenter managers identify opportunities where they can succeed today and set themselves up for more success tomorrow. After many years of covering the datacenter environment, IDC has identified the following best practices for IT departments that are evaluating high-quality services for optimizing the datacenter.

Key Potential Success Factors

Putting the Puzzle Pieces Together

Extensive knowledge across all aspects of the IT landscape, from facilities to IT to the customization required for specific solutions, is a critical consideration when selecting a datacenter service provider. Knowledge of all aspects of the datacenter has never been more pertinent. Today, the datacenter is interconnected, mobile, and dynamic, with VMs moving, tools automating, and power and cooling flows changing frequently. Breadth of knowledge is vital, from storage to computer room air conditioners (CRACs). At the same time, datacenter managers should not give up depth of expertise for breadth. System-level knowledge is just as important as understanding the connections between systems. Both systems and datacenter operations are possible opportunities for optimization, and both are potential sources of downtime. IT managers need a partner in the datacenter that understands not only how complicated these environments are but also how to simplify daily operations.
The true help in today's datacenter is making operations appear automatic and simple while simultaneously keeping track of the physical backbone (infrastructure and facilities).

Thinking Globally

Datacenter service providers with experience across multiple geographies, multiple datacenter environments, and the various stages of the virtualization maturity curve are invaluable. No two datacenters are alike, and IT organizations should look for a datacenter service provider that has seen it all.

Experience across products and geographies adds value to datacenter services in two ways. First, datacenter service providers with experience working in multiple environments understand the common dilemmas faced by datacenter managers and the solutions that are time-tested and work. Second, these service providers work with clients at all stages of the virtualization and cloud maturity curves. They can help a datacenter go from the earliest stages of adoption to the late stages of automation and cloud usage models. This can be done in one large project, one small project, or a series of smaller projects, because of the broad portfolios available from datacenter service providers with experience and expertise.
Being Credible Quickly

The growing complexity of IT environments requires a detailed, coordinated approach to identify, diagnose, and resolve specific issues in the IT infrastructure. Today's datacenter systems and operations are often tracked in disparate spreadsheets and workbooks, with little method in place for continuity or for replicating what works. Bringing in a datacenter service provider with systematic, time- and customer-tested strategies specific to a given datacenter environment will have positive effects in the long term. This approach also makes moving along the evolution curve that much easier, because as the business and IT scale, systems, datacenter capacity, and operating procedures can scale as well.

Delivering Rapid Return with a Strategic Goal in Mind

Datacenter managers need to choose projects with quick ROI while keeping in mind the 15- to 20-year life cycle of their brick-and-mortar datacenter. These projects need to have a quick but, more importantly, lasting payoff for IT in terms of efficiency and availability. These "quick wins" are valuable on their own and as the beginning stages of larger projects. Up-front successes pave the way with business units and executives for further optimization. At an organizational level, they allow IT to demonstrate its relevance and ability to deliver for the business. The rapid returns of early projects are crucial to the long-term viability of budgets and to approval for strategic shifts and initiatives.

Understanding the Importance of Analytics Throughout the Datacenter Life Cycle

Datacenter analytics is an emerging field within the IT organization.
While customers have long had a surplus of information on their server, storage, and networking devices, as well as on mechanical and electrical equipment, the ability to capture this data for meaningful analytics that provides a holistic view of the datacenter remains aspirational for many.

In IDC's experience, information capture on systems and facilities is the beginning of any IT transformation project and, until recently, has been an extremely manually intensive task. With the advent of virtualization comes a new wave of systems management tools that enable more automated data capture across the entire datacenter. This information is continuously captured, increasingly in real time, from a variety of sources, including utilization statistics, deployment and provisioning tools, orchestration and governance practices, health monitoring systems, and failover and disaster recovery activities.

This large body of information for the enterprise datacenter opens the possibility of higher-level analytics that can intelligently provide insight across the entire life cycle of the datacenter and optimize both day-to-day tasks and ongoing operations of the entire facility. Predictive analytics opens the possibility of taking a wide variety of disparate data and sorting out what is really relevant in order to set in place accurate, long-range planning strategies. Imagine a datacenter where a site-based outage is predicted before it happens by understanding system, application, and power dependencies along with historical information on system, application, and utility performance. From such an incident, analytics could suggest a new architecture or blueprint for the datacenter.
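To make the idea concrete, predictive approaches of this kind often start with something as simple as flagging readings that deviate sharply from recent history. A minimal sketch (the rolling z-score method, window size, and threshold are illustrative choices of mine, not a description of any vendor's tooling):

```python
import statistics

def flag_anomalies(readings, window=5, z_threshold=2.0):
    """Return indices of readings that deviate sharply from the
    preceding `window` readings (a rolling z-score check)."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev == 0:
            continue  # flat history: skip rather than divide by zero
        z = (readings[i] - mean) / stdev
        if abs(z) > z_threshold:
            flagged.append(i)
    return flagged

# Steady CPU utilization (%), then a sudden spike that might precede trouble.
cpu = [50, 51, 49, 50, 52, 51, 90]
print(flag_anomalies(cpu))  # [6]
```

In a real datacenter, the same loop would run across power, cooling, and application metrics and feed a dependency model rather than a single series, but the feedback principle is the same.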
The real benefits of this type of analysis are twofold. The first benefit is improved day-to-day operations; the second, and most important, payoff is in using these analytics as part of a feedback loop for continuous improvement. Datacenter analytics not only can be part of a cycle of making daily improvements but also, once a change is made, can be recalibrated to drive a new set of analytics that constantly yields an enhanced datacenter environment and a more predictable, repeatable service. This type of continuous improvement should, of course, be tied to business metrics so that the datacenter is fully aligned with the organization. These metrics include projected revenue growth, profit margins, customer service requirements, and new business or regional expansion.

These New Solutions Require Support

In many cases, enterprises assume that because they have virtualized their datacenter environment, they will not need support for the infrastructure and software. This most certainly is not the case. The main reason is that the complexity of these configurations can cause even the savviest end users to need help when things go wrong. Whether the problem is a hardware issue, a software issue, or a user error, when servers are running mission-critical workloads, they require external support services.

Approach to Support and Deployment Needs to Change

Although enterprises do need to support their environments, how they support a virtualized environment does need to differ from the traditional server support model. Because mission-critical data will be on fewer servers, when something goes wrong, it can have a broad impact across many departments in the organization. The ability to contact a vendor that has intimate knowledge of the environment will be crucial. IDC interviewed a customer at an IDC virtualization forum that was in the process of "devirtualizing" its environment at significant cost because it did not go through a robust planning process.
As a result, the customer had virtualized several applications that did not work well together on the same physical server, which led to significant application performance problems. The situation deteriorated to such an extent that the customer determined that the best remedy was to devirtualize and then start again. These issues and others are significant, and IDC believes that enterprises need to enlist organizations that have performed complex virtualization implementations.

Choice of Support Vendor

Virtualized datacenters require a vendor that can support the entire environment rather than just one technology asset. As a result, selecting a vendor that has a robust support portfolio and can look across all of the assets required to support the business processes becomes increasingly critical in a highly virtualized environment.

IBM SERVICES OFFERINGS

IBM has all of the attributes that IT organizations should consider when choosing a partner for datacenter operations and design services: breadth of offerings, deep expertise, a systematic approach, an experienced team, strong support, and industry-leading analytics. IBM's strengths are the breadth of its offerings and the ability to deliver a holistic set of services that identify interdependencies across the IT portfolio and provide analytics that can optimize across the entire life cycle of the datacenter. IBM addresses the datacenter during its entire life cycle, across IT and facilities. Key services include:

Extend

IBM extends the life of the datacenter and the life cycle of IT assets with server virtualization, storage automation, and middleware optimization. These services allow IT managers to defer constructing a new datacenter or procuring new IT equipment. At the same time, they increase the efficiency of the systems already in place in terms of power, cooling, space, and personnel time.

Server Optimization and Integration Services. In terms of virtualization, most datacenter managers have already virtualized the "easy workloads," and they do not know where to go with more complicated virtualization projects in terms of time, resources, and skill sets while still providing a strong ROI for the business. IBM services work with datacenter managers to virtualize complex Wintel workloads. IBM utilizes an outside tool and partner called CiRBA to automatically collect workload characteristics and interdependencies. IBM then runs profiling to determine which workloads are good virtualization candidates; the profiles break down into six patented workload scenarios. IBM virtualizes the appropriate workloads using a standard factory model for faster implementation and repeatability. This service leaves IT with efficiencies from consolidated physical servers, a quick virtualization plan, and lowered power and cooling costs.

Intelligent Storage Service Catalog. The rapid explosion of structured and unstructured data predicted by IDC will lead to a storage management conundrum for IT organizations. IBM can automate storage provisioning to speed time to market and decrease management costs.
This IBM service also frees up storage architects so they can focus on adding incremental value to the business rather than maintaining and managing existing storage. The Intelligent Storage Service Catalog defines common application-based standards, maps the standards to the appropriate storage, and builds the corresponding catalogs and requests. The process is policy based for ease of repeatability. This service can increase storage utilization, decrease management time, and decrease the demand for tier 1 storage.

Middleware Design and Strategy Services. The shift toward application rationalization is happening in many datacenters as IT organizations take a hard look at efficiencies, virtualization opportunities, and the amount of time they spend on management (not innovation). This process is important in the middleware environment as well. IBM combines its performance and relationship analysis multiple error diagnostic (ParaMedic) tool, which identifies abnormal performance bottlenecks from unusual CPU utilization, with its performance and capacity evaluation services (PACES). PACES analyzes and optimizes workloads by looking at Web response times. This process enables IBM and the IT organization to model performance outcomes before the actual implementation. In the end, IBM uses these tools to speed middleware consolidation and optimization so that IT managers have what they need and do not have to manage what they do not.
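An application-based storage catalog of the kind described above can be pictured as a small policy table that maps each application class to a storage standard. A minimal sketch (the class names, tiers, and quotas are invented for illustration; this is not IBM's actual catalog):

```python
# Toy policy-based storage catalog: application class -> storage standard.
CATALOG = {
    "transactional-db": {"tier": 1, "replication": "synchronous", "quota_gb": 500},
    "analytics":        {"tier": 2, "replication": "asynchronous", "quota_gb": 2000},
    "archive":          {"tier": 3, "replication": "none", "quota_gb": 10000},
}

def provision_request(app_class):
    """Map an application class to its storage standard, so requests are
    repeatable and do not default to scarce tier 1 capacity."""
    if app_class not in CATALOG:
        raise KeyError(f"no storage standard defined for {app_class!r}")
    return CATALOG[app_class]

print(provision_request("analytics")["tier"])  # 2
```

Because every request goes through the policy table, provisioning is repeatable, and only workloads that genuinely need tier 1 storage receive it.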
Rationalize

IBM's services help datacenter managers perform a portfolio rationalization of their datacenter. Many environments today are reactionary and have not established what assets, management, and designs are necessary to be proactive for the business. IBM's rationalization services include:

• Datacenter Strategy. IBM's datacenter strategy service helps businesses balance the goals of budget, availability, and expanding services. The need for tools is apparent in many IT organizations: this exercise is not undertaken with much regularity, and the risk of not taking a hard look at overall datacenter strategy is too great to ignore. IBM uses cash flow analysis, outage analysis, and capacity planning tools to set up the datacenter for success. In particular, the capacity planning and resiliency tools are patent-pending, leading-edge tools developed in collaboration with IBM Research. The capacity planning tool provides a new level of predictability that can be used to plan for the next 10–20 years. The tool supports decision making and improved performance through complex modeling and Monte Carlo simulations to determine the best way to meet unpredictable future demands for datacenter capacity. The datacenter strategy service is useful for datacenter managers wondering where to start while keeping the delicate balance of the datacenter (budget, availability, and expansion) in check.

• Datacenter Consolidation and Relocation. This patent-pending technology maps dependencies of all IT assets up to the application level. Analytics for Logical Dependency Mapping (ALDM) is ideal for datacenter relocation or consolidation. ALDM allows datacenter managers to focus on application availability during datacenter moves and consolidation because what runs together, unfortunately, goes down together. With this new technology, the risk is mitigated because the dependencies are known.
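ALDM itself is patent-pending and proprietary, but the principle it exploits, that what runs together goes down together, reduces to grouping hosts by their discovered dependencies so each group can be relocated as a unit. A minimal sketch, with hypothetical hosts and links standing in for real discovered data:

```python
from collections import defaultdict, deque

# Hypothetical discovered dependencies (pairs of hosts that communicate).
# Real dependency discovery derives such links from observed traffic and
# configuration, up to the application level.
links = [("web01", "app01"), ("app01", "db01"), ("batch01", "db02")]

def move_groups(links):
    """Group hosts into sets that must be moved together (connected components)."""
    graph = defaultdict(set)
    for a, b in links:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for host in graph:
        if host in seen:
            continue
        group, queue = set(), deque([host])
        while queue:  # breadth-first walk over the dependency graph
            h = queue.popleft()
            if h in group:
                continue
            group.add(h)
            queue.extend(graph[h] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

print(move_groups(links))  # [['app01', 'db01', 'web01'], ['batch01', 'db02']]
```

The connected-components pass is only the last and simplest step; the hard part, and the value of a service like ALDM, is discovering the links accurately in the first place.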
In a world where the cost of moving a datacenter can sometimes equal or exceed the cost of building a new one, this kind of dependency mapping is very valuable.

Design

As noted earlier, the physical backbone of the datacenter is often forgotten when these transitions and services come into play. With IBM, this is not the case; the company has expertise in datacenter design, construction, and operation from its worldwide hosting and outsourcing businesses. This experience can be brought in to help datacenter managers figure out how to retrofit, expand, or build a new datacenter. The services that help IT organizations expand datacenter capacity include:

• Scalable Modular Datacenter (SMDC). This package is for new datacenter needs in small to midmarket companies that are experiencing capacity, availability, or flexibility limitations. The package includes a preintegrated enclosed rack with cooling, onsite services and consultation, and a power distribution unit (PDU). The greatest value-add from IBM here is the "single throat to choke": with small to midmarket companies sometimes lacking facilities knowledge or staff bandwidth, having a single point of contact to provide project management services and manage other vendors is invaluable. This solution is a great place to start a new datacenter footprint without undertaking a massive project.
• Portable Modular Datacenter (PMDC). This solution is a great way to add capacity to an existing site, create a new point of presence, increase disaster recovery capabilities, or gain capacity in remote areas. IBM offers preintegrated datacenters in shipping containers (20 feet and 40 feet long) with facilities included. The specification includes cooling, uninterruptible power supply (UPS), fire suppression, batteries, and remote monitoring. IBM is vendor neutral for IT equipment, although it can, of course, populate the container with IBM systems as well.

• Enterprise Modular Datacenter (EMDC). This IBM service for enterprise clients supports modularity from the first stages of the datacenter build. By building modularity into the datacenter design from the ground up, enterprises avoid costly retrofits down the road and improve flexibility to meet changing business requirements. The EMDC is essentially a "shrink-wrapped," standardized datacenter between 5,000 and 20,000 square feet in size. This approach to enterprise-level datacenter construction provides just-in-time compute for the business without overprovisioning today for tomorrow's computing requirements.

Datacenter managers undertaking a new datacenter build face a plethora of choices, and the decisions they make will, in many cases, affect the datacenter for the next 15 to 20 years. This is a difficult position to be in given the unpredictability of IT's future needs and the lack of information and perspective. IBM's datacenter life-cycle cost tools can help rightsize the trade-offs in capital expenditure (capex) and operational expenditure (opex) for different types of cooling (one of the longest-term impact decisions in datacenter design).
IBM uses these tools to design the modular datacenters mentioned in this section.

Manage

A common problem for datacenter managers today is making more time for their staff to focus on strategically critical projects rather than mundane day-to-day tasks. These day-to-day tasks must be accomplished to keep the datacenter, IT, and the business running, but they add no incremental value to IT or the business. To solve the problems of today's datacenter and increase flexibility, efficiency, and reliability, IT needs to focus on incremental improvements rather than just keeping the ship afloat. The problem is that there are a finite number of IT staff members, so IT managers need a datacenter service provider to handle maintenance and day-to-day chores, freeing up internal IT staff to focus on helping the business. IBM's services to help manage the IT environment include:

• Managed Server Services. IBM's Enterprise Server Managed Services provide monitoring and management of the IT infrastructure, including servers, middleware, storage, and databases. IT organizations that utilize this service hand off an essential but not incrementally valuable task, freeing up administrators to innovate, create value-adding services for lines of business, and focus on more mission-critical work. This service is available for System z and System i platforms, with local service delivery where offshore delivery is noncompliant. Native-language delivery and support are available for Japan, Korea, and China.
• Managed Storage and Data Services. Given the oncoming explosion of data and the storage capacity and management demands being placed on IT, IBM's Enterprise Managed Storage Services are timely. This service features flexible, scalable, resilient storage capacity on demand for clients. Disk, archive, backup, and restore management services are available as part of a fully managed solution, including reporting, monitoring, management, and allocation-based pricing. Location options abound: an IBM service delivery center, a hosting center, or the customer's own datacenter. For connectivity, both storage area networks (SANs) and local area networks (LANs) are supported. This highly secure service cures the headache of many datacenter managers looking to offload some of the data onslaught and free up internal resources for more strategic initiatives.

• Tivoli Live Monitoring Services. For datacenter managers facing repeated downtime and a deluge of alerts, IBM offers Tivoli Live Monitoring Services. This service gives datacenter managers greater visibility into incidents in their infrastructure without installing management tools. IT organizations are constantly looking for better insight into availability, capacity, and energy efficiency. Tivoli Live Monitoring Services uses intelligent automation and policy-based alert monitoring to limit issues that result in downtime, ultimately freeing up IT staff to focus on problems affecting business performance.

FUTURE OUTLOOK

The future outlook for external datacenter services is bright. Now that most of the easily virtualized workloads have been consolidated, the next hurdles for many IT shops are to decrease management time and resources, increase availability, and expand to support the business. Many of these process and soft issues are difficult to address from within the organization.
Bringing in an external point of view, both to help the IT director choose where to begin while balancing resource limitations, availability, and expansion and to see the forest for the trees, is in many cases a valuable endeavor.

IDC believes that the opportunity for IT managers to gain knowledge, strategic insight, and an improved IT environment from datacenter service providers will grow as more companies move along the virtualization management curve. For datacenter service providers, the keys to success are breadth and depth of expertise, proven strategic insight, global experience, and analytically driven actions.

CHALLENGES/OPPORTUNITIES

Challenges

• Datacenter managers are still focused on day-to-day survival instead of long-term excellence. IBM needs to get these IT organizations to think differently about their IT and facilities environments.
• Cloud computing and the rise of off-premises computing are a tempting proposition for some datacenter managers. Fortunately, IBM has a strong hosting offering that complements its internal services offerings.

• Changing behaviors is difficult. To really increase datacenter efficiency, agility, and availability, IBM needs to effect change at an organizational level.

Opportunities

• With a large, ongoing post-recession buildout of enterprise-class datacenters to replenish an outdated and out-of-capacity supply, IBM's timing is impeccable.

• IBM has the ability to cross-leverage its other lines of business, including systems and hosting. IBM can either be best-of-breed one-stop shopping or work with other vendors in the datacenter to provide what the situation requires. This is a true comparative advantage in the market today.

• Virtualization is at the point in its adoption curve where complexity is becoming the limiting factor. IBM can help datacenter managers optimize that last mile.

CONCLUSION

Today's datacenters have solved one problem (physical server sprawl) with virtualization and are now dealing with a resulting problem (management and virtual server sprawl). Many CIOs know they need help, but they do not know where best to begin optimizing for flexibility, reliability, and efficiency. There are often trade-offs between goals, and, unfortunately, stagnancy is not a viable option. IBM's datacenter services are meant to help. Datacenter managers have many options to choose from with the assistance of a reliable, proven partner.
IBM's breadth and depth of analytically backed offerings are proven not only through measurement and analysis but also in its own outsourcing and customer datacenters. For datacenter managers on the road to optimization, or those looking for where to begin, IBM is an excellent place to start.

Copyright Notice

External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.

Copyright 2011 IDC. Reproduction without written permission is completely forbidden.

SFW03013-USEN-00