The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family
This IDC white paper highlights how IBM eX5 systems with MAX5 memory technology play a significant role in increasing the value of memory-dense servers.

Transcript

  • 1. WHITE PAPER
    The Value of Memory-Dense Servers: IBM's System x MAX5 for Its eX5 Server Family
    Sponsored by: IBM
    Michelle Bailey, March 2010

    IDC OPINION

    The technology industry has reached a crossroads. After more than a decade of physical server sprawl, nearly exponential growth in storage, and a proliferation of network technologies, IT organizations are now facing tremendous challenges in planning for a future enterprise architecture that is less expensive, less complex, and more agile than today's infrastructure. At the core of this reinvention is virtualization and, increasingly, a converged set of IT infrastructure built on a service-centric approach to supporting the business. This new technology cycle is squarely aimed at improving utilization rates, driving efficiency across the datacenter, and simplifying deployment and ongoing maintenance in order to ultimately shorten time to market and optimize the business value from IT investments.

    Many IT organizations are well on their way to creating a more flexible and responsive enterprise architecture. Server virtualization has quickly become mainstream and is the foundational platform for the datacenter. More than 50% of all server workloads are now deployed on virtual machines, and this is driving a sea change in the types of technologies that IT organizations are procuring and configuring and in their approach to IT processes and practices. We have already seen customers move toward more richly configured servers to maximize the number of virtual machines (VMs) consolidated per physical server. The correct balance of processor, memory, and I/O is critical in architecting an effective virtualization solution.

    Initially, the emphasis on building physical systems for virtual machines focused on multicore processors. However, as virtualization has matured, most IT organizations now report that the single greatest limiter in driving higher VM densities is the amount of memory that their virtual machines can access. Servers that were previously built to support single applications have become inadequate in meeting the virtualization goals of customers. Prior to virtualization, only the most demanding workloads required high memory footprints — large databases, OLTP applications, and enterprise ERP and CRM solutions. Today, because each virtual machine requires its own memory to ensure consistent application performance, systems with large memory capabilities become essential. As a result, new x86-based servers are coming to market that can massively expand memory capacities.
  • 2. With this change in technology comes a new set of metrics for measuring ongoing success in virtualization. "Cost per application" or "cost per VM" is now used to gauge the effectiveness of technology investments, and as a consequence, customers are looking to match their consolidation goals with newer systems infrastructure that helps maximize VM densities relative to physical hardware.

    SITUATION OVERVIEW

    A New Approach to Datacenter Economics Is Required

    For many years, IT organizations would install at least one physical server per application, and often three to five servers per application, when taking into account test/development, staging, and disaster recovery environments. This inevitably led to an explosion in the number of physical systems and devices installed as well as in datacenter sites. Prior to virtualization, most IT organizations faced:

    - Physical server sprawl. The number of installed physical servers has increased sixfold from just over 5 million in 1996 to more than 30 million in 2010.
    - Overprovisioning and underutilized assets. Most applications consume a fraction of a standalone server's total capacity, averaging 5–10% CPU utilization on a typical x86 server.
    - Spiraling operational costs. Most customers have underinvested in systems management and automation tools relative to the investments made in x86 systems infrastructure. This has meant that many datacenters employ manually intensive processes, resulting in greater burdens on staff.
    - Server sprawl that exacerbates the power and cooling challenges of aging datacenter facilities. The average age of a datacenter in the United States is 12 years. This means that the typical datacenter was built to support a substantially different set of infrastructure that has become increasingly dense over time. Most datacenters were designed to support 1–2kW per rack versus the 8–15kW per rack that we routinely observe today.

    Virtualization Is the Killer App for the Datacenter

    Virtualization technologies have completely transformed the way in which customers build, deploy, and manage their systems infrastructure. Virtualization tools allow multiple logical servers or "virtual machines" to run on a single physical server. By consolidating applications onto fewer physical servers, customers have been able to slow the sprawl of physical servers within their datacenters. In fact, today most datacenters report that virtualization has become the default build for new server installations (see Figure 1).
  • 3. Customers have realized three primary benefits in deploying virtualization technologies:

    - Physical server consolidation. Consolidation remains the main driver for deploying virtualization today. By consolidating multiple virtual machines on a single physical server, customers have less server hardware to purchase and fewer installed servers. The most direct benefits are server hardware savings and, consequently, fewer hardware maintenance agreements. Other benefits include reduced energy demands for the datacenter and lower requirements for floor space and rack space. This consolidation helps in reducing staff burdens for purchasing, deployment, and hardware maintenance; however, customers have yet to see any significant benefit from application and OS management.
    - Improved availability and disaster recovery. Mobility tools enable the migration of a virtual machine from one piece of physical server hardware to another. Customers have found these technologies particularly useful for reducing planned downtime and alleviating the pressure on shrinking maintenance windows. Mobility tools are also used to combat unplanned downtime and can be used alone or in conjunction with existing tools such as clustering and replication. Over time, we expect that customers will be able to regularly move virtual machines not just across the datacenter floor but also from one site to another, creating a new paradigm for disaster recovery.
    - Improved flexibility. Virtualization has allowed customers to be more responsive to the business. Virtual server deployments can literally reduce the time to deploy a server to minutes, compared with days or even weeks for physical server deployments, meaning that time to market is significantly reduced. Virtualization also decouples the server hardware from the application, so maintaining legacy applications is greatly simplified.
  • 4. [Figure 1. Server Virtualization Adoption. Survey question: "Which of the following statements most closely describes the build decision for new server hardware at your organization?" Response options: virtualization is the default build for new server hardware unless a case can be made for a standalone, unvirtualized server; standalone servers are the default build, but we strongly advise or incent our application owners to use virtualization where possible; standalone servers are the default build, and we will suggest virtualization with application owners but will not push it; standalone servers are the default build, and we will deploy virtualization only if our customers request it. Results shown as % of respondents; n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]

    The Impacts of Mainstream Server Virtualization Adoption

    Given the broad adoption of virtualization, the physical server market has changed substantially and the number of installed servers worldwide is leveling off. At the same time, however, the number of virtual machines is exploding. This "virtual server sprawl" is already having a profound impact on IT operations and procurement strategies.

    Virtual Machine Sprawl a Rising Datacenter Cost

    IDC expects that more than 50 million virtual servers and just 30 million physical systems will be installed by 2013, resulting in more than 80 million logical machines (see Figure 2).
  • 5. [Figure 2. New Economic Model for the Datacenter: Shifts to Automation Tools Are a Requirement. Source: IDC, 2009.]

    Virtual Machine Densities on the Rise

    The rapid growth in the number of virtual machines is due not just to the growing proportion of servers being virtualized but also to the growing number of virtual machines installed per physical server.

    After years of building in overhead on hardware resources to help guarantee service-level agreements (SLAs), most customers had modest goals for increasing the utilization of their servers. Many report an ideal of moving from 5% or 10% utilization for standalone servers to 30% or 40% utilization for virtual servers. This has meant that, on average, the number of VMs per server has been approximately 6 to 1 (a quick sketch of this arithmetic follows this slide).

    Figure 3 shows the average number of VMs deployed per physical server, according to a recent survey of 400 systems administrators. While a consolidation ratio of 6 VMs per server is the average, IDC routinely sees customers standardizing on consolidation ratios of 8:1 or 10:1, and leading-edge customers deploying 25, 30, or even 40 VMs per physical server.
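The utilization-to-density arithmetic implied above can be made explicit with a small back-of-the-envelope sketch in Python. It is illustrative only, not taken from the IDC study: the 5–10% and 30–40% utilization figures come from the text, while the assumption that the consolidation ratio is simply the ratio of target to standalone utilization is mine.

```python
def implied_consolidation_ratio(standalone_util: float, target_util: float) -> float:
    """Rough VMs-per-server ratio implied by raising utilization from the
    standalone level to the virtualized target, assuming each consolidated
    workload contributes about as much load as it did on its own server."""
    return target_util / standalone_util

# Utilization figures quoted in the text: 5-10% standalone, 30-40% virtualized.
for standalone, target in [(0.05, 0.30), (0.10, 0.40), (0.05, 0.40)]:
    ratio = implied_consolidation_ratio(standalone, target)
    print(f"{standalone:.0%} -> {target:.0%}: roughly {ratio:.0f} VMs per physical server")
# Prints ratios of roughly 6, 4, and 8, bracketing the ~6:1 average IDC reports.
```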
  • 6. Changing Server Configurations to Optimize for Virtualization

    IDC finds that IT organizations with more aggressive VM density goals are deploying more richly configured systems with significantly higher memory installations (see Figure 4). To achieve this increase in memory, customers will often buy servers with higher processor counts, for two reasons:

    1. The higher the socket count, the greater the access to physical memory.
    2. Servers with higher numbers of sockets tend to have higher numbers of DIMM slots on the motherboard.

    Often, we find that customers that purchase systems with high core counts for improved memory accessibility have underutilized processors.

    [Figure 3. Server Virtualization Densities, 2008. Share of respondents by number of VMs per physical server: 1 VM (10.9%), 2–4 VMs (42.2%), 5–9 VMs (24.3%), 10–14 VMs (10.2%), 15–19 VMs (4.5%), 20–24 VMs (4.5%), 25+ VMs (3.4%). n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]
  • 7. [Figure 4. Server Virtualization Densities by Memory Installed per Server. Average memory installed per server rises with VM density across the <4, 4–5, 6–9, 10–19, and 20+ VMs-per-server bands, with reported values of 12.1, 21.2, 29.5, 32.3, and 41.7 GB. n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]

    New Hardware Solutions Are Required for Substantial Increases in VM Densities

    IDC research shows that customers are expecting to achieve utilization rates of 60–80% on their hardware, compared with 30–40% today. This type of utilization is on par with that seen in mainframe technologies. To meet this goal, IT organizations must make substantial changes in the way they purchase and configure their server hardware. They must recognize that:

    - Memory capacity is just as important as processor power in virtual server configurations. For the past several years, IT organizations have been taking advantage of improvements in multicore technology to drive up VM densities. In addition, new hardware assist functionality built into processors has helped reduce virtualization overhead and enabled I/O offloading. However, while processor improvements have been extremely beneficial, many customers now report that the biggest constraint to increasing VM densities lies in the ability to add memory to a system (see Figure 5).
    - Virtualized servers have much richer configurations than standalone servers. IDC continues to see customers buying servers with large numbers of cores as well as large numbers of DIMM slots to support additional memory for virtualization. Typically, we see virtualized x86 servers with 28GB of RAM and a disproportionate number of 4–8 sockets, compared with just 4GB of RAM and 1–2 sockets on unvirtualized servers. Servers with higher processor counts provide additional memory access by default because they typically have greater numbers of DIMM slots and higher overall memory capacities.
  • 8. - Physical memory can severely limit VM densities. Virtual machines must have access to enough physical memory to start the VM and run the guest operating system as well as the application. Administrators have to specify either the total amount of system memory required or the maximum, minimum, and shared memory needed, depending on their choice of virtualization technology. With higher numbers of VMs per server, memory can quickly become overcommitted. Without extended memory solutions, IT organizations have to limit the number of VMs per server (and therefore increase the number of physical servers installed), increase the number of installed sockets per server to raise the amount of addressable memory on a system, or purchase expensive high-capacity DRAM modules.
    - Types of applications also affect the memory requirement for virtual servers. The size of an application has a substantial impact on the number of VMs installed per server. The number of users, the active concurrency of those users, and the memory addressability requirements of the application play a large role in determining the VM density of a virtualized server. Database and OLTP applications, for example, have both high memory and high I/O requirements and are not suitable candidates for virtualization when memory configurations are limited and hypervisor overhead is present.

    Traditional Thinking Hampers VM Densities

    IDC's research shows that as the number of cores on a virtual server increases, so too does the memory configuration. VM densities also rise and then level off at just under 10 VMs per server on average. Today, this is primarily because servers with higher core counts are typically used to support higher-end workloads. VM densities actually start to decline with 32 or more installed cores due to the increased use of richer applications on these multiprocessor servers. So rather than driving up VM densities on these larger boxes, many customers are applying traditional thinking to systems configuration — that is, smaller applications run on smaller servers and large applications run on larger servers.

    Figure 6 displays the average amount of installed memory and the corresponding number of virtual machines based on core count. Servers with four cores in total (typically dual-socket, dual-core processor systems) average 14GB of installed RAM and support just six virtual machines. This translates into approximately one core and 2.5GB of memory per VM. In contrast, a virtualized server with 32 or more cores averages almost 45GB of total memory and just under nine virtual machines: almost four cores and 5GB of memory per VM.

    As the core count of these servers increases, so too does the prevalence of memory-intensive applications such as business processing, Oracle Database, business analytics, and collaborative applications (see Figure 7). As shown in Figure 6, VM densities for servers with high core counts level off at 8.5 VMs per server. Interestingly, customers are able to virtualize a broader set of applications as the core count of the server increases. IDC expects that without a change to memory capabilities, VM densities will stabilize on higher-end systems as customers deploy more memory-intensive applications on these servers. (A back-of-the-envelope version of this per-VM arithmetic is sketched after this slide.)
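As a rough illustration of the per-VM arithmetic above, and of why physical memory becomes the binding constraint as densities rise, here is a minimal Python sketch. The survey averages (14GB across 6 VMs on 4-core servers; roughly 45GB across 8.5 VMs on 32+-core servers) come from the text; the hypervisor-overhead figure and the host sizes in the density example are placeholder assumptions, not IDC or IBM numbers.

```python
def per_vm_resources(total_memory_gb: float, total_cores: int, vm_count: float):
    """Average memory (GB) and cores available to each VM on a host."""
    return total_memory_gb / vm_count, total_cores / vm_count

def max_vms_by_memory(host_memory_gb: float, memory_per_vm_gb: float,
                      hypervisor_overhead_gb: float = 4.0) -> int:
    """Upper bound on VM count when physical memory is the only constraint
    and memory is not overcommitted (overhead figure is an assumption)."""
    return int((host_memory_gb - hypervisor_overhead_gb) // memory_per_vm_gb)

# Survey averages quoted in the text:
print(per_vm_resources(14, 4, 6))     # ~2.3 GB and ~0.7 cores per VM (text rounds to 2.5 GB, 1 core)
print(per_vm_resources(45, 32, 8.5))  # ~5.3 GB and ~3.8 cores per VM (text: ~5 GB, ~4 cores)

# Memory caps density long before cores do once per-VM allocations grow:
print(max_vms_by_memory(64, 4))    # 15 VMs at 4 GB each on a 64 GB host
print(max_vms_by_memory(256, 4))   # 63 VMs, the kind of headroom memory-dense systems allow
```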
  • 9. [Figure 5. Virtual Server Configuration Requirements: x86-Based Servers Only. Survey question: "Which of the following hardware components are mainly driving the richer configurations on your virtual servers?" Components charted: Memory, Processors, Storage, I/O devices, Other (% of respondents who mentioned that component is driving richer configurations; multiple responses allowed). n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]

    [Figure 6. Memory Density and VM Density by Server Core Count. For servers with 4, 8, 16, and 32+ cores, the chart plots average memory (GB), average number of VMs, average number of cores per VM, and average memory per VM (GB). n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]
  • 10. [Figure 7. Virtual Server Workload Profile by Server Core Count. n = 400. Source: IDC's Server Virtualization Multiclient Study, 2009.]

    Automation a Key Driver to Future Success in Virtualization

    Most customers have invested far less in systems management and automation tools than they have in hardware virtualization. Consequently, many datacenters still employ manually intensive processes to manage their virtual machines, processes that are often based on the management of their physical machines. For instance, even though most IT organizations will leverage mobility tools that enable the movement of virtual machines from one physical server to another, most of this migration is done using a combination of manual intervention and point tools, and typically these VMs are moved for the purposes of maintenance (not failover). This movement tends to happen monthly or quarterly and usually during off-hours.

    While the success of virtualization has largely been built on server hardware savings, the future success of an increasingly virtualized architecture lies in automation. Automation provides IT organizations with the ability to link workflow practices to an "on-demand" and highly utilized infrastructure. Most importantly, automation enables IT organizations to minimize the manually intensive tasks of systems administrators and to significantly lower maintenance costs that can be paralyzing to innovation. As a result, customers are building a shared pool of compute, memory, I/O, and storage on which to support existing applications and launch new projects, as well as to reduce datacenter power and cooling demands.
  • 11. Changing Thinking Required in the Use of Automation Tools to Drive Up VM Densities

    Most IT organizations are a long way from fully trusting the workload-balancing tools that could automate many of these tasks. IDC expects that if customers don't significantly improve automation capabilities for their virtualized environments, IT management costs will actually rise over the next five years as systems administrators struggle to maintain a growing installed base of virtual servers that need to be patched, upgraded, and secured just like any physical server (see Figure 8). Without automated workload-balancing techniques, customers will have to continue to build in systems overhead, which limits their ability to more fully utilize system resources. Application availability and performance will be at risk, as bottlenecks will likely ensue on a system that is maximized without the ability to seamlessly move in resources on demand.

    As customers begin to build a new automation platform for their virtual environments, memory-rich systems can bridge the move to automation by providing the headroom needed to successfully drive up VM densities.

    [Figure 8. New Economic Model for the Datacenter: Management Costs Shift to Virtual Servers. Source: IDC, 2009.]
  • 12. IBM's Memory Extension Solution for Virtualization and Databases

    In response to customer requirements for higher memory footprints in virtualized servers and for high-end databases, IBM has released its eX5 server line with MAX5 memory technology, which can provide up to double the physical memory available per server relative to industry standards. The eX5 server line is the fifth generation of IBM's Enterprise X-Architecture. IBM has been innovating around Intel-based solutions since 2000 to create a more scalable x86-based architecture that balances processing, memory, and I/O for higher-end workloads.

    MAX5 is used across IBM's newly released eX5 servers in 2-socket, 4-socket, and 8-socket configurations for a maximum of 1TB, 1.5TB, and 3.0TB of total memory, respectively, with 16GB DRAM modules. These large memory capacities are made possible by attaching the IBM System x MAX5 memory expansion drawer, which increases the number of available DIMM slots. The MAX5 memory expansion drawer provides 32 additional DIMM slots for each eX5 rack server. Thus, a 2-socket server can be expanded to 64 DIMM slots, a 4-socket server can be expanded to 96 DIMM slots, and each of the server chassis in an 8-socket server can be expanded to 192 DIMM slots (a short sketch of this capacity arithmetic follows slide 13).

    The Advantages of Memory-Dense Servers

    IT organizations have been able to achieve substantial consolidation objectives with virtualization to date, but for IT to continue to drive down costs in the datacenter, additional improvements are needed within hardware solutions to drive up VM densities. If customers are to consider more than 20 VMs per server, they will need to procure servers with very high memory capabilities. Given that a proportional increase in processor counts is not required, IDC believes that organizations will increasingly look to a new set of server infrastructure that scales memory capacity while optimizing processor counts. There are multiple benefits to this type of "memory-rich" system:

    - Scale virtual server environments without installing new physical servers. By procuring servers with higher memory capabilities, IT organizations can grow their installed base of virtual servers as their requirements increase without adding another physical server. Customers can scale their server environment by installing additional memory modules rather than installing a new server. This approach saves not only hardware, real estate, and power and cooling but also the time needed to order, build, and deploy a new piece of hardware.
    - Choose DIMM counts, DRAM modules, and overall memory costs. By selecting servers with high numbers of DIMM slots, customers can choose to fill those slots with lower-cost 2GB and 4GB DRAM modules or maximize the available memory with more expensive 8GB or 16GB DRAM modules. Customers can also decide whether to fill up the DIMM slots with less expensive memory or use fewer, more expensive DRAM modules and allow for future expansion with free DIMM slots.
  • 13. - Improve application choice for physical and virtual servers. Memory-rich servers can be used not only for delivering high numbers of virtual machines per server but also for hosting higher-end 64-bit workloads such as large databases and OLTP, ERP, or CRM solutions that are memory and/or I/O intensive and are sensitive to the overhead of virtualization. This type of architecture also makes virtualization of these higher-end workloads more realistic. While customers may choose to install fewer, larger VMs on these servers, they can still reap the additional benefits of virtualization, mainly higher availability and improved flexibility from mobility and deployment tools.
    - Better leverage processor-based software pricing. For customers with applications priced by socket or core, implementing memory-rich systems without an increase in socket or core count means that IT organizations can take advantage of existing software pricing and improve consolidation rates without an increase in software costs.
    - Aid in migrating large databases to a virtual environment or x86 architecture. With massively scalable memory architectures, x86 customers will have greater choice in where to run their large databases. Prior to these innovations, customers would typically deploy large databases on richly configured standalone systems. Memory capacities in excess of 1TB give customers significantly more options for migrating these databases from existing platforms. Memory-rich systems also open up the possibility of virtualizing these databases so that customers can exploit the advantages of mobility and rapid deployment that come with virtualization.
    - Improve database performance by providing more memory addressability and memory sharing. IT organizations could choose to use memory-rich systems to improve the performance of large databases on x86 platforms. Enhanced memory addressability reduces the thrash on system performance with memory-hungry databases and improves memory sharing.

    CONCLUSION

    IDC believes that a new IT business cycle has begun. Over the next 10 years, IT organizations will be challenged to meet increasing demands from the business without innovating around technology. At the same time, the expectation is to continue to drive greater efficiencies and maximize IT budgets. As businesses become increasingly connected and interconnected to technology, the need to support an ever-growing portfolio of applications and analytics requires a smarter set of IT systems.

    Virtualization will be at the heart of future datacenter transformations and fundamentally requires a different set of systems that are tightly integrated and purpose built for virtualization. This new generation of servers is designed from the ground up to support virtual machines and will require large memory footprints to optimize virtual workloads and large databases. These systems bring together server, storage, and networking systems as well as automation tools that seek to reduce the management complexities that have become a burden for most large IT organizations. While these systems will be more proprietary in nature, the trade-off is simpler deployment and maintenance.
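The capacity figures cited for the eX5/MAX5 configurations on slide 12 follow directly from DIMM-slot count multiplied by module size. Here is a minimal Python sketch of that arithmetic; the slot counts and the 16GB module size are from the text, while the helper function and the 4GB-module example are illustrative assumptions.

```python
def max_memory_tb(dimm_slots: int, module_gb: int = 16) -> float:
    """Total memory in TB for a fully populated set of DIMM slots."""
    return dimm_slots * module_gb / 1024

# eX5 configurations with the MAX5 expansion drawer, using 16 GB DRAM modules:
print(max_memory_tb(64))    # 2-socket server expanded to 64 slots   -> 1.0 TB
print(max_memory_tb(96))    # 4-socket server expanded to 96 slots   -> 1.5 TB
print(max_memory_tb(192))   # 8-socket chassis expanded to 192 slots -> 3.0 TB

# Filling the same slots with lower-cost modules trades capacity for price,
# as the "Choose DIMM counts" advantage on slide 12 notes:
print(max_memory_tb(64, module_gb=4))   # 0.25 TB with 4 GB modules
```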
  • 14. To continue to drive efficiencies in datacenter consolidation and address ongoing consolidation, IT organizations should carefully assess the total cost of implementing memory-rich systems with high VM densities, as well as scalable workloads, against the moderate virtualization goals they have today. IDC believes that without a change in IT practices and policies, the cost of computing will continue to rise as virtualization becomes saturated at more modest consolidation levels.

    To drive up VM densities, customers should:

    - Balance newer processing capabilities in systems with dense memory configurations. This is essential for a host of benefits: improving consolidation ratios, expanding the choice of physical and virtual servers for more applications, leveraging processor-based software licensing, enabling migration of large databases to a virtual environment or x86 architecture, and improving database performance with more memory addressability and memory sharing.
    - Take advantage of innovations in processing architecture with embedded virtualization assist technology to enable offloading and lower the overhead from the hypervisor.
    - Implement networked storage solutions that enable mobility of virtual machines across physical systems and allow for optimization of applications across the entire datacenter while still meeting SLA requirements for availability and performance.
    - Implement automation and workload-balancing tools to reduce the amount of hardware required for overhead purposes, reach a higher level of system utilization, and lower staff maintenance costs.
    - Consolidate applications with the same operating system on the same physical servers to encourage page sharing between applications. This reduces the pressure on system memory should capacity become low.
    - Aggressively test current IT practices and policies and reevaluate whether they serve longer-term goals for virtualization adoption and consolidation. This will likely require a change in current thinking and may be the most difficult change to make in creating a more integrated set of technologies for the future datacenter.

    Copyright Notice

    External Publication of IDC Information and Data — Any IDC information that is to be used in advertising, press releases, or promotional materials requires prior written approval from the appropriate IDC Vice President or Country Manager. A draft of the proposed document should accompany any such request. IDC reserves the right to deny approval of external usage for any reason.

    Copyright 2010 IDC. Reproduction without written permission is completely forbidden.

    XSW03070-USEN-00