Server Optimization, Hardware & Virtualization

SQL Server performance depends on hardware, and what you buy can have a huge impact. Yet people make mistakes when purchasing hardware. In this expert E-Guide, readers will find out what to watch out for when purchasing hardware for SQL Server. Additionally, this E-Guide takes a close look at the rising presence of solid-state drives (SSDs) in enterprise applications such as SQL Server.


Transcript of "Server Optimization, Hardware & Virtualization"

E-Guide

Server Optimization, Hardware and Virtualization

SQL Server performance depends on hardware, and what you buy can have a huge impact. Yet people make mistakes when purchasing hardware. In this expert E-Guide, readers will find out what to watch out for when purchasing hardware for SQL Server. Additionally, this E-Guide takes a close look at the rising presence of solid-state drives (SSDs) in enterprise applications such as SQL Server. You'll find tips on what to look for when buying hardware for SQL Server virtualization, and you'll see how virtualization promises big savings for businesses -- but is it always the answer?

Sponsored By:
SearchSQLServer.com E-Guide
Server Optimization, Hardware and Virtualization

Table of Contents

Purchasing hardware for SQL Server: What not to do
Solid-state storage devices for SQL Server: Are they worth the cost?
SQL Server virtualization is inevitable: Get the right hardware
SQL Server virtualization risks: among all the pros, some cons
About Dell and Microsoft
Purchasing hardware for SQL Server: What not to do

By Don Jones, Contributor

Building a new SQL Server system can be tricky. SQL Server is a product that really utilizes hardware, and its performance depends on how you configure your server -- and in particular on how you configure your server's storage subsystems. With that in mind, here are some of the top mistakes people make when purchasing hardware for SQL Server:

Going the DIY route. Don't build your own SQL Server computer from off-the-shelf parts unless it's just meant to be a nonproduction development machine. Servers in general, and SQL Server computers specifically, need tightly matched parts: processors, chipsets, memory, controller cards and the like. You need components that will hold up to high heat, for example, and that have been designed to work together. That isn't to say it's impossible to build your own server -- but it's far easier to buy one that's been fully integrated and will be supported by the manufacturer.

Having no performance expectations. You simply can't build a SQL Server system properly unless you know what kind of load it's going to be under. Well, you can -- but you'll either underbuild or overbuild, and either one is going to be expensive. When you underbuild, you're essentially setting your server up to run out of power sometime in the future, meaning you'll be forced to spend money upgrading (and depending on the server's initial configuration, upgrading may not even be possible). When you overbuild, you're spending more than you need or ever anticipate needing. Use existing databases, applications or even vendor benchmarks to get some idea of how many transactions per second you expect to process, and size the hardware accordingly.

Buying disk size, not disk performance. Yes, SQL Server often needs tons of disk space. But all that space is useless if the disk technology isn't fast. Tossing a handful of drives into a RAID 5 array might get you the space and redundancy you want, but if that array can't move the bits on and off the platters with some serious speed, it's going to be a major performance bottleneck for your system. If you can't afford fast disks in the size you need, then you can't afford SQL Server. Ideally, database files and transaction logs should be on different disks (or arrays), and SQL Server should be accessing them through different channels, such as disk controller cards or storage area network (SAN) connections. The tempdb system database may need its own disk or array as well if it's heavily used.

Choosing the wrong RAID option. RAID 5 is slow at writing data to disk. Period. Most RAID controllers attempt to overcome this handicap by caching data in on-controller memory (which is typically battery backed up for safety), but a busy SQL Server database can fill that cache and hit a bottleneck. RAID 10 is the way to go. It's more expensive than RAID 5, but it combines disk mirroring with data striping, and it offers higher redundancy and faster reads and writes.

Buying too few drives. If you need X gigabytes or terabytes of storage space, you want it delivered in as many physical disks as possible in order to get the fastest throughput possible. That's because having more disks -- whether small or large in capacity -- is better than going with fewer, bigger ones. With striping (supported by both RAID 5 and RAID 10), every extra disk improves SQL Server's performance just a bit more. If, for example, you have the option of buying five 1 TB drives or twenty 250 GB drives, the twenty drives (assuming they're configured in a stripe array and feature the same speed and transfer rate) will almost always outperform the five.

Using disk controllers without batteries. If you're relying on disk controllers to cache write instructions -- say, to a RAID 5 array -- make sure there are batteries on board. Plan to monitor the server's power-on self-test (POST) screen from time to time to make sure those batteries (usually lithium watch batteries) continue to hold a charge.

Blindly trusting the SAN. A SAN is not the perfect answer to storage in all cases. You have to make sure it's built for fast throughput and that SQL Server isn't sharing it with so many other servers and applications that it has to compete for bandwidth and throughput. SQL Server needs fast storage access -- it's the biggest performance bottleneck for most SQL Server computers. Make sure you know the configuration of the SAN (RAID 5 versus RAID 10, for example, with the above mistakes in mind), its throughput and other details -- just as you would for direct-attached storage.

Going 32-bit. Not so much in the hardware, which is mostly all 64-bit these days, but in the software. On a 32-bit copy of Windows, it's harder for SQL Server to utilize more than 3 GB of memory -- it has to use paging extensions that aren't as efficient as raw access to tons of memory. If you've got 64-bit hardware, run a 64-bit operating system on it. Besides, Windows Server 2008 R2 -- and later versions of Windows -- are only available in 64-bit versions.

Many of these mistakes seem to be storage-related, don't they? Definitely. Storage for SQL Server is the one area where people tend to focus too much on size and not enough on other factors, such as throughput. That's especially true with SANs, where storage becomes something like "a service of our private cloud" -- a big magic box in the sky where data lives.

Of course, there's more to SQL Server performance than just storage; processor architecture and server memory capacity matter too. Details matter, and performance counts. Avoid these mistakes when purchasing hardware for SQL Server and you'll have a healthier, happier -- and above all, faster -- machine.
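The interplay between "choosing the wrong RAID option" and "buying too few drives" can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the standard rule-of-thumb write penalties (4 back-end I/Os per write for RAID 5, 2 for RAID 10); the per-disk IOPS figure and the workload numbers are illustrative assumptions, not measurements from any real system:

```python
# Rough spindle-count sizing for a RAID array, using rule-of-thumb
# write penalties: RAID 5 = 4 back-end I/Os per write, RAID 10 = 2.
# The per-disk IOPS figure and workload are illustrative only.

DISK_IOPS = 180  # assumed random IOPS for one fast spinning disk
WRITE_PENALTY = {"RAID5": 4, "RAID10": 2}

def disks_needed(read_iops, write_iops, raid_level):
    """Return how many spindles are needed to absorb the workload."""
    backend_iops = read_iops + write_iops * WRITE_PENALTY[raid_level]
    # Round up: you can't buy a fraction of a disk.
    return -(-backend_iops // DISK_IOPS)

# A hypothetical OLTP workload: 2,000 reads/s and 1,000 writes/s.
for level in ("RAID5", "RAID10"):
    print(level, disks_needed(2000, 1000, level))
# prints: RAID5 34, then RAID10 23
```

The same front-end workload needs noticeably fewer spindles on RAID 10 than on RAID 5, which is the article's point about RAID 5's write cost; run the numbers with your own measured IOPS before buying.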
Solid-state storage devices for SQL Server: Are they worth the cost?

By Serdar Yegulalp, Contributor

Few people can deny the rising presence of solid-state drives (SSDs) in enterprise applications such as SQL Server. They have a few major advantages over their spinning-platter counterparts, namely their increased read and random-access speeds. But given that conventional spinning-platter drives have been on the market for decades and have a great deal of proven technology behind them, is there a real incentive to push for a switch to solid-state storage devices for SQL Server -- especially given their cost?

SSDs have a number of attractive features that make them increasingly competitive against conventional disks. They consume little energy, they have fast random-access read modes, and they come in form factors (e.g., Serial Advanced Technology Attachment) that allow them to natively replace hard disks. For database administrators, SSDs' high read speeds are a major draw, since increasing those speeds can theoretically reduce a major I/O bottleneck.

But there are several valid reasons not to go with solid-state storage devices for SQL Server. The single biggest is their cost-effectiveness -- whether or not they deliver better throughput for the dollar than conventional disks. When dealing with storage systems containing many disks -- as you often do with databases -- it isn't just raw performance that matters but performance per dollar. If you can solve most of your bandwidth problems with a broad array of cheap hard disk drives, go for it. With SSDs, you could be spending up to 10 times as much money, but unless you're getting 10 times better performance (and you typically don't), you're better off with hard disks.

A 2009 Microsoft Research paper, "Migrating Server Storage to SSDs: Analysis of Tradeoffs," concluded that SSDs were not, at the time, a viable replacement for conventional hard drives in any of the server scenarios tested. "The capacity/dollar of SSDs needs to improve by a factor of 3-3,000 for SSDs to be able to replace disks," the authors wrote. "The benefits of SSDs as an intermediate caching tier are also limited, and the cost of provisioning such a tier was justified for fewer than 10% of the examined workloads." SQL Server was not one of the workloads the authors tested explicitly, but they did test against a 5,000-user installation of Microsoft Exchange Server (which uses an embedded database) and didn't find the investment worthwhile.

One thing that should not be held against SSDs almost inevitably comes up in any discussion of their long-term use: flash memory cells can withstand only a limited number of write cycles. Users and IT administrators alike have been hyperconscious of this fact ever since flash drives came on the market. In a consumer setting, where the amount of I/O isn't as aggressive as in an enterprise environment, maybe the write-cycle limit isn't such a big deal. But in an enterprise setting, especially for applications like databases where reliability is crucial, people don't want to bank on a technology that might torch their data.

A closer look shows the "write endurance" problem is a lot worse on paper than in reality, and it has been mitigated to a great extent by good design. SSD market analyst Zsolt Kerekes did his own investigation of the issue and concluded, "In a well-designed flash SSD you would have to write to the whole disk the endurance number of cycles to be in danger." Even databases that sustain a great deal of writes don't pose a write-endurance problem to SSDs.

Given such a scenario, the write-endurance lifetime of a solid-state storage drive is many times longer than the likely deployment lifetime of the unit itself. In other words, you're far more likely to replace an SSD because a newer, larger, faster or more energy-efficient model comes on the market than because it runs out of write cycles.

And newer models are constantly arriving, although prices have a long way to fall before they become cost-effective replacements for conventional drives.
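Kerekes' point -- that you would have to write the whole disk its endurance number of cycles to be in danger -- is easy to check with a rough lifetime estimate. Every figure below (capacity, program/erase cycles, daily write volume, the ideal-wear-leveling assumption) is illustrative, not a spec for any particular drive:

```python
# Rough write-endurance lifetime estimate for an SSD, assuming ideal
# wear leveling (writes spread evenly across all cells). All figures
# are illustrative, not specs for any particular drive.

capacity_gb = 256          # assumed drive capacity
endurance_cycles = 10_000  # assumed P/E cycles per cell
daily_writes_gb = 500      # assumed database write volume per day

total_writable_gb = capacity_gb * endurance_cycles
lifetime_years = total_writable_gb / daily_writes_gb / 365

print(f"{lifetime_years:.1f} years")  # prints: 14.0 years
```

Even with a database pushing half a terabyte of writes a day, the estimated endurance lifetime far exceeds a typical deployment lifetime, which is the article's conclusion.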
Consequently, if you’relooking to spend the kind of money spent on flash SSD storage for a database system(easily on the order of thousands of dollars), you might be better off putting thoseresources toward other components in your database system. Increasing RAM, for instance,means less of the workload is I/O-bound, and may be a more cost-effective way to speedthings up than dropping stacks of cash on SSDs. Your best bet is to use real-world statisticsto find out how much of your database workload is irrevocably I/O-bound, and thendetermine if SSDs are worth the cost.Sponsored By: Page 7 of 15
  8. 8. SearchSQLServer.com E-Guide Server Optimization, Hardware and VirtualizationJames Hamilton of the Data Center Futures team at Microsoft crunched some numbers onwhen SSDs make sense in server applications and produced a useful formula for figuring thecost-effectiveness of SSDs. His formula uses a database server (a “red-hot transactionprocessing system,” in his words) as a test case for when SSDs might be justified. Fromwhat he’s found, random I/O to and from disks have consistently lagged behind other kindsof I/O, so it’s tempting to replace disks with solid-state storage devices on this note alone.But, again, there’s how cost effective it is to do so, and if you gather real-world data fromyour own systems and do the numbers, you may find the costs don’t justify the gains.While SSDs are on the way to overtaking their spinning-disk counterparts in manyenvironments, it’s still hard to justify their use in a SQL Server environment from a costperspective. That will change as the prices on SSDs come down, or your workloads change,or both. But before you drop the cash, do the math; for the time being, your money may beput to better use somewhere else.Sponsored By: Page 8 of 15
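Hamilton's formula isn't reproduced here, but the comparison underlying it -- dollars per IOPS versus dollars per gigabyte -- can be sketched as follows. The prices and performance figures are made up for illustration; substitute real quotes and measurements from your own systems:

```python
# Compare storage options by cost per IOPS and cost per GB.
# Prices and performance numbers are made up for illustration.

options = {
    # name: (price_usd, capacity_gb, random_iops)
    "HDD array (10 disks)": (3000, 10 * 1000, 10 * 180),
    "Enterprise SSD":       (3000, 400, 20000),
}

for name, (price, gb, iops) in options.items():
    print(f"{name}: ${price / iops:.2f}/IOPS, ${price / gb:.2f}/GB")
```

With these (hypothetical) numbers the SSD wins handily on cost per IOPS while the disk array wins on cost per gigabyte -- exactly the trade-off the article describes, and why the answer depends on whether your workload is bound by random I/O or by capacity.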
SQL Server virtualization is inevitable: Get the right hardware

By Don Jones, Contributor

Let's face it: If you haven't already virtualized SQL Server instances in your environment, you're going to do so eventually, if not "real soon now."

SQL Server and virtualization are made for each other, and the situation is getting better all the time -- not just for workload management and consolidation, but also for high availability. A new breed of technologies can now provide multiprocessor power to SQL Server on multiple hosts, keeping virtual machine instances in lockstep with one another and enabling near-instant failover in the event that one instance goes down.

But to make it all happen you're going to need hardware, and buying hardware for a virtualization host that will run SQL Server is a bit different from selecting hardware for SQL Server itself. You also have to plan your SQL Server instances. Busy instances that handle big databases might go onto virtual machines (VMs) all their own, while smaller instances might be teamed up within a single VM.

Remember that a VM becomes your basic unit of management: You can move VMs to different hosts, fail them over and so forth -- but every instance within each VM goes along for the ride. Focus on creating VMs that need as few virtual processors as possible to do their job; that makes each VM more granular in terms of the workload it handles, and it makes it easier for those VMs to coexist with other VMs on the same host.

When outfitting that host, there are three things to consider: disk throughput, memory and processors. Your money is best spent, initially, on processors. Ignore blade servers and compact 1U servers for SQL Server hosts: You'll squeeze more processor sockets and cores into a 4U chassis, and that chassis will often run with lower cooling and power requirements than a similar 1U or 2U chassis.

Find the "sweet spot" for processor speed -- where you're getting the best performance for your dollar -- rather than simply buying the fastest. A few extra megahertz aren't going to deliver a vast performance improvement. Do focus on server-class processors, though. If you're the kind of person who believes he can build a server from off-the-shelf Centrino-based motherboards, please abandon that theory when it comes to SQL Server virtualization hosts.

Memory is the next expense. The more, the merrier. Modern hypervisors typically let you overcommit memory, meaning you can configure your VMs to use more memory, in total, than the host actually has. Many environments do well with a 50% overcommit, but SQL Server is a real memory hog. Analyze your SQL Server instances to see how much memory they're typically consuming, plan your overcommit accordingly, and don't put VMs on the same host if they're all running SQL Server instances, which tend to max out their memory allocation.

Bear in mind that SQL Server, more than many other server applications, will try to use whatever memory the operating system is willing to give it -- so if Windows thinks it has 12 GB of memory, SQL Server will often make its best effort to utilize all of it. That behavior can make overcommit tricky, so proceed with caution. In fact, most experienced database administrators don't like to use memory overcommit at all when they're virtualizing SQL Server.

That said, the amount of memory is the one thing you can skimp on when buying a server, because you can add more later -- provided you put the largest memory modules possible in your server, leaving free slots for future expansion. Don't cheap out on the memory you do buy, however. Get error-correcting memory that's speed-matched to the server's motherboard. In other words, buy whatever your chosen server vendor recommends, and ideally buy the memory from that same vendor. After all, that vendor is most likely to offer you support if you have problems with it.

Disk is last, and in most SQL Server cases you'll be building a storage area network (SAN) rather than relying heavily on internal storage within the servers. (You might build a mirrored set of internal hard drives to run SQL Server and Windows themselves, not to store data.) In order of priority, build your SAN for fault tolerance, speed and size; if you think you need a 10-terabyte SAN, size is the last thing you price out. First, make sure you can afford to make that storage redundant enough to survive the failure of a handful of physical disks, and then build it fast enough to support SQL Server. SQL Server's most common bottleneck is storage speed, so it's almost impossible to build a SAN that's "too fast." That's especially true with virtualization, which brings its own I/O overhead as data is written to virtual disk images.

While SQL Server is perfectly capable of running in a virtual machine, buying hardware for SQL Server virtualization hosts and configuring the virtual machines requires a specialized approach. Simply moving your existing SQL Server instances into poorly configured virtual machines, or onto poorly provisioned hosts, can significantly degrade performance. There's no need to take that risk: Keep these tips in mind and you'll have an efficient virtualization infrastructure that's SQL Server-ready.
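The overcommit planning described above amounts to summing the configured memory of the VMs you intend to place on a host and comparing it with physical RAM. A minimal sketch, with hypothetical VM names and allocations:

```python
# Check the memory overcommit ratio for a planned virtualization host.
# Host size and per-VM allocations are hypothetical examples.

host_ram_gb = 64
vm_memory_gb = {"sql-prod-1": 32, "sql-prod-2": 24, "web-1": 8, "web-2": 8}

allocated = sum(vm_memory_gb.values())
overcommit = allocated / host_ram_gb

print(f"allocated {allocated} GB on a {host_ram_gb} GB host "
      f"(overcommit ratio {overcommit:.2f})")
if overcommit > 1.0 and any(n.startswith("sql") for n in vm_memory_gb):
    # Per the article: SQL Server VMs tend to max out their allocation,
    # so overcommitting a host that carries them is risky.
    print("warning: overcommitted host carries SQL Server VMs")
```

Here the plan allocates 72 GB against 64 GB of physical RAM (a 1.13 overcommit ratio), which the article suggests is exactly the situation to avoid when SQL Server instances are in the mix.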
SQL Server virtualization risks: among all the pros, some cons

By Alan R. Earls, Contributor

It is hard to argue with virtualization. Few technologies have had such a sudden and profound impact on the way businesses run their IT operations, saving them money and manpower with scant glitches or snafus. But when it comes to databases such as SQL Server, analysts warn there may be a few "gotchas" -- namely, SQL Server virtualization risks -- lurking out there.

One note of caution came from Peter O'Kelly, principal analyst at O'Kelly Associates. O'Kelly said there are "waves," or trends, in the IT industry, and the current wave holds that virtualization is supposed to be good for everything. "Now, industry is discovering that there are some places where you may want to dial that back a bit," he said. "It is probably something that needs to be assessed on a case-by-case basis."

Virtualization might not always be a good thing for databases in general because it may interfere with the heuristics the database management system uses for data access optimization, which are designed to work directly with data storage devices.

"Adding virtual storage may result in more disk access operations, and since disk access is measured in milliseconds while memory access [e.g., for cached data] is measured in nanoseconds, the consequences can be significant," O'Kelly said. "The heuristics will break, the optimizer won't do everything it is expected to do, and that will create a problem."

Chris Wolf, an analyst at Gartner Inc., agreed that memory can be an Achilles' heel for databases in virtualized environments. "Historically, people have run into issues involving memory management," he said.

For instance, a few years ago hypervisors were using software to emulate physical memory. And, as noted by O'Kelly, when you try to emulate memory in software you run into bottlenecks and end up with slower response times.
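O'Kelly's millisecond-versus-nanosecond point can be made concrete with a little arithmetic. The latency figures below are typical orders of magnitude, not measurements from any specific system:

```python
# Order-of-magnitude cost of extra disk accesses versus cached reads.
# Latency figures are typical orders of magnitude, not measurements.

DISK_ACCESS_S = 5e-3      # ~5 ms for one random disk access
MEMORY_ACCESS_S = 100e-9  # ~100 ns for one cached read

lookups = 10_000
disk_time = lookups * DISK_ACCESS_S
memory_time = lookups * MEMORY_ACCESS_S

print(f"disk: {disk_time:.1f} s, memory: {memory_time * 1000:.0f} ms")
print(f"ratio: {DISK_ACCESS_S / MEMORY_ACCESS_S:,.0f}x")
```

At these figures, 10,000 lookups that fall out of cache and hit disk take tens of seconds instead of about a millisecond -- which is why extra disk access operations caused by broken optimizer heuristics can be so costly.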
However, starting in the second half of 2009, AMD began to introduce AMD-V Rapid Virtualization Indexing on hardware, and Intel early last year released Extended Page Tables. According to Wolf, these developments allow virtual machines to manage their own physical page tables in memory, which removes the software bottleneck. "So a few years ago people virtualizing SQL Server might have said it doesn't run well, but with the right architecture today, it isn't a problem," Wolf said.

Another potential rough spot for SQL Server virtualization involves memory appetite. "SQL will take as much memory as you will give it, and that will cause problems with resource sharing," Wolf said. "That's why on a physical server, people must tune it to use as much memory as it needs, not as much as it wants."

Fortunately, Wolf said, the tuning is straightforward. So he advises that infrastructure people and SQL Server administration teams make a point of talking about the issue and resolving it.

The same can be said for making sure I/O is optimized. As an example, Wolf cited vSphere 2.1, in which VMware introduced Paravirtual SCSI. "It is a new storage driver to provide accelerated I/O to access storage, and they introduced a new feature -- storage I/O control -- which lets you prioritize storage access for certain applications, so one app won't take over all of the I/O," he explained.

Similar tuning issues concerned Greg Shields, an IT analyst at consulting firm Concentrated Technology. Although almost anything can be virtualized, Shields said, implementation can still present challenges.

"In my experience, most IT pros do not have a good handle on capacity management," he said. When the world was all physical servers, Shields said, experience taught people to develop gut feelings for things like the supply of memory. "It used to be that you could look at a server and say, that's a network problem -- but with virtualized servers, it might really be a lack of processing resources."

According to Shields, what is needed now, especially with SQL Server and its potential for very high utilization, are tools to help administrators convert their data and metrics into actual intelligence. "By themselves, no human being can really do a good job of converting those metrics into useful information," he explained.

"When you have those insights, maybe it means you don't consolidate as many machines, or maybe in some cases it might mean you virtualize one server on one physical machine, as counterintuitive as that might seem," Shields added.

However, he stressed, the power of virtualization is such that the "overhead" of virtualizing is now so minimal that performance is "almost native" anyway. And that's bound to be good news for those running SQL Server.

"Today, with the right architecture, there is no reason you can't run a SQL Server workload in a virtual machine environment," Wolf said. "We have had many of our customers doing this with large-scale databases. Our position is that virtualization should be the default platform for all your apps in an x86 environment. The onus should be on the owner to show why it isn't good rather than on IT to show why it is needed."
About Dell and Microsoft

For more than 25 years, Dell and Microsoft have worked to deliver jointly developed solutions that simplify IT management, optimize performance and evolve the way your business operates. Since the very beginning of their long-term partnership, Dell and Microsoft have aligned to deliver customer-driven, innovative solutions that span the entire Microsoft® product portfolio.
