  • Talk Track:
Hello, my name is <insert name>. Today we are going to talk about how you can use SQL Server Enterprise to optimize the way you collect, store, retrieve, and report on data.

The key points are:
SQL Server is the foundation of the Microsoft Enterprise Data Platform.
SQL Server 2008 provides a trusted, intelligent, and productive data platform that meets the needs of the largest enterprises down to the smallest handheld device.

The objective for today is to communicate the value proposition of SQL Server 2008 Enterprise Edition. SQL Server 2008 provides a comprehensive data management solution that manages all your data and provides an Enterprise Data Platform.

There are many different editions of SQL Server, and each edition is right for a certain audience. This deck explains some common scenarios and business needs where Enterprise Edition is the correct edition of SQL Server and how Enterprise Edition will help to optimize your customer's investment in their Enterprise Data Platform.
  • 162% ROI, break even in 6 months by upgrading to SQL Server 2008 Enterprise (Source: Total Economic Impact of Upgrading to SQL Server 2008, Forrester Research 2008)
Largest ecosystem to support packaged applications and tools (Source: SQL Server 2008 Ups Pressure On Competitors, Forrester 2008)

Mission Critical:
Enable 99.99% uptime and maintain revenue streams through AlwaysOn technologies (Source: SQLCAT Case Study: High Availability and Disaster Recovery at ServiceU: A SQL Server 2008)
Improve application performance 30% or more with Resource Governor (Source: Microsoft Case Study, CareGroup)

BI:
Cut data storage up to 90% with Data Compression (Source: Microsoft Case Study, Quanta Computer)
Reduce backup storage needs by 66%, cut costs, and increase backup speeds up to 80 times with Backup Compression
50% better performance with Analysis Services Enterprise enhancements (Source: Microsoft Case Study, Microsoft AdCenter)

Cost Reduction:
Cut hardware, licensing, and power and cooling costs by up to 50% through consolidation and virtualization (Source: Whitepaper, How Customers Are Cutting Costs and Building Value with Microsoft Virtualization)
Remove third-party software with built-in tools like encryption, for average annual savings of $60k-$100k/yr (Source: Total Economic Impact of Upgrading to SQL Server 2008, Forrester Research 2008)
  • Talk Track:
You can either use the scenario description below for your talking points or customize it to the needs of the customer.

Scenario Description:
An online retailer has purchased the hardware and software to run their business, and for the most part everything works well. At certain times in the year they experience a spike in the number of orders received. This causes problems as the servers become overworked and fail. When the servers are down, the retailer cannot take any new orders, and they lose money. Beyond the immediate loss of revenue, they have also recognized that many of their customers use search engines to find them; if customers cannot get the product from this retailer, they will move to the next entry in the search results and purchase from a competitor. Because those customers have a negative experience with this retailer, they are less likely to come back and purchase from them the next time.

The retailer analyzed the feasibility of purchasing enough hardware to handle the seasonal spikes in orders but decided that purchasing hardware that would sit idle most of the year is not cost effective. Instead there is a company-wide initiative to change the applications to make them more resilient to spikes in the number of orders. The business has identified that they want to reduce the amount of unplanned downtime by 80% as compared to the previous year. They also want to ensure that any outage can be recovered from within 10 minutes.

The business has tasked the IT department to get this done, and the IT department has implemented an internal SLA with the business owners to make sure that there is accountability for the web site being available.

SQL Server Enterprise Edition provides many features that can be used by themselves or in combination to address the concerns of the retailer and ensure that the web site is available for their customers. The features are all available with the Enterprise Edition of SQL Server and complement each other. No single feature is a silver bullet to avoid unplanned downtime, but by combining hardware and software many of the common risks for unplanned downtime can be mitigated.

The CNET article referenced in the business impact can be found at http://news.cnet.com/8301-10784_3-9962010-7.html
  • Talk Track:
You can either use the scenario description below for your talking points or customize it to the needs of the customer.

Scenario Description:
A large manufacturer has many internal systems that they use for customer (internal and external), order, support, planning, and other information. They have built or customized many applications that access this data and make it easier for their employees to get their work done. The manufacturer is proud of the amount of automation that they have put into their systems to allow the computer to complete tasks that are repetitive and have well-defined rules. Because of this, the manufacturer has been able to keep its labor costs low and has gained an advantage over its competitors.

Occasionally the systems become unavailable. The manufacturer has studied the problems and found that the major causes are human error, site disasters, and hardware failure. The manufacturer has put an emphasis on training its employees to reduce the number and severity of human-caused outages but has not been able to completely eliminate them.

When the internal applications are not available, the employees have to resort to manual means to track new orders or questions about missed deliveries. There is then a lot more work, as the approval processes that were automated need to be carried out manually. All of this results in delays in delivering on promised orders and frustration on the part of the employees. The worst part of the system being unavailable is that the information the manufacturer needs to notify customers that delivery will be slow is also unavailable, so they usually end up finding out about missed commitments when a customer complains.

SQL Server Enterprise Edition provides many features that can be used by themselves or in combination to address the concerns of the manufacturer and ensure that the ERP and CRM systems are available for their employees. The features are all available with the Enterprise Edition of SQL Server and complement each other. No single feature is a silver bullet to avoid unplanned downtime, but by combining hardware and software many of the common risks for unplanned downtime can be mitigated.

The quote on lost data comes from http://www.continuitycentral.com/news04161.html
  • Animation:
A database cluster is set up with the network connections for data shown in grey and the network connections for the heartbeat, which allows the cluster to know which servers are available and which ones have experienced a problem.
Data is written to the shared disk through the primary server.
The primary server fails. This is detected, and the failover server takes over the operation of the database server.
A different application writes data to the shared disk through the primary server assigned to that application and is not affected by the failure of the other node. If that server were to fail, it would also use the shared failover server.

Talk Track:
After explaining what database clustering is, point out that having multiple nodes in a cluster reduces the cost per active server. Another advantage of multi-node clusters is that a single node can be set up to be a failover node for multiple active nodes. This means that fewer machine resources are dedicated to "waiting" for failures, so the overall utilization of your cluster is increased. Finally, the failover servers can be configured in a chain, so a failover node can have a failover of its own to protect against multiple machines failing at the same time.

Database Clustering:
Database clustering is one of the most mature technologies for providing high availability in SQL Server. Failover clustering works with the database engine, Analysis Services, and full-text indexing. With SQL Server 2008 Enterprise Edition the cluster can have up to 16 nodes. This provides tremendous flexibility in creating failover scenarios. Depending on the configuration of the cluster, some servers could be set up as standby servers, which would not incur any additional licensing costs. Another option is to analyze the workload of the various machines in the cluster and configure failover in a manner that allows different workloads to fail over to servers that can handle the additional load. A third option is to designate one or more servers in the cluster as failover servers. Multiple active servers can share the same failover server; when a machine fails, it fails over to its designated failover server. By having multiple active nodes share the same failover node, customers can reduce the cost of hardware and software that sits idle most of the time waiting for a failure. A multi-node failover cluster gives you flexibility in planning your workloads on the various cluster nodes to optimize utilization while maintaining high availability.

Technical details on database clustering are available in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms189134.aspx

Database Clustering Online Retail:
The online retailer could choose to create a database cluster that provides automatic failover of the database in the event that the server running the database stops running. The retailer has flexibility in setting up the cluster and how servers will fail over. During the failover process there will be short periods when the server is unavailable. These brief outages need to be accounted for, but by architecting the application to retry and log errors the retailer should be able to receive and process the vast majority of the orders placed on their web site.

Database Clustering Manufacturing:
The manufacturer identified catastrophic events and hardware failure as two of the major causes of downtime. Although rare, these events cause prolonged downtime because the necessary hardware, software, and data are not readily available. By clustering the databases in a multi-node cluster they can protect against the loss of a single machine. When a server experiences problems, a different server in the cluster takes over. There will be a short interruption of service as the new node starts up and begins servicing requests, but most users should be able to retry whatever they were doing and not lose any data. The manufacturer can have multiple nodes configured to fail over to a single failover node. Since hardware failures are rare, the failover node would be able to handle the workload of a single server at a time.

Evidence:
The evidence came from http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001625

Additional Information:
For technical slides, please check the SQL Server 2008 High Availability TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder
For SQL Server 2008 failover clustering training, check (TechReady7 DB317) http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=188061
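The retry-and-log approach described for the online retailer can be sketched as a small wrapper around any database operation. This is a generic illustration, not code from the deck: the operation callable, the TransientDbError type, and the retry counts are assumptions standing in for whatever driver and error types the application actually uses.

```python
import time

class TransientDbError(Exception):
    """Stand-in for a driver's 'connection lost' error during failover."""

def run_with_retry(operation, attempts=5, delay_seconds=2.0):
    """Retry an operation that may fail while the cluster fails over.

    Each failed attempt is logged and retried after a short pause, so the
    brief outage during failover is absorbed instead of surfacing to the
    end user. If every attempt fails, the last error is re-raised.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except TransientDbError as err:
            last_error = err
            print(f"attempt {attempt} failed: {err}; retrying")
            time.sleep(delay_seconds)
    raise last_error
```

In a real application the operation would be a call through the database driver, and the except clause would catch that driver's transient connection errors rather than this placeholder type.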
  • Animation:
A portion of the database is mirrored onto another server.
A page on the primary server is returning an 823 error.
The damaged page is automatically retrieved from the mirror and updated on the primary server so it no longer returns an 823 error.

Talk Track:
After explaining what automatic page repair is, point out that by taking advantage of the redundant copy of the database that a database mirror provides, when a torn page is detected the damaged page can be retrieved asynchronously from the mirrored copy. The application will receive an error, but by adding a minimal amount of code to the application or training the users to try again, the operation can be repeated.

Automatic Page Repair:
Automatic page repair only works with database mirroring. When the server detects SQL Server errors 823, 824, or 829, it starts an asynchronous process to retrieve the page from the mirror. The page repair works on both the primary and mirrored server. The application will not be notified when the page is repaired, so it will either need to poll the database, read from the mirror, retry the read after a period of time, or return the error to the user and ask them to retry the operation.

Technical details of automatic page repair can be found in SQL Server Books Online at http://msdn.microsoft.com/en-us/library/bb677167.aspx

Automatic Page Repair Online Retailer:
Once the retailer has made the commitment to mirror a portion of their application data, they can use automatic page repair to recover from certain types of errors. Applications that are modified to look for the error message returned can take several actions, including automatically retrying the operation after a short delay. If the data has been retrieved from the mirrored server, the operation can continue and the user will not know that an error has occurred. This results in an effective 100% uptime for the user with minimal changes by developers.

Automatic Page Repair Manufacturing:
The manufacturer may not be able to change the ERP and CRM systems to handle the error returned by the damaged page. This may result in errors appearing to the end user. Automatic page repair would still work and fix the damaged page, so the next time that data was accessed there would be no error. By training their staff to retry certain errors (and requesting that the ERP and CRM vendors handle the errors) the manufacturer can still ensure access to critical data.

Evidence:
Evidence comes from the case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000002409
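The notes above name the error numbers (823, 824, 829) that trigger automatic page repair and suggest retrying the read after a period of time, since the mirror repairs the page asynchronously. A minimal sketch of that retry logic; the SqlPageError type, the delay, and the read_page callable are illustrative assumptions — a real application would inspect the error number exposed by its database driver.

```python
import time

# SQL Server error numbers that database mirroring can repair
# automatically, per the notes above.
PAGE_ERRORS = {823, 824, 829}

class SqlPageError(Exception):
    """Stand-in for a driver error carrying a SQL Server error number."""
    def __init__(self, number):
        super().__init__(f"SQL Server error {number}")
        self.number = number

def read_with_page_repair_retry(read_page, retries=3, delay_seconds=1.0):
    """Retry a read that hits a torn-page error.

    The mirror repairs the damaged page asynchronously, so a short wait
    followed by a retry usually succeeds. Any other error, or exhausting
    the retries, raises immediately.
    """
    for attempt in range(retries):
        try:
            return read_page()
        except SqlPageError as err:
            if err.number not in PAGE_ERRORS or attempt == retries - 1:
                raise
            time.sleep(delay_seconds)
```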
  • Animation:
Data is written to two different servers and replicated to the other server.

Talk Track:
After explaining what peer-to-peer replication is, point out that allowing users to create and manage replication topologies without the need for central control provides flexibility in distributing data to different servers. The replication topology can be changed, with nodes added or removed, without affecting the other nodes. Replicating the data across servers protects against a single server failing. The ability of users to create and change the replication without involving central IT reduces the administrative overhead of replication.

Peer-to-Peer Replication:
Peer-to-peer replication is an extension of the transactional replication functionality that is available in SQL Server. It allows machines to be set up in a replication topology in which new nodes can be added and current nodes removed without affecting the replication between the other nodes. Because of the ease of adding and removing nodes, peer-to-peer replication can be used to set up replication where advanced IT knowledge is not necessary. SQL Server 2008 provides a wizard interface that makes creating peer-to-peer replication topologies easier. The Configure Topology page of the wizard includes a topology viewer that enables you to perform common configuration tasks, such as adding new nodes, deleting nodes, and adding new connections between existing nodes.

Technical details on peer-to-peer replication can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms151196.aspx

Peer-to-Peer Replication Online Retailer:
If the retailer takes an order but later does not deliver the ordered goods, they will also lose customers. To ensure that they are able to fulfill all completed orders, the retailer could make sure there are multiple copies of the data on different servers. Using peer-to-peer replication allows the retailer to replicate their catalog data to a different server. The data can then be read from any of the servers. The application can be architected to read from any one of the available nodes, and if a node is not available the application will read from a different node. An additional benefit could be to replicate the data to a server in a different data center that is distant enough that it would not be likely to be affected by a natural disaster. Fulfilling orders in a timely manner with resilient systems will build brand loyalty and help to ensure that customers become repeat customers.

Peer-to-Peer Replication Manufacturing:
The manufacturer could use peer-to-peer replication to replicate essential CRM and ERP data to other servers. The replicated data could be used for reporting and other uses outside of the functionality of the ERP and CRM systems. Since it is unlikely that the ERP and CRM systems are aware of the replication, any redirection to a replicated server would have to be handled by the IT department. Another advantage of peer-to-peer replication for the manufacturer is the ability for groups to set up replication without involving IT. For instance, a group of contractors from the same company or an auditing firm could set up peer-to-peer replication amongst themselves for data that is not related to the work of the manufacturing company, and could share that information without the manufacturer incurring any setup or maintenance costs.

Evidence:
Customer quote from the case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001468

Additional Information:
For technical slides, please check the SQL Server 2008 High Availability TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder
Peer-Peer Replication: New features in SQL Server 2008 and best practices (TechReady 6 DBCT308-R2) http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=185908
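Because every node in a peer-to-peer topology holds a copy of the data, the application pattern described for the retailer (read from any available node, and fall back to another node on failure) reduces to a simple fall-through loop. A hedged sketch only: the node names, the fetch callable, and the NodeUnavailable error are illustrative assumptions, not part of SQL Server's API.

```python
class NodeUnavailable(Exception):
    """Stand-in for a failed connection to one replication node."""

def read_from_any_node(nodes, fetch):
    """Return the first successful read from a list of peer nodes.

    Peer-to-peer replication keeps a full copy of the data on every
    node, so the application can treat the nodes as interchangeable
    and simply try the next one when a connection fails.
    """
    errors = []
    for node in nodes:
        try:
            return fetch(node)
        except NodeUnavailable as err:
            errors.append((node, err))
    raise RuntimeError(f"all nodes unavailable: {errors}")
```

A production version would typically also randomize or weight the node order so read traffic is spread across the topology rather than always hitting the first node.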
  • Animation:
The three database servers depicted on the slide are running in separate virtual machines. Because SQL Server does not need to change to run in a virtual environment, the connections to the applications and business logic servers are unaffected by whether SQL Server is running in a virtual machine or on physical hardware.

Talk Track:
After explaining what virtualization is, point out that unlimited virtualization provides significant cost benefits for multiple virtual machines running on a fairly standard host configuration. The ability to transfer machines between hosts provides availability and flexibility in load balancing servers. Restoring from a virtual machine snapshot is faster than building a new machine from scratch, so disaster recovery becomes quicker.

Virtualization:
Many organizations are using virtualization to increase server utilization and provide greater server availability. Because all of the resources and attributes associated with the SQL Server are contained within the virtual server, the virtual machine can be moved between different physical machines without having to change anything in the applications that access that SQL Server. With virtualized SQL Servers several common tasks become easier. It is easy to back up an entire server by taking a "snapshot" of the virtual machine. This snapshot can then be moved to another machine to provide a standby server in case of a failure. If the virtual machine resides on a disk external to the machine running the virtualization software, then the virtual machine can be started on a different host machine if the original host machine fails. This gives up-to-the-second recovery of all committed database transactions; the only work that would need to be redone is the work that wasn't completed when the original host machine crashed. With newer virtualization technology (VMware ESX Server and soon Microsoft Hyper-V), the restarting of a virtual machine on a different host can be automated, eliminating the delay caused when a person has to be notified that a virtual machine has stopped responding and restart it on a different server.

Virtualization can be used not only for unplanned downtime but also for migrating servers from one physical server to another as hardware is upgraded. You can move licenses across a server farm with flexibility; there is no longer a 90-day waiting period between moves. With Enterprise Edition, an unlimited number of virtual machines can be run on a server if all the physical processors on that server are licensed for Enterprise Edition. (There will be costs to license Windows and other software in the virtual machine, but that is outside the scope of this discussion.)

Information on Hyper-V can be found at http://www.microsoft.com/virtualization/default.mspx and http://www.microsoft.com/windowsserver2008/en/us/hyperv.aspx

Virtualization Online Retailer:
The retailer could choose to explore the benefits of virtualization to help them meet their requirements for availability. The online retailer could use virtualization as a method of scaling out their server farm during peak times of the year. The ability to add additional computing power as needed could relieve the stress on the other servers and avoid some of the downtime they have experienced in the past. As demand on the servers decreases, the additional servers can be shut down. With Enterprise Edition, an unlimited number of virtual machines can be run on a server if all the physical processors on that server are licensed for Enterprise Edition.

Virtualization Manufacturing:
The manufacturer could use virtual SQL Servers to run their ERP and CRM systems alongside other less demanding SQL Server instances on a physical machine. In the case of hardware failure, the virtual machines could be moved from one physical server to another to quickly get the systems running again. Another advantage of virtual machines for the manufacturer is that they can isolate the COTS applications in their own virtual machines, so they do not need to worry about possible side effects that an upgrade to one application might have on other applications running on the same machine.

Evidence:
Virtualization in Retail Study data came from the press release at http://www.microsoft.com/presspass/press/2008/apr08/04-29RetailVirtualizationPR.mspx
Microsoft Retail VPR case studies at http://www.microsoft.com/presspass/events/msretail/casestudies.mspx
Virtualization in Banking Survey 2008 press release at http://www.microsoft.com/presspass/press/2008/apr08/04-29BankVirtualizationPR.mspx
CIO|Insight Top IT Spending Priorities Report for 2009 - http://www.cioinsight.com/c/a/IT-Management/Top-IT-Spending-Priorities-for-2009

Additional Information:
For technical slides, please check the SQL Server 2008 High Availability TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder
For SQL Server 2008 failover clustering training, check (TechReady7 DB317) http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=188061
For licensing advantages of EE in virtualization and server mobility scenarios, please check the SQL Server Licensing: A New Look recording (Academy Live): http://infoweb2007/academy/catalog/webcastsessions/SQL58AL.htm
  • Animation:
A host server with two virtual servers is running. The red virtual server is used for reporting and is about to experience an increase in workload. To prepare for the increase, the virtual machine will be migrated to a different host server.
The virtual server is migrated to a new host. The traffic is automatically redirected.
The virtual machine on the original host server can be removed. This frees resources for other virtual machines on the host.

Talk Track:
With the Windows Server 2008 R2 Hyper-V hypervisor, resources can be migrated between host servers with a minimal amount of interruption. This allows organizations to migrate running SQL Server virtual machines between hosts to perform maintenance on the host machine or OS with a minimal amount of downtime. It could also be used to balance the workload on physical servers if one server sees an unexpected spike in workload: the virtual machine (or others on the same host) could be migrated to hosts that are not experiencing the spike.

Live Migration:
Live migration brings up a copy of the virtual machine on a different host. Once the new virtual machine is running, the traffic is redirected to it. When all client traffic is running from the new virtual machine, the old virtual machine is shut down.

Live Migration Online Retailer:
The retailer could use live migration to manage its virtual machines and handle changes in usage for databases. The retailer experiences spikes in activity when it is running a sale and around the holiday season. Before the spike in activity occurs, the retailer can move instances of SQL Server to different servers to enable the physical servers to handle the anticipated workload. In addition, on a monthly basis the retailer is able to move its reporting database to a dedicated host to run the month-end reports. This not only completes the reports faster, it also reduces the database contention on the server caused by running the large reports.

Live Migration Manufacturing:
The manufacturer could use live migration to move virtual machines between host machines, allowing the manufacturer to apply security patches and update the hardware in the physical machines. By migrating the virtual machines without any downtime, the manufacturer is able to support their SLA and not negatively impact the manufacturing operations while maintaining the stability and security of the system.

Evidence:
Evidence from the case study at http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?casestudyid=4000005268
  • Talk Track:
Disasters are rare occurrences but can have catastrophic consequences for your business. Ensuring that you have a backup that is located far enough from the source data to protect against tornados, fire, flood, or other natural disasters is the key to recovering from them.

Scenario Description:
A government agency maintains huge data sets that are mission critical. Backups are regularly performed, but backup devices fail occasionally, and the backups are stored local to the server except for weekly backups which are taken offsite, leaving the possibility of losing large amounts of data in the case of a catastrophic event.

The quote on lost productivity comes from http://www.continuitycentral.com/news04161.html
  • Animation:
The data in the database is backed up to the middle backup device. The backup is mirrored to the other two backup devices so that the customer is protected against the loss of a single backup device.

Talk Track:
Backup mirrors reduce the impact of a media failure in a backup. By having multiple copies of the backup, the failed media can be replaced by a mirrored copy. In addition, if the mirror copy is available online when the backup is restored, the corrupt data can automatically be read from the mirror, so the administrator does not have to do anything to allow the restore to continue.

Backup Mirrors:
Backups are the last line of defense when a disaster strikes and you need to get your data back. Backup mirrors allow you to mirror the backup onto additional media, and SQL Server is aware that the backup has been mirrored. All of the backup devices need to be available when the backup is made, but when the data is restored only one set of the backup media needs to be available. If an error is discovered in the backup media, SQL Server will automatically attempt to read from one of the mirrors to allow the restore to continue.

Backup Mirrors Government Agency:
The government agency can use backup mirrors to ensure that their critical data is backed up to more than one device. Since the mirrored backups run in parallel, they take no longer than a normal backup. The mirrored backup can be used to protect against backup device failures and to speed restore operations. One of the other features that is attractive to the government agency is that they can take the mirrored backup copies to another building, which gives them some of the benefits of an offsite backup while still allowing quick access to the backup tapes when they are needed. They feel that they are better protected against a fire, flood, or other localized disaster that affects their data center.

Technical details of backup mirrors can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms175053.aspx

Evidence:
The evidence quote comes from the whitepaper available at http://download.microsoft.com/download/a/c/d/acd8e043-d69b-4f09-bc9e-4168b65aaa71/SQL2008HA.docx
  • Animation:
Data is written to two different servers and replicated to the other server.

Talk Track:
After explaining what peer-to-peer replication is, point out that allowing users to create and manage replication topologies without the need for central control provides flexibility in distributing data to different servers. The servers can be geographically separate and thus help protect against natural disasters. The replication topology can be changed, with nodes added or removed, without affecting the other nodes. Replicating the data across servers protects against a single server failing. The ability of users to create and change the replication without involving central IT reduces the administrative overhead of replication.

Peer-to-Peer Replication:
Peer-to-peer replication is an extension of the transactional replication functionality that is available in SQL Server. It allows machines to be set up in a replication topology in which new nodes can be added and current nodes removed without affecting the replication between the other nodes. Because of the ease of adding and removing nodes, peer-to-peer replication can be used to set up replication where advanced IT knowledge is not necessary. SQL Server 2008 provides a wizard interface that makes creating peer-to-peer replication topologies easier. The Configure Topology page of the wizard includes a topology viewer that enables you to perform common configuration tasks, such as adding new nodes, deleting nodes, and adding new connections between existing nodes.

Technical details on peer-to-peer replication can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms151196.aspx

Peer-to-Peer Replication Government Agency:
The government agency can use their existing WAN and offices spread throughout their geographic area to replicate the critical data between different offices. This will help to protect the data against local disasters such as fires, floods, or tornados. The replication topology can be easily changed to account for any network connectivity issues or extended maintenance at one of the nodes. This allows nodes to be added to and removed from the replication topology as needed.

Evidence:
Customer quote from the case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001468

Additional Information:
For technical slides, please check the SQL Server 2008 High Availability TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder
Peer-Peer Replication: New features in SQL Server 2008 and best practices (TechReady 6 DBCT308-R2) http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=185908
  • Talk Track: Many applications are required to be available 99.99% of the time or more. Applying patches to the OS or to SQL Server can cause unacceptable levels of downtime. By using the high availability features in Enterprise Edition, you can reduce, and in some cases eliminate, downtime for routine maintenance.

Scenario Description: A multinational airline uses SQL Server to track information about its airplanes. There is never a good time for the system to be down, so the airline has to limit the amount of time the system is unavailable. Because airplanes are all around the world, there is no "night", "weekend", or "holiday" during which nobody needs access to the data. Adding to the problem, the airline has different busy seasons in different parts of the world, so it cannot even rely on a slow season to schedule maintenance on its critical systems. The airline has looked at specialty hardware and software that would limit the maintenance needed, but has found that it will not run all of the applications they need and comes at a much higher cost. They are looking for ways to make their investment in SQL Server stable enough to limit planned downtime.

SQL Server Enterprise Edition provides many features that can be used individually or in combination to address the airline's concerns and ensure that its systems are available for customers and employees. These features all ship with Enterprise Edition and complement each other. No single feature is a silver bullet that provides 100% uptime, but by combining hardware and software, many of the common factors that contribute to long maintenance outages can be mitigated. A good strategy for handling planned downtime also pays off during unplanned downtime. According to ContinuityCentral.com, "A solution that avoids both planned and unplanned downtime will typically generate a substantial return on investment based on planned downtime avoidance alone." - http://www.continuitycentral.com/feature0359.htm
  • Animation: A table and the physical layout of its index are shown. The logical order of rows in the index is always sorted, but the physical (disk) storage of the index is not guaranteed to be stored in order, or even near the other rows on disk; when the physical order differs from the logical order, the index is fragmented. Initially there is no fragmentation and all of the index pages are ordered correctly. New rows are added and existing rows are updated, causing the physical storage of the index to change. Especially disruptive are "page splits", where a full index page must be split in two: one part remains where it is and the other page is placed somewhere else on disk. Rows are deleted from the table, and at some point the index pages for those rows may no longer be needed. The end result of all these changes is that fragmentation gets worse over time. An online index rebuild is started; users can still add, change, and delete data while it runs. When the online operation finishes, the table and index have no more fragmentation, and the free space is consolidated to provide for additional growth.

Talk Track: In Standard Edition and previous versions of SQL Server, maintaining indexes kept users from doing anything with the table and effectively made the application unavailable. Maintenance therefore had to be performed only during regularly scheduled maintenance windows, and performance could noticeably degrade between windows. With SQL Server Enterprise Edition, these operations can run while users continue to work with the table, so performance can be kept at peak levels.

Online Operations: You can create, rebuild, or delete indexes while maintaining the availability of the index and the underlying table. Instead of acquiring exclusive locks and completing the operation as the only user of the table, the online option acquires shared locks and keeps a duplicate copy of the index while it is being rebuilt. Because the server must track changes to the index in a copy while the old index is still in use, an online operation requires additional disk space and can require more memory and processing power as it acquires and releases locks. Technical information on online operations can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms177442.aspx

Online Operations, Airline: Because the airline is constantly adding and changing data in its database, the tables are prone to fragmentation that leads to slower performance. SQL Server provides options for rebuilding the indexes to fix this. In other editions these maintenance operations lock other users out from making changes, which results in downtime. In Enterprise Edition the index maintenance operations can be completed online: the table remains available to other users, and they experience no downtime.

Evidence: Evidence taken from http://technet.microsoft.com/en-us/library/ms177442.aspx
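As a sketch, with a hypothetical dbo.FlightLog table standing in for the airline's data, the maintenance cycle might look like:

```sql
-- Check current fragmentation for all indexes on the table.
SELECT i.name, s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.FlightLog'),
         NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;

-- Rebuild while keeping the table available to readers and writers.
-- ONLINE = ON is the Enterprise Edition feature discussed above.
ALTER INDEX ALL ON dbo.FlightLog
REBUILD WITH (ONLINE = ON);
```

The extra disk, memory, and locking cost mentioned above is the price of the duplicate copy the online rebuild maintains.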
  • Animation: Two servers are connected to the application and business logic servers, and data is read from and written to them. The server on the left has an additional CPU added; the server on the right has RAM added. Both servers are able to continue to serve user requests as the hardware is upgraded.

Talk Track: When database usage spikes, it affects all of the databases on a server. Moving the busy database is usually unacceptable because of the downtime needed to move it, and moving other databases to free up resources means downtime for several other applications. Support for adding CPUs and RAM without bringing down the server lets you scale up your hardware to handle the increased load without taking any of your servers offline.

Hot Add CPU and RAM: Hot add CPU and RAM require hardware and OS support for adding the hardware. Once the hardware is added, SQL Server needs to be configured to use it; SQL Server does not start using the new hardware automatically, because you may want to reserve it for other workloads on the server. The CPU and RAM can be physical, or they can be additional resources added to a virtual machine. The virtualization layer and the OS must support adding CPU or RAM while the virtual machine is running; with that support, SQL Server can use the additional resources in a virtual machine without a restart. Technical details of hot add CPU can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb964703.aspx, and of hot add RAM at http://technet.microsoft.com/en-us/library/ms175490.aspx

Hot Add CPU and RAM, Airline: The airline can avoid much of its maintenance downtime by taking advantage of the ability to hot add CPU and RAM. While CPU and RAM cannot be removed, if a CPU is failing a new one can be inserted and SQL Server configured to use the new CPU instead of the failing one, effectively letting the airline replace a failing CPU while remaining operational. The physical hardware can be upgraded without taking the server offline, giving the IT department flexibility to upgrade physical servers without worrying about a maintenance window.

Evidence: Evidence from http://technet.microsoft.com/en-us/library/bb964703.aspx

Additional Information: For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck at http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
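SQL Server does not pick up hot-added hardware on its own. A sketch of the follow-up configuration, assuming the OS has already recognized the new resources (the memory value is illustrative):

```sql
-- After hot-adding CPUs, tell SQL Server to start scheduling on them.
RECONFIGURE;

-- Hot-added RAM is only used up to the 'max server memory' cap,
-- so raise the cap to cover the new memory.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 65536;  -- illustrative value
RECONFIGURE;
```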
  • Animation: Data is written to two different servers and replicated to the other server.

Talk Track: Peer-to-peer replication enables administrators and users to easily set up replication in which nodes can be added or removed without affecting the other nodes in the topology. Because replication supports WAN links and slow network connections, it is ideal for copying a database to branch offices or other locations where users have slow connections. Applications can be configured to access the data locally, but if that server is unavailable because of maintenance, to automatically connect to a different server with a replicated copy of the database, so users can continue to work even while a server is down for planned maintenance.

Peer-to-peer replication: Peer-to-peer replication is an extension of the transactional replication functionality available in SQL Server. It allows machines to be set up in a replication topology in which new nodes can be added and current nodes removed without affecting replication between the other nodes. Because nodes are easy to add and remove, peer-to-peer replication can be set up where advanced IT knowledge is not available. SQL Server 2008 provides a wizard that makes creating peer-to-peer topologies easier; the Configure Topology page of the wizard includes a topology viewer for common configuration tasks such as adding new nodes, deleting nodes, and adding new connections between existing nodes. Technical details on peer-to-peer replication can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms151196.aspx

Peer-to-peer Replication, Airline: The airline could use peer-to-peer replication to set up servers around the world with copies of the data. Applications could use these up-to-date copies during the maintenance window for the primary server. After maintenance on the primary server is completed, the changed data can be replicated back to it and the applications reset to use it.

Evidence: Customer quote from case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001468

Additional Information: For technical slides, see the SQL Server 2008 High Availability TDM deck at http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder and Peer-to-Peer Replication: New Features in SQL Server 2008 and Best Practices (TechReady 6 DBCT308-R2) at http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=185908
  • Animation: A database cluster is set up, with the network connections for data shown in grey alongside the heartbeat connections that let the cluster know which servers are available and which have experienced a problem. Data is written to the shared disk through the primary server. The primary server fails; this is detected, and the failover server takes over operation of the database server. A different application writes data to the shared disk through the primary server assigned to that application and is not affected by the failure of the other node; if that server were to fail, it would also use the shared failover server.

Talk Track: If the operating system needs to be rebooted because a patch was applied, a cluster can significantly reduce the amount of downtime for the applications. By planning the order of the upgrade, unused failover nodes can be patched first and databases then failed over to those patched servers; the primary server can then be patched. Since multi-node clusters allow a chain of failover servers, a series of servers can be set up as failover targets, so the application remains protected by a failover node, which might include the original primary server.

Database Clustering: Database clustering is one of the most mature technologies for providing high availability in SQL Server. Failover clustering works with the Database Engine, Analysis Services, and full-text indexing. With SQL Server 2008 Enterprise Edition, a cluster can have up to 16 nodes, which provides tremendous flexibility in creating failover scenarios. Depending on the configuration, some servers can be set up as standby servers, which do not incur additional licensing costs. Another option is to analyze the workload of the various machines in the cluster and configure failover so that different workloads fail over to servers that can handle the additional load. A third option is to designate one or more servers in the cluster as failover servers; multiple active servers can share the same failover server, and when a machine fails it fails over to its designated failover server. By having multiple active nodes share the same failover node, customers reduce the cost of hardware and software that sits idle most of the time waiting for a failure. A multi-node failover cluster gives you flexibility in planning workloads across the cluster nodes to optimize utilization while maintaining high availability. Technical details on database clustering are available in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms189134.aspx

Database Clustering, Airline: The airline could use a database cluster to reduce downtime due to patches for SQL Server and Windows. The application would experience two brief interruptions: the first as the primary node is failed over to a different node so maintenance can be performed on it, and the second, once maintenance is complete, as the application fails back to the primary node.

Evidence: The evidence came from http://www.continuitycentral.com/feature0359.htm

Additional Information: For technical slides, see the SQL Server 2008 High Availability TDM deck at http://arsenalcontent/ContentDetail.aspx?ContentID=125965&view=folder and SQL Server 2008 Failover Clustering training (TechReady7 DB317) at http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=188061
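From inside a connection, an administrator can confirm the instance is clustered and see which node currently owns it, which is useful when planning the rolling-patch sequence described above. A sketch:

```sql
-- Is this instance clustered, and which physical node is it running on?
SELECT SERVERPROPERTY('IsClustered')                  AS is_clustered,
       SERVERPROPERTY('ComputerNamePhysicalNetBIOS')  AS current_node;

-- List the nodes that make up the failover cluster.
SELECT NodeName
FROM sys.dm_os_cluster_nodes;
```

On a non-clustered instance, IsClustered returns 0 and the DMV returns no rows.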
  • Animation: A host server with two virtual servers is running. The red virtual server is used for reporting and is about to experience an increase in workload. To prepare, the virtual machine is migrated to a different host server; traffic is automatically redirected, and the virtual machine on the original host can be removed, freeing resources for other virtual machines on that host.

Talk Track: With the Windows Server 2008 R2 Hyper-V hypervisor, resources can be migrated between host servers with minimal interruption. This allows organizations to migrate running SQL Server virtual machines between hosts to perform maintenance on the host machine or OS with a minimum of downtime. It can also be used to balance the workload on physical servers: if one server sees an unexpected spike in workload, the affected virtual machine (or others on the same host) can be migrated to hosts that are not experiencing the spike.

Live Migration: Live migration brings up a copy of the virtual machine on a different host. Once the new virtual machine is running, traffic is redirected to it; when all client traffic is running against the new virtual machine, the old one is shut down.

Live Migration, Airline: The airline could use live migration to move virtual machines between hosts so it can apply security patches and upgrade the hardware in the physical machines. By migrating the virtual machines without downtime, the airline can meet its SLA and avoid any negative impact on operations while maintaining the stability and security of the system.

Evidence: Evidence from case study at http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?casestudyid=4000005268
  • Talk Track: Surveys show that the average IT organization spends 70% of its budget maintaining existing systems. By reducing the cost of maintaining SQL Server, organizations can free up some of that budget to create new solutions that help the business grow.

Scenario Description: A major manufacturer tracks parts, production, and sales through its databases. Recently, demand has grown across all divisions for custom reporting, while a partner-facing extranet is growing in popularity with vendors and distributors. This new demand exceeds original planning: reporting for big divisions can now not only bottleneck other reports, but also slow down the transactional systems used for daily business and block distributors from accessing key data on the extranet. This leads to lost time waiting for slow transactional systems, slow delivery of key reports, unhappy distributors, and repeated help-desk trouble tickets.

SQL Server Enterprise Edition provides features that can be used individually or in combination to address the manufacturer's concerns and ensure that SQL Server is responsive to its employees. These features all ship with Enterprise Edition and complement each other. No single feature is a silver bullet that avoids contention, but by combining appropriate hardware and software scalability features, many of the common problems can be mitigated.
  • Animation: As user connections come into the server, they are partitioned into different resource pools based on some attribute of the connection; within each pool there are limits on CPU. The first query comes in and places a load on the server. The second connection goes to a different pool and then executes a command; the load exceeds the limit for that pool and would overburden the server, so SQL Server limits the amount of CPU the command can take until it completes. The command eventually completes, but more slowly than it would have outside this pool. The next command executed in the same pool does not exceed the pool's limit and finishes without being restricted.

Talk Track: Reducing maintenance costs often means consolidating databases from different applications onto a single server, which can have unintended consequences: month-end reports, for instance, could impact the performance of your order-taking system. To reduce that impact, ensure that databases can be consolidated, and keep high-priority workloads from being affected by lower-priority ones, Resource Governor allows administrators to limit the resources available to an application and ensure that high-priority workloads have the resources they need to complete consistently.

Resource Governor: Resource Governor allows an administrator to create a function that divides workloads into different pools. Each pool can have minimum and maximum values for memory and CPU usage, ensuring that a certain set of resources is available to the pool. As a connection is made to SQL Server, it is put into a pool and uses the resources assigned to that pool. A runaway query cannot monopolize all the resources on the server, and all users have a more consistent experience. The monitoring tools built into Windows and SQL Server let you track resource usage by each pool so you can further fine-tune the limits. Technical details on Resource Governor can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb933866.aspx

Resource Governor, Manufacturer: The manufacturer could use Resource Governor to put month-, quarter-, or year-end processes into a resource pool separate from the normal workload. The resource pools divide up CPU and memory to ensure that the normal workload is not slowed down by queries that are not urgent. This reduces the number of help-desk tickets opened and ensures that employees and partners on the extranet can get the data they need, while still allowing the extra workload to run during regular business hours.

Evidence: Evidence quote comes from Microsoft case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001474

Additional Information: For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck at http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
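The pool, group, classifier, and limits described above map directly onto DDL. A sketch, run in master; the pool name, login name, and percentages are illustrative:

```sql
-- Cap the (hypothetical) reporting workload at 30% CPU and memory.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO

-- The classifier function runs for every new connection and returns
-- the name of the workload group that connection belongs to.
CREATE FUNCTION dbo.fn_classify_workload() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'reporting_login'   -- illustrative login
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classify_workload);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Connections that do not match the classifier fall into the default group and run unrestricted.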
  • Animation: The detach phase has already been completed and the database is now in the refresh stage. The reporting volumes are set to read/write mode and the data is refreshed from the source server. The attach phase completes with the source server detached, the volumes set to read-only, and the reporting volumes attached to the reporting servers, which can then produce reports with the updated data. Data is read from the reporting volume through each of the reporting servers; since the data is read-only, the same data is returned regardless of which reporting server serves it.

Talk Track: Reporting servers can proliferate and cause maintenance headaches. They tend to have a lot of RAM and disk space and need to be refreshed with updated data periodically. By consolidating the reporting data onto a SAN, the need for disk space on each reporting server is greatly reduced, and the updated data only needs to be refreshed in one location, reducing the burden of updating multiple reporting servers. The reporting servers themselves can be commodity machines with a sufficient amount of RAM, making them easier to maintain. If a reporting server is offline, the application can be redirected to a different reporting server; because all the reporting servers use the same source data, the user gets the same data in their report.

Scalable Shared Database: A scalable shared database allows a company to copy its data to a storage area network (SAN), where it is marked read-only and used for reporting. The data copied to the SAN can be OLTP tables or SSAS cubes. When the data changes, the shared database is updated in three stages (detach, refresh, attach):
1. The workload is stopped on the reporting servers and the reporting volume database is detached.
2. The data is refreshed from the source server.
3. The reporting volume is attached to the reporting database and the workload is started on the reporting servers again.
Technical information on scalable shared databases is available in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms345392.aspx

Scalable Shared Database, Manufacturer: The manufacturer could use a scalable shared database to offload the month-end reports to dedicated reporting servers. The number of reporting servers can be increased or decreased to handle the load placed on them.

Evidence: Evidence quote comes from SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb933866.aspx
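The refresh and attach stages above can be sketched in T-SQL; the database name and file paths are illustrative, and the SAN volume remount between the two halves is done at the storage layer, not in SQL:

```sql
-- Refresh stage, on the source/build server: after reloading the data,
-- mark the database read-only and detach it from the build server.
ALTER DATABASE ReportDB SET READ_ONLY;
EXEC sp_detach_db @dbname = N'ReportDB';

-- Attach stage, on each reporting server, after the SAN volume holding
-- the files has been mounted read-only on that server.
CREATE DATABASE ReportDB
    ON (FILENAME = N'R:\Data\ReportDB.mdf'),
       (FILENAME = N'R:\Data\ReportDB_log.ldf')
    FOR ATTACH;
```

Because every reporting server attaches the same read-only files, each one returns identical results.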
  • Animation: The DAC is defined by the data-tier developer and pushed through the central management server to the servers in the "finance" group. The client can connect to the servers in the group and the DAC is applied; when the client connects to a different server, the same DAC is applied.

Talk Track: When managing a large number of servers, making sure all of them are performing correctly and aggregating that information into a single result can become a large part of an administrator's workload. With Application and Multi-Server Management, the administrator can accomplish these tasks in a minimum amount of time.

Application and Multi-Server Management (AMM): AMM models an organization's SQL Server instances in a unified view. The SQL Server Control Point provides the central point for assessing SQL Server resource utilization. Once a Control Point is established, it provides a dashboard and summary for each enrolled instance, as well as for registered data-tier applications, across a variety of dimensions. Administrators can drill down into details for enrolled instances and data-tier applications to troubleshoot or simply gather more detailed information. By comparing performance data to policies, administrators can identify bottlenecks and consolidation opportunities. Technical details on AMM can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ee210557(SQL.105).aspx

Application and Multi-Server Management, Manufacturer: The manufacturer could use AMM to define a data-tier application. The DAC could define the utilization policies that the manufacturer wants to monitor. When a server is underutilized or overutilized, the manufacturer can reallocate resources to use all of the servers in the server group more efficiently, without having to manually gather and report on usage statistics.
  • Talk Track: Organizations see server consolidation as a means of saving money, energy, and the effort of maintaining multiple servers. Server consolidation also allows applications to take better advantage of the hardware the organization already owns.

Scenario Description: An energy company is looking to modernize its infrastructure and plans to reduce hardware, licensing, and power expenses by consolidating its many SQL Server machines onto a smaller number of more powerful, modern machines. Where possible, SQL Servers are consolidated onto a single physical server. Some older applications are not instance-aware and therefore cannot be combined as SQL Server instances; these servers are created as virtual machines and consolidated onto a server running Hyper-V.

Administration information taken from http://www.dell.com/downloads/global/solutions/public/white_papers/dell_2650_to_sql_2008.pdf; server utilization statistics quoted in http://www.cirba.com/news/press_releases/2006/060828.htm

Additional Information: For technical slides, see the SQL Server consolidation deck at http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder; for level 300-400 slides, see the SQL Server 2008 Server Consolidation TDM deck (SQL37PAL) at http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm
  • Animation: The DAC is defined by the data-tier developer and pushed through the central management server to the servers in the "finance" group. The client can connect to the servers in the group and the DAC is applied; when the client connects to a different server, the same DAC is applied.

Talk Track: When managing a large number of servers, making sure all of them are performing correctly and aggregating that information into a single result can become a large part of an administrator's workload. With Application and Multi-Server Management, the administrator can accomplish these tasks in a minimum amount of time.

Application and Multi-Server Management (AMM): AMM models an organization's SQL Server instances in a unified view. The SQL Server Control Point provides the central point for assessing SQL Server resource utilization. Once a Control Point is established, it provides a dashboard and summary for each enrolled instance, as well as for registered data-tier applications, across a variety of dimensions. Administrators can drill down into details for enrolled instances and data-tier applications to troubleshoot or simply gather more detailed information. By comparing performance data to policies, administrators can identify bottlenecks and consolidation opportunities. Technical details on AMM can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ee210557(SQL.105).aspx

Application and Multi-Server Management, Energy: The energy company could use AMM to define a data-tier application. The DAC could define the utilization policies that the energy company wants to monitor. When a server is underutilized or overutilized, the energy company can reallocate resources to use all of the servers in the server group more efficiently, without having to manually gather and report on usage statistics.
  • Animation: As user connections come into the server, they are partitioned into different resource pools based on some attribute of the connection; within each pool there are limits on CPU. The first query comes in and places a load on the server. The second connection goes to a different pool and then executes a command; the load exceeds the limit for that pool and would overburden the server, so SQL Server limits the amount of CPU the command can take until it completes. The command eventually completes, but more slowly than it would have outside this pool. The next command executed in the same pool does not exceed the pool's limit and finishes without being restricted.

Talk Track: When multiple databases are consolidated onto the same server, there can be unintended consequences: a low-priority workload may take resources away from a higher-priority workload. Resource Governor lets administrators control the amount of CPU or RAM a process can use, so high-priority workloads continue uninhibited and all workloads have a consistent response time.

Resource Governor: Resource Governor allows an administrator to create a function that divides workloads into different pools. Each pool can have minimum and maximum values for memory and CPU usage, ensuring that a certain set of resources is available to the pool. As a connection is made to SQL Server, it is put into a pool and uses the resources assigned to that pool. A runaway query cannot monopolize all the resources on the server, and all users have a more consistent experience. The monitoring tools built into Windows and SQL Server let you track resource usage by each pool so you can further fine-tune the limits. Technical details on Resource Governor can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb933866.aspx

Resource Governor, Energy: The energy company could use Resource Governor to limit the resources available to an individual named instance of SQL Server, so that no single instance can monopolize all of the resources on the server and starve the other instances. Another useful strategy is to allow an instance to use multiple CPUs (Enterprise Edition allows more than 4 CPUs) to spread the workload across multiple processors, letting queries run in parallel, complete sooner, and reduce contention with other named instances.

Evidence: Evidence quote comes from Microsoft case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001474

Additional Information: For technical slides, see the SQL Server consolidation deck at http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder; for level 300-400 slides, see the SQL Server 2008 Server Consolidation TDM deck (SQL37PAL) at http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm; see also the SQL Server 2008 Performance and Scalability TDM deck at http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
  • Animation:NoneTalk Track:SQL Server Enterprise will allow up to 50 named instances to run on the same server. Each named server has its own set of users and resources so they are isolated from each other. This allows administrators to plan maintenance and downtime for each instance separate from the other instances.Multiple Instances:An instance, whether default or named, has its own set of program and data files, as well as a set of common files shared between all instances of SQL Server on the computer. The Database Engine, Analysis Services, and Reporting Services can have instances. Each instance has its own security context with its own set of accounts and permissions.With servers typically having a low utilization rate and relatively high numbers of processors/cores and memory many instances of SQL server can be consolidated onto a larger server. Not only do you save money in hardware and software costs but also save money for energy and HVAC costs. The optimal utilization for a server should be around 50% to allow for growth of individual databases and spikes in usage. Having the ability to put up to 50 named instances on a single server allows you to consolidate the database servers onto a more efficient server.Multiple Instances Energy:The energy company can test its current applications. They have many options available to them. For databases that can be combined they can combine the databases onto a single SQL Server. Other applications that can connect to a named instance of SQL Server could be consolidated with other named instances of SQL server on a single physical server. The named instances each have their own security settings so granting a user permissions to one named instance doesn't allow them to see any other named instances. As long as the server has sufficient resources the named instances will not interfere with each other. 
The energy company uses historic usage data to understand which SQL Servers should be consolidated on the same machine. They can also use other features such as the Resource Governor to reduce the impact of a runaway query on the other named instances.

Evidence:
Statistics on server usage taken from the report at http://queue.acm.org/detail.cfm?id=1348590, PDF page 22.

Case Study:
http://h71028.www7.hp.com/ERC/downloads/4AA2-3208ENW.pdf
On a 64-bit Itanium II processor, the study found that server consolidation brought down the TCO by 12.4% over a 3-year period.

In order to reduce operating costs and improve service levels, the organization conducted a study on consolidating 84 servers running various versions of Microsoft SQL Server. The goal was to bring as many of these databases as possible together into a single datacenter with standard practices for high availability and reliability.

The study examined all costs associated with the acquisition and operation of the two alternatives over a three-year analysis period. Hard (or direct) costs included initial hardware and software purchases; annual maintenance; internal labor and professional services fees for installation, configuration, migration, and ongoing systems support; and facilities costs for power, cooling, and datacenter floor space. Soft (or indirect) costs included employee productivity and revenue losses from downtime and opportunity costs related to business agility. Table 1 below shows that over the three-year analysis period the HP Integrity solution yielded a 12.4%, or $800,692, lower total cost of ownership compared to the x86-based server solution.

Additional Information:
For technical slides please check the SQL Server consolidation deck: http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder
For level 300-400 slides please check the SQL Server 2008 Server Consolidation TDM Deck (SQL37PAL): http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm
  • Animation:
The three database servers depicted on the slide are running in separate virtual machines. Because SQL Server does not need to change to run in a virtual environment, the connections to the applications and business logic servers are unaffected by whether the SQL Server is running in a virtual machine or on physical hardware.

Talk Track:
A lot of organizations are looking to a virtualized environment to consolidate multiple physical servers onto a single server. The virtualized environment provides advantages in server utilization and power savings. For SQL Server, a virtualized environment can save money on licensing costs: if all the physical processors on a server are licensed, an unlimited number of virtual SQL Server instances can be run without paying for more SQL licenses. Enterprise Edition also has the advantage of allowing virtual machines to be moved between servers in a server farm so you can balance the load on physical servers.

Virtualization:
Many organizations are using virtualization to increase server utilization and provide greater server availability. Because all of the resources and attributes associated with the SQL Server are contained within the virtual server, the virtual machine can be moved between different physical machines without having to change anything in the applications that access that SQL Server.

With virtualized SQL Servers, several common tasks become easier. It is easy to back up an entire server by taking a "snapshot" of the virtual machine. This snapshot can then be moved to another machine to provide a standby server in case of a failure. If the virtual machine resides on a disk external to the machine running the virtualization software, then the virtual machine can be started on a different host machine if the original host fails. This gives up-to-the-second recovery of all committed database transactions; the only work that would need to be redone is the work that wasn't completed when the original host machine crashed.
With newer virtualization technology (VMware ESX Server and soon Microsoft Hyper-V), the restarting of a virtual machine on a different host can be automated, eliminating the delay caused when a person has to be notified that a virtual machine has stopped responding and then restart it on a different server. Virtualization can be used not only for unplanned downtime but also for migrating servers from one physical server to another as hardware is upgraded.

You can move licenses across a server farm with flexibility; there is no longer a 90-day waiting period between moves. With Enterprise Edition, an unlimited number of virtual machines can be run on a server if all the physical processors on that server are licensed for Enterprise Edition. (There will be costs to license Windows and other software in the virtual machine, but that is outside the scope of this discussion.)

Information on Hyper-V can be found at http://www.microsoft.com/virtualization/default.mspx and http://www.microsoft.com/windowsserver2008/en/us/hyperv.aspx

Virtualization Energy:
The energy company may have some instances of SQL Server that cannot run as a named instance or whose licensing requires that they not run on the same server as other applications. In this case they can still consolidate onto fewer physical servers using virtual machines. Each virtual machine is a completely isolated environment, and it will appear to the application that the database is running on a dedicated server. With SQL Server Enterprise Edition, an unlimited number of virtual machines can be run on a server if all the physical processors on that server are licensed for Enterprise Edition.
(There will be costs to license Windows and other software in the virtual machine, but that is outside the scope of this discussion.)

Evidence:
Virtualization in Retail study data came from the press release at http://www.microsoft.com/presspass/press/2008/apr08/04-29RetailVirtualizationPR.mspx
Microsoft Retail VPR case studies at http://www.microsoft.com/presspass/events/msretail/casestudies.mspx
Virtualization in Banking Survey 2008 press release at http://www.microsoft.com/presspass/press/2008/apr08/04-29BankVirtualizationPR.mspx

Additional Information:
For technical slides please check the SQL Server consolidation deck: http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder
For level 300-400 slides please check the SQL Server 2008 Server Consolidation TDM Deck (SQL37PAL): http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm
For licensing advantages of EE in virtualization and server mobility scenarios please check the SQL Server Licensing: A New Look recording (Academy Live): http://infoweb2007/academy/catalog/webcastsessions/SQL58AL.htm
  • Animation:
A database is shown. As data compression is enabled at either the row or page level, the size of the database on disk shrinks.

Talk Track:
Regardless of the consolidation strategy that you pursue, reducing the size of data on disk will result in cost savings because you can store more data for the same disk cost. In addition, the compressed data can be read from and written to disk faster.

Data Compression:
Data compression shrinks the data on disk. Compression happens at the row or page level on a table-by-table basis. The smaller size results in faster data retrieval and storage. Compression increases CPU usage and might adversely affect other concurrent operations. Technical information on data compression can be found in SQL Books Online at http://technet.microsoft.com/en-us/library/cc280449.aspx

Data Compression Energy:
The energy company realizes that putting multiple named instances of SQL Server on the same machine could tax the I/O subsystem on that machine. In general, databases tend to tax the I/O subsystem before other resources. By compressing the data in large or frequently accessed tables, the data can be retrieved in its compressed form, allowing more databases to share the limited I/O subsystem.
Evidence:
The quote comes from the case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000002453

Additional Information:
For technical slides and training on SQL Server 2008 compression please check SQLCAT - Data Compression & Backup Compression Lessons Learned from Customer Deployments (techReady8 DB314): http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190553
and Reducing Storage Cost in SQL Server 2008 (TechReady7 DB315): http://voyager/aspen/lang-en/management/LMS_CNT_LaunchCourse.asp?UserMode=0&iLedefID=188059&EventID=0
For technical slides please check the SQL Server consolidation deck: http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder
For level 300-400 slides please check the SQL Server 2008 Server Consolidation TDM Deck (SQL37PAL): http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm
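Enabling compression on a table is a two-step exercise in practice: estimate the savings first, then rebuild with the chosen compression level. A sketch, with a hypothetical table name:

```sql
-- Estimate the savings before committing (try both ROW and PAGE)
EXEC sp_estimate_data_compression_savings
    @schema_name      = N'dbo',
    @object_name      = N'MeterReadings',   -- hypothetical table
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

-- Rebuild the table with page-level compression
ALTER TABLE dbo.MeterReadings
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Page compression includes row compression and typically saves more space, at a higher CPU cost, so the estimate is worth running before choosing.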
  • Talk Track:
SQL Server 2008 Enterprise Edition has several other features that help when consolidating servers.

Support for more than 8 CPUs:
Enterprise Edition scales up to use more than 8 CPUs, up to the operating system maximum. Standard Edition is limited to 4 processors regardless of how many are installed in the machine.

Hot Add CPU and RAM:
Hot add CPU and RAM require hardware and OS support for adding the hardware. Once the hardware is added, SQL Server needs to be configured to use it; SQL Server does not automatically start using the new hardware because you may want to reserve it for other workloads on the server. The CPU and RAM can be either physical or additional resources added to a virtual machine. The virtualization layer and the OS must both support the addition of CPU or RAM while the virtual machine is running. With this support, SQL Server can take advantage of additional resources added to a virtual machine without having to restart the server.

Technical details of hot add CPU can be found in SQL Books Online at http://technet.microsoft.com/en-us/library/bb964703.aspx. Technical details of hot add RAM can be found in SQL Books Online at http://technet.microsoft.com/en-us/library/ms175490.aspx

Additional Information:
For technical slides please check the SQL Server consolidation deck: http://arsenalcontent/ContentDetail.aspx?ContentID=118518&view=folder
For level 300-400 slides please check the SQL Server 2008 Server Consolidation TDM Deck (SQL37PAL): http://infoweb2007/academy/catalog/webcastsessions/SQL37PAL.htm
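The "configure SQL Server to use the new hardware" step is small but easy to forget. A sketch of what an administrator would run after the OS recognizes the added resources (the memory figure is illustrative):

```sql
-- After the OS recognizes a hot-added CPU, tell SQL Server to start using it
RECONFIGURE;

-- Hot-added RAM is only used up to the 'max server memory' cap; raise it if needed
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 49152;   -- value in MB; illustrative
RECONFIGURE;
```

Leaving 'max server memory' unchanged is exactly how you would reserve the new RAM for other workloads on the consolidated server.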
  • Talk Track:
Data is a valuable asset for organizations. Protecting that data and determining who is accessing it is important for all organizations; in some cases there may be legal or financial penalties for not properly securing the data. SQL Server Enterprise Edition has the tools you need to secure and audit your data.

Scenario Description:
A pharmaceutical research company stores data about drugs and clinical trials in its database. This database contains data about patients, clinical trials, and research projects, and the data is covered by legal agreements and privacy regulations. If a copy of the database, whether from a stolen server, a disgruntled employee, or an off-site backup, were to be lost or stolen, the company could be liable for large fines under HIPAA regulations. Under other regulations they could be responsible for notifying the patients whose data is missing and providing services to make sure that identity fraud or other crimes are not committed against their patients. The data in the database could also potentially give a competitor the information needed to bring a similar medication to market without spending the millions of dollars on the initial research for the product.

SQL Server Enterprise Edition also allows the user to enable Common Criteria compliance. Common Criteria is an internationally accepted set of IT security standards. Common Criteria does not "make an application secure" but instead ensures that the application meets a minimum level of security. Because Common Criteria provides a standard, it is a requirement for many governments, industries, and large enterprise customers before they will deploy applications. SQL Server 2008 Enterprise Edition makes Common Criteria available via the common criteria compliance enabled option.

Another problem facing this research company is unauthorized access to data from someone inside their network.
Beyond the usual risk of disgruntled employees, the research company uses many temporary workers for data entry, as well as contractors and vendors, in order to cut costs and supplement their staff. These users have to have access to the data they are working on, but the company is concerned that they might decide to download all of the data they can access. Traditional security is too granular to handle the case where a user must have rights to read data but shouldn't be able to read more than a certain number of records at a time. The organization wants to audit all database activity on the sensitive data and review it for unusual patterns of data access. While they may not be able to stop the initial access to the data, they will be able to stop it as soon as possible and minimize the damage. The hope from the business is that they will be able to find people who are abusing the system and stop them from transmitting the data outside of the office, preventing problems before they happen.

SQL Server Enterprise Edition provides many features that can be used by themselves or in combination to address the concerns of the pharmaceutical company and ensure that their data is protected. These features are all available in the Enterprise Edition of SQL Server and complement each other. There is no such thing as unbreakable security, and no single feature is a silver bullet to avoid data loss, but by combining hardware and software many of the common risks of dealing with sensitive data can be mitigated. In addition, the task of "proving" to auditors that you are in compliance can be made easier.

The real reason for the pharmaceutical company (or any other company) to invest in security is to protect the asset that the data represents. Each company will have different answers to the questions about the cost to gather data and the competitive advantage that it provides. The cost to develop a new medication can be extreme.
According to a recent study, "Estimates about the cost of developing a new drug vary widely, from a low of $800 million to nearly $2 billion per drug. Even the high end of those estimates may soon be considered a bargain." (Source: http://www.america.gov/st/econ-english/2008/April/20080429230904myleen0.5233981.html)

In addition to the cost and competitive advantage of the data, there can be regulatory and other issues with disclosing data. If the data falls under SOX or HIPAA regulation, there could be fines or jail time for failure to protect the data properly. According to http://www.hipaas.com/what.html, "For misuse of patient data the fine could be $250,000 plus jail time."

The cost estimate of $14 million per incident comes from http://www.computerworld.com/securitytopics/security/story/0,10801,106180,00.html
The cost estimate of $182 per compromised record comes from http://www.networkworld.com/news/2006/110206-data-breach-cost.html

Additional Information:
For technical slides please check the SQL Server 2008 Security TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=123738&view=folder
For training please check Security (Auditing, Encryption) -- Data Security, Admin Security (techReady8 DB315): http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190554
  • Animation:
The database has TDE enabled. Data is written to the database. TDE encrypts the data, changing it from something human-readable into seemingly random data. When the data is backed up (the blue cylinder), it is written in encrypted form.

Talk Track:
Previously, encrypting data in the database involved changing data types, changing application code, and managing keys. With Enterprise Edition and transparent data encryption you don't have to change a thing. SQL Server encrypts data when it is written to disk and decrypts it when it is read back into memory, so the applications and users do not know that the data is encrypted. The data is encrypted when it is "at rest."

Transparent Data Encryption:
Implementing encryption in a database traditionally involves complicated application changes such as modifying table schemas, removing functionality, and significant performance degradation. Range and equality searches are not allowed, and the application must call built-ins (or stored procedures or views that automatically use these built-ins) to handle encryption and decryption, all of which slow query performance. TDE solves these problems by simply encrypting everything. Thus, all data types, keys, indexes, and so on can be used to their full potential without sacrificing security or leaking information on the disk.

TDE operates at the I/O level through the buffer pool. Thus, any data that is written into the database file (*.mdf) is encrypted. Snapshots and backups are also designed to take advantage of the encryption provided by TDE, so these are encrypted on disk as well. Data that is in use, however, is not encrypted, because TDE does not provide protection at the memory or transit level.
The transaction log is also protected, but additional caveats apply. Data in transit is not protected because the information is already decrypted before it reaches this point; SSL should be enabled to protect communication between the server and any clients.

Encrypting at the I/O level also allows snapshots and backups to be encrypted; all snapshots and backups created by the database will be encrypted by TDE. The certificate that was used to protect the database encryption key when the file was written must be on the server for these files to be restored or reloaded. Thus, you must maintain backups for all certificates used, not just the most current certificate.

Encryption is CPU intensive and is performed at I/O time. Therefore, servers with low I/O and a low CPU load will see the least performance impact. In general, TDE and cell-level encryption accomplish two different objectives. If the amount of data that must be encrypted is very small, or if the application can be custom designed to use it (or the application has custom design requirements) and performance is not a concern, cell-level encryption is recommended over TDE. Otherwise, TDE is recommended for encrypting existing applications or for performance-sensitive applications.

Technical details on transparent data encryption are available in SQL Books Online at http://technet.microsoft.com/en-us/library/bb934049.aspx

Transparent Data Encryption Pharmaceutical:
The pharmaceutical company could use TDE to protect the data on patients and clinical trials in its data systems. With TDE, the data is encrypted any time it is written to disk. This not only protects the data from unauthorized access if a backup tape or hard drive is stolen or misplaced, but also helps the company defend against fines and lawsuits for not protecting the data in a responsible manner.
Evidence:Evidence comes from Microsoft case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001003Additional Information:For Technical Slides please check SQL Server 2008 Security TDM Deckhttp://arsenalcontent/ContentDetail.aspx?ContentID=123738&view=folderFor training  please check Security (Auditing, Encryption) -- Data Security, Admin Security (techReady8 DB315) http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190554
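The full TDE setup is only a handful of statements. A sketch, using a hypothetical database name (ClinicalTrials) and illustrative file paths; note the certificate backup at the end, since losing the certificate makes encrypted backups unrestorable:

```sql
USE master;
-- One-time setup: the database master key protects the certificate used for TDE
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE ClinicalTrials;   -- hypothetical database
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE ClinicalTrials SET ENCRYPTION ON;
GO
-- Back up the certificate; without it, encrypted backups cannot be restored
USE master;
BACKUP CERTIFICATE TDECert TO FILE = 'D:\Keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'D:\Keys\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = '<another strong password>');
```

No application or schema change is needed; encryption and decryption happen transparently at the I/O level once the database is set to ENCRYPTION ON.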
  • Animation:
A database is shown. A USB key is used to store the keys used for encryption. When data is written to the database, transparent data encryption uses the key stored on the USB key to encrypt the data and write it to disk.

Talk Track:
Encryption doesn't protect your data if the attacker can get to the key. Keeping the key in a different location than the data adds an additional level of protection. In addition, hardware security modules can quickly create and maintain different keys. The encryption key management could even be outsourced to a company that specializes in encryption keys.

Extensible Key Management:
Extensible key management works with transparent data encryption to separate the encryption key from the database. With the growing demand for regulatory compliance and concern for data privacy, organizations are taking advantage of encryption as a way to provide a "defense in depth" solution. This approach is often impractical using only database encryption management tools. Hardware vendors provide products that address enterprise key management by using Hardware Security Modules (HSM). HSM devices store encryption keys on hardware or software modules. This is a more secure solution because the encryption keys do not reside with the encrypted data.

Extensible Key Management also provides the following benefits:
1. Additional authorization check (enabling separation of duties).
2. Higher performance for hardware-based encryption/decryption.
3. External encryption key generation.
4. External encryption key storage (physical separation of data and keys).
5. Encryption key retrieval.
6. External encryption key retention (enables encryption key rotation).
7. Easier encryption key recovery.
8. Manageable encryption key distribution.
9. Secure encryption key disposal.

Technical information on Extensible Key Management is available in SQL Books Online at http://technet.microsoft.com/en-us/library/bb895340.aspx

Extensible Key Management Pharmaceutical:
The pharmaceutical company could use extensible key management to separate the key used to encrypt the data from the encrypted data. Without EKM, if a complete set of database backups were stolen, the thieves could use the key information stored in the master database to decrypt the other encrypted databases. With EKM, the thieves would also need the hardware module that contains the key, which makes it much harder for a malicious person to get to the data encrypted on the backup media.

Evidence:
Evidence from the case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001003

Additional Information:
For technical slides please check the SQL Server 2008 Security TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=123738&view=folder
For training please check Security (Auditing, Encryption) -- Data Security, Admin Security (techReady8 DB315): http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190554
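Registering an HSM with SQL Server follows a fixed pattern; the provider DLL comes from the HSM vendor. A sketch, with the provider name, DLL path, and key names all illustrative:

```sql
-- EKM is off by default; enable it
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'EKM provider enabled', 1;
RECONFIGURE;
GO
-- Register the vendor's HSM provider library (path is illustrative)
CREATE CRYPTOGRAPHIC PROVIDER HsmProvider
    FROM FILE = 'C:\HsmVendor\EKMProvider.dll';
GO
-- Create a symmetric key whose key material lives on the HSM, not in the database
CREATE SYMMETRIC KEY HsmKey
    FROM PROVIDER HsmProvider
    WITH PROVIDER_KEY_NAME = 'sql_hsm_key',
    CREATION_DISPOSITION = CREATE_NEW;
```

Because the key material never leaves the HSM, a stolen set of backups (including master) is not enough to decrypt the data.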
  • Animation:
A database with a table and an audit table is shown. Data is written to the table; SQL auditing writes a record into the audit table recording the change and who made it. Data is read from the table; a new audit record is created telling what data was read and who read it.

Talk Track:
Other auditing solutions require writing triggers or using external programs to audit database access. These solutions require maintenance and risk losing data if the third-party application isn't running. By making auditing part of the database engine, Enterprise Edition ensures that all actions, including selects, can be audited.

Database Auditing:
With SQL Server 2008 you can create audits that allow you to analyze the usage patterns for the data in your database. Most auditing solutions will show changes to data but will not tell you who is accessing it. For databases with sensitive information this is a must-have feature. With SQL Server 2008 Enterprise Edition you can audit not only changes to data but also which users are reading it. From a security standpoint, this allows you to see when a user who would normally have access to a certain set of data is accessing more than they should, so you can take corrective action. You can also determine whether data is being accessed where there is no real need to know the information.

A popular method of auditing data is to use triggers to track changes. Triggers will only tell you when data is changed, not when it is accessed. With other editions of SQL Server you would have to run a SQL Profiler session or use another tool to track data reads; since these tools run outside the database engine, you cannot guarantee that they will capture all data access from the moment a server is started. The auditing features of SQL Server 2008 are robust: if desired, you can set up the system to not allow a database to run if auditing fails.
This ensures that there is no unauthorized access that goes undetected due to a server problem. The auditing in SQL Server 2008 does not store before and after values of database updates, so you cannot use it as a means to reconstruct your database, but it does track the SQL statements so you can see who added or changed data in the database.

Technical information on database auditing is available in SQL Books Online at http://technet.microsoft.com/en-us/library/cc280526.aspx

Database Auditing Pharmaceutical:
The pharmaceutical company utilizes many temporary and contract workers. These workers are required to sign agreements about the confidentiality of the data they work with, but the pharmaceutical company still worries that a worker with a legitimate need to read a few records will read far more than they should. Using database auditing, the company could monitor the usage of temporary and contract workers to ensure that the agreements are not breached. If a user is found to be accessing data they shouldn't, or unusually large amounts of data, that account can be disabled quickly to limit the potential loss. Another concern for the pharmaceutical company is that workers have been tempted to look up information on famous patients. By auditing read access to the database, they can find these instances and hopefully prevent a data leak before it happens.

Evidence:
Evidence comes from the Microsoft case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001003

Additional Information:
For technical slides please check the SQL Server 2008 Security TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=123738&view=folder
For training please check Security (Auditing, Encryption) -- Data Security, Admin Security (techReady8 DB315): http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190554
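Auditing reads on a sensitive table takes a server audit plus a database audit specification. A sketch with hypothetical database, table, and path names; ON_FAILURE = SHUTDOWN is the "don't run if auditing fails" behavior described above:

```sql
-- Server-level audit writing to a file target
USE master;
CREATE SERVER AUDIT PatientDataAudit
    TO FILE (FILEPATH = 'D:\Audits\')   -- path is illustrative
    WITH (ON_FAILURE = SHUTDOWN);       -- stop the instance if auditing fails
ALTER SERVER AUDIT PatientDataAudit WITH (STATE = ON);
GO
-- Database-level specification: record SELECTs (reads) on the sensitive table
USE ClinicalTrials;                     -- hypothetical database
CREATE DATABASE AUDIT SPECIFICATION PatientReadSpec
    FOR SERVER AUDIT PatientDataAudit
    ADD (SELECT ON dbo.Patients BY public)
    WITH (STATE = ON);
```

Auditing `SELECT ... BY public` captures every read of dbo.Patients regardless of which login performed it, which is exactly the "who is reading the data" question triggers cannot answer.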
  • Animation:
Checkboxes are added to the table.

Talk Track:
The common criteria compliance option allows an administrator to easily turn on and off the features that make SQL Server 2008 compliant with the Common Criteria. The auditing and protection granted by Common Criteria meets, and in some cases exceeds, customers' security requirements.

Common Criteria Compliance:
Common Criteria compliance can be turned on via a server configuration option and enables the three items listed on the slide.

Residual Information Protection (RIP) requires a memory allocation to be overwritten with a known pattern of bits before memory is reallocated to a new resource. Meeting the RIP standard can contribute to improved security; however, overwriting the memory allocation can slow performance. After the common criteria compliance enabled option is enabled, the overwriting occurs.

After the option is enabled, login auditing is enabled. Each time a user successfully logs in to SQL Server, information about the last successful login time, the last unsuccessful login time, and the number of attempts between the last successful and current login times is made available. These login statistics can be viewed by querying the sys.dm_exec_sessions (Transact-SQL) dynamic management view.

After the option is enabled, a table-level DENY takes precedence over a column-level GRANT. When the option is not enabled, a column-level GRANT takes precedence over a table-level DENY. This means that with common criteria compliance enabled, a user who has explicitly been denied access to the table, or who is a member of a group that has explicitly been denied access, will no longer be able to see any data in the table. With common criteria compliance disabled, they would still be able to see the specific columns they had been granted access to.
This change in behavior could cause some applications to break, so applications should be tested thoroughly before changing this option in a production system.

Technical information on Common Criteria certification is available in SQL Books Online at http://technet.microsoft.com/en-us/library/bb153837.aspx and http://technet.microsoft.com/en-us/library/bb326650.aspx

Common Criteria Compliance Pharmaceutical:
The pharmaceutical company could use the ability to turn on Common Criteria compliance, along with other security measures, to ensure that sensitive data is given the highest level of protection. Also, due to the regulatory environment the company works in, they need to be able to show auditors that they are following mandated security procedures. Since Microsoft has gone to the effort of getting SQL Server certified, it saves the pharmaceutical company valuable time and money doing their own testing and justifying it to the regulators.

Evidence:
Evidence from the SQL Books Online article at http://technet.microsoft.com/en-us/library/bb153837.aspx

Additional Information:
For technical slides please check the SQL Server 2008 Security TDM Deck: http://arsenalcontent/ContentDetail.aspx?ContentID=123738&view=folder
For training please check Security (Auditing, Encryption) -- Data Security, Admin Security (techReady8 DB315): http://voyager/aspen/lang-en/management/LMS_TrainLedInfo.asp?UserMode=0&LEDefId=190554
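Turning the option on, and then inspecting the login statistics it exposes, is a short exercise. A sketch; the restart note reflects that RIP only takes effect after the service restarts:

```sql
-- Enable Common Criteria compliance (advanced server option)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'common criteria compliance enabled', 1;
RECONFIGURE;
-- Restart the SQL Server service for the setting to take full effect.

-- Login statistics populated once the option is on
SELECT login_name,
       last_successful_logon,
       last_unsuccessful_logon,
       unsuccessful_logons
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```

The DENY-over-GRANT precedence change described above happens silently with this same switch, which is why thorough application testing is stressed.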
  • Talk Track:
Enterprise Edition has the features and horsepower to handle the demands of today's data warehouses and should be the default for creating strong, robust reporting solutions.

Scenario Description:
When the data for reports is drawn from many different source systems, there can be issues with data integrity. When any discussion about a report starts with questions like "Which server did this data come from?", "How old is this data?", "Have you checked this data?", or "Did the data load finish last night?", then you know you have problems. In an ideal world all of the reporting data would always be up to date and accurate, but in the real world that is not always the case.

Updating the reporting databases requires knowing what data has changed since the last update and being able to add the data to the reporting database without causing undue impact on the source or destination database. When data comes from many different sources, the sources can have conflicting data that needs to be reconciled. Another common concern is the transformation of the data from its source form into the standardized form used in the reporting database. As new systems are acquired, or as the company pursues its mergers and acquisitions strategy, a wide variety of systems can be introduced into the enterprise. Even for systems that have been in place for a while, as the volume of data grows there can be issues with identifying the data that needs to be updated and copying that volume of data into the reporting systems.

When the organization has data in its warehouse, it will inevitably want to find new and better ways to analyze it. The server will use a lot of memory and processing power to analyze the data and produce the reports.
All of these factors can create a situation where the organization has problems with its reporting. A large financial company sees millions of transactions per day and now deals with tables with billions of rows, and its data warehouses can have several such 'fact tables' which have to be updated from multiple databases daily. Extracting, transferring, transforming, and loading the changed data takes a long time. Because the data warehouses have so much data, reporting has started to take even longer. The financial company needs a way to get to current data quickly and efficiently in order to respond to changing financial markets.
  • Animation:
    Master data comes from different systems. The master data service examines the data and hierarchies. All data that meets the requirements is then sent to other servers. If the data is incorrect, a workflow is started that alerts someone to fix the issues.

    Talk Track:
    One of the things that holds businesses back from making decisions on their data is the lack of a "single source of truth". When shared data comes from many different sources, it can arrive in slightly different formats and with different values. By using a master data management scheme, all data can be checked against the standard and made to conform to company policies.

    Master Data Services:
    With SQL Server 2008 R2 Enterprise, organizations can deploy a master data management solution. Master Data Services allows a server to act as a data hub with the ability to manage data entities and hierarchies. Administrators can define the data model and then adapt it as business needs change. With a powerful rules engine and integration with workflow, data owners can be notified if data that violates the rules is loaded into the system. By providing central management of business-critical data, organizations can ensure that users have access to the correct data.

    Technical information on master data management is available on TechNet at http://technet.microsoft.com/en-us/library/bb190163.aspx

    Master Data Services Financial Services:
    The financial company could use Master Data Services to specify the data that is to be used for shared entities such as customers. When new data is loaded it will be inspected, and if required data is missing or there is a conflict with one of the other rules, the data will be sent to a human for review and correction.

    Evidence:
    Quote comes from blog at http://blogs.technet.com/dataplatforminsider/archive/2009/05/13/master-data-services-what-s-the-big-deal.aspx
  • Animation:
    Data from the table is loaded into the data warehouse. Change data capture is enabled for the database and also for this table. A new row is added; that change is logged in the change table. A row is updated; that change is also logged in the change table. The ETL process runs again. This time the data comes from the change table and goes through the same ETL process to update the data warehouse.

    Talk Track:
    One of the challenges of a data warehouse is keeping the data updated as the source data changes. Most solutions involve writing procedures or triggers to store the changed data in separate tables in the database. These are effective but slow down the server and need to be maintained as tables change. Change data capture works with the database engine to provide high performance without requiring additional user maintenance.

    Change Data Capture:
    SQL Server 2008 change data capture uses a separate process to read the transaction log and record any data changes into a table, called the change table, that mirrors the structure of the source table. The process that reads the log runs asynchronously, so there is very little impact on the server. SQL Server also provides table-valued functions that retrieve the data from the change table. The functions can return all changes, or just the net changes, for the period desired. The ability to retrieve net changes can reduce the time spent updating the data warehouse or reporting data with data that would only be overwritten later in the load process. The database must be enabled for change data capture, and individual tables configured for it, before SQL Server will track changes on a table. To limit the size of the change tables, a retention period for the data they hold should also be set.

    Technical information on change data capture can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb522489.aspx

    Change Data Capture Financial Services:
    The financial company could use change data capture to track changes on the tables that feed its data warehouse. Because some of the data changes quite often throughout the day (for instance, interest rates), the company values the flexibility of getting all of the change rows for one process, so it can track the changes through the day, and just the net changes for another process, to track the value at the end of the day. This allows the company to optimize the network bandwidth and other resources (transformations and load processes) associated with loading the different data stores.

    Case Studies:
    http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=40000011932
    http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001007

    "We're eager to make use of the Change Data Capture feature in SQL Server 2008 because we had actually been handling this with our own custom code that used a mirrored table to log every creation, modification, or deletion of a record," says Joe Snitker, Database Developer at CyberSavvy. "The Change Data Capture feature uses the same philosophy, but does so on a much broader scale. It provides a huge benefit for our customers."
    Joe Snitker, Database Developer, CyberSavvy
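As a sketch of how enabling capture and reading net changes might look (the database name `FinanceDW` and table `dbo.InterestRates` are hypothetical, chosen to match the interest-rate example above):

```sql
-- Enable change data capture at the database level (requires sysadmin).
USE FinanceDW;
EXEC sys.sp_cdc_enable_db;

-- Enable capture for one source table (requires db_owner).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'InterestRates',  -- hypothetical source table
    @role_name     = NULL;              -- no gating role on the change data

-- Later, the ETL process retrieves only the net changes since the last load.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_InterestRates');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_InterestRates(@from_lsn, @to_lsn, N'all');
```

A second process could call the companion `cdc.fn_cdc_get_all_changes_dbo_InterestRates` function over the same LSN range to see every intermediate change, matching the two usage patterns described above.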
  • Animation:
    Without star join optimization, all rows in the join would be processed and then the non-qualifying rows would be removed, resulting in many unnecessary operations. With star join optimization, the non-qualifying rows are removed earlier in the join process, saving memory and I/O operations and reducing the total time to complete the operation.

    Talk Track:
    The star schema is very common in a data warehouse. Enterprise edition introduces enhancements that allow rows to be disqualified earlier. This does not change the results, but it reduces the number of joins that need to be computed and results in faster queries that use fewer CPU and memory resources.

    Star Join Optimization:
    In SQL Server 2008, bitmap filtering can be introduced in the query plan after optimization, as in SQL Server 2005, or introduced dynamically by the query optimizer during query plan generation. When the filter is introduced dynamically, it is referred to as an optimized bitmap filter. Optimized bitmap filtering can significantly improve the performance of data warehouse queries that use star schemas by removing non-qualifying rows from the fact table early in the query plan. Without optimized bitmap filtering, all rows in the fact table are processed through some part of the operator tree before the join operation with the dimension tables removes the non-qualifying rows. When optimized bitmap filtering is applied, the non-qualifying rows in the fact table are eliminated immediately.

    Bitmap filtering and optimized bitmap filtering are implemented in the query plan by using the Bitmap showplan operator. Bitmap filtering is applied only in parallel query plans in which hash or merge joins are used. Optimized bitmap filtering is applicable only to parallel query plans in which hash joins are used. In both cases, the bitmap filter is created on the build input (the dimension table) side of a hash join; however, the actual filtering is typically done within the Parallelism operator, which is on the probe input (the fact table) side of the hash join. When the join is based on an integer column, the filter can be applied directly to the initial table or index scan operation rather than the Parallelism operator. This technique is called in-row optimization.

    Technical information on bitmap filtering can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/bb522541.aspx

    Star Join Optimization Finance:
    The finance company can use this optimization to retrieve the information for its reports quickly. This allows it to create more complex reports, with more intricate data, that can still be rendered quickly, and to report on the billions of rows of data in its data warehouse.

    Evidence:
    Evidence quote comes from case study at http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000001193

    Additional Information:
    For technical slides, see the SQL Server 2008 Data Warehousing TDM deck: http://arsenalcontent/ContentDetail.aspx?ContentID=118397&view=folder
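No special syntax is needed to benefit: the optimizer applies optimized bitmap filtering automatically when it chooses a parallel hash-join plan. A typical star-join query that could qualify might look like the following sketch (the `FactTrade`, `DimDate`, and `DimCustomer` tables and their columns are illustrative, not from the deck):

```sql
-- Star join over one fact table and two dimension tables. The optimizer
-- can build bitmap filters from the dimension-side predicates and discard
-- non-qualifying FactTrade rows before the hash joins complete.
SELECT d.CalendarYear,
       c.Region,
       SUM(f.TradeAmount) AS TotalAmount
FROM dbo.FactTrade AS f
JOIN dbo.DimDate     AS d ON f.DateKey     = d.DateKey
JOIN dbo.DimCustomer AS c ON f.CustomerKey = c.CustomerKey
WHERE d.CalendarYear = 2009
  AND c.Region = N'EMEA'
GROUP BY d.CalendarYear, c.Region;
```

Whether the optimization fired can be confirmed by looking for the Bitmap operator in the actual execution plan.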
  • Animation:
    A single table is partitioned into separate filegroups on the same server. The division into filegroups allows the server to take advantage of enhancements for parallel processing to read or write the data in parallel.

    Talk Track:
    Enterprise edition will parallelize certain operations on tables that are split across filegroups. The parallel operations complete faster and can take full advantage of the resources in the server.

    Table Partitioning:
    Partitioning makes large tables or indexes more manageable, because it enables you to manage and access subsets of data quickly and efficiently while maintaining the integrity of the data collection. By using partitioning, an operation such as loading data from an OLTP to an OLAP system takes only seconds, instead of the minutes and hours the operation takes in earlier versions of SQL Server. Maintenance operations that are performed on subsets of data are also performed more efficiently, because these operations target only the data that is required instead of the whole table.

    The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped into individual partitions. The table or index is treated as a single logical entity when queries or updates are performed on the data. All partitions of a single index or table must reside in the same database.

    Partitioned tables and indexes support all the properties and features associated with designing and querying standard tables and indexes, including constraints, defaults, identity and timestamp values, and triggers. Therefore, if you want to implement a partitioned view that is local to one server, you might want to implement a partitioned table instead.

    Deciding whether to implement partitioning depends primarily on how large your table is or will become, how it is being used, and how well it performs against user queries and maintenance operations. Generally, a large table might be appropriate for partitioning if both of the following are true:
    1. The table contains, or is expected to contain, lots of data that is used in different ways.
    2. Queries or updates against the table are not performing as intended, or maintenance costs exceed predefined maintenance periods.

    Technical details on table partitioning can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms188706.aspx

    Table Partitioning Financial Services:
    The financial services company can use table partitioning to divide its large tables into smaller partitions based on time. It could partition a table by quarter, month, or even week, depending on the usage patterns of its reports, choosing whichever scheme gives the best performance for both data loads and reporting without putting too great a maintenance burden on the server administrators.

    Evidence:
    TechNet: http://technet.microsoft.com/en-us/magazine/2008.04.overview.aspx
    "SQL Server 2008 works with the table partitioning mechanism (which was introduced in SQL Server 2005) to allow the SQL Server engine to escalate locks to the partition level before the table level. This intermediary level of locking can dramatically reduce the effects of lock escalation on systems that have to process hundreds and thousands of transactions per second. SQL Server 2008 offers several new query processor improvements for when the query interacts with partitioned tables. The query optimizer can now perform query seeks against partitions as it would against individual indexes by only working with the partition ID and not the partitioning mechanism at the table level." - Randy Dyess, SQL Server Mentor

    Additional Information:
    For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck: http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
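A minimal sketch of partitioning a fact table by quarter might look like the following (the object names, column types, and boundary dates are illustrative; a production design would typically map each partition to its own filegroup rather than `PRIMARY`):

```sql
-- Partition function: four quarterly ranges for 2009. RANGE RIGHT means
-- each boundary date belongs to the partition on its right.
CREATE PARTITION FUNCTION pfByQuarter (datetime)
AS RANGE RIGHT FOR VALUES
    ('2009-01-01', '2009-04-01', '2009-07-01', '2009-10-01');

-- Partition scheme: maps every partition to a filegroup.
CREATE PARTITION SCHEME psByQuarter
AS PARTITION pfByQuarter ALL TO ([PRIMARY]);

-- Create the fact table on the scheme, partitioned by the date column.
CREATE TABLE dbo.FactTrade (
    TradeDate   datetime NOT NULL,
    CustomerKey int      NOT NULL,
    TradeAmount money    NOT NULL
) ON psByQuarter (TradeDate);
```

Queries that filter on `TradeDate` can then be limited to the relevant partitions, and a completed quarter can be switched out or archived without touching the rest of the table.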
  • Animation:
    The database is divided across three filegroups. When a query is issued against the data mining model, the query processor divides the work across all three filegroups. When a query is issued against the table, it is automatically run in parallel. This reduces the amount of time it takes to return the complete data to the user.

    Talk Track:
    When querying data, the query optimizer in Enterprise edition determines whether parts of the query can be run in parallel. By running operations in parallel, the overall time needed to complete the query is reduced.

    Parallelism:
    SQL Server 2008 has expanded the scenarios where operations can run in parallel. In particular, the ETL functions of a data warehouse can run in parallel, so lookups in dimension tables can happen at data load time instead of having to load the data into a staging area and then perform the lookups. Additionally, by processing parts of a data mining model in parallel, more data can be examined, and the results will be better because the model has more input. SQL Server Standard Edition can use only 4 processors, but Enterprise Edition can use all of the processors supported by the operating system.

    Technical information on parallel operations can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms174880.aspx

    Parallelism Financial Services:
    The financial services company can take advantage of parallel operations in SQL Server to do foreign key lookups while loading data, shortening the time it takes to update its data warehouse. After the data is loaded, the server can take advantage of all the processors in the machine to analyze more data and produce better results from the analysis.

    Evidence:
    Evidence quote from http://blogs.msdn.com/craigfr/archive/2006/10/11/introduction-to-parallel-query-execution.aspx

    Additional Information:
    For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck: http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
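The degree of parallelism can be left to the optimizer, capped server-wide, or overridden per statement. A sketch of both knobs (the `FactTrade` table is hypothetical; the server-wide change affects every query, so treat it with care):

```sql
-- Server-wide cap on parallel worker threads per statement
-- (0 = let SQL Server use all processors available to the instance).
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max degree of parallelism', 0;
RECONFIGURE;

-- Per-query override: allow this aggregation to use up to 8 processors,
-- regardless of the server-wide setting.
SELECT CustomerKey, SUM(TradeAmount) AS TotalAmount
FROM dbo.FactTrade
GROUP BY CustomerKey
OPTION (MAXDOP 8);
```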
  • Talk Track:
    Just having data doesn't help an organization. When the organization can find patterns in the data and take actions to optimize its business based on them, the data becomes insight. Having the algorithms to analyze the data used to mean hiring a statistician and a lot of painstaking effort. Enterprise edition provides the algorithms and tools an organization needs to turn data into insight that can be used by all employees.

    Scenario Description:
    Just having data is not enough; it must be analyzed and mined to find the trends in it. Up to now there have been several ways to analyze the data, most of which require retrieving large amounts of data and analyzing it in a program outside of SQL Server. Even the Analysis Services and data mining capabilities built into SQL Server do not cover every possible way of analyzing the data; for further mining, the data must be retrieved from the server into an external program, which uses a lot of network and memory resources.

    An online retailer uses data mining techniques to detect possible fraud and to predict customer behavior. Custom algorithms are built externally and require transferring data out of the database to act on, increasing database load, slowing system performance, and making real-time mining impossible.
  • Animation:
    Data from the database is used in PowerPivot for SharePoint 2010, to enable analyzing the data in the manner the user wants, or in a SQL Server Reporting Services report, to show the data plotted on a map.

    Talk Track:
    For analysis to make a difference in a business, the data must be accessible and actionable. With SQL Server 2008 R2 there are new options for examining and reporting on the data. With PowerPivot, users can look at data in Excel and use the familiar interface and tools, such as macros, to look at the data the way they want. When integrating with SharePoint, users can collaborate on the data and come to a consensus decision. Data can now also be visualized on a map, which can help find the shortest route through a set of points, understand how many customers live within a certain radius of a store, or see locations in an aerial or 3-D view.

    PowerPivot for SharePoint 2010:
    PowerPivot for SharePoint 2010 is a data analysis tool that works with Excel 2010 to help business users analyze large amounts of data with a familiar tool. PowerPivot works seamlessly with SQL Server and SharePoint 2010 to give users the power to gain deep insight into their data and make informed decisions quickly. SharePoint 2010 also provides collaboration and workflow features to help users share data and gain consensus from multiple people. Additionally, the PowerPivot Management Dashboard allows IT organizations to manage performance, availability, and quality of service.

    More information on PowerPivot is available at http://www.powerpivot.com

    PowerPivot for SharePoint 2010 Retailer:
    The retailer can take advantage of PowerPivot for SharePoint 2010 to manage the data being used to create reports. By managing the data, it can ensure that the information is correct and current and that all users report from the same base information.
  • Animation:
    The data mining query uses an algorithm that ships with SQL Server 2008. It was created by Microsoft Research and does a good job of finding clusters in the data; the clusters are represented by the larger circles. The organization believes it can see different patterns and would like to modify the algorithm to ensure it provides the information they want. The organization extends the algorithm and sees different clusters, represented here by the double green lines. The organization can use the new clusters to drive its business and gain an advantage its competitors do not have.

    Talk Track:
    Enterprise edition ships with a variety of algorithms to mine data for patterns and insight. In some cases the algorithms may not exactly match what the organization wants. In these situations the organization can customize an existing algorithm to come up with results that are more relevant to the questions being asked.

    Algorithm Extensibility:
    SQL Server 2008 comes with many different data mining algorithms that let you discover information in your data. At times the built-in models do not provide all the information you want. After you have selected an algorithm that meets your business needs, you can customize the mining model in the following ways to potentially improve results:
    • Use different columns of data in the model, or change the usage or content types of the columns.
    • Create filters on the mining model to restrict the data used in training the model.
    • Set algorithm parameters to control thresholds, tree splits, and other conditions.
    • Change the default algorithm that is used to analyze data or make predictions.

    Technical information on algorithm extensibility can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/cc280427.aspx

    Algorithm Extensibility Retailer:
    The retailer could use the built-in data mining algorithms to discover relationships between the products customers order. Perhaps it feels that the results don't quite meet its needs, or that it has a better algorithm that will give it a competitive advantage. Because of the large amount of data the retailer deals with, it wants to keep the data in the database and analyze it there, rather than pulling the large data set into an external system. The retailer can save a lot of money by improving its ability to cross-sell and by finding and stopping fraud quickly, before it loses more money.

    Evidence:
    Quote taken from white paper available at http://download.microsoft.com/download/6/9/D/69D1FEA7-5B42-437A-B3BA-A4AD13E34EF6/SQL2008PredictAnalysis.docx
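Setting algorithm parameters, one of the customization options listed above, can be done directly in a DMX statement. A sketch, with illustrative model, column, and parameter values (the column list and parameter settings are assumptions, not from the deck):

```sql
-- DMX: create a clustering model and tune the algorithm's parameters.
-- CLUSTER_COUNT and MINIMUM_SUPPORT are documented parameters of the
-- Microsoft_Clustering algorithm; the values here are illustrative.
CREATE MINING MODEL CustomerClusters (
    CustomerKey   LONG KEY,
    Age           LONG CONTINUOUS,
    YearlyOrders  LONG CONTINUOUS
)
USING Microsoft_Clustering (CLUSTER_COUNT = 8, MINIMUM_SUPPORT = 10);
```

Deeper customization, such as a wholly new algorithm, is done through the Analysis Services plug-in algorithm interfaces rather than DMX.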
  • Animation:
    A single table is partitioned into separate filegroups on the same server. The division into filegroups allows the server to take advantage of enhancements for parallel processing to read or write the data in parallel.

    Talk Track:
    Data warehouse tables are frequently partitioned on a key such as year. When a query only needs data from a single year, the query optimizer recognizes this and checks only the relevant filegroup. This can dramatically speed up queries where the index doesn't help.

    Table Partitioning:
    Partitioning makes large tables or indexes more manageable, because it enables you to manage and access subsets of data quickly and efficiently while maintaining the integrity of the data collection. By using partitioning, an operation such as loading data from an OLTP to an OLAP system takes only seconds, instead of the minutes and hours the operation takes in earlier versions of SQL Server. Maintenance operations that are performed on subsets of data are also performed more efficiently, because these operations target only the data that is required instead of the whole table.

    The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped into individual partitions. The table or index is treated as a single logical entity when queries or updates are performed on the data. All partitions of a single index or table must reside in the same database.

    Partitioned tables and indexes support all the properties and features associated with designing and querying standard tables and indexes, including constraints, defaults, identity and timestamp values, and triggers. Therefore, if you want to implement a partitioned view that is local to one server, you might want to implement a partitioned table instead.

    Deciding whether to implement partitioning depends primarily on how large your table is or will become, how it is being used, and how well it performs against user queries and maintenance operations. Generally, a large table might be appropriate for partitioning if both of the following are true:
    1. The table contains, or is expected to contain, lots of data that is used in different ways.
    2. Queries or updates against the table are not performing as intended, or maintenance costs exceed predefined maintenance periods.

    Technical details on table partitioning can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms188706.aspx

    Table Partitioning Retailer:
    The retail company can use table partitioning to divide its large tables into smaller partitions based on time. It could partition a table by quarter, month, or even week, depending on the usage patterns of its data mining algorithms, choosing whichever scheme gives the best performance for both data loads and mining without putting too great a maintenance burden on the server administrators.

    Evidence:
    TechNet: http://technet.microsoft.com/en-us/magazine/2008.04.overview.aspx
    "SQL Server 2008 works with the table partitioning mechanism (which was introduced in SQL Server 2005) to allow the SQL Server engine to escalate locks to the partition level before the table level. This intermediary level of locking can dramatically reduce the effects of lock escalation on systems that have to process hundreds and thousands of transactions per second. SQL Server 2008 offers several new query processor improvements for when the query interacts with partitioned tables. The query optimizer can now perform query seeks against partitions as it would against individual indexes by only working with the partition ID and not the partitioning mechanism at the table level." - Randy Dyess, SQL Server Mentor

    Additional Information:
    For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck: http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
  • Animation:
    The database is divided across three filegroups. When a query is issued against the data mining model, the query processor divides the work across all three filegroups. This reduces the amount of time it takes to return the complete data to the user.

    Talk Track:
    Data mining algorithms take advantage of the parallelism built into the database engine to mine data in parallel. With very large datasets this can mean dramatically faster analysis.

    Parallelism:
    SQL Server 2008 has expanded the scenarios where operations can run in parallel. In particular, the ETL functions of a data warehouse can run in parallel, so lookups in dimension tables can happen at data load time instead of having to load the data into a staging area and then perform the lookups. Additionally, by processing parts of a data mining model in parallel, more data can be examined, and the results will be better because the model has more input. SQL Server Standard Edition can use only 4 processors, but Enterprise Edition can use all of the processors supported by the operating system.

    Technical information on parallel operations can be found in SQL Server Books Online at http://technet.microsoft.com/en-us/library/ms174880.aspx

    Parallelism Retail:
    The retailer can take advantage of parallel operations in SQL Server to do foreign key lookups while loading data, shortening the time it takes to update its data warehouse. After the data is loaded, the server can take advantage of all the processors in the machine to analyze more data and produce better results from the analysis.

    Evidence:
    Evidence quote from http://blogs.msdn.com/craigfr/archive/2006/10/11/introduction-to-parallel-query-execution.aspx

    Additional Information:
    For technical slides, see the SQL Server 2008 Performance and Scalability TDM deck: http://arsenalcontent/ContentDetail.aspx?ContentID=122429&view=folder
  • Animation:
    A data mining model has been created and trained, and is ready for the user to query to find out information about the data. The user queries the mining model and can look at the data in it. If they choose, they can then drill through to the training cases and examine the data that was used to train the model, to determine whether the conclusions of the model are correct.

    Talk Track:
    When the mining algorithm returns results that you do not agree with, or that contradict your intuition, you need to find out why. Traditionally you would have to spend a lot of time looking at the raw data and running the algorithm by hand. With Enterprise edition you can drill down to the data that was used to train the algorithm to see whether it is still representative of your data now. This can save a lot of time and effort in diagnosing the problem.

    Drillthrough mining queries:
    Drillthrough is the ability to query both a mining model and a mining structure to learn details about the cases included in the model or in the structure. SQL Server 2008 provides two different options for drilling through into case data: you can drill through to the cases that were used to build the model, or you can drill through to the cases in the mining structure. Drilling through to case data is useful if you want to view the cases that were used to train the model, versus the cases used to test the model, or if you want to review the attributes of the case data. Drilling through to structure cases is useful when the structure contains information that might not be available in the model. Typically, if you have a mining structure that supports many different kinds of models, the data from the structure is used more selectively in each model. For example, you would not use customer contact information in a clustering model even if the data was included in the structure; however, after you create the model, you might want to retrieve contact information for customers who are grouped into a particular cluster.

    Drillthrough mining queries Retailer:
    The retailer can use the ability to examine the source data to find out greater detail about the trends it is seeing. This allows it to judge the usefulness of the model and to look for changes that might signal a change in buying habits or an increase in fraud. By seeing the source data, a human can decide whether there is a new pattern (like someone who moved to a new address) as opposed to a problem. Viewing the data used to train the model also allows the retailer to determine whether the model is still relevant, given the growth of the business and the way it has evolved.

    Evidence:
    Evidence quote comes from white paper available at http://download.microsoft.com/download/6/9/D/69D1FEA7-5B42-437A-B3BA-A4AD13E34EF6/SQL2008PredictAnalysis.docx
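In DMX, both kinds of drillthrough are exposed through the `.CASES` syntax. A sketch with illustrative model, structure, and node names (the objects must have been created with drillthrough enabled):

```sql
-- DMX: drill through to the cases that were used to train the model.
SELECT * FROM CustomerClusters.CASES;

-- Restrict to the cases in one cluster ('002' is an illustrative node ID).
SELECT * FROM CustomerClusters.CASES
WHERE IsInNode('002');

-- Drill through to the structure cases, which can include columns
-- (such as contact information) that were excluded from the model.
SELECT * FROM MINING STRUCTURE CustomerStructure.CASES;
```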

Transcript

  • Advantages of Enterprise Edition
    <Presenter’s Name> | January 2010
  • Notes to the Presenter
    • Information in this deck is intended for customers who have already decided on Microsoft SQL Server 2008 R2 as a database solution
    • No competitive information is provided
    • There is no overview of all SQL Server capabilities
    • The focus is to sell the business value of incremental features in the Enterprise edition and describe features in the Enterprise edition that enhance capabilities in the Standard edition
    • If the customer is not yet committed to SQL Server 2008 R2, or if they need an overview of the features in SQL Server 2008 R2, use the standard TDM deck
  • How to Use This Deck
    • The deck is designed to help sell the business value of SQL Server Enterprise capabilities
    • There are two decks that support SQL Server Enterprise, organized around two key pivots:
    • “Advantages of Enterprise Edition” is organized around sample customer scenarios (for example, the customer needs to reduce downtime). Each scenario shows Enterprise edition capabilities that help to address customer needs.
    • “Benefits of Enterprise Edition” shows technology pillars (for example, “high availability”). Each technology pillar section shows capabilities of the technology.
    • You may choose to use either deck, or a subset of the scenarios or pillars that are most relevant to your customer
    • Use a subset of the slides or the full set of slides
    • Customize a scenario to better fit your customer's needs
    • This deck has the scenarios, but you may choose to lead with the summary of technologies when appropriate
  • Summary of Scenarios: Advantages of Enterprise Edition Deck
    * Talking points cover both unplanned downtime scenarios
  • Why Enterprise is Right for You?
    The right version of SQL Server for all businesses that need to:
    • Achieve higher levels of high availability, scalability, and security;
    • Maximize business insight from organizational data;
    • Dramatically reduce data management operational costs.
    Enterprise-Class Capabilities
    Organizational Business Intelligence
    Cost Reduction
    • Enables 99.99% uptime with AlwaysOn Technologies
    • Improve application performance 30% or more with Resource Governor
    • Reduce DW storage costs up to 90% and reduce backup storage by 66% with compression
    • 50% better performance with Analysis Services Enterprise enhancements
    • Reduce hardware, licensing, and operational costs up to 50% through consolidation and virtualization
    • Save up to $100K in third-party software with built-in tools like Encryption and Performance Monitoring
    Scale-out Reporting & Analysis Servers, Partitioned Cubes, PowerPivot for SharePoint 2010, High-Speed Connectors, Master Data Services
    Application and Multi-server Management, Live Migration and Virtualization Support, Transparent Data Encryption
    Multiple Instance Clustering, Database Mirroring, Resource Governor, All Actions Audited
  • Typical Customer Scenarios
    Unplanned Downtime
    Manufacturer can’t afford to have operations stalled
    Business Intelligence
    Online retailer analyzes customer information to predict what products
    will sell the best
    Disaster Recovery
    Government agency must be up and running quickly after a disaster
    Planned Downtime
    Multinational airline doesn’t have any convenient maintenance windows
    Reporting
    Financial company needs to have a consistent view of customer data
    Security and Governance
    Pharmaceutical company needs to protect patient information
    Resource Management
    Manufacturer balances demands of reporting and extranet
    Server Consolidation
    Energy company modernizing infrastructure
  • 23. Unplanned Downtime
  • 24. ENTERPRISE EDITION offers a rich feature set that provides high availability, increased reliability, and reduced business impact when failures occur.
    Downtime Means Lost Business
    Unplanned downtime is a common problem for Internet retailers, service providers, and social networking sites. Whether it is caused by a hardware failure, overloaded servers, or another reason, unplanned downtime can mean lost revenue and unhappy customers and partners.
    Unplanned
    Downtime
  • 25. ENTERPRISE EDITION offers a rich feature set that provides high availability, increased reliability, and reduced business impact when failures occur.
    Downtime Means Lost Productivity
    Organizations come to rely on many line-of-business applications to manage and surface the business data that is necessary for day-to-day operations. Unplanned downtime can stop business cold and leave employees idle.
  • 26. Provide Powerful Failover Scenarios
    Multiple-Instance Database Clustering
    More than one passive node is available to host instances from multiple failovers on active nodes
    Having multiple failover nodes provides greater availability
    Reduces hardware costs with a shared failover node in the multiple-node cluster
    Simplified setup reduces administrative costs
    Because of the critical nature of the G4S application, CASON sets up the servers in a failover cluster to ensure high availability.
    — CASON Case Study
    [Diagram: three active nodes and a shared passive node; a failed instance fails over to the passive node]
  • 27. Automatically Handle I/O Errors
    “This is a really powerful enhancement because prior to this … you would have to run DBCC CHECKDB … and that would likely mean taking downtime… With SQL Server 2008 Database Mirroring you can avoid the effort and downtime.”
    — Glenn Berry, Database Architect, NewsGator Technologies
    High Performance Mirroring
    Increase performance through asynchronous mirroring
    Automatic Page Repair
    Automatically detects page corruption and retrieves data from the mirror
    Reduces downtime and management costs
    Minimizes application changes to correctly handle I/O errors
    Reporting from Mirror
    Increase utilization of mirror server
    Reduce need for reporting servers
    [Diagram: applications connect to the principal server, which mirrors its data to a mirror server]
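The mirroring pair on this slide takes only a few statements to establish. A minimal sketch, assuming hypothetical server names (`principal.contoso.com`, `mirror.contoso.com`) and a database called `Sales`; endpoint creation and security configuration are omitted:

```sql
-- Sketch only: partner addresses and database name are hypothetical.
-- Run on the mirror server first:
ALTER DATABASE Sales SET PARTNER = 'TCP://principal.contoso.com:5022';
-- Then on the principal server:
ALTER DATABASE Sales SET PARTNER = 'TCP://mirror.contoso.com:5022';
-- High-performance (asynchronous) mirroring, an Enterprise-only mode:
ALTER DATABASE Sales SET PARTNER SAFETY OFF;
-- Pages flagged as corrupt (and repaired from the mirror) are recorded here:
SELECT * FROM msdb.dbo.suspect_pages;
```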
  • 28. Increase Reliability and Performance
    Peer-to-Peer Replication
    Increases reliability by replicating data to multiple servers
    Provides higher availability in case of failure or to allow maintenance at any of the participating nodes
    Offers improved performance for each node with geo-scale architecture
    Add and remove servers easily without taking replication offline, by using the new topology wizard
    “[Microsoft] SQL Server 2008 replication proved to be very predictable and reliable in our testing. This helps us to create flexible and scalable replication solutions. Reliability must be at the foundation of all that we do.”
    — Sergey Elchinsky, Leading System Engineer, Baltika Breweries
    [Diagram: data replicated across three peer servers]
  • 29. Increase Flexibility and Speed Recovery
    Virtualization Licensing
    Increases deployment flexibility by running SQL Server inside a virtualized OS, which allows it to be easily moved or migrated to another server
    Allows 4 virtual machines/processor for each licensed physical processor
    Provides flexibility in managing server farms by allowing licenses to be moved between servers without restrictions
    Virtualization is the top spending priority for CIOs, with a 5 percent increase in the number of companies budgeting for virtualization and more than a 15 percent increase in spending.
    —CIO|Insight Top IT Spending Priorities Report for 2009
    [Diagram: four virtual servers running on one host server]
  • 30. Minimize Planned Downtime and Increase Efficiency
    Live Migration
    Move running instances of VMs between host servers
    Virtual machines can be moved for maintenance or to balance workload on host servers
    Perform maintenance on physical machines without any downtime
    Requires Windows Server 2008 R2 Hyper-V
    “This server already runs on our cluster solution with high availability, but after we have tested live migration on the new hardware, we’ll move it over to ensure optimal performance and reliability”
    —Rodrigo Immaginario, IT Manager, Universidade Vila Velha
    [Diagram: a running virtual machine migrating from one host server to another]
  • 31. Disaster Recovery
  • 32. ENTERPRISE EDITION has the tools to get you back up and running quickly and smoothly.
    Get Back to Business Faster
    When disaster strikes, organizations need a way to quickly recover their data and resume normal operations as quickly as possible. Time spent waiting for backups to load represents lost opportunities to connect with customers, partners, and suppliers.
  • 33. Reduce Impact of Backup Errors
    Backup Mirrors
    As databases grow, the probability increases that failure of a backup device or media will make a backup non-restorable
    Mirroring a media set increases backup reliability by reducing the impact of backup-device malfunctions
    Having a mirror can resolve some restore errors quickly by substituting mirrored media for damaged backup media
    The backup media mirroring feature of SQL Server enables you to perform a mirrored backup of a database to multiple backup devices, which greatly increases the reliability of backups in case of faulty media or a lost backup device.
    — High Availability White Paper
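The mirrored media set described above is a one-line change to the backup command. A sketch, with hypothetical file paths:

```sql
-- Sketch: each backup is written to both devices, so a damaged volume
-- can be swapped for its mirror at restore time (paths are hypothetical).
BACKUP DATABASE Sales
    TO DISK = 'D:\Backups\Sales.bak'
    MIRROR TO DISK = 'E:\Backups\Sales.bak'
    WITH FORMAT;  -- FORMAT is required when creating a new mirrored media set
```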
  • 34. Replicate to Remote Locations
    Peer-to-Peer Replication
    Increases reliability by replicating data to multiple servers
    Provides higher availability in case of failure or to allow maintenance at any of the participating nodes
    Offers improved performance for each node with geo-scale architecture
    Add and remove servers easily without taking replication offline, by using the new topology wizard
    “[Microsoft] SQL Server 2008 replication proved to be very predictable and reliable in our testing. This helps us to create flexible and scalable replication solutions. Reliability must be at the foundation of all that we do.”
    — Sergey Elchinsky, Leading System Engineer, Baltika Breweries
  • 35. Planned Downtime
  • 36. ENTERPRISE EDITION provides the tools necessary to ensure high availability while still allowing for the maintenance time necessary to keep your servers secure and well-maintained.
    Balance Maintenance With Availability
    In a worldwide economy, data needs to be available 24 hours a day. However, systems still need to be maintained, and finding a balance between maintenance and availability can be difficult.
  • 37. Maintain Databases Without Downtime
    Online Operations
    Allow routine maintenance without corresponding downtime
    Online index operations
    Online page and file restoration
    Online configuration of peer-to-peer nodes
    Users and applications can access data while the table, key, or index is being updated
    We recommend performing online index operations for business environments that operate 24 hours a day, seven days a week, in which the need for concurrent user activity during index operations is vital.
    — SQL Server Books Online
    [Diagram: an index rebuilt online while the underlying table stays available to applications]
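The online index operation highlighted on this slide is a single option on the rebuild statement. A sketch, with hypothetical table and index names:

```sql
-- Sketch: ONLINE = ON (Enterprise only) keeps the table readable and
-- writable while the index is rebuilt; names are hypothetical.
ALTER INDEX IX_Orders_OrderDate
    ON dbo.Orders
    REBUILD WITH (ONLINE = ON);
```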
  • 38. Upgrade Servers Without Downtime
    Hot-Add CPU and RAM
    Dynamically add memory and processors to servers without incurring downtime
    Requires hardware support for either physical or virtual hardware
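Hot-added hardware becomes visible to the operating system immediately, but SQL Server only starts scheduling work on a hot-added CPU after a reconfigure. A sketch (the memory value is hypothetical):

```sql
-- After the OS recognizes a hot-added CPU, tell SQL Server to use it:
RECONFIGURE;
-- Hot-added memory is picked up automatically, bounded by this setting
-- ('max server memory' is an advanced option; value below is hypothetical, in MB):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory', 65536;
RECONFIGURE;
```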
  • 39. Access Data Seamlessly Across Servers
    Peer-to-Peer Replication
    Increases reliability by replicating data to multiple servers
    Provides higher availability in case of failure or to allow maintenance at any of the participating nodes
    Offers improved performance for each node with geo-scale architecture
    Add and remove servers easily without taking replication offline, by using the new topology wizard
    “[Microsoft] SQL Server 2008 replication proved to be very predictable and reliable in our testing. This helps us to create flexible and scalable replication solutions. Reliability must be at the foundation of all that we do.”
    — Sergey Elchinsky, Leading System Engineer, Baltika Breweries
  • 40. Provide Flexible Failover Solutions
    Multiple-Instance Database Clustering
    More than one passive node is available to host instances while performing maintenance on active nodes
    Having multiple failover nodes provides greater availability
    Multiple instances can share the same failover node, which reduces hardware costs
    Simple administration makes it easy to move instances to perform maintenance on passive nodes
    A solution that avoids both planned and unplanned downtime will typically generate a substantial return on investment based on planned downtime avoidance alone.
    — ContinuityCentral.com
    [Diagram: three active nodes and a shared passive node; an instance moves to the passive node during maintenance]
  • 41. Minimize Planned Downtime and Increase Efficiency
    Live Migration
    Move running instances of VMs between host servers
    Virtual machines can be moved for maintenance or to balance workload on host servers
    Perform maintenance on physical machines without any downtime
    Requires Windows Server 2008 R2 Hyper-V
    “This server already runs on our cluster solution with high availability, but after we have tested live migration on the new hardware, we’ll move it over to ensure optimal performance and reliability”
    —Rodrigo Immaginario, IT Manager, Universidade Vila Velha
    [Diagram: a running virtual machine migrating from one host server to another]
  • 42. Resource Management
  • 43. ENTERPRISE EDITION has the capabilities to manage resources and ensure efficient, predictable response times.
    Manage Database Resources
    Reporting and other data-intensive business processes can put a higher than normal load on database servers. Many of these processes are not urgent but still compete with the normal workload, which causes delays and makes business applications less responsive.
  • 44. Keep Mission-Critical Applications Responsive
    Resource Governor
    Workloads can be prioritized to prevent runaway processes from monopolizing resources and interfering with mission-critical applications
    Establish service-level agreements (SLAs) with customers for predictable response times
    Users will have a consistent experience, which can result in fewer service calls about slow systems
    “We deal with a lot of large data feeds—both coming from manufacturers as data updates, and going out to our subscribers. Resource Governor allows us to control the percent[age] of total resources any operation can consume so that they don’t adversely impact our real-time data access.”
    — Michael Steineke, Vice President, Information Technology, Edgenet
    [Diagram: Resource Governor resource pools with caps, e.g. POOL 0 limited to 50%, POOL 1 to 30%, and POOL 2 to 20% of server resources]
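Resource pools like those pictured map to a few DDL statements. A minimal sketch with hypothetical pool, group, and login names; note that the classifier function must be created in master:

```sql
-- Sketch: cap a reporting workload at 30% CPU; all names are hypothetical.
USE master;
CREATE RESOURCE POOL ReportingPool WITH (MAX_CPU_PERCENT = 30);
CREATE WORKLOAD GROUP ReportingGroup USING ReportingPool;
GO
-- The classifier routes sessions into workload groups at login time:
CREATE FUNCTION dbo.fn_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = 'report_user'  -- hypothetical reporting login
        RETURN 'ReportingGroup';
    RETURN 'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```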
  • 45. Provide Faster, More Scalable Reporting
    Scalable Shared Databases
    Reduce costs and increase reliability when scaling out reporting servers
    Guarantee an identical view of reporting data from all servers
    Redirect an application to a different reporting server if one server becomes unavailable
    Reduce storage requirements
    Scalable shared databases provide scale-out of reporting databases using commodity servers, workload isolation with each server using its own memory, CPU, and tempdb, and a guarantee of identical data from all reporting volumes.
    — SQL Server Books Online
    [Diagram: multiple reporting servers reading identical data from a read-only SAN volume, loaded through a separate read/write volume]
  • 46. Optimize Resource Management Centrally
    Application and Multi-Server Management
    SQL Server Control Point provides dashboard viewpoints into instance and data-tier application utilization
    Drill down into details to troubleshoot or confirm consolidation candidates
    Easily adjust default utilization policies to meet the needs of your SQL Server environment
    "When you work in an environment where a single product requires up to 40 SQL Server instances for its full life cycle, the multi-server management features of SQL Server 2008 R2 become an invaluable tool – it’s almost like having an extra DBA.”
    — Chuck Heinzelman, BigHammer
    [Diagram: a SQL Server Control Point monitoring the “Finance” managed server group for database administrators and data-tier developers]
  • 47. Server Consolidation
  • 48. ENTERPRISE EDITION supports multiple consolidation scenarios to provide flexibility and choices in reducing server and storage needs.
    Get the Most from Fewer Servers
    Reduce the significant costs associated with server farm hardware, software licenses, electricity, cooling, maintenance, and administration by consolidating multiple servers onto fewer, more powerful ones.
  • 49. Optimize Resource Management Centrally
    Application and Multi-Server Management
    SQL Server Control Point provides dashboard viewpoints into instance and data-tier application utilization
    Drill down into details to troubleshoot or confirm consolidation candidates
    Easily adjust default utilization policies to meet the needs of your SQL Server environment
    "When you work in an environment where a single product requires up to 40 SQL Server instances for its full life cycle, the multi-server management features of SQL Server 2008 R2 become an invaluable tool – it’s almost like having an extra DBA.”
    — Chuck Heinzelman, BigHammer
  • 50. Maximize Utilization of Database Servers
    Resource Governor
    Prioritized workloads allow high-impact but intermittent tasks to run side-by-side with mission-critical operations on the same server
    Users have a more consistent experience, which can result in fewer service calls about slow systems
    Prevent runaway queries that hold resources for extended periods of time
    “Resource Governor allows us to control the percent[age] of total resources any operation can consume so that they don’t adversely impact our real-time data access.”
    — Michael Steineke, Vice President, Information Technology, Edgenet
  • 51. Leverage High-Powered Hardware
    Consolidate and Save
    Provides support for more processors and memory to consolidate multiple databases onto more powerful hardware
    Protect data through unique security contexts of each named instance
    Provide the opportunity to reduce power consumption, rack space, and management costs as servers typically have only 15 to 20 percent utilization
    Server consolidation on a truly scalable platform… not only can deliver positive economic advantages in the short term, but can position organizations for lower operating costs, and improved service delivery for years to come.
    — Server Consolidation Case Study by Alinean
  • 52. Enable Mobility Within Your Server Farm
    Virtualization Licensing
    Increases deployment flexibility by running SQL Server inside a virtualized OS, which allows it to be easily moved or migrated to another server
    Allows 4 virtual machines/processor for each licensed physical processor
    Provides flexibility in managing server farms by allowing licenses to be moved between servers without restrictions
    Virtualization is the top spending priority for CIOs, with a 5 percent increase in the number of companies budgeting for virtualization and more than a 15 percent increase in spending.
    —CIO|Insight Top IT Spending Priorities Report for 2009
  • 53. Reduce Storage Requirements
    Data Compression
    20% to 60% compression ratios*
    Saves disk storage
    • Can be combined with backup compression (now also available in Standard edition)
    Provides more room to store more data, which allows more instances to share disk resources
    Reduces data size to increase performance
    Moves more applications to the data center based on reduced storage
    “Our initial testing shows we’ll see 50 percent to 60 percent data compression using SQL Server 2008... we will also benefit from faster query performance.”
    — Mazal Tuchler, BI Manager, Clalit Health Services
    *Stated percentages are typical but not guaranteed
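Enabling the compression described above is a rebuild option. A sketch with a hypothetical table name, including the estimation procedure worth running first:

```sql
-- Estimate the savings before committing (names are hypothetical):
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo', @object_name = 'SalesHistory',
    @index_id = NULL, @partition_number = NULL,
    @data_compression = 'PAGE';
-- Enable page-level compression on the table:
ALTER TABLE dbo.SalesHistory
    REBUILD WITH (DATA_COMPRESSION = PAGE);
-- Backup compression is likewise a single option:
BACKUP DATABASE Sales TO DISK = 'D:\Backups\Sales.bak' WITH COMPRESSION;
```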
  • 54. Additional Consolidation Features
    Support for 8 CPUs
    Provide support for the most powerful servers
    Provide more resources for each SQL instance
    Consolidate more servers onto a single physical server
    Hot-add CPU and RAM
    Extend consolidation efforts by adding resources to physical or virtual machines without incurring downtime
  • 55. Security and Governance
  • 56. ENTERPRISE EDITION has the tools to secure data and track its use to limit the possibility of data loss.
    Increase Security and Auditing
    Data is the lifeblood of business, key to day-to-day operations, and long-term success. Lost or stolen customer data can mean losing customer trust, paying fines, and handling lawsuits. Compromised data that is associated with products or research and development can mean loss of competitive advantage. Companies need to know who is seeing that data.
  • 57. Encrypt Your Data On-the-Fly
    Transparent Data Encryption
    Encrypt the entire database on the disk to protect against lost or stolen disks or backup media
    Does not increase database size and has minimal performance impact
    Does not require application changes
    Backups are automatically encrypted
    Protects against direct access to database files
    “With SQL Server 2008 we have transparent encryption, so we can easily enforce the encryption of the information in the database itself without making any changes on the application side.”
    — Ayad Shammout, Lead Technical Database Administrator, CareGroup HealthCare System
    [Diagram: an employee record readable by the application but stored encrypted on disk]
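Turning on TDE follows a short key hierarchy. A minimal sketch with hypothetical certificate and database names; in practice, back up the certificate immediately, since encrypted backups cannot be restored without it:

```sql
-- Sketch only: names and password are placeholders.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';
USE Sales;  -- hypothetical database
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE Sales SET ENCRYPTION ON;
```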
  • 58. Further Protect Encrypted Data
    Extensible Key Management
    “Defense in depth” makes unauthorized access to data harder by storing encryption keys away from the data
    May facilitate separation of duties between DBA and data owner
    Uses a hardware security module (HSM) for encryption and decryption, which may result in performance gains
    Enables centralized key management across organization
    …SQL Server 2008 helps CareGroup comply with HIPAA data encryption requirements… SQL Server 2008 delivers an excellent solution… by supporting third-party key management and hardware security module products.
    — CareGroup Case Study
  • 59. Deepen Insight into Data Use
    SQL Server Audit
    Track reads, writes, and other events to Windows Application Log and Windows Security Log
    Detect misuse of permissions early on to limit possible damage
    More granular audits for flexibility
    Built into the database engine
    Simple configuration using SQL Server Management Studio
    Faster performance than SQL Trace
    “The enhanced auditing tools in SQL Server 2008 enable us to track all changes to tables and other data elements in our system.”
    — Ayad Shammout, Lead Technical Database Administrator, CareGroup HealthCare System
    [Diagram: the audit log recording a WRITE by one user and a READ by another against an employee record]
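An audit like the one pictured can be declared in a few statements. A sketch with hypothetical names, writing to a file target:

```sql
-- Sketch only: audit, database, and table names are hypothetical.
USE master;
CREATE SERVER AUDIT HrAudit TO FILE (FILEPATH = 'D:\Audits\');
ALTER SERVER AUDIT HrAudit WITH (STATE = ON);
USE Sales;  -- hypothetical database
CREATE DATABASE AUDIT SPECIFICATION HrAuditSpec
    FOR SERVER AUDIT HrAudit
    ADD (SELECT, UPDATE ON OBJECT::dbo.Employee BY public)
    WITH (STATE = ON);
```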
  • 60. Enable World-Class Compliance
    Common Criteria Certification
    Requirement for many governments, industries, and enterprise customers
    SQL Server 2008 Enterprise achieved Common Criteria (CC) compliance at EAL1+ (Evaluation Assurance Level)
    Represents the third time for CC compliance and the first time for a 64-bit version of SQL Server
    R2 is built on the SQL Server 2008 foundation and brings forward the security benefits with minimal changes to the core engine
    The Common Criteria was designed by a group of nations to improve the availability of security-enhanced IT products, help users evaluate IT products for purchase, and contribute to consumer confidence in IT product security.
    — SQL Server Books Online
  • 61. Data Warehousing and Reporting
  • 62. ENTERPRISE EDITION provides enterprise-class scalability and features to meet today’s reporting and data warehousing needs.
    Build On the Platform for Data Warehousing & Reporting
    Reporting databases and data warehouses can make reporting faster and provide unparalleled insight into your business, but very large data sets can limit or eliminate those benefits.
  • 63. Clean Incoming Data
    Master Data Services
    Standardize the data people rely on to make critical business decisions
    Enable central management of data entities and hierarchies
    Provide human workflow notification of data that violates business rules
    Track changes to data over time with versioning
    “Investing in master data management solutions, especially in today’s difficult economic times makes dollars and sense. Master data management is an investment in cost savings, revenue recovery, human resource optimization, and capital investment efficiency.”
    — Kirk Haselden, Microsoft
    [Diagram: master data consolidated from LOB, Accounting, and CRM systems]
  • 64. Capture Incremental Changes to Data
    Change Data Capture
    Enable change tracking to the data in tables
    Speed updates to data warehouses by capturing net changes
    Provide relatively low impact on performance
    “The CDC feature gives us the information we need and frees us from the task of creating and testing triggers.”
    — Gerald Schinagl, Project Manager and Systems Architect for the Sports Database,Austrian Broadcasting Corporation Radio & Television (ORF)
    [Diagram: INSERT, UPDATE, and other changes captured and fed to the data warehouse through ETL]
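Change data capture is enabled per database and per table. A sketch with hypothetical names, showing how an ETL job might pull the net changes for a window:

```sql
USE Sales;  -- hypothetical database
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = 'dbo',
    @source_name   = 'Orders',   -- hypothetical source table
    @role_name     = NULL,
    @supports_net_changes = 1;   -- requires a primary key or unique index
-- In the ETL job, bound the window by LSN and read only the net changes:
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_net_changes_dbo_Orders(@from_lsn, @to_lsn, 'all');
```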
  • 65. Optimize Star Schema Queries
    Star Join Optimizations
    Process more data in a shorter time by optimizing common join scenarios in a data warehouse
    Significantly reduce the amount of processing for star schema queries
    Faster join processing speeds up lookups during data load, which shortens load windows and enables more frequent updates for better reporting
    “...ORF has found an immediate improvement of 15 percent in data loading. We consider that a great advantage when you can get 15 percent faster data loading without having to change a line of our own code.”
    — Gerald Schinagl, Project Manager and Systems Architect, ORF
    [Diagram: a star schema with one fact table joined to six dimension tables]
  • 66. Enable Very Large Databases with Fast Track Data Warehouse 2.0
    A solution to help customers and partners accelerate their data warehouse deployments
    Fast Track Data Warehouse offers reference architectures and templates for data warehouse solutions to increase scale and speed time to value for creating data warehouses.
    Twelve SMP Reference Architectures
    SI Solution Templates
  • 67. Report on Relevant Data Only
    Table Partitioning
    Manage and access subsets of data quickly and efficiently
    Reduce time spent troubleshooting storage allocation issues
    Speed data load and maintenance operations
    Take advantage of all available CPUs in the machine to complete operations more quickly
    “Enhancements in partition query dramatically reduce the effects of lock escalation… improving availability and improv[ing] query response time.”
    — Randy Dyess, SQL Server Mentor, TechNet Article
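Partitioning as described above takes a function, a scheme, and a table bound to them. A sketch partitioning by year, with hypothetical names and all partitions placed on PRIMARY for brevity:

```sql
CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');
CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear ALL TO ([PRIMARY]);
CREATE TABLE dbo.FactSales (
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psByYear (OrderDate);
-- Queries that filter on OrderDate touch only the relevant partitions.
```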
  • 68. Report More Quickly on Large Data Sets
    Query Parallelism
    SQL Server Enterprise automatically executes operations in parallel by splitting work across all the CPUs in the machine, which allows for larger queries or quicker responses
    Data mining in parallel takes advantage of all CPUs in the machine to examine more data and give results more quickly
    “For most large queries SQL Server generally scales linearly… this means that if we double the number of CPUs, we see the response time drop in half.”
    — Craig Freedman, Coauthor, Inside Microsoft SQL Server 2005: Query Tuning and Optimization
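Parallel execution is automatic, but the degree of parallelism can be capped per query or instance-wide. A sketch with a hypothetical fact table:

```sql
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.FactSales               -- hypothetical fact table
GROUP BY CustomerID
OPTION (MAXDOP 4);               -- cap this query at 4 CPUs
-- Instance-wide default (0 = use all available CPUs; advanced option):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```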
  • 69. Analysis
  • 70. ENTERPRISE EDITION deepens business understanding by providing the means to quickly and easily analyze your data.
    Deepen Insight into Your Business
    The ability to analyze data for trends that predict possible outcomes can provide insight into business operations, optimize sales, limit waste, or quickly find and stop fraud.
  • 71. Provide Richer Data Analysis to All Users
    PowerPivot for SharePoint 2010
    Examine large amounts of data in a familiar tool to gain deep insight
    Work seamlessly with SharePoint 2010 to collaborate with thousands of users on that data
    Enable IT organizations to manage the service with the PowerPivot Management Dashboard
    "Using Excel as an interface for Self-Service BI, we are modeling, analyzing and pivoting millions of records in memory and publish it to SharePoint in few minutes and other people being able to access it from a URL. It is fast and easy”
    — Ayad Shammout, Lead DBA, CareGroup
  • 72. Customize Data Mining for Your Needs
    Algorithm Extensibility
    Change the default algorithm for analyzing data to discover new and different patterns in the data
    Filter and restrict the mining model in intelligent ways to fine-tune results
    Provide greater business value by customizing mining and reporting data to specific business needs
    Different businesses have different goals and need to make different decisions. [Data mining technologies] are extensible, enabling you to add plug-in algorithms that meet uncommon analytical needs that are more specific to an individual business.
    — Predictive Analysis with SQL Server 2008 White Paper
    Analysis
    Microsoft Clustering
  • 73. Look Only at Data That Interests You
    Table Partitioning
    Manage and access subsets of data quickly and efficiently
    Reduce time spent troubleshooting storage allocation issues
    Speed data load and maintenance operations
    Take advantage of all available CPUs in the machine to complete operations more quickly
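    The steps above can be sketched in T-SQL: define a partition function and scheme, then create the table on that scheme so queries filtered on the partitioning column touch only the relevant partition. Table and column names are illustrative:

    ```sql
    -- Sketch: partition a sales table by year (boundary values illustrative).
    CREATE PARTITION FUNCTION pfSalesByYear (date)
        AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');

    CREATE PARTITION SCHEME psSalesByYear
        AS PARTITION pfSalesByYear ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Sales
    (
        SaleID   int   NOT NULL,
        SaleDate date  NOT NULL,
        Amount   money NOT NULL
    ) ON psSalesByYear (SaleDate);

    -- Partition elimination: this query reads only the 2009 partition.
    SELECT SUM(Amount)
    FROM dbo.Sales
    WHERE SaleDate >= '2009-01-01' AND SaleDate < '2010-01-01';
    ```

    Loading or archiving a year of data then becomes a fast partition switch rather than a row-by-row operation.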
    “Enhancements in partition query dramatically reduce the effects of lock escalation… improving availability and improv[ing] query response time.”
    — Randy Dyess, SQL Server Mentor, TechNet Article
    Analysis
  • 74. Mine Through Very Large Data Sets
    Query Parallelism
    Enterprise Edition automatically executes operations in parallel, splitting the work across all CPUs in the machine to support larger queries and faster responses
    Data mining in parallel takes advantage of all available CPUs in the machine to examine more data and give results more quickly
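    Parallel execution is automatic, but the degree of parallelism can be capped per query with the MAXDOP hint. A minimal sketch, with illustrative table and column names:

    ```sql
    -- Sketch: a large aggregation the optimizer can parallelize across CPUs.
    -- OPTION (MAXDOP 8) caps this query at 8 CPUs; omit the hint to let the
    -- server-wide "max degree of parallelism" setting apply.
    SELECT ProductID, SUM(Amount) AS TotalSales
    FROM dbo.Sales
    GROUP BY ProductID
    OPTION (MAXDOP 8);
    ```

    Capping parallelism per query is useful when a handful of very large queries would otherwise starve concurrent workloads of CPU.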
    “For most large queries SQL Server generally scales linearly… this means that if we double the number of CPUs, we see the response time drop in half.”
    — Craig Freedman, Coauthor, Inside Microsoft SQL Server 2005: Query Tuning and Optimization
    Analysis
  • 75. Understand Query Results
    Drill Through Queries
    Gain insight into how results were obtained by drilling into more detail
    Query the model’s training data to understand how the model was trained, verify that it still resembles the data being reported on, and identify reporting models that have become inaccurate
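    Drill-through can be sketched in DMX. The first query returns the training cases behind a model; the second queries the mining structure itself, which can expose columns beyond the model’s scope. Model and structure names are illustrative:

    ```sql
    -- Sketch: drill through to the cases a mining model was trained on
    -- (requires drillthrough to be enabled on the model).
    SELECT *
    FROM [HighIncomeClusters].CASES
    WHERE IsTrainingCase();

    -- SQL Server 2008 also allows drillthrough to the underlying structure,
    -- including attributes not used by the model.
    SELECT *
    FROM MINING STRUCTURE [CustomerStructure].CASES;
    ```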
    The ability to query directly against the data mining structure enables users to easily include attributes beyond the scope of the mining model requirements, presenting complete and meaningful information.
    — SQL Server 2008 White Paper
    Analysis
    [Diagram: Cases → Training Algorithm → Mining Model]
  • 76. Advantages of Enterprise Edition
    Unplanned Downtime
    Reduce downtime that results from unexpected events
    Disaster Recovery
    Reduce impact and quickly recover from disasters
    Business Intelligence
    Deliver business insight by transforming data into actionable knowledge
    Reporting
    Provide information to the people who need it to make good decisions
    Planned Downtime
    Reduce downtime for regularly scheduled maintenance
    Server Consolidation
    Reduce hardware and software costs by combining servers
  • 77. © 2009 Microsoft Corporation. All rights reserved. Microsoft, SQL Server, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.
    The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
  • 78. Appendix
  • 79. Summary of Technologies: Benefits of Enterprise Edition