Before we get into SQL Server 2014, I want to start off by showing you how far the Microsoft Data Platform has come in the last decade. Many of you may still be using SQL Server 2005 or SQL Server 2008 running on Windows Server 2003 or 2008, and may still have the perception that SQL Server is a good Tier 2 and Tier 3 database but not ready for mission-critical Tier 1 applications. Well, both SQL Server and Windows Server have come a long way when it comes to tackling the largest mission-critical applications.

SQL Server 2012 introduced a comprehensive set of mission-critical capabilities: performance, with in-memory capabilities for data warehousing as well as in-memory analytics; security; and zero-transaction-loss high availability with AlwaysOn. SQL Server 2012 also brought to market one of the most comprehensive BI platforms for both IT implementers and business users, with Data Quality Services and Power View in Excel.

SQL Server 2014 is all about differentiation and leapfrogging the Tier 1 data platform vendors like Oracle and IBM, with breakthrough performance via the third release of in-memory technology built into SQL Server, called In-Memory OLTP. With SQL Server 2014 you can uniquely speed up transactions, queries, and analytics, as well as throughput; we'll talk about this in more detail in just a little bit. SQL Server 2014 also leverages fantastic new capabilities in Windows Server 2012 and 2012 R2 to provide predictable performance and scale for your Tier 1 applications, with technologies like NIC Teaming and Storage Spaces bringing SAN-like intelligence right into the OS. Finally, SQL Server 2014 takes the hybrid cloud platform introduced in SQL Server 2012 to the next level, with compelling new scenarios for your on-premises SQL Server applications, like simplified cloud backup and cost-effective disaster recovery.

So as you can see, SQL Server 2014 is no longer just the database for your Tier 2 applications; we are providing differentiation across mission-critical, BI, and hybrid cloud for the largest applications. So let's dig into each of these areas and take a look at the innovation we are delivering in this release.
Before we jump into Microsoft's in-memory engineering design points, let's take a look at a couple of key trends that have shaped our design. One, of course, is the significant drop in memory pricing, which makes in-memory databases feasible for customers. The second is CPU performance flattening out, meaning that just throwing more compute at a problem may not resolve performance bottlenecks. Our design approach took into account how to better utilize existing CPU capacity, as we often hear from customers that typical CPU utilization is below the 50% mark, often due to contention.
Now let's take a closer look at our unique in-memory design points, from our engineers' decision to make in-memory pervasive by building it into the data platform, to how we have made it easy to adopt in-memory in your applications.
Let's take a look at the first design point our engineering team took, back during the release of SQL Server 2008 R2, which was to build in-memory technology into the platform and make it pervasive, from analytics to data warehousing to OLTP. One of the main benefits of building in-memory into the platform is that you as the customer don't have to learn new development tools or new APIs, and you can take advantage of all the other rich features and capabilities in SQL Server along with in-memory. You can even take advantage of other data platform services, on-premises or in the cloud with Microsoft Azure, alongside in-memory performance. This is not the case when you look at competitive technologies that have chosen to acquire and stitch together a solution, like TimesTen from Oracle or Netezza from IBM. The stitched solutions often break core database functionality for both DB2 and Oracle, as the core databases were designed to run on disk. For example, RAC does not work with TimesTen, and you have to learn a whole new set of APIs and tools to use TimesTen.
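To make that concrete, here is a minimal sketch of what "built-in" looks like in practice: a memory-optimized table declared with the same CREATE TABLE syntax you already know, where only the WITH clause is new. The table and column names are hypothetical, and it assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.

    -- Hypothetical hot table; assumes the database already has a
    -- MEMORY_OPTIMIZED_DATA filegroup.
    CREATE TABLE dbo.ShoppingCart
    (
        CartId      INT       NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT       NOT NULL,
        CreatedDate DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);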
Ferranti Computer Systems, which we'll take a closer look at later in the session, designs software for utility companies. They are helping to transform the utility industry and revolutionize the way electricity is consumed and sold by improving the way utilities leverage data. They not only need in-memory technology to quickly process large amounts of relational data, they also need a solution for tackling non-relational data. Because SQL Server 2014 offers a built-in approach to in-memory, they were able to use In-Memory OLTP as well as our Microsoft Azure HDInsight service to tackle big data. They are now able to write more than 200 million rows in 15 minutes.
The last point I will make on "built-in" is that because we have designed in-memory into the platform, it's not only pervasive across all workloads, it is also built into a single Enterprise SKU, so you don't have to pay more or purchase additional SKUs to gain all of the in-memory capabilities.
Now let's talk about the next unique design point our engineering team took, which was to make in-memory flexible. What do I mean by flexible? I am talking about being able to have in-memory tables work alongside traditional on-disk tables. Again, we believe that putting cold data in-memory is not a good use of memory, because if the data is hardly utilized, who cares whether it is running in-memory? We believe the best design is to have the hot tables and stored procedures running in-memory, with the cold tables residing on SSD or disk. And with SQL Server 2014, you don't have to create two separate databases and place traditional tables in one and in-memory tables in another; you can query tables residing in-memory and tables on disk with a single query.
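As a quick illustration (the table names are hypothetical, reusing the sketch from earlier), this is literally one ordinary T-SQL query touching both worlds:

    -- Join a memory-optimized table (hot) with a disk-based table (cold)
    -- in a single query; the engine handles the interop transparently.
    SELECT c.CartId, c.CreatedDate, o.OrderTotal
    FROM dbo.ShoppingCart AS c          -- in-memory
    JOIN dbo.OrderHistory AS o          -- on disk
        ON o.CartId = c.CartId
    WHERE c.CreatedDate >= DATEADD(DAY, -7, SYSUTCDATETIME());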
There are some key benefits to you as the customer from this flexibility. One, it minimizes your CapEx as data volumes grow, because you get to choose which tables reside in-memory and which ones remain on disk. Unlike SAP HANA, which requires the entire database to be loaded in-memory, whether it is hot or cold data, with SQL Server 2014 you get to choose. This also means cost reduction in terms of hardware upgrades. For example, if you have a 2 or 3 terabyte database, you would need that much memory to use SAP HANA, and if your hardware doesn't support it you will need to refresh your hardware. With the flexibility of our in-memory solution, we can speed up your applications regardless of the hardware they are sitting on, because you get to choose which tables to migrate. We also provide you with tooling to help you decide which tables and stored procedures are optimal to migrate, which we will talk about a bit later.
Finally, because the entire database doesn't have to live in-memory, you don't need to rewrite the entire application. If you have a SQL Server application, you just migrate select tables to memory. As a matter of fact, migrating a table only affects the DLLs (dynamic link libraries) the engine generates for it, not the application code.
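If you want to see those generated DLLs for yourself, the engine exposes them through a standard DMV; a small sketch:

    -- Lists the native DLLs the engine generated for memory-optimized
    -- objects; note there is nothing here the application has to load.
    SELECT name, description
    FROM sys.dm_os_loaded_modules
    WHERE description = 'XTP Native DLL';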
SBI Liquidity is a SQL Server customer in the financial sector; they process Japanese currency exchange trading and deal with trade volumes greater than the entire GDP of Japan. For them, this flexible design meant they didn't have to rewrite their entire SQL Server application, and they can aggregate transactional currency data along with historical data with our in-memory technology 10 times faster. This means being able to predict currency upticks and downticks 10 times faster, which translates into greater profit. Even though it is a pennies-on-the-dollar type of scenario, with such high volumes it has a significant business impact.
Now let's talk about how we increase both transactional speed and throughput by removing contention in the database. Many of you might be thinking: I could pin tables to memory in previous versions of SQL Server, so how is this any different or better? The speed gains you have been hearing me talk about from SBI Liquidity and Ferranti are all compared to previous versions of SQL Server paging tables into memory. So why the massive speed gains? The key reason is that the table structures are now optimized to run in-memory; there is no more paging of tables to memory, period. And there are no more locks and latches, which removes contention in the database. This is how we can achieve transactional performance increases of up to 30x.
In addition to speed, we also improve throughput, because our engineering team came up with an algorithm that removes locks and latches without compromising durability. This means a massive reduction of contention in the database, which leads to increased throughput as well as speed.
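The companion feature to memory-optimized tables is natively compiled stored procedures: T-SQL compiled down to machine code that runs against those lock- and latch-free structures. Here is a minimal sketch, with hypothetical names, building on the earlier table:

    -- Natively compiled procedure; SCHEMABINDING, EXECUTE AS OWNER and the
    -- ATOMIC block are required by the native compilation model.
    CREATE PROCEDURE dbo.usp_AddCartItem
        @CartId INT, @UserId INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH
        (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

        INSERT INTO dbo.ShoppingCart (CartId, UserId, CreatedDate)
        VALUES (@CartId, @UserId, SYSUTCDATETIME());
    END;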
Bwin is an ISV in the online-gaming industry, and for them, transactions equate to revenue. With our unique In-Memory OLTP design of optimized table structures and no locks and latches, they were able to improve transaction speed by 16x and increase player capacity by 20x on the same hardware. Because contention is significantly reduced, they were also able to cut player response times from 50 milliseconds to 3 milliseconds. In terms of business value, SQL Server 2014's In-Memory OLTP technology meant increased revenue for Bwin, a significantly improved customer experience, and a greater number of customers on the same infrastructure! This is why we feel in-memory technology is transformational: because of the significant impact it can have on your business.
SQL Server 2014 In-Memory OLTP | TechDays Sweden 2014
Blazing performance with SQL Server 2014 In-Memory OLTP
Johan Åhlén, SolidQ
• Johan Åhlén
• Nordic CTO at SolidQ
• Chairman of the SQL Server user group
• Responsible for the Swedish championship in SQL Server performance
• SQL Server MVP since 2011
• Involved in a couple of books
5 myths about In-Memory OLTP
• "All data disappears if you pull the plug"
• "Hekaton is a NoSQL database"
• "Hekaton is an improved DBCC PINTABLE"
• "No changes to existing code are needed"
• "All applications run 100 times faster"
Built-in to Relational Data Services
• Use the same tools across services
• Leverage familiar tools and dev experience; no costly add-ons
• Works seamlessly with existing SQL Server features
• Write speed of 200+ million rows in 15 minutes
The entire DB doesn't need to be in-memory
• Minimize CapEx as data volumes grow
• Access in-memory and on-disk tables with a single query
• Don't need to rewrite the application
In-memory for increased throughput & speed
• Optimized table structures
• No locks or latches with 100% data durability
• Up to 30x transactional performance with no locks or latches
• "To describe Hekaton in two words, it's wicked fast." (Rick Kutschera, Bwin)
Memory-optimized tables:
• Can be written to disk
• Must fit in memory
Disk-based tables:
• Always written to disk
• Don't need to fit in memory
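Durability is a per-table choice, which is how both halves of that comparison can hold at once. A sketch with hypothetical names:

    -- SCHEMA_AND_DATA: rows are logged and survive a restart (the default).
    -- SCHEMA_ONLY: only the table definition survives; handy for staging
    -- or session state where losing rows on restart is acceptable.
    CREATE TABLE dbo.SessionState
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
        Payload   VARBINARY(4000) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);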
Things supported by In-Memory OLTP "version 1.0"
• Storage on disk (if you want)
• DACPAC and BACPAC