Presentation (in Swedish) from Microsoft TechDays Sweden 2014.
For more information (in Swedish) see http://www.johanahlen.info/2014/11/sql-server-2014-in-memory-oltp-eller-what-the-heck-is-a-hekaton/.
3. Johan Åhlén (presenter)
• Nordic CTO, SolidQ
• Chairman of the SQL Server user group in Sweden
• Responsible for the Swedish SQL Server performance championship ("prestanda-SM")
• SQL Server MVP since 2011
• Contributor to a couple of books
Blog: www.johanahlen.info
6. 5 myths about In-Memory OLTP
• "All data is lost if you pull the plug"
• "Hekaton is a NoSQL database"
• "Hekaton is an improved DBCC PINTABLE"
• "Existing code needs no changes"
• "All applications run 100 times faster"
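The first myth above is addressed by the DURABILITY option on memory-optimized tables. A minimal T-SQL sketch (database, filegroup, file path, and table names are hypothetical) of creating a fully durable memory-optimized table in SQL Server 2014:

```sql
-- A memory-optimized filegroup must exist before creating in-memory tables.
ALTER DATABASE SalesDb ADD FILEGROUP SalesDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb ADD FILE
    (NAME = 'SalesDb_mod', FILENAME = 'C:\Data\SalesDb_mod')
    TO FILEGROUP SalesDb_mod;
GO

-- SCHEMA_AND_DATA means committed rows are logged and recovered after a
-- restart, so "pulling the plug" does not lose committed data.
CREATE TABLE dbo.Orders (
    OrderId    INT   NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT   NOT NULL,
    Amount     MONEY NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

With DURABILITY = SCHEMA_ONLY, only the table definition survives a restart, which is the behavior the myth describes.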
10. Use the same tools across services
[Diagram: familiar dev & management tools spanning both disk-based Relational Data Services and the new in-memory Relational Data Services, running on Microsoft Azure Infrastructure Services. Write speed of 200 million rows in 15 minutes; real-time data access.]
Key benefits (in-memory built-in):
• Leverage familiar tools, no costly add-ons
• Works seamlessly with existing SQL Server features
11. The entire DB doesn't need to be In-Memory
[Diagram: a single application issues a single query against warm and hot data in-memory and cold data on-disk, despite exponential data growth.]
10x faster performance, with scalability and reduced operating costs.
Key benefits (in-memory flexibility):
• Minimize capex as data volumes grow
• Access in-memory and on-disk data with a single query
• No need to rewrite the entire app
12. In-memory for increased throughput & speed
[Diagram: before/after natively compiling stored procedures for In-Memory OLTP in the app; 30x faster transactions, up to 35x.]
"To describe Hekaton in two words, it's wicked fast." (Rick Kutschera, Bwin: 16x faster transactions)
Greater throughput with no locks or latches.
Key benefits:
• Optimized table structures
• No locks or latches, with 100% data durability
• Up to 30x transactional performance gains
14. Comparison
In-Memory OLTP:
• High write speed
• High read speed
• Can be persisted to disk
• Uncompressed
• Must fit in memory
ColumnStore indexes:
• Low write speed
• High read speed
• Always persisted to disk
• Compressed
• Does not need to fit in memory
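To illustrate the ColumnStore side of the comparison, a sketch (the fact table name is hypothetical) of the clustered columnstore index introduced in SQL Server 2014:

```sql
-- Converts the table to compressed, column-wise, disk-based storage;
-- the data does not need to fit in memory.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;
```

This matches the comparison: compressed, always persisted to disk, and tuned for high read speed rather than high write speed.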
15. Features supported by In-Memory OLTP "version 1.0"
• Storage on disk (if you want)
• AlwaysOn
• Resource Governor
• SSIS
• Service Broker
• PowerShell
• DACPAC and BACPAC
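Natively compiled stored procedures belong to this feature set as well. A sketch (procedure and table names are hypothetical) of the SQL Server 2014 syntax:

```sql
-- Compiled to machine code (a DLL) at creation time. SQL Server 2014
-- requires SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block.
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderId INT, @CustomerId INT, @Amount MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- dbo.Orders must be a memory-optimized table; natively compiled
    -- procedures cannot reference disk-based tables in 2014.
    INSERT INTO dbo.Orders (OrderId, CustomerId, Amount)
    VALUES (@OrderId, @CustomerId, @Amount);
END;
```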
Before we get into SQL Server 2014, I want to start by showing you how far the Microsoft Data Platform has come in the last decade. Many of you may still be using SQL Server 2005 or SQL Server 2008 running on Windows Server 2003 or 2008, and may still have the perception that SQL Server is a good Tier 2 and Tier 3 database but not ready for mission-critical Tier 1 applications. Well, both SQL Server and Windows Server have come a long way when it comes to tackling the largest mission-critical applications. SQL Server 2012 introduced a comprehensive set of mission-critical capabilities: performance with in-memory capabilities for data warehousing in addition to in-memory analytics, security, and zero-transaction-loss high availability with AlwaysOn. SQL Server 2012 also brought to market one of the most comprehensive BI platforms for both IT implementers and business users, with Data Quality Services and Power View in Excel. SQL Server 2014 is all about differentiation and leapfrogging the Tier 1 data platform vendors like Oracle and IBM with breakthrough performance via the third release of in-memory technology built into SQL Server, called In-Memory OLTP. With SQL Server 2014 you can uniquely speed up transactions, queries, and analytics as well as throughput; we'll talk about this in more detail in just a little bit. SQL Server 2014 also leverages fantastic new capabilities in Windows Server 2012 and 2012 R2 to provide predictable performance and scale for your Tier 1 applications, with technologies like NIC Teaming, Storage Spaces, and SAN-like intelligence built right into the OS. Finally, SQL Server 2014 takes the hybrid cloud platform introduced in SQL Server 2012 to the next level with compelling new scenarios for your on-premises SQL Server applications, like simplified cloud backup and cost-effective disaster recovery.
So as you can see, SQL Server 2014 is no longer just a database for your Tier 2 applications; we are providing differentiation across mission-critical, BI, and hybrid cloud workloads for the largest applications. So let's dig into each of these areas and take a look at the innovation we are delivering in this release.
Before we jump into Microsoft's in-memory engineering design points, let's take a look at a couple of key trends that have shaped our design. One, of course, is the significant drop in memory pricing that makes in-memory databases feasible for customers. The second is CPU performance flattening out, meaning that just throwing more compute at a problem may not resolve performance bottlenecks. Our design approach took into account how to better utilize existing CPU capacity, as we often hear from customers that typical CPU utilization is below the 50% mark, often due to contention.
Now let's take a closer look at our unique in-memory design points, from our engineers' decision to make in-memory pervasive by building it into the data platform, to how we have made it easy to bring in-memory into your applications.
Let's take a look at the first design point our engineering team took back during the release of SQL Server 2008 R2, which was to build in-memory technology into the platform and make it pervasive, from analytics to data warehousing to OLTP. One of the main benefits of building in-memory into the platform is that you as the customer don't have to learn new development tools or new APIs, and you can take advantage of all of the other rich features and capabilities in SQL Server along with in-memory. You can even take advantage of other data platform services, on-premises or in the cloud with Microsoft Azure, along with in-memory performance. This is not the case when you look at competitive technologies that have chosen to acquire and stitch together a solution, like TimesTen from Oracle or Netezza from IBM. The stitched solutions often break core database functionality for both DB2 and Oracle, as the core databases were designed to run on disk. For example, RAC does not work with TimesTen, and you have to learn a whole new set of APIs and tools to use TimesTen.
Ferranti Computer Systems (we'll take a closer look at them later in the session) designs software for utility companies. They are helping to transform the utility industry and revolutionize the way electricity is consumed and sold by improving the way utilities leverage data. They not only need the help of in-memory technology to quickly process large amounts of relational data, but they also need a solution for tackling non-relational data. Because SQL Server 2014 offered the built-in approach to in-memory, they were able to utilize In-Memory OLTP as well as our Microsoft Azure HDInsight service to tackle big data. They are now able to write more than 200 million rows in 15 minutes.
The last point I will make on "built-in" is that because we have designed in-memory into the platform, it is not only pervasive across all workloads, it is also built into a single Enterprise SKU, so you don't have to pay more or purchase additional SKUs to gain all of the in-memory capabilities.
Now let's talk about the next unique design point our engineering team took, which was to make in-memory flexible. What do I mean by flexible? I am talking about having in-memory tables work alongside traditional tables on disk. Again, we believe that putting cold data in-memory is not a good use of memory, because if the data is hardly utilized, who cares whether it is running in-memory? We believe the best design is to have the hot tables and stored procedures running in-memory, with the cold tables residing on SSD or disk. And with SQL Server 2014, you don't have to create two separate databases and place traditional tables in one and in-memory tables in another; you can query tables residing in-memory as well as tables on disk with a single query.
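The single query mentioned here needs no special syntax: a plain T-SQL join (table names are hypothetical) can mix a memory-optimized table with a traditional disk-based one:

```sql
SELECT o.OrderId, o.Amount, h.ShippedDate
FROM dbo.Orders AS o            -- memory-optimized (hot data)
JOIN dbo.OrderHistory AS h      -- traditional disk-based (cold data)
    ON h.OrderId = o.OrderId
WHERE o.Amount > 1000;
```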
There are some key benefits to you as the customer from in-memory flexibility. One, it minimizes your CapEx as data volumes grow, meaning you get to choose which tables reside in-memory and which ones remain on disk. Unlike SAP HANA, which requires the entire database to be loaded in-memory whether it is hot or cold data, with SQL Server 2014 you get to choose. This also means cost reduction in terms of hardware upgrades. For example, if you have a 2 or 3 terabyte database, you would need that much memory to use SAP HANA, and if your hardware doesn't support it, you will need a hardware refresh. With the flexibility of our in-memory solution, we can speed up your applications regardless of the hardware they are sitting on, because you get to choose which tables to migrate. We also provide tooling to help you decide which tables and stored procedures are optimal to migrate, which we will talk about a bit later.
Finally, because the entire database doesn't have to live in-memory, you don't need to rewrite the entire application. If you have a SQL Server application, you just migrate select tables to memory; as a matter of fact, migrating tables only affects DLLs (dynamic-link libraries), not the application code.
SBI Liquidity is a SQL Server customer in the financial sector; they process Japanese currency exchange trading and deal with trade volumes greater than the entire GDP of Japan. For them, this flexible design meant they didn't have to rewrite their entire SQL Server application, and they can aggregate transactional currency data along with historical data with our in-memory technology 10 times faster. This means being able to predict currency upticks and downticks 10 times faster, which translates into greater profit; even though it is a "pennies on the dollar" type of scenario, with such high volumes it has a significant business impact.
Now let's talk about how we increase both transactional speed and throughput by removing contention in the database. Many of you might be thinking: I could pin tables to memory in previous versions of SQL Server, so how is this any different or better? The speed gains you have been hearing me talk about from SBI Liquidity and Ferranti are all relative to previous versions of SQL Server paging tables to memory. So why the massive speed gains? The key reason is that the table structures are now optimized to run in-memory; there is no more paging of tables to memory, period. And there are no more locks and latches, which removes contention in the database. This is how we can achieve transactional performance increases of up to 30x.
In addition to speed, we can also improve throughput, because our engineering team came up with an algorithm to remove locks and latches without compromising durability. This means a massive reduction of contention in the database, which leads to increased throughput as well as speed.
Bwin is an ISV in the online-gaming industry, and for them, transactions equate to revenue. With our unique In-Memory OLTP design of optimized table structures and no locks or latches, they were able to improve transaction speed by 16x and increase player capacity by 20x on the same hardware. Because contention is significantly reduced, they were also able to cut player response times from 50 milliseconds to 3 milliseconds. In terms of the business value that SQL Server 2014's In-Memory OLTP technology provided Bwin, this meant increased revenue, a significantly improved customer experience, and a greater number of customers on the same infrastructure! This is why we feel in-memory technology is transformational: because of the significant impact it can have on your business.