The document discusses performance issues and troubleshooting for SQL Server. It begins with an introduction of the presenter and purpose of the session. Several case studies are then presented on resolving specific performance problems including adjusting the max memory setting, checking query execution plans, updating statistics and indexing. The document concludes with resources provided for further reading on SQL Server performance tuning.
Zapping ever faster: how Zap sped up by two orders of magnitude using RavenDB - Oren Eini
Join a real uplift experience with Hagay Albo, the CTO of the Zap/Yellow Page Group in Israel, in which he explains how his team was able to take a legacy (slow and hard to modify) group of sites and make them easier to work with, MUCH faster and greatly simplified the operational environment.
By prioritizing high availability, flexible data modeling and raw speed, Zap was able to reduce its load times by two orders of magnitude. Using RavenDB as the core engine behind Zap's new sites improved site traffic, reduced time to market and made it possible to implement next-gen features that were previously beyond reach.
Who wants to be a DBA? Roles and Responsibilities - Kevin Kline
There are a lot of great careers in information technology (IT) these days. The US Bureau of Labor Statistics predicts years of shortages in many IT career tracks, and one of the most highly paid and respected of IT professions is the database administrator (DBA).
This session will teach you all about the professional expectations, roles, and responsibilities of a DBA. We'll show you what they do on a regular basis and how they operate within medium and large IT organizations. You'll learn not only what is most commonly expected of a DBA from a technological standpoint, but what your future boss and your future customers need to be satisfied by your performance.
This is the first of several sessions in a series about DBA skills and professionalism. This session is for beginners. Video available at http://sqlsentry.tv.
Sharding is a technique for scaling databases by partitioning data across multiple database servers or nodes. There are different ways to implement sharding such as sharding on the primary key or an index in a relational database. For key-value stores, a common approach is to hash the key and assign it to a node using consistent hashing. While sharding improves scalability, it also introduces some limitations like not being able to perform joins across shards and additional work required for data maintenance.
1) The document discusses using Couchbase NoSQL technology to store data for social network games, which have huge concurrent requests but require low response times.
2) Traditional SQL databases have limitations for these workloads, as they are centralized and have processing overhead. Couchbase is distributed, stores active data in RAM for fast access, and allows horizontal scaling.
3) However, moving to Couchbase from SQL presented design and architecture challenges. The document then describes the SNS Storage Engine (SSE), a PHP library that provides a layer on top of Couchbase to address these challenges through features like concurrency control and high-level data structures.
This document discusses converting SQL queries from Microsoft Access to SQL Server for improved performance when queries are slow in Access or when moving to a .NET application front-end. It outlines reasons for translation like gaining speed by executing queries in seconds instead of minutes and enabling the move to .NET. The document also previews a demo database that will be used to illustrate lifting tables from Access to SQL Server and re-linking queries.
The goal of this session is to explore what's new in AlwaysOn for SQL Server 2016: How do I replace my Mirroring? Will my DTC be compatible with Availability Groups? What about SSISDB? High availability in Azure…
This document discusses how to set up a new SQL Server instance within an hour by having a standardized, automated process. It recommends capturing configuration settings, collecting existing scripts, assembling a batch file, and documenting the build process. Key steps include installing SQL Server, configuring settings like file paths and ports, setting up monitoring and alerts, performing initial maintenance like index rebuilds and backups, and rebooting. Automating as much as possible via scripts allows the process to be easily delegated and saves significant time over manual configuration.
Drupal commerce performance profiling and tuning using loadstorm experiments... - Andy Kucharski
Drupal Commerce performance profiling by load testing the Drupal Commerce Kickstart site on an AWS instance and comparing how the site performs after several well-known performance tuning enhancements are applied. We compare performance improvements after Drupal cache, aggregation, Varnish, and nginx reverse proxy.
This presentation was first given at Drupal Mid Camp in Chicago. We used loadstorm and new relic to analyze results.
Cassandra consistently outperforms other NoSQL databases in throughput and scalability according to various benchmark tests, but has higher read latencies. MongoDB typically has the worst performance in terms of latency. The best database depends on application requirements - no single NoSQL database is best for all use cases. Combining database types, such as using Cassandra for analytics and an RDBMS for transactions, can leverage each database's strengths.
MySQL X protocol - Talking to MySQL Directly over the Wire - Simon J Mudd
The document discusses the MySQL X Protocol, which introduces a new way for clients to communicate directly with MySQL servers over TCP/IP. It provides an overview of how the protocol works, including capabilities exchange, authentication, querying the server for both SQL and noSQL data, pipelining requests, and the need for a formal protocol specification. Building client drivers requires understanding the protocol by reading documentation, source code, and examples as documentation is still incomplete. Pipelining requests can improve performance over high-latency connections. A standard specification would help driver development and ensure compatibility as the protocol evolves.
Experiences testing dev versions of MySQL and why it is good for you - Simon J Mudd
Presentation given at OpenExpo Europe 2018 in Madrid on 6th June 2018
Each new version of MySQL comes out with exciting new features, many of which we’ve been asking for for a long time. The first development or DMR versions are released to the public some time before the software is considered production quality. So who is going to test these new versions which might break at any time and lose all your data?
booking.com does just this. The talk explains why we do it, what both we and the MySQL community gets out of it. If you’ve not considered doing such testing it’s very easy so come along and find out how. If you want to find out about some of the fun bugs we’ve seen then you’ll like this presentation too.
1. Learn about service accounts for SharePoint 2013
2. Learn how to install SharePoint 2013 using best practices for lowest privilege installations
3. Learn about the installation of workflow server & Office web apps and how they interact with SharePoint 2013
SQL Server Best Practices - Install SQL Server like a boss (RELOADED) - Andre Essing
The document discusses SQL Server best practices for installation, configuration, and maintenance. It recommends planning system resources before deployment, using optimal hardware, separating files and volumes, enabling security features, and automating maintenance tasks with scripts instead of plans. Regular backups and monitoring are emphasized to ensure a reliable and high-performing SQL Server system.
Rainbows, Unicorns, and other Fairy Tales in the Land of Serverless Dreams - Josh Carlisle
When done correctly Serverless offers fantastic potential but can also lead to spectacular failure when critical concepts are overlooked. With over a dozen Serverless implementations on Azure Functions over the last couple years, I’ve learned some lessons the hard way. In this talk, I will be sharing a few of the most impactful hard-earned lessons and how I was able to overcome them. I’ll be touching on topics ranging from considerations using traditional relational databases, managing service and data connections to managing complexity and increasing observability. The talk is done in the context of Azure Functions but whose concepts apply equally to all Serverless Platforms.
Windows Phone programming pain and how to deal with it - Zalo_app
This document discusses ways to improve performance when working with images and LINQ queries in Windows Phone applications. It recommends caching images locally using a FIFO rule to manage memory usage. When making LINQ queries, it suggests writing queries in batches, using compiled queries for frequent queries, and only selecting the needed entity properties to reduce response sizes. The document provides examples showing how these techniques can significantly improve query write and read times.
How do you run Apache Cassandra on the IaaS of a distributed organization? What challenges can we solve with Cassandra?
One of the tools for measuring replication latency can be found here:
https://github.com/gitaroktato/cassandra-replication-latency-tools
The Stack Exchange infrastructure supports 560 million page views and 34TB of data transferred per month across multiple technology stacks and datacenters. Performance is the top priority, and tools like Mini Profiler, OpServer, and Client Timings are used to monitor and improve performance. The infrastructure is designed with redundancy across networks, load balancers, web and database servers, caching, and search to ensure high availability and fast response times below 60ms for core pages.
This document discusses Stripe's evolution in using Hadoop to analyze data that was originally stored in MongoDB. It describes three approaches they tried: 1) dumping data to TSV files and querying with Impala, but it was slow; 2) using MongoSQL to copy data to HBase but Impala queries were still slow; 3) settling on storing data in Parquet files using Thrift definitions to define the schema, which allows fast queries in Impala and scalable MapReduce jobs while synchronizing data from MongoDB to Hadoop.
A solution for reactive relational DB connections when programming with Spring WebFlux. When you want your whole app to be reactive, don't let the JDBC DB connection become a bottleneck - use the R2DBC driver. In the presentation I share my experience of working with the driver, and whether it is ready to be used in serious production projects. The talk was presented at Devoxx Ukraine, Nov 1, 2019.
Stream upload and asynchronous job processing in large scale systems - Zalo_app
This document discusses an asynchronous job processing system developed by VNG for Zalo. It allows for parallel stream uploads and background processing of large amounts of data. The key points are:
1. It uses a job server and distributed worker model to process jobs asynchronously in a reliable, scalable, and high performance manner.
2. Jobs are collected, then processed by workers and responded to. The system supports both single and batch jobs to efficiently handle large volumes of uploads.
3. It was implemented using C/C++ for high performance and includes features like load balancing, failover, recovery from failures, and a job state system to reliably process all jobs.
How many lines of code does it take to generate a running total? How would you find a value in the next row of data – without using a cursor or loop? How can you efficiently store rows of data with a lot of optional fields, and how can you quickly find which of those rows have values? And how can you eliminate locking without resorting to dirty reads? SQL Server has answers for all of these questions, and none requires more than a few lines of code. Give me an hour, and I will blow your mind!
This document discusses data bloat in SQL Server tables and provides recommendations for addressing it. It notes that data bloat can occur over time and lead to large table sizes that impact disk usage and cache performance. The document provides examples of inappropriate data types being used in tables and suggests more optimal data types to reduce storage size and improve efficiency. It recommends checking table sizes and structures using stored procedures to identify issues and opportunities to reduce data bloat.
Omney Mohamed Fawzy Elsayed is an Egyptian national who has worked as a Project Assistant and Assistant to the Resident Representative at the Friedrich-Ebert-Stiftung in Cairo, Egypt since 2004. He holds a BA in Architecture from Helwan University and is fluent in Arabic, German, and English. Over his career, Omney has taken on various administrative and managerial roles within the Friedrich-Ebert-Stiftung, and has participated in numerous training programs to develop his professional skills.
The world of theories often seems far removed from the day-to-day lives of professionals. Political leaders, religious leaders, entrepreneurs, artists… each person's work contains a vision of the stakes, goals, difficulties and meaning of their work.
This document provides information about eCapital Advisors, a performance management and business analytics consulting firm. It discusses eCapital's founding, headquarters location, number of customers, employees, and service offerings such as strategic assessments, implementations, upgrades, training, and managed services across various industries. The document then outlines an agenda for discussing advanced allocations and actual efficiency through density and sparsity in Essbase.
The document lists various SQL Server resources including websites, personalities to follow, podcasts to listen to, video training options, tools to use, and scripts to run. It also provides contact details for the author and promises a future discussion on scripts and their results.
MySQL Performance Tuning. Part 1: MySQL Configuration (includes MySQL 5.7) - Aurimas Mikalauskas
Is my MySQL server configured properly? Should I run Community MySQL, MariaDB, Percona or WebScaleSQL? How many innodb buffer pool instances should I run? Why should I NOT use the query cache? How do I size the innodb log file size and what IS that innodb log anyway? All answers are inside.
Aurimas Mikalauskas is a former Percona performance consultant and architect currently writing and teaching at speedemy.com. He's been involved with MySQL since 1999, scaling and optimizing MySQL backed systems since 2004 for companies such as BBC, EngineYard, famous social networks and small shops like EstanteVirtual, Pine Cove and hundreds of others.
Additional content mentioned in the presentation can be found here: http://speedemy.com/17
This document summarizes Chris Skardon's experience migrating the database for his competition running site Tournr from SQL to document and graph databases. It describes how the initial database choice of SQL Server became limiting and led to migrations first to RavenDB, a document database, and then to Neo4j, a graph database. Both migrations required reworking the data model and code but provided performance and flexibility benefits. While challenging, the migrations were worthwhile as the graph model better fit Tournr's needs.
SQL Server for SharePoint geeks: A gentle introduction - Thomas Vochten - BIWUG
This is the presentation I delivered at the latest BIWUG meeting. I also included a list of links underneath for people that want to know more about SQL Server
ICONUK 2016: Back From the Dead: How Bad Code Kills a Good Server - Serdar Basegmez
This document summarizes the troubleshooting process used to identify and resolve a performance issue impacting a mission critical Domino database. Initial analysis found the database compact was not completing and the server was experiencing high swap space usage and memory pressure. Further investigation revealed several issues with the database design and scheduled tasks. A multi-step process was then used to optimize the operating system, Domino configuration, and address a problem with a custom application that was filling memory. Collaborative debugging between administrators and developers was able to replicate the issue and identify the specific code causing the performance problem.
Transitioning From SQL Server to MySQL - Presentation from Percona Live 2016 - Dylan Butler
What if you were asked to support a database platform that you had never worked with before? First you would probably say no, but after you lost that fight, then what? That is exactly how I came to support MySQL. Over the last year my team has worked to learn MySQL, architect a production environment, and figure out how to support it alongside our other platforms (Microsoft SQL Server and Oracle). Along the way, I have also come to appreciate the unique offering of this platform and see it as an important part of our environment going forward.
To make things even more challenging, our first MySQL databases were the backend for a critical, web based application that needed to be highly available across multiple data centers. This meant that we did not have the luxury of standing up a simpler environment to start with and building confidence there. Our final architecture ended up using a five node Percona XtraDB Cluster spread across three data centers.
This session will focus on lessons learned along the way, as well as challenges related to supporting more than one database platform. It should be interesting to anyone who is new to MySQL, anyone who is being asked to support more than one database platform, or anyone who wants to see how an outsider views the platform.
MongoDB .local Toronto 2019: Finding the Right Atlas Cluster Size: Does this ... - MongoDB
How do you determine whether your MongoDB Atlas cluster is over provisioned, whether the new feature in your next application release will crush your cluster, or when to increase cluster size based upon planned usage growth? MongoDB Atlas provides over a hundred metrics enabling visibility into the inner workings of MongoDB performance, but how do you apply all this information to make capacity planning decisions? This presentation will enable you to effectively analyze your MongoDB performance to optimize your MongoDB Atlas spend and ensure smooth application operation into the future.
ECMDay2015 - Kent Agerlund – Configuration Manager 2012 – A Site Review - Kenny Buntinx
Ever experienced sluggish ConfigMgr administrator console performance, or collections taking forever to refresh? Join Kent Agerlund as he walks you through a ConfigMgr site review and reveals why so many ConfigMgr installations don't perform as they should. This session will be packed with tips and tricks, SQL secrets and PowerShell scripts that will optimize your environment and bring ConfigMgr into the state it was supposed to be in from the beginning.
Maintenance Plans for Beginners | Every experienced administrator has used, to some extent, what are called Maintenance Plans. During this session, I'd like to discuss how they can be useful, what functionality they provide when we use them, and what to look out for. A session at level 200, at times pushing 300, ending with an open discussion.
This document outlines the topics and sections covered in a 35-hour Informatica training course. The course introduces Informatica concepts and components, including architecture, client software, source and target definitions, mappings, workflows, monitoring, transformations, parameters, reusable objects, and best practices. Key areas covered include designing ETL processes, loading and transforming data, debugging mappings, and administering Informatica. The course contains lectures, demonstrations, and hands-on labs for applying the skills learned.
The majority of cloud-based DWH platforms provide a wide range of migration tools from in-house DWH. However, I believe that cloud migration success rests not only on reducing infrastructure maintenance costs, but also on the additional performance gained from a tailored data model.
I am going to show that copying star or snowflake schemas as-is will not deliver the maximum performance boost in DWHs such as Amazon Redshift and Google BigQuery. Moreover, this approach may cause additional cloud expenses.
We will discuss why data models should be different for each particular database, and how to get maximum performance from each database's peculiarities.
Most performance tuning techniques for cloud-based DWH involve adding extra nodes to the cluster, but in some cases this leads to performance degradation as well as an extra cost burden. Sometimes a better approach is to get maximum speed from the current hardware configuration, perhaps even with less expensive servers.
I will show some examples from production projects that achieved extra performance on lower-spec hardware, and edge cases like a huge, wide fact table with fully denormalized dimensions instead of a classical star schema.
Scaling a High Traffic Web Application: Our Journey from Java to PHP - 120bi
What makes an application scale? What should you worry about early on and what can wait?
Over the last 3 years, Achievers has learned many lessons and gained fundamental knowledge on scaling our SaaS platform. CTO Dr. Aris Zakinthinos will present and discuss the decisions we’ve made including language choice, server architecture, and much more; join us while we share tips, tricks, and things to absolutely avoid.
Throughout the evening you will have the opportunity to talk to the development team behind the Achievers Platform and ask questions on scaling best practices.
Database Fundamental Concepts - Series 1 - Performance Analysis - DAGEOP LTD
This document discusses various tools and techniques for SQL Server performance analysis. It describes tools like SQL Trace, SQL Server Profiler, Distributed Replay Utility, Activity Monitor, graphical show plans, stored procedures, DBCC commands, built-in functions, trace flags, and analyzing STATISTICS IO output. These tools help identify performance bottlenecks, monitor server activity, diagnose issues using traces, and evaluate hardware upgrades. The document also covers using SQL Server Profiler to identify problems by creating, watching, storing and replaying traces.
SQL Azure for ISUG (SQL Server Israeli User Group) - Pini Krisher
This document provides an overview of SQL Azure and discusses key topics including:
- What is Platform as a Service (PaaS) and how SQL Azure fits within Azure's PaaS offerings.
- Key aspects of SQL Azure including the portals, performance tiers, versions, security features, limitations compared to on-premise SQL Server, pricing, and pros/cons.
- Additional Azure data services like Storage, Virtual Machines, DocumentDB, Tables, Hadoop, and BI.
This document discusses benchmarking TPC-H queries in MongoDB compared to MySQL. It introduces MongoDB and describes setting up the TPC-H data by embedding all tables into a single MongoDB collection. Six sample queries are presented and run using Map-Reduce and the Aggregation Framework. Benchmark results show MongoDB performing worse than MySQL on all queries due to data conversion difficulties and MongoDB's immature Aggregation Framework. The document concludes that while MongoDB is suitable for some applications, it is not well-suited to complex queries like those in TPC-H due to its lack of standard query language and server-side processing abilities.
Performance Scenario: Diagnosing and resolving sudden slow down on two node RAC - Kristofferson A
This document summarizes the steps taken to diagnose and resolve a sudden slow down issue affecting applications running on a two node Real Application Clusters (RAC) environment. The troubleshooting process involved systematically measuring performance at the operating system, database, and session levels. Key findings included high wait times and fragmentation issues on the network interconnect, which were resolved by replacing the network switch. Measuring performance using tools like ASH, AWR, and OS monitoring was essential to systematically diagnose the problem.
Performance Benchmarking: Tips, Tricks, and Lessons Learned - Tim Callaghan
Presentation covering 25 years worth of lessons learned while performance benchmarking applications and databases. Presented at Percona Live London in November 2014.
This document summarizes a presentation about optimizing server-side performance. It discusses measuring performance metrics like time to first byte, optimizing databases through techniques like adding indexes and reducing joins, using caching with Memcached and APC, choosing fast web servers like Nginx and Lighttpd, and using load testing tools like JMeter to test performance before deployment. The presentation was given by a senior engineer at Wayfair to discuss their experiences optimizing their platform.
Similar to Random thoughts on SQL Server performance
Build applications with generative AI on Google Cloud - Márton Kodok
We will explore Vertex AI Model Garden powered experiences and learn more about the integration of these generative AI APIs. We will see in action what the Gemini family of generative models offers developers for building and deploying AI-driven applications. Vertex AI includes a suite of foundation models, referred to as the PaLM and Gemini families of generative AI models, which come in different versions. We are going to cover how to use the API to:
- execute prompts in text and chat
- cover multimodal use cases with image prompts
- finetune and distill to improve knowledge domains
- run function calls with foundation models to optimize them for specific tasks
At the end of the session, developers will understand how to innovate with generative AI and develop apps using generative AI industry trends.
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr... - Marlon Dumas
This webinar discusses the limitations of traditional approaches to business process simulation based on hand-crafted models with restrictive assumptions. It shows how process mining techniques can be assembled to discover high-fidelity digital twins of end-to-end processes from event data.
2. About me
Nigel Foulkes-Nock
• SQL Server DBA
with TargetGroup, Newport
• Previous
Barclays Partner Finance
Hewlett Packard
• Worked with SQL Server
versions from 6.5 through 2016
2
3. Session Purpose
• Discuss real-life Performance issues
• Provide ideas and suggestions
• Get you thinking
3
4. Disclaimer
• Test in a non-Production environment
• Follow your Change process
• Speak with your DBA
4
9. The Server can breathe again
• Max Memory changed to 12Gb
• 4Gb free for O/S
• Everyone is happy
9
10. CPUs – More is good, right?
• Queries running longer
• 8 CPU, 64Gb Memory
10
11. CPUs – More is good, right?
• Check the
Defaults
11
12. CPUs – More is good, right?
• Check your queries with sp_blitzcache or
similar tool
• Set “Cost Threshold for Parallelism” and
consider “Max Degree of Parallelism”
12
13. Making the right decision
• Query Plans
• Estimated Query Plan
• Actual Query Plan
• One more...
13
22. Change the Queries
• Avoiding “Data Bloat”
• Choose Datatypes wisely
• Fix early (just say no!)
22
23. Go faster by doing less
• Performance was poor
• Virtual Server – hot-upgradable
• Updates, removing invalid Characters
• 3Tb of Data Updates but 200Gb of Data
23
24. Go faster by doing less
• SQL Table – 200Gb
• 3 Characters to update, 5 Fields
• 200Gb * 3 Characters * 5 Fields = 3000Gb
24
25. Go faster by doing less
• Only use required Columns
• Filter rows, early!
25
26. Conclusion
• It all happened – really!
• Resources to follow
• Nigel Foulkes-Nock
NigelFN@Outlook.com
26
27. Resources
• Where you can’t change the Query
• Memory Issues
• https://www.brentozar.com/blitz/max-memory/
• CPU
Simple
http://blog.idera.com/sql-server/is-your-sql-server-slow-maybe-its-trying-too-hard/
Advanced
• http://www.littlekendra.com/tag/cost-threshold-for-parallelism/
27
30. Resources
• Are you using the correct Data Types?
https://www.connectionstrings.com/sql-server-data-types-reference/
• Where to go next - Advanced Links
• https://www.brentozar.com/responder/
https://www.simple-talk.com/sql/performance/a-performance-troubleshooting-methodology-for-sql-server/
http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
30
Good afternoon. I'd like to introduce myself. I am Nigel Foulkes-Nock, and I am a DBA with TargetGroup, a software and financial services provider based in South Wales. TargetGroup are on the SQL Relay "Awesome Employers" list and, more importantly, came in the top 5 at the Cardiff Half Marathon Corporate Challenge last Sunday.
This talk is all about improving the performance of SQL Server. It's billed as a fairly basic talk - there’s plenty of ground to cover in 30 minutes. I won't be going into any area in any great depth, no demos but plenty of food for thought.
Let’s get a bit of feedback to start off with:
How many people have experienced performance problems… with SQL Server? OK, that’s pretty much everyone.
How many people managed to fix their problems? Got someone else to fix them?
Finally, how many people have caused performance problems in the past?
OK, that gives us all an idea of other people’s experiences, highlights our frustration and gives me an idea (for the last question) of how truthful you are.
Over the next 30 minutes, I'll be sharing some of my experiences with you. I'll be talking about issues that I have found, how I investigated further and the eventual outcome
The session is aimed at making you think - you may think “I know that already”, which is great. You may think, "ah, I can use some of those techniques", or even "oh heck, I've created that issue" (in which case you can promptly start looking at how you can resolve it when you get back to the office).
Exact details of the issues have been masked to protect the guilty. I hold no malice for people bringing me "learning opportunities", though it's never good for someone to bring the same issue again and again.
Disclaimer
Any suggestions or thoughts brought on from this talk should be performed in a safe environment (yeah, right) and any production changes should follow your existing change control process. I think I've covered everything in my disclaimer there. Don't go beyond your remit - if you're a Developer and you have DBAs at your organisation, don't go making Server level changes or installing stuff - go speak to your DBAs.
If you've already got a DBA but you notice some things a bit screwy in your environment, go and talk to them. There's nothing we like more than someone with ideas to help us - just don’t go in and change things. Even if you have the access, it’s still not OK.
If you have no DBA, lobby your management to get one. Better still, why not consider becoming a DBA? Happy to talk about this, and all sorts of other things after the session - just come and talk!
Now, I’ve split my experiences into two parts:
What to do when you can’t change the Query
In this section, I’ll talk about changes that you can make without changing the code. That doesn’t mean that they are not Changes (Change with a capital “C” here), they just don’t require “code changes”.
These changes can be really useful in many situations - you might be dealing with a Commercial Application, a piece of Legacy Software where all of the Developers have moved on, or maybe there’s just too much involved in testing any changes made to the logic of the existing code.
We’re just interested in the Environment.
What to do when you CAN change the Query
I nearly wrote “Performance Tuning when you CAN fix the Queries?”, but I didn’t.
Partly because the great Brent Ozar does a training course under this title, but more so because when it comes to queries, there’s often someone far more qualified to change code and fix Performance Issues. How about the people who caused the issues in the first place? Developers, Report Writers, Users can be quite receptive to polite suggestions towards how they might be able to make their queries play nicer with the Server and other queries. After all, they know the data, they know the results that they’re looking for and they have the expertise in testing the Changes that they’ve made. Often, they just need a bit of polite encouragement.
The call came in - "Nigel (never Nige), will you look at this server - it's slow". There wasn't a huge amount of data on there, and the Server specification was reasonable for what was involved - 2 CPU / 16Gb Memory. Yet when I logged on, it felt a bit sluggish. I checked the "Performance" tab in Task Manager - the Server had 5Mb of Memory left.
Upon further investigation, the memory settings had been left at the default - essentially allowing SQL Server to “take as much as it wants”, so it did just that - it took everything. With no Memory available to the Operating System and other Services, Windows Server just started using the Paging file instead of memory and that’s why everything became really slow... Memory is quick, disk is slow.
I changed the Maximum Memory setting within SQL Server to 12Gb which gave the Operating System 4Gb to "breathe", re-started SQL Server and all was happy once more.
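For reference, the change itself is tiny. Here's a minimal sketch of the setting described above, assuming the 16Gb server from the story - size the cap to leave your own Operating System enough headroom:

-- Cap SQL Server at 12Gb, leaving 4Gb for the O/S and other Services
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;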
CPU - More is Less sometimes
Now, I'm going to continue on the theme of the last improvement to talk about another situation where SQL Server can have too much as well as too little.
Falling hardware costs and ever increasing Database Sizes mean that plenty of SQL Servers are being specified with multiple processors. Trouble is, they're not always configured to work with them properly.
Another time, I started to look into a situation whereby Queries were taking longer and longer to run each day. I started by checking the basic environment - after all, 8 CPU, 64Gb of memory, for the workload it should have been sufficient. No massive increase in data to be processed so no real reason for the slowdown.
I checked the basic SQL Server setup details and there was a clue on the Options tab of SQL Server Properties. The value for "Cost Threshold for Parallelism" was set to the default value of 5. Also, the "Max Degree of Parallelism" was set to the default of 0.
Most Servers these days will have more than one processor, and this one certainly did. Running a query using multiple processors is always going to be faster than running it on a single processor, right? It Depends.
Before SQL Server runs a query, it works out the most efficient way that it can run. It creates an Estimated Execution Plan, together with calculating an arbitrary cost for running this Query. The "Cost Threshold for Parallelism" value is used to determine whether SQL Server should use multiple Processors to run this query. If it's set too low, then you can have a situation whereby too many queries are being run across multiple processors.
The default value for "Cost Threshold for Parallelism" is 5. Not five seconds, just 5 - the measurement is an arbitrary value that folklore says is based on one of the original SQL Server developers' PCs. That said, 5 is a low value. Current best practice is to set this to 50 in order to let a lot of the smaller queries just run happily along on single processors.
You see, when you run a query over multiple processors you often get unexpected delays (Waits) - SQL Server will split up the work and then gather the results together at the end.
I can relate this back to the race that I spoke about on the intro slide. I was part of a Corporate team, but the race wasn’t over until all of the team were over the line. No matter how quick our best runner finished, they had to wait for me until we were considered finished.
In my case, I changed the configured values and suddenly SQL Server started running most of the smaller queries more quickly.
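For anyone wanting to try the same thing, a hedged sketch of the two settings follows. The value of 50 is the best-practice starting point mentioned above; the MAXDOP value of 4 is purely illustrative and should be sized to your own core count and NUMA layout:

-- Raise the parallelism threshold so small queries stay on one processor
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'cost threshold for parallelism', 50;
-- 4 is a hypothetical example; 0 (the default) means "use all processors"
EXEC sys.sp_configure 'max degree of parallelism', 4;
RECONFIGURE;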
You can check the “Cost” of executed Queries by using a tool like sp_blitzcache from Brent Ozar unlimited, available as part of the “First Responder” kit a free download (link on the Resources slide).
I mentioned Query Execution Plans earlier - SQL Server uses them to determine the most efficient way of processing a Query. They don’t always work as you’d expect them to. You can't dictate exactly how SQL Server is going to run a query. Well, you can, but that’s like trying to dictate your children's choice of friends, it's never a good idea to force the situation. Far better to help guide them towards the right decision.
Reading and interpreting Query Plans is a huge subject. It’s a really interesting one and there’s plenty of Resources available to investigate further. I’ll provide a few links in the notes.
You can view the "Estimated Query Plan" before running a Query and look at the "Actual Query Plan" afterwards by using the settings in SQL Server Management Studio (SSMS). You can view the plan in SSMS, or better still use the excellent "SQL Plan Explorer" from SQLSentry, which is a completely free tool that makes SQL Query Plans far easier to read (especially the big ones).
We had an issue whereby a Query was running slow - too slow. I brought up the Estimated Query Plan and what did I find? A few Tables were being Joined, Indexes existed on the Columns being used for the Joins but it was clear that these Indexes weren’t being used, and the Query was running slowly.
Why was this? Let's talk about a few things.
Statistics
SQL Server keeps Statistics on columns to keep a track of the distribution of values in those columns. As an example, if you have a column that contains Phone Numbers then the Statistics will show SQL Server that most of the values (perhaps all) are unique, so an Index is likely to be useful. On the other hand, a column that contains Gender is likely to have very few values so it would be less useful.
Statistics change over time and it's really easy for Statistics to get out of date, causing SQL Server to not use an Index and do a Table Scan instead - i.e. read all of the data in the table.
I checked how fresh the Statistics were - they were old and stale, so I updated them (really easy to do). Re-running the Query, SQL Server had a better idea of what was in the data and chose a better way of returning that data in the Query.
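If you want to check freshness yourself, here's a minimal sketch using sys.dm_db_stats_properties (available from SQL Server 2008 R2 SP2 / 2012 SP1 onwards); dbo.Orders is a hypothetical table name:

-- How old are the Statistics, and how many rows changed since the last update?
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- The fix really is easy - refresh them (FULLSCAN on smaller tables,
-- sampled on very large ones)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;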
Indexes
Indexes can make it quicker to run a query, either by improving the speed of joins between Tables, or also when they include all of the columns needed from that table for a query - in that case, SQL only needs to read the Index instead of the entire table.
One difference between Indexes and Statistics is that in my experience, Managers know the word Index. “Add an Index, do it NOW” is a common expression when performance takes a nose-dive.
I can happily talk for hours about Indexes, but don't worry, not right now. Just think of Indexes as mini-tables. They contain the Columns that you're indexing on, together with a link back to the Clustered Index that holds the complete set of data.
The point that you should get from the above statement is that they take up disk space and they need maintenance. Each time that a Table is updated, if the update includes columns that are part of the Index, the Index will need updating too. You can quickly arrive at a situation whereby the maintenance of your Indexes has more of an effect than the problem that they are solving.
Also, when Joining Tables, make sure that the columns that you join on are of the same DataType. For example, if you're using INT to store Customer Number in one Table and Varchar(25) to store it in another Table, then joining those Tables won't use any Indexes on those columns. You can see that this is happening if you spot a "CONVERT_IMPLICIT" predicate in your Query Plan.
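Here's a contrived illustration of that mismatch (all table and column names hypothetical) - the join below forces a CONVERT_IMPLICIT, so an Index on OldInvoices.CustomerNumber can't be used for a seek:

-- Same logical value, two different DataTypes
CREATE TABLE dbo.Customers   (CustomerNumber INT PRIMARY KEY);
CREATE TABLE dbo.OldInvoices (CustomerNumber VARCHAR(25) NOT NULL);

-- INT has higher precedence, so SQL Server implicitly converts the
-- VARCHAR column, and any Index on it becomes useless for this join
SELECT c.CustomerNumber
FROM dbo.Customers AS c
JOIN dbo.OldInvoices AS i
  ON c.CustomerNumber = i.CustomerNumber;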
Index Tuning Advisor can be a bit over-eager in making suggestions - you need to approach with caution and learn about your Data before making changes. Personally, I prefer to use sp_BlitzIndex from Brent Ozar Unlimited but even then you need to exercise caution and that tool won’t make suggestions on certain types of Indexes.
Indexes need maintenance too - they get Fragmented over time and will need re-building and re-organising. It's worth checking and de-fragmenting your Indexes regularly.
A great tool for Maintenance is provided free by Ola Hallengren. This is highly configurable so that you can set it to only work on Tables that are fragmented beyond a certain level or above a certain size. Details on the Resources page once more.
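As an illustration, a call like the one below - a sketch based on the documented parameters of Ola Hallengren's IndexOptimize; check the current documentation before relying on it - only touches Indexes over 1000 pages, reorganising at moderate fragmentation and rebuilding at heavy fragmentation:

-- Hypothetical thresholds - tune to your own environment
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 5,    -- reorganise between 5% and 30%
    @FragmentationLevel2 = 30,   -- rebuild above 30%
    @MinNumberOfPages = 1000;    -- ignore tiny indexes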
In my experience, updating Statistics can have more of an effect than de-fragmenting Indexes. Also, de-fragmentation can cause an “IO storm” of updates - especially if you’re doing Log Backups / High Availability, so it’s probably best done less frequently.
Live Query Statistics
SQL Server Management Studio 2016 introduced a new feature called "Live Query Statistics". This allows you to watch a Query Plan as an animation while the Query runs. There's a really nice graphical representation of the Query Plan, with the arrows showing the number of rows processed - estimated rows and actual rows. It really helps with learning exactly where the bottlenecks are when running your query.
Not only that, it can be run against SQL Server 2014 as well as SQL Server 2016.
But wait, there’s more - this information is available not just in Graphical format - it’s exposed using a Dynamic Management View - sys.dm_exec_query_profiles.
This DMV provides a huge amount of data about a running Query - provided that you ask it.
To use sys.dm_exec_query_profiles, you need to set “Include Actual Execution Plan” in SSMS before running your query, or run the Query with the command “SET STATISTICS PROFILE ON”.
When would you want to use this? Certainly not every time you run a query.
We had a situation where a Query wouldn't complete and we were stuck. I started troubleshooting by viewing the Estimated Execution Plan. That gave a few clues - I reviewed the Indexes and the Statistics, and added an Index or two with the aim of making the Query Plan more efficient. The Estimated Query Plan returned a far lower cost, but still the Query wouldn't complete.
Further optimisation was necessary, but I needed to view the “Actual Query Plan” - but the Query wouldn’t complete. Catch-22 situation.
Using Live Query Statistics, either in its graphical form or through sys.dm_exec_query_profiles, I can get a better idea of where the query is stuck. It's also a really nice way to see where SQL Server is choosing a bad plan based on bad estimates. I've got a little query that tells me which parts of the running query have completed and where the Estimated Rowcount differs from the Actual Rowcount.
Potential reasons behind this include out of date Statistics, User Defined Functions, Table Variables, Distributed queries, Implicit Conversions and plain old Complex queries.
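A minimal sketch of that "little query" idea follows. The session_id of 53 is hypothetical, and remember the target query must be running with "SET STATISTICS PROFILE ON" or "Include Actual Execution Plan" for rows to appear:

-- Operators where the optimizer's guess was furthest from reality
SELECT node_id,
       physical_operator_name,
       estimate_row_count,
       row_count,
       row_count - estimate_row_count AS row_difference
FROM sys.dm_exec_query_profiles
WHERE session_id = 53
ORDER BY ABS(row_count - estimate_row_count) DESC;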
Hindsight
This last section was written with the benefit of hindsight. Fixing this specific issue didn’t involve the use of Live Query Statistics.
I had a chat with the Developer, nice chap.
He took the problematic code and examined it.
Now, many overly complex queries with huge sub-queries can be re-written. Data retrieved by the sub-queries can be extracted into Temporary Tables at the beginning of the query. These Tables can be Indexed just like any other Tables, and breaking a Query down like this not only provides higher performance - it can also make the Query easier to read and troubleshoot.
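A sketch of the pattern, with made-up names:

-- pull the sub-query's data out into a Temporary Table first...
SELECT CustomerNumber, SUM(OrderTotal) AS TotalSpend
INTO #CustomerSpend
FROM dbo.Orders
GROUP BY CustomerNumber;

-- ...Index it just like any other Table...
CREATE INDEX IX_CustomerSpend ON #CustomerSpend (CustomerNumber);

-- ...then use it in place of the sub-query
SELECT c.CustomerName, cs.TotalSpend
FROM dbo.Customers AS c
JOIN #CustomerSpend AS cs
  ON cs.CustomerNumber = c.CustomerNumber;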
Just make sure that you’re not copying great swathes of data and creating yet another problem.
Sure enough, there were many sub-queries, and the query was re-written as per the guidance above.
Performance increased significantly following the re-write - it completed in a tenth of the original time - and everyone was happy. It just goes to show that sometimes things can get too complicated for SQL Server to process in one go.
A word about Table Variables
I mentioned “Temporary Tables” - note that I’m not talking about “Table Variables”. If you see any of those then that’s certainly a prompt to re-write the Query. Table Variables look like a great idea until you realise that SQL Server can’t estimate their Rowcounts properly when building Query Plans - older versions simply assume they hold hardly any rows at all.
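A quick illustration, with made-up names - the exact Rowcount estimate varies by version, and SQL Server 2019’s deferred compilation improved matters:

DECLARE @Ids TABLE (Id INT PRIMARY KEY);
INSERT INTO @Ids SELECT Id FROM dbo.BigTable;  -- may load millions of rows...
SELECT Id FROM @Ids;                           -- ...but the plan assumes hardly any

CREATE TABLE #Ids (Id INT PRIMARY KEY);
INSERT INTO #Ids SELECT Id FROM dbo.BigTable;
SELECT Id FROM #Ids;                           -- Statistics exist here, so estimates are realistic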
Oh - now I’ve strayed into the second part of the talk.
Move quicker by avoiding “Data Bloat”
I was doing the Daily Checks one morning and noticed some growth on one of our Databases. I took a quick look at the "Top Tables by Disk Space" Report (one of the Microsoft-supplied ones that you can access by right-clicking the Database, then selecting Reports / Top Tables by Disk Space).
Upon viewing the Report, I was surprised to see an unfamiliar Table at the top of the tree. This Table wasn't part of what I'd regarded as our main workload, so I couldn’t understand why it was there - and, dividing the Table size by the number of rows, each row seemed inordinately large. Something just didn't look right.
I examined the structure of the Table and it appeared that the Developer hadn't taken much care over their choice of DataTypes. Choosing the wrong DataType can make a massive difference to the ongoing size of a Table. Simple things like using INT or BIGINT for Columns where a smaller DataType would suffice - "Month of Year" stored as BIGINT (that's 8 bytes) where TINYINT would have done (just 1 byte), numeric fields with far more precision and scale than the values need, fixed-width Character fields that are larger than the expected values “just in case”, and UNICODE Data Types (two bytes per character) where there’s no possibility of non-ASCII values being held.
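To illustrate the difference, compare these two (entirely made-up) definitions:

CREATE TABLE dbo.SalesWide (
    MonthOfYear BIGINT,           -- 8 bytes
    Amount      NUMERIC(38, 10),  -- 17 bytes
    Reference   CHAR(200),        -- 200 bytes, every row, "just in case"
    CountryCode NCHAR(2)          -- 4 bytes of UNICODE for plain ASCII values
);

CREATE TABLE dbo.SalesNarrow (
    MonthOfYear TINYINT,          -- 1 byte
    Amount      NUMERIC(9, 2),    -- 5 bytes
    Reference   VARCHAR(200),     -- only as long as the actual value
    CountryCode CHAR(2)           -- 2 bytes
);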
By using the correct DataTypes, it was possible to reduce the size of the Table significantly. Most of the Indexed columns in the Table were affected too, so the Indexes were significantly larger than they should have been.
It wasn't a simple job to rectify the situation - we were running out of space because of... the overly-sized Table, and if you simply change the Column definitions then guess what? The Table gets bigger still, due to the way that SQL Server processes Table definition changes. The only route is to copy the Data into another Table (or out to a file and back), and even then you are likely to need to do it in Batches to avoid filling your Log file. It’s not a pretty situation, and it gets worse as you go along, so it’s best caught early - ideally in a review of the code before it ever reaches Production, but hey, we live in the real world.
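The batched copy looks something like this (names made up; the batch size needs tuning to suit your Log file):

WHILE 1 = 1
BEGIN
    -- copy a manageable chunk at a time to keep Log growth under control
    INSERT INTO dbo.BigTable_New (Id, MonthOfYear, Amount)
    SELECT TOP (10000) src.Id, src.MonthOfYear, src.Amount
    FROM dbo.BigTable AS src
    WHERE NOT EXISTS (SELECT 1 FROM dbo.BigTable_New AS t WHERE t.Id = src.Id);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
END;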
Another "special feature" about this Table was a series of about 150 numeric columns, each of which may be populated, or more likely would be set to "null". Either value would take the same amount of space regardless. The use of "Sparse" columns can come in handy in a situation like this - provided that the vast majority of the entries in the "Sparse" column are likely to be "null" then by defining the column as "Sparse", the disk space that the Table uses would be significantly reduced.
Increase performance by doing less
I had a call to say that SQL Server performance was poor. This was on a Virtual Server, so we could add CPU or memory to make it go quicker. Guess what they wanted us to do?
I fired up SQL Profiler and set a trace to show me what was going on, with a filter to only record queries that were taking more than a few seconds. One little point about SQL Profiler - make sure that you only use it when absolutely necessary, and always switch it off straight afterwards, otherwise you'll create yet another Performance issue all by yourself.
After the main queries had run, I examined the Profiler output only to find that it involved data updates. Multiple updates. Very similar updates.
Nothing wrong with the work that needed to be done, let’s just say that it could have been done more efficiently.
The updates were to remove Control Characters from a Table prior to further processing.
Each update to remove an invalid character was run as a separate statement. There were three different characters being removed across five different fields, each combination done as a separate SQL UPDATE Statement - fifteen statements in all.
Now, this SQL Table was 200Gb in size. Let us do a little calculation...
200Gb (Table Size) x 3 (Characters) x 5 (Fields) = 3,000 Gb
That's 3 Terabytes of data moving around SQL Server just to do this data update.
The Server had 16Gb of memory allocated - OK, that’s not a lot - and 12Gb of this was given to SQL Server. Now, SQL Server just loves memory.
SQL Server is one big cache. As you query a Database or update Data, SQL Server will deliver the information and perform the update in memory. If the data that you need is in memory right now, it'll let you have it. If it's not in memory then SQL Server will load it from disk, then hand it to you (or update it), from memory. This will happen until SQL Server fills the memory that it has to work with, then it writes any changed data back out to disk before dropping that data from memory to make space to load something else into memory. A constant shuffle. Remember - Memory is quick, disk is slow.
Reading through 3Tb of data to do the updates, when a single 200Gb pass would do, makes a significant difference to the speed of your processing.
The Query was updated to perform all of the updates in one go and performance increased significantly.
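The shape of the fix was something like this (names and characters are illustrative - the real Table had three control characters across five fields):

-- one pass through the Table instead of fifteen
UPDATE dbo.StagingData
SET Field1 = REPLACE(REPLACE(REPLACE(Field1, CHAR(9), ''), CHAR(10), ''), CHAR(13), ''),
    Field2 = REPLACE(REPLACE(REPLACE(Field2, CHAR(9), ''), CHAR(10), ''), CHAR(13), '');
    -- ...and likewise for Field3, Field4 and Field5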
Don’t use so much Data
Only use the Columns that you require
Look out for “SELECT * FROM” within your code. Chances are, if you’re using that, no Index can cover the query and you’re shifting far more data than you need (or want) to.
Not just that - if you’re performing a Pivot or any form of GROUP BY operation, you only want to perform the action on the relevant columns.
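For example (made-up names), naming just the columns you need lets a small covering Index serve the whole query:

CREATE INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate) INCLUDE (CustomerNumber);

SELECT CustomerNumber, OrderDate   -- served entirely from the Index above
FROM dbo.Orders
WHERE OrderDate >= '20160101';
-- SELECT * would mean a lookup into the full Table for every matching row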
Filter rows, filter early
Another common issue that I’ve encountered is a series of SQL Statements that carefully copy a set of Data, update and embellish it, and only at the end use a DELETE FROM or a WHERE clause to keep just part of the Data. Often, that filtering can be done far earlier in the process. Along with only using the Columns that you need, this can make a tremendous difference to the performance of a query.
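A quick sketch of the idea (made-up names again):

-- filter at the first step, not the last
SELECT CustomerNumber, OrderTotal
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(MONTH, -1, GETDATE());

-- everything that follows now works on a far smaller set of Data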
Conclusion
At the beginning of this session, I promised you would hear about some of my real-world examples. Everything in this talk truly happened - even if I can’t give all of the grisly details.
The following slides will be available after the event. They cover background information about the Topics that I discussed. They will help guide you in the right direction.
If you have any questions, please get in touch, either following the talk or feel free to drop me an e-mail.
Thank you for your time.
Nigel Foulkes-Nock
NigelFN@Outlook.com