This document provides 10 tips for optimizing SQL Server performance. Chief among them: establish a performance baseline before making changes, set clear optimization goals, and limit the changes made between benchmarks so the effect of each individual change can be isolated. Benchmarking against an established baseline helps identify abnormal behavior and measure the impact of improvements. Together, the tips describe a process for using performance data to identify and address specific performance issues.
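To make the baseline tip concrete, the following T-SQL is a minimal sketch, not taken from the paper itself, of capturing a wait-statistics snapshot into a user table (the table name dbo.WaitStatsBaseline is an assumption) so that later benchmark runs can be compared against it:

    -- Minimal baseline-capture sketch; dbo.WaitStatsBaseline is a made-up name.
    IF OBJECT_ID('dbo.WaitStatsBaseline', 'U') IS NULL
        CREATE TABLE dbo.WaitStatsBaseline (
            capture_time  DATETIME2    NOT NULL DEFAULT SYSDATETIME(),
            wait_type     NVARCHAR(60) NOT NULL,
            waiting_tasks BIGINT       NOT NULL,
            wait_time_ms  BIGINT       NOT NULL
        );

    -- Snapshot cumulative wait stats; diff two snapshots to see what
    -- changed between benchmark runs.
    INSERT INTO dbo.WaitStatsBaseline (wait_type, waiting_tasks, wait_time_ms)
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0;

Because sys.dm_os_wait_stats is cumulative since the last service restart (or manual clear), the useful quantity is the delta between two snapshots taken before and after a change.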
Ten Tips for Optimizing SQL Server Performance
Written by Patrick O’Keefe and Peter O'Connell
ABSTRACT
Performance optimization on SQL Server can be challenging.
A vast array of information is available on how to address
performance problems in general. However, there is not
much detailed information on specific issues and even less
information on how to apply that specific knowledge to your
own environment.
This paper offers 10 tips for optimizing SQL Server performance,
with SQL Server 2008 and 2012 in mind. There are no definitive
lists of the most important tips, but you won’t go wrong by
starting here.
As a DBA, you undoubtedly have your own perspective and your own favorite optimization tips, tricks and scripts. Why not join the discussion on SQLServerPedia?
INTRODUCTION
Consider your tuning goals.
We all want to get the most value from our SQL Server
deployments. Increasing the efficiency of your database
servers frees up system resources for other tasks, such as
business reports or ad hoc queries. To get the most from your
organization’s hardware investment, you need to ensure the
SQL or application workload running on the database servers is
executing as quickly and efficiently as possible.
But how you tune depends on your goals. You might be
asking, “Am I getting the best efficiency from my SQL Server
deployment?” But someone else might be asking, “Will my
application scale?” Here are the main ways of tuning a system:
• Tuning to meet service level agreement (SLA) or key performance
indicator (KPI) targets
• Tuning to improve efficiency, thereby freeing up resources for
other purposes
• Tuning to ensure scalability, thereby helping to maintain SLAs or
KPIs in the future
Remember that tuning is an ongoing
process, not a one-time fix.
Performance optimization is a continual
process. For instance, when you tune
for SLA targets, you can be “finished.”
However, if you are tuning to improve
efficiency or to ensure scalability, your
work is never truly complete; this type
of tuning should be continued until the
performance is “good enough.” In the
future, when the performance of the
application is no longer good enough, the
tuning cycle should be performed again.
“Good enough” is usually defined by
business imperatives, such as SLAs
or system throughput requirements.
Beyond these requirements, you should
be motivated to maximize the scalability
and efficiency of all database servers
— even if business requirements are
currently being met.
About this document
Performance optimization on SQL
Server is challenging. There is a
wealth of generalized information on
various data points — for example,
performance counters and dynamic
management objects (DMOs) — but
there is very little information on
what to do with this data and how to
interpret it. This paper describes 10
important tips that will be useful in the
trenches, allowing you to turn some of
this data into actionable information.
TIP #10. THE BASELINING AND
BENCHMARKING METHODOLOGY
HELPS YOU SPOT PROBLEMS.
Overview of the baselining and
benchmarking process
Figure 1 illustrates the steps in the
baselining process. The following
sections will explain the key steps in
the process.
Figure 1. The baselining process: define the goal; identify metrics to track; create or update the baseline; identify improvement candidates; prioritize the candidates; review a candidate; change one component (non-production); run the workload; benchmark performance; if there is an improvement, deploy to production; otherwise, reset to the base state and review the next candidate.
Before you start making
changes, get a record of current
system performance.
When tuning and optimizing, most of
us will be tempted to jump straight
in and start tweaking. Have you ever
collected your car from a mechanic and
wondered if it’s actually performing
worse than it did before? You would like
to complain — but you just don’t know
for sure. You might also wonder whether
the cause of the problem you think you
see now is something the mechanic did,
or something that happened afterward.
Unfortunately, database performance
can suffer from the same problems.
After reading this paper, you will be full
of ideas and want to implement your
baseline and benchmarking strategy
immediately. The first step you need
to take is not the most exciting, but it is
certainly one of the most important: it
is time to establish just how well your
environment measures up against the
criteria you plan to change.
Determine your goals.
Before you do anything to your system,
decide what you want to achieve. Are
there aspects of your SLA that need
to be addressed with respect to
performance, resource consumption or
capacity? Are you addressing a current issue in production? Have there been complaints with respect to response time?
Set some clear goals.
Most of us have many databases and instances to look after. To maximize the value of our efforts, we need to think carefully about what is required from a particular system for it to perform well and meet user expectations. If you overdo the analysis and tuning, you can find that a disproportionate amount of time is spent on low-priority systems to the detriment of your core production systems. Be clear on precisely what you want to achieve from your measurement and tuning efforts. Prioritize these goals and, ideally, get buy-in and agreement from a business sponsor.
Establish the norm.
Once you’ve targeted what you want to achieve, you need to decide how you will measure success. What are
the OS counters, SQL Server counters,
resource measures and other data points
that will give you the required insight?
Once you have that list, you need to
establish your baseline, or the typical
performance of your system against
your chosen criteria. You need to collect
enough data over a long enough period
of time to give a representative sample of
the typical performance of the system.
Once you have the data, you can
average the values over the period to
establish your first baseline. After you
make modifications to your system, you’ll
take a new benchmark and compare it to
the original one so you can objectively
measure the effect of your changes.
Don’t track just the average values; also
track the deviation from the norm.
It’s important to be careful with averages, though. At the very least, calculate the standard deviation of each counter to give you an indication of the variation over time. Consider the rock climber who is told that the average diameter of the rope is 1 cm. The climber confidently jumps over the side. He dangles several hundred feet over the sharp rocks and smiles smugly. Then he’s told that the thickest section of the rope is 2 cm and the thinnest is 0.1 cm. Oops!
If you are unfamiliar with standard
deviation, grab a beginner’s book on
statistics. You don’t need to get too
sophisticated, but it helps to know
the basics.
The key message here is not just to
track averages; also track the deviation
from the norm (mean). Decide what that
norm should be (this is often found in
your SLA). Your mission is not to get
the maximum possible performance,
but to achieve the best possible fit for
your performance goals and then to
limit the deviation from these goals
as much as possible. Anything more
represents wasted time and effort and
could also point to under-utilization of
infrastructural resources.
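If your baseline samples are stored in a table, both numbers are easy to compute. Here is a minimal sketch, assuming a hypothetical dbo.BaselineSamples table (counter_name, sample_time, value) populated by your collector:

-- Hypothetical repository table: dbo.BaselineSamples (counter_name, sample_time, value).
-- STDEV is the built-in T-SQL sample standard deviation aggregate.
SELECT counter_name,
       AVG(value)   AS avg_value,
       STDEV(value) AS stdev_value,
       MIN(value)   AS min_value,
       MAX(value)   AS max_value
FROM dbo.BaselineSamples
WHERE sample_time >= DATEADD(day, -14, GETDATE())  -- e.g., a two-week baseline window
GROUP BY counter_name;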
How much data is needed for
a baseline?
The amount of data required to establish
a baseline depends on the extent to
which your load varies over time. Talk to
system administrators, end users and
application administrators. They will
generally have a good feel for what the
usage patterns are. You should gather
enough data to cover off-peak, average
and peak periods.
It is important to gauge how much load varies and how frequently.
Systems with predictable patterns will
require less data. The more variation,
the smaller the measurement interval
and the longer you will need to measure
to develop a reliable baseline. To take
our climbing analogy a little further, the
longer the length of rope you examine,
the better the chance that you will spot
the variations. How critical the system
is and the impact of its failure will also
affect how much scrutiny it must undergo
and how reliable the sample must be.
Storing the data
The more parameters you track and the shorter the sampling interval, the larger the data set you will collect.
but you do need to consider the capacity
that will be required to store your
measurement data. Once you have some
data, it should be pretty straightforward
to extrapolate the extent to which this
repository will grow over time. If you
are going to monitor over an extended
period, think about aggregating historical
data at intervals to prevent the repository
from growing too large.
For performance reasons, your
measurement repository should not
reside in the same place as the database
you are monitoring.
Limit the number of changes you make
at one time.
Try to limit the number of changes
you make between each benchmark.
Construct your modifications to test
one particular hypothesis. This will
allow you to meticulously rule in or out
each improvement candidate. When you do home in on a solution, you will understand exactly why you are seeing the change in behavior, and this will often reveal a series of additional potential improvement options.
Analyzing the data
Once you have made changes to your
system, you will want to determine
whether they have had the desired effect.
To achieve this, repeat the measurements
you took for your original baseline, over
a similarly representative time scale. You
can then compare the two baselines to:
• Determine whether your changes had the desired effect — When you tweak a configuration setting, optimize an index, or change some SQL code, the baseline enables you to tell whether that change had the desired effect. If you receive a complaint about slower performance, you can say for sure whether the statement is accurate from the perspective of the database.
The most frequent mistake junior DBAs make is to jump to conclusions too early. Often you will see someone jump for joy as they observe an immediate increase in performance after they make one or more changes. They deploy to production and rush to send emails declaring the issue resolved. But the celebrations can be short-lived when the same issues re-emerge some time later, or some unknown side effect causes another issue. Quite often this can result in a state that is less desirable than the original. Once you think you have found the answer to an issue, test it and benchmark the results against your baseline. This is the only reliable way of being sure you have made progress.
• Determine whether a change introduced any unexpected side effects — A baseline also enables you to see objectively whether a change affected a counter or measure that you hadn’t expected it to affect.
• Anticipate problems before they
happen — Using a baseline, you can
establish accurate performance norms
against typical load conditions. This
enables you to predict whether and
when you will hit problems in the future,
based on how resource consumption is
trending today or based on projected
workloads for future scenarios. For example, in capacity planning, by extrapolating from current typical resource consumption per connected user, you can predict when your systems will hit user connection bottlenecks.
• Troubleshoot problems more
effectively — Ever spent several
long days and nights fire-fighting a
performance issue with your database
only to find it was actually nothing
to do with the database itself?
Establishing a baseline makes it far
more straightforward to eliminate
your database instance and target the
culprit. For example, suppose memory
consumption has suddenly spiked.
You notice the number of connections
increasing sharply and it is well above
your baseline. A quick call to the
application administrator confirms that a
new module has been deployed in the
eStore. It doesn’t take long to establish that the new junior developer is writing code that doesn’t release database connections as it should. I bet you can think of many more scenarios just like this.
Ruling out the things that are NOT responsible for a problem can save a great deal of time by clearing the clutter and providing a laser focus on exactly what is causing the issue. There are lots of examples where we can compare system counters to SQL Server counters to quickly rule the database in or out of an issue. Once the usual suspects are ruled out, you can begin searching for significant deviations from the baseline, collect related indicators together and begin to drill into the root cause. Sage advice about troubleshooting in just three words: eliminate or incriminate.
Repeat the baselining process as
often as needed.
Good tuning is an iterative and scientific process. The tips presented in this document provide a really good starting point, but they are just that: a starting point. Performance tuning is highly personalized and is governed by the design, makeup and usage of each individual system.
The baselining and benchmarking
methodology is the cornerstone of
good and reliable performance tuning. It
provides the map, a reference and the
waypoints we require to figure out where
we need to go and how to get there, to
help make sure we don’t get lost on the
way. A structured approach allows us to
build reliable and consistent performance
across our database portfolio.
TIP #9. PERFORMANCE COUNTERS
GIVE YOU QUICK AND USEFUL
INFORMATION ABOUT CURRENTLY
RUNNING OPERATIONS.
Reasons for monitoring
performance counters
A very common question related to
SQL Server performance optimization
is: “What counters should I monitor?” In
terms of managing SQL Server, there
are two broad reasons for monitoring
performance counters:
• Increasing operational efficiency
• Preventing bottlenecks
Although they have some overlap, these
two reasons allow you to easily choose a
number of data points to monitor.
Monitoring performance counters to
improve operational efficiency
Operational monitoring checks for
general resource usage. It helps answer
questions like:
• Is the server about to run out of
resources like CPU, disk space,
or memory?
• Are the data files able to grow?
• Do fixed-size data files have enough free
space for data?
You can also use the data for trending
purposes. A good example would be
collecting the sizes of all the data files
in order to trend their growth rates and
forecast future resource requirements.
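As an illustration of that trending idea, here is a minimal sketch that snapshots file sizes for later trending (the dbo.FileSizeHistory table is hypothetical; you would create it and populate it on a schedule, for example from a SQL Server Agent job):

-- sys.master_files reports size in 8 KB pages; convert to MB.
INSERT INTO dbo.FileSizeHistory (sample_time, database_name, file_name, size_mb)
SELECT GETDATE(),
       DB_NAME(database_id),
       name,
       size * 8 / 1024
FROM sys.master_files;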
To answer the three questions posed above, you should look at the following counters:
• Processor\% Processor Time — Monitor the CPU consumption on the server
• LogicalDisk\Free Megabytes — Monitor the free space on the disk(s)
• MSSQL$Instance:Databases\Data File(s) Size (KB) — Trend growth over time
• Memory\Pages/sec — Check for paging, which is a good indication that memory resources might be short
• Memory\Available MBytes — See the amount of physical memory available for system use
Monitoring performance counters to
prevent bottlenecks
Bottleneck monitoring focuses more on
performance-related matters. The data
you collect helps answer questions
such as:
• Is there a CPU bottleneck?
• Is there an I/O bottleneck?
• Are the major SQL Server subsystems,
such as the buffer cache and plan cache,
healthy?
• Do we have contention in the database?
To answer these questions, look at the
following counters:
• Processor\% Processor Time — Monitoring CPU consumption allows you to check for a bottleneck on the server (indicated by high sustained usage).
• High percentage of signal waits — Signal wait is the time a worker spends waiting for CPU time after it has finished waiting on something else (such as a lock, a latch or some other wait). Time spent waiting on the CPU is indicative of a CPU bottleneck. Signal wait can be found by executing DBCC SQLPERF(waitstats) on SQL Server 2000, or by querying sys.dm_os_wait_stats on SQL Server 2005 and later.
• PhysicalDisk\Avg. Disk Queue Length — Check for disk bottlenecks: if the value exceeds 2, it is likely that a disk bottleneck exists.
• MSSQL$Instance:Buffer Manager\Page Life Expectancy — Page Life Expectancy is the number of seconds a page stays in the buffer cache. A low number indicates that pages are being evicted without spending much time in the cache, which reduces the effectiveness of the cache.
• MSSQL$Instance:Plan Cache\Cache Hit Ratio — A low plan cache hit ratio means that plans are not being reused.
• MSSQL$Instance:General Statistics\Processes Blocked — Long blocks indicate contention for resources.
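For the signal-wait check, a commonly used calculation against sys.dm_os_wait_stats (SQL Server 2005 and later) looks something like this:

-- Percentage of total wait time that is signal (CPU) wait.
-- A high percentage suggests the CPU is the bottleneck.
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
            AS DECIMAL(5, 2)) AS signal_wait_percent
FROM sys.dm_os_wait_stats;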
TIP #8. CHANGING SERVER
SETTINGS CAN PROVIDE A MORE
STABLE ENVIRONMENT.
Changing settings within a product to make it more stable may sound counterintuitive, but in this case it really does work. As a DBA, your job is to ensure a consistent level of performance for your users when they request data from their applications. Without changing the settings outlined in the rest of this document, you may experience scenarios that degrade performance for your users without warning. These options can easily be found in sys.configurations, which lists server-level configurations along with extra information. The is_dynamic attribute in sys.configurations shows whether the SQL Server instance will need to be restarted after making a configuration change. To make the change, you would call the sp_configure stored procedure with the relevant parameters.
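For example, here is a minimal sketch of inspecting the options and changing the memory settings with sp_configure (the memory values are purely illustrative; the next section discusses how to choose them):

-- Inspect current server-level options and whether they apply without a restart.
SELECT name, value, value_in_use, is_dynamic
FROM sys.configurations
ORDER BY name;

-- Min and max server memory are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;  -- illustrative value
EXEC sp_configure 'max server memory (MB)', 8192;  -- illustrative value
RECONFIGURE;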
The Min and Max memory settings
can guarantee a certain level
of performance.
Suppose we have an Active/Active
cluster (or indeed a single host with
multiple instances). We can make
certain configuration changes that can
guarantee we can meet our SLAs in the
event of a failover where both instances
would reside on the same physical box.
In this scenario, we would set both the Min and Max memory settings to ensure that the physical host has enough memory for each instance, without one instance constantly and aggressively trimming the working set of the other. A similar configuration change can be made to bind each instance to certain processors in order to guarantee a certain level of performance.
It is important to note that setting the
maximum memory is not just suitable
for instances on a cluster, but also those
instances that share resources with any
other application. If SQL Server memory
usage is too high, the operating system
may aggressively trim the amount of
memory it can use in order to allow
either itself or other applications room
to operate.
• SQL Server 2008 — In SQL Server
2008 R2 and before, the Min and Max
memory setting would restrict only the
amount of memory that the buffer pool
uses – more specifically only single 8KB
page allocations. This meant if you ran
processes outside of the buffer pool
(such as extended stored procedures,
CLR or other components such as
Integration Services, Reporting Services
or Analysis Services), you would need to
reduce this value further.
• SQL Server 2012 — SQL Server 2012
changes things slightly as there is a
central memory manager. This memory
manager now incorporates multi-page
allocations such as large data pages and
cached plans that
are larger than 8KB. This memory space
now also includes some
CLR functionality.
Two server options can indirectly
help performance.
There are no options that directly aid
performance but there are two options
that can help indirectly.
• Backup compression default — This option sets backups to be compressed by default. Whilst this may churn up extra CPU cycles during compression, in general fewer CPU cycles are used overall compared to an uncompressed backup, since less data is written to disk. Depending on your I/O architecture, setting this option may also reduce I/O contention. (A minimal example follows this list.)
• The second option may or may not be
divulged in a future tip on the plan cache.
You’ll have to wait and see if it made our
top 10 list.
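The backup compression default mentioned above is an ordinary sp_configure change; a minimal sketch:

-- Compress all backups by default (SQL Server 2008 and later).
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;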
TIP #7. FIND ROGUE QUERIES IN
THE PLAN CACHE.
Once you have identified a bottleneck,
you need to find the workload that is
causing the bottleneck. This is a lot
easier to do since the introduction of
Dynamic Management Objects (DMOs) in
SQL Server 2005. Users of SQL Server
2000 and prior will have to be content
with using Profiler or Trace (more on that
in Tip #6).
Diagnosing a CPU bottleneck
In SQL Server, if you identified a CPU
bottleneck, the first thing that you would
want to do is get the top CPU consumers
on the server. This is a
very simple query on sys.dm_exec_
query_stats:
The really useful part of this query is your
ability to use cross apply and sys.dm_
exec_sql_text to get the SQL statement,
so you can analyze it.
Diagnosing an I/O bottleneck
It is a similar story for an
I/O bottleneck:
SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS avg_worker_time,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    *
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_worker_time DESC;
SELECT TOP 50
    (qs.total_logical_reads + qs.total_logical_writes) AS total_logical_io,
    (qs.total_logical_reads / qs.execution_count) AS avg_logical_reads,
    (qs.total_logical_writes / qs.execution_count) AS avg_logical_writes,
    (qs.total_physical_reads / qs.execution_count) AS avg_phys_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    *
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY total_logical_io DESC;
TIP #6. SQL PROFILER IS
YOUR FRIEND.
Understanding SQL Server Profiler and
Extended Events
SQL Server Profiler is a native tool that
ships with SQL Server. It allows you to
create a trace file that captures events
occurring in SQL Server. These traces
can prove invaluable in providing
information about your workload
and poorly performing queries. This
whitepaper is not going to dive into
details on how to use the Profiler tool.
Whilst it is true that SQL Server Profiler
has been marked as deprecated in
SQL Server 2012 in favor of Extended
Events, it should be noted that this is just
for the database engine and not for SQL
Server Analysis Services. Profiler can still
provide great insight into the working
of applications in real time for many SQL
Server environments.
Using Extended Events is beyond the scope of this whitepaper; for a detailed description of Extended Events, see the whitepaper “How to Use SQL Server’s Extended Events and Notifications to Proactively Resolve Performance Issues”. Suffice it to say that Extended Events were introduced in SQL Server 2008 and updated in SQL Server 2012 to include more events and an eagerly anticipated UI.
Please note that running Profiler requires
the ALTER TRACE permission.
How to use SQL Profiler
Here’s how to build on the process of
collecting data in Performance Monitor
(Perfmon) and correlate the information
on resource usage with data on the
events being fired inside SQL Server:
1. Open Perfmon.
2. If you do not have a data collector set already configured, create one now using the advanced option and the counters from Tip #9 as a guide. Do not start the data collector set yet.
3. Open Profiler.
4. Create a new trace by specifying details
about the instance, events and columns
you want to monitor, as well as the
destination.
5. Start the trace.
6. Switch back to Perfmon to start the data
collector set.
7. Leave both sessions running until the
required data has been captured.
8. Stop the Profiler trace. Save the trace
and then close it.
9. Switch to Perfmon and stop the data
collection set.
10. Open the recently saved trace file
in Profiler.
11. Click File and then Import
Performance Data.
12. Navigate to the Data Collection data file
and select the performance
counters of interest.
You will now be able to see the
performance counters in conjunction
with the Profiler trace file (see Figure 2),
which will enable a much faster
resolution of bottlenecks.
Extra tip: The steps above use the
client interface; to save resources, a
server-side trace would be more
efficient. Please refer to Books Online for
information about how to start and stop
server-side traces.
Figure 2. A correlated view of performance counters in conjunction with the
Profiler trace file
TIP #5. CONFIGURE SANS FOR SQL
SERVER PERFORMANCE.
Storage area networks (SANs) are
fantastic. They offer the ability to
provision and manage storage in a
simple way. SANs can be configured
for fast performance from a SQL
Server perspective — but they often
aren’t. Organizations usually
implement SANs for reasons such as
storage consolidation and ease of
management — not for performance. To
make matters worse, generally you do
not have direct control over how the
provisioning is done on a SAN. Thus, you
will often find that the SAN has been
configured for one logical volume where
you have to put all the data files.
Best practices for configuring SANs for
I/O performance
Having all the files on a single volume is
generally not a good idea if you want the
best I/O performance. As best practices,
you will want to:
• Place log files on their own volume, separate from data files. Log files are almost exclusively written to sequentially and not read (exceptions include Database Mirroring and AlwaysOn Availability Groups). You should always configure log volumes for fast write performance.
• Place tempdb on its own volume. Tempdb is used for myriad purposes by SQL Server internally, so having it on its own I/O subsystem will help. To further fine-tune performance, you will first need some stats.
• Consider creating multiple data files
and filegroups in VLDBs in order to
benefit from parallel I/O operations.
• Place backups on their own drives
for redundancy purposes as well as
to reduce I/O contention with other
volumes during maintenance periods.
Collecting data
There are, of course, the Windows disk
counters, which will give you a picture of
what Windows thinks is happening. (Don’t
forget to adjust raw numbers based on
RAID configuration.) SAN vendors often
provide their own performance data.
SQL Server also provides file-level
I/O information:
• Versions prior to SQL 2005 — Use the
function fn_virtualfilestats.
• Later versions — Use the dynamic
management function sys.dm_io_virtual_
file_stats.
By using this function in the following
code, you can:
• Derive I/O rates for both reads
and writes
• Get I/O throughput
• Get average time per I/O
• Look at I/O wait times
SELECT db_name(a.database_id) AS [DatabaseName],
    b.name AS [FileName],
    a.file_id AS [FileID],
    CASE WHEN a.file_id = 2 THEN 'Log' ELSE 'Data' END AS [FileType],
    a.num_of_reads AS [NumReads],
    a.num_of_bytes_read AS [NumBytesRead],
    a.io_stall_read_ms AS [IOStallReadsMS],
    a.num_of_writes AS [NumWrites],
    a.num_of_bytes_written AS [NumBytesWritten],
    a.io_stall_write_ms AS [IOStallWritesMS],
    a.io_stall AS [TotalIOStallMS],
    DATEADD(ms, -a.sample_ms, GETDATE()) AS [LastReset],
    ((a.size_on_disk_bytes / 1024) / 1024.0) AS [SizeOnDiskMB],
    UPPER(LEFT(b.physical_name, 2)) AS [DiskLocation]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) a
JOIN sys.master_files b
    ON a.file_id = b.file_id AND a.database_id = b.database_id
ORDER BY a.io_stall DESC;
Analyzing the data
Please pay special attention to the “LastReset” value in the query; it shows the last time the SQL Server service was started. Data from Dynamic Management Objects is not persisted, so any data being used for tuning purposes should be validated against how long the service has been running; otherwise, false assumptions can be made.
Using these numbers, you can quickly
narrow down which files are responsible
for consuming I/O bandwidth and ask
questions such as:
• Is this I/O necessary? Am I missing
an index?
• Is it one table or index in a file that is
responsible? Can I put this index or table
in another file on another volume?
Tips for tuning your hardware
If the database files are placed correctly
and all object hotspots have been
identified and separated onto different
volumes, then it is time to take a close
look at the hardware.
Tuning hardware is a specialist topic
outside of the scope of this whitepaper.
There are a few best practices and tips
that I can share with you which will make
this easier, though:
• Do not use the default allocation unit size when creating volumes for use with SQL Server. SQL Server uses 64 KB extents, so this value should be the minimum.
• Check to see if your partitions are
correctly aligned. Jimmy May has written
a fantastic whitepaper on this subject.
Misaligned partitions can reduce
performance by up to 30 percent.
• Benchmark your system’s I/O using a
tool such as SQLIO. You can watch a
tutorial on this tool.
Figure 3. File-level I/O information from SQL Server
TIP #4. CURSORS AND OTHER BAD T-SQL FREQUENTLY RETURN TO HAUNT APPLICATIONS.
An example of bad code
At a previous job, I found what has to be
the worst piece of code I have ever seen
in my career. The system has long since
been replaced, but here is an outline of
the process the function went through:
1. Accept parameter value to strip.
2. Accept parameter expression to strip.
3. Find the length of the expression.
4. Load expression into a variable.
5. Loop through each character in the variable and check to see if the character matches one of the values to strip. If it does, update the variable to remove it.
6. Continue to next character until
expression has been fully checked.
If you have a look of horror on your face,
then you are in good company.
I am of course explaining somebody’s
attempt to write their own T-SQL
“REPLACE” statement!
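For contrast, the built-in function does all of that looping in a single call; a minimal sketch (the variable names and values are hypothetical):

-- Strip every occurrence of one value from an expression.
DECLARE @expression VARCHAR(200) = '123 Fake St.',
        @valueToStrip VARCHAR(10) = '.';
SELECT REPLACE(@expression, @valueToStrip, '');  -- returns '123 Fake St'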
The worst part was that this function was
used to update addresses as part of a
mailing routine and would be called tens
of thousands of times a day.
Running a server-side profiler trace
will enable you to view your server’s
workload and pick out frequently
executed pieces of code (which is the
way this “gem” was found).
Query tuning using query plans
Bad T-SQL can also appear as inefficient
queries that do not use indexes, mostly
because the index is incorrect or missing.
It’s vitally important to have a good
understanding of how to tune queries
using query plans.
A detailed discussion of query tuning
using query plans is beyond the scope of
this white paper. However, the simplest
way to start this process is by turning
SCAN operations into SEEKs. SCANs
read every row in the table or index, so,
for large tables, SCANs are expensive in
terms of I/O. A SEEK, on the other hand,
will use an index to go straight to the
required row, which, of course, requires
an index. If you find SCANs in your
workload, you could be missing indexes.
There are a number of good books on
this topic, including:
• “Professional SQL Server Execution Plan
Tuning” by Grant Fritchey
• “Professional SQL Server 2012 Internals
& Troubleshooting” by Christian
Bolton, Rob Farley, Glenn Berry, Justin
Langford, Gavin Payne, Amit Banerjee,
Michael Anderson, James Boother, and
Steven Wort
• “T-SQL Fundamentals for Microsoft
SQL Server 2012 and SQL Azure” by
Itzik Ben-Gan
Figure 4. Example of a large query plan
TIP #3. MAXIMIZE PLAN REUSE
FOR BETTER SQL
SERVER CACHING.
Why reusing query plans is important
Before executing a SQL statement, SQL
Server first creates a query plan. This
defines the method SQL Server will
use to take the logical instruction of the
query and implement it as a physical
action against the data.
Creating a query plan can require
significant CPU. Thus, SQL Server will
run more efficiently if it can reuse query
plans instead of creating a new one each
time a SQL statement is executed.
Evaluating whether you are getting
good plan reuse
There are some performance counters
available in the SQL Statistics
performance object that will tell you
whether you are getting good plan reuse.
This formula, computed from the Batch Requests/sec and SQL Compilations/sec counters, tells you the proportion of submitted batches that reuse a cached plan:
(Batch Requests/sec - SQL Compilations/sec) / Batch Requests/sec
You want this number to be as close to 1 as possible. A value of 0 means that every batch submitted is being compiled, and there is no plan reuse at all.
Addressing poor plan reuse
It’s not easy to pin down the exact workload responsible for poor plan reuse, because the problem usually lies in the client application code that is submitting queries, so that code is where you may need to look.
To find code that is embedded within a
client application, you will need to use
either Extended Events or Profiler. By
adding the SQL:StmtRecompile event
into a trace, you will be able to see when
a recompile event occurs. (There is also
an event called SP:Recompile; this event
is included for backwards compatibility
since the occurrence of recompilation
was changed from the procedure level to
the statement level in SQL Server 2005.)
A common problem is that the code is not using prepared parameterized statements. Using parameterized queries not only improves plan reuse and reduces compilation overhead, but it also reduces the risk of the SQL injection attacks involved with passing parameters via string concatenation. Figure 5 shows two code examples. Though they are contrived, they illustrate the difference between building a statement through string concatenation and using prepared statements with parameters.
SQL Server cannot reuse the plan from the “bad” example in Figure 5. If a parameter had been a string type, this function could have been used to mount a SQL injection attack. The “good” example is not susceptible to a SQL injection attack, because a parameter is used and SQL Server is able to reuse the plan.
Figure 5. Comparing code that builds a statement through string concatenation and code that uses prepared statements with parameters
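Figure 5 itself is an image and is not reproduced here, but a minimal T-SQL sketch of the same contrast (the dbo.Orders table is hypothetical) might look like this:

-- Bad: the literal value is concatenated into the statement text, so every
-- distinct value produces a different statement and a fresh compilation.
DECLARE @id INT = 42, @sql NVARCHAR(200);
SET @sql = N'SELECT * FROM dbo.Orders WHERE OrderID = '
           + CAST(@id AS NVARCHAR(10));
EXEC (@sql);

-- Good: a prepared, parameterized statement; the text is constant, so one
-- cached plan is reused for every value, and no value can inject SQL.
EXEC sys.sp_executesql
     N'SELECT * FROM dbo.Orders WHERE OrderID = @OrderID',
     N'@OrderID INT',
     @OrderID = @id;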
THE MISSING CONFIGURATION
SETTING FROM TIP #8
Those of you with a good memory will
remember that in Tip #8 (where we
talked about configuration changes),
there was one more piece of advice
left to bestow.
SQL Server 2008 introduced a configuration setting called “Optimize for ad hoc workloads.” Setting it tells SQL Server to store a small stub rather than a full plan in the plan cache the first time an ad hoc batch is executed; the full plan is cached only if the batch is executed again. This is especially useful for environments that use dynamically built T-SQL code or LINQ, which may result in code that is never reused.
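Enabling it is a standard sp_configure change; note that it is an advanced option, so it must be exposed first:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;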
The memory allocated to the plan cache resides in the buffer pool. Therefore, a bloated plan cache reduces the number of data pages that can be stored in the buffer cache, so there will be more round trips to fetch data from the I/O subsystem, which can be very expensive.
TIP #2. LEARN HOW TO READ THE
SQL SERVER BUFFER CACHE AND
MINIMIZE CACHE THRASHING.
Why the buffer cache matters
As alluded to in the rather elegant segue
above, the buffer cache is a large area of
memory used by SQL Server to reduce
the need to perform physical I/O. No
SQL Server query execution reads data
directly off the disk; the database pages
are read from the buffer cache. If the
sought-after page is not in the buffer
cache, a physical I/O request is queued.
Then the query waits and the page is
fetched from the disk.
Changes made to data on a page by a DELETE or UPDATE operation are also made to the pages in the buffer cache. These changes are later flushed out to disk. This whole mechanism allows SQL Server to optimize physical I/O in several ways:
• Multiple pages can be read and written
in one I/O operation.
• Read ahead can be implemented. SQL
Server may notice that for certain types
of operations, it could be useful to read
sequential pages — the assumption
being that right after you read the page
requested, you will want to read the
adjacent page.
Note: Index fragmentation will hamper
SQL Server’s ability to perform read
ahead optimization.
Evaluating buffer cache health
There are two indicators of buffer
cache health:
• MSSQL$Instance:Buffer Manager\Buffer cache hit ratio — This is the percentage of page requests satisfied from the buffer cache without needing to read the page from disk. Ideally, you want this number to be as high as possible. It is possible to have a high hit ratio but still experience cache thrashing.
• MSSQL$Instance:Buffer Manager\Page Life Expectancy — This is the amount of time that SQL Server keeps pages in the buffer cache before they are evicted. Microsoft says that a page life expectancy greater than five minutes is fine. If the life expectancy falls below this, it can be an indicator of memory pressure (not enough memory) or cache thrashing.
I’d like to conclude this section with an analogy: many people argue that the innovation of anti-lock brakes and other assisted technology means that braking distances should be reduced, and also that speed limits could be raised in line with this new technology. The warning value of 300 seconds (five minutes) for Page Life Expectancy incurs a similar debate in the SQL Server community: some believe that it is a hard-and-fast rule, while others believe that the increase in memory capacity in most servers these days means that the figure should be in the thousands rather than in the hundreds. This difference in opinion highlights the importance of baselines and why it is so important to have a detailed understanding of what the warning levels of every performance counter should be in your environment.
Cache thrashing
During a large table or index scan,
every page in the scan must pass
through the buffer cache, which means
that possibly useful pages will be
evicted to make room for pages that are
not likely to be read more than once.
This leads to high I/O since the evicted
pages have to be read from disk again.
This cache thrashing is usually an
indication that large tables or indexes
are being scanned.
To find out which tables and indexes are
taking up the most space in the buffer
cache, you can examine the sys.dm_os_
buffer_descriptors DMV (available from
SQL Server 2005). The sample query
below illustrates how to access the list
of tables or indexes that are consuming
space in the buffer cache in SQL Server:
SELECT o.name AS [TableName],
    i.name AS [IndexName],
    COUNT(*) AS [CachedPages],
    COUNT(*) * 8 / 1024 AS [CachedMB]
FROM sys.dm_os_buffer_descriptors bd
INNER JOIN sys.allocation_units a
    ON bd.allocation_unit_id = a.allocation_unit_id
INNER JOIN sys.partitions p
    ON (a.container_id = p.hobt_id AND a.type IN (1, 3))
    OR (a.container_id = p.partition_id AND a.type = 2)
INNER JOIN sys.objects o ON p.object_id = o.object_id
INNER JOIN sys.indexes i
    ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE bd.database_id = DB_ID()  -- buffer descriptors are server-wide; restrict to the current database
GROUP BY o.name, i.name
ORDER BY COUNT(*) DESC;
You can also use the index DMVs to find
out which tables or indexes have large
amounts of physical I/O.
TIP #1. UNDERSTAND HOW
INDEXES ARE USED AND FIND
BAD INDEXES.
SQL Server 2012 provides some very useful data on indexes, which you can fetch using DMOs first implemented in SQL Server 2005.
Using the sys.dm_db_index_
operational_stats DMO
sys.dm_db_index_operational_stats
contains information on current low- level
I/O, locking, latching, and access method
activity for each index. Use this DMF to
answer the following questions:
• Do I have a “hot” index, one on which there is contention? The row_lock_wait_in_ms and page_lock_wait_in_ms columns can tell us whether there have been waits on this index.
• Do I have an index that is being used inefficiently? Which indexes are currently I/O bottlenecks? The page_io_latch_wait_in_ms column can tell us whether there have been I/O waits while bringing index pages into the buffer cache — a good indicator of a scan access pattern.
• What sort of access patterns are in use? The range_scan_count and singleton_lookup_count columns can tell us what sort of access patterns are used on a particular index.
Figure 6 illustrates the output of a query
that lists indexes by the total PAGE_IO_
LATCH wait. This is very useful when
trying to determine which indexes are
involved in I/O bottlenecks.
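The query behind Figure 6 is not included in this paper; a minimal sketch against sys.dm_db_index_operational_stats for the current database might look like this:

-- Indexes ordered by time spent waiting on physical I/O latches.
SELECT OBJECT_NAME(os.object_id) AS table_name,
       i.name AS index_name,
       os.page_io_latch_wait_in_ms,
       os.range_scan_count,
       os.singleton_lookup_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os
JOIN sys.indexes AS i
    ON os.object_id = i.object_id AND os.index_id = i.index_id
ORDER BY os.page_io_latch_wait_in_ms DESC;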
Using the sys.dm_db_index_usage_
stats DMO
sys.dm_db_index_usage_stats contains
counts of different types
of index operations and the time
each type of operation was last
performed. Use this DMV to answer
the following questions:
• How are users using the indexes? The user_seeks, user_scans and user_lookups columns can tell you the types and significance of user operations against indexes.
• What is the cost of an index? The user_updates column can tell you the level of maintenance an index requires.
• When was an index last used? The last_* columns can tell you the last time an operation occurred on an index.
Figure 6. Indexes listed by the total PAGE_IO_LATCH wait
Figure 7 illustrates the output of a query
that lists indexes by the total number
of user_seeks. If you instead wanted
to identify indexes that had a high
proportion of scans, you could order by
the user_scans column. Now that you
have an index name, wouldn’t it be good
if you could find out what SQL statements
used that index? On SQL Server 2005
and newer versions, you can.
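Again, the exact query behind Figure 7 is not included; a minimal sketch against sys.dm_db_index_usage_stats might look like this:

-- Indexes in the current database ordered by user seeks; order by user_scans
-- instead to surface scan-heavy indexes.
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates,
       us.last_user_seek
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
    ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_seeks DESC;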
There are, of course, many other aspects of indexing, such as design strategy, consolidation and maintenance. If you would like to read more about this vital area of SQL Server performance tuning, head over to SQLServerPedia or check out some of Quest's webcasts or white papers on the subject.
CONCLUSION
Of course, there are far more than 10
things you should know about SQL
Server performance. However, this white
paper offers a good starting point and
some practical tips about performance
optimization that you can apply to your
SQL Server environment.
To recap, remember these 10
things when optimizing SQL
Server performance:
10. Benchmarking facilitates comparisons
of workload behavior and lets you
spot abnormal behavior because
you have a good indication of what
normal behavior is.
9. Performance counters give you quick
and useful information about currently
running operations.
8. Changing server settings can provide
a more stable environment.
7. DMOs help you identify performance
bottlenecks quickly.
6. Learn to use SQL Profiler, traces and Extended Events.
5. SANs are more than just black boxes
that perform I/O.
4. Cursors and other bad T-SQL
frequently return to haunt applications.
3. Maximize plan reuse for better SQL
Server caching.
2. Learn how to read the SQL Server
buffer cache and how to minimize
cache thrashing.
And the number one tip for optimizing
SQL Server performance:
1. Master indexing by learning how
indexes are used and how to find
bad indexes.
Figure 7. Indexes listed by the total number of user_seeks
CALL TO ACTION
I am sure that you are eager to
implement the lessons you have learned
in this whitepaper. The following actions and subtasks will start you on the road to a more optimized SQL Server environment; assign a target date to each:
• Gain approval to start the project — Speak to your line manager and present the case that with this project in place, you can be proactive rather than reactive.
• Identify performance goals — Speak to company stakeholders to determine acceptable levels of performance.
• Establish a baseline for system performance — Collect relevant data and store it in a custom-built or third-party repository.
• Identify top performance counters and configure trace and/or extended events — Download Quest’s Perfmon counter poster.
• Review server settings — Pay specific attention to the memory and “Optimize for ad hoc workloads” settings.
• Review the I/O subsystem — If appropriate, speak to your SAN administrators and consider performing I/O load tests using tools such as SQLIO, or simply determine the rate at which you can do intensive read and write operations, such as when you’re performing backups.
• Identify poorly performing queries — Analyze the data returned from traces, extended event sessions and the plan cache.
• Refactor poorly performing code — Check out the latest best practices on ToadWorld.
• Index maintenance — Ensure your indexes are as optimal as possible.