This document discusses 15 ways to optimize SQL queries. Some key points include:
1. Index columns that are frequently searched to improve search performance.
2. Prefer comparison operators such as > and = over NOT, which can force the database to examine all rows.
3. Limit queries to the minimum number of rows needed to improve efficiency.
4. Consider postfix wildcards, UNIONs, and indexes to make the most of available optimizations.
Proper use of indexes, operators, and limiting result sets are among the major techniques discussed for writing efficient SQL queries. Care must be taken to understand how each database handles these optimizations.
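The index and wildcard tips above can be seen directly in a query plan. Below is a minimal sketch using Python's sqlite3 as a stand-in engine (the `users` table and index names are invented for illustration): a postfix wildcard lets the optimizer seek on the index, while a leading wildcard forces a scan. Note the `case_sensitive_like` pragma, which SQLite needs before it will use a plain index for prefix `LIKE` patterns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA case_sensitive_like = ON")  # lets SQLite use the index for prefix LIKE
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE INDEX idx_users_name ON users(name)")
con.executemany("INSERT INTO users (name) VALUES (?)",
                [("alice",), ("bob",), ("carol",)])

def plan(query):
    # The detail column of EXPLAIN QUERY PLAN describes the chosen strategy
    return con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# Postfix wildcard: the index narrows the work to a range of keys
prefix_plan = plan("SELECT name FROM users WHERE name LIKE 'ali%'")
# Leading wildcard: no usable range, so every row must be examined
leading_plan = plan("SELECT name FROM users WHERE name LIKE '%ice'")
print(prefix_plan)
print(leading_plan)
```

The exact plan text varies by SQLite version, but the prefix pattern reports a SEARCH while the leading wildcard reports a SCAN.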
A Practical Look at the NOSQL and Big Data Hullabaloo (Andrew Brust)
The deck includes a graph-database example modeling Martha Washington's "friend of" relationships with figures such as Thomas Jefferson, Benjamin Franklin, John Adams, James Madison, Alexander Hamilton, and the Marquis de Lafayette.
The document provides an overview and agenda for a presentation on SQL Server Denali business intelligence (BI) capabilities. Key points include:
- PowerPivot and Excel Services allow self-service BI through a familiar Excel interface while leveraging Analysis Services for storage and collaboration features.
- Analysis Services Tabular Mode is the server implementation of PowerPivot, supporting partitions, roles and other enterprise features.
- Project "Crescent" provides ad hoc reporting directly against PowerPivot and Analysis Services Tabular models through a browser-based, Excel-like interface in Silverlight.
- Master Data Services and Data Quality Services provide master data management and data cleansing capabilities to support better data quality for BI initiatives.
This document discusses using Oracle VM VirtualBox for learning Oracle technologies. It provides an overview of VirtualBox, its use cases, pre-built development VMs from OTN, learning resources, and tips. Some key benefits highlighted include running Oracle software on any OS, easy deployment, and no dedicated hardware required. The presentation also demonstrates importing and using a pre-built database application development VM from OTN.
The document discusses execution plan basics, including:
1. An execution plan shows how a query will be executed and is the DBA's primary tool for troubleshooting slow queries.
2. When a query is submitted, it is parsed and processed by the query optimizer which generates an execution plan sent to the storage engine to retrieve data.
3. Execution plans reveal how indexes are used, what tables are scanned, and why one query may run faster than another, helping identify performance issues.
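The effect described in the list above can be reproduced in miniature. This sketch uses Python's sqlite3 (table and index names are invented) to capture the plan for the same query before and after an index exists, showing why one version of a query runs faster than another:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, float(i)) for i in range(1000)])

def plan(query):
    # EXPLAIN QUERY PLAN summarizes how the engine will execute the query
    return con.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # no index yet: a full table scan
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # the optimizer now seeks on the index
print(before)
print(after)
```

In SQL Server the same comparison is made with the graphical or `SHOWPLAN` execution plan, but the reasoning is identical: the plan reveals whether an index is used or the whole table is scanned.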
Dynamic SQL: How to Build Fast Multi-Parameter Stored Procedures (Brent Ozar)
This document discusses best practices for building fast stored procedures that accept multiple parameters. It begins by describing a common business need to build a search page and outlines four approaches to building the stored procedure: 1) using OR conditions, 2) COALESCE, 3) dynamic SQL, 4) combining OR and dynamic SQL. Each approach is demonstrated with examples and limitations are discussed. While dynamic SQL allows for different execution plans per parameter combination, it can lead to bloating the plan cache and potential parameter sniffing issues. The document recommends techniques for troubleshooting dynamic SQL like using comments and debug variables.
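The dynamic-SQL approach can be sketched outside of T-SQL as well. Below is a minimal Python/sqlite3 illustration (the `people` table and its columns are invented): the WHERE clause is assembled only from the filters actually supplied, while every value is still bound as a parameter. Each filter combination produces a distinct statement text, which is exactly what lets the engine compile a plan per combination, and also what can bloat the plan cache.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, city TEXT, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?, ?)",
                [("Ann", "Oslo", 30), ("Bo", "Oslo", 41), ("Cy", "Bergen", 30)])

def search(name=None, city=None, age=None):
    # Build the WHERE clause from whichever filters were supplied,
    # but always bind the values as parameters (never string-concatenate them)
    clauses, params = [], []
    if name is not None:
        clauses.append("name = ?")
        params.append(name)
    if city is not None:
        clauses.append("city = ?")
        params.append(city)
    if age is not None:
        clauses.append("age = ?")
        params.append(age)
    sql = "SELECT name FROM people"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    sql += " ORDER BY name"
    return [row[0] for row in con.execute(sql, params)]

print(search(city="Oslo"))          # ['Ann', 'Bo']
print(search(city="Oslo", age=30))  # ['Ann']
```

In SQL Server the equivalent pattern uses `sp_executesql` with a parameter list, for the same reason: the values stay out of the statement text.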
It's no mystery that software release cycles are faster than ever. Now that the cloud has become a universal strategic component of IT services, vendors spoil us by continually releasing new features and services.
Great IDEs for SQL Query Performance Tuning and Practice (Tosska Technology)
An IDE generally includes a source code editor, a debugger, and build automation tools, among other elements. Although many of us started learning SQL at the command line, an IDE offers resources that prove indispensable when you begin SQL query performance tuning and working with larger databases.
This document provides information about Mr. J. Venkatesan Prabu, who is the Managing Director of KAASHIV INFOTECH, a software company in Chennai. It outlines his experience of over 8 years working with Microsoft technologies and his role in guiding over 20,000 young minds through career guidance programs. It also lists some of the awards he has received, including the Microsoft MVP award several times. The document then provides sample interview questions and answers related to SQL Server and promotes the inplant training programs offered by KAASHIV INFOTECH.
Hear Ryan Millay, IBM Cloudant software development manager, discuss what you need to consider when moving from world of relational databases to a NoSQL document store.
You'll learn about the key differences between relational databases and JSON document stores like Cloudant, as well as how to dodge the pitfalls of migrating from a relational database to NoSQL.
The document introduces NoSQL databases as an alternative to SQL databases for applications that require massive horizontal scalability. It notes that while SQL databases can scale vertically by upgrading hardware, this approach is not cost effective and does not scale linearly with load. NoSQL databases like MongoDB, on the other hand, are designed for horizontal scalability across commodity servers and can scale performance and capacity linearly with load. Some key advantages of NoSQL databases mentioned include high performance, fault tolerance, and eventual consistency.
1) The document discusses the differences between SQL and NoSQL databases in terms of scalability, data modeling, and indexing. SQL databases are less scalable but ensure consistency and transactions, while NoSQL databases are more scalable through replication and sharding.
2) Complex applications may require a hybrid approach using both SQL and NoSQL databases. For example, storing product data in a NoSQL database and customer relationship management data in a SQL database.
3) There is no single best approach - the optimal solution depends on the specific business needs and data usage patterns. Both SQL and NoSQL databases each have their own advantages, and either can be suitable depending on the context.
RDBMS in the Cloud: Deploying SQL Server on AWS (Irawan Soetomo)
Amazon Web Services (AWS) is a flexible, cost-effective, easy-to-use cloud computing platform. Relational database management systems, or RDBMS, are widely deployed within the Amazon cloud. In this whitepaper, we help you understand how to deploy SQL Server databases on AWS. You can run SQL Server databases on Amazon Relational Database Service (Amazon RDS) or Amazon Elastic Compute Cloud (Amazon EC2).
Optimize SQL Server queries with these advanced tuning techniques (TechRepublic, Kaing Menglieng)
The document discusses several advanced techniques for optimizing SQL Server queries:
1. Using JOIN statements instead of subqueries to improve performance.
2. Using explicit transactions around data manipulation statements to reduce writes to the transaction log.
3. Using UNION ALL instead of UNION to avoid removing duplicate records.
4. Using EXISTS instead of COUNT(*) when checking for conditions to return results faster.
5. Using STATISTICS IO to determine the number of logical reads of a query to optimize performance.
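Two of the techniques in the list above, UNION ALL versus UNION and EXISTS versus COUNT(*), can be demonstrated with a small sketch. This uses Python's sqlite3 with throwaway tables `a` and `b`; the behavior (and the reason UNION costs more) is the same in SQL Server:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE a (x INTEGER);
CREATE TABLE b (x INTEGER);
INSERT INTO a VALUES (1), (2);
INSERT INTO b VALUES (2), (3);
""")

# UNION pays for a deduplication step; UNION ALL just concatenates
union_rows = sorted(r[0] for r in con.execute(
    "SELECT x FROM a UNION SELECT x FROM b"))
union_all_rows = sorted(r[0] for r in con.execute(
    "SELECT x FROM a UNION ALL SELECT x FROM b"))
print(union_rows)      # [1, 2, 3]
print(union_all_rows)  # [1, 2, 2, 3]

# EXISTS can stop at the first match; COUNT(*) must tally every matching row
has_two = con.execute("SELECT EXISTS (SELECT 1 FROM a WHERE x = 2)").fetchone()[0]
print(bool(has_two))   # True
```

If you know the two inputs cannot overlap, UNION ALL gives the same rows without the deduplication cost.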
SQL Server common interview questions and answers, page 5 (Kaing Menglieng)
This document contains 30 common interview questions and answers about SQL Server. Questions covered include: what a NOT NULL constraint is, how to capture @@ERROR and @@ROWCOUNT at the same time, the advantages of using stored procedures, what a table with neither a clustered nor a nonclustered index is called, and whether SQL Server can be linked to other servers such as Oracle.
This document provides information about Venkatesan Prabu Jayakantham (Venkat), the Managing Director of KAASHIVINFOTECH, a software company in Chennai. It outlines Venkat's experience in Microsoft technologies and certifications. It also describes KAASHIVINFOTECH's inplant training programs for students in fields like engineering, electronics, and mechanical/civil studies. The training focuses on developing technical skills through hands-on demonstrations and projects.
This document provides information about Venkatesan Prabu Jayakantham (Venkat), the Managing Director of KAASHIVINFOTECH, a software company in Chennai. It outlines Venkat's experience in Microsoft technologies and certifications. It also describes KAASHIVINFOTECH's inplant training programs for students in fields like engineering, electronics, and mechanical. The training focuses on developing technical skills through hands-on demos and projects.
This presentation discusses the challenges of sharing and moving data between different types of relational databases. It notes that while companies often find this a simple process, they underestimate the effort required due to limitations of their tools and lack of understanding of different database capabilities. The presentation will cover options for migrating, replicating, and accessing data between different database types, with a focus on Oracle's Heterogeneous Services functionality.
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
SQL Azure - the good, the bad and the ugly (Pini Krisher)
The document discusses Microsoft Azure SQL Database, including an overview of cloud computing and databases on Azure, ways to work with SQL Azure, and the pros and cons. It provides an agenda covering basic introduction to SQL Azure, ways to work with it using the Azure portal, SQL Server Management Studio, and PowerShell. It then discusses the good aspects like scalability and accessibility, the bad like lack of regular backups and read replicas, and the ugly like retired features and slow adoption of new features. It concludes with recommendations on when SQL Azure would be suitable, such as for simple databases, handling spikes in usage, or when unlimited resources are needed without large agreements.
The document provides tips for speeding up SQL queries and database performance, including avoiding SELECT *, using indexes appropriately, normalizing tables, parameterizing queries, and optimizing stored procedures. Specific suggestions include explicitly selecting columns, using memory tables for frequently accessed lookup tables, and increasing query timeouts for long running reports.
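The "parameterize queries" tip above is worth a concrete sketch. Using Python's sqlite3 with an invented `accounts` table, interpolating user input into the statement text both invites SQL injection and defeats plan reuse, while a bound parameter is treated purely as a value:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (user TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('alice', 100)")

user_input = "alice' OR '1'='1"

# Unsafe: the input becomes part of the SQL text, changing the query's logic
unsafe = con.execute(
    f"SELECT balance FROM accounts WHERE user = '{user_input}'").fetchall()
print(unsafe)  # [(100,)] -- the OR clause matched every row

# Safe: a bound parameter can never alter the statement's structure
safe = con.execute(
    "SELECT balance FROM accounts WHERE user = ?", (user_input,)).fetchall()
print(safe)    # [] -- no user literally named "alice' OR '1'='1"
```

Parameterized statements also let the engine cache and reuse one compiled plan across calls instead of compiling a new one per distinct literal.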
1. The document discusses various optimizations that can be made to an ASP.NET MVC application to improve performance, including compiled LINQ queries, URL caching, and data caching.
2. Benchmark results show that optimizing partial view rendering, LINQ queries, and URL generation improved performance from 8 requests/second to 61.5 requests/second. Additional caching of URLs, statistics, and content improved performance to over 400 requests/second.
3. Turning off ASP.NET debug mode also provided a significant performance boost, showing the importance of running production sites in release mode.
JFokus 2011 - Running your Java EE 6 apps in the Cloud (Arun Gupta)
Oracle provides Java EE 6 application servers and databases that can run on various cloud platforms including Amazon Web Services, RightScale, Microsoft Azure, and Joyent. These cloud platforms offer virtual servers, storage, databases and additional services that allow flexible deployment of Java EE 6 applications in public, private and hybrid cloud environments. Pricing models vary between platforms and include consumption-based or commitment-based options.
10 SQL tips to speed up your database (catswhocode.com, Kaing Menglieng)
This article provides 10 tips for optimizing and speeding up a SQL database:
1. Carefully design the database structure with clear table and column naming.
2. Use the EXPLAIN statement to understand query performance and identify areas for optimization.
3. Implement query caching to avoid repeatedly running the same queries.
4. Only select the necessary columns rather than using "*" to reduce overhead.
5. Use LIMIT to restrict large result sets when only a subset is needed.
6. Avoid running queries in loops which significantly increases load.
7. Prefer joins over subqueries when possible for better performance.
8. Use wildcards judiciously, as patterns that force full table scans are slow.
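Tip 6 above, avoiding queries in loops, is easy to show concretely. This Python/sqlite3 sketch (the `products` table is invented) replaces a per-row loop with one batched insert and one `IN`-list lookup:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, f"item-{i}") for i in range(1, 1001)]
# Instead of executing one INSERT per loop iteration, send the whole batch
con.executemany("INSERT INTO products VALUES (?, ?)", rows)

wanted = [3, 7, 11]
# Instead of len(wanted) separate SELECTs, run one query with an IN list
placeholders = ", ".join("?" for _ in wanted)
names = sorted(r[0] for r in con.execute(
    f"SELECT name FROM products WHERE id IN ({placeholders})", wanted))
print(names)  # ['item-11', 'item-3', 'item-7']
```

Each round trip to the database carries fixed parse-and-dispatch overhead, so collapsing N statements into one usually dominates any per-row savings.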
The Future is Now: Leveraging the Cloud with Ruby (Robert Dempsey)
My presentation from the Ruby Hoedown on cloud computing and how Ruby developers can take advantage of cloud services to build scalable web applications.
The document discusses harnessing the power of SQL Server columnstore indexes and Analysis Services ROLAP. It finds that combining clustered columnstore indexes with ROLAP in Analysis Services provides very fast performance for aggregates and distinct counts on large datasets of over 1 billion records, returning results within seconds. It recommends settings like enabling ROLAP distinct counts at the data source and maintaining statistics to optimize query plans when using this solution.
The document compares two approaches to handling business logic in transactional applications: the NoPlsql approach and the SmartDB approach. The NoPlsql approach treats the database as only a persistence layer, putting business logic in application code. The SmartDB approach implements business logic directly in the database using SQL and PL/SQL. An experiment found that for the same task, a SmartDB implementation using stored procedures was over 3 times faster and used half the database CPU resources compared to a NoPlsql implementation using Java and JDBC. This is because every SQL statement incurs networking and database entry costs in NoPlsql, while SmartDB SQL statements leverage existing database sessions.
What is your SQL Server backup strategy? (TechRepublic, Kaing Menglieng)
The document discusses various SQL Server backup strategies including full backups, differential backups, transaction log backups, and file group backups. It explains how to implement each strategy using either SQL Server Enterprise Manager graphical user interface or Transact SQL commands. Full backups create a complete picture of the database, differential backups back up only changed data since the last full backup, transaction log backups back up transactions and allow point-in-time recovery, and file group backups allow backing up individual files or groups.
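In T-SQL the strategies above are driven by `BACKUP DATABASE` and `BACKUP LOG`. As a rough, engine-agnostic analogue of a full backup, this sketch uses Python sqlite3's online backup API, which copies every page of the source database to a destination (here an in-memory database stands in for a backup file):

```python
import sqlite3

# A small database standing in for the production database
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (1), (2)")
src.commit()

# A "full backup": the backup API copies every page to the destination;
# in practice the destination would be a file, not another in-memory DB
dst = sqlite3.connect(":memory:")
src.backup(dst)

copied = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(copied)  # 2
```

This only illustrates the full-backup idea; differential and log backups depend on engine-specific change tracking that SQLite does not expose this way.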
Using SQL Server 2008's MERGE statement (TechRepublic, Kaing Menglieng)
The document discusses SQL Server 2008's new MERGE statement, which allows inserting, updating, or deleting data in one table based on conditions in another table in a single statement. It provides an example using the MERGE statement to load sales data from a staging table into a reporting table, handling inserting new records as well as updating existing records with new sales amounts and counts. The MERGE statement combines the logic of separate insert, update, and delete statements into one statement for efficiently modifying data between tables.
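The insert-or-update half of MERGE has a close analogue in SQLite's UPSERT (version 3.24+), which this Python sketch uses to mirror the staging-to-reporting example; the `sales_report` table and column names are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_report (product TEXT PRIMARY KEY, amount REAL, cnt INTEGER);
INSERT INTO sales_report VALUES ('widget', 100.0, 1);
""")

# Rows arriving from a staging table
staging = [("widget", 50.0), ("gadget", 75.0)]

# UPSERT covers MERGE's insert-or-update case in one statement: new
# products are inserted, existing ones get amount and count incremented
con.executemany("""
    INSERT INTO sales_report (product, amount, cnt) VALUES (?, ?, 1)
    ON CONFLICT (product) DO UPDATE SET
        amount = amount + excluded.amount,
        cnt = cnt + 1
""", staging)

result = con.execute(
    "SELECT product, amount, cnt FROM sales_report ORDER BY product").fetchall()
print(result)  # [('gadget', 75.0, 1), ('widget', 150.0, 2)]
```

T-SQL's MERGE is more general, since its `WHEN NOT MATCHED BY SOURCE` clause can also delete target rows absent from the source, which UPSERT cannot express.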
Similar to 15 ways to optimize your sql queries hungred dot com
Using object dependencies in SQL Server 2008 (TechRepublic, Kaing Menglieng)
This document discusses how SQL Server 2008 improves on tracking object dependencies by name rather than by ID. It introduces two new dynamic management functions and a view for tracking dependencies. An example creates database objects with dependencies and queries the new functions and view to identify referenced and referencing objects. The conclusion notes that tracking dependencies by name makes it possible to find objects that reference objects which do not yet exist.
Using hash fields in SQL Server (TechRepublic, Kaing Menglieng)
The document discusses using hash fields in SQL Server. It explains that hash fields can be used for auditing data changes and capturing data for data warehouses. It shows how to create a hash field using the BINARY_CHECKSUM function on multiple fields. This creates a hash value that changes when the fields change, allowing detection of data modifications. It demonstrates adding a hash field to a sample SalesHistory table and discusses how hash fields can simplify change data capture for data warehousing.
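The change-detection idea behind a hash field can be sketched in a few lines. This Python version uses SHA-256 in place of T-SQL's BINARY_CHECKSUM (the field values are invented; note BINARY_CHECKSUM itself is a weaker checksum that can collide, whereas a cryptographic hash makes accidental collisions negligible):

```python
import hashlib

def row_hash(*fields):
    # Stand-in for a hash field: any change to any field changes the hash.
    # Join with a separator unlikely to appear in the data so that
    # ("ab", "c") and ("a", "bc") do not hash identically.
    payload = "\x1f".join(str(f) for f in fields)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

before = row_hash("widget", 10, "2024-01-01")
after = row_hash("widget", 12, "2024-01-01")   # quantity changed

print(before == after)                               # False: change detected
print(before == row_hash("widget", 10, "2024-01-01"))  # True: row unchanged
```

Comparing one stored hash per row is much cheaper than comparing every column, which is why hash fields simplify change data capture for data warehouses.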
Using GROUPING SETS in SQL Server 2008 (TechRepublic) - Kaing Menglieng
This article discusses the new GROUPING SETS clause in SQL Server 2008, which allows specifying combinations of field groupings in queries to see different levels of aggregated data. It provides an example showing how GROUPING SETS can return aggregated data at both the product level and grand total level. Another example shows returning aggregates by product, sales tier, and grand total. The article notes this functionality enhances reporting by returning multiple aggregations in one statement rather than separate queries.
Understand When to Use User-Defined Functions in SQL Server (TechRepublic) - Kaing Menglieng
User-defined functions (UDFs) in SQL Server allow users to define custom functions that can accept parameters and return values. There are two main types of UDFs - table-valued functions that return results in a table that can be queried, and scalar-valued functions that return a single value. The document provides examples of creating both types of UDFs and using them to return sales data from a sample SalesHistory table based on input parameters.
SQL Server Indexed Views: Speed Up Your SELECT Queries, Part 1 (CodeProject) - Kaing Menglieng
This document discusses indexed views in SQL Server, which allow for optimizing select queries. Indexed views store clustered index data to provide another location for the query optimizer to find data. They are best suited for read-heavy databases like data warehouses. The document covers how to create indexed views, including using schema binding and unique clustered indexes, and constraints like deterministic queries. It provides an example of creating a view and index on a table to optimize queries against that data.
This document discusses how to optimize SQL queries by removing bookmark lookups, RID lookups, and key lookups. These lookups occur when a query uses a non-clustered index to seek on columns in the WHERE clause, but must then perform further lookups to return non-key columns in the SELECT clause. The document demonstrates how to remove these lookups using two methods: 1) creating a non-clustered index that covers all columns in the query, or 2) creating an included column non-clustered index. Removing lookups in this way improves query performance.
SQL Server Common Interview Questions and Answers - Kaing Menglieng
The document discusses common interview questions and answers related to SQL Server. It provides 6 questions and answers about topics like the TCP/IP port SQL Server uses, the differences between clustered and non-clustered indexes, index configurations for tables, collation sensitivity types, what OLTP is, and the differences between primary and unique keys.
SQL Server Common Interview Questions and Answers, Page 6 - Kaing Menglieng
This document discusses common SQL Server interview questions and answers. It defines BCP as a tool used to copy large amounts of data between tables or views without copying structures. It also describes how to implement one-to-one, one-to-many, and many-to-many relationships in table design. Finally, it explains that an execution plan shows the query optimization methods chosen by SQL Server and can be viewed within Query Analyzer to understand query performance.
SQL Server Common Interview Questions and Answers, Page 4 - Kaing Menglieng
This document discusses differences between local and global temporary tables in SQL Server. A local temporary table exists only for the duration of a connection, while a global temporary table remains in the database permanently but the rows only exist within a given connection. It also explains the STUFF and REPLACE functions, where STUFF overwrites existing characters at a given start position and length, while REPLACE replaces all occurrences of a search string. Primary keys, unique keys, foreign keys and check constraints are also summarized. Primary keys uniquely identify rows, unique keys enforce uniqueness of column values, foreign keys prevent orphaned rows, and check constraints limit column values.
SQL Server Common Interview Questions and Answers, Page 2 - Kaing Menglieng
The document discusses differences between the DELETE and TRUNCATE commands in SQL Server. DELETE removes rows one by one and can be rolled back, while TRUNCATE removes all rows faster but cannot be rolled back. It also discusses the use of UPDATE_STATISTICS to update indexes after large data changes, and the differences between the HAVING and WHERE clauses. Finally, it defines SQL Profiler as a tool to monitor SQL Server events and lists the two authentication modes in SQL Server (Windows and Mixed) that can be changed in the SQL Server Configuration Manager.
SQL Server 2008: Hardware and Software Requirements for Installing SQL Server - Kaing Menglieng
The document provides the minimum hardware and software requirements for installing SQL Server 2008. It lists requirements such as the .NET Framework 3.5, Windows Installer 4.5, Internet Explorer 6 SP1, and various supported Windows operating systems. It also provides specific requirements for installing different editions of SQL Server 2008 such as the Enterprise edition requiring a minimum 1 GHz processor and 512 MB of RAM.
Speeding Up Queries with Semi-Joins and Anti-Joins - Kaing Menglieng
- Semi-joins return rows from the first table where matches are found in the second table, but return each matching row from the first table only once. Anti-joins return rows from the first table where no matches are found in the second table.
- Semi-joins can be written using EXISTS or IN clauses, while anti-joins use NOT EXISTS or NOT IN. This allows Oracle to use more efficient semi-join and anti-join access paths.
- Oracle has improved at optimizing queries with EXISTS and IN clauses over time, and in Oracle 9i and later the choice is usually not important as Oracle will generate efficient plans for both.
Speed Up SQL Server Apps (Visual Studio Magazine) - Kaing Menglieng
This article provides 10 tips for improving the performance of SQL Server applications. Some of the tips include using EXISTS instead of COUNT(*) when checking for existence, being careful when using WHERE IN and WHERE NOT IN clauses, randomizing result sets with NEWID(), and increasing the default packet size for transferring large data fields.
See SQL Server Graphical Execution Plans in Action (TechRepublic) - Kaing Menglieng
Tim Chapman identifies a few basic things to look for in SQL Server graphical execution plans to understand how indexes are used. He creates a sample database with different indexes and queries it in various ways to demonstrate index seeks, scans, and lookups. The article shows how indexes can improve query performance when columns have high selectivity but may not help when columns have few distinct values.
Reviewing SQL Server Permissions (TechRepublic) - Kaing Menglieng
The document reviews SQL Server permissions. It discusses reviewing login information using the sys.server_principals view, determining database users using sys.database_principals, viewing roles assignments with other system views, and identifying object permissions with sys.database_permissions. Examples are provided to test adding a login, user, and role membership. The document aims to help administrators understand permissions on their SQL Server instance.
Query Optimization: How to Search Millions of Records in a SQL Table Faster - Kaing Menglieng
The document is a question posted on Stack Overflow asking how to search millions of records in a SQL table faster. The question details searching a table with 50 million records and 5 columns to find records containing "lifeis". Indexing did not help speed up the LIKE query. Respondents recommend using a full-text index, which supports faster LIKE queries for patterns beginning with a fixed string. While an index on the searched column could provide some help, a full-text index is likely the best approach given the large number of records.
New Date Datatypes in SQL Server 2008 (TechRepublic) - Kaing Menglieng
1) SQL Server 2008 introduces new date/time datatypes that provide greater precision and flexibility over previous versions.
2) The new datatypes include DATE, DATETIME2, TIME, and DATETIMEOFFSET to store date-only, time-only, or date+time values with time zone support.
3) These new datatypes allow for a wider range of dates, more precise time values down to 100 nanoseconds, and separation of date and time components for easier querying.
Introduction to Policy-Based Management in SQL Server 2008 (TechRepublic) - Kaing Menglieng
Policy-Based Management in SQL Server 2008 allows database administrators to define policies that ensure database guidelines are followed. Policies can specify rules for object properties and names. Key components of policies include targets, facets, and conditions. Policies can be evaluated on demand, on change, or on a schedule. Policy-Based Management gives database administrators more control to strictly enforce standards and guidelines.
15 Ways to Optimize Your SQL Queries
Posted on October 27 by Clay
The previous article, 10 Ways To Destroy A SQL Database, covered the mistakes many companies make on their databases that can eventually lead to a destroyed database. In this article, you will get to know 15 ways to optimize your SQL queries. Some are common ways to optimize a query, while others are less obvious.
Indexes
Indexing your columns is a common way to optimize search results. Nonetheless, you must fully understand how indexing works in each database in order to fully utilize indexes. On the other hand, useless indexing, done without understanding how it works, might just do the opposite.
Symbol Operator
Symbol operators such as >, <, =, and != are very helpful in our queries. We can optimize some of our queries with a symbol operator, provided the column is indexed. For example,

SELECT * FROM TABLE WHERE COLUMN > 16
Now, the above query is not optimized, due to the fact that the DBMS will have to look for the value 16 and then scan onward from there. On the other hand, an optimized version would be

SELECT * FROM TABLE WHERE COLUMN >= 15
This way the DBMS might jump straight to the value 15 instead. It's pretty much the same as how we find the value 15 (we scan through and target ONLY 15), compared to finding values smaller than 16 (we have to determine whether each value is smaller than 16: an additional operation).
http://hungred.com/useful-information/ways-optimize-sql-queries/[09/21/2012 4:05:29 PM]
Wildcard
In SQL, the wildcard is provided with the '%' symbol. Using a wildcard will definitely slow down your query, especially on tables that are really huge. We can optimize our wildcard queries by using a postfix wildcard instead of a prefix or full wildcard.
#Full wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE '%hello%';
#Postfix wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE 'hello%';
#Prefix wildcard
SELECT * FROM TABLE WHERE COLUMN LIKE '%hello';
The column must be indexed for this optimization to apply.
P.S.: Doing a full wildcard search on a table with a few million records is equivalent to killing the database.
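To see the difference, here is a minimal sketch using Python's sqlite3 (the table and column names are made up for illustration; other databases report their plans differently, but the principle carries over): a postfix wildcard can be answered with an index range search, while a prefix wildcard forces a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("hello world",), ("say hello",), ("goodbye",)])
conn.execute("CREATE INDEX idx_col ON t (col)")
# SQLite only rewrites LIKE into an index range scan when LIKE is
# case-sensitive, so turn that on for the demo
conn.execute("PRAGMA case_sensitive_like = ON")

def plan(sql):
    """Return the query plan as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(r[-1] for r in rows)

# Postfix wildcard: the fixed prefix lets the index narrow the search
print(plan("SELECT * FROM t WHERE col LIKE 'hello%'"))
# Prefix wildcard: no fixed prefix, so every row must be examined
print(plan("SELECT * FROM t WHERE col LIKE '%hello'"))
```

On a typical SQLite build, the first plan reports a SEARCH using the index and the second reports a full table SCAN.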
NOT Operator
Try to avoid the NOT operator in SQL. It is much faster to search for an exact match (a positive operator such as LIKE, IN, EXISTS or =) than to use a negative operator such as NOT LIKE, NOT IN, NOT EXISTS or !=. Using a negative operator will cause the search to examine every single row to verify that none of them belong or exist in the result, while a positive operator can stop as soon as the result has been found. Imagine you have 1 million records in a table; that's bad.
COUNT VS EXISTS
Some of us might use the COUNT operator to determine whether particular data exists:

SELECT COUNT(COLUMN) FROM TABLE WHERE COLUMN = 'value'

This is a very bad query, since COUNT will search through every matching record in the table to arrive at the numeric value. The better alternative is the EXISTS operator, which stops once it has found the first record. Hence, it exists.
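As a sketch of the EXISTS alternative, using Python's sqlite3 with a made-up sales table (the SQL itself carries over to other engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT)")
conn.executemany("INSERT INTO sales VALUES (?)",
                 [("widget",)] * 1000 + [("gadget",)])

# COUNT tallies every matching row just to answer a yes/no question
count = conn.execute(
    "SELECT COUNT(*) FROM sales WHERE product = 'widget'").fetchone()[0]

# EXISTS can stop at the first matching row
found = conn.execute(
    "SELECT EXISTS (SELECT 1 FROM sales WHERE product = 'widget')"
).fetchone()[0]

print(count)  # 1000
print(found)  # 1
```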
Wildcard VS Substr
Most developers practice indexing. Hence, if a particular COLUMN has been indexed, it is best to use a wildcard instead of substr.

#BAD
SELECT * FROM TABLE WHERE substr(COLUMN, 1, 5) = 'value';

The above will apply substr to every single row in order to find the leading characters 'value'. On the other hand,

#BETTER
SELECT * FROM TABLE WHERE COLUMN LIKE 'value%';

The wildcard query will run faster, because it searches for all rows that start with 'value' and can use the index. Example:

#SEARCH FOR ALL ROWS WITH THE FIRST CHARACTER AS 'E'
SELECT * FROM TABLE WHERE COLUMN LIKE 'E%';
Index Unique Columns
Some databases, such as MySQL, search better on columns that are unique and indexed. Hence, it is best to remember to index those columns that are unique, and if a column is truly unique, to declare it as such. However, if a particular column is never used for searching, there is no reason to index it, even though it is unique.
Max and Min Operators
MAX and MIN look for the maximum or minimum value in a column. We can use MAX and MIN on columns that already have indexes established. If that particular column is frequently searched, having an index will help speed up those searches and, at the same time, speed up MAX and MIN. However, deliberately adding an index just to speed up MAX and MIN is not advisable; it's like sacrificing the whole forest for merely a tree.
Data Types
Use the most efficient (smallest) data types possible. It is unnecessary, and sometimes dangerous, to use a huge data type when a smaller one will be more than sufficient. For example, use the smaller integer types where possible to get smaller tables; in MySQL, MEDIUMINT is often a better choice than INT because a MEDIUMINT column uses 25% less space. Likewise, VARCHAR is better than LONGTEXT for storing an email address or other small details.
Primary Index
The primary column used for indexing should be made as short as possible. This makes identification of each row easy and efficient for the DBMS.
String Indexing
It is unnecessary to index a whole string when a prefix or postfix of the string can be indexed instead. This is especially advisable when the prefix or postfix provides a unique identifier for the string. Shorter indexes are faster, not only because they require less disk space, but also because they give you more hits in the index cache, and thus fewer disk seeks.
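As a sketch: in MySQL a prefix index is written `CREATE INDEX idx_email ON users (email(8))`. SQLite (used below via Python's sqlite3, with made-up data) has no column-prefix indexes, so an expression index on the leading characters is the closest stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.executemany("INSERT INTO users VALUES (?)",
                 [("alice@example.com",), ("bob@example.com",)])

# Index only the first 8 characters instead of the whole string
conn.execute("CREATE INDEX idx_email ON users (substr(email, 1, 8))")

# A query filtering on the same expression can use the shorter index
row = conn.execute(
    "SELECT email FROM users WHERE substr(email, 1, 8) = 'alice@ex'"
).fetchone()
print(row[0])  # alice@example.com
```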
Limit The Result
Another common way of optimizing your query is to minimize the number of rows returned. If a table has a few billion records, a search query without a limit can break the database with a SQL statement as simple as this:

SELECT * FROM TABLE

Hence, don't be lazy: limit the result set, which is both more efficient and helps minimize the damage of an SQL injection attack.

SELECT * FROM TABLE WHERE 1 LIMIT 10
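A minimal sketch of limiting the result set, using Python's sqlite3 with a made-up table (an ORDER BY is added so the ten rows returned are deterministic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(100000)])

# Fetch only the ten rows actually needed instead of all 100,000
rows = conn.execute("SELECT id FROM big ORDER BY id LIMIT 10").fetchall()
print(len(rows))    # 10
print(rows[0][0])   # 0
print(rows[-1][0])  # 9
```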
Use Default Value
If you are using MySQL, take advantage of the fact that columns have default values.
Insert values explicitly only when the value to be inserted differs from the default. This
reduces the parsing that MySQL must do and improves the insert speed.
In Subquery
Some of us will use a subquery within the IN operator, such as this:

SELECT * FROM TABLE WHERE COLUMN IN (SELECT COLUMN FROM TABLE)

Doing this can be very expensive, because the DBMS may re-evaluate the subquery for every row of the outer query. Instead, we can use a derived table:

SELECT * FROM TABLE, (SELECT COLUMN FROM TABLE) AS dummytable
WHERE dummytable.COLUMN = TABLE.COLUMN;

Using a dummy table is better than using the IN operator with a subquery. Alternatively, the EXISTS operator is also better.
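The two forms return the same rows. Here is a sketch with Python's sqlite3 and made-up orders/customers tables; whether the rewrite is actually faster depends on the engine and version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id INTEGER)")
conn.execute("CREATE TABLE customers (id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?)", [(1,), (2,), (2,), (5,)])
conn.executemany("INSERT INTO customers VALUES (?)", [(1,), (2,), (3,)])

# IN-subquery form
in_rows = conn.execute(
    "SELECT customer_id FROM orders"
    " WHERE customer_id IN (SELECT id FROM customers)"
    " ORDER BY customer_id").fetchall()

# Derived-table form; DISTINCT guards against the join duplicating
# order rows if the subquery were to return duplicate ids
join_rows = conn.execute(
    "SELECT o.customer_id FROM orders o,"
    " (SELECT DISTINCT id FROM customers) AS dummytable"
    " WHERE o.customer_id = dummytable.id"
    " ORDER BY o.customer_id").fetchall()

print(in_rows == join_rows)  # True
```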
Utilize UNION instead of OR
Indexes lose their speed advantage when used in OR situations, in MySQL at least. Hence, the following will not make use of the indexes even though they are applied:

SELECT * FROM TABLE WHERE COLUMN_A = 'value' OR COLUMN_B = 'value'

On the other hand, a UNION such as this will utilize the indexes:

SELECT * FROM TABLE WHERE COLUMN_A = 'value'
UNION
SELECT * FROM TABLE WHERE COLUMN_B = 'value'

Hence, it runs faster.
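The speed claim above is MySQL-specific; this sketch (Python's sqlite3, made-up table and data) only demonstrates that the two forms return the same rows, with each UNION branch able to use its own index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "x", "y"), (2, "value", "y"),
                  (3, "x", "value"), (4, "value", "value")])
conn.execute("CREATE INDEX idx_a ON t (a)")
conn.execute("CREATE INDEX idx_b ON t (b)")

or_rows = conn.execute(
    "SELECT * FROM t WHERE a = 'value' OR b = 'value' ORDER BY id"
).fetchall()

# Each UNION branch can use its own index; UNION also removes the
# duplicate produced by row 4, which matches both conditions
union_rows = conn.execute(
    "SELECT * FROM t WHERE a = 'value'"
    " UNION"
    " SELECT * FROM t WHERE b = 'value'"
    " ORDER BY id").fetchall()

print(or_rows == union_rows)  # True
```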
Summary
These optimization tips don't guarantee that your queries won't become your system's bottleneck; that will require much more benchmarking and profiling. However, the simple optimizations above can be utilized by anyone, and they might just help save a colleague's rice bowl while you learn to write good queries (whether that's you or your team leader/manager).
About Clay
I am Clay, the main writer for this website. I own a small web hosting company in Malaysia, and I'm available for hire as an individual contractor on Elance or oDesk. You can find me on Twitter.
View all posts by Clay →
This entry was posted in Developer, How-to, Informative, SQL, Tips And Tricks, Web Development and tagged SQL.
Bookmark the permalink.
9 Responses to 15 Ways to Optimize Your SQL Queries
Veera says:
October 27 at 12:24 PM
I couldn’t understand the ‘Symbol Operator’ point. Could you please explain it a
little further?
Greg Jorgensen says:
October 27 at 1:25 PM
You are wrong about the NOT operator. If you think about it, you will realize that
you can determine that there are NO black marbles in a bowl just as fast as you can
determine that there is at least one black marble. There is no need to examine every
marble; you can stop as soon as you find one black marble.
NOT EXISTS is exactly that: an EXISTS test that is logically negated. It’s possible
that a NOT EXISTS (or NOT LIKE or NOT IN) test will examine every
row/character/list member if the searched item is not present, but that will
happen for both EXISTS and NOT EXISTS.
Greg Jorgensen says:
October 27 at 1:30 PM
MAX and MIN do not “look for the maximum or minimum value in a column,”
and they aren’t operators. The MIN and MAX functions are aggregate functions
that operate on the selected rows (or groups of rows if GROUP BY is used).
SELECT MAX(col) FROM table will find the maximum value of col, but the
functions are more general than that. Indexes are expensive to maintain, and
indexing columns just to speed up MIN and MAX is not great advice.
Clay says:
October 27 at 3:20 PM
@Greg: I agree with you that indexing columns just to speed up MIN and MAX is not
good advice. Maybe there was a misunderstanding on that point. I meant that MAX and
MIN can be used on an already indexed column for better speed. Deliberately indexing
a column just because of a MIN or MAX is a pure no-no. Thanks for the feedback.
Well, regarding the NOT operator: if there were an algorithm available in the world
that worked like a human, maybe your theory would hit the right spot.
Clay says:
October 27 at 4:52 PM
@Veera: To make things simple: if there is a 15 available in that column, the DBMS can
point directly at 15 (if there is no exact value, it behaves just like <16), instead of
going through the rows to find the highest value that is smaller than 16 (it might be
15, 14, 13, 12, 11, etc.; the DBMS does not know until it looks through them). With an
equals sign in the comparison, there is a chance it will jump directly to 15 and return
the result to you.
unreal4u says:
October 28 at 1:08 AM
nice post! i had no idea about #2 and #4…
also, some discoveries i’ve made about queries:
- UNION is slow, try to use stored procedures: i’ve not tested this one on MySQL,
but in PostGreSQL, it is much faster to use a SP than an union. Example:
SP:
SELECT
getValue(column1) AS name,
Operation(column2) AS number_of_products
is much much faster than:
(SELECT
name,
0 AS number_of_products
FROM customer
WHERE a=b)
UNION
(SELECT
0 AS name,
number_of_products
FROM products
WHERE y=z)
However, like i said, i’ve not tested this one on MySQL.
- PHP and MySQL: ok, strictly this isn’t a part of query optimization, but it is
always better to just retrieve the rows we are interested in, instead of all rows.
Why? Memory (PHP and MySQL) and bandwidth consumption.
Example:
best:
SELECT id, name FROM customer
instead of:
SELECT * FROM customer
(Assuming “customer” has an id, name, last_name, description, login,
password, e-mail, etc.)
It also makes the code cleaner.
INNER/LEFT/RIGHT JOIN: ok, this is an untested one. A while ago, i was asked
to optimize a MySQL query. Originally, i used joins, but turned out it was much
faster to use them in the WHERE part.
Example:
Slower:
SELECT * FROM customer AS a LEFT JOIN products AS b ON a.id = b.id_customer
Faster:
SELECT * FROM customer AS a, products AS b
WHERE a.id = b.id_customer
I don’t know how or why, but it turned out that the first case was much slower (10
seconds) than the second case (3.4 seconds).
It was however, a system already in production, so i hadn’t the chance to play
much with the query or with indexes
The client however, was very happy with the results xD
Greetings
JW says:
October 28 at 1:49 AM
"However, if that particular column was never used for searching purposes, it
gives no reason to index that particular column although it is given unique"
Um, I would say this is misleading too. What about unique columns that are
frequently used in joins, but never searched?
Clay says:
October 28 at 6:40 AM
@JW: Yup, you don’t need an index when no search is being done on a particular
column. But if it is a unique and frequently used column, having an index will
perform better.
For joins, it depends on what DBMS you use. In MySQL, indexes perform more
efficiently when the joined columns have the same data type and size. Even if you
don’t need their values in the result, columns that are frequently used in joins
require the DBMS to search for their matching partners, so having indexes helps in
joins too; the criteria for making them efficient depend on each DBMS’s
implementation.
Clay says:
October 28 at 7:00 AM
@unreal4u : Thanks for sharing
I’m not really familiar with stored procedures in MySQL at the moment, since they
are quite new (MySQL v5.1 onwards). But in MySQL, a stored procedure is used when a
query is frequently executed and the data does not change, so using a stored
procedure can be more efficient. You can read up on stored procedures for more
information. It should work quite similarly to PostgreSQL, since the concept is
the same.
Yeah, it’s not good to always use *; for security reasons too, beyond the
optimization point of view. (I was writing an example, so I was lazy; sorry about
that.)
On the INNER/LEFT/RIGHT join, I have also experienced situations with tables of a
few million rows where a join seemed slower than a WHERE clause. I read about it
somewhere before, but I have forgotten why that is so (it should be in the MySQL
documentation).
@Mehedi : Welcome