SQL Server can service requests from a large number of concurrent users. When it does, conflicts are likely because different processes request access to the same resources at the same time. A conflict in which one process waits for another to release a resource is called a block. A blocked process usually resolves itself when the first process releases the resource, but sometimes a process holds a transaction lock and does not release it. In this tip, we will learn different techniques to troubleshoot and resolve blocks in SQL Server.
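As a starting point for the troubleshooting techniques the tip describes, blocked sessions and their blockers can be listed from the dynamic management views; a minimal T-SQL sketch (these DMVs exist in SQL Server 2005 and later):

```sql
-- List sessions that are currently blocked, who is blocking them,
-- and the statement each blocked session is trying to run.
SELECT r.session_id          AS blocked_session,
       r.blocking_session_id AS blocker,
       r.wait_type,
       r.wait_time           AS wait_time_ms,
       t.text                AS blocked_sql
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.blocking_session_id <> 0;
```

The `blocking_session_id` chain can be followed upward to find the root blocker, which is the session worth investigating (or, as a last resort, killing).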
This document provides an outline for a presentation on concurrency in SQL Server. The presentation introduction discusses sessions, locking, blocking, deadlocks, and pressure. It then covers transactions, isolation levels, and phenomena like dirty reads, non-repeatable reads, and phantom reads. The document discusses the speaker's background and certifications. It concludes with an outline of topics to be covered, including locking basics, advanced locking concepts, controlling locking, pessimistic and optimistic concurrency approaches.
SQL Server uses different types of locks at varying levels of granularity to control access to resources by transactions. Locking resources at a finer-grained level, like individual rows, increases concurrency but requires more locks. Locking at a coarser level, like entire tables, reduces the number of required locks but also decreases concurrency by restricting access to the entire resource. SQL Server automatically determines the appropriate lock level needed based on the transaction's data access needs.
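When the automatically chosen granularity needs to be overridden for a specific query, table hints can request a level explicitly; a hedged sketch, assuming a hypothetical `dbo.Orders` table (hints should be used sparingly, since the engine's choice is usually right):

```sql
BEGIN TRAN;
-- Request row-level locks for a narrow read.
SELECT * FROM dbo.Orders WITH (ROWLOCK) WHERE OrderID = 42;
-- Request page-level locks instead.
SELECT * FROM dbo.Orders WITH (PAGLOCK) WHERE OrderID = 42;
-- Lock the entire table: fewer locks, least concurrency.
SELECT * FROM dbo.Orders WITH (TABLOCK);
COMMIT;
```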
Troubleshooting tips and tricks for Oracle Database (Oct 2020) by Sandesh Rao
This talk presents 15 tips and tricks for using tools to troubleshoot and debug problems with Oracle Database, Oracle RAC, Oracle Clusterware, and ASM, and for gathering the right diagnostic data with the fewest commands, a task most people still perform manually. The session covers tools from the Oracle Autonomous Health Framework (AHF), such as Trace File Analyzer (TFA) to collect, organize, and analyze log data; Exachk and orachk to perform mass best-practices analysis and automation; Cluster Health Advisor to debug node evictions and calibrate the framework; OSWatcher and its analysis engine; oratop for pinpointing performance issues; and many others that make one feel like a rockstar DBA.
The document discusses tuning SQL queries in Oracle databases. It begins by noting that while tools can help, there is no single process for tuning every query as each case depends on factors like the schema design, data distribution and how the optimizer chooses a plan. The document then provides a methodology for investigating and tuning a query with poor performance, including getting the execution plan, checking it visually, and identifying possible causes like stale statistics, missing indexes or inefficient SQL.
M|18 How MariaDB Server Scales with Spider by MariaDB plc
Spider is a storage engine plugin that manages data stored across other storage engines. It supports sharding very large tables by partitioning them and storing the partitions on separate data nodes. Spider handles distributed queries by pushing down query fragments to the data nodes and consolidating the results. It provides data redundancy, load balancing, and two-phase commit for data consistency. New features in Spider include direct aggregation, update/delete, and join capabilities. Future work includes a Vertical Partition engine to support multi-dimensional sharding.
Stored procedures allow for grouping SQL statements and parameters to be stored and executed on a database. They provide more capabilities than scripts such as error handling and security. Parameters can pass data into and out of stored procedures. Stored procedures use structures like IF/ELSE, CASE, and cursors to implement decision-making and looping functionality similar to programming languages. Transactions allow grouping statements to commit or rollback changes and ensure data integrity.
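The combination of parameters, control flow, and transactions can be sketched in a single T-SQL procedure; the procedure name and the `dbo.Accounts` table are illustrative assumptions, not from the source:

```sql
CREATE PROCEDURE dbo.usp_TransferFunds
    @FromAccount INT,
    @ToAccount   INT,
    @Amount      MONEY
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRAN;
        -- Both updates succeed or both roll back: data integrity.
        UPDATE dbo.Accounts SET Balance = Balance - @Amount
        WHERE  AccountID = @FromAccount;
        UPDATE dbo.Accounts SET Balance = Balance + @Amount
        WHERE  AccountID = @ToAccount;
        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        THROW;  -- re-raise the original error to the caller
    END CATCH
END;
```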
The document discusses tips for designing test data before executing test cases. It recommends creating fresh test data specific to each test case rather than relying on outdated standard data. It also suggests keeping personal copies of test data to avoid corruption when multiple testers access shared data. The document provides examples of how to prepare large data sets needed for performance testing.
This document provides an introduction and overview of PostgreSQL, including its history, features, installation, usage and SQL capabilities. It describes how to create and manipulate databases, tables, views, and how to insert, query, update and delete data. It also covers transaction management, functions, constraints and other advanced topics.
This document provides an introduction to SQL and database systems. It begins with example tables to demonstrate SQL concepts. It then covers the objectives of SQL, including allowing users to create database structures, manipulate data, and perform queries. Various SQL concepts are introduced such as data types, comparison operators, logical operators, and arithmetic operators. The document also discusses SQL statements for schema and catalog definitions, data definition, data manipulation, and other operators. Example SQL queries are provided to illustrate concepts around selecting columns, rows, sorting, aggregation, grouping, and more.
The document discusses SQL Server performance monitoring and tuning. It recommends taking a holistic view of the entire system landscape, including hardware, software, systems and networking components. It outlines various tools for performance monitoring, and provides guidance on identifying and addressing common performance issues like high CPU utilization, disk I/O issues and poorly performing queries.
Triggers can be used to add functionality to form items by executing PL/SQL code when events occur. Buttons can display LOVs using the LIST_VALUES or SHOW_LOV built-ins. Checkboxes and radio buttons can be interacted with to set other item properties or values conditionally. List items can be manipulated with built-ins to add, delete, or modify list elements. Images can be loaded into image items using READ_IMAGE_FILE. Hierarchical tree items can be populated from a hierarchical query using the Populate_Tree built-in.
Database Transactions and SQL Server Concurrency by Boris Hristov
The document discusses database transactions and transaction management. It begins with an overview of transactions, their properties (atomicity, consistency, isolation, durability known as ACID), and how they are implemented using locks in SQL Server. It then covers transaction isolation levels, locking concepts like lock types and escalation, and how to troubleshoot locking problems including deadlocks. The document provides examples of transactions in SQL Server and demonstrations of managing transactions and concurrency.
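Changing the isolation level is a one-line statement in SQL Server; a minimal sketch of trading concurrency for consistency (the table name is hypothetical):

```sql
-- Stronger isolation: repeatable reads, at the cost of holding
-- shared locks until the transaction ends.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;
-- A concurrent writer is now blocked from changing this row,
-- so a second read inside the transaction returns the same value.
SELECT Balance FROM dbo.Accounts WHERE AccountID = 1;
COMMIT;
```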
Database tuning is the process of optimizing a database to maximize performance. It involves activities like configuring disks, tuning SQL statements, and sizing memory properly. Database performance issues commonly stem from slow physical I/O, excessive CPU usage, or latch contention. Tuning opportunities exist at the level of database design, application code, memory settings, disk I/O, and eliminating contention. Performance monitoring tools like the Automatic Workload Repository and wait events help identify problem areas.
MySQL views allow users to create virtual tables based on the result set of SELECT statements. Views can reference tables but have restrictions like not allowing subqueries or system variables. The CREATE VIEW statement is used to define a view with an AS clause specifying the SELECT statement. Views offer benefits like easier maintenance and security but can impact performance.
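A minimal MySQL sketch of the statement the summary describes (the table and view names are illustrative):

```sql
-- A view is a named SELECT; querying the view runs the underlying statement.
CREATE VIEW active_customers AS
    SELECT id, name, email
    FROM   customers
    WHERE  status = 'active';

SELECT name FROM active_customers ORDER BY name;
```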
This document provides definitions and explanations of various Oracle database concepts and components. It defines terms like log switch, online redo log, archived redo log, database startup process, instance recovery, full backup restrictions, mounting modes, ARCHIVELOG mode advantages, database shutdown process, restricted instance startup, partial backup, mirrored redo log, and control file usage. It also answers questions on topics like views, tablespaces, schemas, segments, clusters, integrity constraints, indexes, extents, synonyms, and transactions.
This document lists 120 top PL/SQL interview questions covering topics such as SQL query execution order, differences between functions and procedures, triggers, collections, joins, exceptions, and performance tuning. The questions range from basic to advanced levels and cover a wide variety of PL/SQL concepts and capabilities.
Procesamiento y Mantenimiento de Archivos (File Processing and Maintenance) by Rosmyl Giomar
A presentation on the general operations performed on a data file: processing and maintenance. Processing includes creation, opening, extension, deletion, and closing. Maintenance consists of updating and querying.
This document discusses Oracle database performance tuning. It covers identifying common Oracle performance issues such as CPU bottlenecks, memory issues, and inefficient SQL statements. It also outlines the Oracle performance tuning method and tools like the Automatic Database Diagnostic Monitor (ADDM) and performance page in Oracle Enterprise Manager. These tools help administrators monitor performance, identify bottlenecks, implement ADDM recommendations, and tune SQL statements reactively when issues arise.
The document discusses stored procedures in databases. It defines stored procedures as procedures that are stored in a database with a name, parameter list, and SQL statements. The key points covered include:
- Stored procedures are created using the CREATE PROCEDURE statement and can contain SQL statements and control flow statements like IF/THEN.
- Parameters can be used to pass data into and out of stored procedures.
- Variables can be declared and used within stored procedures.
- Cursors allow stored procedures to iterate through result sets row by row to perform complex logic.
- Error handling and exceptions can be managed within stored procedures using DECLARE HANDLER.
Stored procedures offer benefits such as reusability, security, and reduced network traffic.
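The listed building blocks (parameters, variables, a cursor, and a handler) fit together roughly as follows in MySQL; the `accounts` table is an assumption for illustration:

```sql
DELIMITER //
CREATE PROCEDURE total_balance(OUT p_total DECIMAL(12,2))
BEGIN
    DECLARE done     INT DEFAULT 0;
    DECLARE v_amount DECIMAL(12,2);
    -- Cursor over the result set to process row by row.
    DECLARE cur CURSOR FOR SELECT amount FROM accounts;
    -- Handler fires when FETCH runs past the last row.
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

    SET p_total = 0;
    OPEN cur;
    read_loop: LOOP
        FETCH cur INTO v_amount;
        IF done THEN
            LEAVE read_loop;
        END IF;
        SET p_total = p_total + v_amount;
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;
```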
This document outlines 8 steps to create an audit trail for the table FND_LOOKUP_VALUES in an Oracle application: 1) Find the application name, 2) Ensure audit is enabled for that application, 3) Create an audit group for the table, 4) Run a concurrent program to create audit tables and triggers, 5) Verify the audit tables were created, 6) Test the audit trail by creating new lookup data, 7) View the audit data that was captured, and 8) Optionally add more columns to the audit trail.
The MySQL Query Optimizer Explained Through Optimizer Trace by oysteing
The document discusses the MySQL query optimizer. It begins by explaining how the optimizer works, including analyzing statistics, determining optimal join orders and access methods. It then describes how the optimizer trace can provide insight into why a particular execution plan was selected. The remainder of the document provides details on the various phases the optimizer goes through, including logical transformations, cost-based optimizations like range analysis and join order selection.
This document discusses database object dependencies in Oracle. It describes how different types of objects can reference other objects, creating dependencies. It defines direct and indirect dependencies. It also covers local dependencies within a database and remote dependencies that can occur between databases in a distributed system. The document discusses how Oracle tracks and manages dependencies and recompiles objects when dependencies change.
This document discusses stored procedures and functions in Oracle databases. It covers:
- What procedures and functions are and how they can be created using PL/SQL syntax.
- Parameters for procedures and functions, including IN, OUT, and IN OUT parameter modes.
- Developing procedures and functions, including compiling, storing, and executing them.
- Benefits of using procedures and functions such as improved maintainability and performance.
Oracle EBS 12.1.3: Integrate OA Framework BC4J components within java concur... by Amit Singh
This document discusses integrating Oracle Application Framework (OAF) BC4J components within a Java concurrent program to perform complex background operations like loading data from an XML file into a database table. It provides steps to setup the development environment, create the necessary OAF model objects (entity object, view object, application module), develop a Java class that implements the concurrent program interface, parse the XML using SAX, and extract and load the data. The goal is to demonstrate how to leverage OAF components for common tasks like database access within a custom Java background job.
Net-a-Porter has embarked on the mission of separating database refactoring from code deployment. The solution we've come up with is "refactoring as a service" and a soon-to-be released Perl module which drives it.
I'll explain how Liquibase, our Git repositories, Puppet and Jenkins all fit together to make database refactoring easy and deployment safe and roll-backable.
I'll also tell you how I just discovered Sqitch as a possible replacement for Liquibase.
... and all within 20 minutes!
PL/SQL cursors allow naming and manipulating the result set of a SQL query. There are two types of cursors: implicit and explicit. Implicit cursors are associated with DML statements and queries with INTO clauses, while explicit cursors must be declared, opened, fetched from, and closed. Explicit cursors can be static, using a fixed SQL query, or dynamic, changing the SQL at runtime. Cursors support attributes like %FOUND and %ROWCOUNT and can iterate over query results using a FOR loop.
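The cursor FOR loop mentioned at the end is usually the most compact form; a small PL/SQL sketch assuming the classic `emp` demo table:

```sql
DECLARE
  -- Explicit cursor with a fixed (static) query.
  CURSOR c_emp IS
    SELECT ename, sal FROM emp WHERE deptno = 10;
BEGIN
  -- The FOR loop opens, fetches from, and closes the cursor implicitly.
  FOR r IN c_emp LOOP
    DBMS_OUTPUT.PUT_LINE(r.ename || ': ' || r.sal);
  END LOOP;
END;
/
```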
DB2 Express-C is a free version of the DB2 database server from IBM. It has no usage or deployment limits and can run on Windows, Linux, Mac OS X, and Solaris operating systems. Minimum requirements are 256MB of RAM but it is recommended to have at least 1GB. DB2 Express-C provides basic database functionality and sits below the paid DB2 Workgroup and Enterprise editions in terms of features. It uses concurrency controls like locking and transactions to allow for multi-user access to the database.
The document discusses various SQL Server concepts and features including:
1) Encrypted stored procedures, linked servers, Analysis Services features like OLAP and data mining models.
2) The Analysis Services repository stores metadata for cubes and data sources. SQL Service Broker allows asynchronous messaging between databases.
3) User-defined data types are based on system types and ensure columns store the same type of data. Data types like bit store 0, 1, or null values.
Database concurrency and transactions - Tal Olier, sqlserver.co.il
This document provides an overview of database transactions and locking concepts. It discusses the ACID model which guarantees atomicity, consistency, isolation, and durability. It describes different isolation levels and how they handle phenomena like dirty reads. It also covers locking types including exclusive and share locks. Advanced topics covered include concurrency, lock escalation, and how transactions and locking are implemented differently in Oracle and SQL Server.
This document discusses SQL basics including transactions, concurrency control, and schema level objects. It explains the ACID properties of transactions including atomicity, consistency, isolation, and durability. It also covers concurrency control, isolation levels, schema objects like stored procedures and functions, domains, sequences, assertions and more. Key concepts are explained with SQL syntax examples.
The document contains interview questions and answers for an SQL Server database administrator position. It includes questions about improving query performance, resolving deadlocks, blocking troubleshooting, database backup types, database isolation levels, creating schemas and cursors, and database architecture. Key points covered are the wait for graph deadlock detection method, types of database backups, isolation levels, how to create schemas and dynamic/scrollable cursors, and the basic architecture of SQL Server databases. More interview questions and answers can be found at the provided link.
This script finds root blocking sessions in Oracle databases, both with and without Real Application Clusters (RAC). It analyzes lock requests and identifies the oldest blocking session, which may not always be the root blocker. The script outputs sessions holding or waiting for locks on the same resource, allowing identification of the true root blocker to kill to resolve blocking.
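Without the full script, the core idea can be sketched with a query over `gv$session`, whose `blocking_session` column points each waiter at its blocker (RAC-aware via `inst_id`):

```sql
-- Sessions currently blocked, with the session blocking each one.
-- A root blocker appears only in the blocking_session column,
-- never in the sid column of this result.
SELECT inst_id,
       sid,
       serial#,
       blocking_instance,
       blocking_session,
       event,
       seconds_in_wait
FROM   gv$session
WHERE  blocking_session IS NOT NULL
ORDER  BY blocking_session;
```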
The document discusses various disaster recovery strategies for SQL Server including failover clustering, database mirroring, and peer-to-peer transactional replication. It provides advantages and disadvantages of each approach. It also outlines the steps to configure replication for Always On Availability Groups which involves setting up publications and subscriptions, configuring the availability group, and redirecting the original publisher to the listener name.
OOW16 - Oracle Database 12c - The Best Oracle Database 12c New Features for D... by Alex Zaballa
Oracle Database 12c introduces many new features for developers and DBAs. These include native support for JSON, data redaction capabilities, improved SQL query functionality using row limits and offsets, and new PL/SQL features like calling functions from SQL. The presentation provides demonstrations of these new features.
OOW16 - Oracle Database 12c - The Best Oracle Database 12c New Features for D...Alex Zaballa
This document provides an overview of new features in Oracle Database 12c for developers and DBAs. It begins with an introduction by Alex Zaballa and then covers several new features including native support for JSON, data redaction, row limits and offsets for SQL queries, PL/SQL functions callable from SQL, session level sequences, and temporary undo. The document includes demonstrations of many of these new features.
T3 is an optimized protocol used to transport data between WebLogic Server and other Java programs. WebLogic Server tracks each Java Virtual Machine (JVM) it connects to and creates a single T3 connection to carry all traffic for a JVM. For example, if a client accesses an enterprise bean and JDBC connection pool on WebLogic Server, a single network connection is established between the WebLogic Server JVM and the client JVM.
12cR1 new features. I have tried to cover all new features of 12cR1 and many more may be missing. These are all my own views and do not necessarily reflect the views of Oracle. Requesting all visitors to comment on it to improve further.
The document discusses key concepts in distributed systems including networking, remote procedure calls (RPC), and transaction processing systems (TPS). It covers networking fundamentals like sockets and ports. It describes how RPC works by allowing functions to be called remotely. It explains the ACID properties that TPS must support for atomicity, consistency, isolation, and durability of transactions processed across distributed systems.
Welcome to the nightmare of locking, blocking and isolation levels!Boris Hristov
There will always be locking inside your SQL Server box! In this session we go deep into how locking mechanism works, what are the main problems around locking, how we can resolve them and when isolation levels can actually be of help!
The document discusses database security. It defines database security as mechanisms that protect databases from intentional or unintentional threats like theft, fraud, loss of confidentiality, integrity, and availability. It discusses various security threats and countermeasures like authorization, views, backups, encryption, and locking. It describes different types of locks like shared and exclusive locks. It also covers authorization, views, backups, integrity controls, encryption, and PL/SQL security features like explicit locking statements.
This document contains a workload repository report for a database named DB11G. Key details include:
- The database ran on a Linux server with 1 CPU and 1.96GB of memory.
- Between two snapshots taken an hour apart, the average wait time per session was 4.8-5.1 seconds.
- The top foreground wait event was log file sync, taking up 9.15% of database time.
Pacemaker is a high availability cluster resource manager that can be used to provide high availability for MySQL databases. It monitors MySQL instances and replicates data between nodes using replication. If the primary MySQL node fails, Pacemaker detects the failure and fails over to the secondary node, bringing the MySQL service back online without downtime. Pacemaker manages shared storage and virtual IP failover to ensure connections are direct to the active MySQL node. It is important to monitor replication state and lag to ensure data consistency between nodes.
This document provides an overview of new features in SQL Server 2005, including SQLCLR which allows writing functions, procedures and triggers in .NET languages. It discusses how to install and debug SQLCLR assemblies, and create user-defined data types and aggregates that can extend the functionality of SQL Server. Key enhancements to T-SQL are also summarized, such as common table expressions, ranking commands, and exception handling.
4. Client-Server Communication Process
[Diagram: the client application calls a database API (OLE DB, ODBC, DB-Library), which hands the query to the client Net-Library; the query travels to the server as a Tabular Data Stream (TDS), arrives through the server Net-Libraries and Open Data Services, and is processed in memory by the relational engine and storage engine against the local database; the result set returns to the client along the same path.]
5. Lock in MS SQL Server.
2. Locks & Transaction Isolation Levels
1. Lock in MS SQL Server.
2. Transaction Isolation Levels.
6. 2.1. Lock in MS SQL Server
a. What is Lock in SQL Server?
b. Lock Granularity and Hierarchies.
c. Lock Modes.
d. Lock Compatibility.
7. 2.1.a What is Lock in SQL Server?
Locking is a mechanism used by the Microsoft SQL
Server Database Engine to synchronize access by
multiple users to the same piece of data at the same
time.
The basis of locking is to allow one transaction to
update data, knowing that if it has to roll back any
changes, no other transaction has modified the data
since the first transaction did.
8. 2.1.b Lock Granularity and Hierarchies
Resource: Description
RID: A row identifier used to lock a single row within a heap.
KEY: A row lock within an index used to protect key ranges in serializable transactions.
PAGE: An 8-kilobyte (KB) page in a database, such as data or index pages.
EXTENT: A contiguous group of eight pages, such as data or index pages.
HOBT: A heap or B-tree. A lock protecting an index or the heap of data pages in a table that does not have a clustered index.
TABLE: The entire table, including all data and indexes.
FILE: A database file.
APPLICATION: An application-specified resource.
METADATA: Metadata locks.
ALLOCATION_UNIT: An allocation unit.
DATABASE: The entire database.
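The granularity at which locks are currently being taken can be observed live. As an illustrative sketch (this grouping query is our addition, not part of the slides), sys.dm_tran_locks can be aggregated by resource type:

```sql
-- Count locks currently held or requested, per granularity level.
-- Run while transactions are active to see RID/KEY/PAGE/OBJECT counts.
SELECT resource_type,
       request_mode,
       COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'   -- every session holds a DATABASE S lock
GROUP BY resource_type, request_mode
ORDER BY resource_type;
```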
9. 2.1.c Lock Modes
Locks have different modes that specify the level of access other transactions have to the locked resource. The lock modes are: Shared (S), Update (U), Exclusive (X), Intent, Schema, Bulk Update (BU), and Key-range.
11. 2.1.c Lock Modes (cont)
Shared locks (S): Used for read operations that do not change or update data, such as a SELECT statement.

Begin tran
Select ID, CA, CB
from dbo.tbl01 with (HOLDLOCK)
where ID = 3
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
Commit tran

Row read:
ID  CA    CB
3   AA03  102

Locks held:
RT      RM  RD
PAGE    IS  1:154
OBJECT  IS
KEY     S   (03000d8f0ecc)
12. 2.1.c Lock Modes (cont)
Exclusive locks (X): Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.

BEGIN TRAN
UPDATE dbo.tbl01
SET CA = 'Exclusive Lock(X)'
WHERE ID = 5
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK

Locks held:
RT      RM  RD
PAGE    IX  1:163
OBJECT  IX
OBJECT  IX
KEY     X   (0500d1d065e9)
13. 2.1.c Lock Modes (cont)
Update locks (U): Used on resources that can be updated. Prevents a common form of deadlock that occurs when multiple sessions are reading, locking, and potentially updating resources later.

Begin tran
Select ID, CA, CB from dbo.tbl01
WITH (UPDLOCK)
where CB > 300
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
Commit tran

Row read:
ID  CA    CB
10  CC03  303

Locks held:
RT      RM  RD
PAGE    IU  1:163
KEY     U   (0a0087c006b1)
OBJECT  IX
14. 2.1.c Lock Modes (cont)
Intent locks (I): Used to establish a lock hierarchy. The types of intent locks are: intent shared (IS), intent exclusive (IX), and shared with intent exclusive (SIX).

Begin tran
UPDATE dbo.tbl01
SET CA = 'Test Intent locks (I)'
WHERE ID = 5
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK

Locks held:
RT      RM  RD
PAGE    IX  1:163
KEY     X   (0500d1d065e9)
OBJECT  IX
15. 2.1.c Lock Modes (cont)
Schema locks (Sch): Used when an operation dependent on the schema of a table is executing. The types of schema locks are: schema modification (Sch-M) and schema stability (Sch-S).

Begin tran
CREATE TABLE tbl02
(TestColumn INT)
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks
WHERE resource_type <> 'DATABASE'
ROLLBACK

Locks held:
RT        RM     RD
HOBT      Sch-M
METADATA  Sch-S  data_space_id = 1
OBJECT    Sch-M
16. 2.1.c Lock Modes (cont)
Key-Range locks: Protect the range of rows read by a query when using the serializable transaction isolation level. Ensure that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.

SET TRANSACTION ISOLATION LEVEL serializable
Begin tran
Update dbo.tbl01 WITH (UPDLOCK) Set CA = 'Key-Range Locks'
where CB > 300
SELECT resource_type RT, request_mode RM, resource_description RD
FROM sys.dm_tran_locks WHERE resource_type <> 'DATABASE'
Commit tran

Locks held:
RT    RM        RD
PAGE  IX        1:163
KEY   RangeX-X  (0a0087c006b1)
KEY   RangeS-U  (ffffffffffff)
17. 2.1.c Lock Modes (cont)
Bulk Update (BU): Used when bulk copying data into a table and the TABLOCK hint is specified.

CREATE TABLE tbl03
(CA VARCHAR(40), CB INT)
GO
BULK INSERT tbl03
FROM 'D:\Bulk.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO

Contents of D:\Bulk.txt:
AA01,101
BB02,202
CC03,303
DD04,404
18. 2.1.d Lock Compatibility
Lock compatibility controls whether multiple transactions can
acquire locks on the same resource at the same time. If a
resource is already locked by another transaction, a new lock
request can be granted only if the mode of the requested lock
is compatible with the mode of the existing lock.
If the mode of the requested lock is not compatible with the
existing lock, the transaction requesting the new lock waits for
the existing lock to be released or for the lock timeout interval
to expire.
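To see incompatibility in practice, one session can hold an exclusive lock while a second session requests a shared lock on the same row. A minimal two-session sketch (our own example, reusing the dbo.tbl01 table from the earlier slides):

```sql
-- Session 1: take an X lock on a row and hold it
BEGIN TRAN
UPDATE dbo.tbl01 SET CA = 'Compat demo' WHERE ID = 5
-- (leave the transaction open)

-- Session 2: a shared read of the same row is incompatible with X.
-- With a lock timeout set, the wait fails fast with error 1222
-- instead of waiting indefinitely.
SET LOCK_TIMEOUT 2000   -- milliseconds
SELECT ID, CA, CB FROM dbo.tbl01 WHERE ID = 5
-- Msg 1222: Lock request time out period exceeded.
```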
19. 2.1.d Lock Compatibility (cont)

Requested Lock Type                 Existing Lock Type
                                    IS  S  U  IX  SIX  X  Sch-S  Sch-M  BU
Intent Shared (IS)                  Y   Y  Y  Y   Y    N  Y      N      N
Shared (S)                          Y   Y  Y  N   N    N  Y      N      N
Update (U)                          Y   Y  N  N   N    N  Y      N      N
Intent Exclusive (IX)               Y   N  N  Y   N    N  Y      N      N
Shared with Intent Exclusive (SIX)  Y   N  N  N   N    N  Y      N      N
Exclusive (X)                       N   N  N  N   N    N  Y      N      N
Schema Stability (Sch-S)            Y   Y  Y  Y   Y    Y  Y      N      Y
Schema Modify (Sch-M)               N   N  N  N   N    N  N      N      N
Bulk Update (BU)                    N   N  N  N   N    N  Y      N      Y
20. 2.2 Transaction Isolation Levels
a. What are Isolation Levels in SQL Server?
b. Types of Isolation Level
21. 2.2.a What are Isolation Levels in SQL Server?
Isolation levels come into play when you need to isolate a resource for a transaction and protect that resource from other transactions. The protection is done by obtaining locks.
Lower isolation levels allow multiple users to access the resource simultaneously (concurrency), but they may introduce concurrency-related problems such as dirty reads and data inaccuracy.
Higher isolation levels eliminate concurrency-related problems and increase data accuracy, but they may introduce blocking.
22. 2.2.b Types of Isolation Level
SQL Server supports six isolation levels:
Read Uncommitted
Read Committed
Repeatable Read
Serializable
Snapshot
Read Committed Snapshot
23. 2.2.b Types of Isolation Level (cont)
Read Uncommitted: This is the lowest level. It provides the highest concurrency but introduces all of the concurrency problems.

--Connection A
Begin tran
UPDATE dbo.tbl01
SET CA = 'Read Uncommitted'
WHERE ID = 5
Commit Tran

--Connection B
Select ID, CA, CB
From dbo.tbl01
WHERE ID = 5

--Change Isolation Level of Connection B
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Select ID, CA, CB
From dbo.tbl01
WHERE ID = 5

Connection B can now see the data, but it is not guaranteed to be correct. This is called a dirty read.
24. 2.2.b Types of Isolation Level (cont)
Read Committed: This is the default isolation level of SQL Server. It eliminates dirty reads but not the other concurrency-related problems. You have already seen this.

CREATE PROCEDURE dbo.UpdateCB
@CA NVarchar(100), @CB int
AS
BEGIN TRAN
If Exists(Select 1 from dbo.tbl01
WHERE CA = @CA)
Begin
WAITFOR DELAY '00:00:05'
UPDATE dbo.tbl01
SET CB = @CB WHERE CA = @CA
Commit Tran
Return
End
Else
RAISERROR ('Data not exist', 16, 1);

--User A calls sp dbo.UpdateCB
EXEC UpdateCB 'AA01', 100
--A few seconds later, User B also calls sp UpdateCB with a different CB
EXEC UpdateCB 'AA01', 999

User A's update completes and no error message is returned, but User B's update overwrites it: User A has lost its update.
25. 2.2.b Types of Isolation Level (cont)
Repeatable Read Isolation Level:
This isolation level addresses all concurrency-related problems except phantom reads. Unlike Read Committed, it does not release the shared lock once the record is read. This stops other transactions from accessing the resource, avoiding lost updates and non-repeatable reads.
26. 2.2.b Types of Isolation Level (cont)

CREATE PROCEDURE dbo.UpdateCB
@CA NVarchar(100), @CB int
AS
SET TRANSACTION ISOLATION LEVEL
REPEATABLE READ
BEGIN TRAN
If Exists(Select 1 from dbo.tbl01
WHERE CA = @CA)
Begin
WAITFOR DELAY '00:00:05'
UPDATE dbo.tbl01
SET CB = @CB WHERE CA = @CA
Commit Tran
Return
End
Else
RAISERROR ('Data not exist', 16, 1);

--User A calls sp dbo.UpdateCB
EXEC UpdateCB 'AA01', 100
--A few seconds later, User B also calls sp UpdateCB with a different CB
EXEC UpdateCB 'AA01', 999

User B receives error 1205 from SQL Server: its connection is chosen as the deadlock victim.
27. --Step01: Create new table tbl02 and add column CC to tbl01
CREATE TABLE dbo.tbl02 (CB int, CC int DEFAULT(0))
ALTER TABLE dbo.tbl01 ADD CC bit DEFAULT(0) NOT NULL

--Step02: Create sp to insert data from tbl01 into tbl02 with condition CB > 300
Create PROCEDURE dbo.AddColCC
AS
BEGIN
BEGIN TRAN
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
INSERT INTO dbo.tbl02(CB, CC)
SELECT CB, 100 FROM dbo.tbl01
WHERE CC = 0 AND CB > 300
WAITFOR DELAY '00:00:05'
UPDATE tbl01 SET CC = 1 WHERE CC = 0 AND CB > 300
COMMIT TRAN
END

--Step03: User A calls sp AddColCC
exec AddColCC
--Step04: While User A waits, User B inserts data into tbl01 with CB > 300
insert into tbl01(CA, CB)
Values('Test REPEATABLE READ', 304)
28. Result of User A & B

tbl01:
ID  CA                    CB   CC
8   CC01                  301  1
9   CC02                  302  1
10  CC03                  303  1
11  Test REPEATABLE READ  304  1

tbl02:
CB   CC
301  100
302  100
303  100

In this case, we have a problem: phantom reads.
To avoid this problem, we need to use the highest isolation level, Serializable.
29. 2.2.b Types of Isolation Level (cont)
Serializable Isolation Level
This is the highest isolation level and it avoids all of the concurrency-related problems.
The behavior of this level is just like Repeatable Read, with one additional feature:
It obtains key-range locks based on the filters that have been used.
It locks not only the current records that satisfy the filter, but also new records that would fall into the same filter.
30. --Step01: Alter sp to insert data from tbl01 into tbl02 with condition CB > 300
Alter PROCEDURE dbo.AddColCC
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL Serializable
BEGIN TRAN
INSERT INTO dbo.tbl02(CB, CC)
SELECT CB, 100 FROM dbo.tbl01
WHERE CC = 0 AND CB > 300
WAITFOR DELAY '00:00:05'
UPDATE tbl01 SET CC = 1 WHERE CC = 0 AND CB > 300
COMMIT TRAN
END

--Step02: Bring tbl01 & tbl02 back to their original state
Update dbo.tbl01 set CC = 0
Delete FROM dbo.tbl01 where ID > 10
Delete FROM dbo.tbl02

--Step03: User A calls sp AddColCC
exec AddColCC
--Step04: While User A waits, User B inserts data into tbl01 with CB > 300
insert into tbl01(CA, CB)
Values('Test REPEATABLE READ', 304)
31. Result of User A & B

tbl01:
ID  CA                    CB   CC
8   CC01                  301  1
9   CC02                  302  1
10  CC03                  303  1
12  Test REPEATABLE READ  304  0

tbl02:
CB   CC
301  100
302  100
303  100

User B's connection is blocked until User A's connection completes the transaction, avoiding phantom reads.
32. 2.2.b Types of Isolation Level (cont)
The Snapshot Isolation Level works with row versioning technology. Whenever a transaction modifies a record, SQL Server first stores a consistent version of the record in tempdb.
If another transaction running under the Snapshot Isolation Level requires the same record, it can be taken from the version store.
This isolation level prevents all concurrency-related problems, just like the Serializable Isolation Level; in addition, it allows multiple updates to the same resource by different transactions concurrently.

ALTER DATABASE [DB Name]
SET ALLOW_SNAPSHOT_ISOLATION ON
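Whether row versioning is enabled for a database can be verified in sys.databases (this check query is our addition, not part of the slides):

```sql
-- Check both row-versioning settings for every user database
SELECT name,
       snapshot_isolation_state_desc,   -- ON after ALLOW_SNAPSHOT_ISOLATION
       is_read_committed_snapshot_on    -- 1 after READ_COMMITTED_SNAPSHOT
FROM sys.databases
WHERE database_id > 4;                  -- skip system databases
```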
33. --Step01: Enable Allow_Snapshot_Isolation on local DB_Test
ALTER DATABASE [DB_Test]
SET ALLOW_SNAPSHOT_ISOLATION ON

--Step02: Alter sp to insert data from tbl01 into tbl02 with condition CB > 300
Alter PROCEDURE dbo.AddColCC
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
INSERT INTO dbo.tbl02(CB, CC)
SELECT CB, 100 FROM dbo.tbl01
WHERE CC = 0 AND CB > 300
WAITFOR DELAY '00:00:05'
UPDATE tbl01 SET CC = 1 WHERE CC = 0 AND CB > 300
COMMIT TRAN
END

--Step03: Bring tbl01 & tbl02 back to their original state
Update dbo.tbl01 set CC = 0
Delete FROM dbo.tbl01 where ID > 10
Delete FROM dbo.tbl02

--Step04: User A calls sp AddColCC
exec AddColCC
--Step05: While User A waits, User B inserts data into tbl01 with CB > 300
insert into tbl01(CA, CB)
Values('ISOLATION LEVEL SNAPSHOT', 305)
34. Result of User A & B

tbl01:
ID  CA        CB   CC
8   CC01      301  1
9   CC02      302  1
10  CC03      303  1
13  SNAPSHOT  305  0

tbl02:
CB   CC
301  100
302  100
303  100

The result is the same as with the Serializable isolation level.
35. Example 2
0. User B: SET TRANSACTION ISOLATION LEVEL SNAPSHOT
1. User A: Begin tran
   User B: Begin tran
2. User A: Update dbo.tbl01 Set CA='SNAPSHOT-1' Where CB=303
   User B: Select ID, CA, CB From tbl01 Where CB=303
3. User A: Select ID, CA, CB From tbl01 Where CB=303
   User B: Update dbo.tbl01 Set CA='SNAPSHOT-2' Where CB=303
4. User A: Commit
   User B: Commit
36. Result of User A & B
2. User A: (1 row(s) affected)
   User B: returns data with CA='SNAPSHOT'
3. User A: returns data with CA='SNAPSHOT-1'
   User B: processing
4. User A: return message: Command(s) completed successfully.
   User B: return message: Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.tbl01' directly or indirectly in database 'DB_Test' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
37. 2.2.b Types of Isolation Level (cont)
Read Committed Snapshot is the new implementation of the Read Committed Isolation Level.
It has to be set not at the session/connection level but at the database level.
Read Committed Snapshot differs from Snapshot in two ways: unlike Snapshot, it always returns the latest consistent version, and it performs no conflict detection.

ALTER DATABASE [DBName] SET READ_COMMITTED_SNAPSHOT ON
38. Summary of Isolation Levels

Isolation Level          Dirty Reads  Lost Updates  Non-repeatable Reads  Phantom Reads  Conflict Detection
Read Uncommitted         Yes          Yes           Yes                   Yes            No
Read Committed           No           Yes           Yes                   Yes            No
Repeatable Read          No           No            No                    Yes            No
Serializable             No           No            No                    No             No
Snapshot                 No           No            No                    No             Yes
Read Committed Snapshot  No           Yes           Yes                   Yes            No
39. 3 Blocking in the system
1. What is Blocking in SQL Server?
2. Purpose of Blocking.
3. Detecting SQL Server Blocking
40. 3.1 What is Blocking in SQL Server?
Blocking in SQL Server is a scenario where one
connection to SQL Server locks one or more
records, and a second connection to SQL Server
requires a conflicting lock type on the record or
records locked by the first connection.
This causes the second connection to wait until
the first connection releases its locks.
By default, a connection will wait an unlimited
amount of time for the blocking lock to go away.
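A blocked connection and its blocker can be identified from the dynamic management views. The following sketch is our own query (not from the slides), listing each waiting request together with the session blocking it:

```sql
-- List blocked requests with their blocker, wait type, and time waited
SELECT r.session_id           AS blocked_session,
       r.blocking_session_id  AS blocker,
       r.wait_type,
       r.wait_time            AS wait_ms,
       t.text                 AS blocked_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```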
41. 3.2 Purpose of Blocking.
Blocking exists to prevent the concurrency phenomena discussed earlier: dirty reads, lost updates, non-repeatable reads, and phantom reads.
42. 3.3 Detecting SQL Server Blocking
a. What is Deadlock?
b. Detecting SQL Server Blocking.
43. 3.3.a What is Deadlock?
A deadlock occurs when two or more tasks
permanently block each other by each
task having a lock on a resource which the
other tasks are trying to lock.
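A deadlock can be reproduced with two sessions updating the same two rows in opposite order. This is a sketch of our own, reusing the deck's dbo.tbl01 table; the ID values are arbitrary:

```sql
-- Session 1:
BEGIN TRAN
UPDATE dbo.tbl01 SET CA = 'S1' WHERE ID = 3   -- step 1: lock row 3

-- Session 2:
BEGIN TRAN
UPDATE dbo.tbl01 SET CA = 'S2' WHERE ID = 5   -- step 2: lock row 5

-- Session 1:
UPDATE dbo.tbl01 SET CA = 'S1' WHERE ID = 5   -- step 3: waits on session 2

-- Session 2:
UPDATE dbo.tbl01 SET CA = 'S2' WHERE ID = 3   -- step 4: waits on session 1
-- Deadlock: SQL Server picks a victim, which receives error 1205 and has
-- its transaction rolled back; the surviving session can then COMMIT.
```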
44. 3.3.b Detecting SQL Server Blocking
1. User A: Begin tran
   User B: Begin tran
2. User A: Update dbo.tbl01 Set CA='SNAPSHOT-1' Where CB=303
   User B: Select ID, CA, CB From tbl01 Where CB=303
3. User A: Select ID, CA, CB From tbl01 Where CB=303
   User B: Update dbo.tbl01 Set CA='SNAPSHOT-2' Where CB=303
4. User A: --Commit
   User B: Commit

Tools for detecting blocking: Profiler Trace, Activity Monitor, Report Service.
45. SQL Server Profiler Trace
Enable the blocked process threshold on the instance:

sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure 'blocked process threshold'
GO
sp_configure 'blocked process threshold', 5 -- 5 is the blocked-process threshold in seconds
GO
RECONFIGURE
GO
46. SQL Server Profiler Trace
Use the SQL Server Profiler tool and connect to the database to monitor.
On the 'General' tab, set the trace name to 'CheckDeadlocks', then choose the 'Blank' template. Check the 'Save to File' checkbox and save the file in a preferred location.
47. SQL Server Profiler Trace (Cont)
Expand the ‘Errors and Warnings’ section and select the
‘Blocked Process Report’ Item.
48. SQL Server Profiler Trace (Cont)
After the trace is run the *.trc file can be viewed in SQL Server Profiler
or can be loaded into a database. It will show an XML view of what
query was being blocked and what query was doing the blocking.
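The saved trace file can also be loaded with T-SQL instead of opening it in Profiler. A sketch (the file path is a placeholder matching the save location chosen above):

```sql
-- Load the saved trace file into a queryable rowset
SELECT TextData, StartTime, EventClass
FROM sys.fn_trace_gettable('D:\CheckDeadlocks.trc', DEFAULT)
WHERE TextData IS NOT NULL;
```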
49. SQL Server Activity Monitor
This tool is a component of SQL Server Management Studio. It helps in getting information about user connections and the locks that may occur for different reasons. There are 3 pages in the Activity Monitor:
1 - Process Info Page - contains information about all connections.
2 - Locks by Process Page - contains information sorted by locks on the connections.
3 - Locks by Object Page - contains information about locks sorted by object.
Whenever a lock occurs in a database, the Activity Monitor is the best place to look in order to figure out the cause of the lock.
It's important to note that in order to view the Activity Monitor, the user needs the VIEW SERVER STATE permission on the SQL Server he/she is working on.
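Much of what Activity Monitor displays can also be queried directly (our own sketch; this likewise requires VIEW SERVER STATE):

```sql
-- Waiting tasks that are currently blocked by another session
SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;
```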