Microsoft SQL Server
Session : Steps Towards SQL Server Developer
Ahsan Kabir
SQL Server Development
 Introduction to Database
 Database Creation
 Architecture of Database Files and File group
 SQL Server I/O request
 Performance Consideration
 Disaster Recovery
 Explore System Databases
 Table
 View
 Cursor
 User defined function
 Trigger
 Locking
 Exception Handling
 Transaction Isolation
 Row Version
Database
A database is an organized collection of data: a collection of
schemas, tables, queries, reports, views, and other objects.
A database management system (DBMS) is a computer software application
that interacts with the user, other applications, and the database itself to
capture and analyze data. A general-purpose DBMS is designed to allow the
definition, creation, querying, update, and administration of databases. Well-
known DBMSs include MySQL, PostgreSQL, Microsoft SQL Server, Oracle,
Sybase and IBM DB2.
Gartner Magic Quadrant
Brainstorming Session
Database design Process
The cyclical process of designing a database, which includes
the following basic steps:
1. The requirement collection and analysis phase
2. The conceptual design phase
3. The logical design phase
4. The physical design phase
5. The implementation and loading phase
6. The testing and evaluation phase
These steps are repeated as needed to fine-tune the design.
Database Creation
Architecture of Database Files and
File group
File groups
for allocation and administration
Data files
Contain tables, indexes, or the text,
ntext, or image data
Log file
is used for Atomicity, Consistency,
Isolation, and Durability
SQL Server I/O request
SQL Server
I/O Request
I/O Manager
Device driver
Data is read from, or
written to, disk.
Performance Thinking
 Identify the large tables
 Identify Complex processes
 Identify heavily accessed table
 Identify Less accessed tables
 Put tables used in the same join queries in different filegroups
 Place the transaction log file or files on a separate physical disk
Explore system databases
Master Database
 Holds information about all other databases
 System logins, configuration settings
 Linked servers
Model
 Template database
 Place stored procedures, views…
Tempdb
 Global and local temporary tables, table-valued functions, and
temporary table indexes
Msdb
 Stores database backup history, SQL Agent information, DTS packages, SQL Server
jobs, and log shipping configuration
Table
Computed column
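As a minimal sketch of a computed column (the table and column names here are hypothetical, not from the deck), the column derives its value from other columns in the same row and can optionally be PERSISTED:

```sql
-- Hypothetical table: Total is computed from Qty and UnitPrice.
CREATE TABLE dbo.OrderLine
(
    OrderID   INT NOT NULL,
    Qty       INT NOT NULL,
    UnitPrice MONEY NOT NULL,
    Total AS (Qty * UnitPrice) PERSISTED  -- stored and kept up to date by SQL Server
)
```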
A key is a subset of columns in a table that allows a row to be uniquely identified, so a key can be more
than one column. Every row in the table has a unique value for the key, or a unique combination of
values if the key consists of more than one column. According to the SQL standard, a key is not allowed
to have NULL-able values.
A key that has more columns than necessary to uniquely identify each row in the table is called a super-
key (think of it as a super-set). If the key has the minimum number of columns necessary to uniquely
identify each row, it is a minimal super-key. A minimal super-key is also known as a candidate key, and
every table has one or more candidate keys.
PRIMARY KEY and UNIQUE KEY constraints enforce uniqueness of the values (i.e. avoid duplicate values) in
the column[s] on which they are defined. Either kind of key can uniquely identify each row in a table.
A foreign key identifies a column or group of columns in one (referencing) table that refers to a column
or group of columns in another (referenced) table – in our example above, the Employee table is the
referenced table and the Employee Salary table is the referencing table.
A foreign key can actually reference a key that is not the primary key of a table, but it must
reference a unique key. A foreign key can hold NULL values: because foreign keys can reference unique,
non-primary keys, which can hold NULL values, foreign keys can themselves hold NULL values as well.
A table can have multiple unique and foreign keys. However, a table can have only
one primary key.
Even though the SQL standard says that a key can not be NULL, in practice actual RDBMS
implementations (like SQL Server and Oracle), allow both foreign and unique keys to actually be NULL.
And there are plenty of times when that actually makes sense. However, a primary key can never be
NULL.
Key in SQL
Referential integrity is a relational database concept in which multiple tables share a relationship
based on the data stored in the tables, and that relationship must remain consistent.
The concept of referential integrity, and one way in which it’s enforced, is best illustrated by an
example. Suppose company X has 2 tables, an Employee table, and an Employee Salary table. In the
Employee table we have 2 columns – the employee ID and the employee name. In the Employee
Salary table, we have 2 columns – the employee ID and the salary for the given ID.
Now, suppose we wanted to remove an employee because he no longer works at company X. Then,
we would remove his entry in the Employee table. Because he also exists in the Employee Salary
table, we would also have to manually remove him from there also. Manually removing the
employee from the Employee Salary table can become quite a pain. And if there are other tables in
which Company X uses that employee then he would have to be deleted from those tables as well –
an even bigger pain.
By enforcing referential integrity, we can solve that problem, so that we wouldn’t have to manually
delete him from the Employee Salary table (or any others). Here’s how: first we would define the
employee ID column in the Employee table to be our primary key. Then, we would define the
employee ID column in the Employee Salary table to be a foreign key that points to a primary key
that is the employee ID column in the Employee table. Once we define our foreign to primary key
relationship, we would need to add what’s called a ‘constraint’ to the Employee Salary table. The
constraint that we would add in particular is called a ‘cascading delete’ – this would mean that any
time an employee is removed from the Employee table, any entries that employee has in the
Employee Salary table would also automatically be removed from the Employee Salary table.
Referential integrity
1.We may not add a record to the Employee Salary table unless the foreign key
for that record points to an existing employee in the Employee table.
2.If a record in the Employee table is deleted, all corresponding records in the
Employee Salary table must be deleted using a cascading delete. This was the
example we had given earlier.
3.If the primary key for a record in the Employee table changes, all
corresponding records in the Employee Salary table must be modified using
what's called a cascading update.
Referential integrity Rules
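The rules above can be sketched in T-SQL. The table and column names follow the Employee / Employee Salary example; this is an illustration, not the deck's original script:

```sql
CREATE TABLE dbo.Employee
(
    EmpID   INT PRIMARY KEY,
    EmpName VARCHAR(100) NOT NULL
)

CREATE TABLE dbo.EmployeeSalary
(
    EmpID  INT NOT NULL,
    Salary MONEY NOT NULL,
    CONSTRAINT FK_EmployeeSalary_Employee
        FOREIGN KEY (EmpID) REFERENCES dbo.Employee (EmpID)
        ON DELETE CASCADE   -- rule 2: deleting an employee removes his salary rows
        ON UPDATE CASCADE   -- rule 3: changing the key propagates to the referencing table
)
```

Rule 1 needs no extra clause: the FOREIGN KEY constraint itself rejects any salary row whose EmpID does not exist in dbo.Employee.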
Difference between PRIMARY KEY and UNIQUE KEY
NULL: A primary key can't accept NULL values (PRIMARY KEY = UNIQUE KEY + NOT NULL constraint). A unique key allows NULL, but only one NULL value.
INDEX: A primary key adds a clustered index by default. A unique key adds a unique non-clustered index by default.
LIMIT: A table can have only one primary key, but more than one unique key column[s].
CREATE SYNTAX
Below is the sample example for defining a single column as a PRIMARY
KEY column while creating a table:
CREATE TABLE dbo.Customer
(
Id INT NOT NULL PRIMARY KEY,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(50)
)
Below is the Sample example for defining multiple columns as PRIMARY
KEY. It also shows how we can give name for the PRIMARY KEY:
CREATE TABLE dbo.Customer
(
Id INT NOT NULL,
FirstName VARCHAR(100) NOT NULL,
LastName VARCHAR(100),
City VARCHAR(50),
CONSTRAINT PK_CUSTOMER PRIMARY KEY (Id,FirstName)
)
Below is the sample example for defining a single column as a
UNIQUE KEY column while creating a table:
CREATE TABLE dbo.Customer
(
Id INT NOT NULL UNIQUE,
FirstName VARCHAR(100),
LastName VARCHAR(100),
City VARCHAR(50)
)
Below is the Sample example for defining multiple columns as
UNIQUE KEY. It also shows how we can give name for the
UNIQUE KEY:
CREATE TABLE dbo.Customer
(
Id INT NOT NULL,
FirstName VARCHAR(100) NOT NULL,
LastName VARCHAR(100),
City VARCHAR(50),
CONSTRAINT UK_CUSTOMER UNIQUE (Id,FirstName)
)
ALTER SYNTAX Below is the syntax for adding a PRIMARY KEY constraint on a column
when the table is already created and doesn't have a primary key:
ALTER TABLE dbo.Customer
ADD CONSTRAINT PK_CUSTOMER PRIMARY KEY (Id)
Below is the syntax for adding a UNIQUE KEY constraint on a
column when the table is already created:
ALTER TABLE dbo.Customer
ADD CONSTRAINT UK_CUSTOMER UNIQUE (Id)
DROP SYNTAX Below is the syntax for dropping a PRIMARY KEY:
ALTER TABLE dbo.Customer
DROP CONSTRAINT PK_CUSTOMER
Below is the syntax for dropping a UNIQUE KEY:
ALTER TABLE dbo.Customer
DROP CONSTRAINT UK_CUSTOMER
Database design and performance
1. Choose Appropriate Data Type
Choose an appropriate SQL data type to store your data, since it helps improve query
performance. Example: to store strings, use varchar instead of the text data type, since
varchar performs better than text. Use the text data type only when you need to store
large text data (more than 8,000 characters); up to 8,000 characters can be stored in
varchar.
2. Avoid nchar and nvarchar
Avoid the nchar and nvarchar data types where possible, since both take double the
storage of char and varchar. Use nchar and nvarchar only when you need to store
Unicode (16-bit character) data such as Hindi or Chinese characters.
3. Avoid NULL in fixed-length field
Avoid inserting NULL values into fixed-length (char) fields, since NULL takes the same
space as any other value in that field. If NULL must be allowed, use a variable-length
(varchar) field, which takes less space for NULL.
Database design and performance
04.Avoid * in SELECT statement
Avoid * in SELECT statements, since SQL Server must convert the * to column names
before query execution. Instead of querying all columns with * in a SELECT statement,
name only the columns you require.
-- Avoid
SELECT * FROM tblName
--Best practice
SELECT col1,col2,col3 FROM tblName
05.Use EXISTS instead of IN
Use EXISTS to check existence instead of IN, since EXISTS can stop at the first match.
-- Avoid
SELECT Name,Price FROM tblProduct
WHERE ProductID IN (SELECT DISTINCT ProductID FROM tblOrder)
--Best practice
SELECT Name,Price FROM tblProduct p
WHERE EXISTS (SELECT 1 FROM tblOrder o WHERE o.ProductID = p.ProductID)
Database design and performance
06.Avoid Having Clause
Avoid the HAVING clause where a WHERE clause will do, since HAVING acts as a filter over
already-selected rows. HAVING is required only when you need to filter the result of an
aggregation; don't use it for any other purpose.
07.Create Clustered and Non-Clustered Indexes
Create clustered and non-clustered indexes, since indexes help data to be accessed
quickly. But be careful: more indexes on a table will slow INSERT, UPDATE, and DELETE
operations, so try to keep the number of indexes on a table small.
08.Keep clustered index small
Keep the clustered index as small as possible, since the fields used in the clustered index
are also stored in every non-clustered index, and the data in the table is stored in clustered
index order. Hence a large clustered index on a table with a large number of rows increases
size significantly. Please refer to the article Effective Clustered Indexes.
09.Avoid Cursors
Avoid cursors, since cursors are very slow. Always try to use a SQL Server cursor
alternative first. Please refer to the article Cursor Alternative.
10.Use Table variable in place of Temp table
Use a table variable in place of a temp table where possible, since temp tables reside in
the TempDB database, and interaction with TempDB is a comparatively time-consuming
task.
Database design and performance
11.Use UNION ALL in place of UNION
Use UNION ALL in place of UNION where duplicates are acceptable, since it is faster: it
does not sort the result set to remove duplicate values.
12.Use Schema name before SQL objects name
Use the schema name, followed by ".", before the SQL object name, since it helps SQL
Server find the object in a specific schema and so improves performance.
--Here dbo is schema name
SELECT col1,col2 from dbo.tblName
-- Avoid
SELECT col1,col2 from tblName
13.Keep Transaction small
Keep transactions as small as possible, since a transaction locks the tables it processes
for its entire life. Long transactions can sometimes result in deadlocks.
Database design and performance
14.SET NOCOUNT ON
Set NOCOUNT ON, since by default SQL Server returns the number of rows affected by every
SELECT, INSERT, UPDATE, and DELETE statement. We can stop this by setting NOCOUNT ON
like this:
CREATE PROCEDURE dbo.MyTestProc
AS
SET NOCOUNT ON
BEGIN
..
END
15.Use TRY-Catch
Use TRY-CATCH for handling errors in T-SQL statements. An error in a running transaction
that you have not handled with TRY-CATCH can cause a deadlock.
16.Use Stored Procedure for frequently used data and more complex queries
Create a stored procedure for any query that needs to access data frequently, and for
resolving more complex tasks.
17.Avoid prefix "sp_" with user defined stored procedure name
Avoid the prefix "sp_" in user-defined stored procedure names, since system-defined
stored procedure names start with "sp_". SQL Server first searches for such a procedure in
the master database and only then in the current database. This is time consuming and
may give unexpected results if a system-defined stored procedure has the same name as
your procedure.
OFFSET FETCH
OFFSET and FETCH NEXT arguments can be added to the SELECT statement's ORDER BY clause to let
you retrieve a fixed number of rows:
 OFFSET <EXPR1>
Specifies the number of rows to skip before rows start to be returned from
the query expression.
 FETCH NEXT <EXPR2> ROWS ONLY
Specifies the number of rows to return after the OFFSET clause has been processed.
Here's the syntax for a simple SELECT statement that uses these arguments:
SELECT * FROM <table>
ORDER BY <columns>
OFFSET <EXPR1> ROWS
FETCH NEXT <EXPR2> ROWS ONLY
Notes
 Pagination is now very easy: using OFFSET and FETCH is a bit faster than using ROW_NUMBER(),
TOP, and ORDER BY clauses
 FETCH can be used with either FIRST or NEXT:
 FETCH NEXT 6 ROWS ONLY or FETCH FIRST 6 ROWS ONLY can both be used
 OFFSET (40) ROWS, OFFSET (40) ROW, or OFFSET 40 ROWS can all be used
 ROW and ROWS are synonyms and are provided for ANSI compatibility.
Example of OFFSET FETCH
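As a sketch of a typical pagination query (the Employee table and page size are assumptions), page 3 of a result set with 10 rows per page can be fetched like this:

```sql
DECLARE @PageNumber INT = 3, @PageSize INT = 10

SELECT EmpID, EmpName
FROM dbo.Employee
ORDER BY EmpID                              -- ORDER BY is mandatory with OFFSET/FETCH
OFFSET (@PageNumber - 1) * @PageSize ROWS   -- skip the first two pages (20 rows)
FETCH NEXT @PageSize ROWS ONLY              -- return exactly one page
```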
Rules to use OFFSET FETCH :
1. ORDER BY is mandatory to use OFFSET and
FETCH clause.
2. OFFSET clause is mandatory with FETCH; you
can never write ORDER BY … FETCH without OFFSET.
3. TOP cannot be combined with OFFSET and
FETCH in the same query expression.
4. The OFFSET/FETCH rowcount expression can be
any arithmetic, constant, or parameter expression
that will return an integer value. The rowcount
expression does not support scalar sub-queries.
Local or Global Temporary
1. Local Temp Table
Local temp tables are only available to the SQL Server session or connection (i.e. a single user) that
created them. They are automatically deleted when the session that created them has been
closed. A local temporary table name starts with a single hash ("#") sign.
2. Global Temp Table
Global temp tables are available to all SQL Server sessions or connections (i.e. all users). They
can be created by any SQL Server connection and are automatically deleted when all the
SQL Server connections that reference them have been closed. A global temporary table name starts
with a double hash ("##") sign.
Global temporary tables are visible to all SQL Server connections, while local temporary tables are
visible only to the current SQL Server connection.
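A short sketch of both kinds (table names and data are illustrative):

```sql
-- Local temp table: visible only to this session, dropped when it ends.
CREATE TABLE #LocalTemp (ID INT, Name VARCHAR(50))
INSERT INTO #LocalTemp VALUES (1, 'Ahsan')

-- Global temp table: visible to every session, dropped when the last
-- session referencing it closes.
CREATE TABLE ##GlobalTemp (ID INT, Name VARCHAR(50))
INSERT INTO ##GlobalTemp VALUES (1, 'Ahsan')
```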
Common Table expressions (CTE)
A CTE is a temporary result set, typically the result of a complex sub-query. Unlike a
temporary table, its life is limited to the current query. It is defined using a WITH
statement. CTEs improve readability and ease the maintenance of complex queries and
sub-queries.
A sub query without CTE is given below :
SELECT * FROM
(SELECT Addr.Address, Emp.Name, Emp.Age From Address Addr Inner join Employee Emp on Emp.EID = Addr.EID
) Temp
WHERE Temp.Age > 50
ORDER BY Temp.NAME
By using CTE above query can be re-written as follows :
With CTE1(Address, Name, Age) --Column names for CTE, which are optional
AS
(SELECT Addr.Address, Emp.Name, Emp.Age from Address Addr INNER JOIN Employee Emp ON Emp.EID = Addr.EID
)
SELECT * FROM CTE1 --Using CTE
WHERE CTE1.Age > 50
ORDER BY CTE1.NAME
When to use CTE
01.This is used to store result of a complex sub query for further use.
02.This is also used to create a recursive query.
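A sketch of the recursive case mentioned above, assuming the Employee table has a ManagerID column (an assumption, not part of the earlier example):

```sql
WITH EmpHierarchy (EmpID, EmpName, Level)
AS
(
    -- Anchor member: top-level employees with no manager
    SELECT EmpID, EmpName, 0
    FROM dbo.Employee
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: joins back to the CTE, one level deeper each pass
    SELECT e.EmpID, e.EmpName, h.Level + 1
    FROM dbo.Employee e
    INNER JOIN EmpHierarchy h ON e.ManagerID = h.EmpID
)
SELECT * FROM EmpHierarchy
```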
Table Variable
This acts like a variable and exists for a particular batch of query execution. It gets dropped once it
comes out of batch. This is also created in the Tempdb database but not the memory. This also allows
you to create primary key, identity at the time of Table variable declaration but not non-clustered index.
GO
DECLARE @TProduct TABLE
(
SNo INT IDENTITY(1,1),
ProductID INT,
Qty INT
)
--Insert data to Table variable @Product
INSERT INTO @TProduct(ProductID,Qty)
SELECT DISTINCT ProductID, Qty FROM ProductsSales ORDER BY ProductID ASC
--Select data
Select * from @TProduct
--Next batch
GO
Select * from @TProduct --gives error in next batch
Notes:
1.Temp Tables are physically created in the Tempdb database. These tables act as the normal table and
also can have constraints, index like normal tables.
2.CTE is a named temporary result set which is used to manipulate the complex sub-queries data. This
exists for the scope of statement. This is created in memory rather than Tempdb database. You cannot
create any index on CTE.
3.Table Variable acts like a variable and exists for a particular batch of query execution. It gets dropped
once it comes out of batch. This is also created in the Tempdb database but not the memory.
File Tables
FileTables can be used for the storage and management of unstructured data that currently reside
as files on file servers. Another advantage is Windows application compatibility: existing Windows
applications can see this data as files in the file system. As a first step, you will
need to enable the FILESTREAM feature:
To enable and change FILESTREAM settings
1. On the Start menu>All Programs>SQL Server 2016 >
Configuration Tools>SQL Server Configuration Manager.
2. In services list, right-click SQL Server Services, and then click
Open.
3. In the SQL Server Configuration Manager snap-in, locate the
instance of SQL Server on which you want to enable
FILESTREAM.
4. Right-click the instance, and then click Properties.
5. In the SQL Server Properties dialog box, click the FILESTREAM
tab.
6. Select the Enable FILESTREAM for Transact-SQL access check
box.
7. If you want to read and write FILESTREAM data from Windows,
click Enable FILESTREAM for file I/O streaming access. Enter the
name of the Windows share in the Windows Share Name box.
8. If remote clients must access the FILESTREAM data that is
stored on this share, select Allow remote clients to have
streaming access to FILESTREAM data.
9. Click Apply
File Tables
Method 1: Copy Paste data into the FileTables folder
First, find the folder where FileTable will be storing the files.
Go to Databases >> FileStorage>> Expand Tables.
Now expanded file table, “FileTableTb”>> Right click on the newly created
table, and
click on “Explore FileTable Directory”.
Now open up the folder where the FileTable data will be stored.
Method 2: Using a SQL statement:
To create new files or directories using a T-SQL procedure, you
need to supply a file name and the filestream data. The constraints on the
table take care of the rest of the fields.
View
View is a
 Virtual table
 Not a temporary or physical table
 Does not occupy storage for data (only its definition is stored)
Used to
 Encapsulate/protect some important/sensitive column
Example
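A minimal sketch of the sensitive-column use case (the table and columns are assumed): the view exposes only non-sensitive columns, so readers never see Salary.

```sql
-- The base table holds Salary, which we do not want every reader to see.
CREATE VIEW dbo.vw_EmployeePublic
AS
SELECT EmpID, EmpName, Address   -- Salary is deliberately excluded
FROM dbo.Employee
GO
SELECT * FROM dbo.vw_EmployeePublic
```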
SQL JOINs
INNER JOIN
Returns only the rows that match
between the two tables specified
LEFT OUTER JOIN
Returns all rows from the left table,
with matching rows from the right table.
RIGHT OUTER JOIN
Returns all rows from the right table,
with matching rows from the left table.
FULL OUTER JOIN
Returns rows present in either of the two
tables.
SQL JOINs
A Cartesian product multiplies the number of rows in the left table by the number
of rows in the right table.
In a self join, the same table is specified twice with two different aliases in order to match data
within that table.
SELECT a.emp_id AS "Emp_ID", a.emp_name AS "Employee Name",
b.emp_id AS "Supervisor ID", b.emp_name AS "Supervisor Name"
FROM employee a, employee b
WHERE a.emp_supv = b.emp_id
tblEmployee Desired Output
Cursor
Cursors are required when records in a database table must be updated in singleton
fashion, i.e. row by row. A cursor impacts the performance of SQL Server, since it uses
the SQL Server instance's memory, reduces concurrency, decreases network bandwidth,
and locks resources.
Hence it is important to understand the cursor types and their behavior so that you can
use a suitable cursor according to your needs.
You should avoid the use of cursors. Prefer cursor alternatives such as WHILE loops,
sub-queries, temporary tables, and table variables. Use a cursor only when there is no
other option.
Cursor Example
In some contexts, business logic requires us to process data in sequence.
Type of Cursor
1.Static Cursors
A static cursor populates its result set at the time of cursor creation, and the query result is cached for the lifetime of the
cursor. A static cursor can move in both forward and backward directions. A static cursor is slower and uses more memory in
comparison to other cursors, so use it only if scrolling is required. No UPDATE, INSERT, or DELETE
operations are reflected in a static cursor (unless the cursor is closed and reopened). By default static cursors are
scrollable. SQL Server static cursors are always read-only.
2.Dynamic Cursors
A dynamic cursor allows you to see updates, deletions, and insertions in the data source while the cursor is
open. Hence a dynamic cursor is sensitive to any changes to the data source and supports update and delete operations.
By default dynamic cursors are scrollable.
3.Forward Only Cursors
A forward-only cursor is the fastest cursor type, but it doesn't support backward scrolling. You can
update and delete data using a forward-only cursor. It is sensitive to any changes to the original data source.
There are three further variants of forward-only cursors: FORWARD_ONLY KEYSET, FORWARD_ONLY STATIC, and
FAST_FORWARD.
A FORWARD_ONLY STATIC cursor is populated at the time of creation and caches the data for the cursor's lifetime. It is not sensitive to any
changes to the data source.
A FAST_FORWARD cursor is the fastest cursor and is not sensitive to any changes to the data source.
4. Keyset Driven Cursors
A keyset-driven cursor is controlled by a set of unique identifiers, the keys in the keyset. The keyset is built from all
the rows that qualified for the SELECT statement at the time the cursor was opened. A keyset-driven cursor is sensitive to
changes to the data source and supports update and delete operations. By default keyset-driven cursors are scrollable.
Examples of Cursors
CREATE TABLE Employee
(
EmpID int PRIMARY KEY,
EmpName varchar (50) NOT NULL,
Salary int NOT NULL,
Address varchar (200) NOT NULL,
)
GO
INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(1,'Mohan',12000,'Noida')
INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(2,'Pavan',25000,'Delhi')
INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(3,'Amit',22000,'Dehradun')
INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(4,'Sonu',22000,'Noida')
INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(5,'Deepak',28000,'Gurgaon')
GO
SELECT * FROM Employee
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE @salary int
DECLARE cur_emp CURSOR
STATIC FOR
SELECT EmpID,EmpName,Salary from Employee
OPEN cur_emp
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM cur_emp INTO @Id,@name,@salary
WHILE @@Fetch_status = 0
BEGIN
PRINT 'ID : '+ convert(varchar(20),@Id)+', Name : '+@name+ ', Salary :
'+convert(varchar(20),@salary)
FETCH NEXT FROM cur_emp INTO @Id,@name,@salary
END
END
CLOSE cur_emp
DEALLOCATE cur_emp
SET NOCOUNT OFF
Static Cursor - Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Dynamic_cur_empupdate CURSOR
DYNAMIC
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Dynamic_cur_empupdate
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Dynamic_cur_empupdate INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Mohan'
Update Employee SET Salary=15000 WHERE CURRENT OF Dynamic_cur_empupdate
FETCH NEXT FROM Dynamic_cur_empupdate INTO @Id,@name
END
END
CLOSE Dynamic_cur_empupdate
DEALLOCATE Dynamic_cur_empupdate
SET NOCOUNT OFF
Go
Select * from Employee
Dynamic Cursor - Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Dynamic_cur_empdelete CURSOR
DYNAMIC
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Dynamic_cur_empdelete
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Dynamic_cur_empdelete INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Deepak'
DELETE Employee WHERE CURRENT OF Dynamic_cur_empdelete
FETCH NEXT FROM Dynamic_cur_empdelete INTO @Id,@name
END
END
CLOSE Dynamic_cur_empdelete
DEALLOCATE Dynamic_cur_empdelete
SET NOCOUNT OFF
Go
Select * from Employee
Dynamic Cursor for Delete - Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Forward_cur_empupdate CURSOR
FORWARD_ONLY
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Forward_cur_empupdate
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Forward_cur_empupdate INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Amit'
Update Employee SET Salary=24000 WHERE CURRENT OF Forward_cur_empupdate
FETCH NEXT FROM Forward_cur_empupdate INTO @Id,@name
END
END
CLOSE Forward_cur_empupdate
DEALLOCATE Forward_cur_empupdate
SET NOCOUNT OFF
Go
Select * from Employee
Forward Only Cursor - Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Forward_cur_empdelete CURSOR
FORWARD_ONLY
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Forward_cur_empdelete
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Forward_cur_empdelete INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Sonu'
DELETE Employee WHERE CURRENT OF Forward_cur_empdelete
FETCH NEXT FROM Forward_cur_empdelete INTO @Id,@name
END
END
CLOSE Forward_cur_empdelete
DEALLOCATE Forward_cur_empdelete
SET NOCOUNT OFF
Go
Select * from Employee
Forward Only Cursor for Delete- Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Keyset_cur_empupdate CURSOR
KEYSET
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Keyset_cur_empupdate
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Keyset_cur_empupdate INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Pavan'
Update Employee SET Salary=27000 WHERE CURRENT OF Keyset_cur_empupdate
FETCH NEXT FROM Keyset_cur_empupdate INTO @Id,@name
END
END
CLOSE Keyset_cur_empupdate
DEALLOCATE Keyset_cur_empupdate
SET NOCOUNT OFF
Go
Select * from Employee
Keyset Driven Cursor - Example
SET NOCOUNT ON
DECLARE @Id int
DECLARE @name varchar(50)
DECLARE Keyset_cur_empdelete CURSOR
KEYSET
FOR
SELECT EmpID,EmpName from Employee ORDER BY EmpName
OPEN Keyset_cur_empdelete
IF @@CURSOR_ROWS > 0
BEGIN
FETCH NEXT FROM Keyset_cur_empdelete INTO @Id,@name
WHILE @@Fetch_status = 0
BEGIN
IF @name='Amit'
DELETE Employee WHERE CURRENT OF Keyset_cur_empdelete
FETCH NEXT FROM Keyset_cur_empdelete INTO @Id,@name
END
END
CLOSE Keyset_cur_empdelete
DEALLOCATE Keyset_cur_empdelete
SET NOCOUNT OFF
Go Select * from Employee
Keyset Driven Cursor for Delete - Example
User Defined Functions
There are three types of user-defined functions:
1. Scalar
2. Inline Table-Valued and
3. Multi-statement Table-valued.
A Scalar UDF can accept 0 to many input parameters and will return a single
value of one of the scalar (int, char, varchar etc) data types. The text, ntext,
image and timestamp data types are not supported.
Scalar Functions
The function which returns a Scalar/Single value. A Scalar user-defined function
returns one of the scalar data types. Text, ntext, image and timestamp data types are
not supported.
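A minimal scalar UDF sketch (the function name and its logic are illustrative):

```sql
CREATE FUNCTION dbo.fn_YearlySalary (@MonthlySalary INT)
RETURNS INT
AS
BEGIN
    RETURN @MonthlySalary * 12   -- a single scalar value is returned
END
GO
-- Scalar UDFs must be invoked with their schema name:
SELECT dbo.fn_YearlySalary(12000) AS YearlySalary
```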
Inline Table-Valued
The function which returns a row set of the SQL Server table data type. An inline table-valued user-
defined function returns a table data type and is an excellent alternative to a view, since the
user-defined function can accept parameters in a T-SQL SELECT command and in essence
provides a parameterized, non-updateable view of the underlying tables.
Create table Employee(ID int , Name varchar(50))
Insert into Employee values (1,'Ferdous')
Insert into Employee values (2,'Tanvir')
Insert into Employee values (3,'Rechard')
Insert into Employee values (4,'Anil')
CREATE FUNCTION EmployeeNameById
(@ID int)
RETURNS
TABLE
AS
RETURN
SELECT * from Employee where id=@ID
GO
---Execution
SELECT * FROM dbo.EmployeeNameById(4)
Multi-Statement Table-Valued
A Multi-Statement Table-Valued user-defined function returns a table. It can have one or more than one
T-Sql statement. Within the create function command you must define the table structure that is being
returned. After creating this type of user-defined function, we can use it in the FROM clause of a T-SQL
command unlike the behavior found when using a stored procedure which can also return record sets.
CREATE FUNCTION GetAuthorsByState
( @state char(2) )
RETURNS
@AuthorsByState table (
au_id Varchar(11),
au_fname Varchar(20)
)
AS
BEGIN
INSERT INTO @AuthorsByState
SELECT au_id,
au_fname
FROM Authors
WHERE state = @state
IF @@ROWCOUNT = 0
BEGIN
INSERT INTO @AuthorsByState
VALUES ('','No Authors Found')
END
RETURN
END
GO
SQL Server treats an inline table valued function more like it would do for a view and treats a multi-
statement table valued function similar to how it would a stored procedure.
When an inline table-valued function is used as part of an outer query, the query processor expands
the UDF definition and generates an execution plan that accesses the underlying objects, using the
indexes on these objects.
For a multi-statement table-valued function, an execution plan is created for the function itself and stored in
the execution plan cache (once the function has been executed the first time). If multi-statement table-
valued functions are used as part of larger queries, the optimiser does not know what the function
returns, and so makes some standard assumptions: in effect it assumes that the function will return a
single row, and that the rows returned will be accessed by a table scan against a table
with a single row.
Inline and Multi-Statement Table Valued Function
Performance Comparison
A CASE expression compares an expression to a set of simple expressions to find a result. The input
expression is compared to the expression in each WHEN clause for equivalency. If the expression within a WHEN
clause matches, the expression in the corresponding THEN clause is returned.
With this function you can replace a column value with a different value based on the original column
value. An example of where this function might come in handy is where you have a table that contains a
column named EmploymentType, where 0 stands for Permanent , 1 for Contractual , etc., and you want
to return the value " Permanent " when the column value is 0, or " Contractual " when the column value
is 1, etc.
The CASE function allows you to evaluate a column value on a row against multiple criteria, where each
criterion might return a different value. The first criterion that evaluates to true will be the value
returned by the CASE function. Microsoft SQL Server Books Online documents two different formats for
the CASE function. The "Simple Syntax" looks like this:
CASE expression
WHEN expression1 THEN Result1
WHEN expression2 THEN Result2
ELSE ResultN
END
Using the CASE
Example 1 :
DECLARE @intInput INT
SET @intInput = 2
SELECT
CASE(@intInput)
WHEN 1 THEN 'One'
WHEN 2 THEN 'Two'
WHEN 3 THEN 'Three'
ELSE 'Your message.'
END
Example 2 :
select top 5 title,
case
when price < 3.00 then 'Really Cheap'
when price < 12.00 then 'Cheap'
when price < 20.00 then 'Average'
else 'Expensive' end as 'Price Category'
from pubs.dbo.titles
PIVOT is one of the newer relational operators.
It provides an easy mechanism in SQL
Server to turn rows into columns, which is
useful for crosstab queries.
UNPIVOT is the reversal of the PIVOT
operation: it provides a mechanism
for transforming columns into rows.
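A minimal sketch of the PIVOT syntax; the SalesByQuarter table and its columns here are hypothetical, used only to illustrate the shape of the operator:

```sql
-- Hypothetical table: SalesByQuarter(Year INT, Quarter INT, Amount MONEY)
-- Turns one row per (Year, Quarter) into one row per Year with a column per quarter
SELECT [Year], [1] AS Q1, [2] AS Q2, [3] AS Q3, [4] AS Q4
FROM (SELECT [Year], Quarter, Amount
      FROM SalesByQuarter) AS src
PIVOT (SUM(Amount) FOR Quarter IN ([1], [2], [3], [4])) AS p;
```

UNPIVOT uses the same overall shape, listing the columns to fold back into rows.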
Index
 Quickly Retrieve data
 without reading the whole table.
 Using single or group of columns
 ROWID is created for each row
 Selection of fields depends on what
you are using in your SQL queries.
Indexes
Clustered
• Data is physically stored in sorted order
• That order is maintained on insert, update and
delete through a balanced tree (B-tree)
NonClustered
• Independent of the physical sort
order.
• If there is no clustered index, the data
rows are stored in an unordered
structure called a heap
Clustered Tables vs Heap Tables
Heap Tables
• If a table has no indexes, or only has non-clustered indexes, it is called a
heap.
An age-old question is whether or not a table must have a clustered
index. The answer is no, but in most cases it is a good idea to have a
clustered index on the table to store the data in a specific order.
Clustered Tables
• As the name suggests, these tables have a clustered index. Data is stored
in a specific order based on the clustered index key.
Clustered Tables vs Heap Tables
HEAP
• Data is not stored in any particular
order
• Specific data can not be retrieved
quickly, unless there are also non-
clustered indexes.
• Data pages are not linked, so
sequential access needs to refer back
to the index allocation map (IAM)
pages
• Since there is no clustered index,
additional time is not needed to
maintain the index
• Since there is no clustered index, there
is not the need for additional space to
store the clustered index tree
• These tables have an index_id value of 0
in the sys.indexes catalog view
Clustered Index
• The top-most node of this tree is called
the "root node"
• The bottom level of the nodes is called
"leaf nodes"
• Any index level between the root node
and leaf node is called an "intermediate
level"
• The leaf nodes contain the data pages of
the table in the case of a clustered index.
• The root and intermediate nodes
contain index pages holding an index
row.
• Each index row contains a key value and
pointer to intermediate level pages of
the B-tree or leaf level of the index.
• The pages in each level of the index are
linked in a doubly-linked list.
Non-clustered Index
• Index Leaf Nodes and Corresponding Table Data
• Each index entry consists of the
indexed columns (the key,
column 2) and refers to the
corresponding table row
(via ROWID or RID).
• Unlike the index, the table data is
stored in a heap structure and is
not sorted at all.
• There is neither a relationship
between the rows stored in the
same table block nor is there any
connection between the blocks.
PRIMARY KEY AS A CLUSTERED INDEX
• Primary key: a constraint to enforce uniqueness in a table. The primary key columns
cannot hold NULL values.
• In SQL Server, when you create a primary key on a table, if the table does not
already have a clustered index and you do not specify NONCLUSTERED, a unique
clustered index is created to enforce the constraint.
• However, there is no guarantee that this is the best choice for a clustered index for
that table.
• Make sure you are carefully considering this in your indexing strategy.
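One way to act on that advice is to declare the primary key as NONCLUSTERED and put the clustered index on a column that better fits the query pattern. The Orders table and index names below are hypothetical, a sketch only:

```sql
CREATE TABLE Orders
(
    OrderID   INT      NOT NULL,
    OrderDate DATETIME NOT NULL,
    -- Keep the uniqueness constraint, but not as the clustered index
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

-- Cluster instead on the column most queries range-scan over
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON Orders (OrderDate);
```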
A clustered index determines the order in which the rows of a table are stored on disk. If a table has a
clustered index, the rows of that table are stored on disk in exactly the same order as the
clustered index, so a query that seeks on the clustered key runs much faster than it would if the rows
were stored in some random order on disk.
Example :
Suppose we have a table named Employee with a column named EmployeeID, and we create
a clustered index on the EmployeeID column. What happens when we create this clustered index?
All of the rows in the Employee table are physically sorted on the actual disk by the
values in the EmployeeID column. What does this accomplish? Whenever a
lookup or search for a sequence of EmployeeIDs is done using that clustered index, the lookup will
be much faster, because the sequence of employee IDs is physically stored right next
to each other on disk; that is the advantage of the clustered index. The rows in the
table are sorted in exactly the same order as the clustered index, and the actual table data is stored in
the leaf nodes of the clustered index.
An index is usually a tree data structure, and leaf nodes are the nodes at the very bottom of that
tree. In other words, a clustered index basically contains the actual table-level data in the index itself.
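Creating such a clustered index is a one-line statement; the table and column names here follow the hypothetical Employee example above:

```sql
-- Physically orders the Employee rows on disk by EmployeeID
CREATE CLUSTERED INDEX IX_Employee_EmployeeID
    ON Employee (EmployeeID);
```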
SQL Server index
When a query is issued against an indexed
column, the query engine starts at the root
node and navigates down through the
intermediate nodes,
For example, if you’re searching for the value
123 in an indexed column, the query engine
would first look in the root level to determine
which page to reference in the top
intermediate level. The leaf node will contain
either the entire row of data or a pointer to
that row, depending on whether the index is
clustered or non-clustered.
Clustered index vs Non-Clustered Index
Due to the fact that if a given row has a value updated in one of its (clustered) indexed columns, the
database will typically have to move the entire row so that the table continues to be
sorted in the same order as the clustered index column, clustered indexes are usually created on
primary keys or foreign keys, because those values are less likely to change once they
are already part of a table.
A non-clustered index will store both the value of the EmployeeID AND a pointer to the row in the
Employee table where that value is actually stored. But a clustered index, on the other hand, will
actually store the row data for a particular EmployeeID – so if you are running a query that looks for an
EmployeeID of 15, the data from other columns in the table like EmployeeName, EmployeeAddress, etc.
will all actually be stored in the leaf node of the clustered index itself.
This means that with a non-clustered index extra work is required to follow that pointer to the row in
the table to retrieve any other desired values, as opposed to a clustered index which can just access the
row directly since it is being stored in the same order as the clustered index itself. So, reading from a
clustered index is generally faster than reading from a non-clustered index.
A table can have multiple non-clustered indexes because they don’t affect the order in which the rows
are stored on disk like clustered indexes.
Clustered index:
• Leaf level is the actual data page
• Non-leaf levels contain the index key columns
• Clustered index scan = table scan on a heap
Non-clustered index:
• Leaf level contains the key and included columns
• Non-leaf levels contain the index key columns
Summary of the differences:
 A clustered index determines the order in which the rows of the table will be stored
on disk, and it actually stores row-level data in the leaf nodes of the index itself. A
non-clustered index has no effect on the order in which the rows are stored.
 Using a clustered index is an advantage when groups of data that can be clustered
are frequently accessed by some queries. This speeds up retrieval because the data
lives close to each other on disk. Also, if data is accessed in the same order as the
clustered index, the retrieval will be much faster because the physical data stored on
disk is sorted in the same order as the index.
 A clustered index can be a disadvantage because any time a change is made to a
value of an indexed column, the subsequent possibility of re-sorting rows to
maintain order is a definite performance hit.
 A table can have multiple non-clustered indexes. But, a table can have only one
clustered index.
 Non-clustered indexes store both a value and a pointer to the actual row that holds
that value. Clustered indexes don’t need to store a pointer to the actual row,
because the rows in the table are stored on disk in exactly the same
order as the clustered index, and the clustered index actually stores the row-level
data in its leaf nodes.
Tuning SQL Indexes for better performance
Don’t use too many indexes
As you know, indexes can take up a lot of space. So, having too many indexes can actually be damaging to your
performance because of the space impact. For example, if you try to do an UPDATE or an INSERT on a table that
has too many indexes, then there could be a big hit on performance due to the fact that all of the indexes will
have to be updated as well. A general rule of thumb is to not create more than 3 or 4 indexes on a table.
Try not to include columns that are repeatedly updated in an index:
If you create an index on a column that is updated very often, then that means that every time the column is
updated, the index will have to be updated as well. This is done by the DBMS, of course, so that the index stays
current and consistent with the columns that belong to that index. So, the number of ‘writes’ is increased two-
fold – one time to update the column itself and another to update the index as well. So, you might want to
consider avoiding the inclusion of columns that are frequently updated in your index.
Creating indexes on foreign key column(s) can improve performance:
Because joins are often done between primary and foreign key pairs, having an index on a foreign key column can
really improve the join performance. Not only that, but the index allows some optimizers to use other methods of
joining tables as well.
Create indexes for columns that are repeatedly used in predicates of your SQL queries:
Take a look at your queries and see which columns are used frequently in the WHERE predicate. If those columns
are not part of an index already, then you should add them to an index. This is of course because an index on
columns that are repeatedly used in predicates will help speed up your queries.
Consider dropping indexes when loading huge amounts of data into a table
If you are loading a huge amount of data into a table, you might want to think about dropping some of the
indexes on the table first. Then, after the data is loaded into the table, you can recreate the indexes. The reason
you would want to do this is that the indexes will not have to be updated during the load, which could save you a
lot of time!
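A sketch of this drop-and-recreate pattern; the table and index names are hypothetical:

```sql
-- Drop the index before the bulk load
DROP INDEX IX_BigTable_Col1 ON dbo.BigTable;

-- ... perform the bulk load here (e.g. BULK INSERT) ...

-- Recreate the index once the data is in place
CREATE NONCLUSTERED INDEX IX_BigTable_Col1
    ON dbo.BigTable (Col1);
```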
Stored Procedure
 A named set of T-SQL code
 A pre-compiled object: the execution plan is compiled once
 Subsequent calls execute the compiled plan
 Can provide security through encryption (WITH ENCRYPTION)
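A minimal stored procedure sketch, reusing the Employee table created in the earlier user-defined function example:

```sql
CREATE PROCEDURE dbo.GetEmployeeById
    @ID INT
AS
BEGIN
    SET NOCOUNT ON;   -- suppress the "rows affected" message
    SELECT ID, Name
    FROM Employee
    WHERE ID = @ID;
END
GO
-- Execution
EXEC dbo.GetEmployeeById @ID = 2;
```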
Trigger
• Specialized stored procedure
• Executed on INSERT, UPDATE, or DELETE.
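A minimal AFTER INSERT trigger sketch on the Employee table from earlier; the EmployeeAudit table used here is hypothetical:

```sql
CREATE TRIGGER trg_Employee_Insert
ON Employee
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- "inserted" is the pseudo-table holding the newly inserted rows
    INSERT INTO EmployeeAudit (ID, Name, AuditDate)
    SELECT ID, Name, GETDATE()
    FROM inserted;
END
GO
```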
System Catalogs
The SQL Server system catalogs is a set of views that show metadata that describes the objects in an
instance of SQL Server. Metadata is data that describes the attributes of objects in a system. SQL
Server-based applications can access the information in the system catalogs by using Information
Schema( views to quickly retrieve metadata )
and Catalog Views, recommended .
Catalog views can be used to get information like objects, logins permissions etc used by SQL server
database engine. Rather than accessing the system tables directly, catalog views can be used. Catalog
views don’t contain information about replication backup etc.
o Which table does a particular column belong to?
o Which stored procedures affect a particular table?
o Which constraints does a particular table have?
o Which columns are a table’s foreign keys linked to?
Information Schema Views:
They present the catalog information in a format that is independent of any catalog table
implementation and therefore are not affected by changes in the underlying catalog tables.
Catalog Views: provide access to metadata that is stored in every database on the server.
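Each of the questions above can be answered with a short catalog-view query; the table and column names used here (Employee, EmployeeID) are illustrative:

```sql
-- Which table does a particular column belong to?
SELECT OBJECT_NAME(object_id) AS TableName
FROM sys.columns
WHERE name = 'EmployeeID';

-- Which stored procedures (or other modules) reference a table?
SELECT OBJECT_NAME(referencing_id) AS ReferencingObject
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = 'Employee';

-- Which constraints does a table have?
SELECT name, type_desc
FROM sys.objects
WHERE parent_object_id = OBJECT_ID('Employee')
  AND type IN ('PK', 'F', 'UQ', 'C', 'D');
```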
A subquery is a query within a query that returns a
result that is expected at the place where the
subquery is placed.
Query optimization is outside the scope of this
article, but I do want to mention this. You can see
I/O statistics by running the following statement in
your query window:
SET STATISTICS IO ON
Then turn on the option 'Include Actual Execution Plan'
under the 'Query' menu option
and run your query again.
The 'Execution plan' tab shows exactly what steps
SQL Server had to perform to get to your result.
Read it from right to left. Fewer steps is not always
better; some steps require more work from SQL
Server than others.
Execution Plan
IN, ANY, SOME, ALL and EXISTS
The ANY operator works much like the IN operator, except that you can use the >, <, >=, <=, =
and <> operators to compare values. ANY returns true if at least one value returned by the
subquery makes the predicate true. So the following query returns all persons except the one
with BusinessEntityID 1, because 1 > 1 returns FALSE.
SELECT *
FROM Person.Person
WHERE BusinessEntityID > ANY (SELECT 1)
Instead of ANY you can use SOME, which has the same meaning.
DECLARE @OrderDate AS DATETIME = '20050517'
DECLARE @Status AS TINYINT = 4
IF @Status > SOME(SELECT Status
FROM Purchasing.PurchaseOrderHeader
WHERE OrderDate = @OrderDate)
PRINT 'Not all orders have the specified status!'
ELSE
PRINT 'All orders have the specified status.'
ALL and Exists
Unlike ANY, ALL looks at all results returned by a subquery and only returns TRUE if the comparison with every result makes the
predicate true.
DECLARE @OrderDate AS DATETIME = '20050517'
DECLARE @Status AS TINYINT = 4
IF @Status < ALL(SELECT Status
FROM Purchasing.PurchaseOrderHeader
WHERE OrderDate = @OrderDate)
PRINT 'All orders have the specified status.'
ELSE
PRINT 'Not all orders have the specified status!'
EXISTS can be used like ANY and ALL, but returns true only if at least one record was returned by the subquery.
SELECT *
FROM Sales.Customer AS c
WHERE EXISTS(SELECT *
FROM Sales.SalesOrderHeader AS s
WHERE s.CustomerID = c.CustomerID)
The EXISTS function only returns TRUE or FALSE and no columns. For that reason it does not matter what you put in
the subquery's SELECT list.
Querying from subqueries; Derived tables
When we use subqueries in our FROM clause the result is called a derived table. A
derived table is a named table expression and, like a subquery, is only visible to its outer
query. It differs from subqueries in that they return a complete table result.
SELECT *
FROM (SELECT
SalesOrderID,
SalesOrderNumber,
CustomerID,
AVG(SubTotal) OVER(PARTITION BY CustomerID) AS AvgSubTotal
FROM Sales.SalesOrderHeader) AS d
WHERE AvgSubTotal > 100
ORDER BY AvgSubTotal, CustomerID, SalesOrderNumber
The result of a subquery needs to be relational. That means every column it returns must
have a name. AVG(SubTotal)... would not have a name, so we MUST alias it. We must also
alias the derived table itself.
Note that SQL Server must sort the data before it can check which rows should and should not be
returned, and once the data is sorted SQL Server does not unsort it before returning the result. In
this case a sort is not necessary because the entire table needs to be returned anyway.
CROSS APPLY
The CROSS APPLY operator works like an INNER JOIN in that it matches rows from two tables and leaves out rows that
were not matched by the other table. We can use multiple APPLY operators in a single query.
Select all Persons that have a SalesOrder and show some order information for the most expensive order that Person
has made.
SELECT
p.BusinessEntityID,
p.FirstName,
p.LastName,
a.*
FROM Person.Person AS p
CROSS APPLY (SELECT TOP 1
s.SalesOrderID,
s.CustomerID,
s.SubTotal
FROM Sales.SalesOrderHeader AS s
JOIN Sales.Customer AS c ON c.CustomerID = s.CustomerID
WHERE c.PersonID = p.BusinessEntityID
ORDER BY s.SubTotal DESC) AS a
ORDER BY p.BusinessEntityID
CROSS APPLY operator takes a table expression as input parameter and simply joins the result with each row of the
outer query.
OUTER APPLY
OUTER APPLY works in much the same way as CROSS APPLY, with the exception that it also returns outer
rows for which no corresponding row was returned by the applied table expression.
Persons that have not placed an order are now also returned in the result set.
SELECT
p.BusinessEntityID,
p.FirstName,
p.LastName,
a.*
FROM Person.Person AS p
OUTER APPLY (SELECT TOP 3
s.SalesOrderID,
s.CustomerID,
s.SubTotal
FROM Sales.SalesOrderHeader AS s
JOIN Sales.Customer AS c ON c.CustomerID = s.CustomerID
WHERE c.PersonID = p.BusinessEntityID
ORDER BY s.SubTotal DESC) AS a
ORDER BY p.BusinessEntityID
PARSE
Parsing is a special kind of cast which always converts a VARCHAR value into another data type. In SQL
Server we can use the PARSE or TRY_PARSE function, which takes as parameters a VARCHAR value, a
data type and an optional culture code specifying the culture in which the value is formatted. We can,
for example, parse a VARCHAR value that represents a date formatted to Dutch standards into a
DATETIME value.
SELECT PARSE('12-31-2013' AS DATETIME2 USING 'en-US') AS USDate,
PARSE('31-12-2013' AS DATETIME2 USING 'nl-NL') AS DutchDate
FORMAT
The FORMAT function does not really provide a means to convert between datatypes. Instead it
provides a way to output data in a given format.
SELECT
SalesOrderID,
FORMAT(SalesOrderID, 'SO0') AS SalesOrderNumber,
CustomerID,
FORMAT(CustomerID, '0.00') AS CustomerIDAsDecimal,
OrderDate,
FORMAT(OrderDate, 'dd-MM-yy') AS FormattedOrderDate
FROM Sales.SalesOrderHeader
REPLACE REVERSE STUFF
REPLACE you can replace a character or a substring of a string with another character or string. With
STUFF you can replace a part of a string based on index. With REVERSE you can, of course, reverse a
string. In the following example we revert the SalesOrderNumber, we replace the 'SO' in the
SalesOrderNumber with 'SALE', and we replace the first two characters of the PurchaseOrderNumber
with 'PURC'.
SELECT
SalesOrderNumber,
REVERSE(SalesOrderNumber) AS ReversedOrderNumber,
REPLACE(SalesOrderNumber, 'SO', 'SALE') AS NewOrderFormat,
PurchaseOrderNumber,
STUFF(PurchaseOrderNumber, 1, 2, 'PURC') AS NewPurchaseFormat
FROM Sales.SalesOrderHeader
IIF
With IIF you can test a predicate and specify a value to return if it evaluates to true and another if it evaluates to false.
SELECT
BusinessEntityID,
CASE
WHEN Title IS NULL THEN 'No title'
ELSE Title
END AS TitleCase,
IIF(Title IS NULL, 'No title', Title) AS TitleIIF,
FirstName,
LastName
FROM Person.Person
COALESCE, ISNULL and NULLIF
With COALESCE we can specify a range of values and the first value that is not NULL is returned. It can actually make
our IIF that checks for a NULL from the previous section even shorter.
SELECT
BusinessEntityID,
COALESCE(Title, 'No title'),
FirstName,
LastName
FROM Person.Person
COALESCE returns NULL if all values that were passed to it are NULL. ISNULL does the same as COALESCE, but with
some differences. The first difference is that ISNULL can only take two values: if the first value is NULL it returns
the second value (which may also be NULL). The second difference is that ISNULL returns the data type of its first
argument, whereas COALESCE returns the highest-precedence data type of its arguments; in the variable example
below, ISNULL therefore truncates 'Hello' to the VARCHAR(4) of @first.
SELECT
BusinessEntityID,
ISNULL(Title, 'No title'),
FirstName,
LastName
FROM Person.Person
DECLARE @first AS VARCHAR(4) = NULL
DECLARE @second AS VARCHAR(5) = 'Hello'
SELECT
COALESCE(@first, @second) AS [Coalesce],
ISNULL(@first, @second) AS [IsNull]
Exception Handling TRY..CATCH
SQL Server also has an exception model to handle exceptions and errors that occur in T-SQL
statements. To handle exceptions in SQL Server we have TRY..CATCH blocks. We put T-SQL statements in
the TRY block and write the exception-handling code in the CATCH block. If there is an error in the code within the TRY
block, control automatically jumps to the corresponding CATCH block. In SQL Server,
each TRY block can have only one CATCH block.
ERROR_NUMBER(): The number assigned to the error.
ERROR_LINE(): The line number inside the routine that caused the error.
ERROR_MESSAGE():
The error message text, which includes the values supplied for any substitutable
parameters, such as times or object names.
ERROR_SEVERITY(): The error’s severity.
ERROR_STATE(): The error’s state number.
ERROR_PROCEDURE(): The name of the stored procedure or trigger that generated the error.
BEGIN TRY
SELECT [Second] = 1/0
END TRY
BEGIN CATCH
SELECT [Error_Line] = ERROR_LINE(), [Error_Number] = ERROR_NUMBER(),
[Error_Severity] = ERROR_SEVERITY(), [Error_State] = ERROR_STATE()
SELECT [Error_Message] = ERROR_MESSAGE()
END CATCH
THROW
The role of the TRY statement is to capture the exception. If an exception occurs within the TRY block, the
part of the system called the exception handler delivers the exception to the other part of the program,
which will handle the exception. This program part is denoted by the keyword CATCH and is therefore
called the CATCH block.
THROW. This statement allows you to rethrow an exception caught in the exception handling block. Simply
stated, the THROW statement is another return mechanism, which behaves similarly to the already
described RAISERROR statement.
IF OBJECT_ID('tempdb..#TestRethrow') IS NOT NULL DROP TABLE #TestRethrow;
CREATE TABLE #TestRethrow
( ID INT PRIMARY KEY
);
BEGIN TRY
INSERT #TestRethrow(ID) VALUES(1);
-- Force error 2627, Violation of PRIMARY KEY constraint to be raised.
INSERT #TestRethrow(ID) VALUES(1);
END TRY
BEGIN CATCH
Declare @Errormessage nvarchar(100)
SELECT @Errormessage = CONCAT('Error ', ERROR_NUMBER(), ' at line ', ERROR_LINE(), ', state ', ERROR_STATE());
THROW 60000, @Errormessage, 1; -- user-defined error numbers must be 50000 or higher
END CATCH;
--Test that next statement is executed or Not
Select * from sys.objects
THROW statement must be followed by the semicolon (;) statement terminator.
Difference between THROW and RAISERROR
With THROW, if a TRY…CATCH construct is not available, the session is ended. The line number and procedure where the exception
is raised are set. The severity is set to 16.
If the THROW statement is specified without parameters, it must appear inside a CATCH block. This causes the caught
exception to be re-raised. Any error that occurs in a THROW statement causes the statement batch to be ended.
RAISERROR statement vs THROW statement
• RAISERROR: if a msg_id is passed, the ID must be defined in sys.messages.
THROW: the error_number parameter does not have to be defined in sys.messages.
• RAISERROR: the msg_str parameter can contain printf formatting styles.
THROW: the message parameter does not accept printf-style formatting.
• RAISERROR: the severity parameter specifies the severity of the exception.
THROW: there is no severity parameter; the exception severity is always set to 16.
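A minimal side-by-side sketch of the two statements; the error number and message text here are made up:

```sql
-- RAISERROR: printf-style formatting, explicit severity (16) and state (1)
RAISERROR (N'Order %d not found.', 16, 1, 42);

-- THROW: no formatting, severity is always 16;
-- user-defined error numbers must be 50000 or higher
THROW 50001, N'Order 42 not found.', 1;
```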
Locking in SQL Server
Enabling row-versioning-based isolation options at the database level:
ALTER DATABASE AdventureWorks2014
SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE AdventureWorks2014
SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE AdventureWorks2014
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;
The session default remains Read Committed, whether or not row versioning is used. To get around this, you must use the SET
TRANSACTION ISOLATION LEVEL statement at the session level, or use a table hint at the statement
level if you want your change to apply only to that statement. For example, the following SELECT
statement specifies the TABLOCK table hint:
SELECT EmpID, FirstName, LastName FROM EmployeeInfo WITH(TABLOCK)
WHERE EmpID > 99 ORDER BY LastName;
The TABLOCK table hint directs the database engine to lock the data at the table level, rather than the row
or page level. The hint applies only to the table targeted in this statement and does not impact
the rest of the session, as a SET TRANSACTION ISOLATION LEVEL statement would.
Transaction Isolation Levels
READ UNCOMMITTED:
A query in the current transaction can read data
modified within another transaction but not yet
committed. The database engine does not issue
shared locks when Read Uncommitted is specified,
making this the least restrictive of the isolation
levels. As a result, it’s possible that a statement will
read rows that have been inserted, updated or
deleted, but never committed to the database, a
condition known as dirty reads. It’s also possible for
data to be modified by another transaction
between issuing statements within the current
transaction.
Use the SET TRANSACTION ISOLATION LEVEL statement, as
shown below:
SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED;
SELECT * FROM EmployeeInfo WHERE EmpID = 1;
Notice it is simple to specify the isolation level in our SET
TRANSACTION ISOLATION LEVEL statement, in this case,
Read Uncommitted. We can then run our query under that
isolation level. Afterwards, we can return our session to
the default level by issuing the following statement:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
Concurrency issues that each isolation level is susceptible to:
Transaction Isolation Levels
Isolation level Dirty read Nonrepeatable read Phantom read
Read uncommitted ✔ ✔ ✔
Read committed ✗ ✔ ✔
Repeatable read ✗ ✗ ✔
Serializable ✗ ✗ ✗
Snapshot ✗ ✗ ✗
The SELECT statement retrieves the transaction_isolation_level column from the DMV. The statement
also includes a WHERE clause that uses the @@SPID system variable to specify
the current session ID.
In this case, the SELECT statement returns a value of 1. SQL Server uses the following values to represent
the isolation levels available through the sys.dm_exec_sessions view:
0 = Unspecified
1 = Read Uncommitted
2 = Read Committed
3 = Repeatable Read
4 = Serializable
5 = Snapshot
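The query described above is a one-liner against the sys.dm_exec_sessions DMV:

```sql
-- Returns the numeric isolation level (0-5) for the current session
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```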
Transaction Isolation Levels
READ COMMITTED:
A query in the current transaction cannot read data modified by another transaction that has not yet
committed, thus preventing dirty reads. However, data can still be modified by other transactions
between issuing statements within the current transaction, so nonrepeatable reads and phantom reads
are still possible.
The isolation level uses shared locking or row versioning to prevent dirty reads, depending on whether
the READ_COMMITTED_SNAPSHOT database option is enabled. Read Committed is the default isolation
level for all SQL Server databases.
ALTER DATABASE AdventureWorks2014
SET READ_COMMITTED_SNAPSHOT ON;
To disable the option, simply set it to OFF:
ALTER DATABASE AdventureWorks2014
SET READ_COMMITTED_SNAPSHOT OFF;
Transaction Isolation Levels
SNAPSHOT:
A statement can use data only if it will be in a consistent state throughout the transaction. If another
transaction modifies data after the start of the current transaction, the data is not visible to the current
transaction. The current transaction works with a snapshot of the data as it existed at the beginning of
that transaction. Snapshot transactions do not request locks when reading data, nor do they block other
transactions from writing data. In addition, other transactions writing data do not block the current
transaction for reading data. As with the Serializable isolation level, the Snapshot level prevents dirty
reads, nonrepeatable reads and phantom reads. However, it is susceptible to concurrent update errors.
ALTER DATABASE AdventureWorks2014
SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE AdventureWorks2014
SET ALLOW_SNAPSHOT_ISOLATION OFF;
Transaction Isolation Levels
SERIALIZABLE:
A query in the current transaction cannot read data modified by another transaction that has not yet
committed. No other transaction can modify data being read by the current transaction until it completes,
and no other transaction can insert new rows that would match the search condition in the current
transaction until it completes. As a result, the Serializable isolation level prevents dirty reads,
nonrepeatable reads, and phantom reads. However, it can have the biggest impact on performance,
compared to the other isolation levels.
Transaction Isolation Levels
REPEATABLE READ:
A query in the current transaction cannot read data modified by another transaction that has not yet
committed, thus preventing dirty reads. In addition, no other transactions can modify data being read
by the current transaction until it completes, eliminating nonrepeatable reads. However, if another
transaction inserts new rows that match the search condition in the current transaction, in between
the current transaction accessing the same data twice, phantom rows can appear in the second read.
Transaction Isolation Levels
Row Versioning
When we update a row in a table or index, the new row is marked with a value called the transaction
sequence number (XSN) of the transaction that is doing the update. The XSN is a monotonically
increasing number, which is unique within each SQL Server database. When updating a row, the
previous version of the row is stored in the version store, and the new version of the row contains a
pointer to the old version of the row in the version store. The new row also stores the XSN value,
reflecting the time the row was modified.
Each old version of a row in the version store might, in turn, contain a pointer to an even older version
of the same row. All the old versions of a particular row are chained together in a linked list, and SQL
Server might need to follow several pointers in a list to reach the right version. The version store must
retain versioned rows for as long as there are operations that might require them. As long as a
transaction is open, all versions of rows that have been modified by that transaction must be kept in
the version store, and version of rows read by a statement (RCSI) or transaction (SI) must be kept in the
version store as long as that statement or transaction is open. In addition, the version store must also
retain versions of rows modified by now-completed transactions if there are any older versions of the
same rows.
Row Versioning
In Figure 1, Transaction T3 generates the current version of the row, and it is stored in the normal data
page. The previous versions of the row, generated by Transaction T2 and Transaction Tx, are stored in
pages in the version store (in tempdb).
Before switching to a row-versioning-based isolation level for reduced blocking and improved
concurrency, we must carefully weigh the tradeoffs. In addition to requiring extra management to
monitor the increased use of tempdb for the version store, versioning slows the performance of UPDATE
operations because of the extra work involved in maintaining old versions. The same applies, to a much
lesser extent, to DELETE operations, since the version store must maintain at most one older version of
each deleted row.
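Row versioning is enabled per database. A hedged sketch, where MyDatabase is a placeholder name:

```sql
-- Read committed snapshot (RCSI): statements read the last committed
-- version of a row instead of blocking on writers.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Snapshot isolation (SI): transactions may opt in to a transaction-wide snapshot.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session then opts in to SI explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

RCSI changes the behavior of the default READ COMMITTED level for all sessions, whereas ALLOW_SNAPSHOT_ISOLATION only permits sessions that explicitly request the SNAPSHOT level.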
Thanks
Explore System Databases
Master
- Holds information about all other databases
- System logins and configuration settings
- Linked servers
Model
- Template database for newly created databases
- Place stored procedures, views, and other objects here to have them in every new database
Tempdb
- Global and local temporary tables, table-valued functions, and temporary table indexes
Msdb
- Database backups, SQL Agent information, DTS packages, SQL Server jobs, and log shipping
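The four system databases above can be listed from any session; database IDs 1 through 4 are master, tempdb, model, and msdb:

```sql
SELECT database_id, name
FROM sys.databases
WHERE database_id <= 4   -- 1 = master, 2 = tempdb, 3 = model, 4 = msdb
ORDER BY database_id;
```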
Table
Key in SQL
A key is a subset of columns in a table that allows a row to be uniquely identified, so a key can consist of more than one column. Every row in the table has a unique value for the key, or a unique combination of values if the key consists of more than one column. According to the SQL standard, a key is not allowed to have values that are NULL-able.
A key that has more columns than necessary to uniquely identify each row in the table is called a super-key (think of it as a super-set). If the key has the minimum number of columns necessary to uniquely identify each row, it is called a minimal super-key. A minimal super-key is also known as a candidate key, and there must be one or more candidate keys in a table.
PRIMARY KEY and UNIQUE KEY constraints enforce uniqueness of the values (i.e. prevent duplicate values) in the column or columns on which they are defined, so either kind of key can uniquely identify each row in a table.
A foreign key identifies a column or group of columns in one (referencing) table that refers to a column or group of columns in another (referenced) table. In the Employee example that follows, the Employee table is the referenced table and the Employee Salary table is the referencing table. A foreign key can reference a key that is not the primary key of a table, but it must reference a unique key. A foreign key can hold NULL values: because foreign keys can reference unique, non-primary keys, which can hold NULL values, foreign keys can themselves hold NULL values as well. A table can have multiple unique and foreign keys, but only one primary key.
Even though the SQL standard says that a key cannot be NULL, in practice actual RDBMS implementations (such as SQL Server and Oracle) allow both foreign and unique keys to be NULL, and there are plenty of cases where that makes sense. A primary key, however, can never be NULL.
Referential Integrity
Referential integrity is a relational database concept in which multiple tables share a relationship based on the data stored in them, and that relationship must remain consistent. The concept, and one way in which it is enforced, is best illustrated by an example.
Suppose company X has two tables: an Employee table and an Employee Salary table. The Employee table has two columns, the employee ID and the employee name. The Employee Salary table also has two columns, the employee ID and the salary for that ID.
Now suppose we want to remove an employee because he no longer works at company X. We would remove his entry from the Employee table, but because he also exists in the Employee Salary table, we would have to manually remove him from there as well. Manually removing the employee from the Employee Salary table can become quite a pain, and if other tables at company X reference that employee, he would have to be deleted from those tables too, an even bigger pain.
By enforcing referential integrity we can solve that problem, so that we don't have to manually delete him from the Employee Salary table (or any other table). Here's how: first we define the employee ID column in the Employee table as our primary key. Then we define the employee ID column in the Employee Salary table as a foreign key that points to that primary key. Once we define the foreign-to-primary-key relationship, we add what is called a constraint to the Employee Salary table. The particular constraint here is a cascading delete: any time an employee is removed from the Employee table, any entries that employee has in the Employee Salary table are automatically removed as well.
Referential Integrity Rules
1. We may not add a record to the Employee Salary table unless the foreign key for that record points to an existing employee in the Employee table.
2. If a record in the Employee table is deleted, all corresponding records in the Employee Salary table must be deleted using a cascading delete. This was the example given earlier.
3. If the primary key for a record in the Employee table changes, all corresponding records in the Employee Salary table must be modified using what is called a cascading update.
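The rules above can be sketched with a foreign key declared with cascading actions; table and column names are illustrative:

```sql
-- Referenced table: Id is the primary key.
CREATE TABLE dbo.Employee
(
    Id   INT NOT NULL PRIMARY KEY,
    Name VARCHAR(100) NOT NULL
);

-- Referencing table: EmployeeId is a foreign key back to dbo.Employee(Id).
CREATE TABLE dbo.EmployeeSalary
(
    EmployeeId INT NOT NULL,
    Salary     DECIMAL(10, 2) NOT NULL,
    CONSTRAINT FK_EmployeeSalary_Employee
        FOREIGN KEY (EmployeeId) REFERENCES dbo.Employee (Id)
        ON DELETE CASCADE   -- rule 2: deleting an employee deletes his salary rows
        ON UPDATE CASCADE   -- rule 3: changing an employee Id updates his salary rows
);

-- Rule 1: this insert fails unless an employee with Id = 1 already exists.
-- INSERT INTO dbo.EmployeeSalary (EmployeeId, Salary) VALUES (1, 50000);
```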
Difference between PRIMARY KEY and UNIQUE KEY
NULL
- PRIMARY KEY: cannot accept NULL values (PRIMARY KEY = UNIQUE KEY + NOT NULL constraint).
- UNIQUE KEY: allows NULL values, but only one NULL.
INDEX
- PRIMARY KEY: by default adds a clustered index.
- UNIQUE KEY: by default adds a unique non-clustered index.
LIMIT
- PRIMARY KEY: a table can have only one primary key.
- UNIQUE KEY: a table can have more than one unique key.
CREATE syntax
Defining a single column as a PRIMARY KEY while creating a table:
    CREATE TABLE dbo.Customer
    (
        Id INT NOT NULL PRIMARY KEY,
        FirstName VARCHAR(100),
        LastName VARCHAR(100),
        City VARCHAR(50)
    )
Defining multiple columns as the PRIMARY KEY, which also shows how to name the constraint:
    CREATE TABLE dbo.Customer
    (
        Id INT NOT NULL,
        FirstName VARCHAR(100) NOT NULL,
        LastName VARCHAR(100),
        City VARCHAR(50),
        CONSTRAINT PK_CUSTOMER PRIMARY KEY (Id, FirstName)
    )
Defining a single column as a UNIQUE KEY while creating a table:
    CREATE TABLE dbo.Customer
    (
        Id INT NOT NULL UNIQUE,
        FirstName VARCHAR(100),
        LastName VARCHAR(100),
        City VARCHAR(50)
    )
Defining multiple columns as a UNIQUE KEY, which also shows how to name the constraint:
    CREATE TABLE dbo.Customer
    (
        Id INT NOT NULL,
        FirstName VARCHAR(100) NOT NULL,
        LastName VARCHAR(100),
        City VARCHAR(50),
        CONSTRAINT UK_CUSTOMER UNIQUE (Id, FirstName)
    )
ALTER syntax
Adding a PRIMARY KEY constraint to an existing table that has no primary key:
    ALTER TABLE dbo.Customer ADD CONSTRAINT PK_CUSTOMER PRIMARY KEY (Id)
Adding a UNIQUE KEY constraint to an existing table:
    ALTER TABLE dbo.Customer ADD CONSTRAINT UK_CUSTOMER UNIQUE (Id)
DROP syntax
Dropping a PRIMARY KEY:
    ALTER TABLE dbo.Customer DROP CONSTRAINT PK_CUSTOMER
Dropping a UNIQUE KEY:
    ALTER TABLE dbo.Customer DROP CONSTRAINT UK_CUSTOMER
Database design and performance
1. Choose the appropriate data type
Choose the appropriate SQL data type for your data, since this also helps improve query performance. For example, to store strings use varchar rather than the text data type, since varchar performs better than text. Use text only when you need to store large text data (more than 8,000 characters); up to 8,000 characters can be stored in varchar.
2. Avoid nchar and nvarchar
Avoid the nchar and nvarchar data types where possible, since both take twice the storage of char and varchar. Use nchar and nvarchar when you need to store Unicode (16-bit) data such as Hindi or Chinese characters.
3. Avoid NULL in fixed-length fields
Avoid inserting NULL values into fixed-length (char) fields, since NULL takes the same space as any other value in that field. If NULLs are required, use a variable-length (varchar) field, which takes less space for NULL.
Database design and performance
4. Avoid * in SELECT statements
Avoid * in a SELECT statement, since SQL Server converts the * to column names before query execution. Also, instead of querying all columns with *, name only the columns you need.
    -- Avoid
    SELECT * FROM tblName
    -- Best practice
    SELECT col1, col2, col3 FROM tblName
5. Use EXISTS instead of IN
Use EXISTS to check for existence instead of IN, since EXISTS is faster than IN. Note that EXISTS takes a correlated subquery rather than a column list:
    -- Avoid
    SELECT Name, Price FROM tblProduct
    WHERE ProductID IN (SELECT DISTINCT ProductID FROM tblOrder)
    -- Best practice
    SELECT Name, Price FROM tblProduct p
    WHERE EXISTS (SELECT 1 FROM tblOrder o WHERE o.ProductID = p.ProductID)
Database design and performance
6. Avoid the HAVING clause
Avoid the HAVING clause where possible, since it acts as a filter over already-selected rows. HAVING is required only when you wish to filter the result of an aggregation; don't use it for any other purpose.
7. Create clustered and non-clustered indexes
Create clustered and non-clustered indexes, since indexes help access data quickly. But be careful: more indexes on a table will slow INSERT, UPDATE, and DELETE operations, so try to keep the number of indexes on a table small.
8. Keep the clustered index small
Keep the clustered index as small as possible, since the fields used in the clustered index are also carried in every non-clustered index, and the data in the database is stored in clustered index order. A large clustered index on a table with a large number of rows increases storage size significantly.
9. Avoid cursors
Avoid cursors, since they are very slow. Always try set-based alternatives to SQL Server cursors.
10. Use table variables in place of temp tables
Prefer table variables over temp tables where practical, since temp tables reside in the tempdb database, and interacting with tempdb takes extra time.
Database design and performance
11. Use UNION ALL in place of UNION
Use UNION ALL in place of UNION where duplicates are acceptable, since UNION ALL is faster: it does not sort the result set to remove duplicate values.
12. Use the schema name before SQL object names
Qualify SQL object names with the schema name followed by ".", since this helps SQL Server find the object in a specific schema and gives the best performance.
    -- Here dbo is the schema name
    SELECT col1, col2 FROM dbo.tblName
    -- Avoid
    SELECT col1, col2 FROM tblName
13. Keep transactions small
Keep transactions as small as possible, since a transaction locks the data in the tables it touches for its whole lifetime. Long transactions can sometimes result in deadlocks.
Database design and performance
14. SET NOCOUNT ON
Set NOCOUNT ON, since by default SQL Server returns the number of rows affected by each SELECT, INSERT, UPDATE, and DELETE statement. We can stop this by setting NOCOUNT ON, like so:
    CREATE PROCEDURE dbo.MyTestProc
    AS
    SET NOCOUNT ON
    BEGIN
    ..
    END
15. Use TRY-CATCH
Use TRY-CATCH to handle errors in T-SQL statements. An error in a running transaction that is not handled with TRY-CATCH can sometimes cause a deadlock.
16. Use stored procedures for frequently used data and more complex queries
Create stored procedures for queries that access data frequently, and also for more complex tasks.
17. Avoid the "sp_" prefix in user-defined stored procedure names
Avoid prefixing user-defined stored procedure names with "sp_", since system-defined stored procedure names start with "sp_". SQL Server first searches for such a procedure in the master database and only then in the current session's database; this takes time and may give unexpected results if a system-defined stored procedure has the same name as your procedure.
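Point 15 can be sketched as follows; the table name and amounts are illustrative, and the pattern keeps the transaction from being left open after an error:

```sql
SET NOCOUNT ON;

BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE Id = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE Id = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back if the transaction is still open, then report the error.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```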
  • 24. OFFSET FETCH The OFFSET and FETCH NEXT arguments can be added to a SELECT statement's ORDER BY clause to retrieve a fixed number of rows:  OFFSET <EXPR1> ROWS specifies the number of rows to skip before the query starts to return rows.  FETCH NEXT <EXPR2> ROWS ONLY specifies the number of rows to return after the OFFSET clause has been processed. Here's the syntax for a simple SELECT statement that uses these arguments: SELECT * FROM <table> ORDER BY <columns> OFFSET <EXPR1> ROWS FETCH NEXT <EXPR2> ROWS ONLY Notes  Pagination is now very easy; using OFFSET and FETCH is a bit faster than using ROW_NUMBER(), TOP and ORDER BY.  FETCH can be used with either FIRST or NEXT: FETCH NEXT 6 ROWS ONLY and FETCH FIRST 6 ROWS ONLY are equivalent.  OFFSET (40) ROWS, OFFSET (40) ROW and OFFSET 40 ROWS are all accepted.  ROW and ROWS are synonyms and are provided for ANSI compatibility.
  • 25. Example of OFFSET FETCH Rules for using OFFSET FETCH: 1. ORDER BY is mandatory when using the OFFSET and FETCH clauses. 2. OFFSET is mandatory with FETCH: you can never write ORDER BY … FETCH without an OFFSET clause. 3. TOP cannot be combined with OFFSET and FETCH in the same query expression. 4. The OFFSET/FETCH row-count expression can be any arithmetic, constant, or parameter expression that returns an integer value; the row-count expression does not support scalar sub-queries.
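The rules above can be sketched as a paging query; the Employee table is the one used in the cursor examples later in this deck, and the page-size arithmetic is the usual pagination pattern:

```sql
-- Page 3 of a result set, 10 rows per page (rows 21-30).
DECLARE @PageNumber int = 3, @PageSize int = 10;

SELECT EmpID, EmpName, Salary
FROM dbo.Employee
ORDER BY EmpID                               -- ORDER BY is mandatory (rule 1)
OFFSET (@PageNumber - 1) * @PageSize ROWS    -- skip the first two pages (rule 2, 4)
FETCH NEXT @PageSize ROWS ONLY;              -- then return one page
```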
  • 26. Local or Global Temporary 01.Local Temp Table Local temp tables are available only to the SQL Server session or connection (a single user) that created them. They are automatically deleted when the session that created them is closed. A local temporary table name starts with a single hash ("#") sign. 02.Global Temp Table Global temp tables are available to all SQL Server sessions and connections (all users). They can be created by any SQL Server connection and are automatically deleted when all the SQL Server connections referencing them have been closed. A global temporary table name starts with a double hash ("##") sign. In short, global temporary tables are visible to all SQL Server connections, while local temporary tables are visible only to the current SQL Server connection.
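A minimal sketch of both kinds of temp table; the table and column names are illustrative:

```sql
-- Local temp table: visible only to this session,
-- dropped automatically when the session closes.
CREATE TABLE #LocalEmp (EmpID int, EmpName varchar(50));
INSERT INTO #LocalEmp VALUES (1, 'Mohan');

-- Global temp table: visible to every session, dropped when the
-- last session referencing it closes.
CREATE TABLE ##GlobalEmp (EmpID int, EmpName varchar(50));
INSERT INTO ##GlobalEmp VALUES (1, 'Mohan');
```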
  • 27. Common Table Expressions (CTE) A CTE is a temporary result set, typically the result of a complex sub-query. Unlike a temporary table, its life is limited to the current query. It is defined using the WITH statement. CTEs improve the readability and ease of maintenance of complex queries and sub-queries. A sub-query without a CTE is given below: SELECT * FROM (SELECT Addr.Address, Emp.Name, Emp.Age FROM Address Addr INNER JOIN Employee Emp ON Emp.EID = Addr.EID ) Temp WHERE Temp.Age > 50 ORDER BY Temp.Name By using a CTE, the above query can be re-written as follows: WITH CTE1(Address, Name, Age) --Column names for the CTE, which are optional AS (SELECT Addr.Address, Emp.Name, Emp.Age FROM Address Addr INNER JOIN Employee Emp ON Emp.EID = Addr.EID ) SELECT * FROM CTE1 --Using the CTE WHERE CTE1.Age > 50 ORDER BY CTE1.Name When to use a CTE 01.To store the result of a complex sub-query for further use. 02.To create a recursive query.
  • 28. Table Variable A table variable acts like a variable and exists for a particular batch of query execution. It is dropped once the batch completes. It is also created in the TempDb database, not in memory. It allows you to create a primary key and an identity at the time of declaration, but not a non-clustered index. GO DECLARE @TProduct TABLE ( SNo INT IDENTITY(1,1), ProductID INT, Qty INT ) --Insert data into table variable @TProduct INSERT INTO @TProduct(ProductID,Qty) SELECT DISTINCT ProductID, Qty FROM ProductsSales ORDER BY ProductID ASC --Select data Select * from @TProduct --Next batch GO Select * from @TProduct --gives an error in the next batch Notes: 1.Temp tables are physically created in the TempDb database. They act like normal tables and can also have constraints and indexes. 2.A CTE is a named temporary result set used to manipulate complex sub-query data. It exists for the scope of a single statement and is created in memory rather than in TempDb. You cannot create any index on a CTE. 3.A table variable acts like a variable and exists for a particular batch of query execution. It is dropped once the batch completes; it too is created in the TempDb database, not in memory.
  • 29. File Tables FileTables can be used for the storage and management of unstructured data that currently reside as files on file servers. Another advantage is Windows application compatibility: existing Windows applications can see these data as files in the file system. As a first step, you need to enable the FILESTREAM feature. To enable and change FILESTREAM settings: 1. On the Start menu, go to All Programs > SQL Server 2016 > Configuration Tools > SQL Server Configuration Manager. 2. In the list of services, right-click SQL Server Services, and then click Open. 3. In the SQL Server Configuration Manager snap-in, locate the instance of SQL Server on which you want to enable FILESTREAM. 4. Right-click the instance, and then click Properties. 5. In the SQL Server Properties dialog box, click the FILESTREAM tab. 6. Select the Enable FILESTREAM for Transact-SQL access check box. 7. If you want to read and write FILESTREAM data from Windows, click Enable FILESTREAM for file I/O streaming access. Enter the name of the Windows share in the Windows Share Name box. 8. If remote clients must access the FILESTREAM data stored on this share, select Allow remote clients to have streaming access to FILESTREAM data. 9. Click Apply.
  • 30. File Tables Method 1: Copy and paste data into the FileTable folder. First, find the folder where the FileTable stores its files: go to Databases >> FileStorage >> expand Tables, expand the FileTable "FileTableTb", right-click the newly created table, and click "Explore FileTable Directory". This opens the folder where the FileTable data is stored. Method 2: Use a T-SQL statement. To create new files or directories with T-SQL you only need to supply a file name and the file stream; the constraints on the table take care of the rest of the fields.
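A sketch of Method 2, assuming the FileTable "FileTableTb" from Method 1 exists; only the name and content are supplied, the remaining FileTable columns are filled in by the table's constraints:

```sql
-- Create a file: supply the name and the content (file_stream).
INSERT INTO dbo.FileTableTb (name, file_stream)
VALUES ('ReadMe.txt', CAST('Hello FileTable' AS varbinary(max)));

-- Create a directory: a row with is_directory set and no file content.
INSERT INTO dbo.FileTableTb (name, is_directory, file_stream)
VALUES ('Docs', 1, NULL);
```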
  • 31. View A view is a  Virtual table  Not a temporary or physical table  Does not itself store data Used to  Encapsulate/protect important or sensitive columns
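A minimal sketch of the "protect sensitive columns" use, based on the Employee table from the cursor examples; the view name is hypothetical:

```sql
-- Expose only non-sensitive Employee columns through a view.
CREATE VIEW dbo.vw_EmployeePublic
AS
SELECT EmpID, EmpName, Address   -- Salary is deliberately left out
FROM dbo.Employee;
GO
-- Callers can be granted SELECT on the view without seeing salaries.
SELECT * FROM dbo.vw_EmployeePublic;
```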
  • 33. SQL JOINs INNER JOIN matches rows between the two tables specified and returns only the matched rows. LEFT OUTER JOIN returns all rows from the left table, with matching data from the right table where present. RIGHT OUTER JOIN returns all rows from the right table, with matching data from the left table where present. FULL OUTER JOIN returns rows present in either of the two tables.
  • 34. SQL JOINs A Cartesian product multiplies the number of rows in the left table by the number of rows in the right table. In a self join, the same table is specified twice with two different aliases in order to match data within that table. SELECT a.emp_id AS "Emp_ID", a.emp_name AS "Employee Name", b.emp_id AS "Supervisor ID", b.emp_name AS "Supervisor Name" FROM employee a, employee b WHERE a.emp_supv = b.emp_id
  • 35. Cursor Cursors are required when records in a database table must be updated in singleton fashion, i.e. row by row. A cursor impacts the performance of SQL Server, since it uses the SQL Server instance's memory, reduces concurrency, decreases network bandwidth and locks resources. Hence it is important to understand the cursor types and their behavior so that you can choose a suitable cursor for your needs. In general you should avoid cursors and use cursor alternatives such as WHILE loops, sub-queries, temporary tables and table variables; use a cursor only when there is no other option.
  • 36. Cursor Example In some contexts, business logic requires us to process data in sequence, one row at a time.
  • 37. Types of Cursor 1.Static Cursors A static cursor populates the result set at the time of cursor creation, and the query result is cached for the lifetime of the cursor. A static cursor can move in both forward and backward directions. It is slower and uses more memory than other cursors, so use it only if scrolling is required. No UPDATE, INSERT, or DELETE operations are reflected in a static cursor (unless the cursor is closed and reopened). By default static cursors are scrollable, and SQL Server static cursors are always read-only. 2.Dynamic Cursors A dynamic cursor lets you see updates, deletions and insertions in the data source while the cursor is open. A dynamic cursor is therefore sensitive to any changes to the data source, and supports update and delete operations. By default dynamic cursors are scrollable. 3.Forward Only Cursors A forward-only cursor is the fastest of all the cursors, but it does not support backward scrolling. You can update and delete data using a forward-only cursor, and it is sensitive to any changes to the original data source. There are three further variants of forward-only cursors: FORWARD_ONLY KEYSET, FORWARD_ONLY STATIC and FAST_FORWARD. A FORWARD_ONLY STATIC cursor is populated at creation time and caches the data for the cursor's lifetime; it is not sensitive to changes to the data source. A FAST_FORWARD cursor is the fastest cursor and is likewise not sensitive to changes to the data source. 4. Keyset Driven Cursors A keyset-driven cursor is controlled by a set of unique identifiers, the keys in the keyset. The keyset is built from the rows that qualified for the SELECT statement at the time the cursor was opened. A keyset-driven cursor is sensitive to changes to the data source and supports update and delete operations. By default keyset-driven cursors are scrollable.
  • 38. Examples of Cursors CREATE TABLE Employee ( EmpID int PRIMARY KEY, EmpName varchar (50) NOT NULL, Salary int NOT NULL, Address varchar (200) NOT NULL ) GO INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(1,'Mohan',12000,'Noida') INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(2,'Pavan',25000,'Delhi') INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(3,'Amit',22000,'Dehradun') INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(4,'Sonu',22000,'Noida') INSERT INTO Employee(EmpID,EmpName,Salary,Address) VALUES(5,'Deepak',28000,'Gurgaon') GO SELECT * FROM Employee
  • 39. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE @salary int DECLARE cur_emp CURSOR STATIC FOR SELECT EmpID,EmpName,Salary from Employee OPEN cur_emp IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM cur_emp INTO @Id,@name,@salary WHILE @@Fetch_status = 0 BEGIN PRINT 'ID : '+ convert(varchar(20),@Id)+', Name : '+@name+ ', Salary : '+convert(varchar(20),@salary) FETCH NEXT FROM cur_emp INTO @Id,@name,@salary END END CLOSE cur_emp DEALLOCATE cur_emp SET NOCOUNT OFF Static Cursor - Example
  • 40. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Dynamic_cur_empupdate CURSOR DYNAMIC FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Dynamic_cur_empupdate IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Dynamic_cur_empupdate INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Mohan' Update Employee SET Salary=15000 WHERE CURRENT OF Dynamic_cur_empupdate FETCH NEXT FROM Dynamic_cur_empupdate INTO @Id,@name END END CLOSE Dynamic_cur_empupdate DEALLOCATE Dynamic_cur_empupdate SET NOCOUNT OFF Go Select * from Employee Dynamic Cursor - Example
  • 41. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Dynamic_cur_empdelete CURSOR DYNAMIC FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Dynamic_cur_empdelete IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Dynamic_cur_empdelete INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Deepak' DELETE Employee WHERE CURRENT OF Dynamic_cur_empdelete FETCH NEXT FROM Dynamic_cur_empdelete INTO @Id,@name END END CLOSE Dynamic_cur_empdelete DEALLOCATE Dynamic_cur_empdelete SET NOCOUNT OFF Go Select * from Employee Dynamic Cursor for Delete - Example
  • 42. SET NOCOUNT ON SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Forward_cur_empupdate CURSOR FORWARD_ONLY FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Forward_cur_empupdate IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Forward_cur_empupdate INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Amit' Update Employee SET Salary=24000 WHERE CURRENT OF Forward_cur_empupdate FETCH NEXT FROM Forward_cur_empupdate INTO @Id,@name END END CLOSE Forward_cur_empupdate DEALLOCATE Forward_cur_empupdate SET NOCOUNT OFF Go Select * from Employee Forward Only Cursor - Example
  • 43. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Forward_cur_empdelete CURSOR FORWARD_ONLY FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Forward_cur_empdelete IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Forward_cur_empdelete INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Sonu' DELETE Employee WHERE CURRENT OF Forward_cur_empdelete FETCH NEXT FROM Forward_cur_empdelete INTO @Id,@name END END CLOSE Forward_cur_empdelete DEALLOCATE Forward_cur_empdelete SET NOCOUNT OFF Go Select * from Employee Forward Only Cursor for Delete- Example
  • 44. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Keyset_cur_empupdate CURSOR KEYSET FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Keyset_cur_empupdate IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Keyset_cur_empupdate INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Pavan' Update Employee SET Salary=27000 WHERE CURRENT OF Keyset_cur_empupdate FETCH NEXT FROM Keyset_cur_empupdate INTO @Id,@name END END CLOSE Keyset_cur_empupdate DEALLOCATE Keyset_cur_empupdate SET NOCOUNT OFF Go Select * from Employee Keyset Driven Cursor - Example
  • 45. SET NOCOUNT ON DECLARE @Id int DECLARE @name varchar(50) DECLARE Keyset_cur_empdelete CURSOR KEYSET FOR SELECT EmpID,EmpName from Employee ORDER BY EmpName OPEN Keyset_cur_empdelete IF @@CURSOR_ROWS > 0 BEGIN FETCH NEXT FROM Keyset_cur_empdelete INTO @Id,@name WHILE @@Fetch_status = 0 BEGIN IF @name='Amit' DELETE Employee WHERE CURRENT OF Keyset_cur_empdelete FETCH NEXT FROM Keyset_cur_empdelete INTO @Id,@name END END CLOSE Keyset_cur_empdelete DEALLOCATE Keyset_cur_empdelete SET NOCOUNT OFF Go Select * from Employee Keyset Driven Cursor for Delete - Example
  • 46. User Defined Functions User-defined functions come in three types: 1. Scalar 2. Inline Table-Valued 3. Multi-statement Table-Valued A scalar UDF can accept zero to many input parameters and returns a single value of one of the scalar data types (int, char, varchar, etc.). The text, ntext, image and timestamp data types are not supported.
  • 47. Scalar Functions A scalar function returns a single value of one of the scalar data types. The text, ntext, image and timestamp data types are not supported.
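A minimal scalar UDF sketch, one input parameter and one scalar return value; the function name and logic are illustrative:

```sql
CREATE FUNCTION dbo.GetYearlySalary (@MonthlySalary int)
RETURNS int
AS
BEGIN
    RETURN @MonthlySalary * 12;
END
GO
-- Scalar UDFs must be invoked with at least a two-part name:
SELECT dbo.GetYearlySalary(12000) AS YearlySalary;  -- 144000
```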
  • 48. Inline Table-Valued An inline table-valued function returns a row set of the SQL Server table data type. It is an excellent alternative to a view, because the function can pass parameters into a T-SQL SELECT command and in essence provide a parameterized, non-updateable view of the underlying tables. Create table Employee(ID int , Name varchar(50)) Insert into Employee values (1,'Ferdous') Insert into Employee values (2,'Tanvir') Insert into Employee values (3,'Rechard') Insert into Employee values (4,'Anil') CREATE FUNCTION EmployeeNameById (@ID int) RETURNS TABLE AS RETURN SELECT * from Employee where id=@ID GO ---Execution Select * from EmployeeNameById (4)
  • 49. Multi-Statement Table-Valued A multi-statement table-valued user-defined function returns a table and can contain one or more T-SQL statements. Within the CREATE FUNCTION command you must define the structure of the table being returned. After creating this type of user-defined function, you can use it in the FROM clause of a T-SQL command, unlike a stored procedure, which can also return record sets but cannot be used that way. CREATE FUNCTION GetAuthorsByState ( @state char(2) ) RETURNS @AuthorsByState table ( au_id Varchar(11), au_fname Varchar(20) ) AS BEGIN INSERT INTO @AuthorsByState SELECT au_id, au_fname FROM Authors WHERE state = @state IF @@ROWCOUNT = 0 BEGIN INSERT INTO @AuthorsByState VALUES ('','No Authors Found') END RETURN END GO
  • 50. SQL Server treats an inline table-valued function much like a view, and a multi-statement table-valued function much like a stored procedure. When an inline table-valued function is used as part of an outer query, the query processor expands the UDF definition and generates an execution plan that accesses the underlying objects, using the indexes on those objects. For a multi-statement table-valued function, an execution plan is created for the function itself and stored in the execution plan cache (once the function has been executed the first time). If multi-statement table-valued functions are used as part of larger queries, the optimiser does not know what the function returns, so it makes some standard assumptions: in effect it assumes the function will return a single row, and that the returned rows will be accessed via a table scan of a single-row table. Inline and Multi-Statement Table-Valued Function Performance Comparison
  • 51. Using the CASE The simple CASE expression compares an expression to a set of simple expressions to find the result: it compares the input expression to the expression in each WHEN clause for equivalency, and if a WHEN expression matches, the expression in the corresponding THEN clause is returned. With this function you can replace a column value with a different value based on the original column value. An example of where this comes in handy: you have a table with a column named EmploymentType, where 0 stands for Permanent, 1 for Contractual, etc., and you want to return "Permanent" when the column value is 0, "Contractual" when it is 1, and so on. The CASE function allows you to evaluate a column value on a row against multiple criteria, where each criterion might return a different value. The first criterion that evaluates to true determines the value returned by the CASE function. Microsoft SQL Server Books Online documents two different formats for the CASE function. The "simple syntax" looks like this: CASE expression WHEN expression1 THEN Result1 WHEN expression2 THEN Result2 ELSE ResultN END
  • 52. Example 1 : DECLARE @intInput INT SET @intInput = 2 SELECT CASE(@intInput) WHEN 1 THEN 'One' WHEN 2 THEN 'Two' WHEN 3 THEN 'Three' ELSE 'Your message.' END Example 2 : select top 5 title, case when price < 3.00 then 'Really Cheap' when price < 12.00 then 'Cheap' when price > 12.00 and price < 20.00 then 'Average' else 'Expensive' end 'Price Category' from pubs.dbo.titles Note that the 'Really Cheap' branch must come before 'Cheap': CASE returns the first WHEN that matches, so a price < 3.00 test placed after price < 12.00 would never be reached.
  • 53. PIVOT is one of the newer relational operators. It provides an easy mechanism in SQL Server to turn rows into columns, e.g. for crosstab queries. UNPIVOT is the reversal of the PIVOT operation: it provides a mechanism for transforming columns into rows.
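A minimal PIVOT sketch; the ProductSales table, its columns, and the year values are all hypothetical:

```sql
-- Hypothetical dbo.ProductSales(ProductName, SaleYear, Amount) table.
-- PIVOT turns the distinct SaleYear values into columns:
SELECT ProductName, [2021], [2022]
FROM (SELECT ProductName, SaleYear, Amount
      FROM dbo.ProductSales) AS src
PIVOT (SUM(Amount) FOR SaleYear IN ([2021], [2022])) AS pvt;
```

The value list in the IN clause must be spelled out; PIVOT cannot discover the column values dynamically without dynamic SQL.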
  • 54. Index  Quickly retrieve data  without reading the whole table  Built on a single column or a group of columns  A row locator is created for each row  The choice of columns depends on what you use in your SQL queries.
  • 55. Indexes Both kinds are implemented as a balanced tree (B-tree). Clustered • Data rows are physically stored in sorted order • That order is maintained on every insert, update and delete NonClustered • Independent of the physical sort order • If there is no clustered index, the data rows are stored in an unordered structure called a heap
  • 56. Clustered Tables vs Heap Tables Heap tables • If a table has no indexes, or only non-clustered indexes, it is called a heap. An age-old question is whether or not a table must have a clustered index. The answer is no, but in most cases it is a good idea to have a clustered index on the table to store the data in a specific order. Clustered tables • As the name suggests, these tables have a clustered index; data is stored in a specific order based on the clustered index key.
  • 57. Clustered Tables vs Heap Tables HEAP • Data is not stored in any particular order • Specific data cannot be retrieved quickly, unless there are also non-clustered indexes • Data pages are not linked, so sequential access needs to refer back to the index allocation map (IAM) pages • Since there is no clustered index, no additional time is needed to maintain the index • Since there is no clustered index, there is no need for additional space to store a clustered index tree • These tables have an index_id value of 0 in the sys.indexes catalog view
  • 58. Clustered Index • The top-most node of this tree is called the "root node" • The bottom level of the nodes is called "leaf nodes" • Any index level between the root node and leaf node is called an "intermediate level" • The leaf nodes contain the data pages of the table in the case of a cluster index. • The root and intermediate nodes contain index pages holding an index row. • Each index row contains a key value and pointer to intermediate level pages of the B-tree or leaf level of the index. • The pages in each level of the index are linked in a doubly-linked list.
  • 59. Non-clustered Index • Index Leaf Nodes and Corresponding Table Data • Each index entry consists of the indexed columns (the key, column 2) and refers to the corresponding table row (via ROWID or RID). • Unlike the index, the table data is stored in a heap structure and is not sorted at all. • There is neither a relationship between the rows stored in the same table block nor is there any connection between the blocks.
  • 60. PRIMARY KEY AS A CLUSTERED INDEX • Primary key: a constraint to enforce uniqueness in a table. The primary key columns cannot hold NULL values. • In SQL Server, when you create a primary key on a table, if a clustered index is not defined and a non-clustered index is not specified, a unique clustered index is created to enforce the constraint. • However, there is no guarantee that this is the best choice for a clustered index for that table. • Make sure you are carefully considering this in your indexing strategy.
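A sketch of both behaviors described above; the table and index names are hypothetical:

```sql
-- The PRIMARY KEY below creates a unique clustered index by default:
CREATE TABLE dbo.Demo (ID int PRIMARY KEY, Name varchar(50));

-- To reserve the clustered index for a different column,
-- make the primary key non-clustered explicitly:
CREATE TABLE dbo.Demo2
(
    ID        int PRIMARY KEY NONCLUSTERED,
    CreatedAt datetime
);
CREATE CLUSTERED INDEX IX_Demo2_CreatedAt ON dbo.Demo2 (CreatedAt);
```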
  • 61. SQL Server index A clustered index determines the order in which the rows of a table are stored on disk. If a table has a clustered index, the rows of that table are stored on disk in the exact same order as the clustered index, and queries run much faster than if the rows were stored in some random order. Example: suppose we have a table named Employee with a column named EmployeeID, and we create a clustered index on the EmployeeID column. What happens when we create this clustered index? All of the rows inside the Employee table are physically sorted, on the actual disk, by the values in the EmployeeID column. What does this accomplish? Whenever a lookup or search for a sequence of EmployeeIDs is done using that clustered index, the lookup is much faster, because the sequence of employee IDs is physically stored right next to each other on disk. That is the advantage of the clustered index: the rows in the table are sorted in the exact same order as the index, and the actual table data is stored in the leaf nodes of the clustered index. An index is usually a tree data structure, and leaf nodes are the nodes at the very bottom of that tree. In other words, a clustered index contains the actual table-level data in the index itself.
  • 62. When a query is issued against an indexed column, the query engine starts at the root node and navigates down through the intermediate nodes, For example, if you’re searching for the value 123 in an indexed column, the query engine would first look in the root level to determine which page to reference in the top intermediate level. The leaf node will contain either the entire row of data or a pointer to that row, depending on whether the index is clustered or non-clustered.
  • 63. Clustered index vs Non-Clustered Index If a given row has a value updated in one of its clustered-index columns, the database typically has to move the entire row so that the table remains sorted in the same order as the clustered index column. For this reason clustered indexes are usually created on primary keys or foreign keys, because those values are less likely to change once they are part of a table. A non-clustered index stores both the value of the EmployeeID AND a pointer to the row in the Employee table where that value is actually stored. A clustered index, on the other hand, actually stores the row data for a particular EmployeeID: if you run a query that looks for an EmployeeID of 15, the data from the other columns in the table, like EmployeeName and EmployeeAddress, is all stored in the leaf node of the clustered index itself. This means that with a non-clustered index, extra work is required to follow that pointer to the row in the table to retrieve any other desired values, whereas a clustered index can access the row directly, since the rows are stored in the same order as the index. So reading from a clustered index is generally faster than reading from a non-clustered index. A table can have multiple non-clustered indexes, because they do not affect the order in which the rows are stored on disk the way clustered indexes do. Clustered index: • Leaf level is the actual data page • Non-leaf levels contain the index key columns • Clustered index scan = table scan (on a heap) Non-clustered index: • Leaf level contains the key and included columns • Non-leaf levels contain the index key columns
  • 64. Summary of the differences:  A clustered index determines the order in which the rows of the table will be stored on disk, and it actually stores row-level data in the leaf nodes of the index itself. A non-clustered index has no effect on the order in which the rows are stored.  Using a clustered index is an advantage when groups of data that can be clustered are frequently accessed by some queries. This speeds up retrieval because the data lives close together on disk. Also, if data is accessed in the same order as the clustered index, the retrieval will be much faster because the physical data stored on disk is sorted in the same order as the index.  A clustered index can be a disadvantage because any time a change is made to the value of an indexed column, the subsequent possibility of re-sorting rows to maintain order is a definite performance hit.  A table can have multiple non-clustered indexes, but only one clustered index.  Non-clustered indexes store both a value and a pointer to the actual row that holds that value. Clustered indexes do not need to store a pointer to the actual row, because the rows in the table are stored on disk in the same exact order as the clustered index, and the clustered index actually stores the row-level data in its leaf nodes.
  • 65. Tuning SQL Indexes for better performance Don't use too many indexes: indexes can take up a lot of space, so having too many can actually damage your performance. For example, if you run an UPDATE or an INSERT on a table that has too many indexes, there can be a big performance hit because all of the indexes have to be updated as well. A general rule of thumb is to not create more than 3 or 4 indexes on a table. Try not to include columns that are repeatedly updated in an index: if you create an index on a column that is updated very often, every time the column is updated the index has to be updated as well. The DBMS does this so that the index stays current and consistent with the columns that belong to it. The number of writes is therefore doubled: one to update the column itself and another to update the index. Consider avoiding frequently updated columns in your indexes. Creating indexes on foreign key columns can improve performance: because joins are often done between primary and foreign key pairs, an index on a foreign key column can really improve join performance. Not only that, the index also allows some optimizers to use other methods of joining tables. Create indexes for columns that are repeatedly used in the predicates of your SQL queries: look at your queries and see which columns are used frequently in the WHERE predicate. If those columns are not part of an index already, add them to one; an index on columns repeatedly used in predicates will help speed up your queries.
Consider deleting indexes when loading huge amounts of data into a table: if you are loading a huge amount of data into a table, think about deleting some of the indexes on that table first. After the data is loaded, you can recreate the indexes. The reason is that the indexes then do not have to be maintained during the load, which can save a lot of time!
  • 66. Stored Procedure  A set of T-SQL code  A pre-compiled object: compiled once, and the compiled code is executed on subsequent calls  Can add security through encryption
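A minimal stored procedure sketch using the Employee table from the cursor examples; the procedure name and parameter are illustrative:

```sql
CREATE PROCEDURE dbo.GetEmployeesByCity
    @City varchar(200)
AS
BEGIN
    SET NOCOUNT ON;   -- suppress row-count messages, per slide 23
    SELECT EmpID, EmpName, Salary
    FROM dbo.Employee
    WHERE Address = @City;
END
GO
EXEC dbo.GetEmployeesByCity @City = 'Noida';
```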
  • 67. Trigger • A specialized stored procedure • Executed automatically on INSERT, UPDATE, or DELETE.
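A sketch of an AFTER UPDATE trigger on the Employee table from the cursor examples; the audit table and trigger names are hypothetical. The `inserted` and `deleted` pseudo-tables hold the new and old row versions:

```sql
-- Hypothetical audit table for salary changes.
CREATE TABLE dbo.SalaryAudit
(
    EmpID     int,
    OldSalary int,
    NewSalary int,
    ChangedAt datetime DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Employee_SalaryAudit
ON dbo.Employee
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.SalaryAudit (EmpID, OldSalary, NewSalary)
    SELECT d.EmpID, d.Salary, i.Salary
    FROM deleted d
    JOIN inserted i ON i.EmpID = d.EmpID
    WHERE d.Salary <> i.Salary;   -- log only actual salary changes
END
```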
  • 68. System Catalogs The SQL Server system catalog is a set of views that show metadata describing the objects in an instance of SQL Server. Metadata is data that describes the attributes of objects in a system. SQL Server-based applications can access the information in the system catalog by using Information Schema views (to quickly retrieve metadata) and Catalog Views, which are the recommended interface. Catalog views can be used to get information such as objects, logins and permissions used by the SQL Server database engine; use them rather than accessing the system tables directly. Catalog views do not contain information about replication, backup, etc. Typical questions they answer: o Which table does a particular column belong to? o Which stored procedures affect a particular table? o Which constraints does a particular table have? o Which columns are the foreign keys defined on a table linked to? Information Schema Views: present the catalog information in a format that is independent of any catalog table implementation and therefore are not affected by changes in the underlying catalog tables. Catalog Views: provide access to metadata that is stored in every database on the server.
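Both interfaces can be sketched with the first question above; the column name 'Salary' is just an example:

```sql
-- Catalog views: which table does a column named 'Salary' belong to?
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.columns c
JOIN sys.tables t ON t.object_id = c.object_id
WHERE c.name = 'Salary';

-- The same metadata through the implementation-independent
-- INFORMATION_SCHEMA views:
SELECT TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'Salary';
```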
  • 69. Execution Plan A subquery is a query within a query that returns a result at the place where the subquery appears. Query optimization is outside the scope of this article, but one thing is worth mentioning: you can see I/O statistics by running SET STATISTICS IO ON in your query window. Also turn on the option 'Include actual query plan' under the 'Query' menu, then run your query again. The 'Execution plan' tab shows exactly what steps SQL Server had to perform to get to your result; read it from right to left. Fewer steps is not always better: some steps require more work from SQL Server than others.
  • 70. IN, ANY, SOME, ALL and EXISTS The ANY operator works much like the IN operator, except that you can use the >, <, >=, <=, = and <> operators to compare values. ANY returns true if at least one value returned by the subquery makes the predicate true. So the following query returns all persons except the one with BusinessEntityID 1, because 1 > 1 returns FALSE. SELECT * FROM Person.Person WHERE BusinessEntityID > ANY (SELECT 1) Instead of ANY you can use SOME, which has the same meaning. DECLARE @OrderDate AS DATETIME = '20050517' DECLARE @Status AS TINYINT = 4 IF @Status > SOME(SELECT Status FROM Purchasing.PurchaseOrderHeader WHERE OrderDate = @OrderDate) PRINT 'Not all orders have the specified status!' ELSE PRINT 'All orders have the specified status.'
  • 71. ALL and EXISTS Like ANY, ALL looks at all results returned by a subquery, but it only returns TRUE if the comparison with every result makes the predicate true. DECLARE @OrderDate AS DATETIME = '20050517' DECLARE @Status AS TINYINT = 4 IF @Status < ALL(SELECT Status FROM Purchasing.PurchaseOrderHeader WHERE OrderDate = @OrderDate) PRINT 'All orders have the specified status.' ELSE PRINT 'Not all orders have the specified status!' EXISTS can be used like ANY and ALL, but returns true only if at least one record was returned by the subquery. SELECT * FROM Sales.Customer AS c WHERE EXISTS(SELECT * FROM Sales.SalesOrderHeader AS s WHERE s.CustomerID = c.CustomerID) The EXISTS function only returns TRUE or FALSE and no columns. For that reason it does not matter what you put in its SELECT list.
  • 72. Querying from subqueries; Derived tables When we use a subquery in our FROM clause the result is called a derived table. A derived table is a named table expression and, like a subquery, is only visible to its outer query. It differs from a subquery in that it returns a complete table result. SELECT * FROM (SELECT SalesOrderID, SalesOrderNumber, CustomerID, AVG(SubTotal) OVER(PARTITION BY CustomerID) AS AvgSubTotal FROM Sales.SalesOrderHeader) AS d WHERE AvgSubTotal > 100 ORDER BY AvgSubTotal, CustomerID, SalesOrderNumber The result of a subquery needs to be relational, which means every column it returns must have a name. AVG(SubTotal)... would not have a name, so we MUST alias it. We must also alias the derived table itself. Note that SQL Server has to sort the data before it can determine which rows should and should not be returned, and once the data is sorted it does not unsort the rows before returning the result. In this case an extra sort is not necessary, because the entire result needs to be returned anyway.
  • 73. CROSS APPLY The CROSS APPLY operator works like an INNER JOIN in that it matches rows from two tables and leaves out rows that were not matched by the other table in the result. We can use multiple APPLY operators in a single query. The example selects all Persons that have a SalesOrder and shows some order information for the most expensive order that Person has made. SELECT p.BusinessEntityID, p.FirstName, p.LastName, a.* FROM Person.Person AS p CROSS APPLY (SELECT TOP 1 s.SalesOrderID, s.CustomerID, s.SubTotal FROM Sales.SalesOrderHeader AS s JOIN Sales.Customer AS c ON c.CustomerID = s.CustomerID WHERE c.PersonID = p.BusinessEntityID ORDER BY s.SubTotal DESC) AS a ORDER BY p.BusinessEntityID The CROSS APPLY operator takes a table expression as an input parameter and simply joins the result with each row of the outer query.
  • 74. OUTER APPLY OUTER APPLY works in much the same way as the CROSS APPLY with the exception that it also returns rows if no corresponding row was returned by the APPLY operator. Persons that have not placed an order are now also returned in the result set. SELECT p.BusinessEntityID, p.FirstName, p.LastName, a.* FROM Person.Person AS p OUTER APPLY (SELECT TOP 3 s.SalesOrderID, s.CustomerID, s.SubTotal FROM Sales.SalesOrderHeader AS s JOIN Sales.Customer AS c ON c.CustomerID = s.CustomerID WHERE c.PersonID = p.BusinessEntityID ORDER BY s.SubTotal DESC) AS a ORDER BY p.BusinessEntityID
  • 75. PARSE Parsing is a special kind of cast that always converts a VARCHAR value into another data type. In SQL Server we can use the PARSE or TRY_PARSE function, which takes as parameters a VARCHAR value, a target data type and an optional culture code specifying in which culture format the value is formatted. We can, for example, parse a VARCHAR value that represents a date in Dutch format into a DATETIME2 value. SELECT PARSE('12-31-2013' AS DATETIME2 USING 'en-US') AS USDate, PARSE('31-12-2013' AS DATETIME2 USING 'nl-NL') AS DutchDate FORMAT The FORMAT function does not really provide a means to convert between data types. Instead it provides a way to output data in a given format. SELECT SalesOrderID, FORMAT(SalesOrderID, 'SO0') AS SalesOrderNumber, CustomerID, FORMAT(CustomerID, '0.00') AS CustomerIDAsDecimal, OrderDate, FORMAT(OrderDate, 'dd-MM-yy') AS FormattedOrderDate FROM Sales.SalesOrderHeader
  • 76. REPLACE REVERSE STUFF With REPLACE you can replace a character or a substring of a string with another character or string. With STUFF you can replace a part of a string based on index. With REVERSE you can, of course, reverse a string. In the following example we reverse the SalesOrderNumber, we replace the 'SO' in the SalesOrderNumber with 'SALE', and we replace the first two characters of the PurchaseOrderNumber with 'PURC'. SELECT SalesOrderNumber, REVERSE(SalesOrderNumber) AS ReversedOrderNumber, REPLACE(SalesOrderNumber, 'SO', 'SALE') AS NewOrderFormat, PurchaseOrderNumber, STUFF(PurchaseOrderNumber, 1, 2, 'PURC') AS NewPurchaseFormat FROM Sales.SalesOrderHeader
  • 77. IIF With IIF you can test a predicate and specify a value to return if it evaluates to true and a value if it evaluates to false. SELECT BusinessEntityID, CASE WHEN Title IS NULL THEN 'No title' ELSE Title END AS TitleCase, IIF(Title IS NULL, 'No title', Title) AS TitleIIF, FirstName, LastName FROM Person.Person
  • 78. COALESCE, ISNULL and NULLIF With COALESCE we can specify a range of values, and the first value that is not NULL is returned. It can actually make the IIF that checks for a NULL from the previous section even shorter. SELECT BusinessEntityID, COALESCE(Title, 'No title'), FirstName, LastName FROM Person.Person COALESCE returns NULL if all values that were passed to it are NULLs. ISNULL does the same as COALESCE, but with some differences. The first difference is that ISNULL takes only two values: if the first value is NULL it returns the second value (which may also be NULL). SELECT BusinessEntityID, ISNULL(Title, 'No title'), FirstName, LastName FROM Person.Person A second difference is the resulting data type: ISNULL uses the type of its first argument, while COALESCE uses the argument with the highest data type precedence. In the example below, ISNULL therefore truncates 'Hello' to the VARCHAR(4) value 'Hell', while COALESCE returns the full string. DECLARE @first AS VARCHAR(4) = NULL DECLARE @second AS VARCHAR(5) = 'Hello' SELECT COALESCE(@first, @second) AS [Coalesce], ISNULL(@first, @second) AS [IsNull]
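The slide title also mentions NULLIF, which does the reverse of ISNULL: it returns NULL when its two arguments are equal, and the first argument otherwise. A small sketch of its classic use:

```sql
-- NULLIF returns NULL when both arguments are equal, else the first argument.
SELECT NULLIF(5, 5) AS SameValues,       -- NULL
       NULLIF(5, 3) AS DifferentValues;  -- 5

-- Classic use: avoid divide-by-zero by turning a zero divisor into NULL,
-- so the division yields NULL instead of raising an error.
DECLARE @Total AS INT = 0
SELECT 100 / NULLIF(@Total, 0) AS SafeDivision
```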
  • 79. Exception Handling TRY..CATCH SQL Server also has an exception model to handle exceptions and errors that occur in T-SQL statements. To handle exceptions in SQL Server we have TRY..CATCH blocks: we put T-SQL statements in the TRY block and write the handling code in the CATCH block. If an error occurs in code within the TRY block, control automatically jumps to the corresponding CATCH block. In SQL Server, each TRY block can have only one CATCH block. ERROR_NUMBER(): The number assigned to the error. ERROR_LINE(): The line number inside the routine that caused the error. ERROR_MESSAGE(): The error message text, which includes the values supplied for any substitutable parameters, such as times or object names. ERROR_SEVERITY(): The error’s severity. ERROR_STATE(): The error’s state number. ERROR_PROCEDURE(): The name of the stored procedure or trigger that generated the error. BEGIN TRY SELECT [Second] = 1/0 END TRY BEGIN CATCH SELECT [Error_Line] = ERROR_LINE(), [Error_Number] = ERROR_NUMBER(), [Error_Severity] = ERROR_SEVERITY(), [Error_State] = ERROR_STATE() SELECT [Error_Message] = ERROR_MESSAGE() END CATCH
  • 80. THROW The role of the TRY statement is to capture the exception. If an exception occurs within the TRY block, the part of the system called the exception handler delivers the exception to the other part of the program, which will handle it. This program part is denoted by the keyword CATCH and is therefore called the CATCH block. The THROW statement allows you to throw an exception caught in the exception handling block. Simply stated, THROW is another return mechanism, which behaves similarly to the already described RAISERROR statement. Note that a user-defined error number passed to THROW must be 50000 or greater. DROP TABLE IF EXISTS #TestRethrow; CREATE TABLE #TestRethrow ( ID INT PRIMARY KEY ); BEGIN TRY INSERT #TestRethrow(ID) VALUES(1); -- Force error 2627, Violation of PRIMARY KEY constraint, to be raised. INSERT #TestRethrow(ID) VALUES(1); END TRY BEGIN CATCH DECLARE @Errormessage NVARCHAR(100); SELECT @Errormessage = CONCAT('Line ', ERROR_LINE(), ', error ', ERROR_NUMBER(), ', state ', ERROR_STATE()); THROW 50000, @Errormessage, 1; END CATCH; -- Test whether the next statement is executed or not SELECT * FROM sys.objects The statement before a THROW statement must be terminated by a semicolon (;).
  • 81. Difference between THROW and RAISERROR If a TRY…CATCH construct is not available, the session is ended. The line number and procedure where the exception is raised are set. The severity is set to 16. If the THROW statement is specified without parameters, it must appear inside a CATCH block; it then re-raises the caught exception. Any error that occurs in a THROW statement causes the statement batch to be ended.
o RAISERROR: if a msg_id is passed, the ID must be defined in sys.messages. THROW: the error_number parameter does not have to be defined in sys.messages.
o RAISERROR: the msg_str parameter can contain printf formatting styles. THROW: the message parameter does not accept printf-style formatting.
o RAISERROR: the severity parameter specifies the severity of the exception. THROW: there is no severity parameter; the exception severity is always set to 16.
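A side-by-side sketch of the two statements (the error number, message text and the value 42 are illustrative):

```sql
-- RAISERROR: ad-hoc message with printf-style formatting,
-- explicit severity and state.
RAISERROR ('Order %d failed validation.', 16, 1, 42);

-- THROW: user-defined error numbers must be 50000 or higher;
-- no formatting placeholders, severity is always 16.
THROW 50001, 'Order 42 failed validation.', 1;
```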
  • 82. Locking in SQL Server Default transaction isolation setup: ALTER DATABASE AdventureWorks2014 SET READ_COMMITTED_SNAPSHOT ON; ALTER DATABASE AdventureWorks2014 SET ALLOW_SNAPSHOT_ISOLATION ON; ALTER DATABASE AdventureWorks2014 SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON; The default isolation level is Read Committed, whether or not row versioning is used. To override it, you must use the SET TRANSACTION ISOLATION LEVEL statement at the session level, or use a table hint at the statement level if you want your change to apply only to that statement. For example, the following SELECT statement specifies the TABLOCK table hint: SELECT EmpID, FirstName, LastName FROM EmployeeInfo WITH(TABLOCK) WHERE EmpID > 99 ORDER BY LastName; The TABLOCK table hint directs the database engine to lock the data at the table level, rather than the row or page level. The table hint applies only to the table targeted in this statement and does not impact the rest of the session, as a SET TRANSACTION ISOLATION LEVEL statement would.
  • 83. Transaction Isolation Levels READ UNCOMMITTED: A query in the current transaction can read data modified within another transaction but not yet committed. The database engine does not issue shared locks when Read Uncommitted is specified, making this the least restrictive of the isolation levels. As a result, it’s possible that a statement will read rows that have been inserted, updated or deleted, but never committed to the database, a condition known as dirty reads. It’s also possible for data to be modified by another transaction between issuing statements within the current transaction. Use the SET TRANSACTION ISOLATION LEVEL statement, as shown below: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT * FROM EmployeeInfo WHERE EmpID = 1; Notice it is simple to specify the isolation level in our SET TRANSACTION ISOLATION LEVEL statement, in this case, Read Uncommitted. We can then run our query under that isolation level. Afterwards, we can return our session to the default level by issuing the following statement: SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
  • 84. Transaction Isolation Levels Concurrency issues that each isolation level is susceptible to:
o Read Uncommitted: dirty read ✔, nonrepeatable read ✔, phantom read ✔
o Read Committed: dirty read ✗, nonrepeatable read ✔, phantom read ✔
o Repeatable Read: dirty read ✗, nonrepeatable read ✗, phantom read ✔
o Serializable: dirty read ✗, nonrepeatable read ✗, phantom read ✗
o Snapshot: dirty read ✗, nonrepeatable read ✗, phantom read ✗
  • 85. The SELECT statement retrieves the transaction_isolation_level column from the sys.dm_exec_sessions DMV. The statement also includes a WHERE clause that uses the @@SPID system function to specify the current session ID. In this case, the SELECT statement returns a value of 1. SQL Server uses the following values to represent the isolation levels available through the sys.dm_exec_sessions view: 0 = Unspecified 1 = Read Uncommitted 2 = Read Committed 3 = Repeatable Read 4 = Serializable 5 = Snapshot Transaction Isolation Levels
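The statement the slide describes, written out as a sketch:

```sql
-- Report the isolation level of the current session
-- (1 = Read Uncommitted, 2 = Read Committed, and so on).
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```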
  • 86. READ COMMITTED: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. However, data can still be modified by other transactions between issuing statements within the current transaction, so nonrepeatable reads and phantom reads are still possible. The isolation level uses shared locking or row versioning to prevent dirty reads, depending on whether the READ_COMMITTED_SNAPSHOT database option is enabled. Read Committed is the default isolation level for all SQL Server databases. ALTER DATABASE AdventureWorks2014 SET READ_COMMITTED_SNAPSHOT ON; To disable the option, simply set it to OFF: ALTER DATABASE AdventureWorks2014 SET READ_COMMITTED_SNAPSHOT OFF; Transaction Isolation Levels
  • 87. SNAPSHOT: A statement can use data only if it will be in a consistent state throughout the transaction. If another transaction modifies data after the start of the current transaction, the data is not visible to the current transaction. The current transaction works with a snapshot of the data as it existed at the beginning of that transaction. Snapshot transactions do not request locks when reading data, nor do they block other transactions from writing data. In addition, other transactions writing data do not block the current transaction for reading data. As with the Serializable isolation level, the Snapshot level prevents dirty reads, nonrepeatable reads and phantom reads. However, it is susceptible to concurrent update errors. Snapshot isolation must be enabled at the database level before it can be used: ALTER DATABASE AdventureWorks2014 SET ALLOW_SNAPSHOT_ISOLATION ON; To disable the option, simply set it to OFF: ALTER DATABASE AdventureWorks2014 SET ALLOW_SNAPSHOT_ISOLATION OFF; Transaction Isolation Levels
  • 88. SERIALIZABLE: A query in the current transaction cannot read data modified by another transaction that has not yet committed. No other transaction can modify data being read by the current transaction until it completes, and no other transaction can insert new rows that would match the search condition in the current transaction until it completes. As a result, the Serializable isolation level prevents dirty reads, nonrepeatable reads, and phantom reads. However, it can have the biggest impact on performance, compared to the other isolation levels. Transaction Isolation Levels
  • 89. REPEATABLE READ: A query in the current transaction cannot read data modified by another transaction that has not yet committed, thus preventing dirty reads. In addition, no other transaction can modify data being read by the current transaction until it completes, eliminating nonrepeatable reads. However, if another transaction inserts new rows that match the current transaction's search condition between two reads of the same data, those phantom rows can appear in the second read. Transaction Isolation Levels
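A minimal session-level sketch of the behavior the last few slides describe, reusing the illustrative EmployeeInfo table from the locking example:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    -- Shared locks taken here are held until the transaction ends...
    SELECT EmpID, LastName FROM EmployeeInfo WHERE EmpID > 99;

    -- ...so this second read sees the same rows (no nonrepeatable reads),
    -- but rows inserted meanwhile matching EmpID > 99 can appear (phantoms).
    SELECT EmpID, LastName FROM EmployeeInfo WHERE EmpID > 99;
COMMIT TRANSACTION;

-- Return the session to the default level.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```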
  • 90. Row Versioning When we update a row in a table or index, the new row is marked with a value called the transaction sequence number (XSN) of the transaction that is doing the update. The XSN is a monotonically increasing number, which is unique within each SQL Server database. When updating a row, the previous version of the row is stored in the version store, and the new version of the row contains a pointer to the old version of the row in the version store. The new row also stores the XSN value, reflecting the time the row was modified. Each old version of a row in the version store might, in turn, contain a pointer to an even older version of the same row. All the old versions of a particular row are chained together in a linked list, and SQL Server might need to follow several pointers in a list to reach the right version. The version store must retain versioned rows for as long as there are operations that might require them. As long as a transaction is open, all versions of rows that have been modified by that transaction must be kept in the version store, and versions of rows read by a statement (RCSI) or transaction (SI) must be kept in the version store as long as that statement or transaction is open. In addition, the version store must also retain versions of rows modified by now-completed transactions if there are any older versions of the same rows.
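The version store lives in tempdb and can be inspected through DMVs; a monitoring sketch (note these queries can be expensive on a busy server):

```sql
-- Count versioned rows per database currently held in the tempdb version store.
SELECT DB_NAME(database_id) AS DatabaseName, COUNT(*) AS VersionRows
FROM sys.dm_tran_version_store
GROUP BY database_id;

-- Snapshot transactions that are keeping old row versions alive.
SELECT session_id, transaction_sequence_num, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;
```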
  • 91. Row Versioning In Figure 1, Transaction T3 generates the current version of the row, and it is stored in the normal data page. The previous versions of the row, generated by Transaction T2 and Transaction Tx, are stored in pages in the version store (in tempdb). Before switching to a row-versioning-based isolation level, for reduced blocking and improved concurrency, we must carefully consider the tradeoffs. In addition to requiring extra management to monitor the increased use of tempdb for the version store, versioning slows the performance of UPDATE operations, due to the extra work involved in maintaining old versions. The same applies, to a much lesser extent, for DELETE operations, since the version store must maintain at most one older version of the deleted row.