5. Recovery Options
The recovery options determine the behavior of the transaction
log and how damaged pages are handled.
Recovery Models : The recovery model determines the types of
backups that can be performed against a database:
Full
Bulk-logged
Simple
6. Recovery Options
THE FULL RECOVERY MODEL
When a database is in the Full recovery model, all changes made,
using both data manipulation language (DML) and data definition
language (DDL), are logged to the transaction log.
Because all changes are recorded in the transaction log, it is
possible to recover a database in the Full recovery model to a
given point in time so that data loss can be minimized or
eliminated if you should need to recover from a disaster.
Changes are retained in the transaction log indefinitely and are
removed only by executing a transaction log backup.
Every production database that accepts transactions should be set
to the Full recovery model.
By placing the database in the Full recovery model, you can
maximize the restore options that are possible.
7. Recovery Options
THE FULL RECOVERY MODEL
View Recovery model for all databases:
SELECT name, recovery_model_desc
FROM sys.databases
Change Database Recovery model :
ALTER DATABASE <database name> SET RECOVERY
{ FULL | BULK_LOGGED | SIMPLE }
8. Recovery Options
THE BULK-LOGGED RECOVERY MODEL
Certain operations are designed to manipulate large amounts of data.
The overhead of logging to the transaction log can have a detrimental impact
on performance.
The Bulk-logged recovery model allows certain operations to be executed
with minimal logging.
When a minimally logged operation is performed, SQL Server does not log
every row changed but instead logs only the extents, thereby reducing the
overhead and improving performance.
Because the Bulk-logged recovery model does not log every change to the
transaction log, you cannot recover a database to a point in time.
Microsoft recommends that this model be used only for short periods of time.
9. Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL:
The operations that are performed in a minimally logged
manner with the database set in the Bulk-logged recovery
model are :
BCP
BULK INSERT
SELECT. . .INTO
CREATE INDEX
ALTER INDEX. . .REBUILD
10. Recovery Options
THE BULK-LOGGED RECOVERY MODEL
BCP Utility
bcp is designed as a very fast, lightweight solution for importing
and exporting data.
The bulk copy utility (bcp) copies data between an instance of
Microsoft SQL Server and a data file in a user-specified format.
The bcp utility can be used to import large numbers of new rows
into SQL Server tables or to export data out of tables into data
files directly from the command line.
Note :
If you use bcp to back up your data, create a format file to record
the data format. bcp data files do not include any schema or
format information, so if a table or view is dropped and you do
not have a format file, you may be unable to import the data.
bcp is a powerful tool for those seeking to insert data into a SQL
Server database from within a batch file or other programmatic
method.
11. Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL
{table | view | "query "} represents the data source or destination in a
SQL Server database.
You can use the bcp utility to export data from a table or view or
through a query.
If you import into a view, all columns within the view must reference a
single table.
When you specify a table or view, you must qualify the name with the
database or schema names as necessary.
bcp {table | view | "query"}
{out | queryout | in | format}
{data_file | nul}
{[optional_argument]...}
12. Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL
{out | queryout | in | format} : determines the command’s mode (direction).
out: The command exports data from a table or view into a data file.
queryout: The command exports data retrieved through a query into a
data file.
in: The command imports data from a data file into a table or view.
format: The command creates a format file based on a table or view.
bcp {table | view | "query"}
{out | queryout | in | format}
{data_file | nul}
{[optional_argument]...}
13. Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL
{data_file | nul} is the full path of the data file, or the keyword nul.
If you’re importing data, you must specify the file that contains the
source data.
If you’re exporting data, you must specify the file that the data will be
copied to. (If the file does not exist, it will be created.)
When you’re using the bcp utility to generate a format file, you do not
specify a data file. Instead, you should specify nul in place of the data
file name.
bcp {table | view | "query"}
{out | queryout | in | format}
{data_file | nul}
{[optional argument]...}
-n (native format)
-N (Unicode native format)
-w (Unicode character format)
-c (character format)
14. bcp DBEmployee.dbo.tblNames out c:\name.dat -S <Server Name>
-U <user name> -P <password>
Examples :
Recovery Options
THE BULK-LOGGED RECOVERY MODEL
(Continue…)
BCP Utility
15. C:\> bcp "SELECT * FROM realty.dbo.tblgovern" queryout
d:\Data\Person.dat -N -S localhost\SqlSrv2008 -U alasql -P ala_sql
Exporting Data Returned by a Query
Examples :
Recovery Options
THE BULK-LOGGED RECOVERY MODEL
(Continue…)
BCP Utility
16. C:\> bcp AdventureWorks2008.dbo.Employees in
C:\Data\EmployeeData_c.dat -c -t, -S localhost\SqlSrv2008 -T
Importing Data into a Table
Examples :
Recovery Options
THE BULK-LOGGED RECOVERY MODEL
(Continue…)
C:\> bcp realty.dbo.tblgovern in d:\Data\Person.dat -N
-S localhost\SqlSrv2008 -U alasql -P alasql
BCP Utility
17. Recovery Options
THE BULK-LOGGED RECOVERY MODEL
BULK INSERT
The BULK INSERT command has many of the same
options as BCP and behaves almost identically, except for
the following two differences:
BULK INSERT cannot export data.
BULK INSERT is a T-SQL command and does not need to
specify the instance name or login credentials.
Import data from a data file into a table or view.
You can specify the format of the imported data, based on
how that data is stored in the file.
18. Recovery Options
THE BULK-LOGGED RECOVERY MODEL
BULK INSERT
BULK INSERT Realty.dbo.tblgovern
FROM 'C:\Data\person.dat'
WITH
(
DATAFILETYPE = 'widenative'
);
BULK INSERT Realty.dbo.tblgovern
FROM 'C:\Data\person.dat'
WITH
(
FORMATFILE = 'C:\Data\EmployeeFormat_n.fmt'
);
19. Recovery Options
THE BULK-LOGGED RECOVERY MODEL
INSERT…SELECT
INSERT INTO Realty.dbo.tblgovern
SELECT * FROM
OPENROWSET(BULK 'C:\Data\person.dat',
FORMATFILE = 'Data\person.fmt'
) AS e;
BULK IMPORT
20. Recovery Options (Continue…)
Minimal Logging
Minimal logging is a method to maximize performance when bulk loading data.
Minimal logging can make bulk load operations more efficient and minimize
the risk that the transaction log will fill up.
To minimally log a bulk load operation, the TABLOCK option must be
specified and the table must not be replicated.
THE BULK-LOGGED RECOVERY MODEL
21. The operation can be minimally logged only under one of the following
conditions:
If the table has no indexes, the data pages can be minimally logged.
If the table has no clustered indexes, and is empty, data pages and
index pages can be minimally logged.
If the table has no clustered indexes, and has data, data pages can be
minimally logged but index pages cannot.
If the table has a clustered index but is empty, data pages and index
pages can be minimally logged. (Both types of pages are fully logged
whenever the table contains data.)
Minimal Logging
Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL
22. Table:
Has Data | Has Index | Index Clustered | Data pages | Index pages
yes | no | no | Minimal logging | Cannot
no | yes | no | Minimal logging | Minimal logging
yes | yes | no | Minimal logging | Cannot
no | yes | yes | Minimal logging | Minimal logging
Minimal Logging
Recovery Options (Continue…)
THE BULK-LOGGED RECOVERY MODEL
23. SIMPLE RECOVERY MODEL
SQL Server maintains only a minimal amount of information in the
transaction log.
SQL Server truncates the transaction log each time the database
reaches a transaction checkpoint, leaving no log entries for
disaster recovery purposes.
A database in the Simple recovery model cannot be recovered to
a point in time because it is not possible to issue a transaction log
backup for a database in the simple recovery model.
Recovery Options (Continue…)
24. Damaged Pages
Recovery Options
It is possible to damage data pages during a write to disk if you have a
power failure or failures in disk subsystem components during the
write operation.
If the write operation fails to complete, you can have an incomplete
page in the database that cannot be read.
Because the damage happens to a page on disk, the only time that you
see a result of the damage is when SQL Server attempts to read the
page off disk.
The default configuration of SQL Server does not check for
damaged pages and could cause the database to go offline if a
damaged page is encountered.
The PAGE_VERIFY CHECKSUM option can be enabled, which allows
you to discover and log damaged pages.
(Continue…)
25. When pages are written to disk, a checksum for the page is calculated
and stored in the page header.
When SQL Server reads a page from disk, a checksum is calculated and
compared to the checksum stored in the page header.
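The write/read checksum cycle described above can be sketched as follows. This is an illustrative model only: SQL Server's actual page format and checksum algorithm are internal, and CRC-32 is used here purely as a stand-in.

```python
# Sketch of the PAGE_VERIFY CHECKSUM idea (illustrative only; not
# SQL Server's real on-disk page format or checksum algorithm).
import zlib

def write_page(payload: bytes) -> bytes:
    """Compute a checksum of the payload and store it in a 4-byte 'header'."""
    checksum = zlib.crc32(payload)
    return checksum.to_bytes(4, "little") + payload

def read_page(page: bytes) -> bytes:
    """Recompute the checksum on read and compare it to the stored one."""
    stored = int.from_bytes(page[:4], "little")
    payload = page[4:]
    if zlib.crc32(payload) != stored:
        # SQL Server would raise error 824 at this point
        raise IOError("damaged page detected")
    return payload

page = write_page(b"row data")
assert read_page(page) == b"row data"

# Simulate a damaged write by flipping one byte of the stored payload;
# the recomputed checksum will no longer match the header.
damaged = page[:4] + bytes([page[4] ^ 0xFF]) + page[5:]
```

Reading `damaged` raises the error, which mirrors how a corrupted page is only discovered when it is read back from disk.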
If a damaged page is encountered, an 824 error is returned to the calling
application and logged to the SQL Server error log and the Windows event log.
Damaged Pages
Recovery Options
When a corrupt page is encountered, the page is logged to the
suspect_pages table in the msdb database. If a database is participating
in a Database Mirroring session, SQL Server automatically retrieves a
copy of the page from the mirror, replaces the page on the principal, and
logs an entry in the sys.dm_db_mirroring_auto_page_repair view.
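The suspect_pages table mentioned above can be queried directly to review which pages have been flagged:

```sql
-- List pages SQL Server has recorded as suspect in msdb
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;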
(Continue…)
26. Damaged Pages
Recovery Options
ALTER DATABASE <database name> SET PAGE_VERIFY CHECKSUM;
Examples :
When the PAGE_VERIFY CHECKSUM option is enabled, SQL Server
calculates a checksum for the page prior to the write. Each time a page
is read off disk, a checksum is recalculated and compared to the
checksum written to the page. If the checksums do not match, the page
has been corrupted.
If SQL Server begins writing blocks on a page and the disk system fails in
the middle of the write process, only a portion of the page is written
successfully, producing a problem called a torn page.
The page verification can be set to either TORN_PAGE_DETECTION or
CHECKSUM.
(Continue…)
28. Auto Options
AUTO_CLOSE
When the last connection to a database is closed, SQL Server shuts
down the database and releases all resources related to the
database.
When a new connection is made to the database, SQL Server starts
up the database and begins allocating resources.
A database that is frequently accessed should not be set to
AUTO_CLOSE because it would cause a severe degradation in
performance.
By default, AUTO_CLOSE is disabled.
EXEC sp_dboption 'MYDB', 'autoclose','off'
Examples :
ALTER DATABASE MYDB SET AUTO_CLOSE ON
(Continue…)
29. Auto Options
AUTO_SHRINK
If the AUTO_SHRINK option is enabled, SQL Server periodically
checks the space utilization of data and transaction log files.
If the space-checking algorithm finds a data file that has more than
25% free space, the file automatically shrinks to reclaim disk space.
It is recommended to leave the AUTO_SHRINK option disabled and
manually shrink files only when necessary.
The only operations that cause one-time space utilization changes to
database files are administrative processes that create and rebuild
indexes, archive data, or load data.
EXEC sp_dboption 'MYDB', 'autoshrink', 'on'
Examples :
ALTER DATABASE MYDB SET AUTO_SHRINK ON
(Continue…)
30. Auto Options
AUTO_CREATE_STATISTICS
SQL Server automatically creates statistics that are missing during
the optimization phase of query processing.
Statistics allow the Query Optimizer to build more efficient query
plans.
EXEC sp_dboption 'MYDB', 'auto create statistics', 'on'
Examples :
ALTER DATABASE MYDB SET AUTO_CREATE_STATISTICS ON
(Continue…)
31. Auto Options
AUTO_UPDATE_STATISTICS
AUTO_UPDATE_STATISTICS_ASYNC
AUTO_UPDATE_STATISTICS updates out-of-date statistics during
query optimization.
If you enable AUTO_UPDATE_STATISTICS, a second option,
AUTO_UPDATE_STATISTICS_ASYNC, controls whether
statistics are updated during query optimization or query
optimization continues while the statistics are updated
asynchronously.
These options allow the server to update
out-of-date statistics automatically.
EXEC sp_dboption 'MYDB', 'auto update statistics', 'on'
Examples :
ALTER DATABASE MYDB SET AUTO_UPDATE_STATISTICS ON
(Continue…)
32. Change Tracking
One of the challenges for any multiuser system is to ensure that the
changes of one user do not accidentally overwrite the changes of
another.
Change tracking is a lightweight mechanism that associates a
version with each row in a table that has been enabled for change
tracking.
Each time the row is changed, the version number is incremented.
You need only compare the row version to determine if a change has
occurred to the row between when the row was read and written.
You can choose which tables within a database change tracking
information should be captured for.
You can also specify how long tracking information is retained
through the CHANGE_RETENTION option.
Tracking information can be cleaned up automatically with the
AUTO_CLEANUP option.
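The compare-the-row-version idea described above can be sketched as a small in-memory model. This is illustrative only; SQL Server's change tracking is implemented server-side, and the class and method names here are hypothetical:

```python
# Sketch of row-version change tracking (illustrative model only):
# each row carries a version that increments on every update, and a
# writer supplies the version it read so conflicting changes are detected.

class TrackedTable:
    def __init__(self):
        self.rows = {}  # key -> (value, version)

    def insert(self, key, value):
        self.rows[key] = (value, 1)

    def read(self, key):
        return self.rows[key]  # returns (value, version)

    def update(self, key, value, read_version):
        _, current_version = self.rows[key]
        if current_version != read_version:
            # another user changed the row between our read and write
            raise RuntimeError("row changed since it was read")
        self.rows[key] = (value, current_version + 1)

t = TrackedTable()
t.insert("emp1", "Alice")
value, version = t.read("emp1")
t.update("emp1", "Alicia", version)  # succeeds; version becomes 2
```

An update that passes a stale version raises an error instead of silently overwriting the other user's change.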
33. Change Tracking
CHANGE_RETENTION specifies the minimum period for keeping change
tracking information in the database. Data is removed only when the
AUTO_CLEANUP value is ON.
retention_period is an integer that specifies the numerical component
of the retention period.
The default retention period is 2 days. The minimum retention period
is 1 minute.
ALTER DATABASE database_name SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = retention_period { DAYS | HOURS | MINUTES },
AUTO_CLEANUP = { ON | OFF })
(Continue…)
34. ACCESS
Access to a database can be controlled through several options.
ONLINE : You can perform all operations that would otherwise be possible.
You can control the ability to modify data :
READ_ONLY : The database cannot be written to.
SQL Server removes any transaction log file
that is specified for the database.
Changing a database from READ_ONLY to
READ_WRITE causes SQL Server to re-create the
transaction log file.
READ_WRITE : Normal operational mode.
User access to a database can be controlled through :
SINGLE_USER : Only a single user is
allowed to access the database.
RESTRICTED_USER : Only allows access to
members of the db_owner, dbcreator, and
sysadmin roles.
MULTI_USER : Normal operational mode.
The ALTER DATABASE command is blocked until all the non-allowed
users disconnect.
35. ACCESS
Instead of waiting for users to complete operations and disconnect from the
database, you can specify :
ROLLBACK IMMEDIATE : Forcibly rolls back any open transactions,
along with disconnecting any non-allowed users.
ROLLBACK AFTER <seconds> : Waits for the specified number of seconds before
rolling back transactions and disconnecting
users.
OFFLINE : The database is inaccessible.
EMERGENCY : The database can be accessed only by a member of the db_owner role, and the
only command allowed to be executed is SELECT.
(Continue…)
36. To restrict database access to members of the db_owner role and
terminate all active transactions and connections at the same time :
ACCESS
Examples :
ALTER DATABASE EmployeeDB SET RESTRICTED_USER WITH
ROLLBACK IMMEDIATE
What backups can be executed for a database in each of the
recovery models?
You can create full, differential, and file/filegroup backups in the
Simple recovery model. The Bulk-logged recovery model allows
you to execute all types of backups, but you cannot restore a
database to a point in time during an interval when a minimally
logged transaction is executing. All types of backups can be
executed in the Full recovery model.
(Continue…)
37. Parameterization
When a database call is parameterized, the values are passed as
variables.
SQL Server caches the query plan for every query that is
executed.
When a query is executed, SQL Server parses and compiles the
query. The query is then compared to the query cache using a
string-matching algorithm. If a match is found, SQL Server
retrieves the plan that has already been generated and executes
the query.
A query that is parameterized has a much higher probability of
being matched because the query string does not change even
when the values being used vary.
Parameterized queries can reuse cached query plans more
frequently and avoid the time required to build a query plan.
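The string-matching behavior described above can be sketched with a toy plan cache. This is an illustrative model, not SQL Server's actual cache implementation; the function and variable names are hypothetical:

```python
# Sketch of why parameterized queries reuse cached plans: the cache is
# keyed by the exact query text, so varying literals create distinct
# entries while a parameterized string stays constant.

plan_cache = {}
compile_count = 0

def get_plan(query_text):
    global compile_count
    if query_text not in plan_cache:
        compile_count += 1  # simulate an expensive parse/compile step
        plan_cache[query_text] = f"plan for: {query_text}"
    return plan_cache[query_text]

# Literal values: every distinct value compiles a new plan.
for emp_id in (1, 2, 3):
    get_plan(f"SELECT * FROM Employees WHERE id = {emp_id}")

# Parameterized: one plan, reused for every value.
for emp_id in (1, 2, 3):
    get_plan("SELECT * FROM Employees WHERE id = @id")

assert compile_count == 4  # 3 literal plans + 1 parameterized plan
```

Three executions of the literal form cost three compiles, while three executions of the parameterized form cost one.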
38. Parameterization
Because not all applications parameterize calls to the database, you
can force SQL Server to parameterize every query for a given database
by setting:
the PARAMETERIZATION FORCED database option.
The default setting for a database is not to force parameterization.
Forced Parameterization changes the literal constants in a query to
parameters when compiling a query.
Forced parameterization should not be used for environments that rely
heavily on indexed views and indexes on computed columns.
Generally, the PARAMETERIZATION FORCED option should only be
used by experienced database administrators after determining that
doing this does not adversely affect performance.
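The option discussed above is set with ALTER DATABASE. MYDB is a placeholder name, following the earlier examples:

```sql
ALTER DATABASE MYDB SET PARAMETERIZATION FORCED;
-- revert to the default behavior
ALTER DATABASE MYDB SET PARAMETERIZATION SIMPLE;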
(Continue…)
39. Parameterization
Distributed queries that reference more than one database are eligible
for forced parameterization as long as the PARAMETERIZATION option is
set to FORCED in the database in whose context the query runs.
Setting the parameterization option to FORCED flushes all query plans
from the plan cache of a database, except those that currently are
compiling, recompiling, or running.
The current setting of the PARAMETERIZATION option is preserved
when reattaching or restoring a database.
When the PARAMETERIZATION Option is set to FORCED, the
reporting of error messages may differ from that of simple
parameterization: multiple error messages may be reported in cases
where fewer messages would be reported under simple
parameterization, and the line numbers in which errors occur may be
reported incorrectly.
(Continue…)
40. Parameterization
SELECT stats.execution_count AS cnt, p.size_in_bytes AS [size],
[sql].[text] AS [plan_text]
FROM sys.dm_exec_cached_plans p
OUTER APPLY sys.dm_exec_sql_text(p.plan_handle) [sql]
JOIN sys.dm_exec_query_stats stats
ON stats.plan_handle = p.plan_handle;
(Continue…)
You can see that the query optimizer has rewritten the T-SQL query
as a parameterized T-SQL statement :
41. Collation Sequences
Collation name can be either a Windows collation name or a SQL
collation name.
Each SQL Server collation specifies three properties:
The sort order to use for Unicode data types (nchar, nvarchar, and
ntext). A sort order defines the sequence in which characters are
sorted, and the way characters are evaluated in comparison
operations.
The sort order to use for non-Unicode character data types (char,
varchar, and text).
The code page used to store non-Unicode character data.
42. Collation Sequences
You cannot specify the equivalent of a code page for the Unicode data
types (nchar, nvarchar, and ntext).
The double-byte bit patterns used for Unicode characters are defined
by the Unicode standard and cannot be changed.
ALTER DATABASE <database name> COLLATE <collation_name>
ALTER DATABASE database COLLATE SQL_Latin1_General_CP1_CI_AS
Examples :
(Continue…)
43. Maintaining Database Integrity
Statement category | Performs
Maintenance statements | Maintenance tasks on a database, index, or filegroup:
DBCC DBREINDEX
DBCC DBREPAIR
DBCC INDEXDEFRAG
DBCC SHRINKDATABASE
DBCC SHRINKFILE
DBCC UPDATEUSAGE
Miscellaneous statements | Miscellaneous tasks such as enabling row-level locking
or removing a dynamic-link library (DLL) from memory:
DBCC dllname (FREE)
DBCC HELP
DBCC PINTABLE
DBCC ROWLOCK
DBCC TRACEOFF
DBCC TRACEON
DBCC UNPINTABLE
Database Console Commands (DBCC)
DBCC HELP('?')
45. Maintaining Database Integrity
You can force SQL Server to read every page from disk and check the
integrity by executing the DBCC CHECKDB command.
When DBCC CHECKDB is executed, SQL Server performs all the
following actions:
Checks page allocation within the database
Checks the structural integrity of all tables and indexed views
Calculates a checksum for every data and index page to compare
against the stored checksum
Validates the contents of every indexed view
Checks the database catalog
Validates Service Broker data within the database
DBCC CHECKDB checks the logical and physical integrity of every table,
index, and indexed view within the database, along with the contents of
every indexed view, page allocations, Service Broker data, and database
catalog.
(Continue…)
46. Maintaining Database Integrity
To accomplish these checks, DBCC CHECKDB executes the following
commands:
DBCC CHECKALLOC, to check the page allocation of the database
DBCC CHECKCATALOG, to check the database catalog
DBCC CHECKTABLE, for each table and view in the database to
check the structural integrity
DBCC CHECKDB [( 'database_name' | database_id | 0
[ , NOINDEX | { REPAIR_ALLOW_DATA_LOSS | REPAIR_FAST | REPAIR_REBUILD } ] )]
[ WITH {[ ALL_ERRORMSGS ] [ , [ NO_INFOMSGS ] ] [ , [ TABLOCK ] ]
[ , [ ESTIMATEONLY ] ] [ , [ PHYSICAL_ONLY ] ] | [ , [ DATA_PURITY ] ] } ]
To check the integrity of your Database execute the following code:
DBCC CHECKDB ('EmployeeDB') WITH NO_INFOMSGS,
ALL_ERRORMSGS
The generic syntax of DBCC CHECKDB :
(Continue…)
47. SQL Server Hidden Stored Procedures
sp_msforeachdb and sp_msforeachtable are very powerful stored procedures.
They allow you to loop through the databases and tables in your instance and run
commands against them.
Both of the stored procedures use a question mark as a variable substitution
character. When using sp_msforeachdb, the "?" returns the database name, and
when using sp_msforeachtable, the "?" returns the table name.
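The "?" substitution described above amounts to a simple text replacement performed once per database or table. A minimal sketch (illustrative only; the real procedures iterate server-side, and the function name here is hypothetical):

```python
# Sketch of the "?" substitution that sp_msforeachdb performs for
# each database name before executing the command.

def for_each_db(databases, command_template):
    """Return the command text that would run against each database."""
    return [command_template.replace("?", name) for name in databases]

cmds = for_each_db(
    ["master", "Realty"],
    "PRINT '?'; EXEC [?].dbo.sp_spaceused",
)
assert cmds[1] == "PRINT 'Realty'; EXEC [Realty].dbo.sp_spaceused"
```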
To change the owner of each database in the instance to sa :
sp_msforeachdb 'IF ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN
print ''?''
exec [?].dbo.sp_changedbowner ''sa''
END
Maintaining Database Integrity (Continue…)
48. Review
1 - You are the database administrator at Blue Yonder Airlines and are primarily
responsible for the Reservations database, which runs on a server running
SQL Server 2008. In addition to customers booking flights through the
company’s Web site, flights can be booked with several partners. Once an
hour, the Reservations database receives multiple files from partners,
which are then loaded into the database using the Bulk Copy Program
(BCP) utility. You need to ensure that you can recover the database to any
point in time while also maximizing the performance of import routines. How
would you configure the database to meet business requirements?
A. Enable AUTO_SHRINK
B. Set PARAMETERIZATION FORCED on the database
C. Configure the database in the Bulk-logged recovery model
D. Configure the database in the Full recovery model
49. Review (Continue …)
2 - Which commands are executed when you run the DBCC CHECKDB
command? (Check all that apply.)
A. DBCC CHECKTABLE
B. DBCC CHECKIDENT
C. DBCC CHECKCATALOG
D. DBCC FREEPROCCACHE
50. 1- Correct Answer: D
A. Incorrect: The AUTO_SHRINK option does not ensure that the
database can be recovered to any point in time.
B. Incorrect: Forced parameterization does not ensure that the database
can be recovered to any point in time.
C. Incorrect: While the bulk-logged recovery model allows maximum
performance and you can still create transaction log backups, you
cannot recover a database to a point in time during which a minimally
logged operation is executing.
D. Correct: The full recovery model ensures that you can always recover
the database to any point in time.
Answers
51. 2- Correct Answers: A and C
A. Correct: A DBCC CHECKDB command executes DBCC
CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKCATALOG.
B. Incorrect: DBCC CHECKIDENT is used to check, fix, or reseed an
identity value.
C. Correct: A DBCC CHECKDB command executes DBCC
CHECKTABLE, DBCC CHECKALLOC, and DBCC CHECKCATALOG.
D. Incorrect: DBCC FREEPROCCACHE clears the contents of the query
cache.
Answers (Continue …)