This document discusses database backup strategies and procedures in SQL Server. Some key points:
- SQL Server uses a backup engine optimized for speed: it grabs database pages during a backup without regard to page order, and multiple threads can write pages simultaneously to backup devices.
- A full backup captures all used data pages in the database. Transaction log and differential backups capture only changed pages since the previous backups.
- Partial backups can reduce size by backing up only specific filegroups. File/filegroup backups allow restoring portions of a database.
- Maintenance plans provide a graphical tool for automating common backup and maintenance tasks like reindexing.
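The full-versus-differential distinction above can be illustrated with a toy model. This is a hedged sketch, not SQL Server's actual implementation: `ToyDatabase` and its change-tracking set are invented names standing in for the engine's differential bitmap.

```python
# Toy model (illustrative only): full vs. differential backups over a set of
# database pages. The changed_since_full set plays the role of the engine's
# differential bitmap.

class ToyDatabase:
    def __init__(self, pages):
        # pages: dict of page_id -> contents
        self.pages = dict(pages)
        self.changed_since_full = set()

    def write_page(self, page_id, contents):
        self.pages[page_id] = contents
        self.changed_since_full.add(page_id)

    def full_backup(self):
        # A full backup captures every used page and resets the differential base.
        self.changed_since_full.clear()
        return dict(self.pages)

    def differential_backup(self):
        # A differential backup captures only pages changed since the last full.
        return {p: self.pages[p] for p in self.changed_since_full}


db = ToyDatabase({1: "a", 2: "b", 3: "c"})
full = db.full_backup()
db.write_page(2, "b2")
diff = db.differential_backup()
# Restore = full backup overlaid with the differential.
restored = {**full, **diff}
print(sorted(diff))      # only page 2 changed since the full backup
print(restored[2])
```

Note how restoring is just the full image with the differential applied on top, which is why a differential strategy never needs more than two restore steps.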
The document provides information on how to use the Database Engine Tuning Advisor (DTA) in SQL Server. It discusses how DTA works with SQL Trace output to analyze workload queries and make recommendations such as adding or dropping indexes. It also describes the four main steps to an analysis using DTA: generating a workload, starting DTA and connecting to a database, selecting the workload, and specifying tuning options. Finally, it discusses how DTA evaluates the cost of queries against recommendations to identify indexes and partitions that could improve performance.
System Monitor is a Microsoft Windows utility that allows administrators to capture performance counters about hardware, operating systems, and applications. It uses a polling architecture to gather numeric statistics from counters exposed by components at user-defined intervals. The counters are organized in a three-level hierarchy of counter object, counter, and counter instance. System Monitor can be used to analyze hardware bottlenecks by monitoring queue lengths for processors, disks, and networks. It also helps optimize SQL Server performance by capturing events using SQL Server Profiler.
System Monitor is a Microsoft Windows utility that allows administrators to capture performance counters about hardware, operating systems, and applications. It uses a polling architecture to gather numeric statistics from counters exposed by components at user-defined intervals. The counters are organized in a three-level hierarchy of counter object, counter, and counter instance. System Monitor can be used to capture counter logs for analysis to troubleshoot issues like bottlenecks. It is recommended to select counter objects instead of individual counters to ensure all necessary data is captured.
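The three-level hierarchy of counter object, counter, and counter instance shows up directly in Windows counter paths of the form `\Object(Instance)\Counter`. As a small illustration (the parser below is a sketch I wrote for this document, not part of any Windows API):

```python
# Hedged sketch: parse a Windows performance counter path of the form
# \Object(Instance)\Counter into System Monitor's three-level hierarchy.

import re

def parse_counter_path(path):
    """Split a counter path into (object, instance, counter).

    The instance part may be absent, e.g. \\Memory\\Pages/sec.
    """
    m = re.fullmatch(r"\\([^\\(]+)(?:\(([^)]*)\))?\\(.+)", path)
    if not m:
        raise ValueError(f"not a counter path: {path!r}")
    obj, instance, counter = m.groups()
    return obj, instance, counter

print(parse_counter_path(r"\Processor(_Total)\% Processor Time"))
print(parse_counter_path(r"\Memory\Pages/sec"))
```

The `_Total` instance aggregates across all instances of an object, which is why the advice above to capture whole counter objects (rather than individual counters) costs little and avoids missing data.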
This document summarizes the main parts of an Oracle AWR report, including the snapshot details, load profile, top timed foreground events, time model statistics, and SQL section. The time model statistics indicate that 86.45% of database time was spent executing SQL statements. The top foreground event was waiting for database file sequential reads, taking up 62% of database time.
The document discusses Oracle database performance tuning. It covers identifying and resolving performance issues through tools like AWR and ASH reports. Common causes of performance problems include wait events, old statistics, incorrect execution plans, and I/O issues. The document recommends collecting specific data when analyzing problems and provides references and scripts for further tuning tasks.
Earl Shaffer: Oracle Performance Tuning pre-12c (11g) AWR Uses
Earl Shaffer will give a presentation on using Automatic Workload Repository (AWR), Automatic Database Diagnostic Monitor (ADDM), and Active Session History (ASH) to monitor and tune an Oracle database. He has over 30 years of experience as an Oracle DBA. The presentation will cover the basics of AWR, real examples of AWR reports and queries, and tips on using AWR, ADDM, and ASH to proactively manage database performance.
The document discusses Oracle Database performance tuning. It begins by defining performance as the accepted throughput for a given workload. Performance tuning is defined as optimizing resource use to increase throughput and minimize contention. A performance problem occurs when database tasks do not complete in a timely manner, such as SQL running longer than usual or users facing slowness. Performance problems can be caused by contention for resources, overutilization of the system, or poorly written SQL. The document discusses various performance diagnostics tools and concepts like wait events, enqueues, I/O performance, and provides examples of how to analyze issues related to these areas.
This document discusses SQL Server endpoints and security. It describes how endpoints control connections to SQL Server instances and define acceptable communication methods. Endpoints have transports like TCP and payloads that determine allowed traffic types. Access to endpoints can be controlled through permissions. Various endpoint types like for database mirroring have additional configuration options. The document also covers creating principals like logins and users, roles, and configuring the SQL server surface area to restrict features and harden security.
This document provides an overview of database performance tuning with a focus on SQL Server. It begins with background on the author and history of databases. It then covers topics like indices, queries, execution plans, transactions, locking, indexed views, partitioning, and hardware considerations. Examples are provided throughout to illustrate concepts. The goal is to present mostly vendor-independent concepts with a "SQL Server flavor".
Database tuning is the process of optimizing a database to maximize performance. It involves activities like configuring disks, tuning SQL statements, and sizing memory properly. Database performance issues commonly stem from slow physical I/O, excessive CPU usage, or latch contention. Tuning opportunities exist at the level of database design, application code, memory settings, disk I/O, and eliminating contention. Performance monitoring tools like the Automatic Workload Repository and wait events help identify problem areas.
This document discusses SQL Server troubleshooting and performance monitoring. It begins with the basics of using tools like logs, Performance Monitor, traces, and third-party applications. It emphasizes starting monitoring before issues arise to establish baselines and identify bottlenecks. Common issues involve memory, processors, disks, queries, and maintenance. Specific performance counters are outlined to monitor these resources. Other troubleshooting aids discussed include dynamic management views, trace flags, and the Profiler tool. The roles of different database instances and importance of database design and queries are also covered.
This document provides an overview of Oracle database concepts including physical and logical structures, the system global area (SGA) and program global area (PGA), background processes, and the computer science database instance details. Specifically, it describes datafiles, control files, redo logs, tablespaces, segments, and schemas as logical structures and explains how the SGA contains the database buffer cache, redo log buffer, and shared pool. It also outlines several important background processes like SMON, PMON, DBWR, LGWR, and CKPT.
This document discusses how to optimize performance in SQL Server. It covers:
1) Why performance tuning is necessary to allow systems to scale, improve performance, and save costs.
2) How to optimize SQL Server performance by addressing CPU, memory, I/O, and other factors like compression and partitioning.
3) How to optimize the database for performance through techniques like schema design, indexing, locking, and query optimization.
This document provides an overview of performance monitoring and optimization for SQL Server databases. It discusses monitoring database activity using tools like SQL Profiler and Activity Monitor, identifying bottlenecks, using the Database Engine Tuning Advisor to generate optimization recommendations, and addressing issues related to processes, locking, and deadlocks. Best practices emphasized establishing a performance baseline, making incremental changes while measuring impact, and focusing on specific issues to optimize real-world workloads.
The document provides an overview of Oracle architecture including:
- Data is stored in data blocks which make up extents that form segments within tablespaces. Segments represent database objects like tables and indexes.
- The system global area (SGA) resides in memory and caches data and structures for efficient processing. It includes the database buffer cache, redo log buffer, and shared pool.
- Server processes handle SQL statements by parsing, executing, and returning results. Background processes perform functions like checkpoint, recovery, and writing data to disk.
- Transactions are written to the redo log and undo segments maintain rollback information. This supports data consistency, recovery, and rolling back transactions.
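The storage hierarchy described above (blocks make up extents, extents make up segments) lends itself to a small worked example. This is an illustrative sketch with example sizes; the block size and extent counts are assumptions, not values from the source.

```python
# Illustrative sketch of Oracle's storage hierarchy: data blocks make up
# extents, and extents make up segments (which live in tablespaces).
# The block size and extent sizes below are arbitrary example values.

BLOCK_SIZE = 8192  # bytes; a common (but not universal) Oracle block size

def segment_bytes(extents_in_blocks):
    """Size of a segment given the block count of each of its extents."""
    return sum(n * BLOCK_SIZE for n in extents_in_blocks)

# A hypothetical table segment with three extents of 8, 128, and 128 blocks:
table_segment = [8, 128, 128]
print(segment_bytes(table_segment))  # 2162688 bytes, about 2 MB
```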
About the course:
This Oracle performance tuning online course is designed for audiences who want to learn the basics and core concepts of Oracle performance tuning (PT). You will learn about introductory topics, basic tuning diagnostics, how to use the Automatic Workload Repository, defining problems, how to create AWR baselines, monitoring applications, and more. All Oracle performance tuning classes will be live and interactive.
Course Target:
Oracle performance tuning online training is designed to teach you the fundamentals of PT.
Understand basic tuning diagnostics.
Learn how to use the Automatic Workload Repository (AWR).
Gain knowledge of using metrics and alerts.
Gain a clear understanding of how to monitor applications.
Identify problem SQL statements.
Learn how to influence the optimizer.
Understand SQL performance management.
Tune the shared pool, I/O, buffer cache, PGA, and temporary space.
Course Targeted Audience:
Any candidate can join our Oracle performance tuning online course.
Professionals from any background can join.
Researchers can also participate in this course.
Prerequisites:
Candidates should have basic computer knowledge.
A basic understanding of databases is recommended.
Training Format:
Kernel Training provides an Oracle performance tuning online course led by a real-time expert.
Registered candidates can interact with the instructor in live interactive sessions.
Candidates will have lifetime access to the learning material.
Companies Using Oracle PT:
Major international IT companies perform Oracle performance tuning for their operations.
This document discusses database backup and recovery strategies in Oracle. It covers different types of backups including logical, physical, online and offline backups. It emphasizes the importance of backups for recovery purposes. Different failure scenarios are described such as statement failure, user process failure, user error, instance failure and media failure. The roles of logical backups using Export and archiving redo logs are explained. Considerations for backup strategies include business needs, availability requirements, transaction volumes and read-only tablespaces. Testing backups is recommended to ensure recovery success.
The document discusses database backup and recovery strategies in Oracle. It covers the different types of backups including logical, physical offline ("cold") backups, and physical online ("hot") backups. It also discusses archiving redo logs, testing backup strategies, and implications of backup methods like downtime required and recovery time. Failure scenarios like statement failure, user process failure, user error, instance failure, and media failure are also summarized. Finally, it discusses logical backups using the Oracle Export utility and parameters that can be passed to it.
This document discusses backup and recovery of SQL Server databases. It explains that there are three types of database backups: full, differential, and transaction log backups. It provides steps for creating a full database backup and restoring from a backup. Backups protect from data loss due to failures like hardware issues, user errors, or disasters. However, backups require storage space and recovery is only possible from the last full backup. Regular backups are important for database administration and disaster recovery.
The document provides an overview of Oracle database physical and logical structures, background processes, backup methods, and administrative tasks. It describes key components like datafiles, control files, redo logs, tablespaces, schemas and segments that make up the physical and logical structure. It also explains the system global area (SGA) and program global area (PGA) memory structures and background processes like SMON, PMON, DBWR, LGWR and ARCH that manage the database instance. Common backup methods like cold backups, hot backups and logical exports are summarized. Finally, it lists some daily, weekly and other administrative tasks.
SQL Server's High Availability Technologies
SQL Server provides several high availability technologies to protect against server, site, and database failures including failover clustering, database mirroring, log shipping, peer-to-peer replication, database snapshots, and backup and restore. Failover clustering protects at the server-level by allowing nodes to failover. Database mirroring and log shipping protect databases by copying transaction logs from a principal database to a mirror or secondary database. Peer-to-peer replication replicates changes between databases for availability. Snapshots enable quick recovery of databases. Backup and restore reduces recovery time through different backup types and transaction log application.
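The log-shipping idea (copying transaction logs from a principal to a secondary and replaying them) can be sketched in a few lines. This is a toy model under stated assumptions: `Principal`, `Secondary`, and the log format are invented for illustration, not SQL Server types.

```python
# Toy sketch of log shipping: the principal appends transaction log records,
# which are periodically copied to and replayed on a secondary database.

class Principal:
    def __init__(self):
        self.data = {}
        self.log = []

    def update(self, key, value):
        # Every change is logged before (conceptually) being applied.
        self.log.append((key, value))
        self.data[key] = value

class Secondary:
    def __init__(self):
        self.data = {}
        self.applied = 0  # how many log records have been replayed so far

    def apply_log(self, log):
        # Replay only the log records not yet applied.
        for key, value in log[self.applied:]:
            self.data[key] = value
        self.applied = len(log)

p, s = Principal(), Secondary()
p.update("x", 1)
p.update("y", 2)
s.apply_log(p.log)        # ship and restore the log on the secondary
print(s.data == p.data)   # True: the secondary has caught up
```

The gap between shipping intervals is exactly the data-loss exposure of log shipping, which is why mirroring (continuous log record transfer) offers tighter recovery points.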
ISACA JOURNAL VOLUME 1, 2012
Feature
The ability to restore databases from valid backups is a vital part of ensuring business continuity. Backup integrity and restorations are an important piece of the IT Governance Institute's IT Control Objectives for Sarbanes-Oxley, 2nd Edition. In many instances, IT auditors merely confirm whether backups are being performed either to disk or to tape, without considering the integrity or viability of the backup media.
This article covers the topics related to data loss and the types of database backup and recovery available. Best practices that can assist an auditor in assessing the effectiveness of database backup and recovery are also provided. This article focuses on the technologies and capabilities of the Oracle relational database management system (RDBMS) and Microsoft (MS) SQL Server because, together, they cover approximately 40 percent of all database installations. Figure 1 provides a short comparison of Oracle and MS SQL Server.
One of the key responsibilities of a database administrator (DBA) is to prepare for the possibility of media, hardware and software failure as well as to recover databases during a disaster. Should any of these failures occur, the major objective is to ensure that the database is available to users within an acceptable time period, while ensuring that there is no loss of data. DBAs should evaluate their preparedness to respond effectively to such situations by answering the following questions:
• How confident is the DBA that the data on which the company business depends are backed up successfully and that the data can be recovered from these backups within the permissible time limits, per a service level agreement (SLA) or recovery time objective, as specified in the organization's disaster recovery plan?
• Has the DBA taken measures to draft and test the procedures to protect as well as recover the databases from numerous types of failures?
The following is a checklist for database backup and recovery procedures that are explained throughout this article:
1. Develop a comprehensive backup plan.
2. Perform effective backup management.
3. Perform periodic database restore testing.
4. Have backup and recovery SLAs drafted and communicated to all stakeholders.
5. Have the disaster recovery plan (DRP) database portion drafted and documented.
6. Keep your knowledge and know-how on database and OS backup and recovery tools up to date.
Comprehensive Backup Plan
DBAs are responsible for making a comprehensive backup plan for databases for which they are accountable. The backup plan should include all types of RDBMSs within the enterprise and should cover the following areas:
• Decide what needs to be backed up. It is imperative that the DBA be aware of database and related OS and application components that need to be backed up, whether via an online backup or an offline cold backup. The following are d ...
This document provides an overview of the physical and logical structures of an Oracle database, including datafiles, control files, redo logs, and tablespaces. It also describes Oracle instances, the system global area (SGA), program global area (PGA), and background processes. Administrative tasks like backups, monitoring, and patching are discussed. Specific details are given about the Computer Science database, including its server, tablespaces, and 4mm DAT tape backup method.
The document discusses database recovery techniques. It describes ARIES, an algorithm that recovers a database to consistency after a crash in three phases: analysis identifies dirty pages, redo repeats logged actions to restore state, and undo undoes uncommitted transactions. The write-ahead logging protocol forces log writes before data page updates to allow recovery using the log. Checkpointing records dirty pages to reduce redo work during recovery.
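The three ARIES phases described above can be sketched over a toy log. This is a deliberate simplification: real ARIES uses log sequence numbers, a dirty page table, and compensation log records, none of which appear here; only the analysis/redo/undo shape is shown.

```python
# Simplified sketch of ARIES-style recovery (analysis, redo, undo) over a toy
# log of tuples. Not the real algorithm: no LSNs, dirty page table, or CLRs.

def recover(log):
    db = {}

    # Analysis: find which transactions committed before the crash.
    committed = {rec[1] for rec in log if rec[0] == "COMMIT"}

    # Redo: repeat history - reapply every logged update, committed or not.
    for rec in log:
        if rec[0] == "UPDATE":
            _, txn, key, old, new = rec
            db[key] = new

    # Undo: scan the log backwards and restore before-images for updates of
    # uncommitted ("loser") transactions.
    for rec in reversed(log):
        if rec[0] == "UPDATE" and rec[1] not in committed:
            _, txn, key, old, new = rec
            db[key] = old

    return db

log = [
    ("UPDATE", "T1", "x", 0, 1),
    ("UPDATE", "T2", "y", 0, 5),
    ("COMMIT", "T1"),
    ("UPDATE", "T2", "y", 5, 7),
    # crash: T2 never committed
]
print(recover(log))  # T1's update survives; T2's updates are undone
```

Redoing everything and then undoing losers (rather than skipping losers during redo) is the "repeating history" principle that lets ARIES tolerate crashes during recovery itself.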
This document discusses database backup and recovery. It defines backup as additional copies of data for restoration if the primary copy is lost or corrupted. There are several types of backups including full, incremental, differential, and mirror backups. Recovery brings the database back to a prior consistent state, using techniques like log files, check pointing, and immediate or deferred transaction updates. Factors like backup location, test restores, automation, and database design can influence recovery duration. Alternatives to traditional backup and recovery include standby databases, replication, and disk mirroring.
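The practical difference between the backup types just listed is the restore sequence each one implies. The sketch below illustrates that under stated assumptions (a simple `(time, kind)` history; names are invented for illustration): a differential strategy restores the last full plus only the newest differential, while an incremental strategy restores the last full plus every incremental after it.

```python
# Hedged sketch: given a backup history, compute the minimal restore sequence.

def restore_sequence(history, strategy):
    """history: list of (time, kind) with kind in {'full', 'diff', 'incr'}."""
    last_full = max(t for t, k in history if k == "full")
    seq = [(last_full, "full")]
    if strategy == "differential":
        diffs = [(t, k) for t, k in history if k == "diff" and t > last_full]
        if diffs:
            seq.append(max(diffs))          # only the newest differential
    elif strategy == "incremental":
        seq += sorted((t, k) for t, k in history
                      if k == "incr" and t > last_full)  # all of them, in order
    return seq

diff_history = [(1, "full"), (2, "diff"), (3, "diff")]
incr_history = [(1, "full"), (2, "incr"), (3, "incr")]
print(restore_sequence(diff_history, "differential"))
print(restore_sequence(incr_history, "incremental"))
```

This is the classic trade-off: differentials grow larger over time but keep restores short, while incrementals stay small but lengthen the restore chain.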
This document discusses backup and recovery strategies for databases using Oracle Recovery Manager (RMAN). It recommends maintaining redundancy through techniques like RAID and mirroring to prevent the need for recovery. It emphasizes the importance of keeping the redundancy set that is needed for recovery on separate disks from the primary database files. It also stresses the need for frequent, redundant backups of archived redo logs to enable recovery to any point in time.
DBRC (Database Recovery Control) is an IMS tool that controls logging and recovery of IMS databases. It stores database recovery information in RECON (Recovery Control) data sets. DBRC manages log control, recovery control, and share control through these records. It controls online log data sets, secondary log data sets, and other logs to enable full or time stamp recovery of databases. DBRC also controls database sharing levels to determine which subsystems can access and update databases.
1 ISACA JOURNAL VOLUME 1, 2012FeatureThe ability to r.docxhoney725342
1 ISACA JOURNAL VOLUME 1, 2012
Feature
The ability to restore databases from valid
backups is a vital part of ensuring business
continuity. Backup integrity and restorations
are an important piece of the IT Governance
Institute’s IT Control Objectives for Sarbanes-
Oxley, 2nd Edition. In many instances, IT auditors
merely confirm whether backups are being
performed either to disk or to tape, without
considering the integrity or viability of the
backup media.
This article covers the topics related to
data loss and the types of database backup
and recovery available. Best practices that can
assist an auditor in assessing the effectiveness
of database backup and recovery are also
provided. This article focuses on the technologies
and capabilities of the Oracle relational
database management system (RDBMS) and
Microsoft (MS) SQL Server because, together,
they cover approximately 40 percent of all
database installations. Figure 1 provides a short
comparison of Oracle and MS SQL Server.
One of the key responsibilities of a database
administrator (DBA) is to prepare for the
possibility of media, hardware and software
failure as well as to recover databases during a
disaster. Should any of these failures occur, the
major objective is to ensure that the database
is available to users within an acceptable time
period, while ensuring that there is no loss of
data. DBAs should evaluate their preparedness
to respond effectively to such situations by
answering the following questions:
• How confident is the DBA that the data on which
the company business depends are backed up
successfully and that the data can be recovered
from these backups within the permissible time
limits, per a service level agreement (SLA)
or recovery time objective, as specified in the
organization’s disaster recovery plan?
• Has the DBA taken measures to draft and test
the procedures to protect as well as recover the
databases from numerous types of failures?
The following is a checklist for database
backup and recovery procedures that are
explained throughout this article:
1. Develop a comprehensive backup plan.
2. Perform effective backup management.
3. Perform periodic databases restore testing.
4. Have backup and recovery SLAs drafted and
communicated to all stakeholders.
5. Have the disaster recovery plan (DRP)
database portion drafted and documented.
6. Keep your knowledge and know-how on
database and OS backup and recovery tools up
to date.
Comprehensive BaCkup plan
DBAs are responsible for making a
comprehensive backup plan for databases for
which they are accountable. The backup plan
should include all types of RDBMSs within the
enterprise and should cover the following areas:
• Decide what needs to be backed up. It is
imperative that the DBA be aware of database
and related OS and application components
that need to be backed up, whether via an
online backup or an offline cold backup.
The following are d ...
This document provides an overview of the physical and logical structures of an Oracle database, including datafiles, control files, redo logs, and tablespaces. It also describes Oracle instances, the system global area (SGA), program global area (PGA), and background processes. Administrative tasks like backups, monitoring, and patching are discussed. Specific details are given about the Computer Science database, including its server, tablespaces, and 4mm DAT tape backup method.
The document discusses database recovery techniques. It describes ARIES, an algorithm that recovers a database to consistency after a crash in three phases: analysis identifies dirty pages, redo repeats logged actions to restore state, and undo undoes uncommitted transactions. The write-ahead logging protocol forces log writes before data page updates to allow recovery using the log. Checkpointing records dirty pages to reduce redo work during recovery.
This document discusses database backup and recovery. It defines backup as additional copies of data for restoration if the primary copy is lost or corrupted. There are several types of backups including full, incremental, differential, and mirror backups. Recovery brings the database back to a prior consistent state, using techniques like log files, check pointing, and immediate or deferred transaction updates. Factors like backup location, test restores, automation, and database design can influence recovery duration. Alternatives to traditional backup and recovery include standby databases, replication, and disk mirroring.
This document discusses backup and recovery strategies for databases using Oracle Recovery Manager (RMAN). It recommends maintaining redundancy through techniques like RAID and mirroring to prevent the need for recovery. It emphasizes the importance of keeping the redundancy set that is needed for recovery on separate disks from the primary database files. It also stresses the need for frequent, redundant backups of archived redo logs to enable recovery to any point in time.
DBRC (Database Recovery Control) is an IMS tool that controls logging and recovery of IMS databases. It stores database recovery information in RECON (Recovery Control) data sets. DBRC manages log control, recovery control, and share control through these records. It controls online log data sets, secondary log data sets, and other logs to enable full or time stamp recovery of databases. DBRC also controls database sharing levels to determine which subsystems can access and update databases.
This document discusses performing database backups using Oracle's Recovery Manager (RMAN). It covers creating consistent backups without shutting down the database, incremental backups, automating backups with a scheduling strategy, managing backups and viewing reports. Key terms discussed include full vs incremental backups, online vs offline backups, backup sets, image copies, and the flash recovery area.
24 HOP edición Español - Sql server 2014 backup encryption - Percy ReyesSpanishPASSVC
Veremos la mejora en seguridad que significa usar Backup Encryption en SQL Server 2014 así como también su impacto en el rendimiento y sus escenarios de usos.
The document provides an overview of the technical features of Oracle8i Recovery Manager. Key features discussed include proxy copy which allows backups to be offloaded from the Oracle host to a media manager, disk affinity which improves backup performance, enhanced LIST and multiplexed backup set commands, automatic catalog maintenance, and point-in-time recovery of individual tablespaces to improve data availability. The Recovery Manager aims to provide improved ease of backup/recovery administration while maintaining high performance and database availability.
This document discusses database backup and recovery strategies in SQL Server. It covers the different types of backups including full, differential, transaction log, and partial backups. It also describes how to create each type of backup using T-SQL commands. Database recovery is explained as relying on a full backup combined with subsequent transaction log backups to enable point-in-time recovery. Differential backup strategies are also covered as including differentials with only changed data for more frequent updates.
This document discusses resolving the "Backup failed to complete the command BACKUP LOG" error in SQL Server. It explains that this error occurs when trying to backup the transaction log without first performing a full database backup. It recommends using a SQL backup recovery tool to repair corrupted SQL backup files and recover the database. The tool can scan, repair, and recover SQL backup files, preview recovered data, and export recovered tables and records back to SQL Server.
This document discusses various methods for performing database backups, including Recovery Manager (RMAN), Oracle Secure Backup, and user-managed backups. It covers key backup concepts like full versus incremental backups, online versus offline backups, and image copies versus backup sets. The document also provides instructions on configuring backup settings and scheduling automated database backups using RMAN and Enterprise Manager.
The document discusses how to automate tasks in SQL Server using SQL Server Agent. It describes how SQL Server Agent provides a scheduling engine for automating backups, maintenance tasks, and other processes. It explains how to create jobs composed of steps to execute tasks on a schedule. Job steps can run T-SQL, files, packages and more. Schedules can be defined to control frequency and timing of job execution. Failed jobs and steps are logged to help troubleshoot issues. Alerts can be configured to notify administrators of problems or trigger corrective actions.
Policy Based Management in SQL Server 2008 allows administrators to define standard configuration policies and enforce compliance across multiple database instances. Key components of PBM include facets, conditions, policies, targets, and categories. Facets define object types or configuration options that can be controlled. Conditions specify allowed values for facet properties. Policies enforce conditions and can be scheduled, prevent changes, or log violations. Categories group related policies and a mandate setting enforces compliance when an instance subscribes to a category.
Table partitioning allows large tables to be split across multiple filegroups to improve performance. A partition function defines the data ranges and a partition scheme maps those ranges to filegroups. Tables, indexes, and views can then be created on partition schemes. The SWITCH operator can move partitions between filegroups with minimal locking to archive old data or distribute it across storage.
The document discusses various aspects of indexes in SQL Server including clustered and nonclustered indexes, index architecture and design, maintaining indexes through page splits and rebuilding/reorganizing indexes. It also covers full text indexes and features such as contains, freetext, stoplists and thesaurus files.
The document discusses different data types that can be used when creating tables in SQL Server, including numeric, character, date/time, binary, XML, spatial, and hierarchyid data types. It provides details on the storage size and valid values for each data type. It also discusses considerations for choosing the appropriate data type based on the intended use and storage optimization.
The document discusses database recovery options in SQL Server, including recovery models (full, bulk-logged, simple), how they affect transaction logging and restore options. It also covers using minimal logging to improve bulk load performance and the PAGE_VERIFY option to detect damaged pages.
The document discusses configuring files and filegroups in SQL Server. It describes how SQL Server uses data files to store database contents and transaction log files to store transactions. It also discusses filegroups, which map database objects to files on disk. The document outlines the types of file extensions (.mdf, .ndf, .ldf) used and how the proportional fill algorithm works. It recommends best practices for configuring files and filegroups when creating a new database. The document also briefly discusses FILESTREAM, the tempdb database, and file naming conventions.
The document discusses database recovery options in SQL Server, including recovery models (full, bulk-logged, simple), transaction logging behavior under each model, and how to configure the recovery model. It also covers using minimal logging to improve bulk load performance and the PAGE_VERIFY option to detect damaged pages.
The document discusses how to configure a SQL Server instance. It covers creating service accounts, understanding collation sequences and authentication modes, installing sample databases, and configuring instances. It also describes how to use SQL Server Configuration Manager to manage services and protocols and how to configure Database Mail to send notifications.
3. Backups are taken to reduce the risk of data loss.
Because it is more common to back up a database than to restore one, the backup engine is optimized for the backup process.
The only two parameters required for a backup are the name of the database and the backup device. Up to 64 devices can be used for a single backup.
Because the backup process is not concerned with the ordering of pages, multiple threads can be used to write pages to the backup devices. When a backup is initiated, the backup engine grabs pages from the data files as quickly as possible, without regard to the order of pages.
The limiting factor for the speed of a backup is therefore the performance of the device to which the backup is being written.
4. SQL Server performs the steps of the backup procedure as follows:
1) Locks the database, blocking all transactions.
2) Places a mark in the transaction log.
3) Releases the database lock.
4) Extracts all pages in the data files and writes them to the backup device.
5) Locks the database, blocking all transactions.
6) Places a second mark in the transaction log.
7) Releases the database lock.
8) Extracts the portion of the log between the two marks and appends it to the backup.
6. Design and implement a well-thought-out backup strategy to suit the needs of your organization:
Perform backups frequently.
Decrease backup times by using compression.
Use various media for backups.
Increase the number of backup copies.
Keep backup copies in different places.
Allocate only a single backup per file.
Use meaningful names for the backup files.
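The compression advice above can be applied directly in T-SQL; a minimal sketch (the database name and backup path are illustrative):

```sql
-- Full backup with compression to shorten backup time and reduce size;
-- CHECKSUM validates pages as they are written, INIT overwrites the file
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;
```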
7. Full Backup
Captures all pages within a database that contain data. Pages that do not contain data are not included in the backup.
The database is fully operational during a full backup. The operations that are not allowed during a full backup are:
Adding or removing a database file.
Shrinking the database.
Partial Backup
Captures only the filegroups that can change. Read-only filegroups are not included, to minimize the size of the backup.
Differential Backup
Captures all extents that have changed since the last full backup. The primary purpose of a differential backup is to reduce the number of transaction log backups that need to be restored. A differential backup has to be applied to a full backup and can’t exist until a full backup has been created.
8. Transaction Log Backup
Every change made to a database has an entry made to the transaction log; a transaction log backup captures those entries.
File/Filegroup Backup
Backs up an individual file or a filegroup.
9. Full backups capture all the used pages across the entire database; pages that do not contain data are not included in the backup.
A backup is never larger, and in most cases is smaller, than the database for
which it is created.
A full backup is the basis for recovering a database and must exist before you
can use a differential or transaction log backup.
Backing up a database to multiple files can lead to a significant reduction in
backup time, particularly for large databases.
An example of a multiple-file (striped) backup is:
BACKUP DATABASE AdventureWorks
TO
DISK = 'AdventureWorks_1.bak',
DISK = 'AdventureWorks_2.bak'
10. Every change made to a database has an entry made to the transaction log. Each log record is assigned a unique number internally called the Log Sequence Number (LSN).
The LSN is an integer value that starts at 1 when the database is created and increments indefinitely. An LSN is never reused for a database and always increments, so it provides a sequence number for every change made to a database.
The contents of a transaction log are broken into two basic parts:
Inactive: contains all the changes that have been committed to the database.
Active: contains all the changes that have not yet been committed.
When a transaction log backup is executed, SQL Server starts with the lowest LSN in the transaction log and writes each successive transaction log record into the backup. As soon as SQL Server reaches the first LSN that has not yet been committed (that is, the oldest open transaction), the transaction log backup completes.
11. The portion of the transaction log that has been backed up is then removed, allowing the space to be reused.
A transaction log backup gathers all committed transactions in the log since the last transaction log backup. Based on the sequence numbers, it is possible to restore one transaction log backup after another to recover a database to any point in time, simply by following the chain of transactions as identified by the LSNs.
If an LSN gap is introduced, you must create a full backup before you can resume backing up the transaction log. Likewise, before you can issue a transaction log backup at all, you must first execute a full backup.
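The log chain described above is built with ordinary log backups; a minimal example (names are illustrative), assuming a full backup already exists and the database uses the Full or Bulk-logged recovery model:

```sql
-- Back up the transaction log; the committed portion of the log
-- is then truncated, freeing the space for reuse
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_Log1.trn';
```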
13. The primary purpose of a differential backup is to reduce the number of transaction log backups that need to be restored.
A differential backup captures all extents that have changed since the last full backup.
For example: if a full backup was taken at midnight and a differential backup occurred every four hours, both the 4 A.M. backup and the 8 A.M. backup would contain all the changes made to the database since midnight.
A differential backup has to be applied to a full backup and can’t exist until a full backup has been created.
Each data file has a special page in its header called the Differential Change Map (DCM). The DCM tracks which extents have changed since the last full backup.
A full backup zeroes out the contents of the DCM.
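A differential backup is requested with the DIFFERENTIAL option; a minimal sketch (names are illustrative):

```sql
-- Capture all extents changed since the last full backup
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_Diff.bak'
WITH DIFFERENTIAL;
```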
14. COPY_ONLY Option
The COPY_ONLY option allows you to create a backup that can be used to build a development or test environment, because it does not affect the database state or the set of backups in production.
A full backup with the COPY_ONLY option does not reset the differential change map page and therefore has no impact on differential backups.
A transaction log backup with the COPY_ONLY option does not remove transactions from the transaction log.
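A copy-only backup is requested with the COPY_ONLY option; a minimal sketch (names are illustrative):

```sql
-- Out-of-band full backup that does not reset the DCM,
-- so the production differential chain is unaffected
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_CopyOnly.bak'
WITH COPY_ONLY;
```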
15. File or filegroup backups are used to reduce the footprint of a backup, as they target only a portion of a database.
Note that successful recovery of a database requires all the files within a filegroup, not just an individual file.
Filegroup backups can be used in conjunction with differential and transaction log backups to recover a portion of the database in the event of a failure.
The database can remain online and accessible to applications during the restore operation; only the portion of the database being restored is offline.
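A filegroup backup names the filegroup explicitly; a minimal sketch (the filegroup name is illustrative):

```sql
-- Back up only one filegroup instead of the whole database
BACKUP DATABASE AdventureWorks
FILEGROUP = 'SECONDARY'
TO DISK = 'D:\Backups\AdventureWorks_Secondary.bak';
```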
16. Partial Backup
To reduce the size of a backup to only the filegroups that can
change, you can perform a partial backup.
When a partial backup is executed, SQL Server backs up the
primary filegroup, all read/write filegroups, and any explicitly
specified read-only filegroups.
Partial backups are performed by specifying the
READ_WRITE_FILEGROUPS option as follows :
BACKUP DATABASE database_name
READ_WRITE_FILEGROUPS [,<file_filegroup_list>]
TO <backup_device>
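A concrete instance of the syntax above (names are illustrative):

```sql
-- Primary filegroup plus all read/write filegroups;
-- read-only filegroups are skipped unless listed explicitly
BACKUP DATABASE AdventureWorks
READ_WRITE_FILEGROUPS
TO DISK = 'D:\Backups\AdventureWorks_Partial.bak';
```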
17. Page Corruption
When SQL Server encounters a corrupt page, it detects and quarantines the page using a checksum:
ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM
When SQL Server writes a page to disk, a checksum is calculated for the page. When you enable page verification, each time a page is read from disk, SQL Server computes a new checksum and compares it to the checksum stored on the page. If the checksums do not match, SQL Server returns an error and logs the page into a table in the msdb database.
SQL Server has a protection mechanism in place to protect your database from massive corruption: you are limited to a total of 1,000 corrupt pages in a database. When you reach the corrupt-page limit, SQL Server takes the database offline and places it in a SUSPECT state to protect it from further damage.
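The msdb table referred to above is suspect_pages; the pages SQL Server has quarantined can be inspected directly:

```sql
-- List pages that have failed verification, with how often and when
SELECT database_id, file_id, page_id,
       event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```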
18. Maintenance Plans
Maintenance plans provide a mechanism to graphically create job workflows that support common administrative functions such as backup, re-indexing, and space management.
The most common tasks performed by maintenance plans are database backups.
Tasks that are supported by maintenance plans are:
Backing up databases and transaction logs
Shrinking databases
Re-indexing
Updating statistics
Performing consistency checks
Instead of writing the code to back up a database, you can configure a maintenance plan to perform the backup operations that you need to support your disaster recovery requirements.
19. Maintenance Plans (continued)
Executing a maintenance plan:
1) Loads the SQL Server Integration Services (SSIS) engine.
2) The .NET Framework then interprets the tasks within the package, constructs the necessary backup statements, and executes the generated code.
20. Certificates and Master Keys
Transparent Data Encryption (TDE) allows us to encrypt the entire database without requiring any changes to its structure.
It protects the database in situations where someone breaches physical and login security and obtains access to the .mdf (data) files or .bak (backup) files. Without TDE or another third-party encryption solution, those files could be taken off site and attached or restored.
SQL Server offers two levels of encryption: database-level and cell-level. Both use the key management hierarchy.
When TDE is enabled on a database, all of its backups are encrypted.
21. Certificates and Master Keys (continued)
ENCRYPTING A DATABASE
1) The first step in implementing TDE is creating a master key:
USE master
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some password';
2) Create a certificate:
USE master
GO
CREATE CERTIFICATE tdeCertificate WITH SUBJECT = 'TDE Certificate';
If we want to restore an encrypted database to another server, the certificate used to encrypt the database must be loaded onto the other server to enable the restore.
22. Certificates and Master Keys (continued)
3) We can back up the certificate and private key as follows:
-- Back up the certificate
-- Required if restoring encrypted databases to another server
-- Also required for server rebuild scenarios
USE master
GO
BACKUP CERTIFICATE tdeCertificate TO FILE = 'D:\tdeCertificate.backup'
WITH PRIVATE KEY (
FILE = 'E:\tdeCertificatePrivateKey.backup',
ENCRYPTION BY PASSWORD = 'certsPassword')
23. Certificates and Master Keys (continued)
4) Optionally, enable SSL on the server to protect data in transit.
Perform the following steps in the user database; these require CONTROL permissions on the database.
5) Create the database encryption key (DEK), used for encrypting the database with Transparent Data Encryption:
-- Create a database encryption key
-- AES encryption algorithm with a 128-bit key
USE [AdventureWorks2008]
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE tdeCertificate
GO
6) Enable TDE. This command starts a background thread (referred to as the encryption scan), which runs asynchronously:
ALTER DATABASE myDatabase SET ENCRYPTION ON
24. SQL Server Encryption Key Hierarchy with TDE and EKM
25. Service Master Key (SMK)
The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows Data Protection API and the local machine key.
Each time you change the SQL Server service account or service account password, the Service Master Key is regenerated.
The first action you should take after an instance is started is to back up the Service Master Key. You should also back it up immediately following a change to the service account or service account password:
BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'
26. Database Master Key (DMK)
The database master key (DMK) is the root of the encryption hierarchy in a database.
To ensure that you can access certificates, asymmetric keys, and symmetric keys within a database, you need to have a backup of the DMK:
BACKUP MASTER KEY TO FILE = 'path_to_file'
ENCRYPTION BY PASSWORD = 'password'
Before you can back up a DMK, it must be open. By default, a DMK is encrypted with the Service Master Key. If the DMK is encrypted only with a password, you must first open it by using the following command:
USE <database name>;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<SpecifyStrongPasswordHere>';
27. Certificates
Certificates are used to encrypt data as well as digitally sign code
modules. Although you could create a new certificate to replace the
digital signature in the event of the loss of a certificate, you must
have the original certificate to access any data that was encrypted
with the certificate.
Certificates have both a public and a private key. You should back up
a certificate immediately after creation by using the following
command.
BACKUP CERTIFICATE certname TO FILE = 'path_to_file'
[ WITH PRIVATE KEY
( FILE = 'path_to_private_key_file' ,
ENCRYPTION BY PASSWORD = 'encryption_password'
[ , DECRYPTION BY PASSWORD = 'decryption_password' ] ) ]
You can back up just the public key by using the following command:
BACKUP CERTIFICATE certname TO FILE = 'path_to_file'
If you restore a backup of a certificate containing only the public key, SQL
Server generates a new private key.
28. Validating a Backup
To validate a backup, execute the following command:
RESTORE VERIFYONLY FROM <backup_device>
When a backup is validated, SQL Server performs the following checks:
Calculates a checksum for the backup and compares it to the checksum stored in the backup file.
Verifies that the header of the backup is correctly written and valid.
Traverses the page chain to ensure that all pages are contained in the database and can be located.
29. All restore sequences begin with either a full backup or a filegroup backup.
When restoring backups, you have the option to terminate the restore process at any point and make the database available for transactions.
After the database or filegroup being restored has been brought online, you can’t apply any additional differential or transaction log backups to it.
30. The generic syntax for restoring a full backup is:
RESTORE DATABASE { database_name | @database_name_var }
[ FROM <backup_device> [ ,...n ] ]
[ WITH { [ RECOVERY | NORECOVERY |
STANDBY = { standby_file_name | @standby_file_name_var } ]
| , < general_WITH_options > [ ,...n ]
| , < replication_WITH_option >
| , < change_data_capture_WITH_option >
| , < service_broker_WITH options >
| , < point_in_time_WITH_options — RESTORE_DATABASE >
} [ ,...n ] ]
<general_WITH_options> [ ,...n ]::=
--Restore Operation Options
MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'
[ ,...n ] | REPLACE | RESTART | RESTRICTED_USER
When a RESTORE command is issued, if the database does not already exist within the instance, SQL Server creates the database along with all files underneath it.
The REPLACE option is used to force the restore over the top of an existing database.
31. Database state after the
Restore has Completed
If you want the database to be online and accessible for transactions
after the RESTORE operation has completed, you need to specify the
RECOVERY option.
When a RESTORE is issued with the NORECOVERY option, the
restore completes, but the database is left in a RECOVERING state
such that subsequent differential and / or transaction log backups can
be applied.
The STANDBY option can be used to allow you to issue SELECT
statements against the database while still issuing additional
differential and/or transaction log restores.
If you restore a database with the STANDBY option, an additional
undo file is created to make the database consistent as of the last
restore that was applied.
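To illustrate the difference between these options, the following sketch first restores a full backup WITH NORECOVERY and then applies a log backup WITH STANDBY (database, backup, and undo-file names are hypothetical):

```sql
-- Leave the database in a restoring state so more backups can be applied.
RESTORE DATABASE SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Full.bak'
WITH NORECOVERY;

-- Apply a log backup while allowing read-only access between restores.
-- The standby (undo) file makes the database consistent for SELECTs.
RESTORE LOG SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Log1.trn'
WITH STANDBY = N'E:\Log\SalesDB_Undo.dat';
```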
32. Restoring a Differential Backup
A differential restore uses the same command syntax as a full
database restore.
When the full backup has been restored, you can then restore the most
recent differential backup.
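A minimal sketch of that sequence, using hypothetical names: restore the full backup WITH NORECOVERY, then the most recent differential WITH RECOVERY.

```sql
-- Restore the full backup first, leaving the database restoring.
RESTORE DATABASE SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Full.bak'
WITH NORECOVERY;

-- Then restore only the most recent differential and bring it online.
RESTORE DATABASE SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Diff.bak'
WITH RECOVERY;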
33. Restoring a Transaction Log Backup
RESTORE LOG { database_name | @database_name_var }
[ <file_or_filegroup_or_pages> [ ,...n ] ]
[ FROM <backup_device> [ ,...n ] ]
[ WITH {[ RECOVERY | NORECOVERY |
STANDBY = {standby_file_name | @standby_file_name_var }]
| , <general_WITH_options> [ ,...n ]
| , <replication_WITH_option>
| , <point_in_time_WITH_options —RESTORE_LOG> } [ ,...n ] ]
<point_in_time_WITH_options—RESTORE_LOG>::=
| { STOPAT = { 'datetime' | @datetime_var }
| STOPATMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime' ]
| STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' }
[ AFTER 'datetime' ]
The STOPAT option allows you to specify a date and time to which SQL
Server restores.
The STOPATMARK and STOPBEFOREMARK options allow you to specify
either an LSN or a transaction log mark to use as the stopping point in
the restore operation.
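As a sketch of a point-in-time restore (database name, backup path, and timestamp are hypothetical), a log restore stopping at a specific moment looks like this:

```sql
-- Roll the database forward only to the specified point in time.
RESTORE LOG SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Log1.trn'
WITH STOPAT = N'2024-03-15 14:30:00',
     RECOVERY;
```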
34. Restore a Corrupt Page
Page corruption occurs when the contents of a page are not consistent.
It usually occurs when a disk controller begins to fail.
Strategy for recovery:
Index pages – drop and re-create the index.
Data pages – restore from backup.
Page restore has several requirements :
The database must be in either the Full or Bulk-logged recovery
model.
You must be able to create a transaction log backup.
A page restore can apply only to a read/write filegroup.
You must have a valid full, file, or filegroup backup available.
The page restore cannot be executed at the same time as any other
restore operation.
35. Page Restore Process
1) Retrieve the PageID of the damaged page.
2) Using the most recent full, file, or filegroup backup, execute the
following command:
RESTORE DATABASE database_name
PAGE = 'file:page [ ,...n ]' [ ,...n ]
FROM <backup_device> [ ,...n ]
WITH NORECOVERY
3) Restore any differential backups with the NORECOVERY option.
4) Restore any additional transaction log backups with the
NORECOVERY option.
5) Create a transaction log backup.
6) Restore the transaction log backup from step #5 using the WITH
RECOVERY option.
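The six steps above can be sketched as follows; the database name, page ID, and backup paths are hypothetical, and the page ID would come from step 1 in practice:

```sql
-- 1) Find damaged pages (file_id:page_id) recorded in msdb.
SELECT database_id, file_id, page_id, event_type
FROM msdb.dbo.suspect_pages;

-- 2) Restore the damaged page from the most recent full backup.
RESTORE DATABASE SalesDB
PAGE = '1:57'
FROM DISK = N'Z:\Backups\SalesDB_Full.bak'
WITH NORECOVERY;

-- 3)-4) Restore any differential and log backups WITH NORECOVERY
--       (omitted here for brevity).

-- 5) Create a transaction log backup.
BACKUP LOG SalesDB TO DISK = N'Z:\Backups\SalesDB_Tail.trn';

-- 6) Restore that log backup to bring the page current.
RESTORE LOG SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Tail.trn'
WITH RECOVERY;
```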
36. Best Effort Restore
Because pages are restored in sequential order, as soon as the first
page has been restored to a database, anything that previously
existed is no longer valid.
If a problem with the backup media was subsequently encountered
and the restore aborted, you would be left with an invalid database
that could not be used.
SQL Server has the ability to continue the restore operation even if the
backup media is damaged. When it encounters an unreadable section
of the backup file, SQL Server can continue past the source of
damage and continue restoring as much of the database as possible.
This feature is referred to as best effort restore.
To restore from backup media that has been damaged, you need to
specify the CONTINUE_AFTER_ERROR option on the RESTORE
DATABASE or RESTORE LOG command.
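A minimal sketch of a best effort restore, assuming a hypothetical database and backup path:

```sql
-- Continue past unreadable sections of the backup media,
-- restoring as much of the database as possible.
RESTORE DATABASE SalesDB
FROM DISK = N'Z:\Backups\SalesDB_Full.bak'
WITH CONTINUE_AFTER_ERROR;
```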
37.
A Database Snapshot is a point-in-time, read-only, copy of a
database.
Database Snapshot is available only in SQL Server 2008 Enterprise.
Database Snapshot is not compatible with FILESTREAM. If you
create a Database Snapshot against a database with FILESTREAM
data, the FILESTREAM filegroup is disabled and not accessible.
CREATE DATABASE database_snapshot_name
ON
(NAME = logical_file_name,
FILENAME = 'os_file_name') [ ,...n ]
AS SNAPSHOT OF source_database_name
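A concrete sketch of the syntax above, with hypothetical database, logical file, and sparse-file names (one sparse file must be specified for each data file in the source database):

```sql
-- Create a point-in-time, read-only snapshot of SalesDB.
-- The .ss file is a sparse file that holds copy-on-write pages.
CREATE DATABASE SalesDB_Snapshot
ON (NAME = SalesDB_Data,
    FILENAME = N'D:\Snapshots\SalesDB_Data.ss')
AS SNAPSHOT OF SalesDB;
```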
38. Reverting Data Using
A Database Snapshot
RESTORE DATABASE <database_name> FROM
DATABASE_SNAPSHOT = <database_snapshot_name>
Only a single Database Snapshot can exist on the source database
when you revert; any other snapshots must be dropped first.
Full-text catalogs on the source database must be dropped and then re-
created after the revert completes.
Because the transaction log is rebuilt, the transaction log chain is
broken.
Both the source database and the Database Snapshot are offline during
the revert process.
The source database cannot be enabled for FILESTREAM.
39. Summary of Backup Types
Backup Type                      T-SQL
Full Backup                      BACKUP DATABASE {databasename} TO {device}
Transaction Log (Incremental)    BACKUP LOG {databasename} TO {device}
Differential Backup              BACKUP DATABASE {databasename} TO {device} WITH DIFFERENTIAL
Filegroup Backup                 BACKUP DATABASE {databasename} FILE = (unknown), FILEGROUP = {filegroup} TO {device}
Filegroup Differential Backup    BACKUP DATABASE {databasename} FILEGROUP = {filegroup} TO {device} WITH DIFFERENTIAL
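Put together, a typical backup cycle using these commands might look like the following sketch (database name and paths are hypothetical):

```sql
-- Full backup, then a differential, then a transaction log backup.
BACKUP DATABASE SalesDB TO DISK = N'Z:\Backups\SalesDB_Full.bak';

BACKUP DATABASE SalesDB TO DISK = N'Z:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL;

BACKUP LOG SalesDB TO DISK = N'Z:\Backups\SalesDB_Log.trn';
```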
40. System Database Backups
Database   Description
master     Must be backed up after creating, altering, or dropping a
           database; altering a data or log file; changing logins;
           changing configuration options; altering linked or remote
           servers; etc. Rather than trying to track changes and back up
           master only after them, it is usually backed up daily.
msdb       Must be backed up if SQL Server Agent is being used, since it
           stores Agent configuration and backup history. It is typically
           backed up daily.
model      Needs to be backed up only after changes have been made to it.
tempdb     Never backed up, as it is re-created each time SQL Server is
           restarted.
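Backing up the system databases uses the same BACKUP DATABASE command as user databases; a sketch with a hypothetical backup location:

```sql
-- System databases are backed up like any other database.
BACKUP DATABASE master TO DISK = N'Z:\Backups\master.bak';
BACKUP DATABASE msdb   TO DISK = N'Z:\Backups\msdb.bak';
BACKUP DATABASE model  TO DISK = N'Z:\Backups\model.bak';
```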
41. Backup History
The history of every SQL Server backup is written to the msdb database
and can be accessed using the following system views.
System View              Description
dbo.backupfile           Contains one row for each data or log file that
                         is backed up
dbo.backupmediafamily    Contains one row for each media family
dbo.backupmediaset       Contains one row for each backup media set
dbo.backupset            Contains one row for each backup set
dbo.backupfilegroup      Contains one row for each filegroup in a
                         database at the time of backup
dbo.logmarkhistory       Contains one row for each marked transaction
                         that has been committed
dbo.suspect_pages        Contains one row per page that failed with an
                         824 error (with a limit of 1,000 rows)
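For example, these views can be joined to list recent backups and where they were written; a sketch query:

```sql
-- List recent backups recorded in msdb, joining each backup set
-- to the media it was written to.
SELECT bs.database_name,
       bs.type,                 -- D = full, I = differential, L = log
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;
```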