Database Log Shipping on Microsoft SQL Server
Applies to:
Any SAP product with an ABAP stack running on SQL Server 2005.


Summary
This document describes best practices for setting up, monitoring and administering the Database Log
Shipping process, which provides data redundancy and increases system availability. It focuses on the
common scenarios of disaster recovery, physical data corruption, protection against user errors and high
availability.

Author(s):       Alexander Kosolapov
Company:         SAP AG
Created on:      24 May 2007
Last revision:   10 October 2007








Table of Contents
Overview of High Availability solutions in Microsoft SQL Server ______________ 4
Log Shipping fundamental concepts _____________________________________ 4
  Overview _______________________________________________________________ 4
  Technical implementation of Log Shipping ____________________________________ 4
  Fail over ________________________________________________________________ 4
  Fail back________________________________________________________________ 5
  Backup directory (backup folder)____________________________________________ 5
  Copy directory (copy folder)________________________________________________ 5
  STANDBY mode of a database ______________________________________________ 5
    Technical background of STANDBY mode_____________________________________________ 5
  Influence of the Log shipping on a Backup Strategy ____________________________ 6
  Considerations about database corruption____________________________________ 6
  Considerations about user errors ___________________________________________ 6
  Log Shipping as a part of disaster recovery plan _______________________________ 6
  Evaluating data loss after failover ___________________________________________ 7
Log Shipping in SQL 2005 ______________________________________________ 7
  Prerequisites ____________________________________________________________ 7
    SQL Server prerequisites __________________________________________________________ 7
    SAP prerequisites ________________________________________________________________ 7
  Setting up Log Shipping ___________________________________________________ 8
  Preparations ____________________________________________________________ 8
  Configuring Log Shipping__________________________________________________ 9
  Preparing SAP System to failover __________________________________________ 10
  Failover scenarios _______________________________________________________ 12
    Disaster recovery scenario ________________________________________________________    12
    Partial database loss scenario _____________________________________________________   13
    DB corruption scenario ___________________________________________________________     13
    User error scenario ______________________________________________________________     14
    Maintenance scenario____________________________________________________________       15
    Completing failover ______________________________________________________________     16
    General post-failover actions ______________________________________________________   16
  Fail back_______________________________________________________________ 17
    Preparing the system to fail back ___________________________________________________ 17
    Failing back____________________________________________________________________ 17
  Monitoring and reporting for Log shipping ___________________________________ 18
  Suspend Log Shipping ___________________________________________________ 18
  Resume Log Shipping____________________________________________________ 19
  Remove Log Shipping permanently_________________________________________ 19



 The following actions require special attention _____________________________ 19
   Adjusting Windows system timer ___________________________________________________       19
   Adding data and log files to the Primary DB___________________________________________   20
   SQL Jobs _____________________________________________________________________           20
   Modifying the SAP DEFAULT profile ________________________________________________       20
Frequently asked questions ___________________________________________ 21
Appendix ___________________________________________________________ 22
 SAP Notes, referred in this document _______________________________________ 22






Overview of High Availability solutions in Microsoft SQL
Server

SQL Server 2000 supported two approaches to increasing DB service availability: the High Availability
Cluster and DB Log Shipping.
SQL Server 2005 additionally introduced a new functionality, Database Mirroring.
This document focuses only on DB Log Shipping.
        For more information on using SQL Server in a Windows Cluster (MSCS) environment, please refer to
        http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/hasog01.mspx.
        For more information on using SQL Server DB Mirroring, please refer to FAQ
        http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirfaq.mspx and to SAP Note 965908.


Log Shipping fundamental concepts
Overview
Log Shipping allows you to automatically send transaction log backup files (also referred to as TRN files)
from a primary database on a primary server instance to one or more secondary databases on
separate secondary server instances.
For the sake of simplicity, only the scenario with a single secondary database is considered in this document.
An optional third server instance, known as the monitor server, records the history and status of the log
backup and restore operations.
Advantages of Log Shipping are as follows:
   - There is no limitation on the distance between the primary and secondary servers.
   - The Log Shipping mechanism is tolerant of delays in transferring the transaction log backup files.
   - In contrast to a DB cluster or DB mirroring, a problem in the Log Shipping process itself (e.g. a lost
       connection) doesn't cause a failure of the productive database.
   - No special, expensive hardware is required.
Disadvantages:
    - A switch to the secondary server must be done manually.
    - The databases are never permanently in sync, so some data loss is possible.

Technical implementation of Log Shipping
The Log Shipping process consists of four scheduled jobs:
        1. Backup log job, running on the Primary Server
        2. Copy job, running on the Secondary server
        3. Restore job, running on the Secondary server
        4. Monitoring and alerting job, running on the Monitoring server (optional).
All the jobs call the executable sqllogship.exe. The sqllogship application performs a backup, copy or
restore operation and associated clean-up tasks for a log shipping configuration. The operation is
performed on a specific instance of Microsoft SQL Server 2005 for a specific database.
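A quick way to check that these jobs exist and are enabled on an instance is to query the SQL Server Agent
job catalog (a minimal sketch, assuming the default job-name prefixes LSBackup/LSCopy/LSRestore/LSAlert
described later in this document; custom job names would need a different filter):

        -- List the log shipping jobs of this instance and whether they are enabled
        SELECT name, enabled, date_created
        FROM msdb.dbo.sysjobs
        WHERE name LIKE N'LSBackup%'
           OR name LIKE N'LSCopy%'
           OR name LIKE N'LSRestore%'
           OR name LIKE N'LSAlert%';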

Fail over
Failover is the capability of a service to switch over to a redundant or standby server in case of a failure of
the productive server.





Log Shipping does not provide the capability to fail over automatically from the primary server to the
secondary server. If the primary database becomes unavailable, manual intervention by a system
administrator is necessary to restore the DB service: the remaining transaction log backups must be
restored and the database brought to the online state.

Fail back
Fail back is the process of moving the service back to the original hardware after it is ready to run again;
in other words, it is the reverse of a failover.
Like a failover, a fail back in a log shipping scenario must be done manually by the system administrator.

Backup directory (backup folder)
It is a network share where transaction log backup files are stored temporarily.

Copy directory (copy folder)
It is the directory to which the transaction log backup files are copied before they are restored. It is usually
located on the Secondary server.


STANDBY mode of a database
This mode allows read-only operations to be performed on the database while further transaction logs can
still be applied to it. A DB can get into STANDBY mode only after a restore database or restore log
operation.
 Technical background of STANDBY mode
A transaction log backup file may contain information about transactions that are not yet committed.
Whether such a transaction will be committed or rolled back is not yet known; that information only arrives
with the next transaction log backup file. Thus, any full recovery of a DB (RESTORE DATABASE <…> WITH
RECOVERY) consists of two phases:
        redo of all modifications
        undo of all uncommitted transactions.
After the undo phase the DB is in a transactionally consistent state, but it won't accept any further
transaction logs because the undone modifications are lost.
One can skip the undo phase by means of the option WITH NORECOVERY of the RESTORE command.
However, as the DB is then in a transactionally inconsistent state, SQL Server doesn't allow using it, even
for reads.
When RESTORE <…> WITH STANDBY is used, SQL Server performs the undo phase but also saves the
state “before undo” in a so-called “standby file” or Transaction Undo File (extension TUF). This file is a
mandatory argument of the STANDBY option.
As a result, the database is transactionally consistent and can be read. There are two possible transitions
from the STANDBY state:
        The DB administrator can recover it completely and make it read/write with the command
        RESTORE LOG <dbname> WITH RECOVERY. The possibility to apply any further transaction log
        backups is then lost forever.
        The next transaction log can be applied to the DB.
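The following T-SQL sketch contrasts the three restore options (placeholders as used elsewhere in this
document; the TUF file location is an assumption and can be any local path on the secondary server):

        -- Apply a log backup and keep the DB readable while further logs can still be applied
        RESTORE LOG <SID>
        FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
        WITH STANDBY = 'D:\MSSQL\<SID>_standby.tuf'

        -- Apply a log backup without the undo phase; the DB remains inaccessible
        RESTORE LOG <SID>
        FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
        WITH NORECOVERY

        -- Run the undo phase and bring the DB online; no further log backups can be applied afterwards
        RESTORE LOG <SID> WITH RECOVERY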






Influence of the Log shipping on a Backup Strategy
As a matter of fact, Log Shipping dramatically changes a common backup strategy. When Log Shipping is
implemented, its Backup log job must be the one and only process performing transaction log backups.
Because that job already writes the log backups to disk, the transaction logs cannot additionally be saved to
tape with the SQL command “BACKUP LOG”; neither SAP transaction DB13 nor any other SQL backup
software may be used for that.
That's why generic backup software for file backups (e.g. NTBACKUP) has to be used to save the
transaction log backup files from disk to durable media.
Database backups can be done as usual, directly to tape with the SQL command “BACKUP DATABASE”.
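A full database backup therefore stays outside the log shipping configuration and can, for example, be
written directly to a tape device (a minimal sketch; the tape device name is an assumption, and a disk file
works the same way):

        -- Full database backup to tape; this does not interfere with the transaction log chain
        BACKUP DATABASE <SID>
        TO TAPE = '\\.\Tape0'
        WITH INIT, NAME = N'<SID> full backup'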


Considerations about database corruption
Throughout this document, database corruption means a state of a database in which at least part of the
data can no longer be accessed because it is lost or distorted, and this state cannot be repaired without
data loss.
Physical data corruptions are caused exclusively by hardware faults or by erroneous program code at the
operating system level or in the DBMS code itself. Note, however, that the latter is only a theoretical
possibility which has not been observed since SQL Server version 7.0 (1998) in combination with SAP
products.
Note that data corruption cannot be caused by applications using the DB service. Data corruptions cannot
be precisely tracked in time, and they often pass unnoticed over substantial periods of many days or
weeks. For more information about this topic please refer to SAP Note 327494.
Since there is no direct exchange of data pages between the primary and the secondary server, physical DB
corruptions cannot be transferred to the secondary server. Thus, Log Shipping provides better protection
against DB corruptions. This protection is not absolute, however: a hardware fault may still corrupt the
transaction log file itself, but such a corruption won't pass unnoticed, as it will be recognized on the
secondary server during restore.

Considerations about user errors
User errors include erroneous actions committed on DB level by DB administrators, erroneous actions on
application level by functional users, and errors in applications (e.g. ABAP reports). In this document we
consider only user errors that cause deletion or overwriting of needed data; inserting nonsense data into
the database can also be considered a user error. We limit the discussion to errors that cannot be corrected
with application means.
As a rule, user errors become apparent very fast, within minutes or a few hours, and can be precisely
tracked in time.
As a rule, the DB must be stopped and restored to a point in time shortly before the user error. A
conventional restore causes a long system downtime of several hours and the loss of all correct data
modifications done after that point in time.
Log Shipping allows an intentional delay of the Secondary DB behind the Primary DB by several minutes
or hours. This is controlled by the restore delay option ("Delay restoring backups at least…"). With this
option set to e.g. 120 minutes, the Secondary DB at 10:00 has the same state as the Primary DB had at
08:00. This gives the DB administrator time to react to a user error and rapidly switch to the Secondary DB,
avoiding a long downtime.

Log Shipping as a part of disaster recovery plan
Since the Secondary DB contains redundant data that is physically separate from the Primary DB, it can
continue to provide service even after total damage to the Primary server along with the primary data
storage. That's why Log Shipping can be used as part of a disaster recovery plan.






Evaluating data loss after failover
Since the Primary and the Secondary database are not permanently synchronized, some committed data
may get lost after a failover. The amount depends on the failover scenario (see Failover scenarios, page 12)
and on the frequency of transaction log backups.


Log Shipping in SQL 2005
Prerequisites
To set up Log Shipping for an SAP Solution the following prerequisites should be met:
 SQL Server prerequisites
    1. SQL Server of the same version and edition must be installed on the Secondary server. If Service
       Pack 2 is installed on the Primary SQL Server 2005, the Secondary server must have it too. This
       requirement comes from the fact that, as of MS SQL 2005 SP2, the VARDECIMAL data type can
       be stored in compressed form, which is not supported by earlier builds.
    2. Primary and Secondary servers must have the same server-wide collation:
       SQL_Latin1_General_CP850_BIN2
    3. If you want to use an optional Monitoring Server, SQL Server software must be installed on that
       server.
    4. The SQL Server services on Primary and Secondary servers must be running under a domain
       user.
    5. A network share with sufficient space for transaction log backups must exist. The share must be
       accessible (read/write) for both the Primary and the Secondary SQL Server service users. For
       reliability reasons the share should not reside on the same hardware as the Primary server, though
       it is technically possible.
    6. The database must be set to the Full or Bulk-logged recovery model. Under no circumstances
       may the recovery model be switched to Simple, even for a short time.
    7. SQL Server Agent service must be up and running permanently on Primary, Secondary and
       Monitoring servers.
    8. Primary and Secondary servers must belong to the same Windows domain or there must exist a
       trusted relationship between the domain of the Primary server and the domain of the Secondary
       server.
    9. If the Secondary database must be in STANDBY mode (see STANDBY mode of a database,
       page 5), install Cumulative update package 3 for SQL Server 2005 Service Pack 2. Otherwise an
       error, described in http://support.microsoft.com/kb/940126/en-us may occur.
    10. The internal server name of the SQL Server must not be NULL, that means the SQL statement

        select @@servername

        must return the local server name on both the Primary and the Secondary server (a sketch for
        correcting a wrong or empty value follows the note below).
    The database format of SQL Server 2005 is independent of the server platform. Thus, it is possible
    to have a 64-bit Primary server and a 32-bit Secondary server and vice versa.
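    If select @@servername returns NULL or a stale name (for example after the host was renamed), it can
    usually be corrected as follows (a sketch with hypothetical server names; the SQL Server service must be
    restarted afterwards):

        -- Drop the stale internal server name (skip this step if @@servername is NULL)
        EXEC sp_dropserver 'OLDSERVERNAME';
        -- Register the current host name as the local server
        EXEC sp_addserver 'NEWSERVERNAME', 'local';
        -- Restart the SQL Server service so that @@servername reflects the new value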

 SAP prerequisites
    There is no mechanism in the SAP kernel supporting DB Log Shipping, and there are no SAP parameters
    controlling it; all actions must be done manually. That's why there are no special requirements regarding
    the SAP release, kernel patch level or SAP Support Package level.






Setting up Log Shipping
In this document we only consider the scenario of failing the database service over and back. If both the
SAP Central Instance (CI) and the DB server coexist on the same hardware, additional measures must be
taken to fail the SAP CI over to another hardware server. For more information, refer to SAP Note 804078
“Replicated Enqueue Configuration on Windows”.
If all the prerequisites above are met, Log Shipping can be set up without downtime of the production
system.
Before getting started with Log Shipping, the following questions must be clarified:
        Do I need the Secondary DB to be readable?
        There are two options for the Secondary DB: the “Loading” state and the “Standby/Read only” state.
        In the “Loading” state the database is not accessible at all; there is not even a way to check the
        database options (which files it consists of, their sizes etc.). In the “Standby/Read only” state one
        can execute SELECT statements against DB tables and run consistency checks. It can also be
        used for business reporting (with some precautions). However, active use of the Secondary DB
        prevents restoring the transaction log files. For this reason the Standby mode has an option
        called "Disconnect users from the secondary database when restoring backups". When
        activated, the restore job has higher priority and cancels all established connections. If
        deactivated, the restore job fails as long as there is a connection to the secondary DB, and
        transaction log backups accumulate until there are no user connections to the database.
        What time lag should the Secondary DB have relative to the Primary DB? (see Considerations
        about user errors, page 6)


Preparations
1. Make sure that no transaction log backups of the DB <SID> are carried out until the Log Shipping
configuration is finished. If transaction log backups are scheduled as a SQL job or via third-party backup
software, they must be deactivated.
2. Back up the complete <SID> database (full database backup). There is no need to stop SAP for
that, although lower DB performance during the backup must be taken into account. Any kind of backup
media (disk, tape) is suitable.
3. Restore the backup onto the secondary server. When using Management Studio, choose the option
“Leave the database non-operational, and do not roll back the uncommitted transactions.
Additional transaction logs can be restored. (RESTORE WITH NORECOVERY)”. See (2) in the figure
below. When using the T-SQL RESTORE command, add the option WITH NORECOVERY to its option
list; a sketch follows below.
During the restore it is possible to change the data file destination folders (see the Options page of the
dialog). This is necessary if the secondary server doesn't have a drive letter that existed on the Primary
server.
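The corresponding T-SQL restore might look like this (a minimal sketch with hypothetical backup and target
paths; the MOVE clauses are only needed if drive letters or folders differ, and the logical file names can be
determined with RESTORE FILELISTONLY):

        -- Restore the full backup on the Secondary server and keep it ready for log restores
        RESTORE DATABASE <SID>
        FROM DISK = '\\backup_share\BackupFolder\<SID>_full.bak'
        WITH NORECOVERY,
             MOVE '<SID>DATA1' TO 'E:\<SID>DATA1\<SID>DATA1.mdf',
             MOVE '<SID>LOG1'  TO 'F:\<SID>LOG1\<SID>LOG1.ldf'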








[Figure: “Restore Database” dialog in SQL Server Management Studio, Options page. Callout (1) marks the
“Restore As” column with the target file paths; callout (2) marks the option “Leave the database
non-operational, and do not roll back the uncommitted transactions… (RESTORE WITH NORECOVERY)”.]




Note, in the column “Restore As” you have to specify a full path to the files (1). All folders in the path must
exist, but the files themselves must not.


Configuring Log Shipping
The most convenient way to configure Log Shipping is with SQL Server Management Studio.
1. Start SQL Server Management Studio, connect to the productive SQL Server with a user having
    sysadmin privileges.
2. Right click the database you want to use as your Primary database, and then click Properties.
3. Under “Select a page”, click “Transaction Log Shipping”.
4. Select the “Enable this as a primary database in a log shipping configuration” check box.
5. Under “Transaction log backups”, click “Backup Settings”.
6. In the “Network path to the backup folder” box, type the network path to the share you created for the
    transaction log backup folder. Leave the box “If the backup folder is located on the primary server,
    type the local path to the backup folder” empty.
7. Configure the “Delete files older than:” parameter. It is recommended to set the value to at least 3
    days. The maximum value is limited only by the space available in the backup folder for TRN files.
8. Configure “Alert if no backup occurs” with desired parameters.
9. It is recommended to leave the default name under “Job name”.
10. Click “Schedule…” and adjust the SQL Server Agent schedule. Set up the desired interval in the
    group “Daily frequency”, field “Occurs every…”.
11. Click OK.
12. Click OK.



13. Under “Secondary server instances and databases”, click Add.
14. Click “Connect” and connect to the Secondary SQL Server. It is strongly recommended to use
    Windows authentication, because problems have been reported when using SQL authentication.
15. In the “Secondary Database” box, choose the <SID> database restored during the preparation.
16. On the “Initialize Secondary database” tab, choose “No, the secondary database is initialized”.
17. On the “Copy Files” tab, in the “Destination folder for copied files” box, type the path of the folder into
    which the transaction log backups should be copied. Note, this folder is located on the secondary
    server.
18. Configure the “Delete copied files after” parameter. It is recommended to set the value to at least 3
    days. The maximum value is limited only by the space available in the copy folder.
19. Click “Schedule…” and adjust the SQL Server Agent schedule as needed. This schedule should
    follow the backup schedule, shifted by the time a transaction log backup needs to complete. For
    instance, if you scheduled the backup job to start at 0:00 and run every 15 minutes, it makes sense to
    start the Copy job at 0:05, also every 15 minutes.
20. On the “Restore Transaction Log” tab, under “Database state when restoring backups”, choose the
    “No recovery” mode or “Standby mode” option (see “STANDBY mode of a database” on page 5).
    Refer to SQL Server prerequisites page 7 point 9 in the latter case.
21. If you want to delay the restore process on the secondary server, type in the desired time lag under
    “Delay restoring backups at least…” (see Considerations about user errors on page 6).
22. Choose an alert threshold under “Alert if no restore occurs within…”
23. Click “Schedule” and then adjust the SQL Server Agent schedule as needed. This schedule should
    follow the copy schedule, shifted by the time a copy job needs to complete. For instance, if you
    scheduled the Copy job to start at 0:05 and run every 15 minutes, it makes sense to start the Restore
    job at 0:10, also every 15 minutes.
24. Click OK.

On the same screen an optional monitor server can be configured. Once the monitor server has been
configured, it cannot be changed without removing the log shipping configuration first.

25. Under “Monitor server instance”, select the “Use a monitor server instance” check box, and then click
    “Settings”.
26. Click “Connect” and connect to the instance of SQL Server that you want to use as your Monitor
    server. It is recommended to use Windows authentication. The user must be assigned to the
    “sysadmin” fixed server role.
27. Under “Monitor connections”, choose the connection method to be used by the backup, copy, and
    restore jobs to connect to the monitor server.
28. Under “History retention”, choose the length of time you want to retain a record of your log shipping
    history. It is recommended to retain the history for at least 2 weeks.
29. Click OK.
30. On the Database Properties dialog box, click OK to begin the configuration process.
In case of errors or warnings press “Report” and save the report to a file. If you cannot solve the problem
on your own, create a problem message at SAP and attach the error report file.

Preparing SAP System to failover
    1. Create the procedure which fixes SAP logins. All security-relevant information is stored in
       the “master” DB and thus is not transferred to the Secondary server automatically by Log
       Shipping. That's why SQL Server logins and users must be created on the Secondary server
       manually. The easiest way to do so is to use the stored procedure “sp_check_sap_login” from
       SAP Note 610640. Create this stored procedure in the “master” database on both the Primary
       and the Secondary server.
    2. Create a copy of the DEFAULT.PFL profile which will be used for failover. During a
       failover the DB server name changes, and the SAP system must be made aware of the change.
       The file DEFAULT.PFL in the folder <drive>:\usr\sap\<SID>\SYS\profile contains one
       obligatory and one optional parameter that specify the database server name.





            SAPDBHOST is the obligatory parameter. It is the hostname of the database server. Since
            the hostname changes after a failover, this parameter must be corrected appropriately.
            dbs/mss/server is an optional parameter. If not set explicitly, its value is copied from
            SAPDBHOST. This parameter is mandatory if SQL Server is installed as a named
            instance, i.e. the network hostname is not equal to the SQL Server instance name.
        The recommended procedure is as follows:
            Copy the actual DEFAULT.PFL file to DEFAULT_PRIMARY.PFL.
            Copy the actual DEFAULT.PFL file to DEFAULT_SECONDARY.PFL.
            In DEFAULT_SECONDARY.PFL, substitute <Primary DB Server hostname> with
            <Secondary DB Server hostname> in the parameter SAPDBHOST.
            If the parameter dbs/mss/server is explicitly set, substitute <Primary SQL Server name>
            with <Secondary SQL Server name> in DEFAULT_SECONDARY.PFL. This parameter accepts a
            protocol prefix and a port number, which may be necessary when the Secondary server is a
            named instance. For instance,
        dbs/mss/server = tcp:QASDBTST,1433
            where “tcp:” is the protocol specifier,
            “QASDBTST” is the SQL Server name,
            “1433” is the port number the SQL Server instance is listening on.
            Refer to SAP Note 208632 for more information; a sample profile excerpt follows below.
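        For illustration, the database-related lines of DEFAULT_SECONDARY.PFL might look as follows (a
        minimal sketch with a hypothetical host name and port; all other profile parameters remain unchanged):

        # Hostname of the Secondary database server (hypothetical value)
        SAPDBHOST = QASDBTST
        # Optional explicit SQL Server connect string with protocol prefix and port
        dbs/mss/server = tcp:QASDBTST,1433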
3. Windows environment variables. Standalone SAP programs (e.g. tp, R3trans, saplicense)
   require the environment variable MSSQL_SERVER to be correctly set to the SQL Server
   name (see SAP Note 98678). Since the name changes after a failover, the variable must be
   corrected too. This action is less critical, as it is needed only for such technical tools; the
   SAP System itself doesn't read the environment variables and can start even with wrong values.
On Windows Server 2003 you can prepare a script that sets the Windows environment variable on the
central instance and all application servers. The command is
        SETX MSSQL_SERVER <Secondary SQL server name>
        For example:
            SETX MSSQL_SERVER QASDBTST,1433
    The script has to be executed before failover under the user <domain>\<sid>adm.
    On Windows 2000 the action needs to be done manually as follows:
    -   Log on as user <domain>\<sid>adm.
    -   Right click “My Computer” and choose “Properties”.
    -   Go to the “Advanced” tab.
    -   Press the “Environment Variables” button.
    -   Choose the variable MSSQL_SERVER under “User variables for <sid>adm” and press “Edit”.
    -   Enter the name of the Secondary SQL server, e.g. QASDBTST,1433.
    -   Press OK until you leave the dialog.






Failover scenarios
    Below, only a failover of the DB server is discussed. If the SAP Central Instance (SAP CI) coexists with
    the DB instance on the same hardware, many further actions must be prepared for a failover; all of them
    are out of the scope of this document.

    The following failover scenarios are considered below:
    1. The Primary DB server is physically damaged or the transaction log file is lost (Disaster recovery
        scenario, page 12).
    2. At least one data file is lost, but the Primary server itself is in good order and all transaction log
        files (.LDF) are intact (Partial database loss scenario, page 13).
    3. The database server is functioning, but an irrecoverable database corruption has been detected (DB
        corruption scenario, page 13).
    4. Some irrecoverable data has been deleted or distorted as a result of a user error (User error
        scenario, page 14).
    5. The Primary server must be stopped for hardware or software maintenance (Maintenance scenario,
        page 15).

    The crucial parameters for assessing data loss are:
              the time interval between two consecutive runs of the Backup job on the Primary server,
              referred to below as the backup interval;
              the restore delay, which additionally postpones the application of the TRN files on the
              Secondary server.

 Disaster recovery scenario
In this case the Primary DB server suddenly suffers damage or is completely destroyed along with its data
storage. For the SAP application servers it looks as if it has simply disappeared.
The same scenario also applies in case of damage to or loss of the transaction log file.
Data loss corresponds to the time elapsed since the last transaction log backup and is therefore less than
the backup interval.
To fail over to the Secondary server the following actions must be done:
         1. Stop the SAP System.
         2. Log on to the Secondary server with SQL Management Studio
         3. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs.
              By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name
              and choose “Start job at step…”.
         4. When the Copy job has finished, start the Restore job. By default, its name is
              “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job
              at step…”.
         5. Suspend Log Shipping as described in Suspend Log Shipping on page 18.
         6. If the restore delay is not used, proceed with step 9.
         7. Identify the last TRN file used for restore: right click the Restore job, choose “View history”,
              expand the most recent history record. The step logs are displayed in reverse chronological
              order. Find the topmost step containing the message
                   Restored log backup file. Secondary DB: '<SID>', File:
                   '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
              Note its name.
         8. Find in the Copy folder the TRN files with timestamps greater than the last restored TRN file
              and restore those files manually with a series of commands like the following:
                   RESTORE LOG <SID>
                   FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
                   WITH NORECOVERY
              Note, the TRN files must be processed in strict chronological order, otherwise an error is
              thrown. Should that happen, find the correct TRN file and proceed with the restore.
         9. Proceed with the section Completing failover on page 16.
In this scenario a simplified fail back (see Fail back on page 17) is not possible.





 Partial database loss scenario
In this case the Primary server cannot access at least one of its data files. As a result, the <SID> DB is in
the “Suspect” state and cannot be opened, so the SAP System can't run. However, if all transaction log files
are available and undamaged, there is a chance to fail over without loss of committed data.

To fail over to the Secondary server without data loss the following actions must be done:
         1. Suspend Log shipping as described in Suspend Log Shipping on page 18.
         2. Stop the SAP System.
         3. Back up the active transaction log on your primary server with the option NO_TRUNCATE. That
               option allows making a log backup even if the database is unavailable. This last log backup is
               also called the “tail log”. The command is as follows:
                     USE master
                     BACKUP LOG <SID>
                     TO DISK =
                     '\\backup_share\BackupFolder\<SID>_YYYYMMDDHHMM_tail.trn'
                     WITH NO_TRUNCATE
                If the command finishes successfully, there will be no data loss; proceed with step 4.
                Otherwise the situation does not match this scenario but rather the Disaster recovery scenario,
                because the transaction log is damaged or lost. Please proceed with step 2 of the Disaster
                recovery scenario on page 12.
         4. Copy the tail log backup from the Backup folder to the Copy folder manually.
         5. Log on to the Secondary server with SQL Management Studio
         6. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs.
              By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name
              and choose “Start job at step…”.
         7. When the copy job is finished, start the Restore job. By default, its name is
              “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job
              at step…”.
         8. If the restore delay is not used, proceed with step 11.
         9. Identify the most recent TRN file used for restore: right click the Restore job, choose “View
               history” and expand the most recent history record. The step logs are displayed in reverse
               chronological order. Find the topmost step containing the message
                     Restored log backup file. Secondary DB: '<SID>', File:
                     '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
               Note its name.
         10. Find in the Copy folder the TRN files with timestamps greater than the last restored TRN file
               and restore those files in sequence with a series of commands like the following:
                     RESTORE LOG <SID>
                     FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
                     WITH NORECOVERY
                Note, the TRN files must be processed in strict chronological order, otherwise an error is
                thrown. Should that happen, find the correct TRN file and proceed with the restore.
         11. Restore the tail log TRN file with the command
                     RESTORE LOG <SID>
                     FROM DISK =
                     '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMM_tail.trn'
                     WITH NORECOVERY
         12. Proceed with the section Completing failover on page 16.
In this scenario a simplified fail back is not possible (see Fail back on page 17).

 DB corruption scenario
For more information about DB corruptions please refer to SAP note 142731.
In this case the Primary DB turns out to be corrupted after a regular check, or because some SQL
statements fail and cause ABAP short dumps in the SAP system. The DB server itself is functioning and
none of the database files is lost.




If you face a DB corruption, there is a probability that the transaction log has been corrupted too. In this
case the Restore job will fail with errors like
        During redoing of a logged operation in database 'SID', an error
        occurred at log record ID (xxx:xxx:zzz).
If this is the case, the database can only be restored up to the point of the corruption in the transaction log,
and thus some committed transactions will be lost.
If the restore operation finishes without errors, no data loss happens after the failover.
To fail over to the Secondary server the following actions must be done:
         1. Inform end users about an urgent SAP System shutdown and stop the SAP system.
         2. Log on to the Primary DB server with SQL Management Studio
         3. Start the Log shipping Backup job. It can be found under <Server> -> SQL Server
             Agent -> Jobs. By default, its name is “LSBackup_<SID>”. Right click the job name and
             choose “Start job at step…”.
         4. Suspend Log shipping as described in Suspend Log Shipping on page 18.
         5. Log on to the Secondary server with SQL Management Studio
         6. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs.
             By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name
             and choose “Start job at step…”.
         7. When the Copy job is finished, start the Restore job. By default, its name is
              “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job
              at step…”. Check the job history to identify failures during the restore. If an error suggesting a
              corrupted transaction log occurs, proceed with Completing failover on page 16. In this case
              take some data loss into account.
         8. If the restore delay is not used, proceed with step 11.
         9. Identify the last TRN file used for restore: right click the Restore job, choose “View history”,
              expand the most recent history record. The step logs are displayed in reverse chronological
              order. Find the topmost step containing the message
                   Restored log backup file. Secondary DB: '<SID>', File:
                   '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
              Note its name.
         10. Find in the Copy folder the TRN files with greater timestamps and restore those files manually
              with a series of commands like the following:
                   RESTORE LOG <SID>
                   FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
                   WITH NORECOVERY
              Note, the TRN files must be processed in strict chronological order, otherwise an error is
              thrown. Should that happen, find the correct TRN file and proceed with the restore.
         11. Proceed with the section Completing failover on page 16.
In this scenario a simplified fail back (see Fail back on page 17) is not possible.

 User error scenario
In this case the hardware is in good order and the DB is physically consistent, but the database is
unusable for business applications: it contains logical inconsistencies, some data was deleted, or a lot of
false data was inserted.
The most effective solution in this case is to bring the database back to the state it had before the user
error was committed. The consequence of a point-in-time restore is an inevitable loss of all modifications
done after that point in time.
In this scenario a simplified fail back (see Fail back on page 17) is not possible.

To fail over to the Secondary server without data loss the following actions must be done:
         1. Immediately suspend Log Shipping as described in Suspend Log Shipping on page 18 and
               note the time when it was done. This crucial point in time will be referred to as LS_STOP below.
         2. Inform end users about an urgent SAP System shutdown and stop the system.



         3. Identify the point in time when the user error was committed. If it cannot be identified precisely,
            take the earliest time it might have happened. It will be referred to as TIME_ERROR. Convert
            TIME_ERROR to UTC; let's call the result TIME_ERROR_UTC.
         4. Let us call the difference between LS_STOP and TIME_ERROR the reaction time. If the
            reaction time is greater than the restore delay, the user error has already been applied to the
            Secondary DB; in that case the error must be corrected by restoring the DB from regular backups.
            Otherwise proceed with the next steps.
         5. Log on to the Secondary server with SQL Management Studio.
         6. Identify the last TRN file used for restore: right click the Restore job, choose “View history”,
            expand the most recent history record. The step logs are displayed in reverse chronological order.
            Find the topmost step containing the message
                Restored log backup file. Secondary DB: '<SID>', File:
                '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
            Note its name.
         7. Find in the Copy folder the TRN files with greater timestamps, but not exceeding
            TIME_ERROR_UTC, and restore those files manually with a series of commands like the following:
                RESTORE LOG <SID>
                FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
                WITH NORECOVERY, STOPAT = '<date_time>'
            where <date_time> should be 1 minute before TIME_ERROR. Adhere to the full date and
            time format 'YYYY-MM-DD HH:MM:SS', e.g. '2007-05-24 09:59:59'.
            Note, the TRN files must be processed in strict chronological order, otherwise an error is
            thrown. Should that happen, find the correct TRN file and proceed with the restore.
        8. Proceed with the section Completing failover on page 16

 Maintenance scenario
In this case the Primary server must be stopped for a substantial period of time while the SAP system must
keep running. In this scenario no committed data is lost.

To fail over to the Secondary server without data loss the following actions must be done:
         1. Suspend Log shipping as described in Suspend Log Shipping on page 18.
         2. Inform end users about an SAP System shutdown and stop the system.
         3. Close all connections to <SID> database. You can check for them in Management Studio
              under “Management” -> “Activity monitor”.
         4. Back up the active transaction log on your primary server with the option NORECOVERY. This
               leaves the Primary DB in the “restoring” state, in which it is unavailable but prepared for a
               simplified fail back (see Fail back on page 17). The command is as follows:
                    USE master
                    BACKUP LOG <SID>
                    TO DISK =
                    '\\backup_share\BackupFolder\<SID>_YYYYMMDDHHMM_tail.trn'
                    WITH NORECOVERY
         5. Copy the tail transaction log backup file from the Backup folder to the Copy folder.
         6. Log on to the Secondary server with SQL Management Studio
         7. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs.
              By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name
              and choose “Start job at step…”.
         8. When the copy job is finished, start the Restore job. By default, its name is
              “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job
              at step…”.
         9. If the restore delay is not used, proceed with step 12.
         10. Identify the most recent TRN file used for restore: right click the Restore job, choose “View
               history”, expand the most recent history record. The step logs are displayed in reverse
               chronological order. Find the topmost step containing the message
                    Restored log backup file. Secondary DB: '<SID>', File:
                    '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
               Note its name.


         11. Find in the Copy folder the TRN files with timestamps greater than the last restored TRN file and
             restore those files in sequence with a series of commands like the following:
                 RESTORE LOG <SID>
                 FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn'
                 WITH NORECOVERY
             Note, the TRN files must be processed in strict chronological order, otherwise an error is
             thrown. Should that happen, find the correct TRN file and proceed with the restore.

         12. Restore the tail log TRN file with the command
                RESTORE LOG <SID>
                FROM DISK =
                '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMM_tail.trn'
                WITH NORECOVERY
        13. Proceed with the section Completing failover on this page below.
If all actions are done correctly, it is possible to fail back very fast, without copying the database back to
the Primary server (see Fail back below).

 Completing failover
        Independently of the failover scenario, you have to execute the following actions to complete the
        failover process.
        1. Issue the command:
           RESTORE LOG <SID> WITH RECOVERY
           This brings the database online so that it can be used by the SAP System.
        2. Execute the procedure fixing the mappings between SAP SQL users and server logins:
               use master;
               exec sp_check_sap_login <SID>, <schema>, <Windows domain>, repair;
           where <schema> is the current DB schema (see SAP Note 98678),
           <Windows domain> is the domain of the standard SAP users <sid>adm and SAPService<SID>,
           and 'repair' is the keyword instructing the procedure to perform the corrections.
        3. Copy the prepared SAP default profile into DEFAULT.PFL in the folder
           <drive>:\usr\sap\<SID>\SYS\profile:
               copy DEFAULT_SECONDARY.PFL DEFAULT.PFL
           overwriting the existing file.
        4. Correct the environment variable MSSQL_SERVER on each application server as described in
           Preparing SAP System to failover, point 3 on page 11.
        5. Start the SAP System.

 General post-failover actions
Independently of the failover scenario, some actions must be done in the SAP system if it has to run in the
failover state for longer than one day.
     1. The DB backup job must be scheduled anew. If SAP transaction DB13 was used for that before, use
         DB13 again, taking possibly changed backup device names into consideration. If other DB backup
         software was used, refer to its documentation.
     2. For SAP Basis Release 610 and higher: create the DB collector jobs. Execute the post-processing
         step "Creating the permanent stored procedures" described in SAP Note 151603.
     3. Reschedule the SAP blocking-locks collector job.
             a. In a system with installed DBACOCKPIT, start transaction DBACOCKPIT, on the
                  navigation pane expand “Performance” -> “History”, then double click “Lock History”.
                  Press the button “Turn collector job on” which appears on the navigation frame.
             b. In a system where DBACOCKPIT is not installed yet, start transaction ST04, go to
                  “Detailed Analysis”, press “Blocking lockstats” button, then press “Turn collector job on”
                  button.






Fail back
Fail back is the process of moving the database service back to the original Primary server. It can be
necessary for various reasons, for example:
        The original Secondary server is equipped with less powerful hardware and cannot permanently
        sustain the working load with a satisfactory quality of service.
        The original Secondary server must be freed for another role, e.g. as the DB server of a QAS or
        training system.

Fail back should be carried out after the original problem on the Primary server has been solved.
There are 2 possibilities to fail back:
     1. Simplified fail back without moving the database. This is only possible after a maintenance
        failover. It can be done within a couple of minutes.
     2. Regular fail back with moving the whole database. It must be used in all other scenarios.
        As this fail back requires a full database backup and restore, it may take many hours to complete.
        The system is, however, not necessarily down during that time.

 Preparing the system to fail back
Technically, preparing a fail back is identical to setting up Log Shipping where the original Secondary
server plays the primary role and the original Primary server becomes the secondary one. Make sure Log
Shipping is suspended (i.e., all Log Shipping jobs are disabled) before you proceed.

    1. If the original Primary server is lost and must be set up from scratch, refer to the chapter SQL
       Server prerequisites on page 7.
    2. If the previous failover was done as per the Maintenance scenario (see page 15) and the original
       Primary DB is in the “restoring” state, proceed to step 4. Otherwise continue with step 3.
    3. Back up the original Secondary DB and restore it onto the original Primary server as described
       in Preparations on page 8. This step may take a lot of time.
    4. Configure Log Shipping as described in Configuring Log Shipping on page 9, with the only
       difference that “Primary” now means the original Secondary server and vice versa.
            Note: use the same share for the Backup folder that you used for the original log shipping.

    At this stage there are two configured Log Shipping processes. For further reference, let's call them
    Primary-to-Secondary and Secondary-to-Primary, based on the original roles of the servers.
            The Primary-to-Secondary Log Shipping is now suspended.
            The Secondary-to-Primary one is active.

 Failing back
   1. Fail over from the original Secondary server to the original Primary one as described in the
       Maintenance scenario on page 15, except for the step “Copy the prepared SAP default profile…”
       during Completing failover. In that step, copy the original SAP default profile into DEFAULT.PFL
       in the folder <drive>:\usr\sap\<SID>\SYS\profile:
               copy DEFAULT_PRIMARY.PFL DEFAULT.PFL
       overwriting the existing file.
   2. Resume Primary-to-Secondary Log shipping as per Resume Log Shipping on page 19.






Monitoring and reporting for Log shipping
If the Monitor server is configured, an overall status of all parts of the log shipping process is available on
it within one screen. If no Monitor server is installed, the DB administrator has to check the status of the
Primary and the Secondary server separately. The procedure below is generic for the Monitor, Primary and
Secondary servers; however, the results will look different. The table below shows what kind of information
is available on each server.

Column                  | Valid for   | Displayed on | Comment
Status                  | Backup job  | Primary      |
Time since last Backup  | Backup job  | Primary      | Elapsed time since the last backup, in minutes.
Backup Threshold        | Backup job  | Primary      | Configured threshold. When "Time since last Backup" exceeds the value, an alert will be raised (if enabled).
Backup alert enabled    | Backup job  | Primary      | True/False.
Time since last Copy    | Copy job    | Secondary    | Elapsed time (in minutes) since the last copy of TRN files from the backup folder to the copy folder.
Time since last Restore | Restore job | Secondary    | Elapsed time (in minutes) since the last restore of TRN files on the Secondary server.
Latency of Last File    | Restore job | Secondary    | Latency between the last backup job and the last restore job, in minutes. When "Time since last Backup" exceeds the value, an alert will be raised (if enabled).
Restore Threshold       | Restore job | Secondary    | Configured threshold. When "Time since last Restore" exceeds the value, an alert will be raised (if enabled).
Restore alert enabled   | Restore job | Secondary    | True/False.
Last Backup File        | Backup job  | Secondary    | File name.
Last Copied File        | Restore job | Secondary    | File name.
Last Restored File      | Restore job | Secondary    | File name.

To view the Log Shipping status using SQL Management Studio as of Service Pack 2 (SP2):
         1. Start SQL Management Studio and connect to a server.
         2. Right click the server name and choose “Reports” -> “Standard Reports” -> “Transaction Log
            Shipping Status”.
To view the Log Shipping status with a Monitor server using SQL Management Studio earlier than SP2:
         1. Start SQL Management Studio and connect to a server.
         2. Select the server instance in Object Explorer.
         3. In the “Object Explorer Details” page, display the list of available report types by clicking the
            arrow next to the “Reports” button. If the Object Explorer Details page is not displayed, select
            Object Explorer Details on the View menu.
         4. Click “Transaction Log Shipping Status”.
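The same information can also be queried directly from the log shipping monitor tables in msdb (a sketch;
run the first query on the Primary or Monitor server and the second one on the Secondary or Monitor
server, and note that the exact column set may differ slightly between builds):

        -- Backup side: last backup file and the configured threshold
        SELECT primary_server, primary_database, last_backup_file, last_backup_date, backup_threshold
        FROM msdb.dbo.log_shipping_monitor_primary;

        -- Copy/restore side: last copied and restored files and the restore latency
        SELECT secondary_server, secondary_database, last_copied_file, last_restored_file,
               last_restored_date, last_restored_latency, restore_threshold
        FROM msdb.dbo.log_shipping_monitor_secondary;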

Suspend Log Shipping
        Disable the log shipping Backup job on the Primary server, if it is still running.
        Disable the Copy and Restore jobs on the Secondary server.

The path in the Object Explorer of SQL Management Studio is:
<Server> -> SQL Server Agent -> Jobs
Right click the job, then click “Disable”.
         The job name convention for the backup job is “LSBackup_<SID>”.
         The job name convention for the copy and restore jobs is “LS<action>_<PrimaryServer>_<SID>”,
         where <action> can be “Copy” or “Restore”.
Note, however, that the naming convention is not mandatory; other names may have been used.
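The jobs can also be disabled (and later re-enabled) with T-SQL, which is convenient when scripting a
failover (a sketch assuming the default job names; run each statement on the server that owns the job):

        -- Suspend log shipping by disabling the jobs; use @enabled = 1 to resume
        EXEC msdb.dbo.sp_update_job @job_name = N'LSBackup_<SID>',                  @enabled = 0;  -- Primary server
        EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_<PrimaryServer>_<SID>',    @enabled = 0;  -- Secondary server
        EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_<PrimaryServer>_<SID>', @enabled = 0;  -- Secondary server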





Resume Log Shipping
        Enable the log shipping Backup job on the Primary server.
        Enable the Copy and Restore jobs on the Secondary server.

The path in the Object Explorer of SQL Management Studio is:
<Server> -> SQL Server Agent -> Jobs
Right click the job, then click “Enable”.
         The job name convention for the backup job is “LSBackup_<SID>”.
         The job name convention for the copy and restore jobs is “LS<action>_<PrimaryServer>_<SID>”,
         where <action> can be “Copy” or “Restore”.
Note, however, that the naming convention is not mandatory; other names may have been used.

Remove Log Shipping permanently
To remove log shipping:
       Right-click the Primary database and then click “Properties”.
       Under “Select a page”, click “Transaction Log Shipping”.
       Clear the “Enable this as a primary database in a log shipping configuration” check box. A confirmation
       popup immediately appears.

       Confirm it with “Yes”.
       Press OK and check the status popup window. In case of errors or warnings press “Report” and save
       the report to a file; otherwise close the window. If you cannot solve the problem on your own, create a
       problem message at SAP, attaching the error report file.
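Log shipping can also be removed with the log shipping system stored procedures. The following is only a
sketch; verify the exact procedure and parameter names in SQL Server Books Online before use:

        -- On the Primary server, in the master database
        EXEC sp_delete_log_shipping_primary_secondary
             @primary_database = N'<SID>',
             @secondary_server = N'<SecondaryServer>',
             @secondary_database = N'<SID>'
        EXEC sp_delete_log_shipping_primary_database @database = N'<SID>'
        -- On the Secondary server, in the master database
        EXEC sp_delete_log_shipping_secondary_database @secondary_database = N'<SID>'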

Special attention is required by the following actions
 Adjusting Windows system timer
    The Log Shipping backup job creates files in the backup directory with the following naming
    convention:
    <dbname>_<YYYYMMDDHHMMSS>.trn,
    where
        <dbname> is the Primary database name,
        <YYYYMMDDHHMMSS> is the backup timestamp in UTC time on the Primary database server.
    UTC is Coordinated Universal Time, which does not depend on time zone or daylight saving settings.
    For that reason the timestamp in the file name does not correspond to local time, except in time zone
    UTC+0 (Greenwich Mean Time). For example, a backup taken at 18:37:24 local time in time zone UTC+2
    produces a file named <dbname>_20070926163724.trn. Despite some inconvenience, the naming
    convention makes sense since the primary and the secondary databases may be in different time zones
    and/or use different daylight saving settings.
Because the timestamps do not depend on local settings, it is safe to change the current time zone or
daylight saving setting on both the Primary and Secondary servers; no follow-up actions are required.
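The current offset between the server local time and the UTC timestamps used in the file names can be
checked directly on either server, for example:

        SELECT GETDATE() AS local_server_time, GETUTCDATE() AS utc_time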



The copy and restore jobs rely on correct timestamps in the file names. That’s why the primary and
secondary servers must have synchronized system clocks. Either Windows Time Service in a domain or a
network time protocol should be used for that.
If system clocks must be adjusted, it is recommended to suspend log shipping and resume it afterwards.

   Adding data and log files to the Primary DB
After a file is added to the Primary DB, the appropriate file definition is created as a log record and
transferred with the next TRN file. The restore job on the Secondary server tries to create the additional
file under the same path as on the Primary server.
If this path doesn’t exist, the restore job fails with the error

*** Error: Could not apply log backup file 'h:\ls_copy\SID_20070926163724.trn' to
secondary database 'SID'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Directory lookup for the file "P:\SIDData\SID_22.ndf" failed with the
operating system error 3(The system cannot find the path specified.).
File 'SID_22' cannot be restored to 'P:\SIDData\SID_22.ndf'.
Use WITH MOVE to identify a valid location for the file.

As a matter of fact, the newly created file may be very large, several dozen or even several hundred
gigabytes. If the path happens to exist, the file will be created there successfully. If the appropriate disk
does not have enough space, or it is undesirable to put the new file into that folder, the following actions
must be done:

    1. Disable the Restore job on the Secondary server.
    2. Add the new data file on the Primary server.
    3. Start the Restore job manually and note the last restored TRN file name from the job history.
    4. Let the Copy job run, or start it manually.
    5. Restore the last TRN file manually with a command like the following:
                RESTORE LOG <SID> FROM
                DISK='<Copy folder><last TRN file>'
                WITH MOVE '<logical name of the new file>'
                TO '<desired location on the secondary server><new filename>.ndf',
                NORECOVERY
    6. Enable the Restore job on the Secondary server.

If the described procedure was not adhered to and the restore job has failed already, the DB administrator
can still execute step 5 for the failing TRN file.
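For illustration, with the hypothetical names from the error message above (logical file SID_22, copy folder
h:\ls_copy) and a hypothetical target folder S:\SIDDATA1 on the Secondary server, step 5 would look like this:

                RESTORE LOG SID FROM
                DISK='h:\ls_copy\SID_20070926163724.trn'
                WITH MOVE 'SID_22'
                TO 'S:\SIDDATA1\SID_22.ndf',
                NORECOVERY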

  SQL Jobs
Some SAP Basis functions are implemented as scheduled SQL jobs. They are listed in General post-
failover actions on page 16. Customers may also define their own SQL jobs using standard tools (SQL
Server Management Studio) or third-party tools. All those jobs are stored in the standard database msdb,
which is separate from <SID>. As a result, any change to the jobs is not automatically shipped to the
Secondary server and must be re-created or rescheduled on the Secondary server manually as necessary.
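To find out which jobs have to be re-created, the job lists of both servers can be compared, for example with
a query against the standard SQL Server Agent table sysjobs in msdb:

        -- Run on both the Primary and the Secondary server and compare the results
        SELECT name, enabled, date_created
        FROM msdb.dbo.sysjobs
        ORDER BY name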

  Modifying the SAP DEFAULT profile
After each modification of the DEFAULT profile, the modification has to be repeated in the file
DEFAULT_SECONDARY.PFL, which is used during fail over. Otherwise, after a failover the SAP system
starts with outdated parameters. Additionally, the new DEFAULT.PFL must be copied into
DEFAULT_PRIMARY.PFL.




Frequently asked questions
Q1. Can Log Shipping coexist with Microsoft Cluster Services (MSCS)?
A1. Yes. The two approaches ideally supplement each other. While MSCS provides an instant
automatic failover, it cannot protect a DB against physical corruption or against user errors. Because of
the limited distance between cluster nodes and the single shared data storage, MSCS can hardly be used
for disaster recovery. All the mentioned features are covered by Log Shipping.

Q2. Can Log Shipping coexist with SQL Server database mirroring?
A2. Yes. A detailed explanation is out of scope of the document. Please refer to Microsoft Books Online,
article Database Mirroring and Log Shipping (http://msdn2.microsoft.com/library/en-us/53e98134-e274-
4dfd-8b72-0cc0fd5c800e.aspx).

Q3. What is the file with extension TUF, found in the Copy directory?
A3. The TUF file is the Transaction Undo File. It is created when log shipping restores to a secondary
database in STANDBY mode (see STANDBY mode of a database on page 5).
Q4. How should a normal DB backup and transaction log backup on durable media (backup tapes) be
organized? Can SAP transaction DB13 (DBACOCKPIT/DBA Planning Calendar) coexist with Log Shipping?
A4. The Primary database can be backed up with any tool, including DB13. However, no transaction log
backups other than those made by Log Shipping are allowed. That’s why the transaction log backup
functionality of DB13 may not be used. This also applies to any third-party backup software as well as to
manual transaction log backups.
Q5. After somebody unintentionally switched the DB to SIMPLE recovery mode and then immediately
back to FULL, the Log shipping Restore Job is failing. How to fix it?
A5. The current Log Shipping configuration must be removed and then set up again, including a full DB
backup and restore, because the transaction log backup chain is broken.

Q6. Because of error 9002 “Transaction log full” the DB administrator backed up the log manually with
the option NO_LOG or TRUNCATE_ONLY. Afterwards the Log shipping Restore Job is failing. How to fix
it?
A6. The current Log Shipping configuration must be removed and then set up again, including a full DB
backup and restore. The correct procedure in case of error 9002 would have been to start the Log Shipping
Backup job immediately instead.
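The Backup job can be started manually from the Object Explorer (right click the job, then “Start job at
step…”) or with T-SQL, assuming the default job name:

        EXEC msdb.dbo.sp_start_job @job_name = N'LSBackup_<SID>'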
Q7. After somebody unintentionally deleted some TRN files from the copy folder, the Log shipping Restore
Job is failing. How to fix it?
A7. Check if the deleted files are still in the Backup folder. By default they are retained there for 3 days. If
yes, copy them manually to the Copy folder. If the files have already been deleted there as well, the current
Log Shipping configuration must be removed and then set up again, including a full DB backup and restore.




Appendix

SAP Notes referred to in this document

  Note                                        Short text
98678     SQL Server Connection Issues
142731    DBCC checks of SQL server
151603    Copying an SQL Server database
208632    TCP/IP network protocol for MSSQL
610640    sp_check_sap_login




Log shippingbestpractices

  • 1. Database Log Shipping on Microsoft SQL Server Database Log Shipping on Microsoft SQL Server Applies to: Any SAP Product with ABAP stack working with SQL Server 2005. Summary This document describes best practices in setting up, monitoring and administering Database Log Shipping process which provides data redundancy and increases system availability. It focuses on the common scenarios for disaster recovery, physical data corruption, protection against user errors and high availability. Author(s): Alexander Kosolapov Company: SAP AG Created on: 24 May 2007 Last revision: 10 October 2007 1
  • 2. Database Log Shipping on Microsoft SQL Server Table of Content Overview of High Availability solutions in Microsoft SQL Server ______________ 4 Log Shipping fundamental concepts _____________________________________ 4 Overview _______________________________________________________________ 4 Technical implementation of Log Shipping ____________________________________ 4 Fail over ________________________________________________________________ 4 Fail back________________________________________________________________ 5 Backup directory (backup folder)____________________________________________ 5 Copy directory (copy folder)________________________________________________ 5 STANDBY mode of a database ______________________________________________ 5 Technical background of STANDBY mode_____________________________________________ 5 Influence of the Log shipping on a Backup Strategy ____________________________ 6 Considerations about database corruption____________________________________ 6 Considerations about user errors ___________________________________________ 6 Log Shipping as a part of disaster recovery plan _______________________________ 6 Evaluating data loss after failover ___________________________________________ 7 Log Shipping in SQL 2005 ______________________________________________ 7 Prerequisites ____________________________________________________________ 7 SQL Server prerequisites __________________________________________________________ 7 SAP prerequisites ________________________________________________________________ 7 Setting up Log Shipping ___________________________________________________ 8 Preparations ____________________________________________________________ 8 Configuring Log Shipping__________________________________________________ 9 Preparing SAP System to failover __________________________________________ 10 Failover scenarios _______________________________________________________ 12 Disaster recovery scenario ________________________________________________________ 12 Partial database loss scenario _____________________________________________________ 13 DB corruption scenario ___________________________________________________________ 13 User error scenario ______________________________________________________________ 14 Maintenance scenario____________________________________________________________ 15 Completing failover ______________________________________________________________ 16 General post-failover actions ______________________________________________________ 16 Fail back_______________________________________________________________ 17 Preparing the system to fail back ___________________________________________________ 17 Failing back____________________________________________________________________ 17 Monitoring and reporting for Log shipping ___________________________________ 18 Suspend Log Shipping ___________________________________________________ 18 Resume Log Shipping____________________________________________________ 19 Remove Log Shipping permanently_________________________________________ 19 2
  • 3. Database Log Shipping on Microsoft SQL Server Special attention is required by following actions _____________________________ 19 Adjusting Windows system timer ___________________________________________________ 19 Adding data and log files to the Primary DB___________________________________________ 20 SQL Jobs _____________________________________________________________________ 20 Modifying the SAP DEFAULT profile ________________________________________________ 20 Frequently asked questions ___________________________________________ 21 Appendix ___________________________________________________________ 22 SAP Notes, referred in this document _______________________________________ 22 3
  • 4. Database Log Shipping on Microsoft SQL Server Overview of High Availability solutions in Microsoft SQL Server SQL Server 2000 supported two approaches to increasing DB service availability: High Availability Cluster and DB Log Shipping. With SQL Server 2005 a new functionality was introduced additionally, the Database Mirroring. The current document focuses only on the DB Log Shipping. For more information on using SQL Server in a Windows Cluster (MSCS) environment, please refer to http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/hasog01.mspx. For more information on using SQL Server DB Mirroring, please refer to FAQ http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirfaq.mspx and to SAP Note 965908. Log Shipping fundamental concepts Overview Log Shipping allows you to automatically send transaction log backup files (also referred to as TRN files) from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. For the sake of simplicity the scenario with only one secondary database is considered in this document. An optional third server instance, known as a monitor server records a history and a status of log backup and restore operations. Advantages of Log Shipping are as follows: - There is no limitation for a distance between the primary and secondary servers. - The Log Shipping mechanism is tolerant to delays in transferring of transaction log backup files. - In contrast to DB cluster or DB mirroring, a problem in the Log shipping itself (e.g. lost connection) doesn’t case a fault of the productive database. - No special expensive hardware is required Disadvantages: - A switch to the secondary server must be done manually. - The databases are not at sync permanently, some data loss is possible. Technical implementation of Log Shipping Log Shipping process consists of 4 scheduled jobs: 1. Backup log job, running on the Primary Server 2. Copy job, running on the Secondary server 3. Restore job, running on the Secondary server 4. Monitoring and alerting job, running on the Monitoring server (optional). All the jobs call the executable sqllogship.exe. The sqllogship application performs a backup, copy or restore operation and associated clean-up tasks for a log shipping configuration. The operation is performed on a specific instance of Microsoft SQL Server 2005 for a specific database. Fail over Failover is a capability of a service to switch over to a redundant or standby computer server in case of a failure on the productive server. 4
  • 5. Database Log Shipping on Microsoft SQL Server Log Shipping does not possess a capability to fail over automatically from the primary server to the secondary server. If the primary database becomes unavailable, a manual intervention of a system administrator is necessary to restore DB service. He must recover the rest of transaction log backups and bring the database to the online state. Fail back Fail back is a process of moving the service back to the original hardware after it is ready to run again. In other words, it is a process, reverse to a fail over. Like a fail over, the fail back in a log shipping scenario must be done manually by the system administrator. Backup directory (backup folder) It is a network share, where transaction log backup files are stored temporarily. Copy directory (copy folder) It is a directory, to where the transaction log backup files are copied before they are restored. Usually it is located on the Secondary server. STANDBY mode of a database This mode will allow read-only operations to be performed on the database and additionally further transaction logs can be applied to the DB. A DB can get to STANDBY mode only after a restore database or restore log operation. Technical background of STANDBY mode A transaction log backup file may contain information about not yet committed transaction. Whether the transaction will be committed or rolled back, is not yet known. That information will only come with the next transaction log backup file. Thus, any full recovery of a DB (RESTORE DATABASE <…> WITH RECOVERY) consists of 2 phases: redo of all modifications undo of all uncommitted transactions. After the undo phase the DB will be in a transactional consistent state, but won’t accept any further transaction log because some modifications are lost. One can skip the undo phase by means of the option WITH NORECOVERY of the command RESTORE. However, as the DB is in a transactional inconsistent state, SQL Server doesn’t allow using it even for reads only. When RESTORE <…> WITH STANDBY is used, SQL Server performs the undo phase but saves also the state “before undo” in a so-called “standby file” or Transaction Undo File (extension TUF). This file is a mandatory argument of the option STANDBY. As the result, the database is transactional consistent and can be read. There are two possible transitions from STANDBY state. The DB administrator can recover it completely and make it read/write with the command RESTORE LOG <dbname> WITH RECOVERY. A possibility to apply any further transaction log backup will be lost forever. The next transaction log can be applied to the DB. 5
  • 6. Database Log Shipping on Microsoft SQL Server Influence of the Log shipping on a Backup Strategy As a matter of fact, the Log shipping dramatically changes a common backup strategy. When Log Shipping is implemented, its Backup log job must be one and only who is doing transaction log backups. As the job saves the backup files to a disk, they cannot be saved to tapes with SQL command “BACKUP LOG”. Neither SAP transaction DB13 nor any other SQL Backup software may be used for that. That’s why generic backup software for file backups (e.g. NTBACKUP) has to be used to save transaction log backup files from a disk to durable media. Database backups can be done as usual, directly to tapes with SQL command “BACKUP DATABASE”. Considerations about database corruption Throughout the current document the concept of database corruption means a state of a database which causes impossibility to access at least part of data because it is lost or distorted and the state cannot be repaired without data loss. Physical data corruptions are caused exclusively with hardware reasons or erroneous program coding on operating system level and in DBMS code itself. Note however, that the latter is only a theoretical possibility which has not been observed as of SQL Server version 7.0 (1998) in combination with SAP products. Note, data corruption cannot be caused with applications, using DB service. Data corruptions cannot be precisely tracked in time and often they pass unnoticed over substantial time periods of many days or weeks. For more information about this topic please refer to SAP Note 327494. Since there is no direct exchange of data pages between a primary and a secondary server, physical DB corruptions cannot be transferred to the secondary server. Thus, Log Shipping provides a better protection against DB corruptions. This protection is not absolute however. A hardware fault still may corrupt the transaction log file itself. But such a corruption won’t pass unnoticed as it will be recognized on the secondary server during restore. Considerations about user errors User errors include erroneous actions committed on DB level by DB administrators as well as on application level by functional users and errors in applications (e.g. ABAP reports). In this document we consider only user errors that cause deleting or overwriting of needful data. Also inserting of nonsense data into the database can be considered as a user error. We limit to the cases of such errors that cannot be recovered with application means. As a rule, user errors become apparent very fast, within minutes or few hours and can be precisely tracked. As a rule, the DB must be stopped and restored to the point in time shortly before the time of the user error. A common restore causes long system downtime of several hours and loss of all correct data modifications done after that point in time. Log Shipping allows intentional mistiming of the Secondary DB against the Primary DB by several minutes or hours. This is controlled by “Log delay period” option. Thus, with this option set to e.g.120 min, the Secondary DB at 10:00 has the same state as the Primary DB had at 08:00. This gives the DB administrator time to react to user errors and rapidly switch to the secondary DB avoiding long downtime. 
Log Shipping as a part of disaster recovery plan Since the Secondary DB contains redundant data that is physically separate from the Primary DB, it can continue servicing even after a total damage of the Primary server along with primary data storage. That’s why Log Shipping can be used as a part of a disaster recovery plan. 6
  • 7. Database Log Shipping on Microsoft SQL Server Evaluating data loss after failover Since Primary and Secondary database are not permanently synchronized, some committed data may get lost after failover. This depends on fail over scenario (see Failover scenarios, page 12) and on a frequency of backups. Log Shipping in SQL 2005 Prerequisites To set up Log Shipping for an SAP Solution the following prerequisites should be met: SQL Server prerequisites 1. SQL Server of the same version and edition must be installed on the Secondary server. If Service Pack 2 is installed on Primary SQL Server 2005, the Secondary server must have it too. This requirement comes from the fact that as of MS SQL 2005 SP2 the data type VARDECIMAL can be compressed, which is not supported in earlier builds. 2. Primary and Secondary servers must have the same server-wide collation: SQL_Latin1_General_CP850_BIN2 3. If you want to use an optional Monitoring Server, SQL Server software must be installed on that server. 4. The SQL Server services on Primary and Secondary servers must be running under a domain user. 5. A network share with sufficient space for transaction log backups must exist. The share must be accessible (read/write) for both Primary and Secondary SQL Server service users. By reliability reasons the share must not reside on the same hardware as the Primary server, though it is technically possible. 6. The database must be set to the Full or Bulk-logged recovery model. Under no circumstances may the recovery model be switched to Simple, even for a short time. 7. SQL Server Agent service must be up and running permanently on Primary, Secondary and Monitoring servers. 8. Primary and Secondary servers must belong to the same Windows domain or there must exist a trusted relationship between the domain of the Primary server and the domain of the Secondary server. 9. If the Secondary database must be in STANDBY mode (see STANDBY mode of a database, page 5), install Cumulative update package 3 for SQL Server 2005 Service Pack 2. Otherwise an error, described in http://support.microsoft.com/kb/940126/en-us may occur. 10. The internal servername on the SQL Server must not be null, that means the SQL statement select @@servername must return the local servername on the Primary and Secondary server. The database format of SQL Server 2005 is independent from the server platform. Thus, it’s possible to have 64-bit Primary server and 32-bit Secondary server and vice versa. SAP prerequisites There is no mechanism in SAP kernel supporting DB Log shipping, thus there are no SAP parameters controlling that. All actions must be done manually. That’s why there is no special requirement to SAP Release, kernel patch level, SAP Support Package level. 7
  • 8. Database Log Shipping on Microsoft SQL Server Setting up Log Shipping In this document we only consider a scenario of failing over and back the database service. If both SAP Central Instance (CI) and DB server coexist on the same hardware, special measures should be additionally taken to failover the SAP CI to another hardware server. For more information, refer to SAP Note 804078 “Replicated Enqueue Configuration on Windows”. If all prerequisites above are met, the Log Shipping can be set up without downtime in the production system. Before getting started with log shipping, the following questions must be clarified: Do I need the Secondary DB to be readable? There are 2 options for the Secondary DB: to be in “Loading” state or in “Standby/Read only” state. In the “loading” state the database is not accessible at all. There is even no way to check the database options (what files it consists of, their size etc.). In the “Standby/Read only” state one can execute SELECT commands from DB tables, execute consistency checks. It can also be used for business reporting (with some precautions). However, an active use of the Secondary DB prevents restoring the transaction log files. That’s why there is an option for the Standby mode which is called "Disconnect users from the secondary database when restoring backups". When activated, the restore job has higher priority and cancels all established connections. If deactivated, the restore job fails if there is a connection to the secondary DB. Transaction log backups will accumulate until there are no user connections to the database. Decide about a time lag of the Secondary DB against the Primary DB (see Considerations about user errors, page 6). Preparations 1. Make sure no transaction log backups for the DB <SID> are carried out until the Log Shipping configuration is finished. If transaction log backups are scheduled as SQL job or using a third party backup software, they must be deactivated. 2. Back up the complete <SID> database (full database backup). There is no necessity to stop SAP for that, although lower DB performance during the backup must be taken into account. Any kind of backup media (disk, tape) is suitable. 3. Restore the backup onto the secondary server. When using Management Studio, choose the option “Leave the database non-operational, and do not roll back the uncommitted transactions. Additional transaction logs can be restored. (RESTORE WITH NORECOVERY)”. See (2) on the figure below. When using T-SQL command, add the option “WITH NORECOVERY” to the option list of the RESTORE command. During the restore there is a possibility to change data file destination folders. This is necessary if the secondary server doesn’t have a drive letter which existed on the Primary server (see the page Options). 8
  • 9. Database Log Shipping on Microsoft SQL Server 1 2 Note, in the column “Restore As” you have to specify a full path to the files (1). All folders in the path must exist, but the files itself must not. Configuring Log Shipping The most comfortable way to configure Log Shipping is SQL Server Management Studio. 1. Start SQL Server Management Studio, connect to the productive SQL Server with a user having sysadmin privileges. 2. Right click the database you want to use as your Primary database, and then click Properties. 3. Under “Select a page”, click “Transaction Log Shipping”. 4. Select the “Enable this as a primary database in a log shipping configuration” check box. 5. Under “Transaction log backups”, click “Backup Settings”. 6. In the “Network path to the backup folder” box, type the network path to the share you created for the transaction log backup folder. Leave the box “If the backup folder is located on the primary server, type the local path to the backup folder” empty. 7. Configure the “Delete files older than:” parameter. It is recommended to set the value to at least 3 days. The maximum value is limited only through the space provided on the backup folder for TRN files. 8. Configure “Alert if no backup occurs” with desired parameters. 9. It is recommended to leave the default name under “Job name”. 10. Click “Schedule…” and adjust the SQL Server Agent schedule. Set up the desired interval in the group “Daily frequency”, field “Occurs every…”. 11. Click OK. 12. Click OK. 9
  • 10. Database Log Shipping on Microsoft SQL Server 13. Under “Secondary server instances and databases”, click Add. 14. Click “Connect” and connect to the Secondary SQL Server. It is strongly recommended to use Windows authentication because some problems were reported by using SQL authentication. 15. In the “Secondary Database” box, choose the <SID> database restored during the preparation. 16. On the “Initialize Secondary database” tab, choose “No, the secondary database is initialized”. 17. On the “Copy Files” tab, in the “Destination folder for copied files” box, type the path of the folder into which the transaction logs backups should be copied. Note, this folder is located on the secondary server. 18. Configure the “Delete copied files after” parameter. It is recommended to set the value to at least 3 days. The maximum value is limited only through the space provided on the copy folder. 19. Click “Schedule…” and adjust the SQL Server Agent schedule as needed. This schedule should approximate the backup schedule with some shift necessary for a transaction log backup to complete. For instance, if you scheduled the backup job starting at 0:00 each 15 minutes, it makes sense to start the Copy job at 0:05 each 15 minutes. 20. On the “Restore Transaction Log” tab, under “Database state when restoring backups”, choose the “No recovery” mode or “Standby mode” option (see “STANDBY mode of a database” on page 5). Refer to SQL Server prerequisites page 7 point 9 in the latter case. 21. If you want to delay the restore process on the secondary server, type in the desired time lag under “Delay restoring backups at least…” (see Considerations about user errors on page 6). 22. Choose an alert threshold under “Alert if no restore occurs within…” 23. Click “Schedule” and then adjust the SQL Server Agent schedule as needed. This schedule should approximate the copy schedule with some shift necessary for a copy job to complete. For instance, if you scheduled the Copy job starting at 0:05 each 15 minutes, it makes sense to start the Restore job at 0:10 each 15 minutes. 24. Click OK. On the same screen an optional monitor server can be configured. Once the monitor server has been configured it cannot be changed without removing log shipping first. 25. Under “Monitor server instance”, select the “Use a monitor server instance” check box, and then click “Settings”. 26. Click “Connect” and connect to the instance of SQL Server that you want to use as your Monitor server. It is recommended to use Windows authentication. The user must be assigned to the “sysadmin” fixed server role. 27. Under “Monitor connections”, choose the connection method to be used by the backup, copy, and restore jobs to connect to the monitor server. 28. Under “History retention”, choose the length of time you want to retain a record of your log shipping history. It is recommended to retain the history for at least 2 weeks. 29. Click OK. 30. On the Database Properties dialog box, click OK to begin the configuration process. In case of errors or warnings press “Report” and save the report to a file. If you cannot solve the problem by your own, create a problem message at SAP attaching the error report file. Preparing SAP System to failover 1. Create the procedure which fixes SAP logins. As a matter of fact, all security-relevant information is stored in the “master” DB and thus is not transferred to the Secondary server automatically with Log Shipping. 
That’s why SQL Server logins and users must be created on the Secondary server manually. The easiest way to do so is to make use of the stored procedure “sp_check_sap_login” from SAP note 610640. Create this stored procedure in the “master” database on the Primary and Secondary servers. 2. Create a copy of the DEFAULT.PFL profile which will be used for failover. As a fact, during a failover the DB server name changes and SAP system must be aware of the change. File DEFAULT.PFL in folder <drive>:usrsap<SID>sysprofile contains one obligatory and one optional parameter that specify the database server name. 10
  • 11. Database Log Shipping on Microsoft SQL Server SAPDBHOST is the obligatory parameter. It’s the hostname of the database server. Since the hostname is changed after failover, this parameter must be corrected appropriately. dbs/mss/server is an optional parameter. If not set explicitly, its value is copied from SAPDBHOST. This parameter is mandatory if SQL Server is installed as a named instance, i.e., network hostname is not equal to SQL Server instance name. The recommended procedure is as follows: Copy the actual DEFAULT.PFL file to DEFAULT_PRIMARY.PFL Copy the actual DEFAULT.PFL file to DEFAULT_ SECONDARY.PFL In the DEFAULT_SECONDARY.PFL substitute <Primary DB Server hostname> with <Secondary DB Server hostname> for the parameter SAPDBHOST. If the parameter dbs/mss/server is explicitly set, in the DEFAULT_SECONDARY.PFL substitute <Primary SQL Server name> with <Secondary SQL Server name>. This option accepts prefixes and port number that may be necessary when the Secondary server is a named instance. For instance, dbs/mss/server = tcp:QASDBTST,1433 where “tcp:” is a protocol specifier, “QUASDBPRD” is the SQL Server named instance name, “1433” is the port number the SQL Server is listening to. Refer to SAP note 208632 for more information. 3. Windows environment variables. Standalone SAP programs (i.e. tp, R3trans, saplicense) require the environment variable MSSQL_SERVER to be correctly specified with SQL Server name (see SAP note 98678). Since after a failover the name has changed, the variable must be corrected too. This action is not as important as it is necessary only for technical transactions. SAP System itself doesn’t read the environment variables and can start even with wrong values. On Windows 2003 Server you can prepare a script, setting Windows environment variables on the central instance and all application servers. The command is SETX MSSQL_SERVER <Secondary SQL server name> For example: SETX MSSQL_SERVER QASDBTST,1433 The script has to be executed before failover under user <domain><sid>adm. On Windows 2000 the action needs to me done manually as follows: - Log on as user <domain><sid>adm. - right click on “My computer”, choose “Properties” - Go to tab “Advanced” - Press button “Environment variables” - Choose the variable MSSQL_SERVER in the “User variables for <sid>adm” and press “Edit”. - Enter the name of the Secondary SQL server, e.g. QASDBTST,1433. - Press OK until you leave the application. 11
  • 12. Database Log Shipping on Microsoft SQL Server Failover scenarios Below only a failover of DB server will be discussed. If SAP Central Instance (SAP CI) coexists with DB instance on the same hardware, a lot of further actions must be prepared for failover. All they are out of scope of this document. Below we consider the following scenarios of failover: 1. The Primary DB server is physically damaged or transaction log file is lost (Disaster recovery scenario, page 12). 2. At least one of data files is lost, but the Primary server itself is in good order and all transaction log files (.LDF) are intact (Partial database loss scenario, page 13). 3. Database server is functioning, but an irrecoverable database corruption has been detected (DB corruption scenario, page 13). 4. Some irrecoverable data have been deleted or distorted as a result of a user error (User error scenario, page 14). 5. Primary server must be stopped for hardware or software maintenance (Maintenance scenario, page 15). The crucial parameters for assessing data loss are: time interval between 2 consecutive runs of the Backup job on the Primary server. This will be referred below as backup interval. Applying of TRN files can be additionally delayed by restore delay. Disaster recovery scenario In this case the Primary DB server suddenly suffers damage or completely destroyed along with data storage. For SAP application servers it looks like it has just disappeared. The same scenario is also applicable in case of damage or a loss of the transaction log file. Data loss is calculated as time elapsed since the last backup and is less then backup interval. To fail over to the Secondary server the following actions must be done: 1. Stop the SAP System. 2. Log on to the Secondary server with SQL Management Studio 3. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs. By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 4. When the Copy job finished, start the Restore job. By default, its name is “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 5. Suspend Log shipping as described in Suspend Log Shipping on page 18. 6. If the restore delay is not used, proceed with step 9. 7. Identify the last TRN file used for restore: right click the Restore job, choose “View history”, expand the most recent history record. The step logs are displayed in reverse chronological order. Find the topmost step containing the message Restored log backup file. Secondary DB: '<SID>', File: ‘copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn' Note its name. 8. Find in the Copy folder the TRN files with timestamps greater then the last restored TRN file and restore those files manually with a series of commands like follows: RESTORE LOG <SID> FROM DISK = 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn' WITH NORECOVERY Note, the TRN files must be processed in strict chronological order, otherwise an error is thrown. Should it happen, find the correct TRN file and proceed with the restore. 9. Proceed with the section Completing failover on page 16. In this scenario a simplified fail back (see Fail back on page 17) is not possible. 12
  • 13. Database Log Shipping on Microsoft SQL Server Partial database loss scenario In this case the Primary server cannot access at least one of data files. As the result, the <SID> DB is in state “Suspect” and cannot be opened. The SAP System can’t run. However, if all transaction log files are available and not damaged, there is a chance to failover without loss of committed data. To fail over to the Secondary server without data loss the following actions must be done: 1. Suspend Log shipping as described in Suspend Log Shipping on page 18. 2. Stop the SAP System. 3. Back up the active transaction log on your primary server with option NO_TRUNCATE. That option allows making a log backup even if the database is unavailable. This last log is also called “tail log”. The command is as follows: USE master BACKUP LOG <SID> TO DISK = 'backup_shareBackupFolder<SID>_YYYYMMDDHHMM_tail.trn' WITH NO_TRUNCATE If the command finishes successfully, there will be no data loss, proceed with step 4. Otherwise the situation meets not this scenario, rather Disaster recovery scenario, because the transaction log is damaged or lost. Please proceed with step 2 of Disaster recovery scenario on page 12. 4. Copy the tail log backup from the Backup folder to the Copy folder manually. 5. Log on to the Secondary server with SQL Management Studio 6. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs. By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 7. When the copy job is finished, start the Restore job. By default, its name is “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 8. If the restore delay is not used, proceed with step 11. 9. Identify the most recent TRN file used for restore: right click the Restore job, choose “View history” and expand the recent history record. The step logs are displayed in a reverse chronological order. Find the topmost step containing the message “Restored log backup file. Secondary DB: '<SID>', File: 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn'. Note its name. 10. Find in the Copy folder TRN files with timestamps greater then the last restored TRN file and restore those files in sequence with a series of commands like follows: RESTORE LOG <SID> FROM DISK = 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn' WITH NORECOVERY Note, the TRN files must be processed in strict chronological order, otherwise an error is thrown. Should it happen, find the correct TRN file and proceed with the restore. 11. Restore the tail log TRN file with the command RESTORE LOG <SID> FROM DISK = 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS_tail.trn' WITH NORECOVERY 12. Proceed with the section Completing failover on page 16. In this scenario a simplified fail back is not possible (see Fail back on page 17). DB corruption scenario For more information about DB corruptions please refer to SAP note 142731. In this case the Primary DB appears to be corrupted after a regular check or because some SQL statements fail causing an ABAP short dump in the SAP system. The DB server itself is functioning and none of the database files is lost. 13
  • 14. Database Log Shipping on Microsoft SQL Server If you faced a DB corruption, there is a probability that the transaction log has been corrupted too. In this case the Restore job will be failing with errors like During redoing of a logged operation in database 'SID', an error occurred at log record ID (xxx:xxx:zzz). If this is the case, the database can only be restored to the point of a corruption in the transaction log and thus, some committed transactions will be lost. If the restore operation finishes without errors, no data loss happens after a failover. To fail over to the Secondary server the following actions must be done: 1. Inform end users about an urgent SAP System shutdown and stop the SAP system. 2. Log on to the Primary DB server with SQL Management Studio 3. Start the Log shipping Backup job. It can be found under <Server> -> SQL Server Agent -> Jobs. By default, its name is “LSBackup_<SID>”. Right click the job name and choose “Start job at step…”. 4. Suspend Log shipping as described in Suspend Log Shipping on page 18. 5. Log on to the Secondary server with SQL Management Studio 6. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs. By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 7. When the copy job is finished, start the Restore job. By default, its name is “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. Check job history to identify failures during restore. If an error suggesting to a corrupted transaction log occurs, proceed with Completing failover on page 16. In this case take some data loss into account. 8. If the restore delay is not used, proceed with step 11. 9. Identify the last TRN file used for restore: right click the Restore job, choose “View history”, expand the recent history record. The step logs are displayed in reverse chronological order. Find the topmost step containing the message “Restored log backup file. Secondary DB: '<SID>', File: ‘copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn'. Note its name. 10. Find in the Copy folder TRN files with greater timestamps and restore those files manually with a series of commands like follows: RESTORE LOG <SID> FROM DISK = 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn' WITH NORECOVERY Note, the TRN files must be processed in sequence in a strict chronological order, otherwise an error is thrown. Should it happen, find the correct TRN file and proceed with the restore. 11. Proceed with the section Completing failover on page 16. In this scenario a simplified fail back (see Fail back on page 17) is not possible. User error scenario In this case the hardware is in good order and the DB is physically consistent, but the database is unusable for business applications as it contains logical inconsistencies, or some data was deleted or a lot of false data was inserted. The most effective solution in this case is to bring the database to the state as it had been before the user error was committed. The consequence of point in time restore is an inevitable loss of all modifications that were done after that point in time. In this scenario a simplified fail back (see Fail back on page 17) is not possible. To fail over to the Secondary server without data loss the following actions must be done: 1. Immediately suspend Log shipping as described in Suspend Log Shipping on page 18 and note the time when it was done. This crucial point will be referred as LS_STOP below. 2. 
Inform end users about an urgent SAP System shutdown and stop the system. 14
  • 15. Database Log Shipping on Microsoft SQL Server 3. Identify the point in time when the user error was committed. If it cannot be identified precisely, take the earliest time it might have happened. It will be referred as TIME_ERROR. Convert TIME_ERROR to UTC time, let’s call it TIME_ERROR_UTC. 4. Let us call the difference between LS_STOP and TIME_ERROR as reaction time. If the reaction time is greater then the restore delay, the user error has already been applied to the Secondary DB. The error must be corrected by means of restore the DB from regular backups. Otherwise proceed with the next steps. 5. Log on to the Secondary server with SQL Management Studio 6. Identify the last TRN file used for restore: right click the Restore job, choose “View history”, expand the recent history record. The step logs are displayed in reverse chronological order. Find the topmost step containing the message “Restored log backup file. Secondary DB: '<SID>', File: ‘copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn'. Note its name. 7. Find in the Copy folder TRN files with greater timestamps, but not exceeding TIME_ERROR_UTC and restore those files manually with a series of commands like follows RESTORE LOG <SID> FROM DISK = 'copy_shareCopyFolder<SID>_YYYYMMDDHHMMSS.trn' WITH NORECOVERY, STOPAT = '<date_time>' where <date_time> should be 1 minute before TIME_ERROR. Adhere to the full date and time format ‘YYYY-MM-DD HH:MM:SS’, e.g. '2007-05-24 09:59:59'. Note, the TRN files must be processed in strict chronological order, otherwise an error is thrown. Should it happen, find the correct TRN file and proceed with the restore. 8. Proceed with the section Completing failover on page 16 Maintenance scenario In this case the Primary server must be stopped for a substantial period of time while the SAP system must go on running. In this scenario no committed data is lost. To fail over to the Secondary server without data loss the following actions must be done: 1. Suspend Log shipping as described in Suspend Log Shipping on page 18. 2. Inform end users about an SAP System shutdown and stop the system. 3. Close all connections to <SID> database. You can check for them in Management Studio under “Management” -> “Activity monitor”. 4. Back up the active transaction log on your primary server with option NORECOVERY. That leaves the Primary DB in “restoring” state, where it is unavailable, but is prepared to simplified fail back (see Fail back on page 17). The command is as follows: USE master BACKUP LOG <SID> TO DISK = 'backup_shareBackupFolder<SID>_YYYYMMDDHHMM_tail.trn' WITH NORECOVERY 5. Copy the tail transaction log backup file from the Backup folder to the Copy folder. 6. Log on to the Secondary server with SQL Management Studio 7. Start the Copy Job. It can be found under <Server> -> SQL Server Agent -> Jobs. By default, its name is “LSCopy_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 8. When the copy job is finished, start the Restore job. By default, its name is “LSRestore_<Primary server name>_<SID>”. Right click the job name and choose “Start job at step…”. 9. If the restore delay is not used, proceed with step 12. 10. Identify the most recent TRN file used for restore: right click the Restore job, choose “View history”, expand the recent history record. The step logs are displayed in a reverse chronological order. Find the topmost step containing the message “Restored log backup file. Secondary DB: '<SID>', File: ‘CopyFolder<SID>_YYYYMMDDHHMMSS.trn'. 
Note its name.
11. Find in the Copy folder the TRN files with timestamps greater than the last restored TRN file and restore those files in sequence with a series of commands like the following:
RESTORE LOG <SID> FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS.trn' WITH NORECOVERY
Note that the TRN files must be processed in strict chronological order, otherwise an error is thrown. Should that happen, find the correct TRN file and proceed with the restore.
12. Restore the tail log TRN file with the command:
RESTORE LOG <SID> FROM DISK = '\\copy_share\CopyFolder\<SID>_YYYYMMDDHHMMSS_tail.trn' WITH NORECOVERY
13. Proceed with the section Completing failover below. If all actions are done correctly, it is possible to fail back very quickly without copying the database back to the Primary server (see Fail back below).

Completing failover
Independently of the failover scenario, you have to execute the following actions to complete the failover process.
1. Issue the command:
RESTORE LOG <SID> WITH RECOVERY
This brings the database online so that it can be used with the SAP System.
2. Execute the procedure that fixes the mappings between SAP SQL users and server logins:
use master;
exec sp_check_sap_login <SID>, <schema>, <Windows domain>, repair;
where <schema> is the current DB schema (see SAP note 98678), <Windows domain> is the domain of the standard SAP users <sid>adm and SAPService<SID>, and 'repair' is the keyword instructing the procedure to perform the correcting actions. (A small verification sketch is shown at the end of this chapter, after the General post-failover actions.)
3. Copy the prepared SAP default profile file into DEFAULT.PFL in the folder <drive>:\usr\sap\<SID>\SYS\profile:
copy DEFAULT_SECONDARY.PFL DEFAULT.PFL
overwriting the existing file.
4. Correct the environment variable MSSQL_SERVER on each application server as per Preparing SAP System to failover, point 3 on page 11.
5. Start the SAP System.

General post-failover actions
Independently of the failover scenario, some actions must be done in the SAP system if it had to run in a failover state for longer than one day.
1. The DB backup job must be scheduled anew. If SAP transaction DB13 was used for that earlier, use DB13 again, taking into consideration possibly changed backup device names. If other DB backup software was used, refer to its documentation.
2. For SAP Basis Release 610 and higher: create the DB collector jobs. Execute the post-processing step "Creating the permanent stored procedures" described in SAP note 151603.
3. Reschedule the SAP blocking locks collector job.
a. In a system with DBACOCKPIT installed, start transaction DBACOCKPIT, on the navigation pane expand "Performance" -> "History", then double-click "Lock History". Press the button "Turn collector job on" which appears on the navigation frame.
b. In a system where DBACOCKPIT is not installed yet, start transaction ST04, go to "Detailed Analysis", press the "Blocking lockstats" button, then press the "Turn collector job on" button.
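The following T-SQL sketch can be used as an optional sanity check on the new primary (the original Secondary) server after a failover has been completed. It is only a sketch; <SID> is a placeholder, and the orphaned-user report covers SQL-authenticated users only, whereas the Windows login mappings are handled by sp_check_sap_login as described above.
USE master;
-- The database should now be ONLINE (no longer RESTORING or read-only STANDBY).
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name = '<SID>';

-- Report SQL-authenticated database users whose SIDs no longer match a server login.
USE <SID>;
EXEC sp_change_users_login 'Report';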
Fail back
Fail back is the process of moving the database service back to the original Primary server. It can become necessary for various reasons, for example:
- The original Secondary server is equipped with less powerful hardware and cannot permanently sustain the working load with a satisfactory quality of service.
- The original Secondary server must be freed for another role, e.g. as DB server for a QAS or training system.
Fail back should be carried out after the original problem on the Primary server has been solved. There are two possibilities to fail back:
1. Simplified fail back without moving the database. This is only possible after a maintenance failover. It can be done within a couple of minutes.
2. Regular fail back with moving the whole database. It needs to be executed in all other scenarios. As this fail back requires a full database backup and restore, it may take many hours to complete. The system is, however, not necessarily down during that time.

Preparing the system to fail back
Technically, preparing a fail back is identical to setting up Log Shipping, where the original Secondary server plays the primary role and the original Primary server becomes the secondary one. Make sure Log Shipping is suspended (i.e., all Log Shipping jobs are disabled) before you proceed.
1. If the original Primary server is lost and must be set up from scratch, refer to the chapter SQL Server prerequisites on page 7.
2. If the previous failover was done as per Maintenance scenario (see page 15) and the originally Primary DB is in the "restoring" state, proceed to step 4. Otherwise continue with step 3.
3. Back up the original Secondary DB and restore it onto the originally Primary server as described in Preparations on page 8. This step may take a lot of time.
4. Configure Log Shipping as described in Configuring Log Shipping on page 9, with the only difference that now "Primary" means the originally Secondary server and vice versa. Note: use the same share for the Backup folder that you used for the original log shipping.
At this stage there are two configured Log shipping processes. For further reference let us call them Primary-to-Secondary and Secondary-to-Primary, based on the original roles of the servers. The Primary-to-Secondary Log shipping is now suspended; the Secondary-to-Primary one is active.

Failing back
1. Fail over from the originally Secondary server to the originally Primary one as described in Maintenance scenario on page 15, except for the step "Copy the prepared SAP default profile…" during completing failover. In that step copy the original SAP default profile file into DEFAULT.PFL in the folder <drive>:\usr\sap\<SID>\SYS\profile:
copy DEFAULT_PRIMARY.PFL DEFAULT.PFL
overwriting the existing file.
2. Resume the Primary-to-Secondary Log shipping as per Resume Log Shipping on page 19.
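Before choosing between the simplified and the regular fail back, it may help to check whether the originally Primary database is still in the "restoring" state. A minimal query sketch, with <SID> as a placeholder:
USE master;
SELECT name, state_desc
FROM sys.databases
WHERE name = '<SID>';
-- state_desc = 'RESTORING' indicates that the tail-log backup taken WITH NORECOVERY
-- left the database prepared for the simplified fail back; any other state (or a
-- missing database) means the regular fail back with a full backup and restore is required.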
Monitoring and reporting for Log shipping
If the Monitor server is configured, an overall status of all parts of the log shipping process is available on it within one screen. If no Monitor server is installed, a DB administrator has to check the status of the Primary and the Secondary server separately. The procedure below is generic for Monitor, Primary and Secondary servers; however, the result will look different. The table below shows what kind of information is available on each server.

Column                  | Valid for   | Displayed on | Comment
Status                  | Backup job  | Primary      |
Time since last Backup  | Backup job  | Primary      | Elapsed time since the last Backup, in minutes.
Backup Threshold        | Backup job  | Primary      | Configured threshold. When "Time since last Backup" exceeds the value, an alert will be raised (if enabled).
Backup alert enabled    | Backup job  | Primary      | True/False.
Time since last Copy    | Copy job    | Secondary    | Elapsed time (in minutes) since the last copy of TRN files from the backup folder to the copy folder.
Time since last Restore | Restore job | Secondary    | Elapsed time (in minutes) since the last restore of TRN files on the Secondary server.
Latency of Last File    | Restore job | Secondary    | Latency between the last backup job and the last restore job, in minutes.
Restore Threshold       | Restore job | Secondary    | Configured threshold. When "Time since last Restore" exceeds the value, an alert will be raised (if enabled).
Restore alert enabled   | Restore job | Secondary    | True/False.
Last Backup File        | Backup job  | Secondary    | File name.
Last Copied File        | Restore job | Secondary    | File name.
Last Restored File      | Restore job | Secondary    | File name.

To call the Log Shipping status using SQL Management Studio as of Service Pack 2 (SP2):
1. Start SQL Management Studio and connect to a server.
2. Right-click the server name and choose "Reports" -> "Standard Reports" -> "Transaction Log Shipping Status".

To call the Log Shipping status with a Monitor server using SQL Management Studio earlier than SP2:
1. Start SQL Management Studio and connect to a server.
2. Select the server instance in Object Explorer.
3. In the "Object Explorer Details" page, display the list of available report types by clicking the arrow next to the "Reports" button. If the Object Explorer Details page is not displayed, select Object Explorer Details on the View menu.
4. Click "Transaction Log Shipping Status".

Suspend Log Shipping
Disable the log shipping Backup job on the Primary server, if it is still running. Disable the Copy and Restore jobs on the Secondary server. The path in the Object Explorer of SQL Management Studio is: <Server> -> SQL Server Agent -> Jobs. Right-click the job, then click "Disable". The naming convention for the backup job is "LSBackup_<SID>"; for the other jobs it is "LS<action>_<PrimaryServer>_<SID>", where <action> can be "Copy" or "Restore". Note, however, that the naming convention is not mandatory; other names may have been used.
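As an alternative to clicking through Object Explorer, the jobs can also be disabled and re-enabled with a short T-SQL script against msdb. This is only a sketch; the job names below are placeholders following the default naming convention described above and must be adapted if different names were used.
USE msdb;
-- On the Primary server: disable the backup job.
EXEC dbo.sp_update_job @job_name = N'LSBackup_<SID>', @enabled = 0;
-- On the Secondary server: disable the copy and restore jobs.
EXEC dbo.sp_update_job @job_name = N'LSCopy_<PrimaryServer>_<SID>', @enabled = 0;
EXEC dbo.sp_update_job @job_name = N'LSRestore_<PrimaryServer>_<SID>', @enabled = 0;
-- To resume log shipping, run the same calls with @enabled = 1.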
Resume Log Shipping
Enable the log shipping Backup job on the Primary server. Enable the Copy and Restore jobs on the Secondary server. The path in the Object Explorer of SQL Management Studio is: <Server> -> SQL Server Agent -> Jobs. Right-click the job, then click "Enable". The naming convention for the backup job is "LSBackup_<SID>"; for the other jobs it is "LS<action>_<PrimaryServer>_<SID>", where <action> can be "Copy" or "Restore". Note, however, that the naming convention is not mandatory; other names may have been used.

Remove Log Shipping permanently
To remove log shipping:
1. Right-click the Primary database and then click "Properties".
2. Under "Select a page", click "Transaction Log Shipping".
3. Clear the "Enable this as a primary database in a log shipping configuration" flag. A confirmation popup immediately appears; confirm it with Yes.
4. Press OK and check the status popup window. In case of errors or warnings press "Report" and save it to a file; otherwise close the window. If you cannot solve the problem on your own, create a problem message at SAP, attaching the error report file.

Special attention is required by the following actions

Adjusting Windows system timer
The Log shipping backup job creates files in the backup directory with the following naming convention: <dbname>_<YYYYMMDDHHMMSS>.trn, where <dbname> is the Primary database name and <YYYYMMDDHHMMSS> is the backup timestamp in UTC time on the Primary database server. UTC is the coordinated universal time, which does not depend on time zone or daylight saving settings. For that reason the timestamp in the file name does not correspond to local time, except in countries in time zone UTC+0 (Greenwich Mean Time). Despite some inconvenience, the naming convention makes sense, since the primary and the secondary databases may be in different time zones and/or use different daylight saving settings. That is also why it is safe to change the current time zone or daylight saving setting on both the Primary and the Secondary server; no follow-up actions are required.
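When a local point in time (for example TIME_ERROR from the user error scenario) must be matched against the UTC timestamps in the TRN file names, the server's current UTC offset can be applied with a small T-SQL expression. This is only a sketch with an example value; it assumes the daylight saving offset at the time in question is the same as the current one.
-- Convert a local time to UTC using the current offset of this server.
SELECT DATEADD(minute,
               DATEDIFF(minute, GETDATE(), GETUTCDATE()),
               CONVERT(datetime, '2007-05-24 10:00:00', 120)) AS time_error_utc;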
The copy and restore jobs rely on correct timestamps in the file names. That is why the primary and secondary servers must have synchronized system clocks; either the Windows Time Service in a domain or a network time protocol should be used for that. If the system clocks must be adjusted, it is recommended to suspend log shipping and resume it afterwards.

Adding data and log files to the Primary DB
After a file is added to the Primary DB, the appropriate file definition is created as a log record and transferred with the next TRN file. The restore job on the Secondary server tries to create the additional file under the same path as on the Primary server. If this path does not exist, the restore job fails with an error like the following:
*** Error: Could not apply log backup file 'h:\ls_copy\SID_20070926163724.trn' to secondary database 'SID'.(Microsoft.SqlServer.Management.LogShipping) ***
*** Error: Directory lookup for the file "P:\SIDData\SID_22.ndf" failed with the operating system error 3 (The system cannot find the path specified.). File 'SID_22' cannot be restored to 'P:\SIDData\SID_22.ndf'. Use WITH MOVE to identify a valid location for the file.
As a matter of fact, the newly created file may be very large, several dozen or even several hundred gigabytes. If the path happens to exist, the file will be created successfully. If the appropriate disk does not have enough space, or it is undesirable to put the new file into that folder, the following actions must be done:
1. Disable the Restore job on the Secondary server.
2. Add the new data file.
3. Start the Restore job manually; take the TRN file name from the job history.
4. Let the Copy job run, or start it manually.
5. Restore the last TRN file manually with a command like the following (see the lookup sketch below for ways to determine the logical file name):
RESTORE LOG <SID> FROM DISK = '<Copy folder>\<last TRN file>' WITH MOVE '<logical name of the new file>' TO '<desired location on the secondary server>\<new filename>.ndf', NORECOVERY
6. Enable the Restore job on the Secondary server.
If the described procedure was not adhered to and the restore job has already failed, the DB administrator can still execute step 5 for the failing TRN file.

SQL Jobs
Some SAP Basis functions are implemented as scheduled SQL jobs; they are listed in General post-failover actions on page 16. Customers may also define their own SQL jobs using standard tools (SQL Management Studio) or third-party tools. All those jobs are stored in the standard database msdb, which is separate from <SID>. As a result, any change in the jobs will not be automatically shipped to the Secondary server and, where necessary, must be re-created on the Secondary server manually.

Modifying the SAP DEFAULT profile
After each modification of the default profile, the modification has to be repeated in the file DEFAULT_SECONDARY.PFL, which is used during failover. Otherwise, after a failover the SAP system starts with old parameters. Additionally, the new DEFAULT.PFL must be copied into DEFAULT_PRIMARY.PFL.
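For the WITH MOVE restore described under Adding data and log files to the Primary DB above, the logical name of the newly added file must be known. The following is only a sketch of two ways it might be looked up; the database name, folder and file names are placeholders.
-- On the Primary server: logical and physical names of all files of the database.
SELECT name AS logical_name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('<SID>');

-- Alternatively, on the Secondary server: list the files referenced by the failing TRN backup.
RESTORE FILELISTONLY FROM DISK = '<Copy folder>\<failing TRN file>';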
Frequently asked questions
Q1. Can Log Shipping coexist with Microsoft Cluster Services (MSCS)?
A1. Yes. The two approaches ideally supplement each other. While MSCS provides an instant automatic failover, it cannot protect a DB against physical corruption or against user errors. Because of the limitation on the distance between cluster nodes and the single shared data storage, MSCS can hardly be used for disaster recovery. All the mentioned features are covered by Log Shipping.

Q2. Can Log Shipping coexist with SQL Server database mirroring?
A2. Yes. A detailed explanation is out of the scope of this document. Please refer to Microsoft Books Online, article Database Mirroring and Log Shipping (http://msdn2.microsoft.com/library/en-us/53e98134-e274-4dfd-8b72-0cc0fd5c800e.aspx).

Q3. What is the file with extension TUF, found in the Copy directory?
A3. The TUF file is the Transaction Undo File. It is created when performing log shipping to a secondary database in STANDBY mode (see STANDBY mode of a database on page 5).

Q4. How should a normal DB backup and transaction log backup on durable media (backup tapes) be organized? Can SAP transaction DB13 (DBACOCKPIT/DBA Planning Calendar) coexist with Log Shipping?
A4. The Primary database can be backed up with any tool, including DB13. However, no transaction log backups other than those of Log Shipping are allowed. That is why the transaction log backup functionality of DB13 may not be used. This is also valid for any third-party backup software as well as for manual transaction log backups.

Q5. After somebody unintentionally switched the DB to the SIMPLE recovery model and then immediately back to FULL, the Log shipping Restore job is failing. How to fix it?
A5. The current Log Shipping must be removed and then set up again, including a full DB backup and restore, because the transaction log backup chain is broken.

Q6. Because of error 9002 "Transaction log full", the database administrator backed up the log manually with the option NO_LOG or TRUNCATE_ONLY. Afterwards the Log shipping Restore job is failing. How to fix it?
A6. The current Log Shipping must be removed and then set up again, including a full DB backup and restore. The correct procedure in case of error 9002 would have been to start the Log Shipping Backup job immediately.

Q7. After somebody unintentionally deleted some TRN files from the Copy folder, the Log shipping Restore job is failing. How to fix it?
A7. Check whether the deleted files are still in the Backup folder; by default they are retained there for 3 days. If yes, copy them manually to the Copy folder. If the files have already been deleted, the current Log Shipping must be removed and then set up again, including a full DB backup and restore.
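For the situations in Q5 to Q7 it can help to verify on the Primary server whether the transaction log backup chain is still unbroken before re-initializing log shipping. The following query against the backup history in msdb is only a diagnostic sketch (<SID> is a placeholder); a gap exists wherever the first_lsn of a backup does not equal the last_lsn of the previous one.
-- Log backups of <SID> in chronological order.
SELECT s.backup_start_date, s.first_lsn, s.last_lsn, m.physical_device_name
FROM msdb.dbo.backupset AS s
JOIN msdb.dbo.backupmediafamily AS m
  ON s.media_set_id = m.media_set_id
WHERE s.database_name = '<SID>'
  AND s.type = 'L'
ORDER BY s.backup_start_date;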
Appendix
SAP Notes referred to in this document

Note   | Short text
98678  | SQL Server Connection Issues
142731 | DBCC checks of SQL server
151603 | Copying an SQL Server database
208632 | TCP/IP network protocol for MSSQL
610640 | sp_check_sap_login