DB2 z/OS Recovery Overview


  • How many of you have mission-critical systems? How many of you have e-business applications? Did you know that industry analysts estimate that each minute of database downtime could cost your business up to $96K? That’s nearly $6M/hour. Unfortunately, unplanned database downtime is a fact of life, and 80% of unplanned downtime is caused by software or human error. McGladrey and Pullen estimates that 50% of all businesses that experience a critical system outage of 10 days or more will NEVER recover. In those 10 days, the average business loses 2 - 3% of its annual revenue. For every 8 hours of outage, the average business loses 1/2 point of market share, which will take a full 3 years to recover. Database downtime can also result in lost customers. If you lose a customer, it costs 14 times your initial investment to get them back. If you CAN get them back. And industry research shows that the average DBA has less than a year of experience. So: are you ready to recover from unplanned downtime? Are you confident you can recover from any type of outage? Do you know how to minimize your time to recovery?
  • When a table space or index space is opened for update, an entry is made in SYSLGRNX to record the RBA range of the recovery log during the time when this object was being updated. Image copies are registered in the DB2* catalog table called SYSCOPY. Incremental image copies can be merged together with full image copies to form a new full image copy before a recovery is required. During recovery, SYSCOPY is scanned to find the most current full image copy for the object being recovered. SYSLGRNX is then scanned to determine the RBA range on the DB2 recovery log that will be used to apply all updates forward. The Boot Strap Data Set will be used to match the RBA ranges with the appropriate Active and Archive log data sets for the recovery.
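  • The SYSCOPY scan described above can also be done by hand when planning a recovery. A minimal query sketch; the database and table space names ('MYDB', 'MYTS') are placeholders:

```sql
-- Full image copies for one space, newest first
-- ('MYDB' and 'MYTS' are hypothetical names)
SELECT DSNAME, ICDATE, ICTIME, HEX(START_RBA)
  FROM SYSIBM.SYSCOPY
 WHERE DBNAME = 'MYDB'
   AND TSNAME = 'MYTS'
   AND ICTYPE = 'F'
 ORDER BY START_RBA DESC;
```

The first row returned is the most current full image copy, the starting point RECOVER would choose.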
  • All DB2 objects (table spaces and indexes) must be cataloged. If you are using user-defined objects (also known as VCAT-defined), you must allocate the data sets manually (via the MVS utility IDCAMS). If you use DB2 STOGROUPs, the allocation is performed for you by DB2. Each DASD volume with DB2 objects has a Volume Table of Contents (a VTOC) and a VSAM Volume Data Set (VVDS). There is a shared Integrated Catalog Facility data set (sometimes called a ‘User Catalog’) which points to the appropriate volume(s) for a cataloged data set. If the data set spans volumes, the ICF entry contains a list of all volumes.
  • The Boot Strap Data Set (BSDS) is fundamental to DB2 restart and recovery. There are usually two copies of the BSDS. Among other things, it contains the Active and Archive Log inventory. This inventory keeps track of which Active and Archive Log data sets contain particular log ranges. Once the recovery utility has determined what log data needs to be applied (from SYSLGRNX), the BSDS is used to determine the appropriate Log data set. There are usually at least three Active Logs defined per DB2 subsystem (there can be up to 32). The Active Logs are usually dual copied. The size of the Active Log varies with shop processing characteristics and recovery requirements; a common allocation is about 300 cylinders (3390 device). As Active Logs fill, DB2 switches to the ‘next’ Active Log and writes the full Active Log to an Archive Log data set. Active Log data sets are reused, so that when the ‘last’ Active Log is filled, DB2 switches back to the ‘first’ Active Log. Be sure the ‘first’ Active Log has completed offloading to Archive! Otherwise, DB2 will stop until the Active Log is available. Archive Logs are serial: once an Archive Log has been written (typically to cart or tape), it is not reused. The Archive Log inventory can track up to the last 1000 Archive Logs (again, usually dual copied). Naming conventions usually look like this (n = 1 or 2):
      BSDS    - Vcat.BSDS0n
      Active  - Vcat.LOGCOPYn.DSxx (from 02 to 32)
      Archive - Vcat.ARCHLOGn.Axxxxxxx (from 0000001 to 9999999)
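  • The log inventory in the BSDS can be printed with the DSNJU004 (print log map) stand-alone utility. A JCL sketch, assuming placeholder data set names:

```
//PRTMAP   EXEC PGM=DSNJU004
//STEPLIB  DD DISP=SHR,DSN=HLQ.SDSNLOAD        YOUR DB2 LOAD LIBRARY
//SYSUT1   DD DISP=SHR,DSN=VCAT.BSDS01         COPY 1 OF THE BSDS
//SYSPRINT DD SYSOUT=*
```

The report lists the active and archive log data sets with their RBA/LRSN ranges.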
  • Note: The number of archive logs that can be recorded in the BSDS is subject to a limitation set in the installation but, in any case, no greater than 1000. Some of the output from a DSNJU004:

      LTIME INDICATES LOCAL TIME, ALL OTHER TIMES ARE GMT.
      DATA SHARING MODE IS ON
      HIGHEST RBA WRITTEN    00097B660F8E  2002.029 23:51:57.8
      HIGHEST RBA OFFLOADED  00097B150FFF

      ACTIVE LOG COPY 1 DATA SETS
      START RBA/LRSN/TIME   END RBA/LRSN/TIME     DATE     LTIME  DATA SET INFORMATION
      --------------------  --------------------  -------- -----  --------------------
      00097A071000          00097B150FFF          1995.150 13:36  DSN=DSNDBE.DBE2.LOGCOPY1.DS01
      B71A19155E46          B71C66347141                          PASSWORD=(NULL) STATUS=REUSABLE
      2002.027 23:38:05.8   2002.029 19:33:46.8
      00097B151000          00097C230FFF          2001.101 12:14  DSN=DSNDBE.DBE2.LOGCOPY1.DS0
      B71C66347142          ............                          PASSWORD=(NULL) STATUS=NOTREUSABLE
      2002.029 19:33:46.8   ........ ..........
  • When spaces share a subsystem (or data sharing member) for update, they share log data sets. There is one set of log data sets per member. A commit for one transaction potentially externalizes log records for other transactions. When using RECOVER for one space, log records may be read and bypassed that are system records or log records for other spaces. Since each data sharing member has its own logs, a strategy of updating applications on different members would isolate those applications for log use.
  • A quick DB2 Log refresher course. As updates are executed on behalf of the user community, their activity is logged in the DB2 recovery log. The DB2 recovery log stores before and after images of data that is being updated, before images of data being deleted, and after images of data being inserted. These images are stored in the form of undo/redo records. The process starts by assigning a unit of recovery ID (URID). Following this, any pagesets that need to be opened are opened; this activity is also logged to the recovery log. Not only are data updates logged, but so are index modifications. Index modifications occur when new key values are inserted, old values are deleted, or RIDs are added to an existing key value list. If any index maintenance, such as a page split, is required, all the operations to accomplish it are logged as well. The bottom line is that more than one record is written to the recovery log when an update is performed to a data row. In fact, over one hundred records may be written to accomplish a single update.
  • DSN1LOGP is free when you buy DB2. It does have quite a few limitations but can be useful. Some parts of the log records are formatted and interpreted, but some are not. Some items in the unformatted portion of the output can be interpreted by reviewing the DSNDQJ00 macro provided with DB2. If you have a more sophisticated log processing tool available, you should learn how to use it and do non-destructive testing with it.
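  • A minimal DSN1LOGP job sketch, assuming placeholder data set names and an arbitrary RBA range; SUMMARY(YES) adds a summary of completed and in-flight units of recovery to the detail report:

```
//LOGPRT   EXEC PGM=DSN1LOGP
//STEPLIB  DD DISP=SHR,DSN=HLQ.SDSNLOAD
//BSDS     DD DISP=SHR,DSN=VCAT.BSDS01
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//SYSIN    DD *
 RBASTART(97A071000) RBAEND(97C230FFF) SUMMARY(YES)
/*
```

Pointing the BSDS DD at the bootstrap data set lets DSN1LOGP locate the active and archive logs covering the requested range itself.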
  • The log ranges in the SYSLGRNX table help control the log read for any given space, but ranges can be open for long periods. Ranges are kept for index spaces only if they are currently defined COPY YES. The active log data set parameters ROSWITCH CHKPTS and ROSWITCH TIME (DSNZPARM values PCLOSEN and PCLOSET) are responsible for controlling when log ranges are terminated if a space or partition becomes inactive. The end value will reflect the last time the space was updated so infrequently updated spaces will have fairly discrete log ranges. However, frequently updated spaces may have quite long ranges extending over many logs. LRSN (log record sequence number) values in the SYSLGRNX rows coordinate the logs from different members of a data sharing group. They are relative byte addresses if a system is non-data sharing.
  • The Directory table SYSLGRNG/SYSLGRNX keeps track of when an object is open for update. When a table space is first opened for Read/Write, a row is inserted into SYSLGRNG/SYSLGRNX which contains the current Log RBA (Relative Byte Address). When the table space is closed for Read/Write, the SYSLGRNG/SYSLGRNX row is updated, and the End Range is populated with the current RBA. This range of RBAs (Start RBA to End RBA) represents a part of the DB2 log where the table space MIGHT have been updated. DB2 automatically closes SYSLGRNG/SYSLGRNX entries for quiet or ‘dormant’ table spaces. DB2 does this by internally switching the table space to Read Only mode; by default, this occurs every 5 checkpoints or 10 minutes. This has the effect of minimizing log processing during recovery. (There’s no reason to mount and read hours of log data if the table space was in read-only mode.) In the event of recovery, the SYSLGRNG/SYSLGRNX information is used to help determine which Active/Archive Logs the recovery will need to process. The BSDS has Begin and End RBA ranges for each log data set.
  • SYSUTILX has to be recovered first and independently so that other RECOVERs can have information about utilities executing on spaces and so they can be terminated if necessary. The copy of SYSUTILX is recorded in the log and it has no log ranges. SYSUTILX must be recovered independently and it must examine every log record after the copy looking for updates.
  • Log records in the subsystem log are not logical in the sense that they rely on the internal location of the information (internal space IDs, page numbers and an arbitrary ID within a page). Even log records for index spaces are by page. This causes certain activities to make different ranges of log records incompatible. This means that RECOVER needs a new starting point. LOAD REPLACE and REORG can be run with LOG YES. These activities create a new starting point without a copy. (LOAD REPLACE with a DUMMY SYSREC and LOG YES is often used to conveniently reset a space.) A REBUILD of an index in COPY YES status makes RECOVER impossible until a new copy is made. (REBUILD sets an index space with COPY YES set into the advisory status ICOPY.) If a data base, table space or table is accidentally dropped, not only can you not necessarily use RECOVER (because SYSCOPY entries and SYSLGRNX entries are lost when a table space is dropped) but you can not readily apply log after restoring a space from an image copy you might locate. There are tools directed at this problem. Some may require preparation in advance.
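  • The LOAD REPLACE reset trick mentioned above looks like this: point SYSREC at a DD DUMMY data set in the utility JCL so zero rows are loaded, and the space is emptied while a new recovery starting point is logged. A control statement sketch with a hypothetical table name:

```
LOAD DATA REPLACE LOG YES
  INTO TABLE MYUSER.MYTABLE
```

Because LOG YES is specified, no image copy is required afterward to make the space recoverable from this point forward.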
  • RECOVER cannot proceed with inconsistent log records. The log apply shown on the left can be done with its corresponding copy and the log apply shown on the right can be done with its corresponding copy, but they cannot be combined. All important events for the recovery of the spaces are in the SYSCOPY table. The events are identified by the ICTYPE column; that’s how RECOVER identifies this road block.

      SYSCOPY Columns
      Column   Description
      -------  ----------------------------------------------------------
      DBNAME   Name of the database.
      TSNAME   Name of the target table space or index space.
      DSNUM    Data set number or partition number. A zero indicates all.
      (continued)
  • When recovery must be performed, there are three general possibilities. One is a hardware failure that has affected the DB2 data. As RAID features protect more and more data from the head crashes of the past (by mirroring or parity recovery schemes), disk failure is less likely to cause a problem in DB2. However, the disk enclosure and connections may fail in such a way that any data in the enclosure is potentially compromised. There is more hardware and software in the enclosure that might fail than ever before. A second possibility is that the programs and transactions acting on the data have done something wrong. This is called a logical or application failure. The DB2 subsystem generally appears to be operating normally. The third possibility for a recovery is that a disaster has effectively destroyed the primary data center.
  • With a storage failure, there are three steps: (1) find out which DB2 objects are affected (volume or storage unit); (2) identify unaffected data sets if it is a RAID unit failure; (3) prepare a RECOVER or REBUILD for every affected object to get it back to the exact desired state (often called ‘current’). Note that application spaces are not single points of failure as long as multiple copies are made and dual logs are maintained. The log supports bringing copies to current. DB2 subsystems containing important data should use dual logs, actives and archives. The two copies of the active logs and the two copies of the BSDS (boot strap data set) should be on separate packs or RAID units. If this rule is not followed and there is a problem with the storage containing the logs or BSDS, data loss may occur and a conditional restart of the system followed by recovery of all spaces to a prior time may be required.
  • It is only necessary to use RECOVER or REBUILD or some other technique for the objects that are damaged or missing as long as TOLOGPOINT (or TOCOPY) is not used. Normally you wouldn’t use these options when a media failure had occurred.
  • If DB2 thinks everything is fine, but the users don’t think so... then you have a logical problem! On rare occasions the DB2 subsystem might compromise the data. For example, there have been failures to externalize changed pages, failures to properly issue locks, etc. There is also a nasty sub-class of user error when a DBA or other authorized person drops an object such as a table, table space or data base. Usually, however, you are looking for a bad or duplicate batch job, a bad transaction or a user who has done something wrong.
  • With data corruption, you need to find a prior time when the data was valid. This may be quite subtle. Just because no users noticed a problem between opening of business and 10AM, doesn’t mean everything was fine at the beginning of the day. If the source of the corruption is not known, the symptoms may be revealing. For example, if users are complaining that a group of customers seem to have duplicate order items and orders are grouped and processed at night, it is reasonable to assume that a duplicate batch job was run last night. If users are complaining about missing data, transactions or jobs that do deletes might be examined. DSN1LOGP and more sophisticated log tools can be invaluable in researching such problems. Searching for log data on recent logs for the table spaces in question can reveal the problem.
  • In the case shown here we have discovered a batch job that was rerun creating the problem. The corruption begins a bit earlier than when it was discovered, at the start of that job. Other work was going on for the same application tables. It might have been independent of the batch job but some or all of it might have depended on the values and rows corrupted by the batch job. This is an application question. One possibility is to go back to the time after the first batch job using a point-in-time recovery (RECOVER with TOLOGPOINT on all the table spaces and COPY YES indexes and a rebuild for other indexes). It looks like from this diagram that nothing was happening for the spaces involved in our application prior to the batch job rerun. In a real situation, however, we have to discover this point. If we had a QUIESCE after every batch job, we can find the QUIESCE in the SYSCOPY table for our spaces.
  • The first SQL shown on this slide shows all QUIESCE entries after a certain date for some databases. The second gives the maximum QUIESCE point. The third determines if one of the spaces does not have an entry at the QUIESCE point. These statements could be adapted to various situations. Caution should be taken if partitioned table spaces are quiesced by PART. This is not recommended.
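  • The three statements described above might look like this; the database names, the date, and the :QPOINT host variable are placeholders:

```sql
-- 1. All QUIESCE entries after a given date for selected databases
SELECT DBNAME, TSNAME, ICDATE, ICTIME, HEX(START_RBA)
  FROM SYSIBM.SYSCOPY
 WHERE ICTYPE = 'Q'
   AND ICDATE > '020101'
   AND DBNAME IN ('MYDB1', 'MYDB2');

-- 2. The maximum (most recent) QUIESCE point
SELECT MAX(START_RBA)
  FROM SYSIBM.SYSCOPY
 WHERE ICTYPE = 'Q'
   AND DBNAME IN ('MYDB1', 'MYDB2');

-- 3. Spaces with no QUIESCE entry at that point
--    (:QPOINT holds the value returned by query 2)
SELECT T.DBNAME, T.NAME
  FROM SYSIBM.SYSTABLESPACE T
 WHERE T.DBNAME IN ('MYDB1', 'MYDB2')
   AND NOT EXISTS
       (SELECT 1 FROM SYSIBM.SYSCOPY C
         WHERE C.DBNAME = T.DBNAME
           AND C.TSNAME = T.NAME
           AND C.ICTYPE = 'Q'
           AND C.START_RBA = :QPOINT);
```

If query 3 returns any rows, those spaces were not part of the common QUIESCE and that point cannot be used for a consistent group recovery.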
  • The picture has been altered here to push the bad batch job to the left and show a (possibly more realistic) situation where online transactions are constantly running and overlapping. SQL that reverses everything done by the batch job provides several advantages. The SQL corrections could also run with other transactions. All the online transactions shown would still be reflected in the data. It would not be necessary to find a point of consistency. SQL to reverse the changes can be obtained by using a tool capable of creating UNDO SQL. (At least three vendors have one.) You could also write your own SQL. You could do this manually or using an application log and a planned procedure for processing it. It’s quite difficult to use the log as a source of manual data. For example, the log may contain incomplete updates; the log is in internal DB2 format; the log doesn’t include the key qualifiers nor column and table names; and the log data may be compressed. SQL reversing the changes must update the records with old data for updates, insert deleted rows; and delete inserted rows. To execute correctly when rows were affected multiple times, these inserts, updates and deletes should be in the reverse order from the original SQL. Cascading deletes or updates need to be accounted for as well.
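  • A tiny hand-written UNDO sketch, using a hypothetical ORDERS table. Suppose the bad batch job inserted a duplicate row and then updated a status; the UNDO statements restore the before-images in reverse order:

```sql
-- Bad batch job (in execution order):
--   INSERT INTO ORDERS VALUES (1001, 'C42', 'OPEN');
--   UPDATE ORDERS SET STATUS = 'SHIPPED' WHERE ORDER_NO = 987;

-- UNDO SQL, applied in REVERSE order with the old values:
UPDATE ORDERS SET STATUS = 'OPEN' WHERE ORDER_NO = 987;
DELETE FROM ORDERS WHERE ORDER_NO = 1001;
```

A log tool generating UNDO SQL automates exactly this, including recovering the before-image values that a manual approach would have to dig out of the log.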
  • Using SQL to undo corruption can have some striking advantages. However, there are also some difficulties. In the illustration, transactions change customer addresses using an old tape, regressing many customers’ addresses. Then good transactions switch the salesmen assigned in customer records based on zip code. Some of these transactions are incorrect because of the bad addresses. If we reverse the address changes, the salesmen assigned might still be wrong. Some tools can locate rows that had the address changes and also had salesmen changes. Here is another example of depending on incorrect data in later transactions. A corrupt transaction inserts rows for new customers using an incorrect credit evaluation. Some of the customers place orders and are given a confirmation on a credit plan based on the credit evaluation in the customer row. The order application requires the customer row and it’s already there so one is not inserted. If new customer rows are deleted, the order rows are orphaned. If the new customer rows are not deleted, they all have incorrect credit evaluations. If many rows were inserted and few orders subsequently placed by those customers, one approach is to reverse the inserts with the incorrect credit calculation except for those where the customers made orders. Some tools can create a report to help you locate those rows.
  • Columns that are updated by corrupt transactions and subsequently updated by good transactions with proper values may be incorrect if they are reversed based on the log records for the corrupt transaction. Many employee rows are updated by a corrupt transaction to zero a column that represents the 401K deduction percentage. Good transactions change some of these rows to have a new and correct value for employees that had submitted changes. For example, Jane Doe had a percentage of 10% that was then erroneously zeroed out. Subsequently, a request was processed from Jane to set it to 5%. Other employees had similarly submitted new amounts. If the log records for all the updates setting the values to zero are examined in isolation, Jane might have her percentage set to 10% again. So the good transaction for her request is effectively reversed. Some products can produce reports to point out these anomalies or have techniques to help avoid them. In this case, using point-in-time recovery on the application spaces to a point before the bad transaction and then ‘replaying’ the good transactions would work. Some products can create REDO SQL for this purpose. Columns that are accumulated can cause complex problems as in this example. A corrupt transaction adds one to a column that counts the number of requests that a customer will be charged for in accessing a legal database. The column initially held a value of 110 so after the corrupt transaction it is 111. Subsequently a good transaction adds two to the column making it 113. The column value should be the original number in the column (before the corrupt transaction) plus two or 112. If the update by the corrupt transaction is reversed in isolation and the original value returned, the value will be 110. For this reason, doing a traditional point-in-time recovery in cases like this and then rerunning the application to process the increments may be better practice. 
    Some tools may be able to locate rows that have this type of anomaly, allowing a manual update for them and reversal for others.
  • If users may be reinserting accidentally deleted data as in the first bullet point, a log tool that bypasses -803 errors is useful. Tools can help but you must know your data. If you are not the application specialist you must engage the specialist to help avoid compounding logical errors.
  • Image copies are made with utilities, primarily COPY but also with other utilities (for example, LOAD and REORG). They are recorded in SYSCOPY, a DB2 catalog table. They contain logical pages from the space. There are also utilities to make new copies from multiple incrementals, or from a full copy and subsequent incrementals, and functions to combine copies and logs to make a more up-to-date copy. For example, IBM’s MERGECOPY combines copies to create a new copy, and the DB2 Change Accumulation Tool uses copies and logs to create a new copy. One image copy (a separate sequential file) is for a data set, partition or the entire space. The most you can put in one image copy is one space (all partitions or data sets). You cannot have one sequential data set for all the spaces in a data base, for example. Nor can you copy a single table from a table space with multiple tables. SYSCOPY information is deleted when a space is DROPPED, and some rows are cleaned up when you run the MODIFY utility. There are a few examples of copies that are not simple sequential data sets: some utilities support hardware-assisted backups and record them for a special restore at RECOVER time. An example of this with the IBM COPY offering is the CONCURRENT COPY using the features of some 3990 controllers.
  • IBM RECOVER uses the backup copy if the primary copy is unavailable. LOCAL or RECOVERY copy is selected based on the SITETYP ZPARM or can be overridden with a LOCALSITE or RECOVERYSITE option on the RECOVER. Other vendors may allow different types or syntax for selection from these identical copies. Multiple identical copies are usually made simultaneously during the COPY process. However, there are utilities that make and register additional copies later by copying the copy. IBM has a separate utility for this, COPYTOCOPY starting in V7. The syntax for making a full or incremental copy is the FULL YES/NO option. FULL NO (incremental) may take less time to produce and may take up less space in the sequential data set. Incrementals must be used with a FULL copy. If they can’t be used, the log after the last usable copy can replace them. Therefore you may not wish to make backups of incrementals. Incremental copies are more efficient if the table space has the TRACKMOD YES option set. This option causes an area in the space maps to be maintained that shows the pages updated since the last full or incremental copy. This option will cause updates to the space maps and logging of these updates. If no incremental copies are planned, it is more efficient during updating to have TRACKMOD NO. You cannot make incremental copies of index spaces with IBM COPY. Incremental copies are a way to limit the log processed by RECOVER without having to make a full copy but they require a more complex RECOVER and, if indexes are being copied and updated (e.g. INSERTs are occurring) then the incrementals may not substantially reduce RECOVER time.
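  • A COPY control statement sketch showing the options discussed; the space name and DD names are placeholders. This one takes an incremental (FULL NO) SHRLEVEL CHANGE copy and writes four identical copies in one pass:

```
COPY TABLESPACE MYDB.MYTS
     FULL NO
     SHRLEVEL CHANGE
     COPYDDN(LOCALP, LOCALB)
     RECOVERYDDN(RECOVP, RECOVB)
```

The COPYDDN pair registers as the LOCALSITE primary and backup copies, and the RECOVERYDDN pair as the RECOVERYSITE copies, matching the ICBACKUP values in SYSCOPY.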
  • Every expert should be confident that SHRLEVEL CHANGE copies work, and know why. A SHRLEVEL CHANGE copy is registered (in SYSCOPY) at a point before any updates that might be missing from it. There is no registration of the log point of the end of the copy. The COPY utilities guarantee that every log record before the registration point is reflected in the pages on the copy. Updates made during the copy may or may not be on the copy. In this case, Page 2 is copied without the update that occurred during the copy process. Page FFF2 is copied after the update that occurred during the copy process and contains it. During a RECOVER utility execution using the copy in this illustration, the pages from the copy and the log records representing the two updates are matched. Page 2 will not have the update shown on the copy. (It was copied before the update.) There is a log point that is on every logical page in the table space and that remains on the copied pages. This value is called PGLOGRBA in the Diagnosis Guide; it is a log record sequence number. The log point of every update is compared to the log point in the page before the update is applied. The log point of page 2 on the copy will be less than the log point of the update shown, so the update will be applied. The log point in page FFF2 is equal to the log point of the update shown for it, so the RECOVER utility does not attempt to apply that update.
  • Relying completely on data set dumps is not a wise idea. Important data should be image copied on some schedule, and image copies should be made after LOG NO activities, particularly REORG. (If LOAD REPLACE is executed, the LOAD file may be preserved to reload, but saving and using this data requires a special process for that space and you will not be able to apply log to the reloaded space with RECOVER.) A complete set of system data (logs, bootstrap data sets and all spaces including the catalog and directory) can be copied after a subsystem is taken down cleanly, or while a SET LOG SUSPEND is active, using special fast hardware copies. These data sets can be restored and used to restart the entire subsystem at that point. Using SET LOG SUSPEND is the subject of past conference presentations. All the data is made consistent at RESTART in much the same fashion as after a power failure. Procedures have also been developed to attempt a restart with more up-to-date logs and BSDS and to bring the system to a point further than the state of committed transactions at the time of the dumps. This involves using DEFER on all the spaces during restart and using LOGONLY recovery. There are a number of issues to manage. If application spaces are copied outside of DB2 while DB2 is down and it was shut down cleanly, these copies can be restored and LOGONLY RECOVER used on them. If a single data set is copied outside DB2 while the space is active, you might be able to do the same, but this is not recommended by IBM.
  • SYSCOPY Columns, cont.
      Column     Description
      ---------  ----------------------------------------------------------
      ICTYPE     Type of operation:
                   A - ALTER
                   B - REBUILD INDEX (recorded for COPY YES index space)
                   D - CHECK DATA LOG(NO)
                   F - COPY FULL YES
                   I - COPY FULL NO
                   P - RECOVER TOCOPY or RECOVER TORBA or TOLOGPOINT;
                       PIT_RBA reflects the log point of the copy or the
                       log point specified.
                   Q - QUIESCE
                   R - LOAD REPLACE LOG(YES)
                   S - LOAD REPLACE LOG(NO)
                   T - TERM UTILITY command (terminated utility)
                   W - REORG LOG(NO)
                   X - REORG LOG(YES)
                   Y - LOAD RESUME LOG(NO)
                   Z - LOAD RESUME LOG(YES)
      ICDATE     YYMMDD
      START_RBA  This is really an LRSN and orders the events in time.
      …
      DSNAME     Copy data set name if ICTYPE is ‘F’ or ‘I’. If ICTYPE is ‘P’
                 and the RECOVER was TOCOPY, contains the copy data set name
                 used for TOCOPY. Otherwise, dbname.spacename if created
                 after V4.
  • The report was obtained by running the following SQL:

      SELECT DBNAME, TSNAME, ICDATE, ICTIME, ICTYPE, HEX(START_RBA)
        FROM SYSIBM.SYSCOPY
       ORDER BY DBNAME, TSNAME

    ICTYPE has many different possible values; those are listed below:
      F - full image copy
      I - incremental image copy
      P - partial recovery (point in time)
      Q - QUIESCE point
      R - LOAD REPLACE LOG(YES)
      S - LOAD REPLACE LOG(NO)
      W - REORG LOG(NO)
      X - REORG LOG(YES)
      Y - LOAD RESUME LOG(NO)
      Z - LOAD RESUME LOG(YES)
      T - terminated
  • The SYSCOPY entries for these spaces can still be found with REPORT RECOVERY and can be printed from the log with DSN1LOGP and the SYSCOPY option.
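  • REPORT RECOVERY gathers this information for one space in a single step. A control statement sketch with a placeholder space name:

```
REPORT RECOVERY TABLESPACE MYDB.MYTS
```

The output lists the SYSCOPY entries, SYSLGRNX log ranges, and the archive log data sets a recovery of the space would need.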
  • SYSCOPY Columns, cont.
      Column     Description
      ---------  ----------------------------------------------------------
      ICTIME     HHMMSS
      SHRLEVEL   C - entry is a copy and was SHRLEVEL CHANGE
                 R - entry is a copy and was SHRLEVEL REFERENCE
                 blank - not a copy
      …
      TIMESTAMP  Same value as represented in ICDATE and ICTIME; IBM
                 recommends using this one.
      ICBACKUP   When multiple copies are made, represents which one the
                 entry is for:
                 blank - LOCALSITE primary copy
                 LB - LOCALSITE backup copy
                 RP - RECOVERYSITE primary copy
                 RB - RECOVERYSITE backup copy
      …
      STYPE      Additional information about the event. For example, on a
                 QUIESCE row (ICTYPE = ‘Q’) a ‘W’ in this column indicates
                 QUIESCE WRITE(YES). For partial recovery (ICTYPE = ‘P’) an
                 ‘L’ indicates LOGONLY.
      PIT_RBA    An LRSN that is the log point of a TOLOGPOINT or TORBA
                 recovery, or the log point of a copy used in TOCOPY.
      …
  • The RECOVER utility can seem to have a daunting set of options but at its simplest it restores copies and applies logs from the point of the copy if required. The copies are registered with log points or LRSNs. If TOCOPY is not specified, logs are applied from the log point of the last copy. (If incrementals are involved, the full and incremental copies are merged.) We’ve already seen why SHRLEVEL CHANGE copies work with log records. When logs are applied, all log records in the subsystem log(s) after the copy that are indicated by directory table SYSLGRNX ranges are applied with two exceptions. You can request that RECOVER stop at a particular log record. This is usually done to get back to a point in time when the space wasn’t logically corrupted. This is done with the TOLOGPOINT option. When you do a TOCOPY or TOLOGPOINT RECOVER and later, before you have done another COPY, you do another RECOVER (either to current or to another point after you executed the first RECOVER), RECOVER skips the group of log records eliminated by the prior recovery. (This is shown in the following illustrations.) It is theoretically possible to execute a RECOVER … TOLOGPOINT operation using the (undamaged) space and processing the log backward to the TOLOGPOINT, avoiding image copies. IBM’s RECOVER does not have this function.
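  • At its simplest, a point-in-time recovery plus index rebuild might look like this; the space name and log point are placeholders:

```
RECOVER TABLESPACE MYDB.MYTS
        TOLOGPOINT X'00097B151000'

REBUILD INDEX (ALL) TABLESPACE MYDB.MYTS
```

COPY YES indexes could instead be recovered with TOLOGPOINT alongside the table space; indexes without copies must be rebuilt from the recovered data.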
  • This is an example of the events for a space: a full image copy, a DFDSS dump taken when DB2 was stopped and a RECOVER …TOLOGPOINT. This is used in the following slides’ illustrations. We return to this set of events for each example and do a different recovery.
  • The LOGONLY option looks in the data set header page for the HPGRBRBA (recovery base RBA value). If the DFDSS dump was restored before the RECOVER execution, this log point will be consistent with a time less than or equal to the pack dump time. Efforts have been made to update HPGRBRBA more often in recent releases. This is the starting point for the log apply. HPGRBRBA is really an LRSN so in data sharing it will not be an RBA. Log records may be skipped (even though there are SYSLGRNX entries for them) if a RECOVER … TOLOGPOINT has been executed before as in this example. Since this RECOVER did not specify a TOLOGPOINT, it uses all the log records found until the end of the logs, using SYSLGRNX as a guide but skipping the area between PIT_RBA and START_RBA on the RECOVER SYSCOPY entry for the prior RECOVER. Note: A RECOVER, even TOLOGPOINT, sets HPGRBRBA to the current log point for the subsystem or data sharing group. LOGONLY cannot be used to continue a prior TOLOGPOINT recovery and bring it to a more current point.
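  • A LOGONLY sketch: restore the DFDSS dump outside of DB2 first, then run the following (the space name is a placeholder), and RECOVER applies log forward from the HPGRBRBA found in the restored header page:

```
RECOVER TABLESPACE MYDB.MYTS LOGONLY
```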
  • In this example, we had the same starting situation (full copy, dump, prior RECOVER). But we used a LOGPOINT that was lower than the START_RBA (an LRSN naturally!) for the prior RECOVER. RECOVER finds the image copy using the information in SYSCOPY (including the START_RBA of the image copy which tells us where to start applying log records). The image copy pages are restored and log applied. If TOLOGPOINT is used as in this example, log apply stops at the point specified. Notice that in this case the point is prior to the point where the previous RECOVER ran. This RECOVER is ignored. Log records previously skipped are used to get to the indicated LOGPOINT.
  • If LOGONLY is not specified, registered SYSCOPY events are analyzed; in this case the full copy is found. If TOLOGPOINT is used, log apply stops at the point specified. Notice that in this case the point is after the point where the previous RECOVER ran. That RECOVER is acknowledged by skipping log records. This is reasonable because the skipped log records were written against a version of the data that the prior point-in-time recovery eliminated.
  • Disaster Recovery support is always a balancing act between cost and complexity vs. data loss and recovery time. Some shops opt for the very simple (and efficient) approach of dumping all DASD volumes periodically (typically weekly). At the remote site, a full set of volume restores gets the entire data center ‘back’ to the same point in time. Unfortunately, executing the dumps requires an outage, and restoring at the remote site means losing data. (What if the disaster occurs on Friday and your most recent dump was taken last Sunday?) Some companies can’t accept the outage for dumps, or the loss of data. There are techniques that allow for capturing DB2 log data and transmitting it to a remote site. The log data can even be applied at the remote site, so the business is essentially shadowed. In the event of a disaster, this method ensures very little data loss (just the inflight transactions) and minimal recovery time. However, it is incredibly complicated and expensive. The compromise in most shops is to send a copy of all application and DB2 system object copies, along with copies of the DB2 log, to a remote vault. In the event of a disaster, recovery is to the end of the ‘last’ offsite Archive Log, which typically is from ‘last night’. This technique is documented in the DB2 System Administration guide; it requires a complex set of procedures to support it.
  • DB2 z/OS Recovery Overview

    1. DB2 Recovery 101 Overview. Bill Arledge, DB2 Data Management Strategist, Mainframe DB Recovery
    2. Availability is critical …
       - Lost revenue…
         - Up to $6 million/hour for e-businesses
         - 2-3% of annual revenue for every 10 hours of database outage
       - Lost customers…
         - Nearly 14 times your initial investment to win customers back
       - Lost market share…
         - Approximately 1/2 point of market share for every 8 hours of outage (it will take an estimated 3 years to win customers back)
    4. When Availability is Critical, Recovery is Crucial!
       - Unplanned downtime is an unfortunate fact of life…
       - Up to 80% of all unplanned downtime is caused by software or human error*
       - Up to 70% of recovery is “think time”!
       *Source: Gartner, “Aftermath: Disaster Recovery”, Vic Wheatman, September 21, 2001
    5. DB2 Recovery Resource Review
       - Large lines = user data flow
         - Updates to the tablespace are logged and copied
       - Small lines = DB2 info flow
         - Copies and logs are registered
         - Log range for tablespace recovery is tracked
       - MVS ICF Catalog
         - Watches over all
       (Diagram: TABLESPACE, Active Log, Archive Logs, Full Copy or Incremental Copies, with SYSLGRNX, SYSCOPY, BSDS, and the ICF Catalog tracking them)
    6. ICF Catalog
       - All DB2 pagesets must be cataloged
         - Cataloging updates three MVS system files:
           - VTOC - Volume Table of Contents
           - VVDS - VSAM Volume Data Sets
           - ICF - Integrated Catalog Facility
       Example:
         CREATE TABLESPACE TS000001 IN DP000001 USING VCAT DB2PVCAT ...
       ICF Catalog entries (plus VTOC, VVDS):
         DB2PVCAT.DSNDBC.DP000001.TS000001.I0001.A001
         DB2PVCAT.DSNDBD.DP000001.TS000001.I0001.A001
    7. Boot Strap Data Set (BSDS)
       - Active and Archive Log inventories
       - The Active Log is reused; the Archive Log is serial, up to 10000 (V8 and later)
         - Current Active Log is DS01
         - Next Archive Log will be A004
       (BSDS inventory, with Active Logs DS01-DS03 and Archive Logs A001-A003:)
       ACTIVE   LOG    RBA START   RBA END   STATUS
                DS01   40000       4FFFF     Not Reusable
                DS02   20000       2FFFF     Reusable
                DS03   30000       3FFFF     Reusable
       ARCHIVE  LOG    RBA START   RBA END   VOLSER
                A001   10000       1FFFF     VOL=T00001
                A002   20000       2FFFF     VOL=T00002
                A003   30000       3FFFF     VOL=T00003
    8. BSDS and Relationship to the LOG
       (Diagram: the BSDS and the DB2 log, with the BSDS holding:)
       - Checkpoint queue
       - Incomplete UR summary record
       - Open pageset summary record
       - Database and pageset exceptions summary record
       - Active and Archive Log data set information
       - DDF communication record
    9. Print Log Map (DSNJU004)
       - DSNJU004 (Print Log Map) provides:
         - Active log data set information
         - Archive log data set information
         - System checkpoints, driven by:
           - the LOGLOAD ZPARM
           - active log switch
       (Report shows both local time and GMT)
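The print log map report is produced by a small batch job; a minimal sketch, in which the load library and BSDS data set names are illustrative, not from the slides:

```
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD     assumed load library
//SYSUT1   DD DISP=SHR,DSN=DB2P.BSDS01         assumed BSDS data set
//SYSPRINT DD SYSOUT=*
```

SYSUT1 points at the BSDS being mapped; the report lands in SYSPRINT.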
    10. DB2 Logs - A Shared Resource
       - Primary usage is to provide for restart and recovery of DB2 subsystems and objects
         - Other uses include audit and data migration
       - Log records:
         - Record updates for all spaces with before and/or after images (data and index pages)
         - Record DDL operations (updates to catalog pages)
         - Capture checkpoint and restart information, including transaction info and object exception information
         - Are identified by a log point, displayed as a twelve-digit hexadecimal number:
           - RBAs (Relative Byte Addresses) for non-data sharing
           - LRSNs (Log Record Sequence Numbers) for data sharing
         - Include the page (and row, if applicable) being impacted
    11. DB2 Log Data Flow
       - Log records are built in the log buffers and assigned an RBA (Relative Byte Address)
       - Buffers are written to the Active Log
       - A full Active Log is written to the Archive Log (LOGSWITCH)
       (Diagram: INSERT ONE ROW flows through the DB2 Recovery Log Manager as a BEGIN_UR record, UNDO/REDO data, UNDO/REDO index, commit records, and an END_UR record; checkpoint records include the open pagesets. Example RBAs 1000 and 2000 mark points on the log.)
    12. DB2 Log Data Breakdown
       - Index: 50% (UNDO/REDO)
       - Data: 20% (UNDO/REDO)
       - Other: 15%
       - Checkpoint: 10%
       - Commit: 5%
    13. DB2 Log Archival Process
       - Active log offload is triggered by several events, including:
         - An active log data set is full
         - DB2 is started and an active log data set is full
         - The ARCHIVE LOG command
       - Two uncommon events also trigger the offload:
         - An error occurring while writing to an active log data set
         - Filling of the last un-archived active log data set
       (Offload process: triggering event, write the archive, update the BSDS, continue writing to the active log)
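Offload can be forced from the console with the ARCHIVE LOG command; a sketch, where -DB2P is an assumed subsystem command prefix:

```
-DB2P ARCHIVE LOG
-DB2P ARCHIVE LOG MODE(QUIESCE) TIME(60)
```

The MODE(QUIESCE) form attempts to reach a consistent point (here waiting up to an illustrative 60 seconds) before truncating and offloading the active log, which is one way to create quiet points for recovery.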
    14. DSN1LOGP - Looking at Log Records
       - DSN1LOGP is a standalone utility available with DB2
       - IBM does not set out to document everything you see in a detail report
       - You can still get lots of information, but not easily
       - Most recovery experts would find a more sophisticated log tool handy to:
         - Re-create SQL from the log
         - Report and filter on transaction and column data more effectively
         - Handle compression and other issues
    15. DSN1LOGP - Usage Examples
       - Who updated that table that's not supposed to be updated?
       - Who DROPPED that DATABASE?
       - Did BADUSER update anything?
       - Are GRANTs and REVOKEs being done outside of our control?
       - Who FREEd that PLAN or PACKAGE?
       - Are SAVEPOINTs being executed on this subsystem?
       - Can I find a common quiet point for two tablespaces for recovery?
       - Sample summary log record for a Unit of Recovery (UOR):

       DSN1151I DSN1LPRT UR CONNID=TSO CORRID=BADUSER AUTHID=BADUSER PLAN=DSNESPCS
                START DATE=05.059 TIME=14:29:20 DISP=COMMITTED INFO=COMPLETE
                STARTRBA=024AC666C4C7 ENDRBA=024AC666C76B
                STARTLRSN=BCA426C98AED ENDLRSN=BCA426C98C3E NID=*
                LUWID=USBMCN01.DEBALU.BCA4267B984C.0001 COORDINATOR=* PARTICIPANTS=*
                DATA MODIFIED: DATABASE=0600=KMMSEGDB PAGE SET=0002=KMMSEGS
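A summary report like the one above might be requested with a job such as this sketch; the data set names and RBA range are illustrative:

```
//LOGPRT   EXEC PGM=DSN1LOGP
//STEPLIB  DD DISP=SHR,DSN=DSNA10.SDSNLOAD     assumed load library
//BSDS     DD DISP=SHR,DSN=DB2P.BSDS01         assumed BSDS; locates the logs
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//SYSIN    DD *
  RBASTART (024AC6000000)  RBAEND (024AC7000000)
  SUMMARY (ONLY)
/*
```

Pointing DSN1LOGP at the BSDS lets it find the right active and archive log data sets for the requested RBA range; SUMMARY(ONLY) produces just the UR summary records.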
    16. Identifying Relevant Log
       - The SYSLGRNX directory table records log ranges containing updates to a space (or partition)
         - There are entries for each data sharing member doing updates
           - Entries give the location range on the logs (relative byte address, RBA)
           - and the relative time range (log record sequence number, LRSN) to coordinate with copies and other logs
       - SYSLGRNX output is provided by the REPORT RECOVERY utility
         - Identifies assets required for recovery
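REPORT RECOVERY pulls the SYSCOPY, SYSLGRNX, and BSDS information for an object together in one report; a minimal control statement, using the tablespace name from the earlier ICF Catalog example as an illustration:

```
REPORT RECOVERY TABLESPACE DP000001.TS000001
```

The output lists the image copies, log ranges, and archive log data sets that would be needed to recover the object.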
    17. DB2 Directory Table
       - SYSLGRNX
         - Records open update log ranges on the DB2 Log
         - Provides for faster recovery
       (Diagram: open log ranges A-B, C-D, and E-F on the log, with a COPY, a QUIESCE quiet point, and the current end of log. SYSLGRNX entries:)
       DBID   PSID   Start Range   End Range
       0105   000F   A             B
       0105   000F   C             D
       0105   000F   E             F
    18. Spaces Absent in SYSLGRNX
       - Some catalog and directory spaces don't have SYSLGRNX entries:
         - DSNDB01.SYSUTILX
         - DSNDB01.DBD01
         - DSNDB01.SYSLGRNX
         - DSNDB06.SYSCOPY
         - DSNDB06.SYSGROUP
         - DSNDB01.SCT02
         - DSNDB01.SPT01
    19. Log Compatibility
       - Log records reference pages and rows in spaces, and spaces are identified by internal IDs, so:
         - Certain activities make one series of log records incompatible with others and require a new copy or starting point
         - LOAD REPLACE completely resets the data; REORG and REBUILD change row and key entry locations
         - Certain DROPs can be disastrous
    20. Log Roadblocks
       (Diagram: log apply from a COPY proceeds through "insert row, page 1F2, row 7"; then a REORG is executed and the row is now on page E2, row 1. Log apply for "update row, page E2, row 1" must start from a new COPY taken after the REORG.)
    21. Problem Categories
       - An expert categorizes failures and plans and performs accordingly. There are three common possibilities:
         - A media failure destroys or compromises data (a disk, controller, or cache failure occurs)
         - Data becomes logically compromised by an incorrect job or transaction
         - The data center is unusable (aka disaster)
    22. Media Failure
       - A media failure destroys or compromises data
         - Identify volume contents and RECOVER or REBUILD for traditional DASD
         - Identify objects affected by the storage component and RECOVER or REBUILD, bearing in mind that some may not be affected because they weren't recently updated
    23. Pop Quiz - Index Recovery
       Does a table space recovery mean indexes must be rebuilt (or recovered)?
       Maybe not. Recovery to current (for a media failure) would not require it. The objects are being recovered to overcome a media failure; related objects should still be consistent if they were unaffected by it.
    24. Logical Data Corruption
       - Data has become logically compromised by an incorrect job or transaction
         - An expert finds the cause of the problem
         - An expert knows the possible tools to use, whether:
           - a set of RECOVER and REBUILD statements, or
           - a special program, or
           - a special log tool,
           - or some combination of the above
    25. Finding the Corruption Point
       Look for a place where everyone agrees data wasn't corrupted. Get as close to now as possible!
       (Diagram: Point A on the DB2 log, where application data is fine; Point B, where application data is corrupted; what happened in between?)
    26. Consistency Point
       If RECOVER must be used, a point of consistency across the affected table spaces must be located, and any good updates after that point will be lost.
       (Diagram: batch jobs and online transactions interleaved on the DB2 log; application data is fine until a batch job rerun corrupts it)
    27. Looking for QUIESCE Points
       - Official points are recorded in SYSCOPY, generally as a result of execution of the QUIESCE utility
       - Useful queries for evaluating available quiesce (quiet) points on the DB2 log:

       -- Identifies all quiesce points after the specified ICDATE
       SELECT DBNAME, TSNAME, ICDATE, ICTIME, HEX(START_RBA)
         FROM SYSIBM.SYSCOPY
        WHERE DBNAME IN ('LSBX', 'LSBQ')
          AND ICTYPE = 'Q'
          AND ICDATE > '020124'
        ORDER BY ICDATE, ICTIME;

       -- Identifies the latest quiesce point for a set of objects
       SELECT HEX(MAX(START_RBA))
         FROM SYSIBM.SYSCOPY
        WHERE DBNAME IN ('LSBX', 'LSBQ')
          AND ICTYPE = 'Q';

       -- Identifies related objects with no entry at the latest quiesce point
       SELECT DBNAME, NAME
         FROM SYSIBM.SYSTABLESPACE
        WHERE DBNAME IN ('LSBX', 'LSBQ')
          AND (DBNAME, NAME) NOT IN
              (SELECT DBNAME, TSNAME
                 FROM SYSIBM.SYSCOPY
                WHERE START_RBA =
                      (SELECT MAX(START_RBA)
                         FROM SYSIBM.SYSCOPY
                        WHERE DBNAME IN ('LSBX', 'LSBQ')
                          AND ICTYPE = 'Q'));
    28. Recovering from Logical Errors Using SQL Processes
       If the result of the batch job was undone with SQL (INSERTs and DELETEs that reverse it), then the online transactions might be preserved, and it would not be necessary to find a point of consistency for RECOVER.
       (Diagram: online transactions continue on the DB2 log while the corrupting batch job is reversed with SQL and then rerun)
    29. Caveats for an SQL Approach
       - Using SQL to correct logical errors has some possible pitfalls:
         - The transactions being preserved may have depended on the incorrect data
       (Example: a transaction changes addresses incorrectly (bad!); a later, preserved transaction changes salesmen based on those addresses (looks good, but isn't))
    30. Caveats for an SQL Approach
       - Using SQL to correct logical errors has some possible pitfalls:
         - Transactions that ran during or after the corruption may have also updated the same column in some of the corrupted rows
       (Example: many employees' 401K deductions are incorrectly set to zero (bad!); employee requests to set new percentages are then processed (good!) and must not be overwritten by the correction)
    31. Caveats for an SQL Approach
       - Allowing access to application spaces while they are corrupted may cause problems, as in these examples:
         - If a group of customers were accidentally deleted from your database, allowing salesmen to continue placing orders might cause them to recreate customer rows because they don't see the existing ones. When an insert is later attempted to correct the delete, it will likely receive a -803 or create an improper duplicate customer record
         - Corrupted customer shipping addresses might cause labels to be printed (a read-only process) and packages to be misdirected
    32. Image Copy
       - An image copy is a sequential data set
         - Contains page images from the tablespace or indexspace
         - Represents at least one data set of a space and at most a complete space (all data sets or partitions)
       - Image copies:
         - Can be made while changes are taking place (SHRLEVEL CHANGE), or
         - Can be made allowing only reads, so they are consistent (SHRLEVEL REFERENCE)
         - Are registered in SYSCOPY, which is accessible via SQL SELECT
         - Are identified by REPORT RECOVERY as the copies required for recovery
           - There is no guarantee that a copy registered in SYSCOPY has not been deleted or uncataloged
         - Can be used to UNLOAD data
         - Are deleted by:
           - DROP DDL against the space
           - Potentially by the MODIFY utility
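A full, consistent copy of the example tablespace might be taken with a COPY statement like this sketch; the object name and DD name are illustrative:

```
COPY TABLESPACE DP000001.TS000001
     COPYDDN (SYSCOPY)
     FULL YES
     SHRLEVEL REFERENCE
```

SHRLEVEL REFERENCE allows only readers while the copy runs, so the resulting copy is consistent on its own; SHRLEVEL CHANGE would allow updates but require log apply to make the restored pages consistent.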
    33. Image Copy Types
       - Multiple identical image copies (up to four) may be made. They are identified as:
         - Primary or Backup, and
         - Local or Recovery site
       - Image copies may be made with only changed pages. These are incremental image copies.
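An incremental copy, and a later merge of incrementals back into a new full copy before a recovery is required, could look like this sketch (object name illustrative):

```
-- copy only the pages changed since the last copy
COPY TABLESPACE DP000001.TS000001
     FULL NO
     SHRLEVEL CHANGE

-- merge the full copy and incrementals into a new full copy
MERGECOPY TABLESPACE DP000001.TS000001
     NEWCOPY YES
```

Merging ahead of time means RECOVER can restore a single full copy instead of a full copy plus a chain of incrementals.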
    34. SHRLEVEL CHANGE Copies
       (Diagram, steps 1-5: the copy begins at page 2; pages 0, 1, and FFF2 are updated while the copy proceeds; pages 2 through FFF0 are copied; page FFF2 is updated again and then copied; the copy ends. The pages in the copy therefore reflect different points in time, which is why DB2 log records are needed to make a SHRLEVEL CHANGE copy consistent.)
    35. External Copies Unknown to DB2
       - Data set dumps made by DFDSS, DSN1COPY, or other mechanisms
         - Aren't registered in SYSCOPY, but may be used by:
           - Restoring known copies that are consistent because the space was stopped or DB2 was cleanly stopped
           - Restoring a complete set of system data that was either
             - 'flash copied' or 'snapped' between the SET LOG SUSPEND and SET LOG RESUME commands, or
             - made while DB2 was down after it was taken down cleanly,
             - and then restarting DB2
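A volume-level 'snap' consistent with DB2 can be bracketed by the log suspend commands; a sketch, where -DB2P is an assumed command prefix and the copy step depends on your storage hardware:

```
-DB2P SET LOG SUSPEND     -- logging (and so updates) are held
  *  take the FlashCopy/SnapShot of all DB2 volumes here
-DB2P SET LOG RESUME      -- normal logging resumes
```

Because no log records are externalized while logging is suspended, the snapped volumes represent a restartable point for the whole subsystem.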
    37. SYSCOPY Example
       - DB2 Catalog Table SYSIBM.SYSCOPY
         - Backup and recovery point information

       ICTYPE   Description
       F        Full image copy
       I        Incremental image copy
       Q        Quiesce
       X        REORG LOG(YES)

       SHRLEVEL   Description
       R          Reference
       C          Change
    38. Spaces Not Recorded in SYSCOPY
       - Three catalog and directory spaces do not have entries in the SYSCOPY table:
         - DSNDB01.DBD01
         - DSNDB01.SYSUTILX
         - DSNDB06.SYSCOPY
       - Information on copies for these spaces resides in the DB2 log
    39. Pop Quiz - Avoid Copies?
       Can you keep all logs and never make copies?
       PROBABLY NOT. Not if REORG or LOAD are used with LOG NO.
    41. RECOVER Flavors
       - RECOVER can use all the log records to the end of the subsystem log(s), or
       - RECOVER can be instructed to stop at a particular log point
       - RECOVER usually starts by restoring image copies, except in the rare cases where everything is on the log
       - RECOVER has a LOGONLY feature that assumes the space was restored outside its control
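The flavors above map to RECOVER statements like these sketches; the object name and log point are illustrative:

```
-- to current: restore the best copy, then apply all remaining log
RECOVER TABLESPACE DP000001.TS000001

-- to a prior point in time: stop log apply at the given log point
RECOVER TABLESPACE DP000001.TS000001
        TOLOGPOINT X'024AC6000000'

-- space already restored outside DB2 (e.g. a DFDSS restore):
-- apply log only, starting from the HPGRBRBA in the header page
RECOVER TABLESPACE DP000001.TS000001
        LOGONLY
```

These three forms correspond to the recovery examples illustrated on the next slides.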
    42. Recovery Example - RECOVER TOLOGPOINT
       (Diagram timeline: full copy, DFDSS dump, RECOVER TOLOGPOINT X'-----------')
    43. Recovery Example - RECOVER LOGONLY
       (Diagram timeline: full copy, DFDSS dump, RECOVER TOLOGPOINT X'-----------', then RECOVER ... LOGONLY, which finds the HPGRBRBA in the data set)
    44. Recovery Example - RECOVER TOLOGPOINT
       (Diagram timeline: full copy, DFDSS dump, prior RECOVER TOLOGPOINT X'-----------', then RECOVER ... TOLOGPOINT, which uses the full copy and applies log to the log point)
    45. RECOVER Skips Range
       (Diagram timeline: full copy, DFDSS dump, prior RECOVER TOLOGPOINT X'-----------', then RECOVER ... TOLOGPOINT, which uses the full copy and applies log to the log point, skipping the range eliminated by the prior recovery)
    46. Disaster Recovery
       - Options range from weekly dumps to offsite logging
         - Dumps: simple and cheap, but maximum data loss
           - Weekly dumps mean several days of data loss
         - Offsite logs: complex and expensive, but no data loss
           - Applying log data to a shadow increases expense
         - Compromise: periodic vaulting of copies and logs
           - Daily or hourly log shipment will minimize data loss
       - A good topic for a future presentation
       (Trade-offs: cost, complexity, data loss, outage time)
    47. Expert Summary
       - Know the basics and don't be caught by the myths
       - Know the assets you are trying to protect
       - Know what you have to protect them with
       - Plan for each type of failure and practice if you can
       Questions?