Rick Weaver, BMC Software
Audit Your SOX Off, and other uses of a DB2 Log Tool
Speaker notes for "Audit your SOX off..."
  • We will discuss methods of using a DB2 log processing tool to accomplish the auditing requirements mandated in your environment. Discussion and examples will be included to show how to monitor power users (e.g. SYSADMs), track changes in authorizations and underlying structures, look for updates to sensitive columns, and more. The additional uses of a DB2 log tool will also be covered: transaction-level recovery, data migration, ad hoc and regular reporting, and problem diagnosis. This presentation is largely vendor-neutral, but BMC Log Master for DB2 is used in the examples.
  • From the Abstract:
      Objective 1: Learn about log processing concepts with a brief overview.
      Objective 2: Discussion and examples of audit compliance reporting.
      Objective 3: Examples of how log-based reporting can be used for problem diagnosis.
      Objective 4: Transaction-level recovery using a log tool.
      Objective 5: Migration to a data warehouse, test system, or wherever you need.
  • Combining multiple records comes in two flavors… The first: a log record can span multiple pages. The log contains 4K pages, so a 32K insert or delete can span more than 10 log pages; an update can be even larger. The second: an update to a row which had overflowed can appear as two or three separate log records. The first time a row overflows, it shows up as an INSERT into the overflow page and an UPDATE in the home page changing the full row into a pointer to the new location. If it overflows again, we see an INSERT into the new page, a DELETE from the original overflow page, and an UPDATE to the pointer record to reference the new location. Rolled back activity includes explicit ROLLBACK of units of recovery, ROLLBACK TO SAVEPOINT, and compensation caused by negative SQL return codes, which can roll back the entire unit or only a single insert/update/delete statement. ONGOING processing: basically, the products keep track of where the last job ended and any units of recovery which were open at that point, so they can pick up where they left off in the previous run. Log Master also has a concept of Dependent RBA, but that's an in-depth discussion for a detailed presentation. For reusable inputs, Log Master generates a Logical Log which can be reprocessed at any time in the future; the other products have similar capabilities.
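    To make the overflow case concrete, here is the raw log record sequence a tool must fold back together, exactly as described above (page roles noted for illustration):

        First overflow of a row:
          INSERT  into the overflow page (full row contents)
          UPDATE  in the home page (full row replaced by a pointer)
        Second overflow of the same row:
          INSERT  into the new overflow page
          DELETE  from the original overflow page
          UPDATE  to the home page pointer record (new location)

    In each case the tool presents the group as the single logical update the application issued.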
  • For decompression, the tools can read and obtain the appropriate dictionaries from either the current tablespace or image copies, accounting for utilities that rebuild the dictionary. If you have Field or Edit Procs, the associated DSNEXIT library must be in the STEPLIB (or equivalent – JOBLIB, LINKLIST, etc.) concatenation. For versioned rows, the products must be able to map any versions encountered in the log; Log Master normalizes all versions to the current schema definition. XML is stored in an internal format in DB2 with "NODE IDs" that correspond to tag strings in SYSIBM.SYSXMLSTRINGS; serialization is the process of converting from the internal format to a standard character string.
  • Log Master’s reusable input is the Logical Log.
  • Log Master is an ISPF-based application. Using the task-oriented panel flows, you can quickly and easily specify the criteria for a particular event. You will always specify an Input, an Output, and a Filter. The Input can be either the DB2 Log (Active and/or Archive logs) or a Log Master Logical Log. The Logical Log is a previously built extract from the DB2 Log; for instance, you may set up a process to extract a broad range of information from the DB2 Log on a daily basis and save it to a Logical Log dataset, then later report further on some event from the 'past' using the Logical Log as your source. The Output can be any of several reports, and/or DDL/DML/DCL to be used for 'playback', and/or a Load file for populating another DB2 subsystem; you can create several outputs from one pass of the DB2 log. The Filter is your 'WHERE' clause: you use it to subset or extract data from the DB2 log, much as you use SQL to create a result set from a DB2 table. The Filter has a component for Time Frame (the begin and end points of your search) and a predicate based on data in the log record header area (about 14 fields in all).
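    A minimal batch sketch showing the three parts together, patterned on the syntax examples later in this deck (the date, report type, and predicate are placeholders to adapt):

        LOGSCAN
        REPORT
        TYPE AUDIT
        SYSOUT
        CLASS(*) NOHOLD
        DB2CATALOG NO
        FROM DATE(2009-01-19) TIME(08.00.00.000000)
        TO CURRENT
        WHERE TABLE NAME LIKE 'PROD1.%'

    LOGSCAN opens the Input (the DB2 Log), REPORT TYPE AUDIT is the Output, and the FROM/TO time frame plus the WHERE predicate form the Filter.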
  • In Log Master, VERBOSE SQL and DDL output include comments with context information about the log record – unit of recovery information, RBA/LRSN and timestamp, etc. The V910 release of Log Master introduced TEMPLATE-based reporting; a couple of recorded webinars about its usage are linked from the product page: http://www.bmc.com/products/proddocview/0,2832,19052_19429_23053_2097,00.html
  • I bring this up here to point out options for setting the end point of an ONGOING job. Usually, people are happy with TO CURRENT, or a relative time short of current to try to ensure pages are externalized. The OR LIMIT capability directs Log Master to calculate the end point based upon the LIMIT specified, if the LIMIT is less than the explicit TO point. Some customers have used this capability to more easily bound daily processing: the initial FROM time was a microsecond after midnight… run for 24 hours. Repeat. This lets Log Master calculate the midnight time frame instead of using CURRENT, which will be the time the job actually gets on an initiator.
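    A sketch of that daily pattern using the keywords from the ONGOING versus FOR LIMIT slide (the HANDLE number is a placeholder):

        FROM DATE(2009-01-19) TIME(00.00.00.000001)
        TO CURRENT
        OR LIMIT 24.00.00
        ONGOING HANDLE 473

    The FROM point only seeds the first run; thereafter the repository supplies the starting point, and the 24-hour LIMIT caps each run at a day's worth of log regardless of when the job actually starts.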
  • The Catalog Activity Report is a SUMMARY-level report. For GRANT and REVOKE activity it is somewhat sufficient, presenting the GRANTOR, GRANTEE, and authorizations. For something like a CREATE TABLE or TABLESPACE, the summary information gives only a column count or the number of partitions. For detailed information, we suggest generating a MIGRATE DDL file with the VERBOSE keyword; VERBOSE includes unit of recovery context as comments before each detailed DDL statement.
  • This example generates both the Catalog Activity report and DDL, which can be referenced if the summary information in the report is not sufficient.
  • Actually, this customer started out producing a logical log of DB2 Catalog INSERT/UPDATE/DELETE activity on a daily basis and merging it into a monthly logical log at the end of the month. Their process evolved to produce multiple logical logs in a day, but they are still just doing the monthly merge.
  • The "INCLUDE DDL OBJECTS NO" clause is also the default. With Log Master, you can include what are basically blobs o' catalog data from which we generate the DDL, where we have already grouped the multiple catalog records that represent a DDL statement into the blob; or you can include the individual DSNDB06 records instead and let us blob them up when reprocessing the logical log. This customer is doing the latter. This WHERE clause is actually doing more than just extracting catalog activity, because this Logical Log is used later as input for many reporting functions. The DATABASE = DSNDB06 predicate gets the catalog activity necessary for DDL generation, while the second and longer part of the filter looks for power user updates to production tables. This is a customer's filter, and some of the predicates are superfluous: the 'AND AUTH ID LIKE' and 'UPDATE TYPE' predicates are always true, so they could gain some efficiency by removing them. Our filters also follow standard Boolean logic, where AND has a higher precedence than OR and parenthetical groups override standard precedence.
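    For example, under standard precedence these two filters are not equivalent:

        WHERE DATABASE NAME = DSNDB06
           OR AUTH ID LIKE 'SYSADM%' AND UPDATE TYPE = INSERT

    binds as DATABASE NAME = DSNDB06 OR (AUTH ID LIKE 'SYSADM%' AND UPDATE TYPE = INSERT), while

        WHERE (DATABASE NAME = DSNDB06 OR AUTH ID LIKE 'SYSADM%')
          AND UPDATE TYPE = INSERT

    applies the UPDATE TYPE test to both branches.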
  • Our full CATALOG ACTIVITY predicate list is ALTER, COMMENT ON, CREATE, DROP, GRANT, LABEL ON, RENAME, REVOKE. The customer wanted only a subset for this specific report; they have additional jobs which produce output for DBADMs and other users as well.
  • An interesting extension a customer put in front of the product to give their auditors a view they would be more comfortable with.
  • This morphed from a customer asserting that there was no dynamic SQL activity on their system… that all SQL was done via statically bound packages. We discussed PREPARE and EXECUTE IMMEDIATE statements with them and looked in SYSIBM.SYSSTMT and SYSIBM.SYSPACKSTMT for the culprits. The customer was surprised at how much dynamic activity actually occurred. Luckily they were not an SAP/R3 site.
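    A hedged sketch of the kind of catalog query used to spot the culprits (column names recalled from the catalog of that era and worth verifying on your release; statement text is stored in fixed-size segments, so a LIKE can miss text split across segments):

        SELECT DISTINCT PLNAME
          FROM SYSIBM.SYSSTMT
         WHERE TEXT LIKE '%PREPARE%'
            OR TEXT LIKE '%EXECUTE IMMEDIATE%';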
  • A couple of difficulties to mitigate… some utilities do an INSERT at the beginning of the job and then UPDATE the IMRBA or other fields during termination. So, SYSCOPY event logging is not always a one-for-one paradigm.
  • This was an interesting case where data was corrupted and discovered to be so. Running the report revealed that a contractor had maliciously altered the values. When confronted, the person initially denied any wrongdoing, but when presented with the report, confessed and moved on to another job. This type of research can be done iteratively if you don't know exact timing or values: create a Logical Log for a larger range, then reprocess it, narrowing the range and selection criteria until the cause of the data corruption is pinpointed.
  • Log Master can scan the DB2 log for changes made by a ‘bad’ transaction – which might mean everything associated with a particular PLAN, or executed by a certain USER, within a specified timeframe. The DB2 log records before and after images of all changes, and Log Master can use the ‘before’ images to generate what we call ‘UNDO’ SQL. If the bad transaction did an INSERT, we’ll do a DELETE. If the bad transaction changed ‘WEAVER’ to ‘BAKER’, we’ll change it back. The really cool part about this process is it’s all done while the table is available for updates. There’s absolutely NO OUTAGE required to remove the bad changes. An ‘online recovery’!
  • One approach to transaction recovery is UNDO transaction recovery. This is the simplest type of SQL-based transaction recovery. It involves generating UNDO SQL statements to reverse the effect of the transactions in error. To accomplish this transformation you need a solution that understands the database log format and can create the SQL needed to undo the data modifications. With transaction recovery, the database log is read for the transaction in question, and the effects of the transactions are reversed. In SQL parlance this means (sketched below):
      INSERT statements are converted to DELETE statements
      DELETE statements are converted to INSERT statements
      UPDATE statements are converted to modify the data to its state prior to the original UPDATE
    The portion of the database that does not need to be recovered remains undisturbed, and erroneous transactions can be undone online, without an outage of the application or database.
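    A sketch of the transformation, reusing the WEAVER/BAKER example above (table, columns, and values are hypothetical):

        -- Bad transaction as captured in the log:
        INSERT INTO ACCT.PAYEE (PAYEE_ID, NAME) VALUES (42, 'BAKER');
        UPDATE ACCT.PAYEE SET NAME = 'BAKER' WHERE PAYEE_ID = 7;  -- was 'WEAVER'
        DELETE FROM ACCT.PAYEE WHERE PAYEE_ID = 9;                -- row was (9, 'SMITH')

        -- Generated UNDO SQL, applied in reverse order:
        INSERT INTO ACCT.PAYEE (PAYEE_ID, NAME) VALUES (9, 'SMITH');
        UPDATE ACCT.PAYEE SET NAME = 'WEAVER' WHERE PAYEE_ID = 7;
        DELETE FROM ACCT.PAYEE WHERE PAYEE_ID = 42;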
  • We have a plethora of UNDO tales… amazing how often an oops happens. Unqualified updates (e.g. forgot to uncomment that WHERE clause in SPUFI), application errors, etcetera. The most recent story is an insurance company that had accidentally changed the sex (Male or Female) of 1.5 million rows in one of their tables… They wanted to revert that pretty quickly.
  • This strategy is a combination of the first two techniques, with one twist: instead of generating SQL for the bad transactions we want to eliminate, we generate the SQL for the transactions we want to save. Then we do a standard point-in-time recovery, eliminating all the transactions since the recovery point. Finally, we re-apply the good transactions captured in the first step. Unlike the UNDO process, which creates SQL designed to back out all of the problem transactions, the REDO process creates SQL statements designed to reapply only the valid transactions from a consistent point of recovery to the current time. Because the REDO process does not generate SQL for the problem transactions, performing a recovery and then executing the REDO SQL can restore the table space to a current state that does not include the problem transactions. To generate REDO SQL statements, you again need a solution that understands the database log format and can create the SQL needed to redo the data modifications. When redoing transactions in an environment where availability is crucial, a PIT recovery can be done and then the application and database can be brought online; the subsequent redoing of the valid transactions to complete the recovery can then be done with the data online, reducing application downtime.
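    A sketch of the sequence (the space name is hypothetical; the log point is the quiesce LRSN shown in the quiet point report later in this deck):

        1. Point-in-time recover the space to the consistent point, e.g. with a RECOVER utility statement:

             RECOVER TABLESPACE MYDB.MYTS TOLOGPOINT X'C39EB991F617'

        2. Bring the application and database back online.

        3. Execute the saved REDO SQL (the good transactions only) via the SQL Processor or the High-speed Apply Engine while the data is available.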
  • In this example, the UNDO (or MIGRATE) set includes everything that matches the filter and time range. The larger implication is in the REDO area, which will contain everything that touched the tablespace set implicated by the filter but was not selected by the filter. So, given a segmented tablespace with one of its tables referenced in the filter, the REDO would contain SQL for the other tables within the segmented space.
  • In this example, the UNDO SQL would contain updates by BADUSER to MY.SEGMENTED_TABLE_1 and MY.PARTITIONED_TABLE within the FROM/TO range. The REDO would contain all other updates to those tables, as well as to additional segmented tables in the same space, from the REDO POINT (LAST QUIESCE) to CURRENT. The INCLUDE RI clause is interesting in that we default to YES for UNDO but NO for REDO. The reasoning: when UNDOing, the cascaded child deletes or updates need to be reverted; but for a REDO, the regular RI rules will apply and the cascading activity will occur due to the parent SQL.
  • We also have a detailed report that lists the column values that were changed.
  • The log based quiet point is any physical range in the log where there is not an active Unit of Recovery for the selection criteria. That said, a physical quiet point might not be the same as a ‘logical’ quiet point. Given a transaction process which does some activity, commits, does more, commits, does more and commits again... That’s three Units of Recovery. If this was the only activity on the system at the time of this process, we would report two quiet points between the activity. We mitigate that in Log Master by allowing you to specify a DURATION keyword to suppress quiet points shorter than the specified length.
  • I did this on one of my systems looking for a time with no DB2 Catalog activity… I ran it a few times upping the DURATION until I got a report small enough to fit on the next page. The TEMPLATE specification for the REPORT was introduced in V910 of Log Master. One of the features added as part of template reporting is the ability to ORDER BY the DURATION to influence Log Master to insert the QUIESCE for the longest quiet point period.
  • Max URIDs is another interesting field, which allows you to find an *ALMOST* quiet point if no true quiet points exist. This capability is nice when using Recover Plus for TIMESTAMP recovery, to reduce the number of open units of recovery it must mitigate to produce a consistent point.
  • The Drop Recovery report provides summarized information about the DB2 objects you want to recover, including the specific image copy that should be used for the recovery and OBID translation information. The generated JCL automates the process of getting your dropped objects back, completely up to date as of the time of the drop. Dropped tables present a special problem since the normal recovery 'unit' in DB2 is the table space; this feature provides the ability to use the table space recovery resources without disturbing other tables in the space. The Log Master technology interprets the log that resulted from the drop to find image copies and object IDs and to recreate the DDL. RECOVER PLUS technology provides the ability to recover with explicitly named image copies, translating the IDs appropriately and applying log records so no transactions are lost.
  • A couple of things to note here… The ISPF interface generates a multi-step JCL job stream of which the Log Master syntax above is just a small part. The 'OLD' disposition datasets in the Log Master syntax are allocated in IEFBR14 steps prior to the Log Master execution so they can be referenced by the Recover and Rebind steps afterwards. Log Master can also accomplish DROPRECOVERY using the DATABASE (multiple spaces dropped), TABLESPACE, or TABLE (a single table or subset of tables in a multi-table segmented space, where the space remains) keywords. However, COPIES are only produced when doing TABLESPACE-driven drop recovery; DATABASE specification will leave the spaces in COPY PENDING. The most aggressive DROPRECOVERY war story I have was an accidental DROP DATABASE which contained 300 spaces. We used Log Master to extract the SYSIBM.SYSTABLESPACE delete rows and then ran a REXX to build the DROPRECOVERY syntax, allowing all 300 spaces to be copied during recovery.
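    The REXX itself was simple; a hypothetical reconstruction (the SPACELST DD name and the BMCCPY ddname prefix are invented for illustration):

        /* REXX - read DATABASE.TABLESPACE names extracted from the      */
        /* SYSTABLESPACE delete rows (one per line, allocated to the     */
        /* SPACELST DD) and emit a DROPRECOVERY clause for each space.   */
        "EXECIO * DISKR SPACELST (STEM space. FINIS"
        do i = 1 to space.0
          say "TABLESPACE NAME" strip(space.i) "OUTCOPY YES"
          say "  REGISTER ALL"
          say "  OUTCOPYDDN(BMCCPY"i")"
        end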
  • We had users doing this on a regular basis during Y2K testing. We actually have some customers doing this with production, where they create the Logical Log as the last step of their batch process and are set to quickly undo the cycle if the verification steps after the cycle uncover something untoward.
  • We do the heavy lifting to extract the log data into a format to which application logic can be applied and logical decisions made as to how the data is processed.
  • Data warehousing applications typically replicate huge files on a periodic basis, yet often most of the file is unchanged since the last replication. Using Log Master migration support, you can discover the changes to your 'source' DB2 tables and capture the data in a Logical Log. The data can then be applied in either LOAD or SQL processes; if SQL is chosen, the Log Master Execute SQL engine can speed the dynamic SQL and allow for error handling. The Log Master repository keeps track of the log range processed by the last execution of Log Master and picks up where it left off. Any open URIDs (or non-externalized pages) are captured by the subsequent run, so the target file always receives complete updates. The period between Log Master runs is set by you, so your target file can receive updates daily, hourly, or as often as you choose. The output of Log Master is a Logical Log, which has the completed row for all modified data; we complete the row information without requiring data capture or image copy mounts. The format of the Logical Log is documented, and sample routines for common 4GLs are provided.
  • This was actually one of our highest-volume customers, and they took advantage of newer function as it was introduced (e.g. OR LIMIT). They chose SQL as the format over LOAD CSV or SDF output because Teradata could execute the generated ANSI SQL. However, at some point after the process was started, the Teradata system was changed to use a vertical bar – | – as a delimiter, so they actually edit the SQL file after generation to account for that. This extract started out running only once a day, but latency needs changed and they're now running it every 4 hours. The range of migration latency I've seen across our customer base goes from 15 minutes on the low end to 24 hours; we don't see many customers wanting to scan more than a day's worth of log in a single run.
  • Notes on this job:
      The SUMMARY FORMAT SDF report is loaded into a 'counter' table for query analysis.
      The size of the SQL file is HUGE.
      COMMIT FREQUENCY (NONE) instructs us not to bother putting COMMITs in the SQL; our default is to match the original units of recovery COMMIT pattern.
      UPDATE ALL COLUMNS influences the WHERE clause generation; we can qualify on the primary key, the key plus modified columns, or ALL columns (see the sketch below).
      The FROM DATE… these guys have been doing this for a while.
      And finally, the LIMIT 7 LOG FILES: this was a data sharing system, and a LIMIT based upon LOG FILES on data sharing is sort of squishy. Seven log files on densely used member 1 most likely covers a different elapsed-time range than on underutilized member 7; our logic basically finds the lowest time covered by all the members and uses that as the end point, so you might not get 7 log files for all members.
      I did not include the WHERE clause in this example…
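    The effect of UPDATE ALL COLUMNS is easiest to see in the generated SQL (table, columns, and values are hypothetical):

        -- Qualified by primary key only:
        UPDATE PROD.ACCT SET BAL = 510 WHERE ACCT_ID = 42;

        -- UPDATE ALL COLUMNS: every before-image value lands in the WHERE:
        UPDATE PROD.ACCT SET BAL = 510
          WHERE ACCT_ID = 42 AND BAL = 500 AND BRANCH = 'PHX';

    The ALL COLUMNS form lets the apply side notice rows that no longer match the before image, at the cost of a fatter statement.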
  • This example expanded out into 462 tables. If you care… that LRSN translates to October of 2006. We have RESET syntax, allowed after the ONGOING HANDLE specification, to direct us to honor the specified FROM time instead of determining the last end point from our repository table.
  • Interesting items on the Apply input… The LOGICALLOG section basically carries the directions for how to implement the SQL; it corresponds to the SQL generation syntax in Log Master. The OBJECTMAP section is analogous to the SQLXLAT DD statement in Log Master SQL generation, but the translation is done in Apply instead.
  • This was an early customer of Log Master… We had a Software Consultant propose editing the Logical Log control file to remap the long column data. Our first response was "WHAT?!?!? You can't DINK with the control file!" But the SC was persistent, and we replied "You better have good dinking skills." A typo happened in the problem record at one point, producing "Dinking Kills", to which we responded "Yes, it does."… It was a success and is now done by a handful of customers. They must be careful and modify their CNTL files to keep up with version changes, but they stay practiced on their "Dinking Skills". There were caveats: the data column was VARCHAR FOR BIT DATA, all of the data mapped to standard z/OS and DB2 formats (e.g. packed decimal, variable lengths set correctly), and there can be quirks with DATE, TIME, and TIMESTAMP columns. The last bullet covers using the Logical Log as input to the High-speed Apply Engine versus SQL. The original dinkers were customers prior to the availability of Apply; after Apply was on the market, the customer converted to use the Logical Log as the basis of migration, avoiding the generation and reparsing of SQL statements. Results were astounding, and they estimated that they forestalled a CPU upgrade for over a year. The ported VSAM application is still intact, but new applications are taking advantage of DB2 relational capabilities.
  • Many table and table space changes require a DROP of the object and a CREATE to introduce the alterations (e.g. DSSIZE, column reordering, etc.). We've seen customers use this practice to reduce the outage window for repartitioning a table space from 36 hours to minutes. Success story URLs (at time of presentation): http://www.bmc.com/products/proddocview/0,2832,19052_19429_23053_2097,00.html http://documents.bmc.com/products/documents/53/73/65373/65373.pdf
  • Using standard techniques (a command-level sketch follows):
      1. Issue a DB2 command to start the database with read-only status
      2. Image copy the objects
      3. Unload the tables
      4. Issue a DB2 command to STOP the database
      5. DROP the objects
      6. CREATE the new objects
      7. Load the tables
      8. Issue a DB2 command to start the database in read-only status
      9. Take an image copy
      10. Bind/Rebind all affected PLANs
      11. Issue a DB2 command to start the database in read-write status
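    At the command level, the traditional sequence looks roughly like this (database and space names are hypothetical):

        -START DATABASE(MYDB) SPACENAM(MYTS) ACCESS(RO)
            (image copy and unload run here)
        -STOP DATABASE(MYDB) SPACENAM(MYTS)
            (DROP, CREATE, and LOAD run here)
        -START DATABASE(MYDB) SPACENAM(MYTS) ACCESS(RO)
            (post image copy, RUNSTATS, and BIND/REBIND run here)
        -START DATABASE(MYDB) SPACENAM(MYTS) ACCESS(RW)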
  • We've seen customers use this practice to reduce the outage window for repartitioning a table space from 36 hours to minutes. There could be actions before and after the RENAMEs as well, e.g. dropping and recreating dependent VIEWs, which also occur within the window.
  • The steps:
      1. Copy the Original TS structure to the New TS structure with MAXPART
      2. Execute OCC on the Original TS
      3. Run a Log Master ongoing process to capture SQL on the Original Table
      4. Execute TRANSFORM with the OCC input, rebuild the index, take an inline copy, run RUNSTATS after
      5. Apply the Log Master captured SQL to the New Table until the RENAME
      6. Start the Original TS RO, apply the final Log Master SQL to the New Table
      7. STOP the Original TS
      8. RENAME the Original Table and Original Index to the Backup Table and Backup Index (see the RENAME sketch below)
      9. RENAME the New Table and New Index to the Original Table and Original Index
      10. START the New TS RW
      11. REBIND all affected plans
      12. Update things that access the Original TS (e.g. utilities) to reference the New TS; drop the Original TS (the Original Table is now in the New TS)
    Outage impact: RO at step 6, no access from steps 7 through 10.
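    The rename pair at steps 8 and 9 is plain SQL; a sketch with hypothetical names (RENAME INDEX requires DB2 9):

        RENAME TABLE PROD.TB_ORIG TO TB_BACKUP;
        RENAME INDEX PROD.IX_ORIG TO IX_BACKUP;
        RENAME TABLE PROD.TB_NEW TO TB_ORIG;
        RENAME INDEX PROD.IX_NEW TO IX_ORIG;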
  • Transcript of "Audit your SOX off..."

1. Rick Weaver, BMC Software
   Audit Your SOX Off, and other uses of a DB2 Log Tool

2. Overview
   - Mostly generic look at Log Tools
     - Basics
     - Examples
       - Auditing
       - Recovery
       - Data Migration
   - BMC Log Master for DB2 used for examples
   - High-Speed Apply Engine benefits
3. Log Tool Basics
   - Read the log
   - Provide context
     - Associate Unit of Recovery info to DML activity
     - Combine multiple records into single entities
     - Manage committed versus rolled back activity
   - Allow for robust selection criteria
   - Allow for "ONGOING" processes
   - Produce outputs
     - SQL, DDL, LOAD, REPORTS, reusable inputs
4. Log Tool Basics
   - Externalization of data
     - Completion of partially logged updates
       - Potentially costly process
       - Avoid with DATA CAPTURE CHANGES
     - Decompression
     - Invoke Edit and Field Proc decode functions
     - Decode DB2 internal formats
       - Numerics, Date, Time, and Timestamps
     - Normalize log records to the current version
     - Serialize XML

5. Log Tool Basics
   - Resources used
     - BSDS – Boot Strap Datasets
     - ARCHIVE and ACTIVE log datasets
     - DB2 Catalog
     - EDM Pool
     - TABLESPACE VSAM datasets
     - IMAGE COPY datasets
     - Reusable inputs for reprocessing, SQL application, etc.
6. Mining the DB2 Log Data – BMC Log Master
   [architecture diagram: the Batch Log Scan reads each member's BSDS, active
   logs, and archive logs into a Logical Log and Repository; the Report
   Writer, SQL Generator, DDL Generator, and Load Generator produce reports,
   DML, DDL, and a load file, driven from the online interface; outputs flow
   back to DB2 through the SQL Processor, High-speed Apply, or the Load
   utility]
7. Audit Your SOX Off Approach(es)
   - Ongoing process storing data for later reprocessing
     - Logical Log – merged into Dailies, Weeklies…
   - Report generation
     - Audit, Detail, Summary, Catalog Activity
     - User control over content
   - Verbose detailed output
     - SQL or DDL
   - LOAD or LLOG file output to post process

8. Audit Your SOX Off Examples
   - Power user (SYSADM, DBADM) activity
   - Track Authorization changes
   - Audit all schema changing activities
   - Look for changes to sensitive columns and tables
   - Detect Dynamic SQL activity
   - Report on Utility Execution
   - Ad Hoc research for cause of data corruption

9. Audit Example: Power user activity
   - Need to reconcile power user activity to change control
   - Detail Report produced on a daily basis for listed AUTHIDs and sensitive DATABASEs
   - Report ORDERed BY
     - CORRELATION ID (Jobname, TSO ID, etc.)
     - UNIT OF RECOVERY
   - Each activity has to be reconciled to a Change Control ticket

10. Audit Example: Power user activity

    LOGSCAN
    REPORT
    TYPE AUDIT
    ORDER BY CORRELATION ID, UNIT OF RECOVERY
    SYSOUT
    CLASS(*) LRECL(121) NOHOLD
    REPORT
    TYPE SUMMARY
    ORDER BY CORRELATION ID, UNIT OF RECOVERY
    SYSOUT
    CLASS(*) LRECL(121) NOHOLD
    DB2CATALOG NO
    FROM DATE(2006-01-01) TIME(08.00.00.000000)
    TO CURRENT
    WHERE AUTH ID IN ('SYSADM1', 'SYSADM2', 'SYSADM3', 'SYSADM4')
    ONGOING HANDLE 473

11. ONGOING versus FOR LIMIT
    - ONGOING – TO point resolution
      - TO CURRENT
      - TO DATE(*) TIME(-00.30.00)
    - OR LIMIT
      - 5 LOG FILES
      - 7 DAYS
      - 24.00.00

12. Audit Example: Track Authorization Changes
    - Need to track all Authorization activity
    - Catalog Activity Report produced daily
    - Filter was for all GRANT and REVOKE activity
    - The Catalog Activity Report is summary level
    - Another option – VERBOSE DDL

13. Audit Example: Track Authorization Changes

    LOGSCAN
    REPORT
    TYPE CATALOG ACTIVITY
    SYSOUT
    CLASS(*) NOHOLD
    DDL
    MIGRATE
    DATASET &SYSUID..D&DATE..T&TIME..MIGRATE.DDL NEW
    CYLINDERS SPACE(50,25) UNIT(SYSDA) RELEASE
    VERBOSE
    DB2CATALOG YES
    FROM DATE(2009-01-19) TIME(08.00.00.000000)
    TO CURRENT
    WHERE CATALOG ACTIVITY IN (GRANT, REVOKE)
    ONGOING HANDLE 375

14. Audit Example: Schema changing activities
    - Report on ALL DDL activity in the system
    - Method
      - Generate Logical Log on an ONGOING basis
      - Merge into Weeklies, Monthlies
      - Generate reports from the Monthly Logical Log

15. Audit Example: Schema Changes – LLOG Generation

    LOGSCAN
    LLOG DATASET HLQ.DB2SSID.LLOG.JAN.DATA(+1) NEW
    CYLINDERS SPACE(250,250) UNIT(SYSDA) RELEASE
    CONTROL HLQ.DB2SSID.LLOG.JAN.CNTL(+1) NEW
    CYLINDERS SPACE(150,150) UNIT(SYSDA) RELEASE
    DATEFMT DB2I INCLUDE DDL OBJECTS NO
    DB2CATALOG YES
    FROM DATE(2008-05-02) TIME(06.43.30.000000)
    TO CURRENT
    ONGOING HANDLE 4000
    WHERE (DATABASE NAME = DSNDB06 AND AUTH ID LIKE '*')
    OR ((AUTH ID LIKE 'SYSADM%' OR AUTH ID LIKE 'DBADM%'
    OR AUTH ID LIKE 'DBADM2%') AND (UPDATE TYPE = INSERT
    OR UPDATE TYPE = UPDATE
    OR UPDATE TYPE = DELETE)
    AND (TABLE NAME LIKE 'PROD1.%' OR
    TABLE NAME LIKE 'PROD2.%' OR
    TABLE NAME LIKE 'PS%.%' )
    AND TABLE NAME NOT LIKE 'PST%.%' )

16. Audit Example: Schema Changes – DDL reporting

    INPUT LLOG( HLQ.DB2SSID.LLOG.AUG.CNTL )
    LOGSCAN
    DDL
    MIGRATE
    DATASET HLQ.DB2SSID.SYSADM.CATALOG.RPT(+1) NEW
    CYLINDERS SPACE(50,50) UNIT(SYSDA) RELEASE
    VERBOSE
    DB2CATALOG YES
    FROM RBA X'000000000000'
    TO RBA X'FFFFFFFFFFFF'
    WHERE CATALOG ACTIVITY IN (ALTER, RENAME, CREATE, DROP)
    AND AUTH ID LIKE 'SYSADM%'

17. Audit Example: Changes to sensitive columns or tables
    - Need was to track the life cycle of a check (cheque)
    - Customer wrote an ISPF front end for Auditors
      - Auditor would enter
        - Account
        - Check number
        - Range for activity
      - ISPF application would generate and submit a logscan job
      - Report could be viewed after job execution

18. Audit Example: Dynamic SQL activity

    WHERE
    PLAN NAME IN (
    'DSNTEP2',
    'DSNTIAUL',
    'DSNTIAD',
    'DISTSERV',
    'DSNESPCS')
    OR
    (PLAN NAME = 'ADMPROD'
    AND CORRELATION ID <> 'ALLOWJOB')
    OR
    PLAN NAME LIKE 'QMF%'
    OR
    PLAN NAME LIKE 'CDB%'
    OR
    PLAN NAME LIKE 'ACT%'

19. Audit Example: Report on Utility Execution
    - Basically, an extract of SYSIBM.SYSCOPY
    - Can exclude non-utility ICTYPEs
      - 'A' – Alter
      - 'C' – Create
    - Approach
      - Generate a LOAD CSV file to post process
      - Transformed ICTYPE to a meaningful value

20. Audit Example: Ad Hoc research
    - Looking for a column value being set to a known bad value

    REPORT TYPE AUDIT
    ORDER BY CORRELATION ID
    SYSOUT
    CLASS(*) NOHOLD
    DB2CATALOG NO
    FROM DATE(2007-08-10) TIME(12.00.00.000000)
    TO DATE(2007-08-10) TIME(17.00.00.000000)
    WHERE
    MY.TABLE.SENSATIVE_COL CHANGED
    AND MY.TABLE.SENSATIVE_COL AFTER = bad value

21. Recovery Capabilities
    - Transaction Level Recovery
      - Avoid RECOVER outages and FIX IT programs
      - UNDO the bad
      - REDO the good after a PIT
      - Back out Integrity Checking
    - Discover and generate QUIET POINTs
    - DROP RECOVERY
    - Test System Regression
    - Customized Post Processing

22. Recovery Capabilities: High-speed Apply Engine
    - SQL or LLOG input
      - LLOG processing almost always faster
      - Avoids SQL generation and parsing
    - Multi-threaded
    - Control over object distribution
    - Robust conflict resolution
    - Restartable
    - Used as the method for Migration as well as Recovery
23. UNDO – Take Away Only the Bad Data
    - BMC Log Master for DB2 can apply UNDO SQL to get rid of bad transactions.
    - Business value – ZERO downtime for transaction level recovery!
    [timeline graphic: Good Transaction 1, Good Transaction 2, Bad Transaction;
    Generate UNDO SQL, Apply UNDO SQL, UNDO Bad Transactions]

24. UNDO – You take away only the bad transactions
    - You can apply UNDO SQL to get rid of bad transactions. Database remains
      online – no halt to normal processing.
    [timeline graphic: "Recovery" started; Good Transaction 1, Bad Transaction,
    Good Transaction 2; Generate UNDO SQL or LLOG, Apply UNDO SQL or LLOG,
    UNDO Bad Transactions]
25. Recovery Example: UNDO

    LOGSCAN
    SQL
    UNDO
    DATASET BADUSER.D&DATE..T&TIME..UNDO.SQL NEW
    CYLINDERS SPACE(500,100) UNIT(SYSDA) RELEASE
    TEMPLATE BADUSER.D&DATE..T&TIME..UNDO.TEMPLATE NEW
    CYLINDERS SPACE(100,100) UNIT(SYSDA) RELEASE
    INCLUDE RI
    REPORT TYPE SUMMARY
    SYSOUT
    CLASS(*) NOHOLD
    DB2CATALOG NO
    FROM DATE(2008-09-26) TIME(15.30.00.000000)
    TO DATE(2008-09-26) TIME(16.00.00.000000)
    WHERE AUTH ID = 'BADUSER'
26. REDO – You can re-apply only the good transactions
    - You can perform a point-in-time recovery and then re-apply good
      transactions using REDO SQL. Database is offline to recover to
      consistency, then back online.
    [timeline graphic: Good Transaction 1, Bad Transaction, Good Transaction 2;
    1. Generate REDO SQL; 2. Point-in-time recovery to a quiet point prior to
    the bad transaction; 3. Apply REDO SQL]

27. UNDO / REDO – one scan
    - Single WHERE clause and time frame
    - UNDO/MIGRATE within FROM/TO and WHERE
    - REDO everything else from PIT to CURRENT for the implicated tablespaces
    [timeline graphic: Redo Recovery Point, FROM Point of LOGSCAN, TO Point of
    LOGSCAN, CURRENT; UNDO or MIGRATE SQL covers FROM to TO, REDO SQL covers
    the Redo Recovery Point to CURRENT]
28. UNDO/REDO as one

    LOGSCAN
    REPORT
    TYPE SUMMARY
    ORDER BY TABLE NAME
    SYSOUT
    CLASS(*) NOHOLD
    SQL
    UNDO
    DATASET &SYSUID..D&DATE..T&TIME..UNDO.SQL NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    TEMPLATE &SYSUID..D&DATE..T&TIME..UNDO.TEMPLATE NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    INCLUDE RI
    REDO
    DATASET &SYSUID..D&DATE..T&TIME..REDO.SQL NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    TEMPLATE &SYSUID..D&DATE..T&TIME..REDO.TEMPLATE NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    DB2CATALOG NO
    FROM DATE(2009-01-19) TIME(10.00.00.000000)
    TO DATE(2009-01-19) TIME(12.00.00.000000)
    REDO
    FROM LASTQUIESCE
    WHERE
    TABLE NAME IN (MY.SEGMENTED_TABLE_1,
    MY.PARTITIONED_TABLE)
    AND AUTH ID = 'BADUSER'

29. Recovery: Back out Integrity Report
    - You choose to do an UNDO…
    - But, has anything happened to the row between the UNDO range and current?
    - BACKOUT INTEGRITY

30. Recovery: Back out Report syntax example

    LOGSCAN
    REPORT TYPE BACKOUT INTEGRITY SUMMARY
    DATASET RDARJH.MJ354.BIC.REPORT NEW
    LIKE(SYS1.MODEL)
    CYLINDERS SPACE(25,25) UNIT(SYSDA) RELEASE
    DB2CATALOG NO
    FROM RBA X'C3A8D25C6D12'
    TO RBA X'C3A8D2626C1F'
    WHERE
    TSNAME = 'MJ354DB.MJ354TS'
    AND (UPDATE TYPE = INSERT OR
    UPDATE TYPE = UPDATE OR
    UPDATE TYPE = DELETE)
    AND CORRELATION_ID LIKE 'RDARJH%'
    AND CONNECTION_ID <> 'UTILITY'
    AND PLAN_NAME <> 'DSNUTIL'
31. Recovery: Back out Report output example

    Date: 2009-01-27   LOG MASTER FOR DB2 - V09.01.00   Page: 1
    Time: 15.36.14     Copyright BMC Software, Inc. 1995-2009
    Backout Integrity Summary Report, By Table Name, Index Value
    --------------------------------------------------------------------------------
    From: RBA/LRSN x'04D0CAD47640'
    To:   RBA/LRSN x'04D0CAD4DE01'
    --------------------------------------------------------------------------------
    Report Information:
    Work ID : RDARJH.BICEXAMPLE   Run Number: 1   Subsystem: DEDL
    Description:
    --------------------------------------------------------------------------------
    Table Name: TSTDBA.BICTSTTB   DBID.OBID: (582.3)
    Space Name: BICTSTDB.BICTSTTS   DBID.PSID: (582.2)
    Report Index: TSTDBA.BICTSTIX   Type: Unique
    --------------------------------------------------------------------------------
    IX Value: COL_INT1  : 0
              COL_CHAR5 : C5
    Selected Log Records:
    Type    Logpoint         Auth ID  Corr ID  PlanName  Timestamp
    Insert  X'04D0CAD47A97'  RDARJH4  RDARJH4  DSNESPCS  2009-01-27-15.15.18.605
    Anomaly Log Records:
    Update  X'04D0CAD550F3'  RDARJH4  RDARJH4  DSNESPCS  2009-01-27-15.16.56.119
    Delete  X'04D0CAD55B4E'  RDARJH4  RDARJH4  DSNESPCS  2009-01-27-15.16.56.122
    Insert  X'04D0CAD56482'  RDARJH4  RDARJH4  DSNESPCS  2009-01-27-15.16.56.126
    Update  X'04D0CAD56CE9'  RDARJH4  RDARJH4  DSNESPCS  2009-01-27-15.16.56.129
    --------------------------------------------------------------------------------
32. Recovery Example: QUIET POINT Analysis
    - Avoid QUIESCEs
    - Customer historically doing a QUIESCE during a 'low' processing time every night
    - Still received 100-200 application time outs
    - Now use log processing to find and manifest QUIESCEs
    - Finding points with no open Units of Recovery when RECOVER is necessary, versus QUIESCE

33. Recovery Example: QUIET POINT Analysis

    LOGSCAN
    REPORT
    TEMPLATE ID RDAKMM.QUIETPOINTDURATION
    TYPE QUIET POINT
    DURATION 00.05.00.000000 QUIESCE YES
    SYSOUT
    CLASS(*) NOHOLD
    WHERE DBNAME = DSNDB06
    DB2CATALOG YES
    FROM DATE(2009-01-19) TIME(13.30.00.000000)
    TO DATE(2009-01-19) TIME(14.00.00.000000)

34. Quiet Point Report Output

    Date: 2009-01-19   LOG MASTER FOR DB2 - V09.01.00   Page: 1
    Time: 14.35.53     Copyright BMC Software, Inc. 1995-2009
    Quiet Point Report, By Duration(A)
    --------------------------------------------------------------------------------
    From: Date 2009-01-19 13.30.00.000000
    To:   Date 2009-01-19 14.00.00.000000
    --------------------------------------------------------------------------------
    Report Information:
    Work ID : RDAKMM3.QUIETPOINTTEST   Run Number: 4   Subsystem: DXW
    Description: RDAKMM3 2009-01-19 14.07.45 REPORTS
    Duration : 00.05.00.000000
    Max Urids : 0
    --------------------------------------------------------------------------------
    Table Space Name List Derived From Filter (omitted most of list for space)
    Tablespace Name: DSNDB06.SYSDBASE   DBID: 6   PSID: 9
    Range From: RBA/LRSN: x'C39EB46D4CEA'   Date: 2009-01-19 13.32.34.977296
            To: RBA/LRSN: x'C39EB64E86F4'   Date: 2009-01-19 13.40.59.580096
      Duration: 00.08.24.602800
      Open URIDs: 0
    Range From: RBA/LRSN: x'C39EB78DD197'   Date: 2009-01-19 13.46.25.432672
            To: RBA/LRSN: x'C39EB991F617'   Date: 2009-01-19 13.55.35.596272
      Duration: 00.09.10.163600
      Open URIDs: 0
    223 quiet point ranges shorter than the duration were suppressed in report
    Quiesce recorded in SYSIBM.SYSCOPY at x'C39EB991F617'
35. Automated Drop Recovery – Reduce Risk
    - Recreates dropped objects; the process is initiated from the online interface
    - Scans DB2 log records and generates JCL and outputs to automate drop recovery:
      - UNDO DDL to recreate the dropped object
      - Syntax for recovery and object ID translation
      - DB2 commands to rebind application plans that were invalidated when the object was dropped
      - Drop Recovery Report
      - Post recovery SQL and Rebind
    [diagram: Log Master technology interprets the log from the dropped object
    and drives RECOVER PLUS technology, which recovers using the copy and log
    from the dropped object, performs OBID translation, and applies log to the
    point of the DROP against the DB2 subsystem]
36. Recovery Example: DROP RECOVERY

    DROPRECOVERY
    TABLESPACE NAME DATABASE.TABLSPC1 OUTCOPY YES
    REGISTER ALL
    OUTCOPYDDN(BMCCPY1)
    TABLESPACE NAME DATABASE.TABLSPC2 OUTCOPY YES
    REGISTER ALL
    OUTCOPYDDN(BMCCPY2)
    RECREATE DATASET RDAKMM3.DROPREC.RECREATE.DDL NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    EXECUTE
    RECOVER DATASET RDAKMM3.DROPREC.RECOVER.CNTL OLD
    OUTCOPY ASCODED
    MIGRATE DATASET RDAKMM3.DROPREC.MIGRATE.SQL NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    REBIND DATASET RDAKMM3.DROPREC.REBIND.CMDS OLD
    REPORT DATASET RDAKMM3.DROPREC.REPORT NEW
    CYLINDERS SPACE(1,1) UNIT(SYSDA) RELEASE
    FROM DATE(2009-01-27) TIME(12.00.00.000000)
    TO DATE(2009-01-27) TIME(12.15.00.000000)

37. Recovery Example: System Regression
    - Test System Regression
      - Back out the test cycle versus recover to PIT
      - Set LOGMARK at beginning of test
      - Set LOGMARK at end of test
      - Generate LLOG from Begin to End LOGMARKs
      - Execute High-speed Apply Engine to UNDO the LLOG
    - Production too…

38. Recovery Example: Post Processing
    - Client has a need to reverse changes up to 90 days back
    - Many updates have occurred since
    - Solution
      - Produce LLOG or LOAD output for the target data
      - Post process to reverse changes while preserving current data

39. Migration
    - Data Warehouse
    - Test System Synchronization
    - To other platforms
      - High-Speed Apply Engine
      - LOAD CSV or SDF (all character) formats
40. DB2 Data Migration
    - Don't replicate entire files, just migrate the changes!!!
    [diagram: successive Log Master batch runs scan the DB2 log while a
    REPOSITORY records the migrated RBA range and in-flight URIDs. Run 1
    migrates RBA 1000-2000 less in-flight URID 1988 to a Logical Log(+1);
    run 2 migrates RBA 2000-3000 plus URID 1988; run 3 migrates RBA 3000-4000.
    Each Logical Log becomes input to the LOAD utility or an Apply SQL process]
41. Migration Example: To a Teradata Decision Support System
    - Used SQL to port
      - Post processed the SQL to change the comma delimiter
    - Wanted to limit the size of any given extract
      - Used the OR LIMIT # LOG FILES
    - Selection criteria
      - List of 52 tables to include and 18 batch jobs to exclude
    - > 800 partitions – all compressed
    - Volume ~24 billion transactions a year

42. Migration Example: Teradata Capture example

    LOGSCAN
    REPORT TYPE SUMMARY ALL ACTIVITY FORMAT SDF
    DATASET HLQ.DB2GROUP.CAPTURE.ROWCNTS(+1) NEW
    LIKE(STDMODEL)
    UNIT(SYSDA) TRACKS SPACE(5,3) RETPD(60) RELEASE
    SQL
    MIGRATE
    DATASET HLQ.DB2GROUP.CAPTURE.D&DATE..T&TIME NEW
    MGMTCLAS(MCDEL400)
    UNIT(SYSDA) CYLINDERS SPACE(1000,1000) MAXVOL(12)
    RELEASE RETPD(60)
    COMMIT FREQUENCY (NONE)
    UPDATE ALL COLUMNS
    DB2CATALOG NO
    FROM DATE(2003-02-18) TIME(15.00.00.000000)
    TO CURRENT
    OR LIMIT 7 LOG FILES

43. Migration Example: Synchronizing a Test System
    - Needed to keep the Regression Test system in sync with production to prove changes before implementation (462 tables)
    - Use LLOG as the capture format
    - Use the High-speed Apply Engine to process the LLOG
    - Has a process in place to "Refresh" the test system after major production DDL changes or utility windows
    - RESETs the ONGOING capture after a refresh

44. Migration Example: Synchronizing a Test System

    LOGSCAN
    LLOG
    DATASET HLQ.DB2SSID.LLOG.DATA(+1) NEW
    CYLINDERS SPACE(500,500) UNIT(SYSDA) RELEASE
    DATACLAS(VOLMODEL)
    CONTROL HLQ.DB2SSID.LLOG.CONTROL(+1) NEW
    CYLINDERS SPACE(25,50) UNIT(SYSDA) RELEASE
    DATEFMT DB2I
    REPORT TYPE SUMMARY
    ORDER BY TABLE NAME
    SYSOUT
    CLASS(*) NOHOLD
    DB2CATALOG YES
    FROM LRSN X'BF9615B40406'
    TO CURRENT
    WHERE
    ( DATABASE NAME LIKE 'DBMASK%'
    OR DATABASE NAME = 'ADDTNLDB' )
    AND TABLE NAME NOT IN (PROD.EXCLTB1, PROD.EXCLTB2,
    PROD.EXCLTB3, PROD.EXCLTB4,
    PROD.EXCLTB5, PROD.EXCLTB6)
    ONGOING HANDLE 10

45. Migration Examples

    //SYSIN     DD *,DLM=##
    /STARTUP/
      FILENAME=HLQ.SSID.LLOG.CONTROL(+0)
      INPUTTYPE=LOGICALLOG
      SSID=DSU
      PLANNAME=APTB9100
    /RESTART/
      RESTARTTYPE=NEW/RESTART
      RESTARTID=PHXAPLPLUS42OH2
    /LOGICALLOG/
      SQLTYPE=MIGRATE
      INCLUDERI=NO
      INCLUDEDDL=YES
      SORT=YES
      QUALIFY=KEY
      UPDATECOLUMNS=CHANGED
      CCSIDCOMPATIBLE=NO
    /DISTRIBUTIONTUNING/
      PARTITIONCLUSTERING=NO
      RICLUSTERING=YES
    /BINDTUNING/
      SYNCHRONOUS=YES
    /BIND/
      BINDOWNER=TEST
    /COMMITTRIGGERS/
      STATEMENTCOUNT=100
    /AGENT/
      MAXAGENTS=20
    /ANYCONFLICT/
      CODE=NEGATIVE
      ACTION=ABORT
      CODE=SQLWARN0
      ACTION=WARN
      CODE=POSITIVE
      ACTION=WARN
      CODE=MULTIPLEROWS
      ACTION=WARN
      CODE=NOROWS
      ACTION=WARN
      CODE=TIMEOUT
      ACTION=RETRY
      CODE=RICONFLICT
      ACTION=ABORT
    /OBJECTMAP/
      SOURCETABLE=PROD.*
      TARGETTABLE=TEST.*
    ##

46. Migration Dinking Skills
    - VSAM file ported to DB2 – two-column tables
      - Key column / remaining data
    - Need to port to another DB2 application
    - Generated LLOG
    - Dinked with the CNTL file to map the data to multiple columns
    - Generated and executed SQL
    - Converted to LLOG-based migration with Apply
      - 70% reduction as compared to SQL capture
      - 40% reduction as compared to SQL apply

47. UDCL: Unload – Drop – Create – Load
    - Reduce the object restructuring outage window
      - Standard practice:
        - Put object into read only
        - Unload object being restructured
        - STOP all access to object
        - Drop object
        - Create object with appropriate changes
          - Repartitioning
          - COLUMN changes (metrics or location)
        - Load table
        - REBIND plans
        - START access to object
      - Outage lasts the entire length of the UNLOAD/LOAD window
48. Traditional 'UDCL' Method
    [timeline graphic: Pre Image Copy, Unload Data, Drop Tablespace, Create
    Tablespace, Load Data, Post Image Copy, Runstats, Bind/Rebind, Install New
    Programs; availability phases range from Application Available through
    Read Only / Limited RO Activity to Stop all Objects - OUTAGE]
49. UDCL: Unload – Drop – Create – Load
    - Reduce the object restructuring outage window
      - Log Master / Apply Plus alternative:
        - Create new object with appropriate changes
        - Unload old object
        - Load new object
        - Run job to capture changes between Unload and Current (Apply too)
          - (iteratively, until ready to switch over)
        - Put old object into read only
        - Do final Capture / Apply (reconcile)
        - STOP objects and RENAME
          - Old object to temporary name
          - New object to old name
        - REBIND plans
        - START and carry on
      - Outage only lasts from the point of final capture through the rename and follow-on actions
50. SHRLEVEL CHANGE TRANSFORM
    [process flow diagram; prep activities leave current tables, data, online
    and batch unaffected: create new structures (TS_New, TB_New, IX_New);
    create OCC from TS_Orig and set Log Mark(0); TRANSFORM into TS_New,
    rebuild IX_New, inline copy, Runstats; execute Log Scan of TB_Orig from
    Log Mark(0) to current time creating Apply SQL; apply SQL files to update
    TB_New, setting a new Log Mark(+1) each pass; start TS_Orig read only and
    apply the final log updates to TB_New (application RO); STOP all; rename
    TB/IX_Orig to TB/IX_Backup and TB/IX_New to TB/IX_Orig (application
    unavailable); start TS_New, TB_Orig, IX_Orig; rebind; application
    available; update all TS/IX jobs and utilities, DROP TS_Orig]
51. Wrap up
    - Auditing
    - Recovery
    - Migration
    - Alternative uses of migration
    - Additional ideas to share?
    - Questions?

52. Ken McDonald, BMC Software, [email_address]
    Audit Your SOX Off, and other uses of a DB2 Log Tool