Case Studies in DB2 for z/OS Performance Tuning Part 2
Lock Lyon, Fifth Third Bancorp
May 9, 2007, 3:00 p.m. – 4:00 p.m.
Platform: z/OS   Session: B11   Beginner/Intermediate
www.mdug.org "Download Presentations"
  • How many attended presentation “PART 1”? Tried not to use too many tools. Not “tool” examples, but “situational” examples. URL for updated presentation. An introduction to Performance Tuning for the beginning-to-intermediate DBA who is relatively new to DB2 for z/OS.
  • DISCLAIMER: The author of this presentation makes absolutely no claims whatsoever about the usefulness, applicability, correctness, efficiency, or usability of any information, suggestions, or other data contained in this document. While attempts have been made to ensure the accuracy of these materials, all information contained herein is offered simply as a set of ideas. Neither the author nor their employer warrants or guarantees any of this material for any purpose whatsoever. Anyone using this material assumes full responsibility for any effects and consequences of its use.
  • The CRUX of the IT problem: a disconnect between management and technologists about what is important. Where the two camps meet: costs. These figures come from a recent survey (see below). They indicate that a great majority of current DB2 database professionals consider performance monitoring and tuning extremely important, while at the same time maintaining or reducing costs. Conclusion: new DBAs will be needed soon to support DB2 on z/OS, and they will need to get up to speed in performance tuning as soon as possible. One likely scenario is that as current, highly experienced DBAs retire, they will be replaced by new DBAs who do not have the same extensive experience. They will come from many possible areas within IT (or outside!) and they will join DBA teams that must deal with performance issues. (The figures above are from IDUG's research study "The Rise of the Renaissance Data Professional", which was sponsored by Embarcadero Technologies and produced by Unisphere Media, L.L.C. All use of this data is granted with the permission of IDUG, the independent IBM DB2 Users Group.)
  • Not all of your time will be spent doing tuning (and only cost-effective tuning, at that). It isn't obvious where a new DBA will fit best; however, the need for database performance monitoring and tuning still exists. Many of the primary DBA tasks have already been implemented (backup and recovery), can be outsourced (object creation), or require specialized training and experience (subsystem installation). It's becoming more and more likely that the new DBA will be given some kind of performance tuning responsibilities. As your organization matures, infrastructure management will be dealing with new support personnel who bring a variety of backgrounds, experience, skills, and strengths. The natural tendency will be to match people's backgrounds and skills to the tasks needed in the IT support organization.
  • Definitions here, more detail later. If you are one of these new DBAs now (or may be soon), what do you need to know? Where should you begin in order to maximize your productivity? How can you use your experience and background to best advantage? Let's assume that as a new DBA you are given the assignment of performance tuning in the DB2 for z/OS environment. Is there more than one way to do this? Yes, there are several. Here are five major categories or approaches to performance tuning. SQL Tuning involves SQL review as well as a knowledge of potential access paths, table and index data distribution, and statistics. Resource Constraint Tuning is an analysis of possible tradeoffs among CPU, elapsed time, I/O, memory, network traffic, and other resources. Application Tuning focuses attention on suites of programs that access particular tables, batch windows, and online service levels. Object Tuning concentrates on the general definitions and configurations of tables, indexes, stored procedures, and other database objects. System Tuning deals with the DB2 DBMS infrastructure that supports all of the above.
  • Quick overview on Importance. Where to use the Laws in Tuning. Examples: Add Index, RLL, Replication. For: analyzing tuning effects; prioritizing work; new DBA training. Before you dive into tuning you must understand that tuning by itself may not make things better(!). For example, you may "tune" a long-running batch job to run in ten minutes instead of two hours; however, what if it now uses twice as much CPU as before? Additionally, you must understand the context within which your tuning changes will take place. Whether it's backing up a database, tuning an SQL statement, or configuring a DB2 subsystem, you must keep in mind Lyon's Laws of Database Administration: #1, Thou shalt not allow thy data to become unrecoverable; #2, Thou shalt not allow thy data to become unavailable; #3, Thou shalt not allow access to thy data by unauthorized personnel. For a more detailed explanation of the laws, see "The Laws of Database Administration" by Lockwood Lyon, z/Journal, October/November 2006.
  • Here are the things I recommend you create, find, or acquire as part of being a database administrator. One thing not mentioned here: “The Catalog Poster”. This refers to any one of several large (30” x 42” or so) posters suitable for, well, posting in your office or cubicle. They are readily available from many third-party DB2 software vendors and contain pictorial information about the DB2 Catalog tables, including table names and column names.
  • SMF Reporting: There are many tools that input SMF records and produce DB2 reports. These include the DB2 Performance Monitor / Performance Expert from IBM (now the "Omegamon XE for DB2 Performance Expert") as well as others from third-party vendors. RMF Reporting: These are reports on resource usage based on data gathered and reported by the Resource Measurement Facility. They are more for the advanced performance tuner, as they assume an expert level of knowledge of z/OS features. I recommend you check the manuals ("RMF Performance Management Guide", SC33-7992-04, and "RMF Report Analysis", SC33-7991-09) and begin by reading up on these reports: Coupling Facility Activity, I/O Activity, and CPU Activity.
  • Probably in Backup and Recovery First. Autonomic computing is the ability: “ ... to create computer systems and software that can respond to changes in the IT (and ultimately the business) environment, so the systems can adapt, heal and protect themselves.” (Autonomic Computing SG24-7099) The IBM autonomic computing vision is for intelligent, open systems that: Manage complexity. “Know” themselves. Continuously tune themselves. Adapt to unpredictable conditions. Prevent and recover from failures. Provide a secure environment. Autonomic computing systems consist of four attributes. They are: Self-configuring (able to adapt to changes in the system) Self-healing (able to recover from detected errors) Self-optimizing (able to improve use of resources) Self-protecting (able to anticipate and cure intrusions) Also see “ Embedding Autonomics into Legacy Processes ” by Lockwood Lyon, Technical Support, January 2006.
  • Reactive -- Goal: Cost Justification Pro-Active -- Goal: Application and System Productivity The Intermediate-level DBA should have a good knowledge of the Accounting Summary (or Accounting SHORT Report), Explain execution and interpretation, and health check information for their site. In addition, there are three more categories of reports that are most important. Historical Access Paths refers to storing access paths generated by Explain in order to compare, watch for changes, or do trend analysis. The Accounting Long Report is a way of drilling down beyond the summary information in the Accounting Short Report. Most of the SMF-based Reports (Accounting, Statistics, Performance, etc.) can be used with various keywords to limit the scope of the report (and keep the size down!), thus producing “Top n” reports.
  • SCALING to VOLUME. Explain provides only access path information for an SQL statement, Plan, or Package. To go beyond basic SQL and access path tuning you must either buy or develop a process for storing historical access path information. This can be done by designating a "master" PLAN_TABLE in which you store all access paths, although you will need to somehow correlate access paths that change due to SQL statement number changes. (One way to do this is by implementing the QUERYNO parameter on all SQL statements.) With a master PLAN_TABLE created, you can now execute various queries of your own. For example: join with SysTables, SysIndexes, and SysColumns (and others) to display the specific columns of the indexes chosen for access, along with the other indexes on each table. You can also select statistics such as table cardinality in order to determine which queries access tables with no RunStats, tables with few or no rows, or large tables. Join with the catalog tables SysKeys and SysRels to determine which access paths will be affected by referential integrity. Another option is to purchase a software tool that will gather, store, and display this information.
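    As a minimal sketch of the kind of query you might run against such a repository: the MASTER_PLAN_TABLE name, its DBA1 qualifier, and the saved-copy approach are assumptions, and catalog column names can vary by DB2 version, so verify everything against your own catalog before using anything like this.

        -- Hypothetical: DBA1.MASTER_PLAN_TABLE holds PLAN_TABLE rows saved at each bind.
        -- List indexed accesses together with the cardinality of the table accessed;
        -- CARDF = -1 normally means RUNSTATS has never been run on the table.
        SELECT P.PROGNAME,
               P.QUERYNO,
               P.ACCESSTYPE,
               P.ACCESSNAME   AS INDEX_USED,
               P.MATCHCOLS,
               T.CARDF        AS TABLE_CARD
          FROM DBA1.MASTER_PLAN_TABLE P
               JOIN SYSIBM.SYSTABLES T
                 ON T.CREATOR = P.CREATOR
                AND T.NAME    = P.TNAME
         WHERE P.ACCESSTYPE IN ('I', 'I1', 'N')    -- index access types
         ORDER BY P.PROGNAME, P.QUERYNO;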
  • The Accounting Long Report gives extremely detailed information about Plans and Packages. This diagram gives an idea of how the report appears. Some of the more commonly-referenced sections are noted in color.
  • Even more of the Accounting Long Report.
  • Top N Reorg Jobs; Top N "Big Tables". Most of the SMF-based reports can be requested to display a limited number of items, so-called "Top n". For example, you can request analysis of all packages during a time period, but only to report on the ten (or twenty, or fifty, or whatever) with the highest value of In-DB2 CPU Time, or Class 1 Wait Time, or any of a number of fields. Here is a list of some of the most useful fields:
      - Elapsed Time (Application; In-DB2)
      - CPU Time (Application; In-DB2)
      - Class 1 Wait Time (In-Application)
      - Class 2 Wait Time (In-DB2)
      - Class 3 Wait Suspensions
      - # DML Statements Executed (DSNTIAUL Unloads)
      - # GetPages
      - # Buffer Updates
      - # Synchronous Reads
  • Beyond Beginner DBA. Emphasis -- Pro-Active. Trends, heuristics; SQL-based, RCA-based. For the intermediate DBA there are many, many possible approaches to tuning. Since the most common approaches taken are SQL-based and Resource Constraint-based, the remainder of this presentation will concentrate on the most common tuning tactics you would use for these two approaches.
  • Here are some examples of symptoms you discover in the Short Report and one way to proceed to gather more information by drilling down.
      Symptom                        Procedure
      High CPU (as % of Elapsed)     See Long Report
      High Elapsed Time              See Long Report: Waits, Claims
      High # of DML Statements       See Long Report
      High # Synch Reads             See Long Report
  • A Long-Running Application. Here is a pair of Accounting Short and Long reports that shows a long-running application. The succeeding reports show how the summary data in the Short Report leads you to “drill down” in the Long Report for additional information.
  • Dashboard ideas: I/O Profile (Write versus Read) - WHERE???; Logging Load (another measure of Write activity); Utility as a Percent of Load; Dynamic SQL Profile (Select, Fetch versus Update). Typical Top n Report selections: Elapsed Time (Application; In-DB2), CPU Time (Application; In-DB2), Class 2 Wait Time (In-DB2), # Buffer Updates. Once you get these scheduled you can expand your processing to automate some of the review. One possibility is to use REXX (or another tool) to parse the report and store the information in a table for later trend analysis or cross-referencing. As you get more sophisticated in your review and analysis you will begin to "tune" your reports. For example, you may use an Exclude process to leave certain "approved" long-running programs out of your Top 10 Elapsed Time report in order to concentrate on real issues.
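    To sketch the "store information in a table" idea: the table below, its columns, and the assumption that a REXX (or other) parsing step populates it are all hypothetical, not part of the original material.

        -- Hypothetical repository for parsed "Top n" accounting figures.
        -- A REXX (or other) parsing step would INSERT one row per plan
        -- per reporting period.
        CREATE TABLE DBA1.TOPN_HISTORY
              (REPORT_DATE  DATE          NOT NULL,
               PRIMAUTH     CHAR(8)       NOT NULL,
               PLANNAME     CHAR(8)       NOT NULL,
               CL1_ELAPSED  DECIMAL(15,6) NOT NULL,
               CL2_ELAPSED  DECIMAL(15,6) NOT NULL,
               CL2_CPU      DECIMAL(15,6) NOT NULL,
               GETPAGES     DECIMAL(15,0) NOT NULL,
               SYNC_READS   DECIMAL(15,0) NOT NULL);

        -- Simple trend query: how has In-DB2 CPU moved for each plan
        -- over the last 60 days?
        SELECT PLANNAME, REPORT_DATE, CL2_CPU
          FROM DBA1.TOPN_HISTORY
         WHERE REPORT_DATE > CURRENT DATE - 60 DAYS
         ORDER BY PLANNAME, REPORT_DATE;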
  • Good in "SQL Stable" environment. TEMPORARY. DOCUMENTED & REVIEWED. EXPIRES. One typical "tuning" use is to override a bad Join type or tablespace scan for tables used as "work" tables (sometimes called "staging" tables). Along with the VOLATILE keyword on CREATE TABLE this can be used to get the DB2 optimizer to prefer indexed access to a table. Another typical use is to "pin" particular (good) access paths for certain SQL statements so that future changes, maintenance, etc., would not cause a changed (bad) access path to negatively impact a production application. You would then use differences in historical EXPLAIN output to trigger notification that the optimizer-chosen access paths had changed, but the Hints would ensure that no changes occurred. It is extremely important to document your use of Optimizer Hints and to regularly review your implementation. DB2 maintenance may make your hint redundant, unnecessary, or even unusable. Sometimes the maintenance HoldData indicates that certain plans and packages should be re-bound. Also, changes or maintenance to certain DB2 objects (such as data-partitioned secondary indexes) may make your hints sub-optimal. I recommend making access path analysis for every Hint a part of your regular verification process during maintenance.
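    A hedged sketch of the usual hint mechanics in DB2 for z/OS: the hint name, query number, program, and collection names below are invented, the OPTHINTS subsystem parameter must be enabled, and details vary by DB2 version.

        -- Assumption: the PLAN_TABLE rows for QUERYNO 100 in program PGMA already
        -- describe the access path you want to keep. Name those rows as a hint,
        -- then tell DB2 to honor it.
        UPDATE PLAN_TABLE
           SET OPTHINT = 'KEEPPATH'
         WHERE QUERYNO  = 100
           AND PROGNAME = 'PGMA';

        -- Static SQL picks the hint up at rebind time:
        --    REBIND PACKAGE(COLL1.PGMA) OPTHINT('KEEPPATH') EXPLAIN(YES)

        -- Dynamic SQL picks the hint up through the special register:
        SET CURRENT OPTIMIZATION HINT = 'KEEPPATH';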
  • Resource Constraint Analysis is a process whereby you can compensate for one resource constraint by using another. Some examples of this would be:
      - CPU Constraint + DASD Availability: add additional indexes, add columns to indexes, remove data compression
      - CPU Constraint + Appl Throughput Availability: artificially lengthen application elapsed times
      - Appl Throughput Constraint + DASD Availability: denormalize tables, partition tables, implement parallelism
      - DASD Constraint + Appl Throughput Availability: eliminate embedded freespace, drop indexes
    Warning! These are not recommendations! These examples are presented in order to show the process of resource constraint analysis. Any changes you propose should be done within the context of Lyon's Laws.
  • Team created a process for finding CPU Hog Applications:
      - Review Acctg Report: Average In-DB2 TCB Time, # SELECTs, etc.
      - Create metrics: "CPU per SELECT", "CPU per INSERT", etc.
      - Entered in spreadsheet, sorted (descending!)
    Team did Resource Constraint Analysis:
      - Allocated an additional 1 GB of real memory to DB2
      - Allocated several BPs in dataspaces, assigned heavily-used tables to them
      - Allocated 100 GB of additional DASD to DB2
      - For many small and heavily-used tables set COMPRESS NO; DASD increased, CPU increased (additional I/Os); net decrease in CPU due to elimination of compress/decompress
      - Increased SLAs for certain low-priority distributed apps
      - Combined "small" stored procedures into single modules
      - Enlarged the dynamic SQL cache; net decrease in CPU due to elimination of stored procedure overhead and fewer dynamic SQL binds
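    The "CPU per SELECT" style of metric can also be computed in SQL rather than a spreadsheet. This is a hypothetical sketch: DBA1.ACCTG_SUMMARY, its columns, and the idea that it holds per-plan totals parsed from the Accounting Report are all assumptions.

        -- Hypothetical: DBA1.ACCTG_SUMMARY holds one row per plan per day with
        -- In-DB2 CPU seconds and DML counts parsed from the Accounting Report.
        SELECT PLANNAME,
               IN_DB2_CPU,
               SELECTS,
               CASE WHEN SELECTS > 0
                    THEN IN_DB2_CPU / SELECTS
                    ELSE 0
               END AS CPU_PER_SELECT
          FROM DBA1.ACCTG_SUMMARY
         WHERE REPORT_DATE = CURRENT DATE - 1 DAY
         ORDER BY CPU_PER_SELECT DESC;    -- sorted descending, as in the study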
  • This is Triage.
  • The results in this case: An overall reduction in CPU used.
  • Can use BIND with the COPY parameter to non-destructively Bind. EXPLAIN automation usually includes a regular review of access paths by both DBAs and application developers. To do this you need a place to store before/after EXPLAIN results in a manner that makes comparisons easy. The review should cover critical applications, any problem SQL, particular "bad" access paths (e.g., tablespace scans), and any other concerns you may have (such as access to special objects). Some typical items to review:
      - Top performers (both GOOD and bad), for SCALING
      - Current status of Hints, tricks (AND 0 = 1), overrides
      - Cross-reference of Access Paths to Objects and to Applications
      - Special objects (DPSIs, LOBs, MQTs, Triggers/UDFs)
      - New SQL statements
      - SQLs accessing Catalog Tables
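    A minimal sketch of a before/after comparison, assuming you keep two saved copies of PLAN_TABLE rows; the snapshot table names (DBA1.PT_BEFORE and DBA1.PT_AFTER) are invented for illustration.

        -- Report statements whose access path changed between the prior bind
        -- (PT_BEFORE) and the new bind (PT_AFTER).
        SELECT B.PROGNAME,
               B.QUERYNO,
               B.ACCESSTYPE AS OLD_ACCESS,
               B.ACCESSNAME AS OLD_INDEX,
               A.ACCESSTYPE AS NEW_ACCESS,
               A.ACCESSNAME AS NEW_INDEX
          FROM DBA1.PT_BEFORE B
               JOIN DBA1.PT_AFTER A
                 ON A.PROGNAME = B.PROGNAME
                AND A.QUERYNO  = B.QUERYNO
                AND A.QBLOCKNO = B.QBLOCKNO
                AND A.PLANNO   = B.PLANNO
         WHERE A.ACCESSTYPE <> B.ACCESSTYPE
            OR A.ACCESSNAME <> B.ACCESSNAME
         ORDER BY B.PROGNAME, B.QUERYNO;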
  • Here are five common performance issues. In this last section we review tuning options and tactics for these challenges.
  • Dynamic SQL is most commonly encountered in a distributed environment, where SQL arrives at DB2 from … somewhere! DB2 uses CPU to prepare and execute the SQL statement, so CPU may become a constrained resource. Sometimes you can allocate a large enough dynamic statement cache so that commonly-submitted statements (and their access paths) are stored; this can reduce the CPU cost of re-execution. Of course, this is only true for statements that are Exactly Alike. This usually requires that the SQL statements use parameter markers rather than literals, since differences in the values of the literals make the statements different (as far as the cache is concerned). Along with the CPU used, another issue with dynamic SQL is that the access path may be unknown until execution time.
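    To illustrate the "Exactly Alike" requirement, here is a contrast between a literal and a parameter-marker version of the same statement; the table and column names are invented.

        -- Literal form: every distinct account number produces a different
        -- statement text, so the dynamic statement cache rarely gets a match.
        SELECT CURR_BAL FROM PROD.ACCOUNT WHERE ACCT_NO = '123456789';
        SELECT CURR_BAL FROM PROD.ACCOUNT WHERE ACCT_NO = '987654321';

        -- Parameter-marker form: one cached statement (and one prepared
        -- access path) is reused for every account number supplied at run time.
        SELECT CURR_BAL FROM PROD.ACCOUNT WHERE ACCT_NO = ?;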
  • An overall view of DDF performance appears on the Statistics Reports. While these are not in the scope of this presentation, they can be another valuable tool for the DBA. Poorly-coded SQL; literals, not parameter markers; no EXPLAIN. In order to tune dynamic SQL statements you need a method of "capturing" them for analysis. Some mainframe-based software tools allow you to capture SQL statements. Other options may include your gateway software (in the form of SQL traces) as well as the originating application system. Once you begin analysis you usually find at least one of the following: Access paths are sometimes very bad, and the most common cause is poorly-coded SQL statements (missing Join predicates, etc.). You must investigate to determine the source of the statements, which is sometimes a software tool used to "assist users" in coding SQL. Many SQL statements code literals rather than using parameter markers. This makes it difficult for the dynamic statement cache to function. Discover why literals are used, and attempt to get this changed. Developers in the distributed world are often unaware of (or not allowed to access) an EXPLAIN tool; hence, they are not able to view access path information during query development. Develop a method for them to get access paths, perhaps by installing an EXPLAIN tool locally.
  • Especially in Dynamic SQL Environment. Especially in "Mature", or "Deliver" Environments. While many DBAs spend at least some time analyzing the need(s) for adding indexes to tables, it is rare that they review current index use to determine if redundant or unneeded indexes exist. The Big Question: What is the reason that each index exists? Some possible reasons are:
      - Support of a uniqueness constraint or a Primary Key constraint
      - Specification of partitioning key ranges (for an index-partitioned table)
      - Provide an indexed (or index-only) access path
      - Provide a high-performance access path to a foreign key in order to support DBMS-enforced referential integrity
    The performance issues listed here affect resource use, and so are relevant when doing performance tuning.
  • Example: User-defined Catalog Table Indexes. Since The Big Question is why each index exists, the need for documentation should be obvious. Oddly, almost no DBA seems to have implemented the COMMENT ON INDEX statement in order to do this. Using COMMENT ON has additional advantages: one simple way of reporting index use would be to join the SysIndexes, SysPackDep, and SysPlanDep tables in order to create cross references of where indexes are used … and why. One report could show where indexes are used … and not used. For example, if an index was specifically developed for "INDEX-ONLY ACCESS TO ACCOUNT INFORMATION" but is not used in any package or plan, then it becomes a candidate for review (maybe leading to an SQL review). Similarly, if a five-column index appears in multiple EXPLAIN rows but the highest Match Columns value is 2, then further review may be necessary. Health checks can assist in your analysis of indexes. (See Part 1 of this presentation.) Some example health checks would be to report indexes:
      - In Referential Integrity relationships
      - On tables having more than "n" indexes
      - With no references in any plan or package (watch for RI!)
      - Defined as CLUSTER but with poor Cluster%
      - With low cardinality for the first key column
      - With FULLKEYCARD < 1% of Table cardinality
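    Two hedged sketches of the ideas above, one for documenting an index and one for the cross-reference report. The index name and comment text are invented, and an index with no plan or package dependencies may still be needed for uniqueness, clustering, partitioning, or RI, so treat the result as a review list only.

        -- Document why the index exists.
        COMMENT ON INDEX PROD.XACCT02 IS
            'INDEX-ONLY ACCESS TO ACCOUNT INFORMATION - SUPPORTS NIGHTLY EXTRACT';

        -- Review list: indexes with no plan or package dependency.
        -- (BTYPE = 'I' marks an index dependency in SYSPLANDEP and SYSPACKDEP.)
        SELECT IX.CREATOR, IX.NAME, IX.TBNAME, IX.UNIQUERULE, IX.CLUSTERING
          FROM SYSIBM.SYSINDEXES IX
         WHERE NOT EXISTS
               (SELECT 1 FROM SYSIBM.SYSPLANDEP PD
                 WHERE PD.BCREATOR = IX.CREATOR
                   AND PD.BNAME    = IX.NAME
                   AND PD.BTYPE    = 'I')
           AND NOT EXISTS
               (SELECT 1 FROM SYSIBM.SYSPACKDEP KD
                 WHERE KD.BQUALIFIER = IX.CREATOR
                   AND KD.BNAME      = IX.NAME
                   AND KD.BTYPE      = 'I')
         ORDER BY IX.CREATOR, IX.NAME;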
  • Here, the environment allows (or requires) that long-running batch jobs (scripts) process while on-line applications are either down, unavailable, or in a period of low-activity. In this way some batch jobs can get the exclusive use of resources that they need. One example would be a payroll application that must prohibit access during pay calculations. The issue that many companies are experiencing is that their on-line day is getting longer, either because the organization is becoming international or because they communicate and trade information with other firms whose scope is international. Another similar issue is when the amount of work being processed in the batch cycle is increasing, thereby lengthening the cycle.
  • If the goal is to shorten the batch cycle, then one possibility is to consider “elapsed time” as a constraint and do resource constraint analysis. Before doing that, however, use your scheduling tool (or some other method) to review batch cycle times historically. Attempt to determine which jobs or portions of the cycle have been running longer, and analyze if this trend is data volume-related. If so, then your only alternative (short of finding lots of SQL that needs fixing) may be to acquire resources in the form of additional CPUs, memory or DASD. If you believe that you can tune the DB2-related portions of the batch cycle, review the resource constraints and determine if there is unused CPU, memory, or DASD available. If so, you may be able to “trade off” these resources in order to shorten elapsed times. Some possibilities include implementing a form of parallelism and partitioning (or re-partitioning) tables. It is also possible that you may discover plans and packages having excessive Wait times that you can reduce.
  • VLDB Environment. Surrogate Keys (Lawson) “… typically means that performance was not seriously considered.” In cases where you require a LOT of rows from a table, DB2 performs best when it can pre-fetch data (asynchronously from you). This requires that your SQL statement can be coded so DB2 determines that prefetch of data will be efficient. This means that tables should be processed in clustering sequence, and indexes in key sequence (thus invoking sequential prefetch or list prefetch). If not, and DB2 chooses a non-cluster index in an access path, you may possibly be processing table rows by “bouncing” semi-randomly from one page to another in the pageset. This happens frequently in database designs that include a lot of surrogate keys without use of natural keys for clustering or partitioning. The result: queries experience a lot of waits for synchronous I/Os, usually against tablespace pagesets.
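    Where such a design review does let you cluster on the natural processing key (and no other index on the table is already defined as CLUSTER), the change itself is ordinary DDL. This is a hypothetical example with invented object and column names, and a REORG of the tablespace is still needed afterward before the rows actually sit in clustering sequence.

        -- Hypothetical: cluster order history by the natural key the nightly
        -- batch processes in (customer, then date) instead of a surrogate key.
        CREATE INDEX PROD.XORDHIST_NK
            ON PROD.ORDER_HIST (CUST_NO, ORDER_DATE)
            CLUSTER;
        -- Follow with REORG TABLESPACE so existing rows are re-sequenced;
        -- only then can sequential prefetch replace the random synchronous I/O.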
  • Our last performance challenge concerns the prerequisites for access path selection; namely, tablespace and index Reorgs, RunStats, the use of optimizer hints, and ReBind. The Reorg and RunStats utilities consume a lot of I/O and CPU resources; thus, they should be executed only when required. How often is this? If table rows and values are relatively unchanging, maybe RunStats only needs to be run once (and Reorg not at all). Tables loaded daily (assuming they are loaded in clustering sequence) should likewise never need Reorg. Programs that are infrequently re-bound do not require weekly up-to-date statistics (at least for access path selection). On the other hand, dynamic SQL may be re-prepared often, depending upon whether access paths are saved in the dynamic statement cache. Tuning all of this requires an understanding of how often access paths change, and how they change. Begin with health checks, particularly ones that analyze packages and plans that have not been recently bound. Next, analyze access path history to determine the scope of potential access path changes and whether they will become necessary. For example, you may find that SQL statements specifying equals predicates ("=") for all the columns of a unique index will always generate an indexed access path, regardless of statistics. Or, you may find that a Cursor that selects from a single table and has no WHERE predicates at all will always generate a tablespace scan.
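    A hedged example of the "not recently bound" health check; SYSPACKAGE column names and timestamp arithmetic should be verified for your DB2 version, so treat this as a sketch only.

        -- Packages whose last bind is more than a year old: candidates for an
        -- access path review before any mass REBIND.
        SELECT COLLID, NAME, BINDTIME, VALID, OPERATIVE
          FROM SYSIBM.SYSPACKAGE
         WHERE BINDTIME < CURRENT TIMESTAMP - 1 YEAR
         ORDER BY BINDTIME;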
  • Here are some common DB2 environments. In these cases, your strategy is based on balancing the need for good data distribution and high-performance access paths with the resources required to execute Reorg and RunStats.
  • In the process of performance tuning, the DBA has the opportunity to both reduce and increase resource usage. So, any choices made that may significantly change the amount of CPU, memory, I/O, elapsed time, or other real/perceived resource should be coordinated with the capacity planning function. For example, suppose your batch cycle is consuming a lot of elapsed time. You implement some query parallelism to shorten one long-running batch job. It runs faster, but now uses much more CPU time. Because of this, you may have inadvertently created a CPU constraint, which now causes other processes to perform poorly. Worse, since all z/OS subsystems must use the CPU, any increase in DB2 CPU usage means that less CPU is available for other subsystems such as IMS, CICS, and JES. Part of the answer is to be able to forecast resource constraints, which is the job of formal capacity planning. The DBA has a major input into this process, since their work can have a significant effect upon resource usage.
  • No matter what areas of tuning you decide to begin with, one thing is certain: you will have a lot of work to do! To reduce your workload you should concentrate on automating any reporting or analysis processes. One good place to begin is to set up, review, and automate your "Top n" reports. Choose the report or reports relevant to your tuning area and implement a regularly-scheduled process to produce the reports. Schedule regular meetings to review the results with the appropriate people. For example, if you are concentrating on SQL tuning, set up regular reports of your Top Ten (or fifty) SQL statements. Choose your criteria for reporting: total elapsed time, total CPU time, and total synchronous I/O wait time are common items. Run these reports daily or weekly and review them with your peers or with the proper application area. Review available health checks and/or develop your own. Run them regularly and review them. Develop plans to address issues when they arise. Even the most frugal shops have tools that may help you. Discover what IBM and third-party tools exist and determine if they can help you get your job done. Pick what looks like the most useful tool and learn it.
  • May 9, 2007 3:00 p.m. – 4:00 p.m.

    1. Case Studies in DB2 for z/OS Performance Tuning Part 2
       Lock Lyon, Fifth Third Bancorp
       May 9, 2007, 3:00 p.m. – 4:00 p.m.
       Platform: z/OS   Session: B11   Beginner/Intermediate
       www.mdug.org "Download Presentations"
    2. Outline
       • Performance Tuning for the intermediate DBA who is relatively new to DB2 for z/OS
       • The New DB2 DBAs: Differing Backgrounds, Different Expectations
       • Five Common Tuning Tactics
       • Five Common Performance Challenges
       • Where to Begin
    3. Is Performance Important? (REVIEW)
       • The DBA's Most Important Task: Performance Monitoring (81%)
       • Most Important Tool Functions: Monitoring Performance and Availability (83%)
       • Top DB Challenge: Reducing Cost (47%)
       • Areas of Responsibility for DB Monitoring: Performance and Tuning of Database (70%)
    4. The New DB2 DBAs (REVIEW)
       • Differing Backgrounds: Operations, Application Support, Programming, DBA (other platform), Management, Systems Programming, University, Technical School
       • Differing Expectations:
         - Data Modeling and Database Design
         - SQL Review
         - Backup and Recovery
         - Security
         - Subsystem Installation and Configuration
         - Performance Monitoring and Tuning
    5. Approaches to Tuning (REVIEW)
       • or, What's the Best Use of Your Time?
       • Your Assignment: "Tune"
       • How do you judge the best strategies?
       • SQL Tuning (behavior-based); Resource Constraint Tuning (resource-based); Application Tuning (application-based); Object Tuning (architecture-based); Systems Tuning (enterprise-based)
    6. Before You Tune: The Laws (REVIEW)
       • Lyon's Laws of Database Administration
       • #1: Data Recoverability
       • #2: Data Availability
       • #3: Data Security
    7. A Checklist for the DBA (See Part 1)
       • The Map (Pictures for Communication)
         - The Mainframe Environment: TCP/IP, JES, z/OS, CICS, DBMSs, IRLM, CPU, Memory, DASD, etc., etc.
       • Your DB2 Subsystem Profile
       • Tools
       • Change Control
       • Survival Kit
    8. Your Tools
       • SMF Reporting
         - Accounting Short and Long Reports
         - Statistics Reports
         - Deadlock/Timeout Report
         - SQL Trace Report
       • EXPLAINs
       • RMF Reports (Advanced)
    9. Autonomics and Automation
       • Autonomics
       • Automation
         - Data Gathering
         - Reporting
         - Analysis
    10. The Three Most Important Performance Reports
        Reactive:
        • The Accounting Short Report
        • Explains
        • Health Checks
        Predictive, Pro-Active:
        • Historical Access Paths (Explain ++)
        • Accounting Long Report ("Accounting Detail")
        • "Top n" Reports
    11. Historical Access Paths
        • Extended EXPLAIN
          - History of changes
          - Trend analysis
        • Possible Extensions (without tools)
          - Join to SysTables, SysIndexes, SysColumns (alternate indexes; stats such as column attributes, cardinality, etc.)
          - Joins to SysKeys, SysRels (RI)
    12. Accounting Long Report
        • Drill Down into Plan/Package Details
        • Report sections include: Elapsed Time Summary, Time Details, Global Contention, SQL DML, Locking, Class 2 Time Summary, Suspensions, Highlights, SQL DCL, Terminations, SQL DDL, Drains / Claims, Data Capture, Data Sharing, Query Parallelism
    13. Accounting Long Report
        • More! Stored Procedures, UDFs, Triggers, Logging, ROWIDs, RID Lists, Average SU, Dynamic SQL, Misc (LOBs), Buffer Pools, Grp Buffer Pools
    14. "Top n" Reports
        • Time Period May Depend on Volume, Connection
          - Daily (or shorter)
          - Weekly (may not be possible unless "reduced")
        • Customize for Environment
          - On-line
          - Distributed
          - Batch (IMS, TSO, utility, etc.)
        • 10, 20, 50, 100 Based on Experience
    15. Five Common Tuning Tactics
        • Accounting Report Drill-Down
        • Automated "Top n" Analysis
        • Selective Optimizer Hints
        • Constraint Tradeoffs
        • Explain + Automation + Regular Review
    16. Accounting Report Drill-Down
        • Beginning with Accounting Short Report, Drill Down (Usually via Accounting Long Report)
        • Typical Use
          - Execute, look for "exceptions"
          - Run Long Report for exceptions
        • Begin Here
          - Become familiar with Long and Short reports
          - Run regularly for a "critical" system
    17. Accounting Drill-Down Study

        ACCOUNTING REPORT - SHORT
        PRIMAUTH PLANNAME  SELECTS INSERTS UPDATES DELETES  CLASS1 EL.TIME  CLASS2 EL.TIME  GETPAGES
        MEMBER             FETCHES OPENS   CLOSES  PREPARE  CLASS1 CPUTIME  CLASS2 CPUTIME  BUF.UPDT
        -------- --------  ------- ------- ------- -------  --------------  --------------  --------
        PRODBAT1 DSNTIB71     0.00    0.00    0.00    0.00    5:06.436522     5:05.754667   1500.4K
        DB2P                 70.0K    1.00    1.00    1.00      52.485688       52.275601    154.9K
        ------------------------------------------------------------------------------
        |PROGRAM NAME  SQLSTMT  CL7 ELAP.TIME  CL7 CPU TIME  CL8 SUSP.TIME  CL8 SUSP|
        |DSNTIAUL        70.0K    5:05.754665     52.275600    3:39.298783     73.3K|
        ------------------------------------------------------------------------------

        ACCOUNTING REPORT - LONG
        PRIMAUTH: PRODBAT1   PLANNAME: DSNTIB71   MEMBER: DB2P
        ELAPSED TIME DISTRIBUTION
        APPL ]
        DB2  ]==============> 28%
        SUSP ]====================================> 72%

        CLASS 3 SUSPENSIONS   AVERAGE TIME  AV.EVENT
        --------------------  ------------  --------
        LOCK/LATCH(DB2+IRLM)      0.225273  43876.00
        SYNCHRON. I/O             0.961634    450.00
        DATABASE I/O              0.961634    450.00
        LOG WRITE I/O             0.000000      0.00
        OTHER READ I/O         3:37.465217  28995.00

        BP10 BPOOL ACTIVITY    AVERAGE     TOTAL
        -------------------   --------  --------
        BPOOL HIT RATIO (%)      30.91       N/A
        GETPAGES               1178.2K   1178234
        BUFFER UPDATES            0.00         0
        SYNCHRONOUS WRITE         0.00         0
        SYNCHRONOUS READ        157.00       157
        SEQ. PREFETCH REQS    22349.00     22349
        LIST PREFETCH REQS        0.00         0
        DYN. PREFETCH REQS     3559.00      3559
        PAGES READ ASYNCHR.     813.8K    813836

        BP7 BPOOL ACTIVITY     AVERAGE     TOTAL
        -------------------   --------  --------
        BPOOL HIT RATIO (%)      82.14       N/A
        GETPAGES                151.1K    151054
        BUFFER UPDATES          154.9K    154874
        SYNCHRONOUS WRITE         0.00         0
        SYNCHRONOUS READ         97.00        97
        SEQ. PREFETCH REQS     6897.00      6897
        LIST PREFETCH REQS        0.00         0
        DYN. PREFETCH REQS        0.00         0
        PAGES READ ASYNCHR.   26880.00     26880
    18. Automated "Top n" Analysis
        • Choose "Top n" Reports; Schedule Regular Executions and Review Results
        • Typical Uses
          - Specific applications having issues
          - Connections, utilities
        • Begin Here
          - Resource constraints
          - Critical applications (elapsed, CPU, wait times)
    19. Selective Optimizer Hints
        • Typical Uses
          - "Pin" access paths for critical SQL statements
          - Possible workaround for some Dynamic SQL
          - SQL accessing special objects ("work" tables)
        • Begin Here
          - Organize documentation and regularly review
          - Choose queries that are critical
          - Re-evaluate after maintenance
    20. Constraint Tradeoffs
        • Basic Steps:
          - List resource constraints and time periods
          - List co-incident resource availability
          - Develop method to trade off resources
        • Typically Used in an Environment having Short-Term Resource Constraints
        • Begin Here
          - Use Accounting Reports to discover targets
    21. Constraint Tradeoffs - Study
        • Non-Data Sharing
        • CPU hovers near 100% during day
        • Analysis:
          - Review DB2PM Accounting Reports
          - Excessive CPU due to DB2
          - Excessive CPU due to applications
          - Excessive CPU due to access paths
          - Other (?)
    22. Constraint Tradeoffs - Study
        • Team created a process for finding CPU Hog Applications
          - Review Acctg Report: Avg In-DB2 TCB Time, # SELECTs
          - Create Metrics: "CPU per SELECT", per INSERT, etc.
          - Entered in SpreadSheet, sorted (descending!)
        • Team did Resource Constraint Analysis
          - Allocated additional 1 GB of real memory to DB2
          - Team allocated several BPs in dataspaces, assigned heavily-used tables
          - Allocated 100 GB additional DASD to DB2
    23. Constraint Tradeoffs - Study
        • ... Team did Resource Constraint Analysis ...
          - For many small and heavily-used tables, COMPRESS NO
            (DASD increased, CPU increased (additional I/Os); net decrease in CPU due to elimination of compress/decompress)
          - Increased SLAs for certain low-priority distributed apps
          - Combined "small" stored procedures into single modules
          - Enlarged dynamic SQL cache
            (Net decrease in CPU due to elimination of stored procedure overhead, fewer dynamic SQL binds)
    24. Explain + Automation + Regular Review
        • Specify EXPLAIN(YES) on Every Bind
        • Save Historical Explain Data & Review
        • Typical Uses
          - Cross-reference to objects, applications
          - Accessing special objects (DPSIs, MQTs, UDFs)
        • Begin Here
          - Develop method(s) of saving history (may need to implement QUERYNO in SQL)
          - Consider most useful comparison reports
    25. Five Common Performance Challenges
        • Dynamic SQL
        • Index Proliferation
        • Shrinking Batch Window
        • Death by Random I/O
        • Strategies for: Reorg + Stats + Hints + Rebind
    26. Dynamic SQL
        • "Surprise" SQL Arrives!
          - Unknown access path(s)
          - CPU for prepare/execute
        • Activity in Statistics Reports
        (Diagram: SQL arriving at DB2 from unknown sources through the DDF gateway)
    27. Dynamic SQL
        • What Can You Analyze?
          - Determine method of SQL "Capture"
          - Use SQL cache for access path re-use
          - Consider Optimizer Hints
        • Begin Here
          - Review current tool suite
          - Review Statistics Reports

        STATISTICS REPORT (SHORT)
        CPU TIMES                               TCB TIME         SRB TIME       TOTAL TIME
        -------------------------------  ---------------  ---------------  ---------------
        SYSTEM SERVICES ADDRESS SPACE        2:55.755805     36:16.539125     39:12.294929
        DATABASE SERVICES ADDRESS SPACE      8:43.111144   2:38:04.085334   2:46:47.196478
        IRLM                                    4.462056     11:24.705839     11:29.167895
        DDF ADDRESS SPACE                    6:16.023680   9:11:17.281676   9:17:33.305356

        SQL DML   QUANTITY
        --------  --------
        SELECT    42607.7K
        INSERT    24061.8K
        UPDATE     5292.6K
        DELETE     9605.9K
        PREPARE    5331.8K
        DESCRIBE   1861.0K
        DESC.TBL      0.00
        OPEN      10536.5K
        CLOSE      6623.5K
        FETCH       170.3M
        TOTAL       276.2M

        SQL DML                      QUANTITY  /SECOND  /THREAD  /COMMIT
        ---------------------------  --------  -------  -------  -------
        SELECT                         384.6K     4.55   141.51     6.89
        INSERT                       10127.00     0.12     3.73     0.18
        UPDATE                        5281.1K    62.42  1943.02    94.63
        DELETE                         196.00     0.00     0.07     0.00
        PREPARE                      40863.00     0.48    15.03     0.73
        DESCRIBE                      7386.00     0.09     2.72     0.13
        DESCRIBE TABLE                   0.00     0.00     0.00     0.00
        OPEN                          8671.4K   102.50  3190.38   155.39
        CLOSE                         8629.3K   102.00  3174.86   154.63
        FETCH                        14872.9K   175.80  5471.99   266.51
        TOTAL                        37897.9K   447.96    13.9K   679.10

        DYNAMIC SQL STMT             QUANTITY  /SECOND  /THREAD  /COMMIT
        ---------------------------  --------  -------  -------  -------
        PREPARE REQUESTS             40863.00     0.48    15.03     0.73
        FULL PREPARES                 7665.00     0.09     2.82     0.14
        SHORT PREPARES               40410.00     0.48    14.87     0.72
        GLOBAL CACHE HIT RATIO (%)      84.06      N/A      N/A      N/A
        IMPLICIT PREPARES                0.00     0.00     0.00     0.00
        PREPARES AVOIDED                 0.00     0.00     0.00     0.00
        CACHE LIMIT EXCEEDED             0.00     0.00     0.00     0.00
        PREP STMT PURGED                57.00     0.00     0.02     0.00
        LOCAL CACHE HIT RATIO (%)         N/C      N/A      N/A      N/A

        Statistics Report sections include: SQL DML, SQL DCL, Stored Procedures, Triggers, UDFs, DDL, EDM Pool, Subsystem Services, Open/Close Activity, Log Activity, Plan/Package Processing, DB2 Commands, RID List Processing, Authorization Management, Locking Activity, Data Sharing Locking, Global DDF Activity, CPU Times, DB2 Appl.Pgrmr.Interface, Data Capture, Dynamic SQL Statements, Buffer Pools, Query Parallelism, . . .
    28. Index Proliferation
        • Old Indexes Never Die …
        • The Issues: More Indexes =
          - Longer recovery time
          - Slower insert / update / delete performance
          - Additional DASD space (uncompressed)
          - Longer Reorgs
    29. Index Proliferation
        • Develop Cross Reference Reports
          - Index to packages and plans
          - Index to EXPLAINs
        • Begin Here
          - Implement index health checks
          - Implement cross references
          - Regularly review

        --************************************************
        --* .I.6  CLUSTERING INDEXES WITH POOR CLUSTER %
        --************************************************
        DBNAME    IX_NAME   TB_NAME          FULLKEYCARD  CL_RATIO  STATS_DATE
        --------  --------  ---------------  -----------  --------  ----------
        DB001101  IAA00101  TAC_POST_CR_REF    29477183.        43  08/11/2006
        DB020201  IAA66901  DEP_PICK_DAY_DW     1061081.        53  04/18/2006
        DB020101  IAA58401  SRV_STG_RES_DEV     1389347.        55  07/23/2006
        . . .
        --************************************************************
        --* .I.11  INDEXES WITH FULLKEYCARD < 1% OF TABLE CARD       *
        --************************************************************
        DBNAME    TSNAME    TB_NAME          IX_NAME   IDX_FKYCRD   TBL_CARD
        --------  --------  ---------------  --------  ----------  ---------
        DB060101  SAA64301  JOUR_TRAN_STG    IAA64301          1.  12524221.
        DB000101  SAA00101  SRV_DW_BAL_TRAN  SAA00106         12.   4232191.
        DB010301  SAA40401  T121_AT_BAL      IAA40402         14.  70278361.
        . . .
    30. Shrinking Batch Window
        • On-Line Day Extending; or,
          - Batch Schedule Extending Into On-Line Day
        • Batch Elapsed Times Longer
          - (Accounting "Top n" Report)
        • DBA Job Constraints
          - Reorgs, Image Copies, RunStats, etc.
        • Application Job Constraints
          - Timeouts / Deadlocks, Concurrency, etc.
    31. Shrinking Batch Window
        • Elapsed Time = Resource Constraint
        • Begin Here
          - Review batch schedule (using tool)
            (trend analysis; potential parallelism opportunities)
          - Create "Top n" Report for batch cycle
    32. Shrinking Batch Window - Study

        PRIMAUTH  #OCCURS #ROLLBK  SELECTS INSERTS UPDATES DELETES  CLASS1 EL.TIME  CLASS2 EL.TIME  GETPAGES  SYN.READ
        PLANNAME  #DISTRS #COMMIT  FETCHES OPENS   CLOSES  PREPARE  CLASS1 CPUTIME  CLASS2 CPUTIME  BUF.UPDT  TOT.PREF
        --------  ------- -------  ------- ------- ------- -------  --------------  --------------  --------  --------
        PRODDM15        1       0     0.00    0.00    0.00    0.00  1:16:53.512404  1:15:17.386423  53177.4K  35644.00
        DSNTIB71        0       1  9127.3K    1.00    1.00    3.00    41:39.881114    41:12.572460  11302.6K    738.7K

        PRODDX07        1       0  4237.7K 4248.0K 4214.0K    0.00  1:30:05.808801  1:24:20.802578  43838.9K   3445.3K
        CEF001BP        0    5745     0.00    0.00    0.00    0.00    21:24.313339    16:28.946885  13421.6K   2405.00

        CBT001BV        3       1  1582.8K    0.00    0.67    0.00  3:10:44.137728  2:08:12.472110    111.8M    542.00
        DBT1            0   28620   953.3K   38.1K   38.1K    0.00  1:46:33.792589  1:37:08.071649      0.00  13196.67
    33. Death by Random I/O
        • Synchronous I/O Wait
          - Use of surrogate keys
          - Non-standard clustering
        • High (Total) Wait Time
        • Begin Here
          - "Top n" - Synchronous I/O Wait
          - EXPLAIN cross-reference

        Accounting SHORT Report
        PRIMAUTH  CLASS1 EL.TIME  CLASS2 EL.TIME  GETPAGES  SYN.READ
        PLANNAME  CLASS1 CPUTIME  CLASS2 CPUTIME  BUF.UPDT  TOT.PREF
        --------  --------------  --------------  --------  --------
        P0DWDZ30    26:57.828009    26:29.551153  10826.4K    231.8K
        DSNTIB71     9:06.093016     8:54.703632   9240.2K    587.3K
        ---------------------------------------------------------------------
        |PROGRAM NAME  CL7 ELAP.TIME  CL7 CPU TIME  CL8 SUSP.TIME  CL8 SUSP|
        |DSNTIAUL       26:29.551149   8:54.703629   15:03.282831    578.8K|
        ---------------------------------------------------------------------

        Accounting LONG Report
        AVERAGE DB2 (CL.2)          CLASS 3 SUSPENSIONS   AVERAGE TIME  AV.EVENT
        ------------  ----------    --------------------  ------------  --------
        ELAPSED TIME  26:29.5512    LOCK/LATCH(DB2+IRLM)      2.248301  21759.20
        NONNESTED     26:29.5512    SYNCHRON. I/O          6:46.292538    231.8K
        STORED PROC     0.000000    DATABASE I/O           6:46.292538    231.8K
        UDF             0.000000    LOG WRITE I/O             0.000000      0.00
        TRIGGER         0.000000    OTHER READ I/O         8:12.470159    324.0K
                                    OTHER WRTE I/O            1.966059   1237.60

        Accounting LONG Report (DSNDB07)
        BP7 BPOOL ACTIVITY     AVERAGE     TOTAL
        -------------------   --------  --------
        BPOOL HIT RATIO (%)      52.60       N/A
        GETPAGES               7975.4K  39876905
        BUFFER UPDATES         8006.1K  40030431
        SYNCHRONOUS WRITE         0.00         0
        SYNCHRONOUS READ       4267.80     21339
        SEQ. PREFETCH REQS      486.4K   2431985
        LIST PREFETCH REQS        0.00         0
        DYN. PREFETCH REQS        0.00         0
        PAGES READ ASYNCHR.    3775.8K  18879044

        Accounting LONG Report (Tablespaces)
        BP10 BPOOL ACTIVITY    AVERAGE     TOTAL
        -------------------   --------  --------
        BPOOL HIT RATIO (%)      38.46       N/A
        GETPAGES               2215.3K  11076745
        BUFFER UPDATES         1234.1K   6170329
        SYNCHRONOUS WRITE         0.00         0
        SYNCHRONOUS READ      97963.00    489815
        SEQ. PREFETCH REQS    92302.40    461512
        LIST PREFETCH REQS        0.00         0
        DYN. PREFETCH REQS     4826.80     24134
        PAGES READ ASYNCHR.    1265.3K   6326306

        Accounting LONG Report (Indexes)
        BP20 BPOOL ACTIVITY    AVERAGE     TOTAL
        -------------------   --------  --------
        BPOOL HIT RATIO (%)      63.34       N/A
        GETPAGES                635.2K   3176216
        BUFFER UPDATES            0.00         0
        SYNCHRONOUS WRITE         0.00         0
        SYNCHRONOUS READ        129.5K    647661
        SEQ. PREFETCH REQS      132.40       662
        LIST PREFETCH REQS        0.00         0
        DYN. PREFETCH REQS     3595.40     17977
        PAGES READ ASYNCHR.     103.4K    516861
    34. Strategies for: Reorg + Stats + Hints + Rebind
        • When and How Often Do You Execute Each?
        • Lots of Resources Consumed by:
          - Reorg (+ Copy?)
          - RunStats
        • How Often Do Access Paths Change?
        • Choose Strategy Based on Resources Available
        • Begin Here
          - Health checks
          - EXPLAIN history
    35. Strategies for: Reorg + Stats + Hints + Rebind
        • Data-Driven
          - High freespace, non-volatile tables, careful COMPRESS, Stats once, few Hints
        • Activity-Driven
          - Medium freespace, some volatile tables, lots of COMPRESS, Stats when "things change", some Hints, semi-regular ReBinds
        • Performance-Driven
          - Little freespace, volatile tables, many indexes, some COMPRESS, infrequent Stats, few Hints, regular ReBinds
    36. Capacity Planning
        • Resource Trending
          - CPU Capacity
          - DASD Capacity (and configuration)
          - Throughput Capacity
        • Forecasting Constraints
    37. Where to Begin
        • Top "n" Reports
          - Setup
          - Initial Review
          - Automation
        • Health Checks
          - Determine Relevance
          - Execute, Review; Automate
        • Tool Review and Education
    38. Lock Lyon
        • Fifth Third Bancorp
        • [email_address]
        Session: B11   Case Studies in DB2 for z/OS Performance Tuning: Part 2
        www.mdug.org "Download Presentations"
