Overview of the Optimization Service Center
  • When you launch OSC, you’re brought right into the connection and configuration screen. This slide is a picture of the entire screen, and we’ll take a little tour to introduce some basic aspects of the OSC GUI which are common throughout OSC, and some which are specific to connection and configuration. Stay with me, we’re laying a foundation here.
  • The project navigator area is a folder-like area on the left. It’s probably worth mentioning that OSC is developed on Eclipse, and the project navigation area is common to Eclipse applications. As you perform different tasks within OSC, such as tuning a query or tuning a workload, the project navigation area allows you to move quickly within a project and between projects. This will become more obvious later. It may take a little while to get comfortable with the GUI changes between VE and OSC – in my opinion, the project navigation makes it much easier to work on multiple queries and multiple subsystems at the same time. It’s easier to switch what you’re working on without completely closing it down.
  • After connecting OSC to DB2, the project navigator can be used to navigate to different options within OSC. When a project is opened, or when you have multiple projects open, the project navigator is useful to move within a project, and to different projects.
  • In addition to the project navigator on the side bar, you can navigate through projects, workload views, and monitor view areas through the navigator tabs at the top of the GUI.
  • If you have any connections configured already – which you configured through DB2 Connect Client Configuration, or cataloged via Visual Explain – OSC will pick these configured connections up automatically. So hopefully you’ll have your connections cataloged already and can simply connect. If you don’t have any connections configured, you can add them here.
  • To add a subsystem, click subsystem and choose add.
  • Enter your connection information. I always pull up the <ssid>MSTR address space in SDSF and find the DSNL004I message, which has all the connection information needed, nice and condensed.
  • Here’s a screen shot of the DSNL004I DDF start message. If you don’t have authority to display the <ssid>MSTR address space, then talk to your DBA or systems programmer to obtain this information.
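As a rough illustration for readers without console access (the exact layout varies by DB2 release, and every value below is made up), the DSNL004I message looks something like this; LOCATION, DOMAIN, and TCPPORT are the three fields you need:

```
DSNL004I  -DB2A DDF START COMPLETE
           LOCATION  SAMPLOC
           LU        NETID.LUNAME1
           GENERICLU -NONE
           DOMAIN    db2a.example.com
           TCPPORT   446
           RESPORT   5001
```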
  • Enter your login information. Underneath the login information, there are status points. When you connect, OSC performs a few checks. Are the required packages bound? Is the SQLID provided explain enabled (are all of the explain tables OSC requires created)? Are the OSC tables created? OSC tables are tables specific to the OSC product.
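If you want to verify by hand what that explain-enabled check is looking at, a quick catalog query (MYUSER is a placeholder SQLID, and the table names shown are just two representative explain tables) can confirm whether the tables exist:

```sql
-- Placeholder SQLID MYUSER; PLAN_TABLE and DSN_STATEMNT_TABLE as examples
SELECT CREATOR, NAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'MYUSER'
   AND NAME IN ('PLAN_TABLE', 'DSN_STATEMNT_TABLE');
```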
  • After connecting, if you’d already setup OSC, then you would have clicked on a “finished” button which I didn’t show you. If this is the first time OSC has connected to a DB2 subsystem, then you’ve got to bind some packages. This has to be done by an ID with SYSADM authority. This only needs to be done once, on the first OSC connection to DB2. It’s worth noting another difference between Visual Explain and OSC: OSC uses static SQL to access the catalog, where Visual Explain used dynamic SQL exclusively.
  • After binding the packages, if the explain tables aren’t created, the next step is to create the explain tables. The action is a radio-button group. The default is to create explain tables, but there are more options: create explain tables; use existing explain tables; grant access to existing explain tables; create an ALIAS to existing explain tables.
  • After binding packages, you need to create or upgrade the explain tables.
  • All I really want to show here is OSC will make some of the explain management easier. OSC can create all the explain tables, you can create ALIASes to the explain tables, and grant authorizations to the explain tables. Of course, you can only do this if you have the proper privileges and authorities to perform these actions.
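Under the covers these are ordinary DDL and GRANT statements; a minimal sketch, with MYUSER and SYSADM as placeholder authids and PLAN_TABLE as a representative explain table, might be:

```sql
-- Let MYUSER share SYSADM's explain tables (placeholder names)
CREATE ALIAS MYUSER.PLAN_TABLE FOR SYSADM.PLAN_TABLE;
GRANT SELECT, INSERT, UPDATE, DELETE ON SYSADM.PLAN_TABLE TO MYUSER;
```

OSC simply generates and runs the equivalent statements for the full set of explain tables, which is why the proper privileges are required.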
  • Here’s the create ALIAS screen, unobstructed.
  • Status section shows connection, bind, explain table creation status.
  • On the main connection / configuration page, there is a section which shows the status of the subsystem. There are also buttons to allow the user to jump into management functions: you can disconnect by pressing Disconnect, bind / free packages by pressing the Bind button, and drop / manage the OSC tables by pressing the Drop button.
  • The EXPLAIN-enabled section. This section shows which authids have explain tables created.
  • In the explain-enabled authids section, the user can: enable an authid for explain by creating explain tables and/or ALIASes to explain tables; migrate existing explain tables; disable explain tables.
  • After connecting, the welcome page is where I go to navigate through OSC. It is the portal into the functions provided by OSC. From the welcome page, you can connect and configure DB2 subsystems, view workloads, view query activity, tune a workload, tune a single query, and view monitor profiles.
  • We just reviewed configure DB2 Subsystems area. View query activity is one avenue to find queries you wish to look at and/or tune.
  • View workloads will show existing workloads, and has capabilities to launch the create workload wizard. Tune a workload drops right into creation of a new workload.
  • Tune a single query allows the user to enter or search for a query to tune, and brings up the tuning tools. View monitor profiles is where users can view and create new monitors, which also create workloads.
  • With the introduction of workloads, users can now perform “application” tuning, rather than just query focused tuning.
  • How to use OSC to tune an application. First, you’ve got to create the workload.
  • When clicking Tune a workload, you can select the subsystem; it defaults to the last subsystem you connected to, and you can choose a different one if you like. Provide a project name; this will be the name of the workload. You can alternately append to an existing workload.
  • Here’s a screen shot of the workload sources page. A workload can be composed of multiple sources.
  • Provide filter options.
  • Primarily, you’d use this tab if you want all the SQL for a particular plan.
  • Screen shot of the plan filter options.
  • The cost and object filters allow you to supplement package / plan filters. You can further limit the SQL retrieved to match any or all of the criteria specified for the cost and object filters – so if you only want to see SQL which has a PROCMS over a certain value, SQL which uses a particular index, or SQL which accesses a particular table.
  • The cost and object filters allow you to supplement package / plan filters. You can further limit the SQL retrieved to match any or all of the criteria specified for the cost and object filters. So if you only want to see SQL which has a PROCMS over a certain value, or SQL which uses a particular index, or SQL which accesses a particular table. I had to cut off the bottom of the screen, but there is a choice at the bottom where the user selects to return rows which meet ANY or ALL of the above conditions.
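Conceptually, the cost filter is just a predicate against the explain cost table; a hand-written equivalent (MYUSER and the 1000 ms threshold are placeholders) would be roughly:

```sql
-- Find explained SQL whose estimated cost exceeds 1000 ms (placeholder value)
SELECT QUERYNO, COLLID, PROGNAME, PROCMS, PROCSU
  FROM MYUSER.DSN_STATEMNT_TABLE
 WHERE PROCMS > 1000
 ORDER BY PROCMS DESC;
```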
  • Access path filters allow you to limit SQL in the workload to those with specific access path characteristics. If you’re doing this, you’re not so much profiling an application, as finding a grouping SQL which have a shared set of circumstances for more specific analysis. Certainly this is one way to use workloads. Remember that the suggestions made from statistics advisor in this circumstance are targeted to only SQL within this workload and not SQL which are eliminated via access path filters.
  • After the workload is captured, you’re brought to the workload project screen. I wanted to show a snapshot of it. We’ll step through the different areas you can go from here.
  • Workload header information is at top.
  • The workload header shows the project name, the subsystem we’re connected to, the workload name, workload owner, and description.
  • Workload options shows what you can do with this workload you’ve just collected.
  • There are 5 options. Workload Statements is where you go to look at the individual statements. Run Advisors is where you execute advisors against the workload. Schedule Tasks lets you schedule the capture of workloads and gather explain information for the workload. Users lets you grant access to the workload to other users. History shows what’s been done.
  • Workload statements shows the SQL within the workload. You should note that executions is set to 1, and there is no runtime execution information contained in this view. For the catalog source, as well as manual input, text file, and QMF sources – there is no runtime information available through these workload generation sources. Earlier I indicated that one aspect of workload tuning is understanding the workload – how often statements execute and what their performance is, and how that is helpful for guiding analysis. Using the catalog source tells us what SQL are in the workload, but not how often they execute, and what their typical performance is. We will be able to generate SQL targeted statistics recommendations even without knowing the frequency of execution and performance characteristics. So there is value to using this method, but we’re still lacking the runtime information which we also would want.
  • You can capture more statements using another catalog extraction, or via other workload input options.
  • Run workload advisors. Note, for Optimization Service Center there currently is only one workload advisor – workload statistics advisor. There is a for-charge product built on the same base which includes workload statistics advisor and another advisor. I expect there also will be more workload capabilities coming soon.
  • Schedule workload activities. You can schedule the capture of workload SQL; consolidation, which will consolidate duplicate SQL statements obtained from multiple captures; explain of the workload to occur later; and execution of the advisors.
  • You can remove individual SQL, or all SQL from the workload.
  • Two things to note: you can edit statement runtime information, and you can launch the individual query-based advisors right from within a workload. We’re going to cover the query-based tuning tools later.
  • Edit runtime information. The workload advisor tools use execution count, elapsed (seconds), and CPU (seconds). When the workload input method is catalog, QMF, file, etc., the number of executions and CPU / elapsed times are not available, so they default. When the statement cache source is used, or workload monitoring generates the workload, these values will be set based on real runtime executions. For workload statistics advisor, you should be fine with the defaults. If there are some significant SQL you want weighted higher, then go ahead and do so. Workload Statistics Advisor does take the runtime information into account – especially regarding more aggressive statistics – but it’s manually intensive to do this work, and you can get a good targeted statistics recommendation without performing it. OSC and Optimization Expert (OE) share the workload infrastructure. It is much more important for the index advisor in OE than for workload statistics advisor to have frequency-of-execution and performance information when determining which indexes to build, with which columns, and in which column sequence.
  • Run workload advisors. Note, for Optimization Service Center there currently is only one workload advisor – workload statistics advisor. There is a for-charge product built on the same base which includes workload statistics advisor and another advisor. There are other workload analysis features under consideration but not yet available.
  • Ok, the next few slides – I’m going to lobby you on why you should be collecting targeted statistics. Let me trial a parallel metaphor for those of you reading the notes. You’ve graduated college, and you’ve interviewed with 3 different companies. What if the companies sent you offer sheets and didn’t include the salary? Sometimes the optimizer is costing a query, and we’re looking at a predicate, and we have no idea whether that search condition is going to “pay well” by being selective, or “pay poorly” and not filter any rows at all. Statistics Advisor is asking for information so DB2 better understands where the filtering is occurring. Sometimes without statistics, the optimizer has to “guess” between 3 offers not knowing which one pays the best, and sometimes both DB2 and the user are disappointed. Now, is knowing the salary always sufficient to differentiate between competing offers? We might also want to look at their healthcare plans, amount of vacation and holidays, and 401k matching. Then there are the questions: are you and the manager / team a good fit, and do you like the work environment and expectations? There’s a lot of information you’d like to know when making these decisions, and sometimes when you don’t have this information and your assumption isn’t right, it leads to disappointment. It’s also true that not everything is knowable in advance.
  • Workload statistics advisor recommendations page.
  • Workload Statistics Advisor will ALWAYS generate complete workload statistics recommendation. It will recommend recollection of the statistics you’ve got, and supplement with any correlation (COLGROUP and KEYCARD) and skew (frequencies, histograms) which it believes are required based on the predicates in your SQL. If you’ve collected all the statistics that statistics advisor thinks are necessary, it will STILL generate a complete suggestion marked as high priority, but the LOW priority “repair” recommendation will disappear. This has caused some confusion with users. The primary purpose of WSA is to generate a complete, targeted statistics recommendation which recollects existing statistics as well as supplementing with missing / conflicting statistics. If the low priority “repair” suggestion is absent, then WSA has determined that your statistics are good. We’re working on a GUI change which will make this more clear.
  • Runstats control statements This section contains the actual RUNSTATS suggestions.
  • After the workload is captured, you’re brought to the workload project screen. I wanted to show a snapshot of it. I also want to highlight that in the workload navigator, you can see a new project has been opened for the workload that’s been captured.
  • For those of you in the back row, here’s a bigger picture.
  • When you’re working within a project, you can navigate within the project from two areas. The project navigator on the left, or the tabs on the bottom of the screen (see the box…). The tabs at the bottom seem faster for navigating within a project than the project navigator, and it’s easy enough to use.
  • Ok, let’s walk through some more sources…
  • Cache source.
  • When you snap the dynamic statement cache, if you’ve got IFCID 316 – 318 turned on, you also get runtime information. So with the dynamic statement cache source input, you can get a better picture of how often SQL are running and their performance characteristics – which is useful for targeting your tuning efforts, understanding your application, and for measuring performance improvements. One drawback of the statement cache is that it’s a snapshot in time. Some SQL run by the application may not have executed yet, or their frequency of execution could be underreported depending on when the snapshot is taken. So it’s not an ideal way to get an accurate profile of all SQL for an application.
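Outside of OSC, the same snapshot can be taken by hand. Assuming the monitor trace for IFCIDs 316-318 is active (the trace command shown in the comment is one common form, not the only one) and that a DSN_STATEMENT_CACHE_TABLE exists under the current SQLID, EXPLAIN STMTCACHE externalizes the cache:

```sql
-- Trace is started from the console, e.g.:
--   -START TRACE(MON) CLASS(1) IFCID(316,317,318)
EXPLAIN STMTCACHE ALL;

-- Then rank cached statements by accumulated CPU
SELECT STMT_ID, STAT_EXEC, STAT_CPU, STAT_ELAP
  FROM DSN_STATEMENT_CACHE_TABLE
 ORDER BY STAT_CPU DESC;
```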
  • Snapshot of what you’ll see…
  • With the introduction of workloads, users can now perform “application” tuning, rather than just query focused tuning.
  • Create a normal or exception monitor.
  • You can build a list of packages as well as dynamic statements to monitor.
  • Set what information should be pushed out.
  • Just wanted to show a snapshot of the runtime information for static SQL: number of executions, accumulated / average elapsed times, and accumulated / average CPU times.
  • For an exception monitor, you set thresholds on which statements will generate the workload. This is useful to generate a workload from SQL consuming high CPU.
  • Transcript

    • 1. Patrick Bossman IBM Silicon Valley Lab Tuning with Optimization Service Center Part I ® Columbia, MD September 12, 2007
    • 2. Agenda
      • Overview of Optimization Service Center
      • Connect and configure OSC
      • Workload (application) Tuning
      • Query tuning (Part II)
    • 3. Overview
      • Optimization Service Center (OSC)
        • New no charge product supported for connections to DB2 9.
          • Support for DB2 for z/OS V8 connections in open beta
        • Workstation tool for monitoring and tuning of queries
        • Facilitates the identification and tuning of workloads (sets of queries) as well as individual queries
        • New powerful query diagnostic tools enable faster deep analysis of queries
    • 4. Overview
      • OSC feature list
        • Support for workload tuning
          • Stand-alone workload creation
          • Push-out model (monitoring)
        • Visual Explain query graphing capabilities
        • Query formatting and annotation
        • Query report
        • Visual Plan Hint
        • Statistics Advisor - query and workload
    • 5. Connect and configure
      • Launch OSC
        • Initial screen is connection and configuration
        • Review look and feel
        • Review layout of screen
    • 6. Launch OSC
    • 7. Project navigator area
    • 8. Project navigator
      • Navigate through open projects
    • 9. Top navigation tabs
    • 10. Top navigator tabs close-up
    • 11. Connection / configuration
    • 12. Connection close-up
      • Connection
        • Subsystems cataloged in DB2 Connect listed
        • Use subsystem menu option to add / remove subsystems
    • 13. Add Subsystem
      • Add DB2 subsystem connection
        • Select subsystem button
        • Choose Add
    • 14. Connection information
      • Add DB2 subsystem connection
        • Subsystem alias is a meaningful name to you
        • Location, hostname, port
          • Can be found in <ssid>MSTR address space DDF startup message DSNL004I
    • 15. DSNL004I
      • DSNL004I message
        • Location = location
        • Domain = hostname
        • TCPPORT = port
    • 16. Jeez Pat, Connect already!
      • Connect to subsystem
        • Choose Connection, connect
        • You can connect to more than one subsystem and perform activities on more than one subsystem at a time.
    • 17. Login information…
    • 18. Bind packages
    • 19. Create explain tables
    • 20. Create explain tables
    • 21. Create explain tables
    • 22. Create alias to explain tables
      • Create aliases
        • You can use OSC to create ALIASes to the explain tables also.
    • 23. Create alias to explain tables
    • 24. Subsystem status
    • 25. Subsystem status
    • 26. Explain-enabled authids
    • 27. EXPLAIN-enabled authids
    • 28. Connection complete. Now what? (Welcome menu…)
    • 29. Go to welcome page…
    • 30. Welcome page
    • 31. Welcome page close-ups
    • 32. Welcome page close-ups
    • 33. Welcome page close-ups
    • 34. Application tuning
      • Application tuning
        • Creating workloads
        • Workload options
        • Workload tuning features
    • 35. Application tuning process
      • What is application tuning?
        • Identify what the application workload is
          • Individual SQL statements
        • Get an understanding of the application’s behavior
          • How often are individual SQL statements executing?
          • What is the performance of individual SQL statements?
        • Determine statistics for workload
          • Workload statistics advisor
          • More workload analysis features coming… (?)
        • Remeasure workload
          • Identify top tuning candidates
          • Use query based tools to further analyze (next session)
    • 36. Creating workloads
      • Creating a workload
        • Tune a workload
        • View workloads
        • Monitoring functions
    • 37. Tune a workload
    • 38. Workload sources
      • Workload sources
        • Snap statement cache
        • Catalog (static SQL)
        • QMF / QMF HPO
        • File
        • Categories
        • Other workloads
    • 39. Workload sources
    • 40. Catalog source…
    • 41. Catalog source
    • 42. Package filter options
      • Package filter options
        • Collection id, name, owner, …
        • Can use equals, like, in, etc.
    • 43. Package filter options
    • 44. Plan filter options
      • Plan filter options
        • Plan name, plan creator, etc.
        • Can use equals, like, in, etc.
    • 45. Plan filter options
    • 46. Cost & Object filter options
      • Cost and Object filters
        • Filters SQL within package / plan filter
        • Requires the static SQL be bound with explain yes.
        • DSN_STATEMNT_TABLE must already be populated.
        • Show only SQL with PROCSU > …
        • Show SQL which uses table…
        • Show SQL which uses index…
      • Choice to return rows which qualify for ANY or ALL of the cost / object filter conditions
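Since these filters read explain output, the package must have been bound (or rebound) with explain enabled; a typical DSN subcommand, with placeholder collection and member names, looks like:

```
BIND PACKAGE(COLL1) MEMBER(PROG1) ACTION(REPLACE) EXPLAIN(YES)
REBIND PACKAGE(COLL1.PROG1) EXPLAIN(YES)
```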
    • 47. Cost & Object filter options
    • 48. Access path filters
      • Access path filters
        • Filters SQL within package / plan filter
        • Requires the static SQL be bound with explain yes.
        • PLAN_TABLE must already be populated.
      • Show SQL which performs…
        • Tablespace scan
        • Sort
        • Non-matching index scan
        • List prefetch
        • Outer join
          • …
    • 49. Cost & Object filter options
    • 50. Capture the workload
      • After selecting packages / plans / filters…
        • Click finish to capture workload
        • OSC goes about the business of collecting the statements…
      • Cue Jeopardy music, get some coffee.
    • 51. Workload captured…
    • 52. Review workload screen
      • Workload header
      • Workload options
      • Project Navigator
    • 53. Workload header
    • 54. Workload header
    • 55. Workload options
    • 56. Workload options
    • 57. Workload option close-up (1)
    • 58. Workload options close-up (2)
    • 59. Workload statements…
    • 60. Workload statements
      • Workload statements
        • Show the SQL within the workload
        • Refine execution runtime information
          • For catalog capture, runtime information would have to be manually obtained.
          • You don’t really need to change it for statistics advisor…
        • Let’s walk through it…
    • 61. Workload statements view
    • 62. Capture more statements…
    • 63. Description
    • 64. Schedule workload activity
    • 65. Remove statements from workload
    • 66. Execute query tools on selected SQL
    • 67. Select “edit runtime information…”
    • 68. Edit runtime information
    • 69. Run workload advisors (SA)
    • 70. Why Statistics Advisor?
      • Importance of statistics
      • Difficulty of determining what statistics to collect
      • Solution: Workload Statistics Advisor!!!
    • 71. Estimating query cost
      • Accurately estimating the number of rows is important to accurately cost a query
        • Index matching
          • Accurately estimate index cost
        • Total index filtering
          • Estimate table access cost via index(es)
          • Choose how to use index (prefetch?)
        • Total table level filtering
          • Efficient join order
          • Efficient join method
          • Appropriate sorts
    • 72. Symptoms of inaccurate estimation
      • Sometimes - great performance!
        • Few competitive choices
        • Good performance possible, but not a “winning” strategy
      • Inefficient access path
        • Query never performs at optimal level
      • Unstable access path
        • Slight changes in input result in dramatic swings in actual performance
        • Efficient and inefficient access paths can end up with close cost estimates
        • More accurate selectivity estimates can improve optimizer “differentiation” of efficient and inefficient access
    • 73. Some observations
      • Several candidate indexes?
        • Need sufficient statistics to differentiate the candidates
      • Local filtering spread across many tables?
        • The more “potentially” desirable outer tables there are, the more important it is to accurately estimate the true qualified rows
      • Lots of predicates which don’t filter much?
        • Important to make sure DB2 knows that
        • When many rows are processed, a single bad choice can be magnified
      • The necessity for accurate estimation is related to the number of competitive choices.
    • 74. Why statistics advisor?
      • Statistics are collected to help optimizer in costing available access paths
      • Determination of statistics required can be daunting
      • Automate the determination of statistics required
    • 75. What are the right statistics?
      • Identify interesting objects
        • Tables
        • Indexes
      • Identify interesting columns, column groups
        • Predicate analysis
          • Which column (groups) have predicates?
          • Which column (groups) are indexed?
      • Which statistics should I collect?
        • Column statistics
        • Correlation statistics
        • Frequencies
        • Histograms
        • Statistic needed depends on predicate types (range, equal)
    • 76. Determining the “right” statistics
      • Manual analysis very time consuming, requires significant skill
        • Format the query to make it readable
        • Recognize predicate transformations
          • Predicate transitive closure
          • BETWEEN to equal opens new possibilities
          • OR to IN creates new Boolean term predicates
          • Has DB2 rewritten the query (predicate transitive closure)
        • Some columns used as predicates hidden in views
        • Recognize which statistics are appropriate in which circumstances
      • Significant effort even for expert
      • Error prone
    • 77. Run workload advisors (WSA)
    • 78. Complete recollection, or repair?
    • 79. RUNSTATS command
    • 80. Table portion of command
      RUNSTATS TABLESPACE DB2OSC.WSATS0
        TABLE(DB2OSC.DSN_WSA_DATABASES) COLUMN(NAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_CGFREQS) COLUMN(TBCREATOR,TBNAME,COLGROUPCOLNO,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_KTGFREQS) COLUMN(IXCREATOR,IXNAME,SESSIONID,KEYGROUPKEYNO)
        TABLE(DB2OSC.DSN_WSA_SESSIONS) COLUMN(SESSIONID)
        TABLE(DB2OSC.DSN_WSA_COLGROUPS) COLUMN(TBCREATOR,TBNAME,COLGROUPCOLNO,SESSIONID)
          COLGROUP(COLGROUPCOLNO,SESSIONID,TBCREATOR,TBNAME)
        TABLE(DB2OSC.DSN_WSA_TSS) COLUMN(NAME,DBNAME,SESSIONID)
          COLGROUP(DBNAME,SESSIONID) COLGROUP(DBNAME,NAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_KTGS) COLUMN(IXCREATOR,IXNAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_KEYS) COLUMN(IXCREATOR,IXNAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_CGHISTS) COLUMN(TBCREATOR,TBNAME,COLGROUPCOLNO,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_KTGHISTS) COLUMN(IXCREATOR,IXNAME,SESSIONID,KEYGROUPKEYNO)
        TABLE(DB2OSC.DSN_WSA_INDEXES) COLUMN(TBCREATOR,TBNAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_TABLES) COLUMN(CREATOR,NAME,TSNAME,DBNAME,SESSIONID)
          COLGROUP(CREATOR,NAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_LITERALS) COLUMN(TBCREATOR,TBNAME,SESSIONID,COLNO,VALUE)
          COLGROUP(COLNO,SESSIONID,TBCREATOR,TBNAME,VALUE)
        TABLE(DB2OSC.DSN_WSA_KEYTARGETS) COLUMN(IXCREATOR,IXNAME,SESSIONID)
        TABLE(DB2OSC.DSN_WSA_COLUMNS) COLUMN(TBCREATOR,TBNAME,SESSIONID)
        SORTDEVT SYSDA SHRLEVEL CHANGE REPORT YES
    • 81. Index portion of suggestion
      RUNSTATS TABLESPACE DB2OSC.WSATS0 …
        INDEX(DB2OSC.DSN_WSA_IDX_DBS,
              DB2OSC.DSN_WSA_IDX_CGFS,
              DB2OSC.DSN_WSA_IDX_KTGFS,
              DB2OSC.DSN_WSA_IDX_SSNS,
              DB2OSC.DSN_WSA_IDX_CGS,
              DB2OSC.DSN_WSA_IDX_TSS,
              DB2OSC.DSN_WSA_IDX_KTGS,
              DB2OSC.DSN_WSA_IDX_KEYS KEYCARD,
              DB2OSC.DSN_WSA_IDX_CGHS KEYCARD,
              DB2OSC.DSN_WSA_IDX_KTGHS KEYCARD,
              DB2OSC.DSN_WSA_IDX_IDXS,
              DB2OSC.DSN_WSA_IDX_TABLES,
              DB2OSC.DSN_WSA_IDX_LTRS KEYCARD,
              DB2OSC.DSN_WSA_IDX_KTS KEYCARD,
              DB2OSC.DSN_WSA_IDX_COLS KEYCARD)
        SHRLEVEL CHANGE REPORT YES
    • 82. Output looks complex…
      • RUNSTATS commands are more targeted
        • Only collect on tables used in queries
        • Only collect column statistics on columns used as predicates
        • Collect frequency and histogram statistics based on predicates which would benefit
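As a small hand-written illustration of what “targeted” means here (database, table, and column names are all placeholders), a WSA-style recommendation pairs plain column statistics with COLGROUP frequency or histogram collection only where the predicates justify it:

```
RUNSTATS TABLESPACE DBX.TSX
  TABLE(SCHEMA1.ORDERS)
    COLUMN(STATUS,REGION)
    COLGROUP(STATUS,REGION) FREQVAL COUNT 10
    COLGROUP(ORDER_DATE) HISTOGRAM NUMQUANTILES 50
  SHRLEVEL CHANGE REPORT YES
```

The COLUMN list covers columns used as predicates, the multi-column COLGROUP with FREQVAL captures correlation and skew for equal predicates, and the HISTOGRAM collection serves range predicates, mirroring the statistic-per-predicate-type point on slide 75.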
    • 83. Pause for questions on WSA?
    • 84. Back to our regularly scheduled programming…
    • 85. More OSC GUI review
    • 86. Open projects as tabs on top…
    • 87. Project Navigator
    • 88. Project navigator
      • Navigate across projects
        • As multiple projects opened, you can switch between projects you’re working on
      • Navigate within a project
        • As you perform different tasks within a project, you can navigate through the different tuning sections
    • 89. Project Navigator
    • 90. Project tabs
    • 91. Project tabs
    • 92. Project tabs
      • Navigate within project
        • As multiple projects opened, you can switch between projects you’re working on
      • Close sections of project you’re done with
        • Click x on the tab to close selected tab
    • 93. More OSC GUI review
    • 94. Workload sources - cache
    • 95. Cache statement view
      • Cache statement view
        • Number of executions
        • Accumulated & average CPU time
        • Accumulated & average elapsed time
        • SQL text
      • Cache as input provides more visibility into how often SQL executing, cost of statements
    • 96. Workload – cache statements view
    • 97. Monitor workload
      • Normal monitor workload
        • All SQL based on certain criteria
        • Useful to profile an application
          • Which SQL running most frequently?
          • Which queries consuming most elapsed / execution time?
      • Exception monitor
        • Workload generated based on exception criteria
          • All SQL for package > x seconds elapsed
          • All SQL for package > x seconds CPU
    • 98. Application tuning process (revisit)
      • What is application tuning?
        • Identify what the application workload is
          • Individual SQL statements
        • Get an understanding of the application’s behavior
          • How often are individual SQL statements executing?
          • What is the performance of individual SQL statements?
        • Determine statistics for workload
          • Workload statistics advisor
          • More workload analysis features coming… (?)
        • Remeasure workload
          • Identify top tuning candidates
          • Use query based tools to further analyze (next session)
    • 99. Create normal monitor
    • 100. Add monitor sources
    • 101. Set monitor options
    • 102. Monitor statement view
    • 103. Exception monitor options
    • 104. Application tuning
      • Normal monitor allows for capture of all SQL
        • Specify list of packages
          • Prevents workload “pollution” which can occur with statement cache
        • Capture valuable execution information
        • Ideally initially used in QA environment to profile / tune application before production implementation
          • “How often does this execute?”
          • “How long does this run?”
          • … you can have answers to these questions…
        • Selectively use in production to develop application profile and tune application
    • 105. Application tuning process (review)
      • What is application tuning?
        • Identify what the application workload is
          • Individual SQL statements
        • Get an understanding of the application’s behavior
          • How often are individual SQL statements executing?
          • What is the performance of individual SQL statements?
        • Determine statistics for workload
          • Workload statistics advisor
          • More workload analysis features coming… (?)
        • Remeasure workload
          • Identify top tuning candidates
          • Use query based tools to further analyze (next session)
    • 106. Which workload source to use?
      • Which workload source provides…
        • Identification of all SQL statements executed
        • Accurate number of execution counts
        • Accurate accumulated and average execution performance
        • Provides runtime information for any source
        • Can re-measure and expect to see the same SQL again, compare results
        • Is not polluted with dynamic SQL which are not relevant to the application?
      • Workload monitoring (normal)
    • 107. Tuning with OSC I
      • End of OSC tuning Part I
        • Show some demo if time permits
      • Next up, Query tuning with OSC
        • Statistics Advisor
        • Query annotation
        • Query report
        • Visual Explain
        • Visual plan hint
    • 108. Patrick Bossman [email_address] Tuning with Optimization Service Center I ®
